\section{Introduction} Faddeev equations in differential form were introduced by H.P. Noyes and H.~Fiedeldey in 1968 \cite{NF} \begin{equation} (H_0-E)\varphi_{\alpha}+V_{\alpha}\sum_{\beta=1}^{3}\varphi_{\beta}=0, \label{fadeq} \end{equation} and since that time they have been used extensively, both for investigating theoretical aspects of the three-body problem and for the numerical solution of three-body bound-state and scattering problems. The simple formula $$ \sum_{\beta=1}^{3}\varphi_{\beta}=\Psi $$ allows one to obtain the solution to the three-body Schr\"odinger equation $$ (H_0+\sum_{\beta=1}^{3}V_{\beta}-E)\Psi=0 $$ in the case when \begin{equation} \sum_{\beta=1}^{3}\varphi_{\beta} \ne 0. \label{nonzero} \end{equation} Such solutions of (\ref{fadeq}) can be called {\bf physical}. Proper asymptotic boundary conditions should be added to Eqs. (\ref{fadeq}) in order to guarantee (\ref{nonzero}). These conditions were studied by many authors and are well known \cite{FM}, so I will not discuss them here. On the other hand, Eqs. (\ref{fadeq}) themselves admit solutions of a type different from the physical ones, with the property $$ \sum_{\beta=1}^{3}\varphi_{\beta}=0. $$ These solutions can be constructed explicitly and have the form $$ \varphi_{\alpha}=\sigma_{\alpha}\phi^{0}, $$ where $\phi^{0}$ is an eigenfunction of the operator $H_0$: $$ H_{0}\phi^{0}=E^{0}\phi^{0} $$ and $\sigma_{\alpha}$, $\alpha=1,2,3$, are numbers such that $\sum_{\alpha=1}^{3}\sigma_{\alpha}=0$. Solutions of this type can be called {\bf spurious} or {\bf ghost} solutions, because they do not correspond to any three-body system and do not contain any information about the interactions between particles. The first observation of the existence of spurious solutions was made in ref. \cite{Friar}. Some spurious solutions corresponding to particular values of the total angular momentum were found in refs. \cite{Pup1}, \cite{Pup2}.
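The spurious construction above can be checked directly in a finite-dimensional toy model. The sketch below is an illustration, not part of the original derivation: random symmetric matrices stand in for $H_0$ and the $V_{\alpha}$, and $\sigma=(1,-1,0)$ is one admissible choice of coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # dimension of the toy space (an arbitrary choice)

def sym(a):
    """Symmetrize a matrix so it stands in for a selfadjoint operator."""
    return (a + a.T) / 2

H0 = sym(rng.standard_normal((n, n)))                     # toy free Hamiltonian
V = [sym(rng.standard_normal((n, n))) for _ in range(3)]  # toy potentials V_1..V_3

# An eigenpair of H0 plays the role of (E^0, phi^0).
evals, evecs = np.linalg.eigh(H0)
e0, phi0 = evals[0], evecs[:, 0]

# Coefficients with sigma_1 + sigma_2 + sigma_3 = 0.
sigma = np.array([1.0, -1.0, 0.0])
phi = [s * phi0 for s in sigma]
total = sum(phi)   # the coupling term Sum_beta phi_beta: identically zero

# Each component solves (H0 - E^0) phi_a + V_a Sum_b phi_b = 0, because the
# coupling term vanishes and phi0 is an eigenvector of H0.
for a in range(3):
    assert np.allclose(H0 @ phi[a] + V[a] @ total, e0 * phi[a])
```

The check succeeds for any eigenpair of $H_0$ and any $\sigma$ summing to zero, which is exactly why such solutions carry no information about the potentials.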
All the spurious solutions on subspaces with fixed total angular momentum were constructed in ref. \cite{RYa}. Thus there exist at least two types of solutions to Eqs. (\ref{fadeq}) corresponding to real energy:\\ \hspace*{1cm} {\bf physical} ones with the property $\sum\limits_{\beta=1}^{3}\varphi_{\beta}\ne 0$, \\ \hspace*{1cm} {\bf spurious} ones with the property $\sum\limits_{\beta=1}^{3}\varphi_{\beta}= 0$. \\ The QUESTION is: do these solutions form a complete set, or could there exist solutions of a different type, belonging to neither the physical nor the spurious class? The ANSWER is not evident, because the operator corresponding to Eqs. (\ref{fadeq}) is not selfadjoint, and not even symmetric: \begin{equation} {\bf H}= \left( \begin{array}{ccc} H_0 & 0 & 0 \\ 0 &H_{0} & 0 \\ 0 &0 &H_0 \\ \end{array} \right) + \left( \begin{array}{ccc} V_1 & 0 & 0 \\ 0 &V_2 & 0 \\ 0 &0 &V_3 \\ \end{array} \right) \left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{array} \right)= {\bf H}_{0}+{\bf V}{\bf X}, \label{boldH} \end{equation} and, in principle, this operator could have non-real eigenvalues even though the ingredients $H_0$, $V_{\alpha}$ and the three-body Hamiltonian $H=H_{0}+\sum\limits_{\beta=1}^{3}V_{\beta}$ are selfadjoint operators. In this report I will answer the QUESTION and give a classification of the eigenfunctions of the operator ${\bf H}$ and its adjoint. This report is based on refs. \cite{Ya1}, \cite{Ya2}. \section{Faddeev operator and its adjoint} Let us consider the Hilbert space ${\cal H}$ of three-component vectors $F =\{ f_1, f_2, f_3\} $. The operator ${\bf H}$ acts in ${\cal H}$ according to the formula \begin{equation} ({\bf H} F)_{\alpha}= H_0 f_{\alpha}+V_{\alpha}\sum_{\beta}f_{\beta}.
\label{fadoper} \end{equation} The adjoint ${\bf H}^{*}$ is defined as $$ {\bf H}^{*}={\bf H}_{0}+{\bf X}{\bf V}= \left( \begin{array}{ccc} H_0 & 0 & 0 \\ 0 &H_{0} & 0 \\ 0 &0 &H_0 \\ \end{array} \right) + \left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{array} \right) \left( \begin{array}{ccc} V_1 & 0 & 0 \\ 0 &V_2 & 0 \\ 0 &0 &V_3 \\ \end{array} \right) $$ and acts as follows \begin{equation} ({\bf H}^{*} G)_{\alpha}= H_0 g_{\alpha}+\sum_{\beta}V_{\beta}g_{\beta}. \label{adjoint} \end{equation} The equations for the eigenvectors of the operators ${\bf H}$ and ${\bf H}^{*}$ $$ {\bf H}\Phi = E\Phi, \ \ \ \ \ {\bf H}^{*}\Psi = E\Psi $$ in components have the form $$ H_0\varphi_{\alpha}+V_{\alpha}\sum_{\beta=1}^{3}\varphi_{\beta}=E\varphi_{\alpha}, $$ $$ H_0\psi_{\alpha}+\sum_{\beta=1}^{3}V_{\beta}\psi_{\beta}=E\psi_{\alpha}. $$ The first coincides with the Faddeev equations (\ref{fadeq}), and the second is directly connected to the so-called triad of Lippmann-Schwinger equations \cite{Gloeckle}. It follows directly from the definitions (\ref{fadoper}) and (\ref{adjoint}) that the operators ${\bf H}$ and ${\bf H}^{*}$ have the following invariant subspaces:\\ for ${\bf H}$ $$ {\cal H}_{s} =\{ F\in {\cal H}:\ \sum_{\alpha}f_{\alpha}=0\} , $$ for ${\bf H}^{*}$ $$ {\cal H}_{p}^{*}=\{ G\in {\cal H}:\ g_{1}=g_{2}=g_{3}=g\} . $$ It is worth noticing that on the subspaces ${\cal H}_{s}$ and ${\cal H}_{p}^{*}$ the operators ${\bf H}$ and ${\bf H}^{*}$ act as the free Hamiltonian $H_0$ and the three-body Hamiltonian $H$, respectively: $$ ({\bf H} F)_{\alpha} = H_{0}f_{\alpha} \ \ , \mbox{if}\ \ F\in {\cal H}_{s}, $$ $$ ({\bf H}^{*}G)_{\alpha}= Hg = H_{0}g+\sum_{\beta}V_{\beta}g \ \ , \mbox{if}\ \ G\in {\cal H}_{p}^{*} . $$ As a consequence, the spectrum of ${\bf H}$ on ${\cal H}_{s}$ coincides with the spectrum of $H_{0}$, and the spectrum of ${\bf H}^{*}$ on ${\cal H}^{*}_{p}$ coincides with the spectrum of the three-body Hamiltonian $H$.
In order to describe the eigenfunctions of the operators ${\bf H}$ and ${\bf H}^{*}$, let us introduce the resolvents $$ {\bf R}(z)=({\bf H}-z)^{-1}, $$ $$ {\bf R}^{*}(z)=({\bf H}^{*}-z)^{-1}. $$ The components of these resolvents can be expressed through the resolvents of the three-body Hamiltonian and the free Hamiltonian as follows \begin{equation} R_{\alpha \beta}(z)=R_{0}(z)\delta_{\alpha \beta} - R_{0}(z)V_{\alpha}R(z), \label{R} \end{equation} \begin{equation} R^{*}_{\alpha \beta}(z)=R_{0}(z)\delta_{\alpha \beta} - R(z)V_{\beta}R_{0}(z). \label{R*} \end{equation} Here $$ R(z)=(H-z)^{-1}=(H_{0}+\sum_{\beta}V_{\beta}-z)^{-1},\ \ \ R_{0}(z)=(H_{0}-z)^{-1} . $$ It is worth noting that the components of the resolvents obey the following Faddeev equations \begin{equation} R_{\alpha \beta}(z) = R_{\alpha}(z)\delta_{\alpha \beta}-R_{\alpha}(z)V_{\alpha}\sum_{\gamma\ne\alpha}R_{\gamma \beta}(z), \label{Rfad} \end{equation} \begin{equation} R^{*}_{\alpha \beta}(z) = R_{\alpha}(z)\delta_{\alpha \beta}-R_{\alpha}(z)\sum_{\gamma\ne\alpha} V_{\gamma} R^{*}_{\gamma \beta}(z). \label{R*fad} \end{equation} Here $R_{\alpha}(z)=(H_0+V_{\alpha}-z)^{-1}$ is the two-body resolvent for the pair $\alpha$ in the three-body space. In order to proceed, it is convenient to introduce the spectral representation for the resolvent of the three-body Hamiltonian $$ R(z)= \sum_{E_{i}}\frac{|\psi^{i}\rangle \langle \psi^{i}|} {E_{i}-z} + \sum_{\gamma}\int dp_{\gamma}\frac{|\psi^{\gamma}(p_{\gamma})\rangle \langle \psi^{\gamma}(p_{\gamma})|}{p^{2}_{\gamma}-z} + \int dP \frac{|\psi^{0}(P)\rangle \langle \psi^{0}(P)|} {P^{2}-z}. $$ It is implied here that the system of eigenfunctions of the operator $H$ is complete, {\it i.e.,} $$ I= \sum_{i}|\psi^{i}\rangle \langle \psi^{i}| + \sum_{\gamma}\int dp_{\gamma}|\psi^{\gamma}(p_{\gamma})\rangle \langle \psi^{\gamma}(p_{\gamma})| + \int dP |\psi^{0}(P)\rangle \langle \psi^{0}(P)|.
$$ Introducing this representation into (\ref{R}) and (\ref{R*}), one arrives at the spectral representations for the components $R_{\alpha \beta}(z)$: \begin{eqnarray} R_{\alpha \beta}(z)= \sum_{E_{i}}\frac{|\psi^{i}_{\alpha}\rangle \langle \psi^{i}|} {E_{i}-z} + \sum_{\gamma}\int dp_{\gamma}\frac{|\psi^{\gamma}_{\alpha}(p_{\gamma})\rangle \langle \psi^{\gamma}(p_{\gamma})|}{p^{2}_{\gamma}-z} + \nonumber \\ \int dP \frac{|\psi^{10}_{\alpha}(P)\rangle \langle \psi^{0}(P)|} {P^{2}-z} +\sum_{k=1}^{2}\int dP \frac{|u_{\alpha}^{k}(P)\rangle \langle w^{k}_{\beta}(P)|} {P^{2}-z}. \label{Rsr} \end{eqnarray} Here $\psi^{i}_{\alpha}$, $\psi^{\gamma}_{\alpha}(p_{\gamma})$ and $\psi^{10}_{\alpha}(P)$ are the Faddeev components of the eigenfunctions of the three-body Hamiltonian: $$ \psi^{i}_{\alpha}=-R_{0}(E_i)V_{\alpha}\psi^{i}, $$ $$ \psi^{\gamma}_{\alpha}(p_{\gamma})= -R_{0}(\varepsilon_{\gamma}+p_{\gamma}^{2}+i0)V_{\alpha}\psi^{\gamma}(p_{\gamma}), $$ $$ \psi^{10}_{\alpha}(P)=\delta_{\alpha 1} \phi^{0}(P) -R_{0}(P^{2}+i0)V_{\alpha}\psi^{0}(P), $$ where $\phi^{0}(P)$ is an eigenfunction of the free Hamiltonian: $$ H_{0}\phi^{0}(P)=P^{2}\phi^{0}(P). $$ A new feature in (\ref{Rsr}) is the appearance of the last term, related to the spurious solutions of the Faddeev equations and of their adjoint counterparts. The explicit formulas for the spurious eigenfunctions $u^{k}_{\alpha}(P)$ of ${\bf H}$ are of the form $$ u^{k}_{\alpha}(P)=\sigma^{k}_{\alpha}\phi^{0}(P), $$ where $\sigma_{\alpha}^{k}$, $k=1,2$, are the components of two noncollinear vectors from ${\bf R}^{3}$ lying in the plane $\sum_{\alpha} \sigma_{\alpha} =0$.
The spurious eigenfunctions $w^{k}_{\beta}(P)$ of ${\bf H}^{*}$ can be expressed by the formula $$ w^{k}_{\beta}(P)= \theta^{k}_{\beta}\phi^{0}(P)- \sum_{\alpha} [{\cal P}^{*}_{p}]_{\beta \alpha} \theta_{\alpha}^{k}\phi^{0}(P), $$ where $$ [{\cal P}^{*}_{p}]_{\beta \alpha}= \sum_{i}|\psi^{i}\rangle \langle \psi^{i}_{\alpha}| + \sum_{\gamma}\int dp^{'}_{\gamma}|\psi^{\gamma}(p^{'}_{\gamma})\rangle \langle \psi^{\gamma}_{\alpha}(p^{'}_{\gamma})| + \int dP^{'} |\psi^{0}(P^{'})\rangle \langle \psi^{10}_{\alpha}(P^{'})|. $$ Here the vectors $\theta^{k}\in {\bf R}^{3}$ are defined by the following biorthogonality conditions $$ \sum_{\alpha}\theta_{\alpha}^{i}\sigma_{\alpha}^{j}=\delta_{ij},\ \ \ i,j=0,1,2, $$ with $\sigma^{0}_{\alpha}=\delta_{\alpha 1}$ and $\theta_{\alpha}^{0}=1$. For the components of the resolvent $R^{*}_{\alpha \beta}(z)$ one obtains a formula similar to (\ref{Rsr}): $$ R^{*}_{\alpha \beta}(z)= \sum_{E_{i}}\frac{|\psi^{i}\rangle \langle \psi^{i}_{\beta}|} {E_{i}-z} + \sum_{\gamma}\int dp_{\gamma}\frac{|\psi^{\gamma}(p_{\gamma})\rangle \langle \psi^{\gamma}_{\beta}(p_{\gamma})|}{p^{2}_{\gamma}-z} + \int dP \frac{|\psi^{0}(P)\rangle \langle \psi^{10}_{\beta}(P)|} {P^{2}-z} + $$ \begin{equation} +\sum_{k=1}^{2}\int dP \frac{|w_{\alpha}^{k}(P)\rangle \langle u^{k}_{\beta}(P)|} {P^{2}-z}.
\label{R*sr} \end{equation} It follows from (\ref{Rsr}) and (\ref{R*sr}) that the operators ${\bf H}$ and ${\bf H}^{*}$ have the following systems of eigenfunctions:\\ $\{ $ $\Phi^{i}$, $\Phi^{\gamma}(p_{\gamma})$, $\Phi^{10}(P)$ and $U^{k}(P)$ $\} $ $$ {\bf H}\Phi^{i}=E_{i}\Phi^{i}, $$ $$ {\bf H}\Phi^{\gamma}(p_{\gamma})=(\varepsilon_{\gamma}+p_{\gamma}^{2})\Phi^{\gamma}(p_{\gamma}), $$ $$ {\bf H}\Phi^{10}(P)=P^{2}\Phi^{10}(P), $$ $$ {\bf H} U^{k}(P)=P^{2}U^{k}(P) , \ \ k=1,2; $$ $\{$ $\Psi^{i}$, $\Psi^{\gamma}(p_{\gamma})$, $\Psi^{10}(P)$ and $W^{k}(P)$ $\} $ $$ {\bf H}^{*}\Psi^{i}=E_{i}\Psi^{i}, $$ $$ {\bf H}^{*}\Psi^{\gamma}(p_{\gamma})=(\varepsilon_{\gamma}+p_{\gamma}^{2})\Psi^{\gamma}(p_{\gamma}), $$ $$ {\bf H}^{*}\Psi^{10}(P)=P^{2}\Psi^{10}(P), $$ $$ {\bf H}^{*} W^{k}(P)=P^{2}W^{k}(P) , \ \ k=1,2, $$ with the components of the physical eigenfunctions of ${\bf H}$ given by $$ \phi^{i}_{\alpha}=-R_{0}(E_i)V_{\alpha}\psi^{i}, $$ $$ \phi^{\gamma}_{\alpha}(p_{\gamma})= -R_{0}(\varepsilon_{\gamma}+p_{\gamma}^{2}+i0)V_{\alpha}\psi^{\gamma}(p_{\gamma}), $$ $$ \phi^{10}_{\alpha}(P)=\delta_{\alpha 1} \phi^{0}(P) -R_{0}(P^{2}+i0)V_{\alpha}\psi^{0}(P), $$ and the components of the physical eigenfunctions of ${\bf H}^{*}$ given by $$ \psi^{i}_{\alpha}=\psi^{i}, $$ $$ \psi^{\gamma}_{\alpha}(p_{\gamma})= \psi^{\gamma}(p_{\gamma}), $$ $$ \psi^{10}_{\alpha}(P)=\psi^{0}(P) . $$ The physical eigenfunctions span the physical subspace of ${\cal H}$. This subspace can be defined as $$ {\cal H}_{p} = {\cal P}_{p}{\cal H}, $$ where the projection ${\cal P}_{p}$ is defined by the formula $$ {\cal P}_{p}= \sum_{i}|\Phi^{i}\rangle \langle \Psi^{i}| + \sum_{\gamma}\int dp_{\gamma}|\Phi^{\gamma}(p_{\gamma})\rangle \langle \Psi^{\gamma}(p_{\gamma})| + \int dP |\Phi^{10}(P)\rangle \langle \Psi^{10}(P)|. $$ The spurious solutions span the spurious subspace of ${\cal H}$: $$ {\cal H}_{s}= {\cal P}_{s}{\cal H}, $$ where $$ {\cal P}_{s} = \sum_{k=1}^{2} \int dP |U^{k}(P)\rangle \langle W^{k}(P)|.
$$ It follows from the construction and from the completeness of the eigenfunctions of the three-body Hamiltonian that the physical and spurious subspaces together exhaust ${\cal H}$: $$ {\cal H}= {\cal H}_{p}+{\cal H}_{s}. $$ The same holds for the physical and spurious subspaces of the operator ${\bf H}^{*}$: $$ {\cal H}= {\cal H}_{p}^{*}+{\cal H}_{s}^{*}, $$ where the subspaces ${\cal H}_{p}^{*}$ and ${\cal H}_{s}^{*}$ are defined as $$ {\cal H}_{p}^{*}= {\cal P}_{p}^{*}{\cal H}, \ \ \ {\cal H}_{s}^{*}= {\cal P}_{s}^{*}{\cal H}. $$ Here the operators ${\cal P}_{p}^{*}$ and ${\cal P}_{s}^{*}$ are the Hilbert-space adjoints of ${\cal P}_{p}$ and ${\cal P}_{s}$. \noindent The results described above can be summarized as the following \\ {\bf Theorem}: {\it Faddeev operator} {\bf H} $$ {\bf H}= \left( \begin{array}{ccc} H_0 & 0 & 0 \\ 0 &H_{0} & 0 \\ 0 &0 &H_0 \\ \end{array} \right) + \left( \begin{array}{ccc} V_1 & 0 & 0 \\ 0 &V_2 & 0 \\ 0 &0 &V_3 \\ \end{array} \right) \left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{array} \right) $$ {\it and its adjoint} ${\bf H}^{*}$ $$ {\bf H}^{*}= \left( \begin{array}{ccc} H_0 & 0 & 0 \\ 0 &H_{0} & 0 \\ 0 &0 &H_0 \\ \end{array} \right) + \left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{array} \right) \left( \begin{array}{ccc} V_1 & 0 & 0 \\ 0 &V_2 & 0 \\ 0 &0 &V_3 \\ \end{array} \right) $$ {\it have coinciding spectra of real eigenvalues} $$ \sigma({\bf H})=\sigma({\bf H}^{*})=\sigma(H)\cup \sigma(H_{0}), $$ {\it where the physical part of the spectrum} $\sigma(H)$ {\it is the spectrum of the three-body Hamiltonian} $H=H_{0}+\sum_{\alpha}V_{\alpha}$ {\it and the spurious part} $\sigma(H_{0})$ {\it is the spectrum of the free Hamiltonian} $H_{0}$.
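The spectral claim of the Theorem can be illustrated numerically in a finite-dimensional model (a sketch under the assumption of random symmetric matrices for $H_0$ and $V_{\alpha}$, so all selfadjointness requirements hold; the block operator is assembled exactly as in the displayed formula, and each eigenvalue of $H_0$ should appear twice, matching the two spurious eigenvectors per free eigenfunction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def sym(a):
    return (a + a.T) / 2

H0 = sym(rng.standard_normal((n, n)))
V = [sym(rng.standard_normal((n, n))) for _ in range(3)]
H = H0 + V[0] + V[1] + V[2]          # three-body Hamiltonian

I3 = np.eye(3)
bH0 = np.kron(I3, H0)                # diag(H0, H0, H0)
bV = sum(np.kron(np.outer(I3[a], I3[a]), V[a]) for a in range(3))
X = np.kron(np.ones((3, 3)), np.eye(n))   # all 3x3 blocks equal to the identity
bH = bH0 + bV @ X                    # the (non-symmetric) Faddeev operator

ev = np.linalg.eigvals(bH)           # general eig: bH is not symmetric
expected = np.sort(np.concatenate([np.linalg.eigvalsh(H),
                                   np.linalg.eigvalsh(H0),
                                   np.linalg.eigvalsh(H0)]))
assert np.allclose(ev.imag, 0.0, atol=1e-7)              # spectrum is real
assert np.allclose(np.sort(ev.real), expected, atol=1e-7)
```

The multiset of eigenvalues of ${\bf H}$ comes out as $\sigma(H)$ together with $\sigma(H_0)$ counted twice, even though ${\bf H}$ itself is not symmetric.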
{\it The sets of physical and spurious eigenfunctions are complete and biorthogonal in the sense:} $$ {\cal P}_{p}+{\cal P}_{s}={\cal P}^{*}_{p}+{\cal P}^{*}_{s}=I, $$ $$ {\cal P}^{2}_{p(s)}={\cal P}_{p(s)}, \ \ {{\cal P}^{*}}^{2}_{p(s)}= {{\cal P}^{*}}_{p(s)}, \ \ {\cal P}_{p}{\cal P}^{*}_{s}=0, \ \ {\cal P}_{s}{\cal P}^{*}_{p}=0. $$ \section{Extension to CCA equations} It has been shown above that the matrix operator generated by the Faddeev equations in differential form has a spurious spectrum in addition to the physical one. The existence of this spectrum is closely related to the invariant spurious subspace formed by vectors whose components sum to zero. The theorem formulated in the preceding section can be extended to any matrix operator corresponding to few-body equations for the components of the wave function obtained in the framework of the so-called coupled channel array (CCA) method \cite{Levin}, as follows. The CCA equations can be written in matrix form as \begin{equation} {\bf H}\Phi = E \Phi, \label{CCA} \end{equation} where ${\bf H}$ is an $n\times n$ matrix operator acting in the Hilbert space ${\cal H}$ of vector-functions $\Phi $ with components $\phi_{1}, \phi_{2},..., \phi_{n}$, each belonging to the few-body Hilbert space $h$. The equivalence of Eq. (\ref{CCA}) to the Schr\"{o}dinger equation $H\psi=(H_{0}+\sum_{\beta}V_{\beta})\psi=E\psi$, established by requiring $\sum_{\alpha}\phi_{\alpha}=\psi$, can be reformulated as the following intertwining property of the operators ${\bf H}$ and $H$ \begin{equation} {\cal S}{\bf H} = H {\cal S}. \label{SHHS} \end{equation} Here ${\cal S}$ is the summation operator $$ {\cal S}\Phi = \sum_{\alpha}\phi_{\alpha} $$ acting from ${\cal H}$ to $h$. Due to (\ref{SHHS}), the subspace ${\cal H}_{s}$ formed by spurious vectors such that ${\cal S}\Phi =0$ is invariant with respect to ${\bf H}$, and as a consequence the operator ${\bf H}$ has a spurious spectrum $\sigma_{s}$.
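The intertwining relation (\ref{SHHS}), its adjoint counterpart, and the invariance of the spurious subspace are easy to check on random vectors. The sketch below assumes a four-component CCA-type operator of the same ${\bf H}_{0}+{\bf V}{\bf X}$ form; the component number and matrix sizes are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 6        # n components, each in an m-dimensional toy space h

def sym(a):
    return (a + a.T) / 2

H0 = sym(rng.standard_normal((m, m)))
V = [sym(rng.standard_normal((m, m))) for _ in range(n)]
H = H0 + sum(V)    # the few-body Hamiltonian acting in h

def bH(F):         # (bold-H F)_a = H0 f_a + V_a * Sum_b f_b
    s = sum(F)
    return [H0 @ f + V[a] @ s for a, f in enumerate(F)]

def S(F):          # summation operator: cal-H -> h
    return sum(F)

F = [rng.standard_normal(m) for _ in range(n)]
phi = rng.standard_normal(m)

assert np.allclose(S(bH(F)), H @ S(F))        # S bold-H = H S

# bold-H* S* phi has all components equal to H phi (bold-H* S* = S* H):
bHstar_Sstar = [H0 @ phi + sum(V[b] @ phi for b in range(n))] * n
assert np.allclose(bHstar_Sstar, [H @ phi] * n)

# A spurious vector (S F_s = 0) is mapped by bold-H to another spurious vector:
F_s = F[:-1] + [-sum(F[:-1])]
assert np.allclose(S(F_s), 0.0) and np.allclose(S(bH(F_s)), 0.0)
```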
Clearly, the concrete form of $\sigma_{s}$ and of the corresponding eigenfunctions depends on the particular form of the matrix operator ${\bf H}$ and is a subject for special investigation. The physical part $\sigma_{p}$ of the spectrum of ${\bf H}$ can be found with the adjoint variant of (\ref{SHHS}) \begin{equation} {\bf H}^{*}{\cal S}^{*}= {\cal S}^{*}H, \label{SHHS*} \end{equation} where the adjoint ${\cal S}^{*}$ acts from $h$ to ${\cal H}$ according to the formula $$ [{\cal S}^{*}\phi]_{\alpha}= \phi. $$ It follows from Eq. (\ref{SHHS*}) that the range ${\cal H}^{*}_{p}$ of the operator ${\cal S}^{*}$, consisting of vector-functions with identical components, is invariant with respect to ${\bf H}^{*}$, and that the restriction of ${\bf H}^{*}$ to ${\cal H}^{*}_{p}$ reduces to the few-body Hamiltonian $H$. Therefore $\sigma_{p}=\sigma(H)$ and, as in the case of the Faddeev operator, the spectra of the operators ${\bf H}$ and ${\bf H}^{*}$ are given by $$ \sigma({\bf H})=\sigma({\bf H}^{*})= \sigma(H)\cup \sigma_{s}, $$ where $\sigma(H)$ is the spectrum of the few-body Hamiltonian $H=H_{0}+\sum_{\alpha}V_{\alpha}$. \section*{Acknowledgement} This work was partially supported by Russian Foundation for Basic Research grant No. 98-02-18190. The author is grateful to the Organizing Committee of the 16th European Conference on Few-Body Problems in Physics for financial support of his participation in the Conference.
\section{Introduction} \label{sec:Intro} Shape defines the cell. In his 1665 book \textit{Micrographia}, Robert Hooke showed thin sections of cork under a microscope. Because the structures in those sections resembled the cells of a monastery, he named them cells \cite{Hooke1665}. Many breakthroughs followed Hooke's discovery, from the cell theory of Schwann and Schleiden, to the theory of tissue formation by Remak, Virchow and Kolliker, and Virchow's theory of cellular pathology (\textit{Cellularpathologie}), all of which were inspired by observations of cell shape, or morphology in general \cite{Mazzarello1999,Mayr1982}. In the modern view, cell shape is determined by cell function \cite{Walter2014,Ingber1994}. A nerve cell has long, branched protrusions for communication with other neurons, while the cuboidal shape of epithelial cells allows them to tile the surfaces of organs. Loss of characteristic shape, on the other hand, is associated with functional abnormality. Morphological characterization has therefore been an important diagnostic tool, for example in red blood cell disease \cite{Diez2010}, neurological disease \cite{Serrano2011}, and cancer \cite{Bakal2013,Wu2015}. More recently, cell shape analysis has been boosted by techniques from computer vision. As a result, it has become possible to obtain high-content information about cellular states from morphological data alone \cite{Perrimon2007,Wu2015,Lam2017,Carpenter2017}. While most research focuses on static cell morphology, the dynamic fluctuation of cell shape is much less understood. Yet shape fluctuation -- namely morphodynamics -- is of central importance for dynamic cellular functions. The anomalous diffusion of small protrusions -- microvilli -- on the surface of a T cell allows the T cell to efficiently scan antigen-presenting surfaces \cite{Caieaal3118}.
For a migrating cancer cell, morphodynamics drives the motility of the cell in much the same way that body movements enable swimming. In fact, just as there are different swimming styles, cancer cells have been observed to execute multiple programs -- migration modes -- during invasion in 3D tissue space \cite{Konstantopoulos2017}. Each mode has distinct signatures of morphology and morphodynamics, and the modes are usually classified based on cell morphology as filopodial, lamellipodial, lobopodial, blebbing, and actin-enriched leading edge \cite{Yamada2012commentary}. Cancer cell migration modes are controlled by intracellular signaling, such as the Rho-ROCK-myosin pathways \cite{Marshall2008rac,Marshall2005rho}, and by extracellular factors, such as the elasticity and degradability of the extracellular matrix (ECM) \cite{Marshall2010plasticity,Yamada2012commentary}. The ability of a cancer cell to switch between migration modes is important for tumor prognosis. Many therapies, such as MMP inhibitors that target a particular mode of cell motility, fail to stop tumor metastasis largely because cells switch to other available migration programs \cite{Zucker2003,Friedl2003MMP_inhibition_transition}. In this paper, we study the morphodynamics of MDA-MB-231 cells, a highly invasive human breast cancer cell line, in 3D collagen matrices. We devise machine learning techniques to classify cell shapes into morphological phenotypes that correspond to known migration modes. This approach provides a mesoscale mapping of cell morphodynamics onto transitions among morphological phenotypes. We find that individual cells are capable of rapidly sampling multiple morphological phenotypes, implying spontaneous migration mode transitions. We find that ECM mechanics, coupled with cell mechanosensing pathways, regulates the stability of, and transition rates between, morphological phenotypes.
We also find that such transitions help cancer cells navigate ECM with inherent structural and mechanical heterogeneity. \begin{figure}[h] \centering \includegraphics[width=0.99\columnwidth]{Figs_v2/Figure_1.pdf} \caption{Three-dimensional migration of MDA-MB-231 cells is accompanied by significant cell shape fluctuation. (A) A typical time-lapse recording of 25 hours is projected onto a single image, with colors representing time. (B) The real-space (x-y plane) and shape fluctuations of 3 cells shown in (A). (C) The mean square displacement ($\sigma^2$) of selected cell geometric measures. Dots: experimental measurements. Solid lines: linear fit. Dashed lines: 95\% prediction interval. Here the form factor is defined as $\text{perimeter}^2/\text{area}$. Curl is defined as the ratio between the major axis length and the skeletonized contour length.} \label{fig1} \end{figure} \section{Results} \label{sec:results} We find that 3D migrating cancer cells demonstrate rapid shape fluctuations (Fig. \ref{fig1}(A-B)). In order to quantify the cell morphodynamics, we take time-lapse fluorescent images of MDA-MB-231 cells migrating in collagen matrices. The GFP-labeled cells typically stay within the focal depth of the objective lens (20X, NA 0.7) for 10-20 hours, while we obtain 2D cell images at a rate of 4 frames per hour (see \textit{SI Appendix} section S1). After binarization and segmentation, we compute a total of twenty-one geometric measures which collectively quantify the shape of a cell (see \textit{SI Appendix} section S2). These geometric measures characterize cell size (such as area and perimeter), deviation from circularity (such as aspect ratio and form factor), surface topography (such as solidity), and backbone curvature (such as curl -- the ratio between the major axis length and the skeletonized contour length). The morphodynamics of a cell manifests itself as a random walk in the geometric shape space, concurrent with its motility in the 3D matrix (Fig.
\ref{fig1}). However, unlike the real-space motility, which is slightly diffusive, we find that cell morphodynamics is subdiffusive in the shape space (Fig. \ref{fig1}C and \textit{SI Appendix} section S3). The subdiffusivity suggests physical barriers, both intrinsic to the cells and imposed by the 3D ECM. Indeed, we find that cells moving on 2D surfaces exhibit faster shape fluctuations than cells embedded in 3D ECM. Still, on flat surfaces cells show subdiffusive random walks in shape space and superdiffusive walks in real space (\textit{SI Appendix} section S3). \begin{figure}[h] \centering \includegraphics[width=0.99\columnwidth]{Figs_v2/Figure_2.pdf} \caption{Development of a supervised machine learning model to classify cells into morphological phenotypes corresponding to different migration modes. (A) MDA-MB-231 cells in 3D collagen matrices exhibit multiple morphological phenotypes that are characteristic of four distinct migration modes: actin-enriched leading edge (AE), small blebbing (BB), filopodial (FP), and lamellipodial (LA). Scale bars are 20 $\mu$m. (B) The cell images are quantified using a total of 21 geometric measures such as area, solidity, and aspect ratio. With 3800 manually labeled single-cell images we have trained a support vector machine (SVM) to calculate probability scores (Val) for a cell to belong to each morphological phenotype (class). We assign a cell to the class $C$ with the maximum score ($Max(C, val)$) if this maximum score is greater than a threshold of 60\% ($Max(C, val)>$0.6). We consider a cell to be in an intermediate state if none of the four classes has a score higher than 0.6. In (B) a sample cell image is classified as a lamellipodial cell (LA), because the LA class has a score greater than 0.6. (C) To better visualize the high-dimensional geometric measures, we apply the t-SNE method to generate a 3D projection of the cell shape space. A set of 25,000 unseen data points is presented here. Different morphological phenotypes are well separated.
AE (yellow): actin-enriched leading edge. BB (green): small blebbing. FP (magenta): filopodial. LA (blue): lamellipodial.} \label{fig2} \end{figure} After quantitatively demonstrating the shape fluctuations of migrating cancer cells, we next investigate cell morphodynamics at a mesoscale that allows us to gain insights into cell migration modes. This is possible because different migration modes are associated with distinct characteristic cell morphologies (Fig. \ref{fig2}A) \cite{Yamada2012commentary,Konstantopoulos2017}. Using 3800 manually labeled single-cell images, we have trained machine classifiers that classify cell morphology into four morphological phenotypes, based on and named after their corresponding migration modes. We consider four morphological phenotypes, including two amoeboid ones: actin-enriched leading edge (or AE in short) and small blebbing (BB); as well as two mesenchymal ones: filopodial (FP) and lamellipodial (LA). Of note, another migration mode, namely the lobopodial or nuclear-piston mode, has not been observed in our experiments, which is consistent with previous reports \cite{Yamada2017restore}. Once the classifier is trained, morphological phenotypes are determined automatically from a cell image if a particular phenotype receives more than a 60\% probability score (Fig. \ref{fig2}B). For a small fraction of cells ($\approx$ 10\%), none of the four phenotypes receives more than a 60\% probability score; we consider these cells to be in an intermediate state. We have trained two classifiers (see \textit{SI Appendix} S4). The first is based on support vector machines (SVM \cite{Cortes1995,Ben-Hur2001}) and uses 21 geometric measures of binarized cell images. The second is based on a random forest model \cite{Breiman2001} using the same geometric measures. The two classifiers agree with each other well on test data sets (90\% overlap). The SVM classifier, in particular, has the higher success rate in classifying unseen data (92\%).
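The 60\% decision rule can be sketched independently of the trained classifiers (the score vectors below are illustrative; in a real pipeline they would come from the classifier's per-class probability output, e.g. scikit-learn's \texttt{SVC(probability=True).predict\_proba}):

```python
import numpy as np

CLASSES = ["AE", "BB", "FP", "LA"]  # the four morphological phenotypes

def assign_phenotype(scores, threshold=0.6):
    """Assign the max-probability class, or 'intermediate' if no class
    reaches the threshold (the rule applied to ~10% of cells)."""
    scores = np.asarray(scores, dtype=float)
    best = int(np.argmax(scores))
    return CLASSES[best] if scores[best] > threshold else "intermediate"

# A cell whose LA score exceeds 0.6 is classified as lamellipodial:
print(assign_phenotype([0.05, 0.10, 0.15, 0.70]))   # -> LA
# No score above 0.6: the cell is in an intermediate state:
print(assign_phenotype([0.30, 0.25, 0.25, 0.20]))   # -> intermediate
```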
In the following we mainly report the results from the SVM classifier. The validity of the SVM classifier is also evident from the color-coded t-SNE embedding of unseen data (Fig. \ref{fig2}C, 25,000 data points), as data points belonging to different classes (morphological phenotypes) form separable clusters in the embedding space. By applying the SVM classifier to time-lapse recordings of 3D migrating MDA-MB-231 cells, we find that cells spontaneously make transitions among different morphological phenotypes. Fig. \ref{fig3}A shows snapshots of a typical cell. The cell switches directly from the filopodial (F) to the lamellipodial (L) mode, then to the small blebbing (B) mode via an intermediate state (I). Using machine learning techniques we thus map cell morphodynamics into transitions between morphological phenotypes, or their associated migration modes. \begin{figure}[h] \centering \includegraphics[width=0.99\columnwidth]{Figs_v2/Figure_3.pdf} \caption{Rho-signaling internally controls mesoscale morphodynamics of 3D cultured MDA-MB-231 cells. (A) A sample time series of morphological phenotype. Insets: three snapshots showing the GFP-labeled cell morphology. Abbreviations: F -- filopodial, L -- lamellipodial, I -- intermediate state, B -- small blebbing. (B) Representative morphological changes under treatment with CN03 and Y27632, which upregulate and downregulate RhoA, respectively. (C-D) Characteristic morphodynamic trajectories of cells in the t-SNE embedded shape space. The trajectories start immediately after introducing Y27632 or CN03, and end after 4 hours of incubation with the drugs. The forward time directions are shown as thick curves with arrows as a guide to the eye. Two representative trajectories (one with circular symbols, and another with triangular symbols) for each treatment are shown as colored symbols connected by black lines, where color represents the instantaneous phenotype. Unconnected light-colored dots show the training sets, which are the same as in Fig.
\ref{fig2}C. Note that, to better visualize the 3D trajectories, the coordinates have been rotated with respect to Fig. \ref{fig2}C. } \label{fig3} \end{figure} In order to understand the mechanisms underlying cell morphological phenotype transitions, we examine the effects of manipulating Rho-signaling, a master regulator that determines the mechanical state of a cell. To this end, we apply Y27632 \cite{Bissell2016_Y27632}, an inhibitor of the Rho effector kinase ROCK, and CN03 \cite{Isaacs2017_CN03}, a Rho activator, to MDA-MB-231 cells cultured in collagen ECM (see \textit{SI appendix} S5). Y27632 reduces actomyosin contractility, promoting transitions from blebbing to mesenchymal phenotypes \cite{Janmey2005}. On the other hand, CN03 elevates myosin II activity, leading to retraction of filopodia and rounded cell shapes (Fig. \ref{fig3}B). These results are consistent with previous reports on the molecular control of cell migration modes by Rho-signaling \cite{Yamada2012commentary}. While previous studies focus on the end points of manipulating Rho-signaling, morphodynamic analysis offers insights into the transition paths between migration modes. In particular, we take advantage of a modified t-SNE algorithm \cite{Maaten2009}, which projects a cell image into the embedding space defined by the training sets (Fig. \ref{fig2}C, \textit{SI appendix} S4). This approach allows us to map the continuous shape change of a cell as a trajectory in the (dimension-reduced) mesoscale morphodynamic space. Similar approaches have been employed previously to study complex body movements of other organisms such as fruit flies, where transition paths between different fly behaviors can be visualized \cite{Berman2014}. Tracking the mesoscale morphodynamics of MDA-MB-231 cells under pharmacological perturbations, we find that up- and down-regulation of Rho signaling do not lead to time-reversed morphodynamic trajectories.
In particular, when treated with Y27632, blebbing cells turn filopodial or lamellipodial via strongly converging trajectories, most of which first visit AE states (see also \textit{SI appendix} S5). Fig. \ref{fig3}C shows two representative trajectories. The AE state exhibits weak cell-ECM adhesions and F-actin-rich protrusions \cite{Soldatl2006,Sahai2006pseudopodia}. Our results suggest that AE states mediate the Rho-signaling-controlled transition from amoeboid to mesenchymal motility. On the other hand, CN03 treatment causes the majority of mesenchymal cells to switch to blebbing modes. However, without going through AE states, CN03 leads to strongly fluctuating and diverging trajectories from multiple cells (Fig. \ref{fig3}D shows two representative trajectories; see also \textit{SI appendix} S5). \begin{figure*}[t] \centering \includegraphics[width=1.8\columnwidth]{Figs_v2/Figure_4.pdf} \caption{Physical properties of collagen ECM regulate the morphological phenotype homeostasis of 3D migrating MDA-MB-231 cells. (A-D) Confocal reflection images and pseudo-colored MDA-MB-231 cells for collagen matrices prepared under varying conditions. Scale bars: 20 $\mu$m. A: collagen ECM prepared at room temperature (RT, or 25 $^\circ$C) and collagen concentration of $[col] = $ 1.5 mg/mL. B: collagen ECM prepared at 37 $^\circ$C and $[col] = $ 1.5 mg/mL. C: collagen ECM prepared at RT and $[col] =$ 3.0 mg/mL. D: collagen ECM prepared with flow-aligned collagen fibers. (E) Fraction of cells in each morphological phenotype. 8,000 single-cell images are analyzed under each ECM condition. (F) Dwell time of cells in each morphological phenotype. Error bars in (E-F) represent 95\% confidence intervals calculated from 1000 bootstrap iterations. (G-J): The transition matrix -- morphological phenotype transition rates under varying ECM conditions. G: collagen ECM prepared at room temperature and $[col] = $ 1.5 mg/mL. H: collagen ECM prepared at 37 $^\circ$C and $[col] =$ 1.5 mg/mL.
I: collagen ECM prepared at RT and $[col] = $ 3.0 mg/mL. J: collagen ECM prepared with flow-aligned collagen fibers. Under each ECM condition a total of more than 2,000 hours of single cell trajectories are analyzed.} \label{fig4} \end{figure*} We next investigate the external control of cell morphodynamics. In particular, we focus on the role of ECM physical properties in regulating cell morphological phenotype transitions. In order to control the microstructure of collagen matrices we employ three methods, as shown in Fig. \ref{fig4}(A-D) (see also \textit{SI appendix} S6). First, increasing the gelation temperature from room temperature (RT) to 37 $^\circ$C, while keeping the collagen density at [col] = 1.5 mg/mL, significantly reduces fiber length and pore size (Fig. \ref{fig4}B). Second, increasing the collagen density to [col] = 3.0 mg/mL while keeping the gelation temperature at room temperature moderately reduces pore size, preserves a clear fibrous structure, and increases stiffness (Fig. \ref{fig4}C). Finally, keeping the gelation temperature at RT and [col] = 1.5 mg/mL, while generating a unidirectional flow field during gelation, leads to aligned collagen fibers in the ECM. This method creates strong anisotropy in the ECM. We find that the occurrence probabilities (or population fractions) of different morphological phenotypes are remarkably different under different ECM conditions. As shown in Fig. \ref{fig4}E, increasing the gelation temperature does not affect the probabilities of AE and LA cells. However, the homogeneous matrix microstructure at 37 $^\circ$C significantly reduces the fraction of FP cells from 43\% to 25\%, while increasing the fraction of BB cells from 15\% to 25\%. Compared with increasing the gelation temperature, doubling the collagen concentration leads to less dramatic changes in the ECM microstructure. Correspondingly, only moderate changes in the phenotype probabilities are observed.
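The bootstrap confidence intervals quoted for these fractions (95\% intervals from 1,000 resamples, Fig. \ref{fig4}E) can be sketched as follows. The phenotype labels below are synthetic stand-ins for the classified cell images; only the resampling scheme follows the text.

```python
import numpy as np

def bootstrap_fraction_ci(labels, phenotype, n_boot=1000, ci=95, seed=0):
    """Percentile bootstrap CI for the fraction of cells in one phenotype."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    fractions = np.empty(n_boot)
    for b in range(n_boot):
        # Resample cell labels with replacement and recompute the fraction.
        resample = rng.choice(labels, size=labels.size, replace=True)
        fractions[b] = np.mean(resample == phenotype)
    lo, hi = np.percentile(fractions, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return np.mean(labels == phenotype), (lo, hi)

# Illustrative labels standing in for 8,000 phenotype calls (FP/LA/AE/BB).
labels = np.random.default_rng(1).choice(["FP", "LA", "AE", "BB"],
                                         size=8000, p=[0.43, 0.27, 0.15, 0.15])
frac, (lo, hi) = bootstrap_fraction_ci(labels, "FP")
```

The same resampling applies unchanged to the dwell times in Fig. \ref{fig4}F.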
On the other hand, when matrix anisotropy is increased by aligning ECM fibers, we find a significant shift of cells from amoeboid phenotypes to the filopodial mode. Taken together, these results show that ECM heterogeneity and anisotropy determine the probabilities of the different morphological phenotypes. We have also examined the stability of each morphological phenotype by measuring the average dwell time -- the duration a cell stays continuously in a morphological phenotype before transitioning to another (\textit{SI appendix} S7). As shown in Fig. \ref{fig4}F, in all three cases manipulating ECM physical properties moderately increases the dwell times of all four morphological phenotypes. Therefore the changes in the phenotype probabilities cannot be explained by phenotype stability alone, and in some cases they move in the opposite direction from the observed dwell times. To reveal further details of morphological phenotype dynamics, we have computed the phenotype transition matrix: the rates $r$ that characterize the probability of transitions per hour between any two phenotypes (Fig. \ref{fig4}G-J; see also \textit{SI appendix} S7). While the rates vary dramatically for different ECM conditions (arrows in Fig. \ref{fig4}G-J), we notice several remarkable common features. First, direct transitions along the FP - BB path rarely happen ($r<$ 0.03 hr$^{-1}$). Instead, amoeboidal - mesenchymal transitions are primarily mediated by the LA and AE states, presumably by turning cell-ECM adhesion on and off. On the other hand, transitions within the amoeboidal (AE - BB) and mesenchymal (FP - LA) modes are frequent, and the rates can go up to 1 per hour. Finally, while the morphological phenotype transitions are intrinsically non-equilibrium processes, probability fluxes between states are generally very small (\textit{SI appendix} S7). This means that an approximate detailed balance exists among morphological phenotypes.
In comparison with other nonequilibrium stationary processes at the mesoscale \cite{MacKintosh2016_flux}, we speculate that morphological phenotype transitions are not gated by active processes such as ATP consumption. The transition rates also offer insight into the ECM-dependence of the fraction of cells in each morphological phenotype (Fig. \ref{fig4}E). For instance, as the gelation temperature increases from RT to 37 $^\circ$C, the rate from AE to FP decreases by 52 percent, and the rate from BB to AE decreases by 22 percent (Fig. \ref{fig4}G and Fig. \ref{fig4}H). As a result, we observe more blebbing cells and fewer filopodial cells in collagen matrices prepared at 37 $^\circ$C. This is consistent with the mechanical mechanism of bleb formation \cite{Tinevez2009,Yamada2012nonpoloarized}. Blebs form when actomyosin contractility exceeds the binding between cortical actin and the cell membrane. A blebbing cell turns to AE when actin polymerization causes a sharp protrusion on the membrane. Our results suggest that as collagen ECM loses structural heterogeneity, actin polymerization is less effective at driving the transition from BB to AE states. Conversely, as the ECM becomes more anisotropic (Fig. \ref{fig4}G and Fig. \ref{fig4}J), the transition rate from LA to FP increases by as much as 27 percent, while the rate from AE to BB decreases by 44 percent. Together, these altered rates lead to a significant fraction of blebbing cells turning filopodial, as shown in Fig. \ref{fig4}E. Filopodial protrusions consist of elongated F-actin bundles supported by elevated actin polymerization and cross-linking by Ena/VASP proteins \cite{Borisy2004}. Our results suggest that the mechanical barrier separating filopodial and blebbing protrusions is too high for actomyosin contractility to overcome directly. Instead, a blebbing cell turning into a filopodial one has to first transform into AE or LA states.
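The transition-rate and probability-flux estimates underlying this analysis can be sketched as follows. The four-state sequence format and the 15-minute frame interval mirror the imaging protocol; the function names and the toy sequence are our own.

```python
import numpy as np

STATES = ["FP", "LA", "AE", "BB"]
DT = 0.25  # hours between frames (15-minute imaging interval)

def transition_rates(seq):
    """Off-diagonal transition rates r[i, j] (events per hour spent in state i)."""
    idx = {s: k for k, s in enumerate(STATES)}
    counts = np.zeros((4, 4))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[idx[a], idx[b]] += 1
    # Time spent in each state = number of frames observed there (incl. self-transitions).
    time_in_state = counts.sum(axis=1, keepdims=True) * DT
    r = np.divide(counts, time_in_state, out=np.zeros_like(counts),
                  where=time_in_state > 0)
    np.fill_diagonal(r, 0.0)
    return r

def net_flux(seq):
    """Net probability flux J_ij = p_i r_ij - p_j r_ji; ~0 under detailed balance."""
    idx = {s: k for k, s in enumerate(STATES)}
    p = np.array([np.mean([s == st for s in seq]) for st in STATES])
    r = transition_rates(seq)
    return p[:, None] * r - (p[:, None] * r).T
```

Applied to the pooled single-cell state sequences, small entries of `net_flux` correspond to the approximate detailed balance discussed above.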
Because the morphological phenotype of a cell is linked to its 3D migration mode, we next investigate whether the invasion potential of MDA-MB-231 cells depends on the mesoscale morphodynamics. Due to the short dwell times of each morphological phenotype, we only consider two coarse-grained classes of morphologies: mesenchymal (ME), which consists of FP and LA states; and amoeboidal (AM), which consists of AE and BB states. In particular, we measure, at short time scales, the step size distributions and, at longer time scales, the mean square displacement of the cells in randomly aligned collagen matrices gelled at room temperature (Fig. \ref{fig5}). Interestingly, we find that the steps are better described by a log-normal rather than a Gaussian distribution (\textit{SI appendix} S8), due to frequent large steps. Fig. \ref{fig5}A shows the mean and variance of the fitted parameters. It is clear that the steps in physical space are coupled with the corresponding mesoscale dynamics. For cells that dwell in the amoeboidal class, both the mean and the variance of the steps are the smallest. Correspondingly, the mean square displacement of amoeboid cells has a small slope, corresponding to an effective diffusivity of 8 $\mu$m$^2$/hour (for each spatial dimension, Fig. \ref{fig5}B). On the other hand, cells make larger steps when dwelling in the mesenchymal class, and the effective diffusivity increases three-fold to 26 $\mu$m$^2$/hour. Importantly, when cell migration is simultaneously coupled with phenotype transitions, the class-switching steps have distinct statistical distributions (Fig. \ref{fig5}A). Our analysis shows that not only is it important to distinguish different morphological phenotypes in studying the motility of cancer cells, but one may also need to take phenotype transitions into account.
For instance, without accounting for the class-switching events, the weighted average of the mean square displacements from mesenchymal and amoeboidal cells underestimates the observed cell motility by 15\% (Fig. \ref{fig5}B; the weighted average MSD curve corresponds to a diffusivity of approximately 20 $\mu$m$^2$/hour, as compared with 24 $\mu$m$^2$/hour for full cell trajectories). Since phenotype transitions occur rapidly at the single-cell level regardless of ECM concentration, porosity and rigidity, we conclude that mesoscale dynamics contributes to determining the invasive potential of cancer cells. \begin{figure}[h] \centering \includegraphics[width=0.99\columnwidth]{Figs_v2/Figure_5.pdf} \caption{Cancer cell migration in 3D ECM is coupled with morphological phenotype and phenotype transitions. (A \& B) Means ($\mu$) and variances ($\sigma^2$) of the step size distributions obtained by fitting the experimental measurements with log-normal distributions. The steps are categorized based on the morphological phenotype (coarse-grained as mesenchymal and amoeboidal) dynamics. ME: abbreviation for mesenchymal, which includes FP and LA states. AM: abbreviation for amoeboidal, which includes BB and AE states. If a cell makes a step starting in an AM state and ending in a ME state, the step is categorized as an AM-ME step. The starting and ending frames of steps are separated by 15 minutes. In (A-B) error bars show the 95\% confidence intervals of the fitted parameters. (C) Real-space (2D projection) mean square displacements (MSD) of cells. AM: the MSD of cells dwelling in the AM state. ME: the MSD of cells dwelling in the ME state. average: the weighted average of the mean square displacements of AM- and ME-dwelling cells. The average is based on the occurrence fractions of AM and ME cells. Full trajectory: the MSD obtained from entire cell trajectories regardless of the morphological phenotypes. Shaded areas in C show SEM. In (A-C) a total of 1974 hours of single cell trajectories are analyzed.
The ECM is prepared at room temperature with a concentration [col] = 1.5 mg/mL.} \label{fig5} \end{figure} During metastasis a migrating cancer cell must navigate ECM of distinct mechanical properties. We therefore next investigate how cell morphodynamics helps cells traverse interfaces and adapt to ECM of distinct mechanics. To this end, we create collagen matrices consisting of two integrated layers (\textit{SI appendix} S9). The RT layer, prepared at room temperature, shows a porous fibrous network, while the 37 $^\circ$C layer, prepared at 37 $^\circ$C, shows a much more homogeneous structure (Fig. \ref{fig6}A). Without additional cues MDA-MB-231 cells randomly navigate the ECM, occasionally traversing the interface and experiencing a sudden change of ECM physical properties. Over the course of 24 hours we do not observe durotaxis. Consistent with the corresponding results in uniform ECM, we find that the likelihood of observing a filopodial cell is significantly higher in the ECM layer prepared at room temperature, while for blebbing cells the probability is higher in the gel layer prepared at 37 $^\circ$C (Fig. \ref{fig6}B). The dwell times of the phenotypes, on the other hand, follow the same trend as the occurrence probabilities but change rather moderately (Fig. \ref{fig6}C). The shift of phenotype homeostasis can once again be understood from the phenotype transition matrices. To simplify the discussion, in Fig. \ref{fig6}D we plot the four largest elements of the transition matrices calculated from cells in each of the two layers. In the RT layer, the filopodial cell population is enriched by frequent LA-AE exchange and LA to FP transitions. As cells cross the interface, LA to FP transitions become less likely, and the AE-BB pathway is steered to favor blebbing states. To further quantify the effect of the interface in modulating cell morphodynamics, we calculate the spatial frequencies of dwell events (Fig. \ref{fig6}E) and of AE-originating transition events (Fig. \ref{fig6}F).
We define a coordinate system where the y-axis is along the interface passing $x=0$ (Fig. \ref{fig6}A). This allows us to combine data from multiple repeated experiments where cells are seeded at random locations. After aligning the coordinates, we define the spatial frequency of transition (or dwell) events from state $i$ to state $j$ as $R(i \rightarrow j,x)$, which satisfies \begin{equation} P(i \rightarrow j,x) = R(i \rightarrow j,x)M(x). \end{equation} Here $P(i \rightarrow j,x)$ is the probability density of observing event $i \rightarrow j$ per unit time (1 hour), and $M(x)$ is the cell density (along the $x$-axis). We use a 1-D Gaussian kernel to estimate $P(i \rightarrow j,x)$ and $M(x)$ (\textit{SI appendix} S9). As a result, the spatial frequency $R(i \rightarrow j,x)$ represents the likelihood of a cell undergoing a specific type of transition per unit time (1 hour) as a function of distance to the interface. The spatial frequencies of dwell events clearly show that while the FP state becomes increasingly stable deeper into the RT layer, the LA and BB states are more stable in the 37 $^\circ$C layer. The AE state, on the other hand, is most stable at the interface (Fig. \ref{fig6}E). Therefore the AE state plays a special role in mediating cell adaptation across distinct ECM layers. Indeed, we find a gradual shift of the favored AE-originating transitions as the distance to the interface varies. The frequency of AE to LA events, the main amoeboidal-to-mesenchymal path, peaks in the RT layer. AE to BB events, which are mainly responsible for enriching blebbing cells, peak in the 37 $^\circ$C layer. Taken together, we find that morphological phenotype transitions and the associated migration mode switching are integral parts of cancer cell invasion and adaptation to complex ECM. \begin{figure*}[t] \centering \includegraphics[width=1.8\columnwidth]{Figs_v2/Figure_6.pdf} \caption{Morphological phenotype transition facilitates cell migration in heterogeneous ECM.
(A) Time-lapse projection of 3D migrating MDA-MB-231 cells navigating engineered heterogeneous ECM. The ECM contains two adjacent layers that are prepared at room temperature (RT) and 37 $^\circ$C respectively. A confocal reflection image shows the ECM structure next to the interface (dashed line, $y$-axis). Scale bar: 50 $\mu$m. (B) Fraction of cells of each morphological phenotype on both sides of the interface. (C) Dwell time of cells of each morphological phenotype on both sides of the interface. (D) Phenotype transition rates on both sides of the interface. (E) Spatial frequency of dwell events. (F) Spatial frequency of AE to AE, AE to LA and AE to BB events. See main text for the definition of the spatial frequency $R(i \rightarrow j,x)$. In (E-F) dashed lines indicate the interface ($x$=0) separating the 37 $^\circ$C gel (left) and the RT gel (right). A total of 3,800 hours of single cell recordings from three independent experiments have been used to calculate the results in (B-F). } \label{fig6} \end{figure*} \section{Discussion} \label{sec:discuss} In this paper, we report the morphodynamics of MDA-MB-231 cells in type I collagen ECM as a model system of metastatic cancer cells migrating in 3D tissue. MDA-MB-231 cells rapidly change their geometry, exhibiting a subdiffusive random walk in shape space. This occurs simultaneously with their superdiffusive walks in real space (Fig. \ref{fig1}). The biological significance of the morphodynamics is further demonstrated by classifying cell shapes into morphological phenotypes corresponding to different migration programs (Fig. \ref{fig2}). This allows us to study cell morphodynamics at the mesoscale, in terms of morphological phenotype transitions. Utilizing machine learning and visualization techniques, we show that cell morphodynamics is regulated by Rho-signaling (Fig. \ref{fig3}), which is a molecular control hub of cell mechanosensing and force generation.
It has been shown previously that Rho/Rac signaling regulates the shift between mesenchymal and amoeboidal motility \cite{Marshall2008rac,Marshall2010plasticity}. Our analysis further reveals that, instead of favoring a particular mode of motility, perturbations of Rho signaling alter the migration mode transition rates. In particular, down-regulating Rho leads to an overall amoeboidal-to-mesenchymal transition that routes through AE and LA states. Up-regulation of Rho, on the other hand, leads to strongly fluctuating morphodynamics that enrich blebbing cells. The irreversibility of the responses to up- and down-regulating Rho signaling suggests a complex phenotype landscape that controls 3D cancer cell motility. We study morphological phenotype transitions in ECM of distinct physical properties and find that ECM microstructure modulates the probabilities, dwell times, and transition rates of morphological phenotypes. Collagen matrices with homogeneous structure, such as those prepared at higher temperature, enrich the population of blebbing cells. By comparing the transition matrices, we find that the enrichment of blebbing cells is directly related to the reduced transition rate from the BB to the AE state, and is also indirectly contributed to by the mesenchymal-to-amoeboidal transition through LA and AE states. Similarly, collagen matrices with structural anisotropy enrich the population of filopodial cells. The enrichment is directly attributed to an increased LA to FP rate, and indirectly to the amoeboidal-to-mesenchymal transition mediated by LA and AE states. These results show that it is possible to exert external control over cell morphodynamics (and the corresponding 3D migration modes) through ECM mechanics. Importantly, taking phenotype transitions into account allows us to better predict the outcome of manipulating cell migration modes through ECM physical properties \cite{Yamada2016adhesion,Konstantopoulos2017}.
In light of the rapid phenotype transitions exhibited by individual cells, 3D cancer cell motility may be considered a hidden Markov process where each phenotype is associated with characteristic step size distributions (Fig. \ref{fig5}). Specifically, we find that steps that occur simultaneously with a phenotype transition have distinct sizes compared with steps taken while cells dwell in a particular morphological phenotype. This makes morphodynamics a crucial factor in determining the invasive potential of cancer cells. To our knowledge, this aspect has so far been largely overlooked in the literature. Through the lens of a hidden Markov process, morphodynamics may facilitate cancer invasion because phenotype transitions allow cancer cells to search for and commit to a more effective migration program \cite{Lander2011}. Using an ECM model consisting of two mechanically distinct layers, we show that cells gradually adjust their morphodynamics as they approach and cross the layer interface (Fig. \ref{fig6}). Therefore morphological phenotype transitions may be essential in cancer cell metastasis, enabling cells to navigate non-uniform ECM. In summary, we demonstrate that the morphodynamics of 3D migrating cancer cells is a powerful tool for inspecting the internal state and microenvironment of the cells. Investigated at the mesoscale, the morphodynamics imply that 3D cancer cell migration is inherently plastic \cite{Yamada2012commentary}. The plasticity is controlled by the mode transition matrices, rather than by a deterministic decision tree \cite{Konstantopoulos2017}. In order to further exploit the information provided by cell shape fluctuations, future research is needed to decode morphodynamics as a rich body language of cells, and to control morphodynamics as a route to mechanical programming of cell phenotype.
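The hidden-Markov picture above can be illustrated with a minimal two-state (ME/AM) random walk in which log-normal step sizes depend on a hidden phenotype state. All parameter values below are hypothetical, chosen only so that ME steps are larger on average, as reported; the model is a sketch, not a fit to the data.

```python
import numpy as np

def simulate_hmm_walk(n_steps=20000, p_switch=0.05, seed=0):
    """2D random walk whose log-normal step sizes depend on a hidden
    two-state (ME=0 / AM=1) Markov chain; ME steps are larger on average."""
    rng = np.random.default_rng(seed)
    mu = {0: 0.8, 1: 0.2}      # hypothetical log-normal location parameters
    sigma = {0: 0.6, 1: 0.5}   # hypothetical log-normal scale parameters
    state, pos = 0, np.zeros(2)
    positions = [pos.copy()]
    for _ in range(n_steps):
        if rng.random() < p_switch:          # phenotype transition
            state = 1 - state
        step = rng.lognormal(mu[state], sigma[state])
        theta = rng.uniform(0, 2 * np.pi)    # isotropic step direction
        pos += step * np.array([np.cos(theta), np.sin(theta)])
        positions.append(pos.copy())
    return np.array(positions)

def msd(positions, max_lag=100):
    """Time-averaged mean square displacement for lags 1..max_lag."""
    return np.array([np.mean(np.sum((positions[lag:] - positions[:-lag])**2,
                                    axis=1)) for lag in range(1, max_lag + 1)])
```

Comparing the MSD of the full trajectory with state-conditioned MSDs in such a model reproduces the qualitative effect discussed above: phenotype switching mixes the step statistics of the two classes.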
\section{Materials and methods} \label{sec:methods} See \textit{SI Appendix} S1-S9 for details of 3D cell culture, microscopy, pharmacological treatments, and data analysis. \begin{acknowledgments} We thank Prof. Michelle Digman and Prof. Steve Press{\'e} for helpful discussions. The funding for this research results from a Scialog Program sponsored jointly by Research Corporation for Science Advancement and the Gordon and Betty Moore Foundation through a grant to Oregon State University by the Gordon and Betty Moore Foundation. Part of this research was conducted at the Northwest Nanotechnology Infrastructure, a National Nanotechnology Coordinated Infrastructure site at Oregon State University which is supported in part by the National Science Foundation (grant NNCI-1542101) and Oregon State University. C. Eddy and B. Sun are supported by DOD award W81XWH-20-1-0444 (BC190068). B. Sun is also supported by the National Institute of General Medical Sciences award 1R35GM138179. \end{acknowledgments} \section*{S1: Additional experimental details} \label{S1: Imaging} \subsection*{3D cell culture} GFP-labeled MDA-MB-231 human breast cancer cells are purchased from GenTarget Inc. and are maintained according to the manufacturer's instructions. To embed the cells in 3D collagen matrices, cells are suspended at very low density in neutralized collagen solutions. Generally, the suspension is then immediately transferred to glass bottom dishes (ibidi $\mu$-dish) and incubated on either a warming plate set to 25$^{\circ}$C, or in a tissue culture incubator (37$^{\circ}$C, 5$\%$ CO2) for 30 minutes in order to solidify the matrix. For fiber alignment, a small magnetic iron shaving (<200 $\mu$m) is first placed onto the dish, then immersed in the cell-collagen solution and placed onto the warming plate. We then drag the particle through the solution with an external magnet along a line for approximately 3 minutes while warming, and the solution is then left to solidify \cite{Kim2020}.
The cellularized ECM is then immersed in tissue culture medium and continuously incubated for 24 hours before imaging. \subsection*{Microscopy and image analysis} Continuous imaging is done with a Leica TCS SPE confocal microscope equipped with a stage-top incubator (ibidi). Images are captured at a rate of 1 frame per 15 minutes. The raw images are gray scale with a resolution of $1024 \times 1024$ pixels. The voxel size has been calibrated to equal 0.538 $\mu$m. A single x-y plane is imaged every 10 $\mu$m in the z-dimension per experiment for up to 24 hours. Using custom Matlab scripts, cell images are maximum-projected onto an x-y plane and tracked over time (see section S2). The projected images are manually segmented, and screened to remove cells that are not entirely within the viewing window. \section*{S2: Geometric characterization of cell images} \label{S2: Cell Measure} \subsection*{Image processing} Following acquisition of fluorescent images, data regarding cell shape and position are obtained by processing and binarizing the time-lapse images using custom NIH ImageJ and Matlab scripts. First, fluorescence images are background-subtracted using a rolling ball radius of 50 pixels (26.88 $\mu$m) and then log-transformed in order to make cell edges highly visible and so that less fluorescently intense cells are also quantified. A manual threshold is then applied for each image. Afterwards, cells are manually segmented for each z-stack if applicable. Since consecutive z-stacks may have cell overlap, custom Matlab scripts are then used to determine whether the same cell appears in multiple z-stacks. Finally, we take a maximum projection (2D) of each cell.
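The preprocessing steps above can be sketched in Python as follows. Here the rolling-ball background subtraction is approximated by a grayscale morphological opening with a disk footprint, and the threshold is a placeholder for the manual, per-image value; this is an illustrative pipeline, not the exact scripts used.

```python
import numpy as np
from scipy import ndimage

def preprocess(image, ball_radius=50, threshold=0.5):
    """Background-subtract, log-transform, and binarize a fluorescence image."""
    # Rolling-ball-style background estimate via a grayscale opening
    # with a disk-shaped footprint of the given radius.
    y, x = np.ogrid[-ball_radius:ball_radius + 1, -ball_radius:ball_radius + 1]
    footprint = x**2 + y**2 <= ball_radius**2
    background = ndimage.grey_opening(image, footprint=footprint)
    corrected = np.clip(image - background, 0, None)
    # Log transform makes dim cell edges quantifiable.
    logimg = np.log1p(corrected)
    # Manual threshold (placeholder: fraction of the maximum), then keep
    # the largest connected object as the cell mask.
    binary = logimg > threshold * logimg.max()
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```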
Geometrical measurements are then taken on binary objects using Matlab’s regionprops function, including area, perimeter, major axis length, minor axis length, solidity, eccentricity, convex area, extent, equivalent diameter, convex perimeter, fiber length (skeletonized max length), maximum inscribed radius, and maximum bleb radius (maximum secondary circle). Additional measures of form factor ($\frac{\text{Area}}{\text{Perimeter}^2}$), aspect ratio ($\frac{\text{Major axis length}}{\text{Minor axis length}}$), convexity ($\frac{\text{Perimeter}}{\text{Convex perimeter}}$), curl ($\frac{\text{Major axis length}}{\text{Fiber length}}$), perimeter curl ($\frac{\text{Perimeter}}{\pi}(1-\sqrt{1-4\pi\,\text{Form factor}})$), sphericity ($\frac{2\times\text{Max inscribed radius}}{\text{Major axis length}}$), inscribed area ($\frac{\text{Major axis length}^2\,\pi}{\text{Max inscribed radius}}$), and bleb ratio ($\frac{\text{Max bleb radius}}{\text{Max inscribed radius}}$) are subsequently calculated, totaling 21 geometric measures. Collectively, this is shown schematically in figure \ref{fig:S2}. \begin{figure}[ht] \centering \caption{Schematic of image processing and measures taken from the binary image using custom Matlab and Python scripts (scale-bar = 10 $\mu$m).} \includegraphics[scale=0.5]{SI/s2/Measure_Cell.pdf} \label{fig:S2} \end{figure} \subsection*{Tracking cell position} In order to track the real-space center of the cell, we use the maximum inscribed circle (MIC) of the cell image. Compared to imaging the nucleus directly, which causes phototoxicity and is confounded by multinucleated cells, the MIC does not require additional probes. To further demonstrate the accuracy of the MIC, we compared short videos of dual-labeled MDA-MB-231 cells where the GFP channel labels the cytoplasm and the RFP channel labels the cell nucleus (SYTO-64, ThermoFisher). We find that the MIC agrees very well with direct nucleus staining when determining the cell position, as shown in figure \ref{figS2}. For most of the frames, the deviation is less than 10\% of the cell long axis.
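Locating the MIC center can be sketched with a Euclidean distance transform; this is one plausible implementation of the idea, not necessarily the script used here.

```python
import numpy as np
from scipy import ndimage

def max_inscribed_circle(mask):
    """Center (row, col) and radius of the largest circle fitting inside
    a binary cell mask, via the Euclidean distance transform."""
    # Distance from every foreground pixel to the nearest background pixel;
    # its maximum is the MIC radius and its argmax the MIC center.
    dist = ndimage.distance_transform_edt(mask)
    center = np.unravel_index(np.argmax(dist), dist.shape)
    return center, dist[center]
```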
The root mean squared deviation is approximately 3 microns. \\ \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{SI/s2/MIC.pdf} \caption{Positional data acquisition: Position measurements of a cell determined by the nucleus centroid stained with SYTO-64 (red) and the max-inscribed circle center (green), and histogram of square deviations between the two positional measures. All scale-bars are 20 $\mu$m.} \label{figS2} \end{figure} \newpage \section*{S3: ECM dimension modulates real space and shape space dynamics} \label{S3: ECM Mod Stats} The morphodynamics of a cell display strongly sub-diffusive behavior in shape space. As shown in Table \ref{tab:S3}, we fit the mean square displacements of each measure and report the power-law exponent over a ten-hour lag period. \begin{table}[h!] \centering \begin{tabular}[c]{|p{4cm} || p{2cm}| p{2cm}|} \hline \multicolumn{3}{|c|}{$y \sim x^n$} \\ \hline Measure & $n$ (3D) & $n$ (2D) \\ \hline Area & 0.6261 & 0.9812 \\ Major Axis Length & 0.5761 & 0.8781\\ Minor Axis Length & 0.4338 & 0.4702\\ Eccentricity & 0.3638 & 0.3234\\ Convex Area & 0.5738 & 0.9230\\ Equivalent Diameter & 0.5999 & 0.9246\\ Solidity & 0.4493 & 0.5794\\ Extent & 0.4965 & 0.5208\\ Perimeter & 0.5635 & 0.8353\\ Convex Perimeter & 0.6043 & 0.9636\\ Fiber Length & 0.5981 & 0.8931\\ Maximum-Inscribed Radius & 0.5059 & 0.6059\\ Bleb Length & 0.5125 & 0.6510\\ Aspect Ratio & 0.4240 & 0.6419\\ Form Factor & 0.4642 & 0.5326\\ Convexity & 0.2645 & 0.4477\\ Perimeter Curl & 0.4790 & 0.5681\\ Curl & 0.2515 & 0.2775\\ Sphericity & 0.5533 & 0.5016\\ Inscribed Area & 0.5139 & 0.8617\\ Relative Bleb Length & 0.4667 & 0.6052\\ Real-Space Migration & 1.2197 & 1.3881\\ \hline \end{tabular} \caption{Power-law exponent ($n$) of anomalous diffusion quantified by fitting the mean square displacements of shape and position measures for cells embedded in a 1.5 mg/mL collagen matrix prepared at room temperature (3D) and cells plated on top of similarly prepared collagen matrices
(2D).} \label{tab:S3} \end{table} \newpage \section*{S4: Morphological Phenotype Analysis} \label{S4: Classifiers} \subsection*{Details of machine-learning and SVM} \indent In order to classify cells into particular migration mechanisms, we used support-vector machine (SVM) learning \cite{Cortes1995, Ben-Hur2002}. As a maximal-margin classifier, SVM was particularly attractive since the overlap between different phenotypes was unknown. Also important is that cells can display multiple phenotypes at once, and thus a soft-margin classifier was vital. Lastly, given our low-dimensional feature space and small training sets, SVM was an optimal choice for classification purposes.\\ \indent Labeled data were first acquired as described below. Images were binarized and then geometrical data were obtained on the labeled cells for training. We performed a parameter grid search for RBF, linear, and polynomial kernel models, with 10-fold cross validation to determine the best performance. The grid search determined that an RBF kernel with $\gamma = 5.8\times10^{-3}$ and $C=5000$ yields average training, validation, and test set accuracies of 93.4$\%$, 86.0$\%$, and 94.3$\%$, respectively. However, this model was only nominally improved over the RBF model with $\gamma = 0.01$ and $C=1000$ (average training, validation, and test set accuracies of 93.0$\%$, 85.6$\%$, and 94.3$\%$, respectively). A linear model also recorded strong performance with $C=193.2$, with average training, validation, and test set accuracies of 90.8$\%$, 86.0$\%$, and 94.3$\%$, respectively. Although easier to interpret, the linear model did not consistently match the supervised classification. For this reason, along with the slight increase in performance, we proceeded with the RBF-kernel SVM model. The optimized SVM model performs at 92.3\%, 91.8\%, and 94.3\% on the training, validation, and held-out test sets discussed in the Random-Forest comparison below.
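The grid search described above can be sketched with scikit-learn; the synthetic data below stand in for the 21 geometric measures and four phenotype classes, and the parameter grid is a reduced version of the search.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 21 geometric measures and 4 phenotype labels.
X, y = make_classification(n_samples=600, n_features=21, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# Grid search over RBF-kernel SVMs with 10-fold cross validation.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"gamma": [5.8e-3, 1e-2, 1e-1], "C": [1000, 5000]},
                    cv=10)
grid.fit(X_train, y_train)
test_accuracy = grid.score(X_test, y_test)
```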
\subsection*{Details of training, validation, and test sets} Training images were processed as previously described. At least 280 images of each class were acquired to train the machine learning models, including the use of an optimized small shearing augmentation (random shear with magnitude $<$ 0.4) and rotation as over-sampling techniques, to make our models robust to slight perturbations and noise as well as to expand the training examples. To evaluate true performance, an unseen test set of 35 images was prepared. The training and test sets are available on Figshare \cite{Eddy2020_svm}. \subsection*{Comparisons with Random-Forest classifier} To compare performance with other multi-class models, we have trained a Random-Forest classifier, tuning the depth, number of estimators, and feature hyperparameters \cite{Breiman2001}. The optimized model uses 200 estimators with a maximum depth of 220 splits and no feature subset selection, yielding a bagged ensemble of decision trees. We perform 10-fold cross validation on 80\% of the training data and use 20\% as validation data, utilizing class weighting to account for the class imbalance during training. We also use our 35-image held-out test set for evaluation. The mean accuracy score was 91.3 $\pm$ 1.7\%, with average recall scores of 100\% and 93.9\% on the training and validation sets, respectively, and 82.1\% on the held-out test set. The SVM and Random-Forest classifiers generally agree very well in their predictions, although the SVM performs notably better on the held-out test set. The Random-Forest classifier disagrees with the SVM classifier on 232/3012 training examples (7.7\%), 41/754 validation examples (5.4\%), and 3/35 test set examples (8.6\%).
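The Random-Forest comparison can be sketched similarly; the data are again synthetic, and only the quoted hyperparameters (200 estimators, depth cap of 220, class weighting) come from the text.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, class-imbalanced stand-in for the labeled cell images.
X, y = make_classification(n_samples=800, n_features=21, n_informative=10,
                           n_classes=4, n_clusters_per_class=1,
                           weights=[0.4, 0.3, 0.2, 0.1], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)

# Class weighting compensates for the imbalance during training.
forest = RandomForestClassifier(n_estimators=200, max_depth=220,
                                class_weight="balanced", random_state=0)
forest.fit(X_train, y_train)
val_accuracy = forest.score(X_val, y_val)
```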
\subsection*{Details of parametric t-SNE embedding algorithm} \indent To visualize high-dimensional morphological trajectories in three-dimensional space, a t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm is used for dimensionality reduction, utilizing the geometric characterization of cells without requiring labels \cite{Berman2014}. Since the standard t-SNE algorithm is non-parametric, it utilizes all available data to separate clusters well, but it cannot embed new data points without retraining, which often leads to different cluster shapes. To embed new data points or entire trajectories in a consistent morphological space, a parametric t-SNE model is prepared. This model uses an Adam optimizer set to minimize the Kullback-Leibler divergence loss, and is trained for 800 iterations with a batch size of 256 examples and the tunable perplexity parameter set to 30.0. The training set consists of 1024 examples, with equal numbers of samples from each phenotype. The model is made up of three fully connected layers, each followed by a ReLU activation, with a final fully connected layer computing the three t-SNE components for each point. We then use the model to transform the high-dimensional trajectories of new cells into the three-dimensional space. \section*{S5: Cell Chemical Treatment} \label{S5: Chem Treat} To demonstrate the effects of the Rho-ROCK pathway on cellular morphology, we chemically induce activation and inhibition of pathway proteins. Y27632, a specific inhibitor of Rho-associated protein kinases as well as ROCK-II activity \cite{Poincloux2011,Gordonov2015, Matsubara2016}, was purchased from Sigma-Aldrich and diluted to a working concentration of 3 $\mu$g/mL (0.1$\%$ v/v DMSO) in serum-free growth medium.
Similarly, Rho activator II (CN03), known to robustly increase the level of GTP-bound RhoA \cite{Lee2015,Daoud2017}, was purchased from Cytoskeleton and diluted to a working concentration of 2 $\mu$g/mL (0.1$\%$ v/v DMSO) in serum-free growth medium. 1.5 mg/mL gels were prepared as discussed in the main text with cells at a low concentration and incubated in serum-free growth medium for 6 hours. Cells were then imaged with confocal microscopy for up to 4 hours. The serum-free growth medium was then removed from the dish and replaced with fresh serum-free growth medium containing the prepared chemical, and imaging resumed immediately for up to another 20 hours. Figure \ref{figS5:CN03} shows additional trajectories of cells that responded to chemical treatment with CN03. Generally, cells treated with CN03 exhibit morphologies characteristic of cell contraction. Notably, cells do not produce protrusions following treatment and typically retract pre-existing protrusions post-treatment. Visually, this is seen as a notable slide toward the blebbing migration mode (green) or toward lamellipodial cell-spreading (blue). Conversely, Figure \ref{figS5:Y27632} shows additional trajectories of cells that responded to chemical treatment with Y27632. Cells treated with Y27632 characteristically produce protrusions and/or sustain pre-existing protrusions. In the figure, this is shown by most trajectories moving toward filopodial migration (magenta) or toward actin-enriched leading edge migration (yellow). \begin{figure}[h!] \centering \includegraphics[width=0.7\textwidth]{SI/s5/Y27632_final.pdf} \caption{Additional t-SNE time projections (solid) of ROCK-inhibiting Y27632 drug treated cells.} \label{figS5:Y27632} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.7\textwidth]{SI/s5/CN03_final.pdf} \caption{Additional t-SNE time projections (solid) of Rho-activating CN03 treated cells.} \label{figS5:CN03} \end{figure} \newpage \section*{S6: ECM Characterization} \label{S6: Gel Quant} \subsection*{Gel Autocorrelation} In order to quantify the density fluctuations of the collagen ECM, we compute the autocorrelation functions of confocal reflection images of collagen gels. Reflection images are first background subtracted using a rolling ball with a radius of 50 pixels (26.88 $\mu$m). Images are then log-transformed to make fibers highly quantifiable. Images are then mean subtracted, and the autocorrelation is calculated. The autocorrelation is then normalized and smoothed using interpolation. The results are shown in Figure \ref{fig:Autocorrelations}. The spatial uniformity indicates an appropriate distribution of randomly oriented gel fibers. Additionally, these results show that the decay of the autocorrelation is slower for higher density gels, an indication of the fiber quantity, and much faster for higher temperature gels, caused by the much shorter fiber lengths and smaller pores \cite{Jones2014}. The anisotropy of the decay in the final graph indicates the direction of alignment. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{SI/s6/Autocorrelations.pdf} \caption{Autocorrelations shown for a 25$^{\circ}$C 1.5 mg/mL randomly oriented gel (upper left), a 25$^{\circ}$C 3.0 mg/mL randomly oriented gel (upper right), a 37$^{\circ}$C 1.5 mg/mL randomly oriented gel (lower left), and a 25$^{\circ}$C 1.5 mg/mL aligned gel (lower right).} \label{fig:Autocorrelations} \end{figure} \subsection*{Gel Coherency} In order to determine the degree of local and global alignment of collagen fibers, we take the pre-processed confocal reflection images and use OrientationJ with 9-14 circular ROIs per image (packed without overlap), with ROI diameters ranging from 145 to 450 pixels.
At least 3 images (one per experiment) were used to quantify coherency. Empirically, larger ROIs reliably quantified global alignment, while smaller ROIs were better suited for local alignment measures. ROIs smaller than 145 pixels were not used, as we noticed that the coherency measured by OrientationJ can be highly biased for large fibers such as those in 25$^{\circ}$C gels (145 pixels [$\sim$78 $\mu$m] is approximately twice the average fiber length [$\sim$41 $\mu$m] in a 25$^{\circ}$C gel). The results are shown in Figure \ref{figS6:Coherency} and follow previous literature \cite{Clemons2018}. \begin{figure}[h!] \centering \includegraphics[width=0.3\textwidth]{SI/s6/Coherency.pdf} \caption{Coherency measurements taken using custom Matlab scripts utilizing the OrientationJ plug-in for ImageJ (NIH). Measurements are for 1.5 mg/mL collagen gels prepared at room temperature, where we have utilized the alignment protocol to compare against typical randomly-oriented collagen gels.} \label{figS6:Coherency} \end{figure} \subsection*{Rheology} Strain-sweep rheology measurements were performed on the varying density gels with a Discovery Hybrid Rheometer-3 (TA Instruments) at a 1 Hz frequency in a parallel plate geometry. A standard Peltier plate (TA Instruments) allowed gels to be formed at 37$^{\circ}$C. Young's modulus is shown in Figure \ref{figS6:strain-sweep}. As shown, the storage moduli in the linear regime for gels with 1.5, 3.0, and 4.5 mg/mL collagen were about 70, 180, and 250 Pa, respectively. \begin{figure}[h!]
\centering \includegraphics[width=0.4\textwidth]{SI/s6/strain_sweep.eps} \caption{Strain-sweep rheology measurements of collagen fiber networks gelled at 37$^{\circ}$C using a DHR-3 Rheometer (TA Instruments) in a parallel plate geometry at a 1 Hz frequency.} \label{figS6:strain-sweep} \end{figure} \section*{S7: Additional details of morphological phenotype dynamics} \label{S7: Phenotype Dynamics} \subsection*{Dwell time definition} The dwell time of state $i$ is determined by \[ D_{i\rightarrow{i}} = \frac{1}{1-r_{i\rightarrow{i}}}, \] where $r_{i\rightarrow{i}}$ is the transition rate from state $i$ back to itself, and $D_{i\rightarrow{i}}$ is the corresponding dwell time. We found that this definition of the dwell time maximizes the usefulness of the data and is more robust to experimental shortfalls than the naive method of simply counting the number of frames for which a cell remains in the same phenotype. \subsection*{Details of transition rate calculations} Assuming the probability distribution of cell morphological phenotypes follows a Boltzmann distribution, the probability $P_i$ is given by \[ P_i = \frac{1}{1+\sum_{j\neq i}^{N}\frac{r_{i\rightarrow{j}}}{r_{j\rightarrow{i}}}}, \] where $r_{i\rightarrow{j}}$ is the transition rate from state $i$ to state $j$, and conversely $r_{j\rightarrow{i}}$ is the transition rate from state $j$ to state $i$. The transition matrix is thus calculated as a probability per unit time, and hence each row of the transition matrix sums to 1: \[ r_{i\rightarrow{i}} + \sum_{j\neq i}^{N} r_{i\rightarrow{j}} = 1. \] The transition rate $r_{i\rightarrow{j}}$ is calculated simply by counting the number of transitions from state $i$ to state $j$ and dividing by all observed transitions from state $i$. Using this method, we find that the transition rates are stable regardless of the length of the trajectories in computational experiments. \\ \indent Following SVM classification, phenotype dynamics could be properly drawn from the data.
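A minimal numpy sketch of the transition-rate and dwell-time computation described above (the four-state trajectory below is synthetic; real inputs would be the per-frame SVM phenotype labels):

```python
import numpy as np

def transition_matrix(states, n_states):
    """Count transitions i -> j, then normalize each row by the total
    number of observed transitions out of state i."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, totals,
                     out=np.zeros_like(counts), where=totals > 0)

def dwell_times(rates):
    """Dwell time definition from the text: D_{i->i} = 1 / (1 - r_{i->i})."""
    return 1.0 / (1.0 - np.diag(rates))

# Synthetic per-frame trajectory over 4 phenotype states.
rng = np.random.default_rng(1)
traj = rng.integers(0, 4, size=500)

R = transition_matrix(traj, 4)   # each visited row sums to 1
D = dwell_times(R)               # in frames; multiply by 15 min for time
```

For a real trajectory, `R[i, j]` is the probability per frame of the $i \rightarrow j$ transition and `D[i]` the mean number of frames spent in state $i$.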
Importantly, where the maximum decision score of the SVM classifier does not exceed 60\%, the classification is instead determined to be an intermediate between two states. The intermediate state can be a chimera of two states, with most occurrences arising as a cell transitions between morphological phenotypes. Because this state is not considered to be unique, we calculate the transition rate from state $i$ that passes through $N$ intermediate states prior to state $j$ as $1/N_{\rm frames}$. We find by simulation that this method recovers transition rates more accurately than methods using soft-max or ignoring intermediate states. \subsection*{Probability flux calculations} To investigate broken detailed balance in morphological phenotype space, we report the probability flux calculations shown in Figure \ref{figS7}. The probability flux from state $i$ to state $j$ is given by \[ \Delta\phi_{i\rightarrow{j}} = \frac{N_{i\rightarrow{j}}}{\sum_{k} \sum_{l} N_{k\rightarrow{l}}}, \] where $N_{i\rightarrow{j}}$ is the number of transitions from state $i$ to state $j$, and both summations in the denominator run over all available states. The net probability flux between states $i$ and $j$ is therefore given by $\Delta\phi_{i\rightarrow{j}}-\Delta\phi_{j\rightarrow{i}}$. We find that the net flux between states is less than 0.003 probability difference per hour for cells in every ECM condition we tested. To reveal whether any small difference in probability flux may be significant, we also report the probability flux percent difference between states $i$ and $j$, given by \[ \frac{\Delta\phi_{i\rightarrow{j}}-\Delta\phi_{j\rightarrow{i}}}{\Delta\phi_{i\rightarrow{j}}+\Delta\phi_{j\rightarrow{i}}}. \] We find that the maximum net probability flux percent difference for cells in any ECM condition evaluated is less than 9\%. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{SI/s7/flux.pdf} \caption{(A-D) Net probability flux calculations under varying ECM conditions.
(E-H) Percent difference of net probability flux under varying ECM conditions. (A, E) collagen ECM prepared at room temperature (RT) and [$col$] = 1.5 mg/mL. (B, F) collagen ECM prepared at RT and [$col$] = 3.0 mg/mL. (C, G) collagen ECM prepared at 37 $^\circ$C and [$col$] = 1.5 mg/mL. (D, H) collagen ECM prepared at RT and [$col$] = 1.5 mg/mL with flow-aligned collagen fibers.} \label{figS7} \end{figure} \newpage \section*{S8: Motility analysis of morphological phenotypes} \label{S8: Motility} In order to determine the migrational persistence of the migration modes, we first report the velocity autocorrelation observed for cells in collagen ECM prepared at room temperature (RT) and [$col$] = 1.5 mg/mL. We find that the autocorrelation decays to zero within a single time step (15 minutes), as shown in Figure \ref{figS8:autocorr}(A). We additionally checked the distribution of $\cos(\theta)$ between consecutive steps and found that consecutive steps appear to be taken in random directions, as indicated by the U-shaped distribution in Figure \ref{figS8:autocorr}(B). \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{SI/s8/Distributions.pdf} \caption{(A) Velocity autocorrelation and (B) binned directionality given by $\cos(\theta)$ between consecutive velocity vectors for cell migration in collagen ECM prepared at room temperature and [$col$] = 1.5 mg/mL. (Bin width = 0.1)} \label{figS8:autocorr} \end{figure} \indent A soft-max classification strategy was used to determine step sizes dominated by a single migration mode and to avoid confounding step sizes from classifications near the decision function boundary. To evaluate the motility of the various migration mode transitions, step sizes were separated into components parallel and perpendicular to the direction of the previous step. Both directions had step-size distributions that were approximately Gaussian.
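The velocity autocorrelation, the $\cos(\theta)$ directionality, and the parallel/perpendicular step decomposition described above can be sketched as follows (a minimal numpy sketch; the two-dimensional track is synthetic, standing in for a tracked cell centroid sampled every 15 minutes):

```python
import numpy as np

rng = np.random.default_rng(2)
pos = np.cumsum(rng.normal(size=(200, 2)), axis=0)   # synthetic 2-D track
vel = np.diff(pos, axis=0)                           # step (velocity) vectors

def velocity_autocorrelation(v, max_lag=10):
    """Mean dot product of velocity vectors separated by each lag,
    normalized by the lag-0 value."""
    c = np.array([np.mean(np.sum(v[:len(v) - k] * v[k:], axis=1))
                  for k in range(max_lag)])
    return c / c[0]

acorr = velocity_autocorrelation(vel)

# Decompose each step into components parallel and perpendicular
# to the direction of the previous step.
prev_dir = vel[:-1] / np.linalg.norm(vel[:-1], axis=1, keepdims=True)
step = vel[1:]
parallel = np.sum(step * prev_dir, axis=1)
perp_dir = np.stack([-prev_dir[:, 1], prev_dir[:, 0]], axis=1)
perp = np.sum(step * perp_dir, axis=1)

# cos(theta) between consecutive steps, as binned in the figure.
cos_theta = parallel / np.linalg.norm(step, axis=1)
```

Histograms of `parallel`, `perp`, and `cos_theta` correspond to the distributions discussed in this section.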
Figure \ref{figS8:fits}(A) shows the binned step size distribution in the persistent direction fit with a Gaussian distribution. We find that these fits yield mean step sizes close to zero and small variances, which are not consistent with the mean-square displacements (MSD) of the RT data shown in Figure 5B. Rather, we find that more suitable fits are obtained by fitting the magnitudes of the step sizes, which follow a log-normal distribution as shown in Figure \ref{figS8:fits}(B). These fits result in step distributions that yield MSD values similar to the RT data. The parameters used to generate the log-normal fits are calculated by fitting the empirical cumulative distribution of step size magnitudes with a log-normal CDF, shown in Figure \ref{figS8:cdfFits}. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{SI/s8/Fits.pdf} \caption{(A) Gaussian fitting (red) of binned step sizes in the persistent direction (blue) [bin width = 1 $\mu$m] for various migration mode transitions. (B) Log-normal fitting (red) of binned step size magnitudes (blue) using the parameters shown in Figure 5A to generate the log-normal PDF [bin width = 0.5 $\mu$m] for various migration mode transitions.} \label{figS8:fits} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{SI/s8/cdfFits.pdf} \caption{Empirical cumulative distribution (blue) and log-normal cumulative distribution function fit (red) of step sizes for transitions between amoeboid (AM) and mesenchymal (ME) modes.} \label{figS8:cdfFits} \end{figure} \newpage \section*{S9: Interface calculations} \label{S9: Interface} \subsection*{Experiment Details} The interface experiment was performed in triplicate. Briefly, the outer gel was first made by gelling cells in collagen solution (1.5 mg/mL, neutralized) at 25$^{\circ}$C for 20 minutes on the DIGME stage \cite{Alobaidi2017}, warmed at 25$^{\circ}$C.
After gelation, the needle was gently removed, and a new ice-cold collagen solution (1.5 mg/mL, neutralized) containing cells was dripped into the hole, gently swirled, and then placed into the incubator at 37$^{\circ}$C for 15 minutes. Afterwards, 3 mL of growth medium was added on top, and the sample sat for 24 hours before imaging. Imaging was performed near the interface, acquiring bright-field, fluorescence (green), and back-reflection confocal images. Using the back-reflection images, the interface was manually traced out through the z-stack. The distances from cell centroids in the corresponding z-stacks to the closest marked interface point were then measured using the Matlab function bwdist. The frequency versus distance is shown in Figure \ref{figs9:freq}, indicating that a large number of cells were imaged close to the interface. \begin{figure}[ht] \centering \includegraphics[scale=0.4]{SI/s9/loc_frequency.pdf} \caption{Frequency of cell locations away from the interface (at 0). Negative distances are in the 37$^{\circ}$C gel; positive distances are in the 25$^{\circ}$C gel.} \label{figs9:freq} \end{figure} \subsection*{Details of interface analysis} \indent Migration mode transitions were first determined using continuous trajectories, properly accounting for intermediate state classification, and distances away from the interface were determined by the initial state location. A 1-D Gaussian kernel was used to acquire continuous local spatial probability densities of transitions (per hour), centered every 5.376 $\mu$m (10$\times$ the distance-to-pixel ratio) with a standard deviation of 26.88 $\mu$m. This yields the probability density (per hour) of observing a transition, $P(i\rightarrow j,x)$, at a location $x$, as used in the main text. To mitigate the bias from non-uniform cell density, we then divide this probability by the spatial probability of observing a cell within the spatial window centered at $x$, $M(x)$. $M(x)$ is calculated using the same Gaussian kernel.
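A minimal numpy sketch of the Gaussian-kernel density estimate described above, using the stated grid spacing (5.376 $\mu$m) and kernel width (26.88 $\mu$m); the transition and cell locations are synthetic stand-ins for measured distances to the interface:

```python
import numpy as np

def gaussian_kernel_density(events, centers, sigma):
    """Sum of 1-D Gaussian kernels centered on each event, evaluated
    at the grid centers (unnormalized local density)."""
    d = centers[:, None] - np.asarray(events)[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)

sigma = 26.88                            # kernel standard deviation (um)
centers = np.arange(-200, 200, 5.376)    # grid centered every 5.376 um

rng = np.random.default_rng(3)
transitions = rng.uniform(-150, 150, size=40)   # synthetic transition locations
cells = rng.uniform(-150, 150, size=400)        # synthetic cell locations

p_transition = gaussian_kernel_density(transitions, centers, sigma)
m_cells = gaussian_kernel_density(cells, centers, sigma)

# Divide by the local cell density M(x) to mitigate non-uniform seeding.
corrected = np.divide(p_transition, m_cells,
                      out=np.zeros_like(p_transition), where=m_cells > 0)
```

With real data, dividing `p_transition` by the acquisition time converts the smoothed counts to the per-hour densities $P(i\rightarrow j, x)$ used in the main text.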
\section{Appendix} \section{Proof of Theorem 1}\label{append:proof_theorem} Without loss of generality, using a permutation of the client indices, we will prove the upper bound for the following term \begin{equation} \label{eq-40} I\left(\mathbf{x}^{(t)}_N ; \frac{1}{N} \sum_{i=1}^N \mathbf{x}^{(t)}_i\middle| \left \{ \frac{1}{N} \sum_{i =1 }^N \mathbf{x}^{(k)}_i \right\}_{k \in [t-1]} \right), \end{equation} where $\mathbf{x}^{(t)}_N$ is the mini-batch gradient of node $N$, with the mini-batch gradient of a generic node $i$ given by \begin{equation}\label{eq-3} \mathbf {x}_i^{(t)} = \frac{1}{B} \sum_{b \in \mathcal{B}^{(t)}_i } g_i( \bm {\theta}^{(t)}, b). \end{equation} \noindent We will use the following property of vectors with singular covariance matrices in the proof of this theorem. \begin{prope}\label{prop:nonsingular_to_singular} \textit{Given a random vector $\mathbf{q}$ with a singular covariance matrix $\mathbf{K}_q$ of rank $d^*$, there exists a sub-vector $\bar{\mathbf{q}}$ of $\mathbf{q}$ with a non-singular covariance matrix $\mathbf{K}_{\bar{q}}$ such that $\mathbf{q} = \mathbf{A}\bar{\mathbf{q}}$, where $\mathbf{A} \in \mathbb{R}^{d\times d^*}$ is a deterministic linear transformation matrix.} \end{prope} Let us define $S_{{N}}^{(t)} = \frac{1}{{N}}\sum_{i = 1}^N \mathbf{x}_i^{(t)}$. We also use the definition of $\bar{{g}}_i( \bm {\theta}^{(t)}, b) \in \mathbb{R}^{d^*}$, for $d^* \leq d$ where $d$ is the model size, which is the largest sub-vector of the stochastic gradient $ g_i( \bm {\theta}^{(t)}, b) $ such that $\bar{{g}}_i( \bm {\theta}^{(t)}, b)$ has a non-singular covariance matrix $K_{\bar{G}^{(t)}}$ for all $ i \in \mathcal{N}$.
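As a numerical illustration of Property~\ref{prop:nonsingular_to_singular} (an illustrative construction only; the map $\mathbf{A}$ and the dimensions below are arbitrary): a vector whose third coordinate is the sum of the first two has a singular covariance of rank 2, and it is recovered as a deterministic linear map of the two-coordinate sub-vector, whose covariance is non-singular.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sub-vector q_bar with a full-rank (2x2) covariance matrix.
q_bar = rng.normal(size=(10000, 2))

# Deterministic map A: the full vector q = A q_bar has a redundant
# third coordinate equal to the sum of the first two.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
q = q_bar @ A.T

# Covariance of q is singular (rank d* = 2), covariance of q_bar is not.
rank_q = np.linalg.matrix_rank(np.cov(q, rowvar=False), tol=1e-8)
rank_q_bar = np.linalg.matrix_rank(np.cov(q_bar, rowvar=False), tol=1e-8)
```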
According to the definition of $\bar{{g}}_i( \bm {\theta}^{(t)}, b)$, we can rewrite \eqref{eq-40} and the term $S_{{N}}^{(t)}$ as follows: \begin{align} \bar{\mathbf {x}}_i^{(t)} &= \frac{1}{B} \sum_{b \in \mathcal{B}^{(t)}_i } \bar{{g}}_i( \bm {\theta}^{(t)}, b) \nonumber \\ \bar{S}_{{N}}^{(t)}&= \frac{1}{{N}}\sum_{i \in \mathcal{N}} \bar{\mathbf{x}}_i^{(t)} \end{align} Let us also define ${F}_{{N}}^{(t)} = \sqrt{N} \bar{S}_{{N}}^{(t)}$. We can decompose the expression in \eqref{eq-40} as follows: \begin{align}\label{eq:mutual_info_decomp_1} &I\left(\mathbf{x}^{(t)}_N ; S_{{N}}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right) \nonumber \\ &\overset{(a)}= I\left(\sqrt{B}\mathbf{x}_N^{(t)}; \sqrt{N}S_{{N}}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right) \nonumber \\ &\overset{(b)} = I\left(\sqrt{B}\bar{\mathbf{x}}_N^{(t)}; F_{{N}}^{(t)} \middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right) \nonumber \\ & = h\left(\!\!\!\sqrt{B}\bar{\mathbf{x}}_N^{(t)}\!\middle|\! \left\{\!S_{{N}}^{(k)} \!\right\}_{\!k\in[t-1]}\!\right) {+} h\left(\!\!{F}_{{N}}^{(t)}\!\middle|\!\left\{S_{{N}}^{(k)} \!\right\}_{\!k\in[t-1]}\! \right)\nonumber\\ &\qquad- h\left(\sqrt{B}\bar{\mathbf{x}}_N^{(t)}, F_{{N}}^{(t)} \middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right) \nonumber\\ &= h\left(\!\!\!\sqrt{B}\bar{\mathbf{x}}_N^{(t)}\!\middle|\! \left\{\!S_{{N}}^{(k)} \!\right\}_{\!k\in[t-1]}\!\right) {+} h\left(\!\!{F}_{{N}}^{(t)}\!\middle|\!\left\{S_{{N}}^{(k)} \!\right\}_{\!k\in[t-1]}\! \right)\nonumber\\ &\qquad-\! h\left(\!\begin{bmatrix} \mathrm{I}_{d^*} & \mathrm{0}_{d^*} \\ \mathrm{I}_{d^*}\frac{1}{\sqrt{N}} & \frac{\sqrt{N-1}}{\sqrt{N}}\mathrm{I}_{d^*}\end{bmatrix}\!\!\!\begin{bmatrix} \sqrt{B}\bar{\mathbf{x}}_N^{(t)} \\ {F}_{{N-1}}^{(t)}\end{bmatrix}\! \middle|\! \left\{S_{{N}}^{(k)} \right\}_{\!k\in[t-1]}\!\right) \nonumber\\ &\stackrel{(c)}{=} h\left(\!\!\!\sqrt{B}\bar{\mathbf{x}}_N^{(t)}\!\middle|\!
\left\{\!S_{{N}}^{(k)} \!\right\}_{\!k\in[t-1]}\!\right) {+} h\left(\!\!{F}_{{N}}^{(t)}\!\middle|\!\left\{S_{{N}}^{(k)} \!\right\}_{\!k\in[t-1]}\!\right)\nonumber\\ &\qquad{-} h\!\left(\!\!\!\sqrt{B}\bar{\mathbf{x}}_N^{(t)}\!\middle|\! \left\{\!S_{{N}}^{(k)} \!\right\}_{\!\!k\in[t-1]}\!\right)\!{-}h\!\left(\!{F}_{{N-1}}^{(t)}\!\middle|\! \left\{S_{{N}}^{(k)} \right\}_{\!\!k\in[t-1]}\!\right)\nonumber\\ &\qquad- \log\left|\det \begin{bmatrix} \mathrm{I}_{d^*} & \mathrm{0}_{d^*} \\ \mathrm{I}_{d^*}\frac{1}{\sqrt{N}} & \frac{\sqrt{N-1}}{\sqrt{N}}\mathrm{I}_{d^*}\end{bmatrix} \right|\nonumber \\ &\stackrel{(d)}= h\left(\!F_{{N}}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{\!k\in[t-1]}\!\right)\nonumber\\ &\qquad- h\!\left(\!\!{F}_{{N-1}}^{(t)}\middle|\! \left\{S_{{N}}^{(k)} \right\}_{\!k\in[t-1]}\!\right) {+} \frac{d^*}{2}\log\left(\!\frac{N}{N-1}\!\right)\!, \end{align} where: $(a)$ follows from the fact that mutual information is invariant under multiplication by non-zero deterministic scalars; $(b)$ follows from Property~\ref{prop:nonsingular_to_singular}; $(c)$ follows from the property of the entropy of a linear transformation of a random vector \cite{10.5555/1146355} and the fact that $\bar{\mathbf{x}}_N^{(t)}$ and ${F}_{{N-1}}^{(t)}$ are conditionally independent given $\left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}$ (e.g., the last global model at time $t$); $(d)$ follows from the Schur complement of the matrix. We will now turn our attention to characterizing the entropy term $h\left(F_{M}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}\right)$ for any ${M}$. Note that \begin{align}\label{eq:mutual_info_decomp_2} &h\left(F_{M}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}\right) \nonumber \\ &= h \left(\!\!\frac{1}{\sqrt{MB}} \sum_{i=1}^M \sum_{b \in \mathcal{B}_i^{(t)}} \bar{g}_i(b, \bm {\theta}^{(t)}) \middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}\!\!\right) \nonumber \\ &\stackrel{(i)}{=} h\!\!
\left(\frac{\mathbf{K}_{\bar{G}^{(t)}}^{1/2}}{\sqrt{MB}} \sum_{i=1}^M\!\sum_{b \in \mathcal{B}_i^{(t)}} \!\!\!\widehat{g}_i(b, \bm {\theta}^{(t)})\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}\right) \nonumber \\ &\stackrel{(ii)}= \log\left(|\det \mathbf{K}_{\bar{G}^{(t)}}^{1/2} |\right) \nonumber \\ &\qquad {+} \underbrace{h\!\!\left(\!\!\frac{1}{\sqrt{MB}}\!\sum_{i=1}^M\!\sum_{b \in \mathcal{B}_i^{(t)}} \!\!\!\widehat{g}_i(b, \bm {\theta}^{(t)})\middle|\! \left\{S_{{N}}^{(k)} \right\}_{\!k\in[t-1]}\!\right)}_{H_M}\!, \end{align} where: $(i)$ makes use of the fact that the covariance matrix is the same across clients, together with the whitening definition (Definition \ref{def-2}) applied to the vector $\bar{g}_i(b, \bm {\theta}^{(t)})$; $(ii)$ again uses the property of the entropy of a linear transformation of a random vector. Note that $h\left(F_{M}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}\right)$ depends on $M$ only through the second term $H_M$. As a result, by substituting~\eqref{eq:mutual_info_decomp_2} in~\eqref{eq:mutual_info_decomp_1}, we get that \begin{align}\label{eq:mutual_info_restated} & I\left(\mathbf{x}^{(t)}_N ; S_{{N}}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right)\nonumber \\ & = H_N - H_{N-1} + \frac{d^*}{2}\log\left(\frac{N}{N-1}\right). \end{align} Our final step is to find suitable upper and lower bounds for $H_M$ to use in~\eqref{eq:mutual_info_restated}. Recall for the following arguments that, due to whitening, the vector $\widehat{g}_b^{(t)} = \widehat{g}(b, \bm {\theta}^{(t)})$ has zero mean and identity covariance. \subsection{Upper bound on \texorpdfstring{$H_M$}{H\textunderscore M}} The upper bound is the simplest, following from basic entropy properties. In particular, the sum $\frac{1}{\sqrt{MB}} \sum_{i=1}^M \sum_{b \in \mathcal{B}_i^{(t)}} \widehat{g}_b^{(t)}$ has zero mean and $\mathrm{I}_{d^*}$ covariance.
Thus, \begin{align} H_M &= h \left(\frac{1}{\sqrt{MB}} \sum_{i=1}^M \sum_{b \in \mathcal{B}_i^{(t)}} \widehat{g}_b^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right) \nonumber \\ & \overset{(a)}\leq \frac{1}{2} d^*\log\left(2\pi e\right), \end{align} where $(a)$ follows from the fact that, for fixed first and second moments, the Gaussian distribution maximizes the entropy. The distinction between the proofs of the bounds in Case 1 and Case 2 of Theorem 1 lies in the lower bound on the term $H_M$. We start by providing the lower bound that is used for proving Case 1. \subsection{Lower bound on \texorpdfstring{$H_M$}{H\textunderscore M} for Case 1 in Theorem 1} For the lower bound, we rely heavily on the assumption that the elements of $\widehat{g}_b^{(t)}$ are independent, and on a result that gives Berry-Esseen-style bounds for the entropic central limit theorem \cite{bobkov2014berry}. In particular, in its simplest form, the result states that for IID zero-mean random variables $X_i$, the entropy of the normalized sum $T_M = \frac{1}{\sqrt{M}} \sum_{i=1}^M X_i$ approaches the entropy of a Gaussian random variable $\Phi_{\sigma^2}$ with the same variance $\sigma^2$ as $X_i$, such that the following is always satisfied \begin{equation}\label{eq:berry_esseen_entropy} h(\Phi_{\sigma^2}) - h(T_M) \leq \tilde{C} \frac{\mathbb{E}|X_i|^4}{M}. \end{equation} Using~\eqref{eq:berry_esseen_entropy}, we can find a lower bound for $H_M$ as follows: \begin{align} H_M &= h \left(\frac{1}{\sqrt{MB}} \sum_{i=1}^M \sum_{b \in \mathcal{B}_i^{(t)}} \widehat{g}_b^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right) \nonumber \\ &= \sum_{j=1}^{d^*} h \underbrace{\left( \frac{1}{\sqrt{MB}} \sum_{i=1}^M \sum_{b \in \mathcal{B}_i^{(t)}} \widehat{g}_b^{(t)}[j]\middle | \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}\right)}_{\text{variance = $1$}} \nonumber \\ &\stackrel{\eqref{eq:berry_esseen_entropy}}\geq\!\! \sum_{j=1}^{d^{*}}\! \left(\!
h(\Phi_{1}) - \frac{C_{0,\bar{g}}}{MB}\!\right) {=} \frac{d^*}{2} \log\left(2\pi e\right) {-} \frac{d^{*}C_{0,\bar{g}}}{MB}. \end{align} In other words, we have the following bound on $H_M$ \begin{equation}\label{eq:bound_H_M} \frac{d^*}{2} \log\left(2\pi e\right) - \frac{d^{*}C_{0,\bar{g}}}{MB} \leq H_M \leq \frac{d^*}{2} \log\left(2\pi e\right). \end{equation} By substituting~\eqref{eq:bound_H_M} in~\eqref{eq:mutual_info_restated} (the lower bound for $M=N-1$ and the upper bound for $M=N$), we get that \begin{align} I\left(\mathbf{x}^{(t)}_N ; S_{{N}}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right) &= H_N - H_{N-1} + \frac{d^*}{2}\log\left(\frac{N}{N-1}\right) \nonumber \\ &\leq \frac{d^*}{2}\log\left(\frac{N}{N-1}\right) + \frac{d^*C_{0,\bar{g}}}{(N-1)B}. \end{align} This concludes the proof of Case 1. \subsection{Lower bound on \texorpdfstring{$H_M$}{H\textunderscore M} for Case 2 in Theorem 1} The proof of this lower bound relies on the entropic central limit theorem for the vector case \cite{eldan2020clt} and on Lemma 1, which is stated later in this section. We start by giving the entropic central limit theorem for the case of IID random vectors~\cite{eldan2020clt}. \begin{thm}[Entropic central limit theorem \cite{eldan2020clt}] Let $\mathbf{q}$ be a $\sigma$-uniformly log-concave $d$-dimensional random vector with $\mathbb{E}[\mathbf{q}] = 0$ and non-singular covariance matrix $\Sigma$. Additionally, let $\mathbf{z} \sim \mathcal{N}(0, \Sigma)$ be a Gaussian vector with the same covariance as $\mathbf{q}$, and let $\gamma \sim \mathcal{N}(0, \mathrm{I}_{d}) $ be a standard Gaussian.
The entropy of the normalized sum $T_M = \frac{1}{\sqrt{M}} \sum_{i=1}^M \mathbf{q}_i$, where the $\mathbf{q}_i$ are IID copies of $\mathbf{q}$, approaches the entropy of the Gaussian random vector $\mathbf{z}$, such that the following is always satisfied \begin{equation}\label{eq:entropy_vector} \text{Ent}(T_M||\mathbf{z})\leq \frac{2\left(d+2\,\text{Ent}(\sqrt{\sigma} \mathbf{q}||\gamma)\right)}{M\sigma^4}, \end{equation} where $\text{Ent}(T_M||\mathbf{z})$ is the relative entropy. \end{thm} \begin{lemma}\label{lemma:entropy} Given a random vector $\mathbf{q} \in \mathbb{R}^d$ with a distribution $f_{\mathbf{q}} (y)$ and Cov$({\mathbf{q}}) = \Sigma$, and defining $\mathbf{z} \sim \mathcal{N}(0, \Sigma)$ to be a Gaussian vector with the same covariance as $\mathbf{q}$, for $\sigma >0$, we get \begin{align} \text{Ent}(\sqrt{\sigma }\mathbf{q}||\mathbf{z}) &= - h(\mathbf{q}) -\frac{d}{2} \log(\sigma)+ \frac{d}{2} \log(2 \pi)\nonumber \\& + \frac{1}{2} \log(| \Sigma|) + \sigma\frac{d}{2} \\ \text{Ent}(\mathbf{q}||\mathbf{z}) &= h(\mathbf{z}) - h(\mathbf{q}) \label{eq-entropy2} \end{align} \end{lemma} Given the assumption that $\widehat{g}_b^{(t)}$ has a $\sigma$-uniformly log-concave distribution, while both the term $\frac{1}{\sqrt{MB}} \sum_{i=1}^M \sum_{b \in \mathcal{B}_i^{(t)}} \widehat{g}_b^{(t)}$ and $\widehat{g}_b^{(t)}$ have an identity covariance matrix $ \Sigma = \mathrm{I}_{d^*}$ given $\left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]}$, we can use~\eqref{eq:entropy_vector} with $\mathbf{z} \sim \mathcal{N}(0, \mathrm{I}_{d^*}) $. Furthermore, by using Lemma \ref{lemma:entropy}, we get \begin{equation}\label{d} h(\mathbf{z})- H_M \leq \frac{d^* C_{1,\bar{g}} - C_{2,\bar{g}}}{MB}, \end{equation} where $C_{1,\bar{g}} = \frac{2\left( 1+\sigma +\log(2\pi) - \log(\sigma) \right)}{\sigma^4} $ and $C_{2,\bar{g}} = \frac{4h\left(\widehat{g}(b, \bm {\theta}^{(t)})\right)}{\sigma^4}$, and $h(\widehat{g}(b, \bm {\theta}^{(t)}))$ is the entropy of the random vector $\bar{g}_i(b, \bm {\theta}^{(t)})$ after whitening.
Finally, using the fact that the entropy of the Gaussian random vector $\mathbf{z}$ with covariance $\mathrm{I}_{d^*}$ is given by $h(\mathbf{z}) = \frac{d^*}{2}\log(2 \pi e)$, we get the following bound on $H_M$ \begin{equation}\label{eq:bound_H_M1} \frac{d^*}{2} \log\left(2\pi e\right) - \frac{d^* C_{1,\bar{g}} - C_{2,\bar{g}}}{MB} \leq H_M \leq \frac{d^*}{2} \log\left(2\pi e\right). \end{equation} By substituting~\eqref{eq:bound_H_M1} in~\eqref{eq:mutual_info_restated} (the lower bound for $M=N-1$ and the upper bound for $M=N$), we can now upper bound the mutual information term as follows \begin{align} &I\left(\mathbf{x}^{(t)}_N ; S_{{N}}^{(t)}\middle| \left\{S_{{N}}^{(k)} \right\}_{k\in[t-1]} \right)\nonumber \\ &= H_N - H_{N-1} + \frac{d^*}{2}\log\left(\frac{N}{N-1}\right) \nonumber \\ &\leq \frac{d^*}{2}\log\left(\frac{N}{N-1}\right) + \frac{d^* C_{1,\bar{g}} - C_{2,\bar{g}}}{(N-1)B}. \end{align} This concludes the proof of Theorem~1. \section{Proof of Corollary 1}\label{append:proof_Corollary} In the following, we define $S_{{N}}^{(t)} = \frac{1}{{N}}\sum_{i = 1}^N \mathbf{x}_i^{(t)}$. Using this notation, we can upper bound $I_{\rm priv/data}$ as follows \begin{align}\label{co1} &I_{\rm priv/data} = {I}\left(\mathcal{D}_i ; \left \{ S_{{N}}^{(k)} \right\}_{k \in [T]} \right) \nonumber \\ &\overset{(a)}= \sum_{t =1}^{T} {I}\left(\mathcal{D}_i ; S_{{N}}^{(t)}\middle| \left \{ S_{{N}}^{(k)} \right\}_{k \in [t-1]} \right) \nonumber \\ & \overset{(b)}\leq \sum_{t =1}^{T} {I}\left(\mathcal{B}_i^{(t)} ;S_{{N}}^{(t)} \middle| \left \{ S_{{N}}^{(k)} \right\}_{k \in [t-1]} \right) \nonumber \\ &\overset{(c)}\leq \sum_{t =1}^{T}\! \underbrace{ I\!\left( \mathbf{x}_i^{(t)} \!\left(\mathcal{B}_i^{(t)}, \left \{\! S_{{N}}^{(k)} \!\right\}_{\!\!k \in [t-1]}\!\right); S_{{N}}^{(t)} \middle| \left \{ \!S_{{N}}^{(k)} \!\right\}_{k \in [t-1]} \!\right) }_{\text{This term is bounded by the result in Theorem 1}}\!.
\end{align} where: (a) follows from the chain rule; (b) from the data processing inequality $\mathcal{D}_i \rightarrow \mathcal{B}_i^{(t)} \rightarrow \mathbf{x}_i^{(t)}$, where $\mathcal{B}_i^{(t)}$ is the mini-batch sampled from the dataset of node $i$; (c) from the data processing inequality $ \mathcal{B}_i^{(t)} \rightarrow \mathbf{x}_i^{(t)} \rightarrow \frac{1}{N} \sum_{i\in \mathcal{N}} \mathbf{x}^{(t)}_i $. Combining the results given in the two cases of Theorem 1 with \eqref{co1} concludes the proof of Corollary 1. \balance \section{Proof of Lemma 1} \begin{align}\label{eq: lemma proof} &\text{Ent}(\sqrt{\sigma }\mathbf{q}||\mathbf{z}) = \text{Ent}(\mathbf{q}'||\mathbf{z}) = \int f_{ \mathbf{q}'} (y) \log \frac{f_{\mathbf{q}'} (y)}{f_{\mathbf{z}} (y)} dy \nonumber \\ &= \int f_{\mathbf{q}'} (y) \log f_{\mathbf{q}'}(y) dy - \int f_{\mathbf{q}'}(y) \log f_{\mathbf{z}} (y) dy \nonumber \\ & \overset{(a)}= - h(\mathbf{q}') + \frac{d}{2} \log(2 \pi) \nonumber \\ &\qquad + \frac{1}{2} \log(| \Sigma|) + \frac{1}{2} \int f_{\mathbf{q}'}(y) y^T \Sigma^{-1} y \, dy \nonumber \\ & \overset{(b)}= - h(\mathbf{q}) -\frac{d}{2} \log(\sigma)+ \frac{d}{2} \log(2 \pi) \nonumber \\ &\quad + \frac{1}{2} \log(| \Sigma|) + \frac{1}{2} \int f_{\mathbf{q}'}(y) \text{ Tr}(\Sigma^{-1} y y^T ) \, dy \nonumber \\ & \overset{(c)}= - h(\mathbf{q}) -\frac{d}{2} \log(\sigma)+ \frac{d}{2} \log(2 \pi) \nonumber \\ &\qquad + \frac{1}{2} \log(| \Sigma|) + \frac{1}{2} \text{ Tr} \left( \Sigma^{-1} \int f_{\mathbf{q}'}(y) y y^T \, dy \right) \nonumber \\ & = - h(\mathbf{q}) -\frac{d}{2} \log(\sigma)+ \frac{d}{2} \log(2 \pi)\nonumber \\ &\qquad + \frac{1}{2} \log(| \Sigma|) + \frac{1}{2} \text{ Tr} \left( \Sigma^{-1} \mathbb{E}_{\mathbf{q}'} [ \mathbf{q}' \mathbf{q}'^{T}] \right) \nonumber\\ & \overset{(d)}= - h(\mathbf{q}) + \frac{d}{2} \log(\frac{2 \pi}{\sigma}) + \frac{1}{2} \log(| \Sigma|) + \frac{1}{2} \sigma \text{ Tr} \left( \Sigma^{-1} \Sigma \right) \nonumber \\ & = - h(\mathbf{q}) + \frac{d}{2} \log(\frac{2 \pi}{\sigma}) + \frac{1}{2} \log(|
\Sigma|) + \sigma\frac{d}{2}, \end{align} where: $\rm Tr$ denotes the trace; $(a)$ follows from using the multivariate distribution of the Gaussian vector $\mathbf{z}$; $(b)$ follows from the scaling property of the entropy with $\mathbf{q}' = \sqrt{\sigma} \mathbf{q}$; $(c)$ follows from the linearity of the trace; finally, $(d)$ follows from the linear transformation of the random vector $\mathbf{q}' = \sqrt{\sigma} \mathbf{q}$ and the fact that $\mathbf{q}$ has the same covariance matrix $\Sigma$ as $\mathbf{z}$. The proof of \eqref{eq-entropy2} follows directly by substituting $\sigma = 1$ in \eqref{eq: lemma proof} and using the entropy of a Gaussian vector with covariance $\Sigma$. \section{Overview of MINE} \label{subsec:mine} In our empirical evaluation in Section~\ref{sec:eval}, we use the Mutual Information Neural Estimator (MINE)~\cite{belghazi2018mine}, the state-of-the-art method for mutual information estimation, to estimate the mutual information. Specifically, given random vectors $X$ and $Z$, and a function family parameterized by a neural network $\mathcal{F}=\{T_{\theta}:X\times Z\rightarrow\mathbb{R}\}_{\theta\in\Theta}$, the following bound holds: \begin{equation} I(X;Z)\geq I_{\Theta}(X;Z), \end{equation} where $I_{\Theta}(X;Z)$ is the neural mutual information measure defined as: \begin{equation} I_{\Theta}(X;Z)=\sup_{\theta\in\Theta}\mathbb{E}_{\mathbb{P}_{XZ}}[T_{\theta}]-\log(\mathbb{E}_{\mathbb{P}_{X}\otimes\mathbb{P}_{Z}}[e^{T_{\theta}}]), \end{equation} where $\mathbb{P}_{X}$ and $\mathbb{P}_{Z}$ are the marginal distributions of $X$ and $Z$, respectively, $\mathbb{P}_{XZ}$ is the joint distribution of $X$ and $Z$, and $\mathbb{P}_{X}\otimes\mathbb{P}_{Z}$ is the product of the marginals $\mathbb{P}_{X}$ and $\mathbb{P}_{Z}$.
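For intuition, the Donsker-Varadhan objective above can be evaluated directly for a single fixed critic; any fixed $T$ yields a lower bound on $I(X;Z)$, which MINE tightens by training $T_\theta$. A numpy sketch on synthetic correlated Gaussians, where the quadratic critic $T(x,z) = xz/2$ is an arbitrary illustrative choice (not a trained network):

```python
import numpy as np

rng = np.random.default_rng(5)
K, rho = 5000, 0.9

# Correlated Gaussian pair (X, Z) with known ground-truth MI.
x = rng.normal(size=K)
z = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=K)
z_bar = rng.permutation(z)        # shuffled z ~ product of marginals

def critic(x, z):
    # Arbitrary fixed quadratic critic; MINE would train T_theta instead.
    return 0.5 * x * z

# Donsker-Varadhan lower bound: E_P[T] - log E_{P x P}[exp(T)].
dv_bound = critic(x, z).mean() - np.log(np.exp(critic(x, z_bar)).mean())

mi_true = -0.5 * np.log(1 - rho**2)   # ground-truth Gaussian MI (in nats)
```

Here `dv_bound` is positive but below `mi_true`; maximizing over a neural critic family, as in the equations above, closes this gap.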
As an empirical estimate of $I_{\Theta}(X;Z)$, MINE is implemented as \begin{equation} \label{eq:mine} \widehat{I(X;Z)}_K=\sup_{\theta\in\Theta}\mathbb{E}_{\mathbb{P}_{XZ}^{(K)}}[T_{\theta}]-\log(\mathbb{E}_{\mathbb{P}_{X}^{(K)}\otimes\mathbb{P}_{Z}^{(K)}}[e^{T_{\theta}}]), \end{equation} where $\mathbb{P}_{(\cdot)}^{(K)}$ is the empirical distribution of $\mathbb{P}_{(\cdot)}$ with $K$ IID samples. Finally, solving Eq. \ref{eq:mine} (i.e., obtaining the MI estimate) can be achieved by solving the following optimization problem via gradient ascent: \begin{align*} &\widehat{I(X;Z)}_K \nonumber \\ &= \max_{\theta\in\Theta}\left\{\!\frac{1}{K}\!\sum_{k=1}^{K}T_\theta(x_k,z_k) -\log\left(\frac{1}{K}\sum_{k=1}^{K}e^{T_\theta(x_k,\bar{z}_k)}\right)\right\}, \end{align*} where $(x_k,z_k)$ is the $k$-th sample from $\mathbb{P}_{XZ}$ and $\bar{z}_k$ is the $k$-th sample from $\mathbb{P}_{Z}$. \section{Further Discussion and Conclusions} \label{sec:discussion} In this paper, we derived the first formal privacy guarantees for FL with SA, using MI as a metric to measure how much information the aggregated model update can leak about the local dataset of each user. We proved theoretical bounds on the MI privacy leakage and showed through an empirical study that these bounds also hold in practical FL settings. Our concluding observations are that, by using FL with SA: 1) the MI privacy leakage decreases at a rate of $\mathcal{O}(\frac{1}{N})$, where $N$ is the number of users participating in FL with SA; 2) increasing the model size does not have a linear impact on the MI privacy leakage; rather, the MI privacy leakage increases only linearly with the rank of the covariance matrix of the individual model update; 3) a larger batch size during local training can help to reduce the MI privacy leakage.
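Observation 1) can be illustrated in a simple closed-form Gaussian toy model (our illustration, not the bound of Theorem 1): if the $N$ user updates are IID standard Gaussians per coordinate, then $I(\mathbf{x}_1; \sum_{i=1}^N \mathbf{x}_i) = \frac{1}{2}\log\frac{N}{N-1}$ nats per dimension, which decays as $\mathcal{O}(\frac{1}{N})$:

```python
import numpy as np

def gaussian_leakage(n_users):
    # I(x_1; sum of n_users IID N(0,1) updates) = 0.5 * log(N / (N - 1)) nats,
    # since the squared correlation between x_1 and the aggregate is 1/N
    return 0.5 * np.log(n_users / (n_users - 1))

for n in (2, 5, 10, 20, 50):
    print(n, gaussian_leakage(n))

# the leakage decays as O(1/N): N * leakage approaches the constant 1/2
print(50 * gaussian_leakage(50))  # close to 0.5
```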
We hope that our findings can shed light on how to select FL system parameters with SA in practice to reduce privacy leakage, and provide an understanding of the baseline protection provided by SA in settings where it is combined with other privacy-preserving approaches such as differential privacy. \begin{figure} \centering \includegraphics[width=0.482\textwidth]{figs/MI_vs_DP/counter_example.pdf} \vspace{-1em} \caption{Heatmap of the absolute values of sampled updates from clients $1,2$ and $3$ in the counter example. $\mathbf{x}_4$ and $\mathbf{x}_4'$ can be distinguished even after adding the aggregated noise from $\sum_{i=1}^3 \mathbf{x}_i$. } \label{fig:counter_example} \end{figure} \noindent{\bf Can we provide differential privacy guarantees using SA?} Note that when using FL with SA, from the point of view of an adversary that is interested in the data of the $i$-th user, the aggregated model update of the users in $i^- = [N]\backslash\{i\}$ can be viewed as noise that is independent of the gradient $\mathbf{x}_i$ given the last global model. This is very similar to an LDP mechanism that adds noise to the update $\mathbf{x}_i^{(t)}$ of user $i$, which leads to an intriguing question: \textit{Can we get LDP-like guarantees from the securely aggregated updates?} Since DP is interested in a worst-case guarantee, it turns out that there exist model update distributions for which it is impossible to achieve an $\epsilon < \infty$ DP guarantee by using other model updates as noise, as illustrated in Fig.~\ref{fig:counter_example}.
In this case, the alignment of the sparsity pattern in $\mathbf{x}_1, \mathbf{x}_2$ and $\mathbf{x}_3$ allows an adversary to design a perfect detector to distinguish between $\mathbf{x}_4$ and $\mathbf{x}_4'$.\\ \noindent{\bf Why can our MI privacy guarantee avoid this?} Although the previous example illustrates that DP-flavored guarantees are not always possible, in practical scenarios the worst-case distributions for $\mathbf{x}_1, \mathbf{x}_2$ and $\mathbf{x}_3$ that enable distinguishing between $\mathbf{x}_4$ and $\mathbf{x}_4'$ in Fig.~\ref{fig:counter_example} are an unlikely occurrence during training. For instance, in our theoretical analysis, since users have IID datasets, having the distribution of $\mathbf{x}_1$, $\mathbf{x}_2$ and $\mathbf{x}_3$ be restricted to a subspace $\mathcal{S}_{x_{i^-}}$ implies that points generated from $\mathbf{x}_4$ would also belong to $\mathcal{S}_{x_{i^-}}$ almost surely. This is a key reason why we can get the mutual information guarantee in Theorem~\ref{Thm:main_Thm}: an aggregated gradient $\sum_{i=1}^N\mathbf{x}_i$, in which each component is restricted to a common subspace $\mathcal{S}_x$, protects the contribution of each individual component $\mathbf{x}_i$ as $N$ increases. In the worst case, where one component is not restricted to the subspace $\mathcal{S}_x$ spanned by the remaining components, we get the privacy leakage discussed in the example above. We highlight that through our experiments and other studies in the literature~\cite{basak_sparse}, we observe that such sparsity alignment happens with very low probability. This motivates studying a probabilistic notion of DP that satisfies $(\epsilon,\delta)$-DP with probability at least $\gamma$, instead of the worst-case treatment in current DP notions, but this is beyond the scope of this work.
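The sparsity-alignment failure mode above can be made concrete with a toy numerical sketch (dimensions and values are illustrative, not taken from our experiments): when the other users' updates share a support that excludes a coordinate on which the two candidate updates differ, the aggregate reveals that coordinate exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
support = np.arange(5)  # x1, x2, x3 share the same sparsity pattern (coords 0-4)
noise = np.zeros(d)
noise[support] = rng.standard_normal((3, 5)).sum(axis=0)  # x1 + x2 + x3

x4 = np.zeros(d);  x4[9] = 1.0    # candidate update of the target user
x4p = np.zeros(d); x4p[9] = -1.0  # alternative candidate update

agg_a = noise + x4   # securely aggregated update if the user held x4
agg_b = noise + x4p  # securely aggregated update if the user held x4'
# coordinate 9 lies outside the span of the other updates, so the
# adversary reads the target's value off the aggregate exactly:
print(agg_a[9], agg_b[9])  # 1.0 -1.0
```

Because no finite amount of such "noise" masks coordinate 9, no finite $\epsilon$ DP guarantee is achievable in this worst case.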
Another interesting future direction is to use the results from this work for providing ``privacy metrics'' to users to estimate/quantify their potential leakage from participating in a federated learning cohort. Such metrics can be embedded in platforms, such as FedML~\cite{he2020fedml}, to guide users to make informed decisions about their participation in federated learning. Finally, it would also be important to extend the results to model aggregation protocols that go beyond weighted averaging (e.g., in federated knowledge transfer~\cite{FedGKT}). \section{Empirical Evaluation} \label{sec:eval} In this section, we empirically evaluate how different FL system parameters affect the MI privacy leakage in SA. Our experiments explore the effect of the system parameters on \texttt{FedSGD}, \texttt{FedAvg} and \texttt{FedProx}~\cite{fedprox_paper}. Note that our evaluation results on \texttt{FedSGD} are backed by our theoretical results in Section~\ref{sec:privacy_guarantee}, while our evaluation results on \texttt{FedAvg} and \texttt{FedProx} are purely empirical. We start by evaluating the impact of the number of users $N$ on the MI privacy leakage for \texttt{FedSGD}, \texttt{FedAvg} and \texttt{FedProx} (Section \ref{subsec:num_user}). Then, we evaluate the impact of the batch size $B$ on the MI privacy leakage for \texttt{FedSGD}, \texttt{FedAvg} and \texttt{FedProx} (Section \ref{subsec:batch_size}). Next, in Section \ref{subsec:accum}, we measure the accumulative MI privacy leakage across all global training rounds. \yahya{We evaluate how the number of local training epochs $E$ for each user affects the MI privacy leakage for \texttt{FedAvg} and \texttt{FedProx} in Section \ref{subsec:local_epoch}.
Finally, the impact of user heterogeneity on the MI privacy leakage for \texttt{FedAvg} is evaluated in Section \ref{subsec:hetero}.} \yahya{We would like to preface by noting that \texttt{FedProx} differs from \texttt{FedAvg} by adding a strongly-convex proximal term to the loss used in \texttt{FedAvg}. Thus, we expect similar dependencies on the number of users $N$, batch size $B$, and local epochs $E$ when using \texttt{FedAvg} and \texttt{FedProx}.} \subsection{Impact of Number of Users (N)} \label{subsec:num_user} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_sgd_high_avg_small.jpg} \caption{Unnormalized MI, MNIST.} \label{fig:fedsgd_n1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_sgd_cifar10_high_avg_small.jpg} \caption{Unnormalized MI, CIFAR10.} \label{fig:fedsgd_n2} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_sgd_low_avg_small.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedsgd_n3} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_sgd_cifar10_low_avg_small.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedsgd_n4} \end{subfigure} \caption{Impact of the number of users ($N$) when using FedSGD. Note that we set $B=32$ for all users on both MNIST and CIFAR10 datasets. We normalize the MI by the entropy of a single data batch (i.e.
$32*567$ for MNIST and $32*1403$ for CIFAR10).} \label{fig:num_user_fedsgd} \vspace{-.1in} \end{figure} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_high_avg_small.jpg} \caption{Unnormalized MI, MNIST.} \label{fig:fedavg_n1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_cifar10_high_avg_small.jpg} \caption{Unnormalized MI, CIFAR10.} \label{fig:fedavg_n2} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_low_avg_small.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedavg_n3} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_avg_cifar10_low_avg_small.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedavg_n4} \end{subfigure} \caption{Impact of the number of users ($N$) when using \texttt{FedAvg}. Note that we set $E=1$ and $B=32$ for all users on both MNIST and CIFAR10 datasets. We normalize the MI by the entropy of each user's local dataset (i.e.
$1200*567$ for MNIST and $1000*1403$ for CIFAR10).} \label{fig:num_user_fedavg} \vspace{-.1in} \end{figure} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_may_mnist_fedprox_high_avg_small.jpg} \caption{Unnormalized MI, MNIST.} \label{fig:fedprox_n1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_may_cifar10_fedprox_high_avg_small.jpg} \caption{Unnormalized MI, CIFAR10.} \label{fig:fedprox_n2} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_may_mnist_fedprox_low_avg_small.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedprox_n3} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_N/results_cmp_may_cifar10_fedprox_low_avg_small.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedprox_n4} \end{subfigure} \caption{Impact of the number of users ($N$) when using \texttt{FedProx}. Note that we set $E=1$ and $B=32$ for all users on both MNIST and CIFAR10 datasets. We normalize the MI by the entropy of each user's local dataset (i.e. $1200*567$ for MNIST and $1000*1403$ for CIFAR10).} \label{fig:num_user_fedprox} \vspace{-.1in} \end{figure} \noindent\textbf{FedSGD.} Fig. \ref{fig:num_user_fedsgd} shows the impact of varying $N$ on the MI privacy leakage in \texttt{FedSGD}, where the number of users is chosen from $\{2,5,10,20,50\}$ and we measure the MI privacy leakage of different models on both the MNIST and CIFAR10 datasets. We observe that increasing the number of users participating in FL using \texttt{FedSGD} decreases the MI privacy leakage in each global training round (see Fig. \ref{fig:fedsgd_n1} and \ref{fig:fedsgd_n2}), which is consistent with our theoretical analysis in Section \ref{subsubsec:num_user}. Notably, as demonstrated in Fig. \ref{fig:fedsgd_n3} and \ref{fig:fedsgd_n4}, the percentage of MI privacy leakage (i.e.
normalized by the entropy of a data batch) can drop below 2\% for MNIST and 5\% for CIFAR10 when there are more than 20 users. \noindent\textbf{FedAvg.} Fig. \ref{fig:num_user_fedavg} shows the impact of varying $N$ on the MI privacy leakage in \texttt{FedAvg}. Similar to the results for \texttt{FedSGD}, as the number of users participating in \texttt{FedAvg} increases, the MI privacy leakage in each global training round decreases (see Fig. \ref{fig:fedavg_n1} and \ref{fig:fedavg_n2}), approximately at a rate of $\mathcal{O}(\frac{1}{N})$. Moreover, as shown in Fig. \ref{fig:fedavg_n3} and \ref{fig:fedavg_n4}, the percentage of MI privacy leakage drops below 0.1\% on both MNIST and CIFAR10 when there are more than 20 users participating in FL. It is worth noting that we normalize the MI by the entropy of the whole training dataset in \texttt{FedAvg} instead of the entropy of a single batch, since users iterate over all their data batches to calculate their local model updates in \texttt{FedAvg}. Therefore, although we observe that the unnormalized MI is comparable for \texttt{FedSGD} and \texttt{FedAvg}, the percentage of MI privacy leakage in \texttt{FedAvg} is significantly smaller than that in \texttt{FedSGD}. \noindent\textbf{FedProx.} \yahya{Similar to \texttt{FedAvg}, Fig.~\ref{fig:num_user_fedprox} shows how the MI privacy leakage with \texttt{FedProx} varies with the number of users $N$. As the number of users increases, the MI privacy leakage in each training round decreases at an approximate rate of $O(\frac{1}{N})$. With more than 20 participating users, the percentage of MI leakage drops below 0.12\% on both MNIST and CIFAR10. As with \texttt{FedAvg}, we normalize the MI privacy leakage by the entropy of the whole training dataset of a single user.
} In conclusion, while our theoretical analysis of the impact of $N$ in Section \ref{subsubsec:num_user} is based on the assumption that the \texttt{FedSGD} protocol is used, our empirical study shows that it \yahya{holds not only for \texttt{FedSGD} but also for \texttt{FedAvg} and \texttt{FedProx}}. \subsection{Impact of Model Size (d)} \label{subsec:model_size} \noindent\textbf{FedSGD.} From Fig. \ref{fig:num_user_fedsgd}, we observe that increasing the model size $d$ increases the MI leakage during each global training round. However, the MI leakage grows at a smaller rate than $d$. This is expected, since the upper bound on the MI privacy leakage is proportional to $d^*$ (i.e., the rank of the covariance matrix, as proved in Theorem \ref{Thm:main_Thm}), which does not increase linearly with $d$, especially for overparameterized neural networks (see Section \ref{subsubsec:model_size}). Finally, we observe that the MI privacy leakage on CIFAR10 is generally higher than that on MNIST. Since the input images in CIFAR10 have a higher dimension than the images in MNIST, larger model sizes are required during training. Therefore, we expect the MI privacy leakage on CIFAR10 to be higher than that on MNIST. \noindent\textbf{FedAvg and FedProx.} \yahya{As shown in Fig. \ref{fig:num_user_fedavg} and Fig.
\ref{fig:num_user_fedprox}, increasing the model size will also have a sub-linear impact on the increase of the MI privacy leakage in \texttt{FedAvg} and \texttt{FedProx}, which is consistent with our results in \texttt{FedSGD}.} \subsection{Impact of Batch Size (B)} \label{subsec:batch_size} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_B/results_cmp_bs_bs_low_avg_sgd.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedsgd_b3} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_B/results_cmp_bs_avg_bs_low_avg_sgd_cifar10.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedsgd_b4} \end{subfigure} \caption{Impact of batch size ($B$) when using FedSGD. The MI is normalized by the entropy of a data batch, which is proportional to the batch size $B$ (i.e. $B*567$ for MNIST and $B*1403$ for CIFAR10).} \label{fig:bs_fedsgd} \vspace{-.1in} \end{figure} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_B/results_cmp_bs_bs_low_avg.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedavg_b1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_B/results_cmp_bs_bs_low_avg_new_cifar10.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedavg_b2} \end{subfigure} \caption{Impact of batch size ($B$) when using FedAvg. The MI is normalized by the entropy of a user's local dataset, which is a constant (i.e. 
$1200*567$ for MNIST and $1000*1403$ for CIFAR10).} \label{fig:bs_fedavg} \vspace{-.1in} \end{figure} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_B/results_cmp_bs_bs_low_avg_may_mnist_fedprox.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedprox_b1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_B/results_cmp_bs_bs_low_avg_may_cifar10_fedprox.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedprox_b2} \end{subfigure} \caption{Impact of batch size ($B$) when using FedProx. The MI is normalized by the entropy of a user's local dataset, which is a constant (i.e. $1200*567$ for MNIST and $1000*1403$ for CIFAR10).} \label{fig:bs_fedprox} \vspace{-.1in} \end{figure} \noindent\textbf{FedSGD.} Fig. \ref{fig:bs_fedsgd} shows the impact of varying $B$ on the normalized MI privacy leakage in \texttt{FedSGD}, where the batch size is chosen from $\{16,32,64,128,256\}$ and we use the MLP model on MNIST and the CNN model on CIFAR10. Note that we normalize the MI by the entropy of a single data batch used in each training round, which is proportional to the batch size $B$. On both the MNIST and CIFAR10 datasets, we consistently observe that increasing $B$ decreases the MI privacy leakage in \texttt{FedSGD}, with the MI decaying inversely proportionally to the batch size $B$. As demonstrated in Fig. \ref{fig:bs_fedsgd}, when there are more than 20 users, the percentage of MI privacy leakage for a single training round can be around 4\% on MNIST and 12\% on CIFAR10 with batch size 16. However, such leakage can drop to less than 1\% on both MNIST and CIFAR10 with batch size 256, which is a significant reduction. \noindent\textbf{FedAvg and FedProx.} \yahya{Fig.
\ref{fig:bs_fedavg} and Fig.~\ref{fig:bs_fedprox} show the impact of varying the batch size $B$ on the MI privacy leakage in \texttt{FedAvg} and \texttt{FedProx}, respectively, following the same experimental setup as in Fig. \ref{fig:bs_fedsgd}. Since in both \texttt{FedAvg} and \texttt{FedProx} each user traverses their whole local dataset in each local training round, we normalize the MI by the entropy of the target user's local training dataset. As shown in Fig. \ref{fig:bs_fedavg} and Fig. \ref{fig:bs_fedprox}, the impact of $B$ in \texttt{FedAvg} and \texttt{FedProx} is relatively smaller than that in \texttt{FedSGD}. However, we can still observe that increasing $B$ decreases the MI privacy leakage in both \texttt{FedAvg} and \texttt{FedProx}. For example, with 20 users participating in \texttt{FedAvg}, the percentage of MI privacy leakage at each training round can drop from 0.8\% to 0.3\% when the batch size increases from 16 to 256, a reduction in privacy leakage by a factor of more than 2$\times$. Similarly, in \texttt{FedProx}, the MI privacy leakage decreases from 0.09\% to 0.04\% when the batch size increases from 16 to 256.} In conclusion, we observe that increasing the batch size $B$ can decrease the MI privacy leakage from the aggregated model update in \yahya{\texttt{FedSGD}, \texttt{FedAvg} and \texttt{FedProx}}, which verifies our theoretical analysis in Section \ref{subsubsec:model_size}. \subsection{Accumulative MI leakage} \label{subsec:accum} To evaluate how the MI privacy leakage accumulates with the number of training rounds $T$, we measure the MI between the training data and the aggregated model updates across training rounds. Specifically, given a local training dataset sample $\mathcal{D}_i$, we concatenate the aggregated model updates $\{\frac{1}{N}\sum_{i\in\mathcal{N}}\mathbf{x}_i^{(t)}\}_{t\in[T]}$ across $T$ training rounds into a single vector with dimension $d*T$.
By randomly generating $\mathcal{D}_i$ for the target user $K$ times, we get $K$ concatenated aggregated model update vectors. Then, we use MINE to estimate $I(\mathcal{D}_i;\{\frac{1}{N}\sum_{i\in\mathcal{N}}\mathbf{x}_i^{(t)}\}_{t\in[T]})$ from these $K$ dataset and concatenated model update samples. As illustrated in Fig. \ref{fig:accum}, the MI privacy leakage accumulates linearly as we increase the number of global training rounds $T$ on both the MNIST and CIFAR10 datasets, which is consistent with our theoretical results in Section \ref{subsubsec:accum}. This also implies that reducing the number of times local models are aggregated reduces the MI privacy leakage of secure aggregation. In practice, we can consider using client sampling to reduce the number of times each client participates in FL, such that the accumulative MI leakage of individual users is reduced. Moreover, we can also consider increasing the amount of local training between aggregations to reduce the number of aggregations of local model updates. \yahya{Although the three aggregation algorithms exhibit a similar trend with $T$, they can result in different convergence speeds to a target accuracy. To highlight the effect of the convergence rate on the accumulative MI privacy leakage, we show, in Fig.~\ref{fig:accum_acc}, how the accuracy changes with the amount of MI leakage incurred for the three algorithms during the training process, up to a maximum of 30 training rounds for FedSGD. We observe that although FedSGD achieves lower MI leakage for a fixed number of rounds (see Fig.~\ref{fig:accum}), its slow convergence rate makes it suffer from more leakage before reaching a target accuracy.
For example, given a target accuracy of 85\% on the MNIST dataset, FedAvg and FedProx achieve the target accuracy with 0.058\% and 0.057\% leakage, respectively, while FedSGD reaches 85\% accuracy only in later rounds, resulting in an accumulative MI leakage of 0.11\% (even with smaller leakage per round).} \begin{figure}[!t] \centering \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_accum/results_accum_all_mnist.jpg} \caption{Normalized accumulative MI, MNIST.} \label{fig:fedsgd_accum} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_accum/results_accum_all_cifar10.jpg} \caption{Normalized accumulative MI, CIFAR10.} \label{fig:fedavg_accum} \end{subfigure} \caption{Accumulative MI privacy leakage on the MNIST and CIFAR10 datasets. Note that we normalize the MI by the entropy of each user's local dataset, which does not change with $T$. We use the linear model for both MNIST and CIFAR10 datasets.} \label{fig:accum} \vspace{-.1in} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_accum/results_accum_all2_mnist.jpg} \caption{MNIST} \label{fig:fedsgd_accum_acc} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_accum/results_accum_all2_cifar10.jpg} \caption{CIFAR} \label{fig:fedavg_accum_acc} \end{subfigure} \caption{Accumulative MI privacy leakage vs model accuracy of different FL algorithms.
Note that we use a linear model for a case study and normalize the MI by the entropy of each user's local dataset.} \label{fig:accum_acc} \vspace{-.1in} \end{figure} \subsection{Impact of Local Training Epochs (E)} \label{subsec:local_epoch} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_E/results_cmp_ep_ep_low_avg_new32_mnist_small.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedavg_e1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_E/results_cmp_ep_ep_low_avg_new32_cifar10_small.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedavg_e2} \end{subfigure} \caption{Impact of the number of local training epochs ($E$) when using FedAvg. We normalize the MI by the entropy of each user's local dataset, and we consider $N\in\{10,20\}$.} \label{fig:ep_fedavg} \vspace{-.1in} \end{figure} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_E/results_cmp_ep_ep_low_avg_may_mnist_fedprox_small.jpg} \caption{Normalized MI, MNIST.} \label{fig:fedprox_e1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_E/results_cmp_ep_ep_low_avg_may_cifar10_fedprox_small.jpg} \caption{Normalized MI, CIFAR10.} \label{fig:fedprox_e2} \end{subfigure} \caption{Impact of the number of local training epochs ($E$) when using FedProx. We normalize the MI by the entropy of each user's local dataset, and we consider $N\in\{10,20\}$.} \label{fig:ep_fedprox} \vspace{-.1in} \end{figure} Fig. \ref{fig:ep_fedavg} shows the impact of varying the number of local training epochs $E$ on the MI privacy leakage in \texttt{FedAvg} on both the MNIST and CIFAR10 datasets. We select $E$ from $\{1,2,5,10\}$ and $N$ from $\{10,20\}$, and we consider the MLP model for MNIST and the CNN model for CIFAR10. We observe that increasing the number of local training epochs $E$ increases the MI privacy leakage in \texttt{FedAvg}.
An intuitive explanation is that with more local epochs, the local model updates become more biased towards the user's local dataset; hence they potentially leak more private information about users and make it easier for the server to infer the individual model update from the aggregated update. However, as shown in Fig. \ref{fig:ep_fedavg}, increasing the number of local epochs $E$ does not have a linear impact on the MI privacy leakage: as $E$ increases, the growth rate of the MI privacy leakage becomes smaller. \yahya{Similar to \texttt{FedAvg}, we observe from Fig.~\ref{fig:ep_fedprox} that the number of local training epochs $E$ has a sub-linear impact on the MI privacy leakage when using \texttt{FedProx}. As mentioned earlier, this can be attributed to the fact that \texttt{FedProx} is essentially \texttt{FedAvg} with an additional convex regularization term in the loss function.} \subsection{Impact of Data Heterogeneity} \label{subsec:hetero} \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_alpha/results_cnn__new32_cifar10_low_avg_small_alpha.jpg} \caption{Normalized MI when $E=1$.} \label{fig:fedavg_heter1} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/impact_alpha/results_cnn__ep5new32_cifar10_low_avg_small_alpha.jpg} \caption{Normalized MI when $E=5$.} \label{fig:fedavg_heter2} \end{subfigure} \caption{Impact of user heterogeneity when using FedAvg on non-IID CIFAR10. Note that $\alpha=\infty$ means that the user data distributions are identical (IID users), and the MI is normalized by the entropy of a user's local dataset.} \label{fig:heter_fedavg} \vspace{-.1in} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=.75\linewidth]{figs/impact_alpha/results_cmp_femnist_fedavg.jpg} \caption{Impact of user heterogeneity when using FedAvg on FEMNIST.
Note that the MI is normalized by the entropy of the target user's local dataset, which is $678 * 176$.} \label{fig:heter_FEMNIST} \vspace{-.1in} \end{figure} As discussed in Remark 3 of Section \ref{sec:privacy_guarantee}, our theoretical analysis considered an IID data distribution across users in Theorem \ref{Thm:main_Thm} in order to make use of entropic central limit theorem results in developing our upper bounds on the privacy leakage. However, in practice, the data distribution across users can be heterogeneous. Hence, in this subsection, we analyze the impact of a non-IID (heterogeneous) data distribution across users on the privacy leakage. {\color{black} To measure how user heterogeneity can potentially impact the MI privacy leakage in \texttt{FedAvg}, we consider two different data settings. In the first setting, we create synthetic users with non-IID data distributions following the methodology in \cite{hsu2019measuring}. In the second setting, we consider FEMNIST \cite{caldas2018leaf}, a benchmark non-IID FL dataset extended from MNIST, which consists of $62$ different classes of 28$\times$28 images ($10$ digits, $26$ lowercase letters, $26$ uppercase letters) written by $3500$ users. In the first, synthetic non-IID data setting, we use a Dirichlet distribution parameterized by $\alpha$ to split the dataset into multiple non-IID distributed local datasets. A smaller $\alpha$ (i.e., $\alpha\rightarrow 0$) means that the users' datasets are more dissimilar, while a larger $\alpha$ (i.e., $\alpha\rightarrow\infty$) means that the user datasets are more similar to each other. We choose CIFAR10 as the dataset, the CNN as the model, and use \texttt{FedAvg} with a batch size of $B=32$ for a case study. Note that we do not consider \texttt{FedSGD} since it is not affected by user heterogeneity.
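The Dirichlet-based non-IID split described above can be sketched as follows (an illustrative implementation of the $\alpha$-parameterized partition; function and variable names are ours, not from \cite{hsu2019measuring}):

```python
import numpy as np

def dirichlet_split(labels, n_users, alpha, rng):
    # non-IID split: for each class, draw per-user proportions from
    # Dirichlet(alpha) and partition that class's samples accordingly;
    # small alpha -> skewed class distributions across users
    shards = [[] for _ in range(n_users)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_users))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for shard, part in zip(shards, np.split(idx, cuts)):
            shard.extend(part.tolist())
    return shards

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)   # 10 classes, 100 samples per class
shards = dirichlet_split(labels, n_users=5, alpha=0.1, rng=rng)
print(sum(len(s) for s in shards))       # 1000: every sample assigned once
```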
During the experiments, we choose the $\alpha$ value from $\{1,10,100,\infty\}$ to create different levels of non-IID user datasets, and we consider $N\in\{2,5,10,20\}$ and $E\in\{1,5\}$. Fig. \ref{fig:heter_fedavg} shows how the MI privacy leakage varies with the number of users under different $\alpha$, where the MI privacy leakage is normalized by the entropy of each user's local dataset. We notice that the MI privacy leakage decreases consistently with the number of users under different $\alpha$, which empirically shows that our theoretical results in Section \ref{sec:privacy_guarantee} also hold in the case where users are heterogeneous. In the second, FEMNIST data setting, we split the dataset by user into 3500 non-overlapping subsets, each of which contains character images written by a specific user. Considering that the size of each subset is small, in order to have enough training data, we sample $N$ users at each training round instead of using a fixed set of $N$ users, which simulates the user sampling scenario in FL. Specifically, at the beginning of each FL training round with $N$ participating users, we use the same target user and randomly pick the other $N-1$ out of the 3500 users. Note that we consider $N\in\{2,5,10,20,50\}$ and $E\in\{1,5\}$, and use the same model (CNN), batch size ($B=32$), and FedAvg algorithm in our evaluation. Fig. \ref{fig:heter_FEMNIST} shows how the MI privacy leakage varies with the number of users. Similar to the synthetic non-IID data setting in Fig. \ref{fig:heter_fedavg}, the privacy leakage decreases as the number of users $N$ increases.
} \subsection{Practical Privacy Implications} \subsubsection*{\bf Success of Privacy attacks} {\color{black}To provide insights on how MI translates to practical privacy implications, we conduct experiments using one of the state-of-the-art data reconstruction attacks, i.e., the Deep Leakage from Gradients (DLG) attack from \cite{NEURIPS2019_60a6c400}, to show how the MI metric reflects the reconstructed image quality of the attack as we vary system parameters. Specifically, we choose MNIST as the dataset, the same SLP used in Section \ref{Models} as the model, and FedSGD with a batch size of $32$ as the training algorithm. For the data distribution across the users, we consider the IID setting. At the end of each training round, each user uses a batch of 32 images to calculate their local gradients, which are securely aggregated by the server. The DLG attack reconstructs a batch of 32 images from the aggregated gradient, making them as similar as possible to the batch of images used by the target user. After that, we apply the same PSNR (Peak Signal-to-Noise Ratio) metric used in \cite{NEURIPS2019_60a6c400} to measure the quality of the reconstructed images compared with the images used by the target user during training. Note that, without loss of generality, we report the PSNR value of the images reconstructed by the DLG attack for the first training round. Fig. \ref{practical_Implications} shows the impact of the number of users $N$ on the privacy leakage metric (MI) and on the reconstructed image quality of the DLG attack (PSNR). We pick the image of digit 3 out of the target $32$ images as an example of the reconstructed images. We observe that increasing the number of users $N$ decreases the MI metric as well as the PSNR at almost the same rate. This demonstrates that the MI metric used in this paper translates well to practical privacy implications.
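The PSNR metric used above to score reconstructions can be sketched as follows (a standard definition; the batch shape and pixel range are illustrative, with pixel values assumed in $[0,1]$):

```python
import numpy as np

def psnr(original, reconstructed, max_val=1.0):
    # Peak Signal-to-Noise Ratio in dB; higher means a closer reconstruction
    mse = np.mean((original - reconstructed) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
target = rng.random((32, 28, 28))     # a batch of 32 "images"
reconstruction = target + 0.1         # uniform per-pixel error of 0.1
print(psnr(target, reconstruction))   # 20.0 dB (MSE = 0.01)
```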
\subsubsection*{\bf MI Privacy leakage under the joint use of DP and SA} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{docs/Practical_attacks.png} \caption{Impact of varying the number of users $N$ on the reconstructed image quality (PSNR) of the DLG attack and on the MI privacy leakage. } \label{practical_Implications} \end{figure} To highlight the joint effect of differential privacy with secure aggregation, we conduct experiments on the MNIST dataset with a linear model to measure the MI privacy leakage in the presence of centralized DP noise added at the server after SA. Specifically, following~\cite{abadi2016deep}, we first clip the aggregated model update to make its norm bounded by $C$, and then add Gaussian noise with variance $\sigma^2$ to achieve $(\epsilon, \delta)$-DP. We set $C=1$, $\delta=1/1200$, and $\sigma=\sqrt{2\log(\frac{1.25}{\delta})}/\epsilon$. Fig.~\ref{fig:dp_sa_mi} shows the MI privacy leakage for different $(\epsilon, \delta)$-DP levels with SA ($\delta$ is fixed at $1/1200$). As the number of users increases, SA improves the privacy level (measured in terms of MI leakage) for different levels of DP noise, with the effect being most pronounced at the weakest DP noise level ($\epsilon =5000$ in Fig.~\ref{fig:dp_sa_mi}). Our experiments also show that as the number of users increases, the gain from using higher DP noise levels diminishes. In particular, with $N=1000$ users, the MI leakage levels for $\epsilon=5$, $10$, and $5000$ are almost the same; MI leakage is only reduced from 0.046\% to 0.034\% when using $\epsilon=5$ instead of $\epsilon=5000$. In contrast, we get a reduction from 0.234\% to 0.056\% when there are $N=2$ users. Importantly, the reduction observed in privacy leakage due to applying additional DP noise results in a severe degradation in accuracy, as seen in Fig.~\ref{fig:dp_sa_acc}, whereas the privacy improvement gained by having more users has a negligible effect on the performance of the trained model.
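The clip-then-add-noise mechanism described above can be sketched as follows (an illustrative sketch of the standard Gaussian mechanism with the parameters stated in the text; function and variable names are hypothetical, and the noise is applied once at the server, after secure aggregation):

```python
import numpy as np

def clip_and_noise(agg_update, eps, delta, clip_norm=1.0, seed=0):
    """Clip the aggregated update to L2 norm clip_norm (the constant C),
    then add Gaussian noise with sigma = sqrt(2 * log(1.25 / delta)) / eps,
    scaled by the clipping norm."""
    rng = np.random.default_rng(seed)
    v = np.asarray(agg_update, dtype=np.float64)
    norm = np.linalg.norm(v)
    if norm > clip_norm:
        v = v * (clip_norm / norm)
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return v + rng.normal(0.0, sigma * clip_norm, size=v.shape)
```

With $\epsilon=5000$ and $\delta=1/1200$ the added noise is tiny (standard deviation below $10^{-3}$ per coordinate for $C=1$), which matches the "weak DP noise" regime discussed above.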
For example, consider the case of 1000 users. One may achieve the same level of privacy in terms of MI leakage (lower than 0.05\% MI) either (i) with $(\epsilon, \delta)$-DP with $\epsilon=10$, which, however, results in unusable model accuracy (less than 50\%), or (ii) by aggregating the 1000 users and using a tiny amount of DP noise (equivalent to $\epsilon=5000$), which achieves a model accuracy higher than 90\%. } \begin{figure}[!t] \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/MI_vs_DP/results_cmp_ep_eps_low_avg_may_mnist_fedavg.jpg} \caption{Normalized MI, MNIST.} \label{fig:dp_sa_mi} \end{subfigure} \begin{subfigure}{.235\textwidth} \includegraphics[width=.99\linewidth]{figs/MI_vs_DP/results_cdp_acc.jpg} \caption{Model accuracy, MNIST.} \label{fig:dp_sa_acc} \end{subfigure} \caption{Effects of using DP noise together with SA on MI privacy leakage and model accuracy. Note that we add the DP noise to the aggregated model updates after SA.} \label{fig:dp_sa} \vspace{-.1in} \end{figure} \section{Introduction} Federated learning (FL) has recently gained significant interest as it enables collaboratively training machine learning models over locally private data across multiple users, without requiring the users to share their private local data with a central server~\cite{cc, kairouz2019advances,FedAvg}. The training procedure in FL is typically coordinated through a central server, which maintains a global model that is frequently updated locally by the users over a number of iterations. In each training iteration, the server first sends the current global model to the users. Next, the users update the global model by training it on their private datasets and then push their local model updates back to the server. Finally, the server updates the global model by aggregating the received local model updates from the users.
In the training process of FL, users achieve the simplest notion of privacy by keeping their data on-device and never sharing it with the server; instead, they only share their local model updates. However, it has been shown recently in different works (e.g., ~\cite{NEURIPS2019_60a6c400,geiping2020inverting,yin2021gradients}) that this alone is not sufficient to ensure privacy, as the shared model updates can still reveal substantial information about the local datasets. Specifically, these works have empirically demonstrated that the private training data of the users can be reconstructed from the local model updates through what is known as the model inversion attack. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/Intro_fig.pdf} \caption{Figure (a) illustrates the current formal privacy guarantee of FL with SA protocols and sheds light on the missing privacy guarantee on the aggregated model information leakage, which is studied in this paper. Figure (b) gives a preview of the behavior of the privacy leakage through the global aggregated model for a CNN model as a function of the number of users in FL. The privacy leakage follows a $\mathcal{O}(1/N)$ decay, as proved in our theoretical bounds. } \label{fig-intro} \end{figure*} To prevent such information leakage from the individual models that are shared during the training process of FL, Secure Aggregation (SA) protocols have emerged as a remedy, enabling the server to aggregate local model updates from a number of users without observing any of their model updates in the clear. As shown in Fig.~\ref{fig-intro}a, in each training round, users encrypt their local model updates before sending them to the server for aggregation.
Thus, SA protocols formally guarantee that: 1) both the server and other users obtain no information about any user's clear model update from the encrypted update, in the information-theoretic sense; 2) the server only learns the aggregated model. In other words, secure aggregation ensures that only the aggregated model update is revealed to the server. Note that these guarantees allow SA to be used as a supporting protocol for other privacy-preserving approaches such as differential privacy~\cite{dwork2006calibrating}. In particular, these approaches can benefit from SA by reducing the amount of noise needed to achieve a target privacy level (hence improving the model accuracy), as demonstrated in different works (e.g., \cite{truex2019hybrid,kairouz2021distributed}). However, even with these SA guarantees on individual updates, it is not yet fully understood how much privacy is guaranteed in FL using SA, since the aggregated model update may still leak information about an individual user's local dataset. This observation leads us to the central question that this work addresses: \vspace{0.1em} \begin{align*} \parbox{3in}{\centering\it How much information does the aggregated model leak about the local dataset of an individual user?} \end{align*} \vspace{0.1em} In this paper, we tackle this question by studying how much privacy can be guaranteed by using FL with SA protocols. We highlight that this work does not propose any new approaches to tackle privacy leakage, but instead analyzes the privacy guarantees offered by state-of-the-art SA protocols, where updates from other users can be used to hide the contribution of any individual user. An understanding of this privacy guarantee may assist other approaches such as differential privacy: instead of introducing fresh noise to protect a user's model update, a randomized algorithm can add only enough noise to supplement the noise from other users' updates up to the target privacy level.
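The aggregation-only property that SA enforces can be illustrated with the classic pairwise-masking construction (a toy sketch over a finite field with hypothetical names; real SA protocols additionally handle key agreement and dropout recovery):

```python
import numpy as np

P = 2**31 - 1  # toy prime modulus for masking

def mask_updates(updates, seed=0):
    """Pairwise additive masking: each pair (i, j), i < j, shares a random
    mask that user i adds and user j subtracts, so all masks cancel in the
    sum and the server only learns the aggregate."""
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [np.asarray(u, dtype=np.int64) % P for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.integers(0, P, size=masked[0].shape)
            masked[i] = (masked[i] + m) % P
            masked[j] = (masked[j] - m) % P
    return masked
```

For three users with integer updates $[1,2]$, $[3,4]$, $[5,6]$, each masked update is individually uninformative, yet their sum modulo $P$ equals the true aggregate $[9,12]$.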
We can summarize the contributions of the work as follows. \noindent\textbf{Contributions.} In this paper, we provide information-theoretic upper bounds on the amount of information that the aggregated model update (using \texttt{FedSGD}~\cite{cc}) leaks about any single user's dataset under an honest-but-curious threat model, where the server and all users follow the protocol honestly, but can collude to learn information about a user outside their collusion set. Our derived upper bounds show that SA protocols exhibit a more favorable behavior as we increase the number of honest users participating in the protocol at each round. We also show that the information leakage from the aggregated model decreases with increasing batch size, which has been empirically demonstrated in different recent works on model inversion attacks (e.g., \cite{NEURIPS2019_60a6c400,geiping2020inverting,yin2021gradients}), where increasing the batch size limits the attack's success rate. Another interesting conclusion from our theoretical bounds is that increasing the model size does not have a linear impact on the privacy leakage; rather, the leakage depends linearly on the rank of the covariance matrix of the gradient vector at each user. In our empirical evaluation, we conduct extensive experiments on the CIFAR10~\cite{krizhevsky2009learning} and MNIST~\cite{MNIST} datasets in different FL settings. In these experiments, we estimate the privacy leakage using a mutual information neural estimator~\cite{belghazi2018mine} and evaluate the dependency of the leakage on different FL system parameters: the number of users, the local batch size, and the model size. Our experiments show that the privacy leakage empirically follows dependencies similar to those proven in our theoretical analysis. Notably, as the number of users in the FL system increases to 20, the privacy leakage (normalized by the entropy of a data batch) drops below $5\%$ when training a CNN network on the CIFAR10 dataset (see Fig.
~\ref{fig-intro}b). We also show empirically that the dependencies observed theoretically and empirically for \texttt{FedSGD} also extend to the setting where the \texttt{FedAvg}~\cite{cc} FL protocol is used to perform multiple local training epochs at the users. \section{Theoretical Privacy Guarantees of FL with Secure Aggregation} \label{sec:privacy_guarantee} In this section, we theoretically quantify the privacy leakage in FL when using secure aggregation with the \texttt{FedSGD} protocol. \subsection{Main Results}\label{sec-Main resuslts} For clarity, we first state our main results under the honest-but-curious threat model discussed in Section \ref{sec-threat model} while assuming that there is no collusion between the server and users. We also assume that there is no user dropout. Later, in Section \ref{sec-Impact of User Sampling, Users' Dropout, and Collusion}, we discuss the general result with user dropout and collusion with the server.
Our central result in this section characterizes the privacy leakage in terms of mutual information for a single round of \texttt{FedSGD}, which for round $t$ is defined as \begin{equation}\label{eq:I_priv_round} I_{\rm priv}^{(t)} = \max_{i\in[N]} I\left(\mathbf{x}^{(t)}_i ; \sum_{i=1}^N\mathbf{x}_i^{(t)}\middle| \left\{\sum_{i=1}^N\mathbf{x}_i^{(k)}\right\}_{k\in[t-1]} \right) \end{equation} and then extends the privacy leakage bound to multiple rounds. Before stating our main result in Theorem~\ref{Thm:main_Thm} below, we first define two key properties of random vectors that will be used in stating our theorem, and formally state our operational assumptions. \begin{defin}[Independent under whitening]\label{def-2} We say that a random vector $\mathbf{v}$ with mean $\mu_{v}$ and non-singular covariance matrix $\mathbf{K}_{v}$ is \textit{independent under whitening} if the whitened vector $\widehat{\mathbf{v}}$ is composed of independent random variables, where $\widehat{\mathbf{v}} = \mathbf{K}_{v}^{-1/2} \left(\mathbf{v} - \mu_{v} \right)$. \end{defin} \begin{defin}[Uniformly $\sigma$-log concave] \label{def-3} A random vector $\mathbf{v}$ with covariance $\mathbf{K}_v$ is {\it uniformly $\sigma$-log concave} if it has a probability density function $e^{-\phi (\mathbf{v})}$ satisfying $\nabla^2 \phi(\mathbf{v}) \succeq \mathbf{I}$ and if $\exists\ \sigma >0$ such that $\mathbf{K}_v \succeq \sigma \mathbf{I}$. \end{defin} \begin{assump}[IID data distribution] Throughout this section, we consider the case where the local datasets $\mathcal{D}_i$ are sampled IID from a common distribution, i.e., the local dataset of user $i$ consists of IID data samples from a distribution $\mathcal{P}_i$, where $\mathcal{P}_i = \mathcal{P}$ for all $i \in [N]$. This implies that the distribution of the gradients $g_i( \bm {\theta}^{(t)}, b)$, for $i \in [N]$, conditioned on the last global model $\bm{\theta}^{(t)}$ is also IID.
For this common conditional distribution, we will denote its mean by $\mu_G^{(t)}$ and its covariance matrix by $\mathbf{K}_{G}^{(t)}$ in the $t$-th round. \end{assump} With the above definitions and using Assumption 1, we can now state our main result below, which is proved in Appendix~\ref{append:proof_theorem}. \begin{thm}[Single Round Leakage]\label{Thm:main_Thm} \textit{ Let $d^* \leq d$ be the rank of the gradient covariance matrix $\mathbf{K}_{G}^{(t)}$, and let $\mathcal{S}_g$ denote the set of subvectors of dimension $d^*$ of $g( \bm {\theta}^{(t-1)}, b)$ that have non-singular covariance matrices. Under Assumption 1, we can upper bound $I_{\rm priv}^{(t)}$ for \texttt{FedSGD} in the following two cases:\\ \textbf{Case. 1} If $\exists \bar{g} \in \mathcal{S}_g$ such that $\bar{g}$ is independent under whitening (see Def.~\ref{def-2}), and ${\rm E}|\bar{g}_i|^4 < \infty, \forall i \in [d^*]$, then $\exists\ C_{0,\bar{g}} > 0$ such that \begin{align}\label{6} &I_{\rm priv}^{(t)} \leq \frac{C_{0,\bar{g}}\ d^*}{(N-1)B} + \frac{d^*}{2}\log\left(\frac{N}{N-1}\right), \end{align} \textbf{Case. 2} If $\exists \bar{g} \in \mathcal{S}_g$ such that $\bar{g}$ is $\sigma$-log concave under whitening (see Def.~\ref{def-3}), then we have that \begin{align}\label{eq-case2-thm1} I_{\rm priv}^{(t)} \leq \frac{d^* C_{1,\bar{g}} - C_{2,\bar{g}}}{(N-1)B\sigma^4}+ \frac{d^*}{2}\log\left(\frac{N}{N-1}\right), \end{align} where the constants are $C_{1,\bar{g}} = 2\left( 1+\sigma +\log(2\pi) - \log(\sigma) \right) $ and $C_{2,\bar{g}} = 4\left(h(\bar{g})-\frac{1}{2}\log(|\Sigma_{\bar{g}}|)\right)$, with $\Sigma_{\bar{g}}$ being the covariance matrix of the vector $\bar{g}$. } \end{thm} \begin{remark}[Simplified bound]\label{remark-simplified-bound}{\rm Note that each $\bar{g} \in \mathcal{S}_g^{(t)}$ satisfying Case 1 or Case 2 gives an upper bound on $I_{\rm priv}^{(t)}$.
Let $\mathcal{S}_{g,c}^{(t)}$ be the set of $\bar{g} \in \mathcal{S}_g^{(t)}$ satisfying either Case 1 or Case 2. Then, we can combine these different bounds in Theorem~\ref{Thm:main_Thm} as follows \begin{align}\label{eq:simplified_round_bounds} I_{\rm priv}^{(t)} \leq \frac{d^*}{2} \log\left(\!\frac{N}{N{-}1}\!\right) + \dfrac{\displaystyle\min_{\bar{g}\in \mathcal{S}_{g,c}^{(t)}} \left\{d^*\widehat{C}_{1,\bar{g}} - \widehat{C}_{2,\bar{g}}\right\} }{(N-1)B}, \end{align} where \begin{align*} (\widehat{C}_{1,\bar{g}},\widehat{C}_{2,\bar{g}}) = \begin{cases} \left(C_{0,\bar{g}}, 0\right),& \text{if $\bar{g}$ satisfies Case 1}, \\ \left(\frac{C_{1,\bar{g}}}{\sigma^4}, \frac{C_{2,\bar{g}}}{\sigma^4}\right),& \text{if $\bar{g}$ satisfies Case 2}, \end{cases} \end{align*} where $C_{0,\bar{g}}, C_{1,\bar{g}}$ and $C_{2,\bar{g}}$ are defined as in Theorem~\ref{Thm:main_Thm}. } \end{remark} \begin{remark}{\rm (Why the IID assumption?) Our main result in Theorem~\ref{Thm:main_Thm} relies on recent results on the entropic central limit theorem \cite{eldan2020clt,bobkov2014berry} for the sum of independent and identically distributed random variables/vectors. Note that the IID assumption in the entropic central limit theorem can be relaxed to independent (but not necessarily identical) distributions; however, in this case, the upper bound will have a complex dependency on the moments of the $N$ distributions in the system. In order to highlight how the privacy guarantee depends on the different system parameters (discussed in the next subsection), we opted to consider the IID setting in our theoretical analysis. } \end{remark} \begin{remark}{\rm (Independence under whitening) One of our key assumptions in Theorem~\ref{Thm:main_Thm} is the independence under whitening assumption for stochastic gradient descent (SGD). This assumption is satisfied if the SGD vector can be approximated by a distribution with independent components or by a multivariate Gaussian vector.
Our adoption of this assumption is motivated by recent theoretical results for analyzing the behavior of SGD. These results have demonstrated great success in approximating the practical behavior of SGD, in the context of image classification problems, by modeling the SGD noise with (i) a non-isotropic Gaussian vector \cite{anisotropic_noise_SGD}, or (ii) $\alpha$-stable random vectors with independent components \cite{alpha_stable_noise_SGD}. For both of these noise models, the independence under whitening assumption in Theorem~\ref{Thm:main_Thm} is valid. However, a key practical limitation of the aforementioned SGD models (and thus of the independence under whitening assumption) is the assumption of a smooth loss function for learning. This excludes deep neural networks that make use of non-smooth activation and pooling functions (e.g., ReLU and max-pooling). } \end{remark} Now, using the bounds in Theorem~\ref{Thm:main_Thm}, in the following corollary we characterize the privacy leakage of the local training data $\mathcal{D}_i$ of user $i$ after $T$ global training rounds of \texttt{FedSGD}, which is defined as \begin{equation} \label{eq_short_hand_for_data_leakage_expression} I_{\rm priv/data} = \max_{i\in[N]} I\left(\mathcal{D}_i ; \left \{ \frac{1}{N} \sum_{i\in [N]} \mathbf{x}^{(t)}_i \right\}_{t \in [T]} \right), \end{equation} \begin{corollary} \label{corollary_data_lekage} Assuming that users follow the \texttt{FedSGD} training protocol, and under the same assumptions as in Theorem~\ref{Thm:main_Thm}, we can derive the following upper bounds on the privacy leakage $I_{\rm priv/data}$ after $T$ global training rounds of \texttt{FedSGD}:\\ \textbf{Case. 1:} Following the assumptions used in Case 1 in Theorem 1, we get \begin{align} \label{6-a} I_{\rm priv/data}\leq T \left[\frac{C_{0,\bar{g}}d^*}{(N-1)B} + \frac{d^*}{2}\log\left(\frac{N}{N-1}\right) \right], \end{align} \textbf{Case.
2:} Following the assumptions used in Case 2 in Theorem 1, we get \begin{align} \label{6-b} I_{\rm priv/data} \leq T \left[\! \frac{d^* C_{1,\bar{g}} - C_{2,\bar{g}}}{(N-1)B\sigma^4}+ \frac{d^*}{2}\log\left(\frac{N}{N{-}1}\right)\! \right]. \end{align} \end{corollary} We prove Corollary~\ref{corollary_data_lekage} in Appendix~\ref{append:proof_Corollary}. Note that we can combine the bounds in Corollary~\ref{corollary_data_lekage} similarly to the simplification in~\eqref{eq:simplified_round_bounds} of Theorem~\ref{Thm:main_Thm}. \subsection{Impact of System Parameters}\label{sec-theoratical-system-parameter} \subsubsection{Impact of Number of Users (N)} \label{subsubsec:num_user} As shown in Theorem \ref{Thm:main_Thm} and Corollary \ref{corollary_data_lekage}, the upper bounds on the information leakage from the aggregated model update decrease with the number of users $N$. Specifically, the leakage dependency on $N$ is at a rate of $\mathcal{O}(1/N)$. \subsubsection{Impact of Batch Size (B)} \label{subsubsec:batch_size} Theorem 1 and Corollary \ref{corollary_data_lekage} show that the information leakage from the aggregated model update decreases when increasing the batch size that is used in updating the local model of each user. \subsubsection{Impact of Model Size (d)} \label{subsubsec:model_size} Given our definition of $d^*$ in Theorem 1, where $d^*$ represents the rank of the covariance matrix $\mathbf{K}_{G}^{(t)}$ and $d^* \leq d$ ($d$ is the model size), the leakage given in Theorem \ref{Thm:main_Thm} and Corollary \ref{corollary_data_lekage} increases only with the rank of the covariance matrix of the gradient. This increase happens at a rate of $\mathcal{O}(d^*)$. In other words, increasing the model size $d$ (especially when the model is overparameterized) does not have a linear impact on the leakage. The experimental observations in Section \ref{sec-5} support these theoretical findings.
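The dependencies on $N$, $B$, and $d^*$ discussed above can be read directly off the Case 1 bound of Theorem 1; a small numeric sketch (with an arbitrary placeholder value for the constant $C_{0,\bar{g}}$, which is distribution dependent):

```python
import math

def case1_bound(n_users, batch_size, d_star, c0=1.0):
    """Case 1 upper bound of Theorem 1:
    C0 * d* / ((N - 1) * B) + (d* / 2) * log(N / (N - 1))."""
    return (c0 * d_star / ((n_users - 1) * batch_size)
            + 0.5 * d_star * math.log(n_users / (n_users - 1)))
```

For instance, with $d^*=100$, $B=32$, and the placeholder $C_{0,\bar{g}}=1$, the bound at $N=20$ is more than an order of magnitude smaller than at $N=2$, mirroring the $\mathcal{O}(1/N)$ decay discussed above.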
\label{remark:model_size} \subsubsection{Impact of Global Training Rounds (T)} \label{subsubsec:accum} Corollary \ref{corollary_data_lekage} demonstrates that the information leakage from the aggregated model update about the private training data of the users increases with the number of global training rounds. This result reflects the fact that, as training proceeds, the model at the server starts to memorize the training data of the users, and the data of the users is exposed to the server multiple times as $T$ increases; hence, the leakage increases. The increase of the leakage happens at a rate of $\mathcal{O}(T)$. \subsection{Impact of User Dropout, Collusion, and User Sampling }\label{sec-Impact of User Sampling, Users' Dropout, and Collusion} In this section, we extend the results given in Theorem~\ref{Thm:main_Thm} and Corollary~\ref{corollary_data_lekage} to cover the more practical FL scenarios that involve user dropout, collusion between the server and the users, and user sampling. We start by discussing the impact of user dropout and collusion. \subsubsection{Impact of User Dropout and Collusion with the Server}\label{sec-Impact of users dropout} Note that user dropout is equivalent to a situation where the non-surviving users send a deterministic update of zero. As a result, their contribution can be removed from the aggregated model, and we can, without loss of generality, consider an FL system where only the surviving subset $\mathcal{N}_s \subset [N]$ of users participates in the system. Similarly, when a subset of users colludes with the server, the server can subtract away their contribution to the aggregated model in order to unmask information about its target user $i$. As a result, we can again study this case by considering only the subset of non-colluding (and surviving, if we also consider dropout) users in our analysis.
This observation gives us the following corollary, a direct derivative of the result in Theorem~\ref{Thm:main_Thm}. \begin{corollary} In \texttt{FedSGD}, under the assumptions used in Theorem 1, if there is only a subset $\mathcal{N}_s^{(t)} \subset [N]$ of non-colluding and surviving users in the global training round $t$, then we have the following bound on $I_{\rm priv}^{(t)}$ \begin{align}\label{eq:simplified_round_bounds_dropout} I_{\rm priv}^{(t)} \leq \frac{d^*}{2} \log\left(\!\frac{|\mathcal{N}_s|}{|\mathcal{N}_s|{-}1}\!\right) + \dfrac{\displaystyle\min_{\bar{g}\in \mathcal{S}_{g,c}^{(t)}} \left\{d^*\widehat{C}_{1,\bar{g}} - \widehat{C}_{2,\bar{g}}\right\} }{(|\mathcal{N}_s|-1)B}, \end{align} where the maximization in $I_{\rm priv}^{(t)}$ (given in~\eqref{eq:I_priv_round}) is only over the set of surviving and non-colluding users, and the constants $\widehat{C}_{1,\bar{g}}$ and $\widehat{C}_{2,\bar{g}}$ are given in Remark 2. \end{corollary} This implies that the per-round leakage increases when we have a smaller number of surviving and non-colluding users. Similarly, we can modify the bound in Corollary 1 to take into account user dropout and user collusion by replacing $N$ with $|\mathcal{N}_s|$. \subsubsection{Impact of User Sampling} In Theorem~\ref{Thm:main_Thm} and Corollary~\ref{corollary_data_lekage}, we assume that all $N$ users in the FL system participate in each training round. If instead $K$ users are chosen in each round, then all leakage upper bounds will be in terms of $K$, the number of users in each round, instead of $N$. Furthermore, through Corollary~\ref{corollary_data_lekage}, we can develop upper bounds for each user $i$, depending on the number of rounds $T_i$ in which the user participated.
For example, taking into account selecting $K$ users in each round, denoted by $\mathcal{K}^{(t)}$, the upper bound in~\eqref{6-a} is modified to give the following information leakage for user $i$ \begin{align}\label{eq:bound_individual_user} &I_{\rm priv/data}(i) = I\left(\mathcal{D}_i ; \left \{ \frac{1}{K} \sum_{i\in \mathcal{K}^{(t)}} \mathbf{x}^{(t)}_i \right\}_{t \in [T]} \right)\nonumber \\ & \leq T_i \left[\frac{C_{0,\bar{g}}d^*}{(K-1)B} + \frac{d^*}{2}\log\left(\frac{K}{K-1}\right) \right], \end{align} where, in expectation, $T_i = TK/N$ if the set of $K$ users is chosen independently and uniformly at random in each round. Thus, user sampling improves the linear dependence of the leakage on $T$ (Section~\ref{subsubsec:accum}), but increases the per-round leakage due to a smaller number of users in each round (Section~\ref{subsubsec:num_user}). \section{Preliminaries} We start by discussing the basic federated learning model, before introducing the secure aggregation protocol and its state-of-the-art guarantees. \subsection{Basic Setting of Federated Learning } Federated learning is a distributed training framework~\cite{FedAvg} for machine learning, in which a set of users $\mathcal{N} = [N]$ ($|\mathcal {N}|=N$), each with its own local dataset $\mathcal{D}_i$, $i \in[N]$, collaboratively train a $d$-dimensional machine learning model parameterized by $\bm {\theta}\in \mathbb{R}^{d}$, based on all their training data samples. For simplicity, we assume that users have equal-sized datasets, i.e., $D_i = D$ for all $i\in [N]$.
The typical training goal in FL can be formally represented by the following optimization problem: \begin{equation} \label{ma} \bm {\theta}^* = \arg \min_{\bm {\theta}\in\mathbb{R}^{d} } \left[ C(\bm {\theta}) := \frac{1}{N} \sum_{i=1}^{N} C_i(\bm {\theta})\right], \end{equation} where $\bm {\theta}$ is the optimization variable, $C(\bm {\theta})$ is the global objective function, and $C_i(\bm {\theta})$ is the local loss function of user $i$, given by \begin{equation} C_i(\bm {\theta}) = \frac{1}{D} \sum_{(x , y ) \in \mathcal{D}_i } \ell_i(\bm {\theta}, (x , y ) ), \end{equation} where $\ell_i(\bm {\theta}, (x,y)) \in \mathbb{R}$ denotes the loss function at a given data point $(x , y )\in \mathcal{D}_i$. The dataset $\mathcal{D}_i$ at user $i \in [N]$ is sampled from a distribution $\mathcal{P}_i$. To solve the optimization problem in \eqref{ma}, an iterative training procedure is performed between the server and the distributed users, as illustrated in Fig. \ref{SystemModel}. Specifically, at iteration $t$, the server first sends the current global model parameters, $\bm {\theta}^{(t)}$, to the users. User $i\in[N]$ then computes its model update $\mathbf {x}_i^{(t)}$ and sends it to the server. After that, the model updates of the $N$ users are aggregated by the server to update the global model parameters into $\bm{\theta}^{(t + 1)}$ for the next round according to \begin{equation} \label{Agg1} \bm {\theta}^{(t+1)}=\bm {\theta}^{(t)}-\eta^{(t)} \frac{1}{N} \sum_{i=1}^{N} \mathbf{x}^{(t)}_i. \end{equation} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figs/M1.png} \caption{The training process in federated learning.} \label{SystemModel} \end{figure} There are two common protocols for computing the model update $\mathbf{x}_i$: \texttt{FedSGD} and \texttt{FedAvg}~\cite{FedAvg}.
Specifically, in \texttt{FedSGD}, each user uses a data batch $\mathcal{B}_i^{(t)}$ of size $B$, sampled uniformly at random from its local dataset $ \mathcal{D}_i$, to compute the model update as follows: \begin{equation} \mathbf {x}_i^{(t)} = \frac{1}{B} \sum_{b \in \mathcal{B}_i^{(t)} } g_i( \bm {\theta}^{(t)}, b), \end{equation} where $g_i( \bm {\theta}^{(t)}, b)$ is the stochastic estimate of the gradient $\nabla C_i(\bm {\theta}^{(t)})$ of the local loss function $C_i$ of user $i$, computed based on a random sample $b$ (corresponding to $(x_b,y_b)$) drawn uniformly from $\mathcal{D}_i$ without replacement. In \texttt{FedAvg}, each user runs $E$ complete local training epochs over its local dataset $\mathcal{D}_i$ to obtain its model update $\mathbf {x}_i^{(t)}$. Specifically, during each epoch, each user uses all of its mini-batches sampled from $\mathcal{D}_i$ to perform multiple stochastic gradient descent steps. \subsection{Secure Aggregation Protocols for Federated Learning} Recent works (e.g., ~\cite{NEURIPS2019_60a6c400,geiping2020inverting,yin2021gradients}) have empirically shown that some of the local training data of user $i$ can be reconstructed from the local model update $\mathbf{x}_i$, for $i \in [N]$. To prevent such data leakage, different SA protocols \cite{aono2017privacy, truex2019hybrid,dong2020eastfly,xu2019hybridalpha,secagg_bell2020secure,secagg_so2021securing,secagg_kadhe2020fastsecagg,zhao2021information,so2021lightsecagg,9712310,mugunthan2019smpai,so2021turbo} have been proposed to provide a privacy-preserving FL setting without sacrificing the training performance. In the following, we discuss the threat model used in these SA protocols. \subsubsection{Threat Model in Secure Aggregation for Federated Learning}\label{sec-threat model} Most SA protocols consider the honest-but-curious model~\cite{cc} with the goal of uncovering users' data.
In this threat model, the server and users honestly follow the SA protocol as specified. In particular, they will not modify their model architectures to better suit their attack, nor send malicious model updates that do not represent the actual learned model. However, the server and the participating users are assumed to be curious and to try to extract any useful information about the training data of any particular user. The extraction of information is done by storing and analyzing the different data received during the execution of the protocol. Furthermore, the threat model in these SA protocols assumes that the server can collude with any subset of users $\mathcal{T} \subset [N]$ by jointly sharing any data that was used during the execution of the protocol (including their clear model updates $\mathbf{x}_i$, for all $i \in \mathcal{T}$) that could help in breaching the data privacy of any target user $i \in[N]\setminus\mathcal{T}$. Similarly, this threat model also assumes that users can collude with each other to get information about the training data of other users. \subsubsection{Secure Aggregation Guarantees}\label{subsection-SA_guarantee} In general, SA protocols rely on different encryption techniques, such as homomorphic encryption \cite{aono2017privacy, truex2019hybrid,dong2020eastfly,xu2019hybridalpha} and secure multi-party computing (MPC) \cite{secagg_bell2020secure,secagg_so2021securing,secagg_kadhe2020fastsecagg,zhao2021information,so2021lightsecagg,9712310,mugunthan2019smpai,so2021turbo}, but are all similar in the encryption procedure, in which each user encrypts its own model update $\mathbf{y}^{(t)}_i = \text{Enc}(\mathbf{x}^{(t)}_i)$ before sending it to the server. This encryption is done such that these protocols achieve: 1) correct decoding of the aggregated model under users' dropout; and 2) privacy for the local model updates of the users given the encrypted updates. In the following, we formally describe each of these guarantees.
\\ \noindent \textbf{Correct decoding.} The encryption guarantees correct decoding of the aggregated model of the surviving users even if a subset $\mathcal{U} \subset[N]$ of the users dropped out during the protocol execution. In other words, the server should be able to decode \begin{equation} \text{Dec} \left(\sum_{i \in \mathcal{V}} \mathbf{y}^{(t)}_i \right)= \sum_{i \in \mathcal{V}} \mathbf{x}^{(t)}_i, \end{equation} where $\mathcal{V}$ is the set of surviving users (i.e., $\mathcal{U} \cup \mathcal{V} =[N]$ and $\mathcal{U} \cap \mathcal{V} = \emptyset$). \noindent \textbf{Privacy guarantee.} Under collusion between the server and any strict subset of users $\mathcal{T} \subset [N]$, we have the following \begin{equation}\label{eq-SA_guarantee} I\left({\{\mathbf{y}}^{(t)}_i\} _{i \in[N]}; \{\mathbf{x}^{(t)}_i\} _{i \in[N]} \middle | \sum_{i=1}^{N} \mathbf{x}^{(t)}_i, \mathbf{z}_\mathcal{T} \right) = 0, \end{equation} where $\mathbf{z}_\mathcal{T}$ is the collection of information at the users in $\mathcal{T}$. In other words, \eqref{eq-SA_guarantee} guarantees that, for a given subset of users $\mathcal{T}$ colluding with the server, the encrypted model updates $\{\mathbf{y}^{(t)}_i\} _{i \in[N]}$ leak no information about the model updates $\{\mathbf{x}^{(t)}_i\} _{i \in[N]}$ beyond the aggregated model $\sum_{i=1}^{N} \mathbf{x}^{(t)}_i$. We note that the upper bound on the size of the colluding set $\mathcal{T}$ such that \eqref{eq-SA_guarantee} is always guaranteed has been analyzed for the different SA protocols; the assumption $|\mathcal{T}|\leq \frac{N}{2}$ is widely used in most of the works (e.g., \cite{so2021lightsecagg,so2021turbo}). \begin{remark}{\rm Recently, there have also been some works that enable secure model aggregation by using Trusted Execution Environments (TEEs) such as Intel SGX (e.g., \cite{9708971,zhang2021shufflefl}). SGX is a hardware-based security mechanism to protect applications running on a remote server.
These TEE-based works are also designed to give the same guarantee as in \eqref{eq-SA_guarantee}.} \end{remark} In the following, we formally highlight the weakness of the current privacy guarantee in \eqref{eq-SA_guarantee}. \subsubsection{Our Contribution: Guarantees on Privacy Leakage from the Aggregated Model } Different SA protocols guarantee that the server does not learn any information about the local model update $\mathbf{x}^{(t)}_i$ of any user $i$ from the received encrypted updates $\{\mathbf{y}^{(t)}_i\}_{i \in [N]}$, beyond the aggregated model, as formally shown in \eqref{eq-SA_guarantee}. However, it is not clear how much information the aggregated model update itself leaks about a single user's local dataset $\mathcal{D}_i$. In this work, we fill this gap by theoretically analyzing the following term: \begin{align} \label{eq-aggregate_leake_prelimnies} I_{\rm priv/data} = \max_{i \in[N]}\ I\left(\mathcal{D}_i ; \left \{ \frac{1}{N} \sum_{i =1}^{N} \mathbf{x}^{(t)}_i \right\}_{t \in [T]} \right). \end{align} The term in \eqref{eq-aggregate_leake_prelimnies} represents how much information the aggregated model over $T$ global training rounds could leak about the private data $\mathcal{D}_i$ of any user $i \in[N]$. In the following section, we theoretically study this term and discuss how it is impacted by the different FL system parameters, such as model size, number of users, etc. In Section \ref{sec:eval}, we support our theoretical findings by empirically evaluating $I_{\rm priv/data}$ on real-world datasets and different neural network architectures. \section{Related work} {\bf Secure Aggregation in FL.} As mentioned above, secure aggregation was developed for FL~\cite{cc} to provide protection against model inversion attacks and robustness to user dropouts (due to poor connections or unavailability).
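As a toy illustration of why the leakage term above shrinks with the number of users, consider i.i.d. scalar Gaussian model updates (an assumption made here purely for illustration, not part of the general analysis). In that case $I(\mathbf{x}_1; \sum_{i} \mathbf{x}_i) = \frac{1}{2}\log_2\frac{N}{N-1}$ bits, which decays roughly as $1/(2N\ln 2)$:

```python
import math

def gaussian_leakage_bits(n_users):
    """I(x_1; sum of N i.i.d. scalar Gaussian updates) in bits:
    0.5 * log2(1 + var(x_1) / var(rest)) = 0.5 * log2(N / (N - 1))."""
    return 0.5 * math.log2(n_users / (n_users - 1))

for n in (2, 10, 50, 100):
    print(n, round(gaussian_leakage_bits(n), 4))  # leakage shrinks with N
```

The other users' updates act as noise that masks any single contribution, which is the intuition the later sections formalize and test empirically.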
There has been a series of works aiming to improve the efficiency of the aggregation protocol~\cite{secagg_bell2020secure,secagg_so2021securing,secagg_kadhe2020fastsecagg,zhao2021information,so2021lightsecagg,so2021turbo,9712310}. This general family of secure aggregation works prevents learning any information about each client's individual model update beyond the global aggregate of the updates; however, there has been no characterization of how much information the global aggregate itself can leak about an individual client's model and dataset. To the best of our knowledge, this work provides the first characterization, through mutual information, of the privacy leakage due to the aggregated model in FL with secure aggregation. \noindent{\bf Differential Privacy.} One way to protect a client's contributions is to use differential privacy (DP). DP provides a rigorous, worst-case mathematical guarantee that the contribution of a single client does not significantly impact the result of the query. The central application of differential privacy was studied in~\cite{bassily2014private, chaudhuri2011differentially, abadi2016deep}. This central application of DP in FL requires trusting the server with individual model updates before the differentially private mechanism is applied. An alternative approach studied in FL for an untrusted server is the local differential privacy (LDP) model~\cite{kasiviswanathan2011can,agarwal2018cpsgd,balle2019privacy}, where clients apply a differentially private mechanism (e.g., the Gaussian mechanism) locally to their update before sending it to the central server. LDP constraints imply central DP constraints; however, LDP mechanisms must significantly perturb each input, which reduces global utility due to the compounded effect of adding noise at different clients.
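For concreteness, a minimal sketch of an LDP-style Gaussian mechanism: clip the update in L2 norm, then add noise calibrated via the classic analytic bound $\sigma = \Delta_2\sqrt{2\ln(1.25/\delta)}/\epsilon$ (valid for $\epsilon \le 1$). The function and parameter names are our own, not taken from any specific library.

```python
import math
import random

def gaussian_mechanism(update, clip_norm, eps, delta, rng):
    """Clip the update to L2 norm `clip_norm`, then add Gaussian noise
    with sigma = clip_norm * sqrt(2 ln(1.25/delta)) / eps (the classic
    analytic calibration, valid for eps <= 1)."""
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in update]
    sigma = clip_norm * math.sqrt(2 * math.log(1.25 / delta)) / eps
    return [v + rng.gauss(0, sigma) for v in clipped]

rng = random.Random(0)
noisy = gaussian_mechanism([3.0, 4.0], clip_norm=1.0, eps=0.5, delta=1e-5, rng=rng)
```

The per-client noise scale grows as $\epsilon$ shrinks, which is the utility cost of LDP that the text contrasts with SA's noise-free masking.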
In this work, we use a mutual information metric to study the privacy guarantees that the secure aggregation protocol provides for a client's dataset, without adding differential privacy noise at the clients. In this case, secure aggregation uses the contributions from other clients to mask the contribution of a single client. We will discuss in Section~\ref{sec:discussion} situations where relying only on SA can clearly fail to provide differential privacy guarantees, and comment on the prevalence of such situations in practical training scenarios. \noindent{\bf Privacy Attacks.} Several works have empirically shown that it is possible to recover some training data from gradient information \cite{phong2017privacy, aono2017privacy,wang2019beyond, yin2021gradients}. Recently, the authors in \cite{geiping2020inverting} showed that it is possible to recover a batch of images used in the training of a non-smooth deep neural network. In particular, their proposed reconstruction attack successfully reconstructed different images from the average gradient computed over a mini-batch of data. Their empirical results showed that the success rate of the inversion attack decreases as the batch size increases. Similar observations have been demonstrated in subsequent works \cite{yin2021gradients}. In contrast to these works, we are, to the best of our knowledge, the first to theoretically quantify the amount of information that the aggregated gradient could leak about the private training data of the users, and to understand how the training parameters (e.g., number of users) affect the leakage. Additionally, our empirical results differ from those in \cite{phong2017privacy, aono2017privacy,wang2019beyond, yin2021gradients} in how the leakage is quantified.
In particular, we use the MINE tool to abstractly quantify the amount of information leakage in bits, instead of counting the number of reconstructed images. We have also extensively studied the effect of the system parameters empirically, using different real-world datasets and different neural network architectures. \section{Experimental Setup}\label{sec-5} \subsection{MI Estimation} To estimate the mutual information in our experiments, we use the Mutual Information Neural Estimator (MINE), the state-of-the-art method \cite{belghazi2018mine} for estimating the mutual information between two random vectors (see Appendix~\ref{subsec:mine} for more details). In our experiments, at the $t$-th global training round, we use MINE to estimate $I(\mathbf{x}^{(t)}_i;\sum_{i=1}^{N}\mathbf{x}^{(t)}_i | \bm{\theta}^{(t-1)})$, i.e., the mutual information between the model update of the $i$-th user $\mathbf{x}^{(t)}_i$ and the aggregated model update from all users $\sum_{i=1}^{N}\mathbf{x}^{(t)}_i$. Our sampling procedure is as follows: 1) at the beginning of global training round $t$, each user first sets its local model parameters to the global model parameters $\bm{\theta}^{(t-1)}$. 2) Next, each user shuffles its local dataset. 3) Then, each user picks a single data batch from its local dataset (if using \texttt{FedSGD}) or uses all local data batches (if using \texttt{FedAvg}) to update its local model. 4) Lastly, secure aggregation is used to calculate the aggregated model update. We repeat the above process $K$ times to get $K$ samples $\{(\mathbf{x}^{(t)}_{i,k};\sum_{i=1}^{N}\mathbf{x}^{(t)}_{i,k})\}_{k=1}^{k=K}$, where $\mathbf{x}^{(t)}_{i,k}$ represents the model update from the $i$-th user in the $k$-th sampling and $\sum_{i=1}^{N}\mathbf{x}^{(t)}_{i,k}$ represents the aggregated model update from all users in the $k$-th sampling.
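The four-step sampling procedure, repeated $K$ times, can be sketched as follows; `local_step` and `aggregate` are placeholders for the real local-training and secure-aggregation logic, and the scalar "updates" in the toy run are purely illustrative.

```python
import random

def collect_mi_samples(n_users, K, local_step, aggregate):
    """Repeat the four sampling steps K times: every user starts from the
    shared global model and produces a local update (steps 1-3), and the
    updates are securely aggregated (step 4).  Returns K paired samples
    (update of user 0, aggregate) ready for the MI estimator."""
    samples = []
    for _ in range(K):
        updates = [local_step(u) for u in range(n_users)]
        samples.append((updates[0], aggregate(updates)))
    return samples

# toy instantiation with scalar "updates"
rng = random.Random(0)
samples = collect_mi_samples(n_users=4, K=10,
                             local_step=lambda u: rng.gauss(0.0, 1.0),
                             aggregate=sum)
```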
Note that we use the $K$-th (last) sample $\sum_{i=1}^{N}\mathbf{x}^{(t)}_{i,K}$ to update the global model. We repeat the end-to-end training and MI estimation multiple times in order to get multiple MI estimates for each training round $t$. We use the estimates for each round to report the average MI estimate and derive the 95\% confidence interval for the MI estimation\footnote{During our experiments, we observe that the estimated MI does not change significantly across training rounds. Hence, we average the estimated MI across training rounds when reporting our results.}. Lastly, when using MINE to estimate MI, we use a fully connected neural network with two hidden layers of 100 neurons each as $T_{\theta}$ (see Appendix~\ref{subsec:mine} for more details), and we perform gradient ascent for 1000 iterations to train the MINE network. \subsection{Datasets and Models}\label{Models} \noindent\textbf{Datasets.} We use the MNIST and CIFAR10 datasets in our experiments. Specifically, the MNIST dataset contains 60,000 training images and 10,000 testing images, with 10 label classes. The CIFAR10 dataset contains 50,000 training images and 10,000 testing images, also with 10 label classes. For each dataset, we randomly split the training data into 50 local datasets of equal size to simulate 50 users with identical data distributions. We describe how to generate users with non-identical data distributions when we evaluate the impact of user heterogeneity in Section \ref{subsec:hetero}. Moreover, we use MINE to measure the entropy of an individual image in each of these datasets, as an estimate of the maximal potential MI privacy leakage per image. We report that the entropy of an MNIST image is 567 bits and the entropy of a CIFAR10 image is 1403 bits. We will use the entropy of the training data to normalize the measured MI privacy leakage in Section \ref{sec:eval}.
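MINE, as used above, maximizes the Donsker-Varadhan lower bound $I(X;Y) \ge \mathbb{E}_{P_{XY}}[T] - \log \mathbb{E}_{P_X \otimes P_Y}[e^{T}]$ over a neural critic $T_{\theta}$. A minimal sketch of the bound itself, evaluated with a fixed, hand-picked critic on correlated Gaussians rather than a trained network (so the resulting bound is loose by construction):

```python
import math
import random

def dv_lower_bound(T, pairs):
    """Donsker-Varadhan bound E_P[T] - log E_{P_x x P_y}[exp(T)];
    the product-marginal term pairs each x with a shifted y."""
    joint = sum(T(x, y) for x, y in pairs) / len(pairs)
    ys = [y for _, y in pairs]
    shuffled = ys[1:] + ys[:1]  # crude resampling of the marginal
    marg = sum(math.exp(T(x, y2))
               for (x, _), y2 in zip(pairs, shuffled)) / len(pairs)
    return joint - math.log(marg)

rng = random.Random(1)
pairs = []
for _ in range(5000):
    x = rng.gauss(0, 1)
    pairs.append((x, x + rng.gauss(0, 1)))  # true MI = 0.5 ln 2 ~ 0.347 nats

# a fixed, clipped quadratic critic; MINE would instead learn T_theta
T = lambda x, y: max(-5.0, min(5.0, 0.5 * x * y))
print(dv_lower_bound(T, pairs))  # a (loose) lower bound on I(X;Y), in nats
```

Training $T_{\theta}$ by gradient ascent, as described above, tightens this bound toward the true mutual information.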
\noindent\textbf{Models.} Table \ref{tab:model} reports the models used in our evaluation and their numbers of parameters. For the MNIST dataset, we consider three different models for federated learning. Each of these models takes as input a 28$\times$28 image and outputs the probabilities of 10 image classes. We start with a simple linear model, with a dimension of 7850. Next, we consider a non-linear model with the same number of parameters as the linear model. Specifically, we use a single layer perceptron (SLP), which consists of a linear layer followed by a (non-linear) ReLU activation function. Finally, we choose a multi-layer perceptron (MLP) with two hidden layers, each of which contains 100 neurons; in total, it has 89610 parameters. Since the MLP model we use can already achieve more than 95\% testing accuracy on the MNIST dataset, we do not consider more complicated models for MNIST. For the CIFAR10 dataset, we also evaluate three different models for FL. Each of these models takes as input a 32$\times$32$\times$3 image and outputs the probabilities of 10 image classes. Similar to MNIST, the first two models we consider are a linear model and a single layer perceptron (SLP), both of which contain 30730 parameters. The third model we consider is a Convolutional Neural Network (CNN) modified from AlexNet \cite{krizhevsky2012imagenet}, which contains a total of 82554 parameters and is able to achieve a testing accuracy above 60\% on CIFAR10. We do not consider larger CNN models due to limited computational resources.
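The quoted parameter counts follow directly from the layer shapes; a quick sketch verifying them (the CNN count depends on the exact modified-AlexNet configuration, which we do not reproduce here):

```python
def dense_params(n_in, n_out):
    """Weights plus biases of one fully connected layer."""
    return n_in * n_out + n_out

# MNIST models: 28x28 inputs, 10 classes
linear_mnist = dense_params(28 * 28, 10)       # 7850; ReLU in the SLP adds none
mlp_mnist = (dense_params(28 * 28, 100)
             + dense_params(100, 100)
             + dense_params(100, 10))          # 89610

# CIFAR10 models: 32x32x3 inputs, 10 classes
linear_cifar = dense_params(32 * 32 * 3, 10)   # 30730

print(linear_mnist, mlp_mnist, linear_cifar)   # 7850 89610 30730
```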
\begin{table}[!t] \centering \begin{tabular}{p{0.6in}<{\centering}|p{0.65in}<{\centering}p{0.65in}<{\centering}p{0.65in}<{\centering}}\hline \multicolumn{4}{c}{Models for MNIST}\\\hline Name & Linear & SLP & MLP\\\hline Size ($d$) & 7850 & 7850 & 89610 \\\hline \multicolumn{4}{c}{Models for CIFAR10}\\\hline Name & Linear & SLP & CNN \\\hline Size ($d$) & 30730 & 30730 & 82554 \\\hline \end{tabular} \caption{Models used for the MNIST and CIFAR10 datasets. SLP, MLP, and CNN stand for Single Layer Perceptron, Multi-Layer Perceptron, and Convolutional Neural Network, respectively.} \label{tab:model} \end{table}
\section{Introduction}\label{Section1} Carbon is a key element in the evolution of organic material in the Universe. There is a rich carbon chemistry in our Galaxy and other galaxies because of carbon's abundance and its ability to form complex molecules. Carbon chemistry starts in the circumstellar medium of evolved massive stars \citep{Henning1998} and branches through the different phases of the interstellar medium (ISM)\@. The material cycle between the stars and the gas in the ISM leads to the delivery of organic molecules to molecular clouds and planetary systems. A special sub-class of organic molecules, called prebiotic molecules, is thought to have played a major role in the formation of life on Earth \citep{ChybaSagan1992, Ehrenfreund2010, Ehrenfreund2000}. It is therefore important to understand the life cycle of organic molecules, and of the element carbon, in space. Carbon is the fourth most abundant element in the Universe. The elemental carbon abundance is the total carbon abundance in the gas and solid phases of the ISM, and the total abundance observed in the ISM should be in agreement with cosmic carbon abundance estimates \citep{Zuo2021a, Zuo2021b, HensleyDrain2021}. The cosmic carbon abundance derived from the Solar atmosphere \citep{Grevesse1998, Turcotte2002, Chiappini2003, Asplund2005, Asplund2009, Asplund2021}, Solar System abundances \citep{Lodders2003, Asplund2021}, Sun-like stars \citep{Bedell2018}, and the atmospheres of young stars \citep{SnowWitt1995, Sofia2001, Bensby2006, Przybilla2008} implies that there is up to $\sim$$358$\,ppm\footnote{ppm: parts per million} of carbon available in the ISM. Carbon is also one of the major dust-forming elements.
The cosmic carbon abundance sets a limit on the maximum amount of carbonaceous material available for making the interstellar dust (ISD) that accounts for the observed extinction, so the carbon abundance locked up in the ISD should be consistent with the available carbon and other dust-forming elements in the ISM \citep{Zuo2021a, Zuo2021b, HensleyDrain2021}. The depletion of carbon in the gas phase refers to the fraction missing with respect to the cosmic abundance. However, the carbon depleted from the gas phase is not sufficient to account for the observed extinction. This has been called the \textit{carbon crisis} \citep{Kim1996, Cardelli1996, Henning2004, Wang2015, Zuo2021b}. The wavelength dependence of the extinction (A$_{\lambda}$) is described by an extinction curve. Interstellar extinction curves give clues as to the size and chemical composition of the dust particles \citep{Cardelli1989, Fitzpatrick1999}. The regimes covering the far-ultraviolet (UV) and mid-infrared (MIR) exhibit features of the main components of dust, carbonaceous and siliceous materials \citep{DraineISD2003, Gordon2019, Gordon2021, HensleyDrain2021}. Studies of the extinction curves show that they vary spatially through the Galaxy; the size, structure and chemical composition of the ISD play a role in this variability \citep{Fitzpatrick1999, Fitzpatrick2019, Zuo2021b}. Since the composition and structure of dust are variable \citep{Henning2004, Jones2012a, Jones2012b, Jones2012c, Jones2013, Jones2019, DraineHensley2021}, elemental abundance estimations that take particle size into account to reproduce extinction curves are prone to large discrepancies \citep{Mathis1977, Draine1984, Kim1996, Mathis1996, Li1997, Mishra2017, Zuo2021b}. Therefore, an observational method to trace carbonaceous ISD is required, as this invisible solid component is a reservoir for organic material and the element carbon \citep{Sandford1991, VanDishoeck2014}.
Carbon molecules in the gas phase can be studied across the electromagnetic spectrum, from the ultraviolet to radio frequencies. Although complex molecules can be discerned in diffuse molecular gas through their vibrational and rotational spectra, it is more challenging to study large molecules, as their spectra are complex \citep{McGuire2018}. In particular, when they are in the solid phase, only vibrational spectra can be used to search for particular chemical groups and to probe the carbonaceous material in the ISD\@. There are several useful emission and absorption features in the infrared spectral region for this purpose. The 3.4\,$\mu$m (2940 cm$^{-1}$) absorption feature is of particular interest, since it is prominent and suitable for observational measurements. This feature arises from the aliphatic C--H stretch of methylene (--CH$_{2}$--) and methyl (--CH$_{3}$) groups in the carbonaceous material of the ISD\@. The strength of the 3.4\,$\mu$m absorption feature is proportional to the number of aliphatic C--H bonds. To estimate the amount of aliphatic hydrocarbons in the solid phase of the ISM, measurements of the 3.4\,$\mu$m absorption from ISD can be combined with absorption coefficient measurements of interstellar dust analogues (ISDAs) produced in the laboratory. We undertook laboratory measurements in order to provide a revised value for the absorption coefficient of aliphatic hydrocarbons (\citealt{Gunay2018}, hereafter Paper 1), producing ISDAs under simulated circumstellar / interstellar-like conditions.
The integrated absorption coefficient ($A$, cm molecule$^{-1}$) and line width ($\Delta \bar{\nu}$, cm$^{-1}$) (for the low-resolution spectra it becomes the filter bandwidth) that were measured from ISDAs and the optical depth ($\tau$) of the 3.4\,$\mu$m absorption can be used to obtain column density (N, cm$^{-2}$) of aliphatic hydrocarbon groups, as follows: \begin{equation}\label{eq:1} N =\frac{\tau \Delta \bar{\nu}} {A} \end{equation} In order to investigate the amount and distribution of aliphatic hydrocarbon abundances incorporated in the ISD in our Galaxy we need to map the 3.4\,$\mu$m optical depth, $\tau_{3.4\,\mu m}$, over wide fields. This requires us to obtain optical depth at 3.4\,$\mu$m for as many sightlines as possible. Measurement of the optical depth can be done readily for individual sightlines by single point or long-slit spectroscopy. As these spectroscopic processes are comparatively slow and require long observing times, Integral Field Spectroscopy (IFS) \citep{Allington2006} is used to speed up observations by simultaneously obtaining spectra in a two-dimensional field. It has become an important method in cases where there is a need to examine the spectra of extended objects (such as the ISM) as a function of position. However, narrow-band imaging can also be used to obtain spatially resolved spectral information for larger fields of view. We previously implemented a new method (\citealt{Gunay2020}, hereafter Paper 2) and trialed it through the dusty sightlines of the diffuse interstellar medium (DISM) towards a field that contains the centre of the Galaxy (hereafter Field A), where the optical depth of the 3.4\,$\mu$m absorption had already been reported in the literature (references in Paper 2). We did this in order to test the veracity of this new method, using narrow-band filters spread across the absorption feature. 
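As an illustrative sketch (in Python; the numerical values are representative inputs, not new measurements), Equation~\ref{eq:1} can be evaluated directly:

```python
def column_density(tau, delta_nu_bar, A):
    """Column density N (cm^-2) of aliphatic C-H groups from
    Equation (1): N = tau * delta_nu_bar / A."""
    return tau * delta_nu_bar / A

# Representative values: tau ~ 0.2, narrow-band filter width 62 cm^-1,
# absorption coefficient A = 4.7e-18 cm group^-1 (Paper 1)
N = column_density(0.2, 62.0, 4.7e-18)
print(f"N = {N:.2e} cm^-2")   # ~2.6e18 cm^-2
```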
Several well-studied bright sources in the field were used whose data were available in NASA/IPAC Infrared Science Archive (IRSA\footnote{https://irsa.ipac.caltech.edu/frontpage/}). Their brightness values are listed in Spitzer\footnote{https://irsa.ipac.caltech.edu/Missions/spitzer.html} and 2MASS\footnote{https://irsa.ipac.caltech.edu/Missions/2mass.html} catalogs. Their spectra were previously reported by \cite{Chiar2002} (hereafter [C02]) and \cite{Moultaka2004} (hereafter [M04]), and provided calibration. We used these to determine zero points for our measurements, and thence optical depths at 3.4\,$\mu$m (as described in detail in Paper 2). We applied Equation~\ref{eq:1} using the 3.4\,$\mu$m narrow-band filter width (62\,cm$^{-1}$). We then calculated aliphatic hydrocarbon column densities using the absorption coefficient of (\textit{A} = $4.7\times10^{-18}$\,cm\,group$^{-1}$) from Paper 1. In demonstrating that the technique worked, we were able to produce an aliphatic hydrocarbon column density map for the Galactic Centre cluster in Paper 2. We found a mean value of $\tau_{3.4\,\mu m}$ $\sim$ 0.2, corresponding to a typical aliphatic hydrocarbon column density of $\sim 3 \times10^{18}$\,cm$^{-2}$. Further, there were indications that the column density is increasing in a direction towards the Galactic mid-plane. Normalised aliphatic carbon abundances in ppm were also calculated based on a gas-to-extinction ratio $N(H) = 2.04 \times 10^{21}$\,cm$^{-2}$ mag$^{-1}$ \citep{Zhu2017}, by assuming A$_{V}$$\sim$30 mag. A mean value of 43\,ppm aliphatic hydrocarbon abundance was found. Comparing this to the ISM total carbon abundance of 358\,ppm \citep{Sofia2001}, we found that approximately 12$\%$ of the carbon is in aliphatic form. This shows that ISD is an important reservoir for aliphatic hydrocarbons in the Galactic field studied. 
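The chain of arithmetic behind the abundance figures quoted above can be sketched as follows (Python; the value A$_{V}$ = 30 mag and the gas-to-extinction ratio are the assumptions stated in the text):

```python
tau = 0.2         # mean 3.4 um optical depth
width = 62.0      # narrow-band filter width (cm^-1)
A = 4.7e-18       # absorption coefficient (cm group^-1, Paper 1)
N_CH = tau * width / A          # aliphatic C-H column density (cm^-2)

A_V = 30.0                      # assumed visual extinction (mag)
N_H = 2.04e21 * A_V             # N(H) from the gas-to-extinction ratio
ppm = 1e6 * N_CH / N_H          # normalised aliphatic carbon abundance
frac = ppm / 358.0              # fraction of total ISM carbon (358 ppm)
print(f"{N_CH:.1e} cm^-2, {ppm:.0f} ppm, {100 * frac:.0f}% of carbon")
```

This reproduces the quoted values of $\sim 3 \times 10^{18}$\,cm$^{-2}$, 43\,ppm and $\sim$12$\%$.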
In this work, we have obtained maps of the 3.4\,$\mu$m optical depth across two new fields and investigated the amount of aliphatic hydrocarbons. We compared the new maps with the map previously obtained for the Galactic Centre cluster field (Paper 2). We discuss the amount and distribution of aliphatic hydrocarbons in these three fields in the Galactic plane. We also investigated whether the distribution of the aliphatic hydrocarbons in the ISD resembles that of the total dust for the Galactic Centre fields, but could not find any clear relation. This paper is organised as follows. The observations and data reduction are described in Section~\ref{Section2}, data analysis in Section~\ref{Section3}, mapping applications in Section~\ref{Section4}, results in Section~\ref{Section5}, and the discussion and conclusions are presented in Section~\ref{Section6}. \section{Observations and Data Reduction}\label{Section2} In this third paper we apply the new technique for mapping the 3.4\,$\mu$m optical depth to two further fields: one in the Galactic Centre region (hereafter Field B), well separated from the well-studied central cluster, and one in the Galactic plane (hereafter Field C), well away from the Galactic Centre. To the best of our knowledge, Field B has not previously been examined for the aliphatic hydrocarbon absorption feature. It was chosen to contain several sources with brightness m$_{L}$ \textless 7$^{m}$ and many with m$_{L}$ \textless 10$^{m}$, which allow us to apply the method. The brightest 180 sources with m$_{L}$ \textless 10$^{m}$ were used for the spectrophotometric measurements. Field C was chosen to be well away from the Galactic Centre to determine whether the method could be applied to the spiral arm regions of the Milky Way. 
Field C samples the DISM and is less affected by strong extinction, unlike the Galactic Centre region fields, which additionally sample more local extinction from the ISM of the Galactic nucleus \citep{Moultaka2019, Geballe2021}. It was chosen towards the IRAS 18511+0146 cluster, where the 3.4$\,\mu$m absorption was previously reported by \cite{Ishii2002} (hereafter [I02]) and \cite{Godard2012} (hereafter [G12]). We applied the method using the fluxes of two sources from the IRAS 18511+0146 cluster listed in [G12] and compared our measurements with the optical depths reported in [I02] and [G12]. The spectrophotometric observations in this study cover a larger field of view (137 arcsec) than the previous studies on IRAS 18511+0146 \citep{Vig2007, Vig2017} and include optical depth measurements of more sources than [I02] and [G12]. There are several L--band (3.6$\,\mu$m) sources with brightness of m$_{L}$ \textless 11$^{m}$ and photometric data available \citep{Vig2007, Vig2017}. We detected 18 sources and used 15 of them with m$_{L}$ \textless 11$^{m}$ for the spectrophotometric measurements. Observations were carried out on the 3.8\,m United Kingdom Infrared Telescope (UKIRT) using the UIST camera\footnote{https://www.ukirt.hawaii.edu/instruments/uist/uist.html} ($1 - 5\,\mu$m imager--spectrograph, 1024 $\times$ 1024 InSb array, 0.12 arcsec/pixel with 123 arcsec field of view). Data were obtained through service observations for two fields in the Galactic Centre region (Field A and Field B; Project ID: U/15B/D01, 15--25 September 2015) and additionally for one field in the Galactic plane (Field C; Project ID: U/16B/D01, 20--21 September 2016). The results for Field A (which includes the Galactic Centre itself) were reported in Paper 2, and the methodology used here follows that described in that paper. We summarise the parameters for the three fields in Table~\ref{tab:1}. 
Spectrophotometric measurements were obtained using narrow-band filters at 3.05$\,\mu$m, 3.29$\,\mu$m, 3.4$\,\mu$m, 3.5$\,\mu$m, 3.6$\,\mu$m and 3.99$\,\mu$m. However, we used only the measurements with the three filters at 3.29$\,\mu$m, 3.4$\,\mu$m and 3.6$\,\mu$m to measure the optical depth of the 3.4$\,\mu$m absorption feature, following the same methodology as in our previous work (Paper 2). We calculated the sensitivity thresholds (mag) for a signal-to-noise ratio (S/N) of 5 using the UIST online calculator\footnote{http://www.ukirt.hawaii.edu/cgi-bin/ITC/itc.pl} for the total integration time we applied for each filter (see table~2 in Paper~2). The fields were imaged with a 3$\times$3 jitter pattern and 1 minute of integration per jitter position. The 9-point jitter pattern was then repeated, to achieve a total on-source integration time of 18 minutes. While Fields A and B were observed with a 20 arcsec grid pattern, Field C, being less crowded, used a 7 arcsec grid. The pixel scale was 0.12 arcsec. The resultant fields of view (FoV) were 163 arcsec for Fields A and B and 137 arcsec for Field C (see Table~\ref{tab:1}). The seeing ranged from 0.7 to 0.8 arcsec at 3.6$\,\mu$m during the observations. 
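As a quick check of the integration arithmetic above (an illustrative Python snippet):

```python
jitter_positions = 3 * 3        # 3x3 jitter pattern
minutes_per_position = 1        # integration per jitter position
repeats = 2                     # 9-point pattern executed twice in total
total_on_source = jitter_positions * minutes_per_position * repeats
print(total_on_source)          # 18 minutes on source
```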
\begin{table*} \begin{center} \caption{Parameters for the observed fields.} \label{tab:1} \centering \begin{tabular}{| c | c | c | c | } \hline Targets & Field A & Field B & Field C \\ \hline Longitude & $l: 359.945^{\circ}$ & $l: 359.765^{\circ}$ & $l: 034.821^{\circ}$ \\ Latitude & $b: -0.045^{\circ}$ & $b: -0.045^{\circ}$ & $b: 0.352^{\circ}$ \\ \hline Field of View&163 arcsec & 163 arcsec & 137 arcsec \\ \hline \multirow{1}{*}{Jitter}&9pt jitter & 9pt jitter & 9pt jitter \\ \multirow{1}{*}{Pattern}&offsets of $20''$ & offsets of $20''$ & offsets of $7''$\\ \hline Observing Dates & September 2015 & September 2015 & September 2016\\ \hline \end{tabular} \end{center} \end{table*} Data reduction was carried out using the Starlink\footnote{http://starlink.eao.hawaii.edu/starlink} / ORAC-DR\footnote{http://www.starlink.ac.uk/docs/sun230.htx/sun230.html} data reduction pipeline using the recipe described in Paper 2. For each jittered frame a reduced image is created and added into a master mosaic to improve signal to noise. The final mosaics were not trimmed to the dimensions of a single frame, thus the noise is greater in the peripheral areas which have received less total exposure time. The resultant images for each of the three fields in the 3.4$\,\mu$m filter are shown in Figure~\ref{fig1}. \begin{figure*} \begin{center} \begin{tabular}{ccc} {\includegraphics[trim=130mm 15mm 140mm 15mm, clip, angle=0,scale=0.55]{figure1a.eps}} & {\includegraphics[trim=130mm 15mm 140mm 15mm, clip, angle=0,scale=0.55]{figure1b.eps}} & {\includegraphics[trim=140mm 15mm 140mm 15mm, clip, angle=0,scale=0.55]{figure1c.eps}}\\ \end{tabular} \caption{Images of Fields A, B and C obtained through the 3.4$\,\mu$m filter. 
Coordinates are celestial (J2000).} \label{fig1} \end{center} \end{figure*} \section{Data Analysis}\label{Section3} \subsection{Photometric measurements} As per Paper 2, optimal photometry was carried out using the Starlink / GAIA\footnote{http://star-www.dur.ac.uk/~pdraper/gaia/gaia.html} package, involving profile fitting of bright and isolated stars \citep{Naylor1998}. Using the sensitivity thresholds for each filter (see table~2 in Paper~2), we determined that a minimum brightness of 11.9$^{m}$ is required for measurements with the 3.29$\,\mu$m filter, 12.2$^{m}$ for the 3.4$\,\mu$m filter and 12.1$^{m}$ for the 3.6$\,\mu$m filter to satisfy the S/N = 5 criterion. To ensure that the optical depth measurements are obtained with as high a S/N as possible, we eliminated the stars fainter than 11.0$^{m}$. Previously, we analysed the brightest 200 sources with magnitudes of $m_{3.6\,\mu m}$ $\le$ 9.5$^{m}$ in Field A (Paper 2). For Field B the number of bright sources is slightly smaller than in Field A, and the brightest 180 sources with magnitudes $m_{3.6\,\mu m}$ $\le$ 10$^{m}$ were selected. For Field C, only 15 sources with magnitudes of $m_{3.6\,\mu m}$ $\le$ 11$^{m}$ were found to be useful. \subsection{Zero point calibration} In the previous study, to calibrate the Field A data, we extracted spectral fluxes for the bright L--band sources in the GC cluster, as measured by [C02] and [M04], to determine zero points (ZPs) for each filter. After the comparison, GCIRS 7 as measured by [C02] was chosen (hereafter referred to as GCIRS7--C02) to calibrate the Field A data (described in detail in Paper 2). The fluxes for GCIRS7--C02 and the ZPs so obtained have been presented in Paper 2 (tables 3 and 4 in that paper). In this study, we applied the same ZP values to calibrate the narrow-band measurements of the new fields. 
This also allowed us to compare and interpret the optical depth measurements of Field B and Field C together with those of Field A. Unlike Field A, where we previously compared our results with the values obtained by [M04], there are no sources within Field B available to provide a self-consistency check on this calibration, since the 3.4$\,\mu$m optical depths there have not been measured before. In the case of Field C there are sources previously studied in the literature (\citealt{Vig2007, Vig2017}, [I02] and [G12]), so we used Field C to check the cross-calibration procedure we applied. We present the calibrated fluxes for the sources in Field C at 3.3$\,\mu$m, 3.4$\,\mu$m and 3.6$\,\mu$m, as well as the derived 3.4\,$\mu$m optical depths, $\tau_{3.4\,\mu m}$, in Table \ref{tab:2}, together with their Galactic and celestial coordinates. Their 2MASS K-band (2.17\,$\mu$m) brightnesses and Spitzer IRAC Ch1 (3.6\,$\mu$m) brightnesses are also listed, and the corresponding SSTGLMA\footnote{https://irsa.ipac.caltech.edu/data/SPITZER/GLIMPSE/} (Spitzer Space Telescope GLIMPSE Archive) designations are indicated. For comparison, we also note the corresponding IDs of the three sources for which the 3.4\,$\mu$m optical depths were previously measured by [G12] (i.e.\ S7, S10, S11) in the footnote to Table \ref{tab:2}. \begin{table*} \begin{center} \caption{Source IDs, Galactic and celestial coordinates (J2000) in degrees, fluxes ($\times$\,10$^{-17}$ W\,cm$^{-2}$ $\mu$m$^{-1}$) and 3.4\,$\mu$m optical depths ($\tau_{3.4\,\mu m}$) determined for the sources in Field C. In addition, the corresponding SSTGLMA designations and brightness values (mag) from 2MASS at 2.17$\,\mu$m and Spitzer IRAC at 3.6$\,\mu$m are also given.} 
\centering \label{tab:2} \begin{tabular}{| c | c | c | c | c | c c c | c | c | c c |} \hline Source & \multirow{2}{*}{ \textit {l} } & \multirow{2}{*}{ \textit {b} } & \multirow{2}{*}{ RA } & \multirow{2}{*}{ Dec} & \multicolumn{3}{c|}{Fluxes } & \multirow{2}{*}{$\tau_{3.4\,\mu m}$} & \multirow{2}{*}{ SSTGLMA } & \multicolumn{2}{c|}{Brightnesses} \\ No & & & & & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m & & &2.17$\,\mu$m & 3.6$\,\mu$m \\ \hline 1 & 34.820 & 0.351 & 283.4091 & 1.8401 & 41.03 & 32.53 & 41.92 & 0.24 & - & - & - \\ 2 & 34.808 & 0.344 & 283.4094 & 1.8261 & 8.36 & 6.37 & 6.20 & 0.18 & G034.8087+00.3453 & 6.10 & 6.79 \\ 3 & 34.817 & 0.347 & 283.4112 & 1.8359 & 0.82 & 0.82 & 1.19 & 0.14 & G034.8183+00.3482 & 10.97 & 7.68 \\ 4 & 34.824 & 0.366 & 283.3973 & 1.8511 & 1.04 & 0.67 & 0.86 & 0.39 & - & - & - \\ 5 & 34.817 & 0.345 & 283.4126 & 1.8353 & 0.30 & 0.37 & 0.63 & 0.10 & G034.8185+00.3467 & 12.66 & 7.75 \\ 6 & 34.825 & 0.355 & 283.4076 & 1.8471 & 0.52 & 0.33 & 0.35 & 0.33 & G034.8266+00.3565 & 9.25 & 9.01 \\ 7 & 34.823 & 0.333 & 283.4259 & 1.8351 & 0.32 & 0.22 & 0.20 & 0.26 & G034.8243+00.3348 & 9.76 & 9.58 \\ 8 & 34.811 & 0.328 & 283.4245 & 1.8217 & 0.16 & 0.14 & 0.18 & 0.20 & G034.8117+00.3299 & 12.27 & 9.93 \\ 9 & 34.820 & 0.335 & 283.4227 & 1.8329 & 0.14 & 0.11 & 0.13 & 0.20 & G034.8208+00.3366 & 11.95 & 10.14 \\ 10 & 34.833 & 0.342 & 283.4226 & 1.8478 & 0.19 & 0.12 & 0.13 & 0.34 & G034.8341+00.3435 & 10.44 & 10.22 \\ 11 & 34.827 & 0.349 & 283.4136 & 1.8459 & 0.11 & 0.08 & 0.08 & 0.29 & G034.8282+00.3506 & 11.02 & 10.61 \\ 12 & 34.813 & 0.349 & 283.4073 & 1.8330 & 0.05 & 0.04 & 0.06 & 0.33 & G034.8139+00.3504 & 13.11 & 10.95 \\ 13 & 34.817 & 0.356 & 283.4027 & 1.8403 & 0.06 & 0.04 & 0.06 & 0.41 & G034.8184+00.3578 & 13.17 & 11.07 \\ 14 & 34.818 & 0.352 & 283.4072 & 1.8389 & 0.08 & 0.06 & 0.05 & 0.19 & - & - & - \\ 15 & 34.810 & 0.335 & 283.4185 & 1.8240 & 0.05 & 0.03 & 0.04 & 0.44 & G034.8110+00.3362 & 12.78 & 11.50 \\ \hline \end{tabular} 
\begin{flushleft} \begin{footnotesize} Note that the IDs for the sources measured by [G12] are as follows: 1:S7, 3:S10, 5:S11. \end{footnotesize} \end{flushleft} \end{center} \end{table*} The calibrated spectra for the Field C sources are shown in Figure \ref{fig2} (on the left panel). They are compared with the flux levels (on the right panel) calculated using interpolation between 2MASS K and Spitzer IRAC Ch1 measurements, and found to be largely consistent. \begin{figure*} \begin{center} \begin{tabular}{c} {\includegraphics[angle=0,scale=0.42]{figure2b.eps}} \end{tabular} \caption{The fluxes for Field C sources obtained by spectrophotometric calibration using GCIRS7 as described in the text. The 3.4$\,\mu$m optical depths were then calculated by interpolating across the aliphatic absorption feature from 3.3$\,\mu$m to 3.6$\,\mu$m, as shown by the dotted lines. } \label{fig2} \end{center} \end{figure*} The linear continua between 3.3$\,\mu$m and 3.6$\,\mu$m are indicated by dotted lines in Figure \ref{fig2}, from which the optical depth of the 3.4$\,\mu$m absorption feature was calculated. Since the optical depth is given by $\tau = -\ln$(I/I$_{0}$), where I is the measured flux and I$_{0}$ is the estimated continuum emission flux at 3.4\,$\mu$m, there is an uncertainty in optical depth values due to continuum flux estimations as we do not know the black body spectrum of the background stellar light source. This uncertainty has been mentioned in [I02], [M04] and [G12] and also discussed in detail in Paper 2. Moreover, the 3.4$\,\mu$m absorption feature is superimposed on the long wavelength wing of the broad 3.1$\,\mu$m H$_{2}$O ice absorption band for lines of sight through Field A, B and C ([I02], [C02], [M04] and [G12]) and this adds another complication for the determination of the optical depth of 3.4\,$\mu$m absorption. 
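The continuum estimation and optical depth calculation described above can be sketched as follows (Python; we assume a continuum linear in wavelength between the 3.3$\,\mu$m and 3.6$\,\mu$m filter points, which reproduces the $\tau_{3.4\,\mu m}$ values in Table~\ref{tab:2} from the tabulated fluxes):

```python
from math import log

def tau_34(f33, f34, f36, lam=(3.3, 3.4, 3.6)):
    """Optical depth at 3.4 um: interpolate a linear continuum between the
    3.3 and 3.6 um flux points, then take tau = -ln(I / I0)."""
    w = (lam[1] - lam[0]) / (lam[2] - lam[0])
    f0 = f33 + w * (f36 - f33)     # estimated continuum flux I0 at 3.4 um
    return -log(f34 / f0)

# Source 1 of Table 2 (fluxes in units of 1e-17 W cm^-2 um^-1)
print(round(tau_34(41.03, 32.53, 41.92), 2))   # -> 0.24
```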
To estimate \textit{the lowest limit} of the aliphatic hydrocarbon absorption, \textit{local linear continua} between 3.3$\,\mu$m and 3.6$\,\mu$m were applied to the spectra in [I02], [M04] and [G12]. Similarly, in our previous work, optical depths for Field A sources were calculated by interpolating across the aliphatic absorption feature from 3.3$\,\mu$m to 3.6$\,\mu$m to yield minimum aliphatic hydrocarbon absorptions. In this study we followed the same methodology for the optical depth measurements, so that we can compare our results with the measurements of Field A and with the minimum levels of the aliphatic hydrocarbon absorption reported in [I02] and [G12]. There are three sources in Field C (S7, S10 and S11) with 3.4$\,\mu$m optical depths previously measured in [G12]. The brightest of these, S7, is partially saturated, so our derived values for it are not reliable. However, the results we obtained for the other two sources, S10 and S11, are similar to the literature values given in [G12] (see Table~\ref{tab:3}). We then compared our results with the optical depth measurements reported by [I02] for the line of sight through the IRAS 18511+0146 cluster. We found the measurements of S10 and S11 to be in agreement with the maximum level they reported, despite the fact that their method involves spectroscopic data in which the aliphatic hydrocarbon absorptions around 3.4$\,\mu$m produced by the methylene and methyl groups were measured separately (see Table~\ref{tab:3}). \begin{table*} \begin{center} \footnotesize \caption{Optical depth of the 3.4\,$\mu$m absorption feature for previously measured sources S7, S10 and S11 in Field C based on the GCIRS7--C02 calibration, compared to the values determined in [G12] and [I02]. 
} \label{tab:3} \centering \begin{tabular}{| c | c | c | c |} \hline \multirow{2}{*}{ } & \multicolumn{3}{c|}{$\tau_{3.4\,\mu m}$} \\ \cline{2-4} & S7 & S10 & S11 \\ \hline this study & 0.239 & 0.137 & 0.123 \\ \hline [G12] & 0.073 & 0.093 & 0.119 \\ \hline [I02] & \multicolumn{3}{c|}{ $0.087^{1}$ - $0.066^{2}$ } \\ \hline \end{tabular} \end{center} \begin{footnotesize} The reported optical depth values are for the methylene (1) and methyl (2) groups, respectively. \end{footnotesize} \end{table*} We did, however, also check this calibration against several other methods for determining the fluxes of the sources in Field C. This included using the fluxes for sources S10 and S11 in the spectra of [G12] to provide the calibration (i.e.\ instead of using the spectrum of GCIRS7 from [C02]). We also used the photometric fluxes for GCIRS7 in Field A and for S10 \& S11 in Field C, obtained by interpolating the 2MASS K-band (2.17$\,\mu$m) and Spitzer IRAC\footnote{https://irsa.ipac.caltech.edu/data/SPITZER/} Ch1-band (3.6$\,\mu$m) fluxes \citep{Vig2007} onto the narrow filter bands of our observations (including a correction for the flux at 3.4\,$\mu$m based on the 3.4\,$\mu$m optical depths reported in [C02] and [G12]), to provide the flux calibration. In all cases internally consistent results were found for each of these five additional calibration methods, with a near-constant offset between the 3.4\,$\mu$m optical depths derived for all the Field C sources with each of these calibration methods and those determined when using the fluxes derived for GCIRS7 from [C02]; these offsets range from 0.03 to 0.1, depending on the particular calibration source chosen. These offsets are smaller than the differences in the 3.4\,$\mu$m optical depths determined in Field A by different authors for the same source, since these authors apply different analysis methods to their data. 
For instance, for GCIRS7, [C02] and [C04] determined 3.4\,$\mu$m optical depths of 0.15 and 0.41, respectively, using the different sets of spectral data they each obtained. \cite{Moultaka2004} also compared the 3.4\,$\mu$m optical depths derived for a given source when different methods are applied to estimate the continuum level of the spectrum. For example, for the source IRS16C they found 3.4\,$\mu$m optical depths ranging from 0.14 to 0.49, depending on the method used. It is clear that it is difficult to obtain precise values for the optical depth, though the relative differences between sources are more reliably obtained, as we demonstrated in Paper 2\@. Given these uncertainties, the offsets we obtained (i.e.\ from 0.03 to 0.1) are smaller than the variations in the 3.4\,$\mu$m optical depth determined for the same source by the studies mentioned above. Since we determined in Paper 2 that GCIRS7--C02 provided the best calibration set for the Field A data, we have applied it to the data for Fields B and C. \section{Mapping Applications}\label{Section4} For the mapping applications, 180 sources in Field B and 15 sources in Field C (which are generally fainter than the GC field sources) were selected to satisfy the S/N criteria. The biases in the aliphatic hydrocarbon maps arise primarily from two sources. The first is that the distances of the background sources are largely unknown, since it is not possible to distinguish a reddened high-mass, hot star from a low-mass, cooler, intrinsically redder star. This means that we cannot be certain whether variations in the absorption are due to density or distance. The second bias arises because the data for each field are brightness limited. To address the brightness-limited bias, we split the data into quartiles according to their brightness. We note that there could be a bias due to the degeneracy between effective temperature and extinction, such that some of the faintest objects could be the most reddened objects. 
However, we show below that similar results were found, independent of brightness, in the resultant quartile maps. The issue of where the sources lie was found not to be a serious concern, since we could not find large source-to-source variations in the maps prepared using the quartile data sets. In the case of the Galactic Centre, the absorbing material is in fact the gas and dust in the central regions of the Galaxy. It is likely that most of the sources in Field A and Field B are at roughly the same distance and so have the same columns of absorbing material in front of them. For Field C, we have a very limited number of sources from which to draw a conclusion. However, since Field C samples the IRAS 18511+0146 cluster sources, we can assume that most of the sources sample the same columns of absorbing material. We also compared the optical depths with the brightnesses of the sources using the resultant maps, but could not find any correlation. \subsection{Application to a new field in the GC: Field B} We present the fluxes for the 180 sources measured at 3.3$\,\mu$m, 3.4$\,\mu$m and 3.6$\,\mu$m in Field B in Table \ref{tab:4} and the derived 3.4\,$\mu$m optical depths in Table \ref{tab:5}, together with their celestial coordinates. Optical depths were found to vary over a relatively small range across the field, with a mean value of $0.36 \pm 0.09$. We checked whether systematic biases due to the flux levels of the sources (required to satisfy S/N = 5) may have affected the optical depth determinations by dividing the sources into four quartiles based on their fluxes, as we applied previously for Field A (described in detail in Paper 2). The mean 3.4\,$\mu$m optical depths and the standard deviation in each quartile are shown in Table \ref{tab:6}. The differences between the derived 3.4\,$\mu$m optical depths are not significant, consistent with the flux level not biasing its determination. 
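The quartile check described above amounts to ranking the sources by flux, splitting them into four equal groups, and comparing the mean optical depth in each; a minimal sketch (Python, with hypothetical input values rather than the measured ones) is:

```python
from statistics import mean, stdev

def quartile_taus(fluxes, taus):
    """Mean and standard deviation of tau_3.4 in four flux quartiles
    (brightest quartile first); the last quartile absorbs any remainder."""
    order = sorted(range(len(fluxes)), key=lambda i: fluxes[i], reverse=True)
    q = len(order) // 4
    groups = [order[:q], order[q:2 * q], order[2 * q:3 * q], order[3 * q:]]
    return [(mean(taus[i] for i in g), stdev(taus[i] for i in g))
            for g in groups]

# Illustrative call with hypothetical fluxes and optical depths
print(quartile_taus([5.1, 4.0, 3.2, 2.5, 2.0, 1.6, 1.2, 0.9],
                    [0.20, 0.22, 0.25, 0.28, 0.30, 0.31, 0.33, 0.36]))
```

If the quartile means agree within their scatter, the brightness limit is not biasing the optical depth determination.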
\begin{table*} \begin{center} \caption{Calibrated fluxes ($\times$\,10$^{-18}$ W\,cm$^{-2}$ $\mu$m$^{-1}$) for the Field B sources used for 3.4$\,\mu$m optical depth calculations.} \centering \label{tab:4} \begin{tabular}{| p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} | p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} | p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} | p {0.6 cm} p {0.7 cm} p {0.7 cm} p {0.7 cm} |} \hline Source & \multicolumn{3}{c|}{Fluxes } & Source & \multicolumn{3}{c|}{Fluxes } & Source & \multicolumn{3}{c|}{Fluxes} & Source &\multicolumn{3}{c|}{Fluxes} \\ \multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m & \multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m &3.6$\,\mu$m &\multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m & \multirow{1}{*}{No} & 3.3$\,\mu$m & 3.4$\,\mu$m & 3.6$\,\mu$m \\ \hline 1 & 46.2 & 36.6 & 47.1 & 46 & 6.2 & 3.5 & 4.5 & 91 & 3.4 & 2.2 & 2.7 & 136 & 2.0 & 1.4 & 1.7 \\ 2 & 48.4 & 35.6 & 42.4 & 47 & 4.8 & 3.5 & 4.4 & 92 & 4.3 & 2.0 & 2.7 & 137 & 2.0 & 1.3 & 1.6 \\ 3 & 34.5 & 25.4 & 31.4 & 48 & 5.3 & 3.7 & 4.4 & 93 & 2.5 & 1.7 & 2.7 & 138 & 2.1 & 1.3 & 1.6 \\ 4 & 30.2 & 16.6 & 24.2 & 49 & 5.1 & 3.6 & 4.4 & 94 & 3.3 & 2.3 & 2.7 & 139 & 2.0 & 1.5 & 1.6 \\ 5 & 33.2 & 23.2 & 24.0 & 50 & 7.1 & 3.7 & 4.3 & 95 & 2.5 & 2.0 & 2.7 & 140 & 1.9 & 1.2 & 1.6 \\ 6 & 18.9 & 14.2 & 21.1 & 51 & 4.4 & 3.2 & 4.3 & 96 & 3.2 & 2.2 & 2.7 & 141 & 1.7 & 1.1 & 1.6 \\ 7 & 23.5 & 16.5 & 20.8 & 52 & 5.3 & 3.5 & 4.2 & 97 & 3.3 & 2.2 & 2.7 & 142 & 1.6 & 1.1 & 1.6 \\ 8 & 20.4 & 14.0 & 20.4 & 53 & 3.8 & 2.6 & 4.1 & 98 & 2.0 & 1.4 & 2.6 & 143 & 1.6 & 1.0 & 1.6 \\ 9 & 11.8 & 10.7 & 19.7 & 54 & 4.6 & 3.0 & 4.1 & 99 & 3.4 & 2.0 & 2.5 & 144 & 2.1 & 1.3 & 1.5 \\ 10 & 15.0 & 11.3 & 16.7 & 55 & 4.6 & 2.9 & 4.1 & 100 & 3.0 & 1.7 & 2.5 & 145 & 1.8 & 1.2 & 1.5 \\ 11 & 14.2 & 9.9 & 14.3 & 56 & 4.7 & 3.3 & 4.0 & 101 & 2.4 & 1.8 & 2.5 & 146 & 1.7 & 1.1 & 1.5 \\ 12 & 21.5 & 12.2 & 13.4 & 57 & 3.7 & 2.8 & 4.0 & 102 & 2.7 & 1.9 & 2.5 & 147 & 1.9 & 1.2 & 1.5 \\ 13 & 
16.2 & 11.4 & 13.3 & 58 & 3.5 & 2.6 & 4.0 & 103 & 2.3 & 1.7 & 2.4 & 148 & 2.2 & 1.4 & 1.5 \\ 14 & 11.9 & 8.5 & 12.1 & 59 & 4.7 & 3.0 & 4.0 & 104 & 2.5 & 1.6 & 2.4 & 149 & 1.3 & 0.9 & 1.5 \\ 15 & 9.4 & 7.4 & 11.3 & 60 & 4.7 & 2.9 & 4.0 & 105 & 2.8 & 1.8 & 2.3 & 150 & 1.5 & 1.0 & 1.3 \\ 16 & 12.0 & 7.8 & 10.9 & 61 & 4.6 & 2.8 & 3.9 & 106 & 3.1 & 1.6 & 2.3 & 151 & 1.6 & 1.2 & 1.3 \\ 17 & 11.0 & 7.4 & 9.6 & 62 & 3.7 & 3.0 & 3.9 & 107 & 2.6 & 2.0 & 2.3 & 152 & 1.5 & 0.8 & 1.3 \\ 18 & 12.3 & 6.8 & 9.4 & 63 & 4.9 & 2.9 & 3.9 & 108 & 2.5 & 1.6 & 2.3 & 153 & 1.9 & 1.0 & 1.3 \\ 19 & 10.7 & 5.9 & 9.2 & 64 & 3.5 & 2.5 & 3.9 & 109 & 2.6 & 1.6 & 2.3 & 154 & 1.8 & 1.0 & 1.3 \\ 20 & 9.9 & 6.3 & 8.7 & 65 & 4.3 & 2.8 & 3.9 & 110 & 2.6 & 1.8 & 2.3 & 155 & 1.6 & 1.2 & 1.3 \\ 21 & 9.4 & 6.6 & 8.1 & 66 & 4.3 & 3.0 & 3.9 & 111 & 2.8 & 1.8 & 2.2 & 156 & 1.6 & 0.9 & 1.3 \\ 22 & 9.8 & 5.9 & 8.1 & 67 & 3.0 & 2.5 & 3.8 & 112 & 2.8 & 1.9 & 2.2 & 157 & 1.7 & 1.0 & 1.2 \\ 23 & 9.8 & 6.4 & 7.8 & 68 & 4.4 & 3.1 & 3.8 & 113 & 2.9 & 1.8 & 2.2 & 158 & 1.4 & 1.0 & 1.2 \\ 24 & 9.1 & 6.5 & 7.7 & 69 & 4.1 & 2.8 & 3.7 & 114 & 3.0 & 1.6 & 2.2 & 159 & 1.3 & 0.9 & 1.1 \\ 25 & 7.1 & 5.1 & 6.7 & 70 & 4.1 & 3.0 & 3.5 & 115 & 2.7 & 1.8 & 2.2 & 160 & 1.2 & 0.8 & 1.1 \\ 26 & 8.3 & 5.5 & 6.5 & 71 & 3.9 & 2.7 & 3.5 & 116 & 2.0 & 1.5 & 2.2 & 161 & 1.5 & 0.8 & 1.1 \\ 27 & 7.8 & 5.6 & 6.5 & 72 & 4.2 & 2.8 & 3.5 & 117 & 2.2 & 1.5 & 2.1 & 162 & 1.4 & 0.7 & 1.1 \\ 28 & 6.8 & 4.9 & 6.5 & 73 & 4.5 & 3.0 & 3.5 & 118 & 2.3 & 1.5 & 2.1 & 163 & 1.6 & 0.8 & 1.1 \\ 29 & 7.9 & 4.8 & 6.4 & 74 & 3.7 & 2.6 & 3.3 & 119 & 2.6 & 1.7 & 2.1 & 164 & 1.5 & 0.9 & 1.0 \\ 30 & 5.6 & 4.5 & 6.3 & 75 & 4.0 & 2.8 & 3.3 & 120 & 2.7 & 1.6 & 2.1 & 165 & 1.3 & 0.9 & 1.0 \\ 31 & 7.6 & 5.0 & 6.2 & 76 & 4.0 & 2.6 & 3.2 & 121 & 2.4 & 1.5 & 2.1 & 166 & 1.1 & 0.8 & 1.0 \\ 32 & 6.0 & 4.4 & 6.2 & 77 & 3.7 & 2.3 & 3.2 & 122 & 2.8 & 1.7 & 2.0 & 167 & 1.5 & 0.9 & 1.0 \\ 33 & 4.4 & 3.7 & 6.1 & 78 & 3.9 & 2.5 & 3.1 & 123 & 2.3 & 1.5 & 2.0 & 168 & 1.3 & 0.6 & 1.0 
\\ 34 & 6.5 & 4.5 & 5.8 & 79 & 3.5 & 2.5 & 3.1 & 124 & 2.4 & 1.6 & 1.9 & 169 & 1.2 & 0.7 & 1.0 \\ 35 & 6.9 & 4.4 & 5.8 & 80 & 3.8 & 2.2 & 3.0 & 125 & 2.5 & 1.6 & 1.9 & 170 & 1.1 & 0.7 & 1.0 \\ 36 & 5.5 & 4.1 & 5.5 & 81 & 3.5 & 2.5 & 3.0 & 126 & 2.5 & 1.4 & 1.9 & 171 & 1.3 & 0.8 & 1.0 \\ 37 & 7.2 & 3.7 & 5.4 & 82 & 2.8 & 1.9 & 3.0 & 127 & 2.0 & 1.3 & 1.8 & 172 & 1.1 & 0.8 & 1.0 \\ 38 & 7.6 & 5.3 & 5.1 & 83 & 3.5 & 2.3 & 3.0 & 128 & 2.2 & 1.4 & 1.8 & 173 & 1.3 & 0.8 & 1.0 \\ 39 & 4.7 & 3.3 & 5.1 & 84 & 3.6 & 2.5 & 3.0 & 129 & 2.3 & 1.6 & 1.8 & 174 & 1.1 & 0.7 & 0.9 \\ 40 & 5.6 & 3.9 & 5.0 & 85 & 4.4 & 2.1 & 2.9 & 130 & 2.2 & 1.5 & 1.8 & 175 & 1.2 & 0.6 & 0.9 \\ 41 & 5.8 & 4.2 & 4.9 & 86 & 3.7 & 2.2 & 2.9 & 131 & 2.0 & 1.3 & 1.8 & 176 & 1.0 & 0.6 & 0.9 \\ 42 & 5.5 & 3.4 & 4.9 & 87 & 3.4 & 2.3 & 2.8 & 132 & 2.0 & 1.3 & 1.7 & 177 & 0.9 & 0.6 & 0.9 \\ 43 & 4.9 & 3.6 & 4.7 & 88 & 3.6 & 2.1 & 2.8 & 133 & 2.1 & 1.4 & 1.7 & 178 & 1.1 & 0.7 & 0.8 \\ 44 & 5.7 & 3.8 & 4.7 & 89 & 3.3 & 2.2 & 2.8 & 134 & 2.0 & 1.3 & 1.7 & 179 & 0.7 & 0.6 & 0.8 \\ 45 & 4.1 & 2.8 & 4.5 & 90 & 3.2 & 2.3 & 2.7 & 135 & 1.9 & 1.4 & 1.7 & 180 & 1.0 & 0.6 & 0.8 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{Celestial coordinates (J2000) in degrees and the optical depths for the 3.4$\,\mu$m absorption feature for the Field B sources.} \centering \label{tab:5} \begin{tabular}{| p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} | p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} | p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} | p {0.3 cm} p {0.9 cm} p {1.1 cm} p {0.4 cm} |} \hline No & RA & Dec & $\tau_{3.4}$ & No & RA & Dec & $\tau_{3.4}$ & No & RA & Dec & $\tau_{3.4}$ & No & RA & Dec & $\tau_{3.4}$ \\ \hline 1 & 266.3042 & -29.1602 & 0.18 & 46 & 266.2836 & -29.1629 & 0.40 & 91 & 266.2875 & -29.1705 & 0.30 & 136 & 266.3030 & -29.1611 & 0.24 \\ 2 & 266.3025 & -29.1626 & 0.21 & 47 & 266.3126 & -29.1699 & 0.22 & 92 & 266.2832 & -29.1448 & 0.53 & 137 & 266.3142 & -29.1482 & 0.34 \\ 
3 & 266.3159 & -29.1596 & 0.22 & 48 & 266.3279 & -29.1647 & 0.24 & 93 & 266.2914 & -29.1578 & 0.34 & 138 & 266.2883 & -29.1472 & 0.36 \\ 4 & 266.2910 & -29.1410 & 0.46 & 49 & 266.3039 & -29.1798 & 0.22 & 94 & 266.3055 & -29.1532 & 0.27 & 139 & 266.3082 & -29.1708 & 0.19 \\ 5 & 266.3010 & -29.1593 & 0.21 & 50 & 266.2876 & -29.1405 & 0.47 & 95 & 266.3076 & -29.1646 & 0.21 & 140 & 266.3146 & -29.1802 & 0.31 \\ 6 & 266.3240 & -29.1552 & 0.27 & 51 & 266.3311 & -29.1623 & 0.24 & 96 & 266.3011 & -29.1587 & 0.27 & 141 & 266.2961 & -29.1492 & 0.36 \\ 7 & 266.3170 & -29.1558 & 0.25 & 52 & 266.3264 & -29.1743 & 0.25 & 97 & 266.3084 & -29.1496 & 0.31 & 142 & 266.3118 & -29.1658 & 0.27 \\ 8 & 266.2954 & -29.1552 & 0.32 & 53 & 266.3081 & -29.1404 & 0.35 & 98 & 266.3254 & -29.1505 & 0.40 & 143 & 266.2952 & -29.1511 & 0.37 \\ 9 & 266.3098 & -29.1553 & 0.24 & 54 & 266.3212 & -29.1406 & 0.32 & 99 & 266.2858 & -29.1595 & 0.39 & 144 & 266.3264 & -29.1473 & 0.30 \\ 10 & 266.2942 & -29.1709 & 0.25 & 55 & 266.3158 & -29.1403 & 0.37 & 100 & 266.2883 & -29.1442 & 0.46 & 145 & 266.2868 & -29.1729 & 0.31 \\ 11 & 266.2932 & -29.1643 & 0.30 & 56 & 266.3254 & -29.1623 & 0.24 & 101 & 266.3112 & -29.1722 & 0.23 & 146 & 266.3279 & -29.1550 & 0.33 \\ 12 & 266.2844 & -29.1589 & 0.36 & 57 & 266.3032 & -29.1795 & 0.25 & 102 & 266.3022 & -29.1558 & 0.27 & 147 & 266.3191 & -29.1600 & 0.27 \\ 13 & 266.3212 & -29.1770 & 0.23 & 58 & 266.2929 & -29.1760 & 0.26 & 103 & 266.3063 & -29.1772 & 0.27 & 148 & 266.3210 & -29.1479 & 0.27 \\ 14 & 266.3246 & -29.1454 & 0.28 & 59 & 266.2985 & -29.1775 & 0.30 & 104 & 266.3285 & -29.1793 & 0.30 & 149 & 266.3251 & -29.1395 & 0.38 \\ 15 & 266.2986 & -29.1668 & 0.24 & 60 & 266.3311 & -29.1443 & 0.35 & 105 & 266.2853 & -29.1564 & 0.33 & 150 & 266.3228 & -29.1427 & 0.34 \\ 16 & 266.2872 & -29.1621 & 0.33 & 61 & 266.2830 & -29.1721 & 0.34 & 106 & 266.2912 & -29.1421 & 0.52 & 151 & 266.3232 & -29.1639 & 0.22 \\ 17 & 266.3064 & -29.1449 & 0.30 & 62 & 266.3107 & -29.1608 & 0.19 & 
107 & 266.3211 & -29.1619 & 0.18 & 152 & 266.3223 & -29.1381 & 0.45 \\ 18 & 266.2898 & -29.1459 & 0.44 & 63 & 266.3148 & -29.1377 & 0.39 & 108 & 266.2943 & -29.1548 & 0.35 & 153 & 266.2914 & -29.1437 & 0.48 \\ 19 & 266.2885 & -29.1381 & 0.50 & 64 & 266.3140 & -29.1455 & 0.33 & 109 & 266.2988 & -29.1491 & 0.37 & 154 & 266.2831 & -29.1653 & 0.39 \\ 20 & 266.3188 & -29.1436 & 0.34 & 65 & 266.3091 & -29.1432 & 0.33 & 110 & 266.3020 & -29.1699 & 0.25 & 155 & 266.3210 & -29.1556 & 0.18 \\ 21 & 266.3223 & -29.1720 & 0.24 & 66 & 266.3152 & -29.1674 & 0.25 & 111 & 266.3074 & -29.1513 & 0.30 & 156 & 266.3022 & -29.1437 & 0.38 \\ 22 & 266.2829 & -29.1776 & 0.33 & 67 & 266.3049 & -29.1607 & 0.20 & 112 & 266.2988 & -29.1479 & 0.27 & 157 & 266.2828 & -29.1594 & 0.40 \\ 23 & 266.3239 & -29.1513 & 0.29 & 68 & 266.3147 & -29.1582 & 0.24 & 113 & 266.2876 & -29.1549 & 0.33 & 158 & 266.2995 & -29.1514 & 0.25 \\ 24 & 266.3206 & -29.1682 & 0.22 & 69 & 266.3218 & -29.1522 & 0.28 & 114 & 266.2846 & -29.1485 & 0.42 & 159 & 266.3224 & -29.1787 & 0.24 \\ 25 & 266.3170 & -29.1524 & 0.25 & 70 & 266.3124 & -29.1790 & 0.21 & 115 & 266.3180 & -29.1782 & 0.26 & 160 & 266.3192 & -29.1482 & 0.31 \\ 26 & 266.3125 & -29.1510 & 0.29 & 71 & 266.3013 & -29.1668 & 0.27 & 116 & 266.3299 & -29.1723 & 0.27 & 161 & 266.2973 & -29.1782 & 0.44 \\ 27 & 266.3134 & -29.1536 & 0.22 & 72 & 266.3201 & -29.1461 & 0.28 & 117 & 266.2953 & -29.1683 & 0.29 & 162 & 266.2874 & -29.1602 & 0.40 \\ 28 & 266.3086 & -29.1775 & 0.25 & 73 & 266.3181 & -29.1712 & 0.26 & 118 & 266.2922 & -29.1608 & 0.31 & 163 & 266.2993 & -29.1373 & 0.47 \\ 29 & 266.2853 & -29.1736 & 0.33 & 74 & 266.3234 & -29.1597 & 0.26 & 119 & 266.3201 & -29.1513 & 0.34 & 164 & 266.2963 & -29.1803 & 0.34 \\ 30 & 266.3129 & -29.1626 & 0.20 & 75 & 266.3079 & -29.1555 & 0.24 & 120 & 266.3297 & -29.1790 & 0.37 & 165 & 266.2990 & -29.1569 & 0.26 \\ 31 & 266.3158 & -29.1466 & 0.30 & 76 & 266.3171 & -29.1639 & 0.31 & 121 & 266.2967 & -29.1581 & 0.36 & 166 & 266.2988 & 
-29.1639 & 0.22 \\ 32 & 266.3286 & -29.1543 & 0.24 & 77 & 266.2890 & -29.1669 & 0.34 & 122 & 266.2921 & -29.1736 & 0.34 & 167 & 266.3283 & -29.1702 & 0.28 \\ 33 & 266.2943 & -29.1658 & 0.21 & 78 & 266.3247 & -29.1672 & 0.28 & 123 & 266.2965 & -29.1784 & 0.29 & 168 & 266.2843 & -29.1427 & 0.47 \\ 34 & 266.3211 & -29.1594 & 0.26 & 79 & 266.3214 & -29.1570 & 0.25 & 124 & 266.3089 & -29.1504 & 0.29 & 169 & 266.2896 & -29.1451 & 0.56 \\ 35 & 266.3309 & -29.1767 & 0.32 & 80 & 266.2880 & -29.1508 & 0.40 & 125 & 266.3212 & -29.1444 & 0.28 & 170 & 266.3102 & -29.1664 & 0.32 \\ 36 & 266.3089 & -29.1791 & 0.24 & 81 & 266.3080 & -29.1546 & 0.23 & 126 & 266.2963 & -29.1390 & 0.44 & 171 & 266.3095 & -29.1575 & 0.28 \\ 37 & 266.2922 & -29.1408 & 0.50 & 82 & 266.2913 & -29.1525 & 0.37 & 127 & 266.2937 & -29.1622 & 0.36 & 172 & 266.2973 & -29.1737 & 0.24 \\ 38 & 266.3149 & -29.1574 & 0.19 & 83 & 266.3162 & -29.1496 & 0.30 & 128 & 266.2928 & -29.1781 & 0.30 & 173 & 266.3036 & -29.1701 & 0.30 \\ 39 & 266.3289 & -29.1461 & 0.30 & 84 & 266.3205 & -29.1546 & 0.28 & 129 & 266.3272 & -29.1751 & 0.24 & 174 & 266.2953 & -29.1531 & 0.33 \\ 40 & 266.3229 & -29.1518 & 0.26 & 85 & 266.2908 & -29.1378 & 0.54 & 130 & 266.2871 & -29.1779 & 0.27 & 175 & 266.2845 & -29.1509 & 0.40 \\ 41 & 266.3135 & -29.1565 & 0.22 & 86 & 266.3138 & -29.1428 & 0.37 & 131 & 266.3267 & -29.1570 & 0.33 & 176 & 266.2906 & -29.1760 & 0.36 \\ 42 & 266.2846 & -29.1626 & 0.38 & 87 & 266.3177 & -29.1619 & 0.26 & 132 & 266.2986 & -29.1597 & 0.32 & 177 & 266.2994 & -29.1678 & 0.24 \\ 43 & 266.3255 & -29.1609 & 0.24 & 88 & 266.2903 & -29.1673 & 0.35 & 133 & 266.2988 & -29.1469 & 0.27 & 178 & 266.3222 & -29.1605 & 0.24 \\ 44 & 266.3165 & -29.1542 & 0.29 & 89 & 266.3082 & -29.1451 & 0.28 & 134 & 266.2957 & -29.1596 & 0.32 & 179 & 266.3021 & -29.1534 & 0.26 \\ 45 & 266.2933 & -29.1503 & 0.37 & 90 & 266.3228 & -29.1607 & 0.26 & 135 & 266.3003 & -29.1667 & 0.20 & 180 & 266.3208 & -29.1728 & 0.37 \\ \hline \end{tabular} \end{center} 
\end{table*} \begin{table*} \begin{center} \footnotesize \caption{Mean fluxes ($\times$\,10$^{-18}$ W\,cm$^{-2}$ $\mu$m$^{-1}$) and 3.4 $\mu$m optical depths ($\tau_{3.4\,\mu m}$) for each of the quartiles in Field B.} \label{tab:6} \centering \begin{tabular}{| c | c | c | c | c | } \hline & Flux Group 1 & Flux Group 2 & Flux Group 3 & Flux Group 4 \\ \hline Flux & 11.2 & 3.5 & 2.2 & 1.2 \\ \hline $\tau_{3.4\,\mu m}$ & 0.35$\pm$0.08 & 0.36$\pm$0.07 & 0.39$\pm$0.07 & 0.40$\pm$0.10 \\ \hline \end{tabular} \end{center} \end{table*} We then considered the spatial distribution of the 3.4\,$\mu$m optical depth for the 45 sources in each flux (at 3.6\,$\mu$m) quartile range within Field B, with maps shown in Figure~\ref{fig3}. In this way, the effect of any possible individual erroneous measurement was also examined. While there is clearly some scatter in the distribution, as there are areas within the field with few sources, the same pattern is apparent in each of the quartile ranges. There is a gradient across the field from SE to NW, with the 3.4\,$\mu$m optical depth rising approximately 50\% from one side to the other. This is independent of the brightness of individual sources and thus not likely to arise from the larger uncertainties in 3.4\,$\mu$m optical depth derived from the fainter sources. The faintest flux group does indeed show more variation in the pattern, most likely due to lower S/N in this group. \begin{figure*} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,scale=0.42]{figure5a.eps}} & {\includegraphics[angle=0,scale=0.42]{figure5b.eps}} \\ {\includegraphics[angle=0,scale=0.42]{figure5c.eps}} & {\includegraphics[angle=0,scale=0.42]{figure5d.eps}} \\ \end{tabular} \caption{Map of the optical depth of the 3.4$\,\mu$m absorption feature for the sources in the four flux quartiles in Field B, from brightest (group 1) to faintest (group 4). Each map contains 45 sources. Colour bars and contours indicate the $\tau_{3.4\,\mu m}$ levels.
The locations of the sources are shown by black dots whose sizes are proportional to their fluxes.} \label{fig3} \end{center} \end{figure*} We conclude that we are able to reliably measure the 3.4\,$\mu$m optical depth using this technique of imaging through narrow band filters in Field B, to a sensitivity level of $\sim$10$^{-18}$ W\,cm$^{-2}$ $\mu$m$^{-1}$ at 3.6$\,\mu$m ($\sim$10 mag). Given the increased scatter in the lowest flux quartile, we remove the faintest 20 sources from the list to take the brightest 160 sources and plot their 3.4\,$\mu$m optical depths in Figure \ref{fig4}. The locations of the background sources are shown by black dots. On the left panel the sizes of the dots are proportional to the 3.4\,$\mu$m optical depths along their lines of sight. On the right panel the sizes of the dots are proportional to the fluxes (at 3.6\,$\mu$m) of the sources. The optical depth, in general, is seen to rise from $\sim 0.2$ to $\sim 0.6$ moving from SE to NW across this field near the centre of the Galaxy. There could readily be local source-to-source variations in 3.4\,$\mu$m optical depth, superimposed on a broader trend. However, examination of Figure \ref{fig4} (right panel) does not suggest this to be significant, as little inter-source scatter is apparent. \begin{figure*} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,scale=0.42]{figure6b.eps}} & {\includegraphics[angle=0,scale=0.42]{figure6a.eps}} \\ \end{tabular} \caption{Map of the optical depth in the 3.4$\,\mu$m absorption feature for Field B in celestial coordinates (J2000) for the brightest 160 sources. Colour bars and contours indicate $\tau_{3.4\,\mu m}$ levels. On the left panel the locations of the sources are shown by black dots whose sizes are proportional to the optical depth values.
On the right panel, the sizes are proportional to the 3.6$\,\mu$m flux values.} \label{fig4} \end{center} \end{figure*} \subsection{Application to a field in the Galactic Plane: Field C (IRAS 18511+0146)} We used the resultant spectra of 15 sources to provide a low resolution map of the 3.4$\,\mu$m optical depth across Field C in Figure \ref{fig5}. Colour bars and contours indicate the 3.4\,$\mu$m optical depth levels, which were found to range from 0.1 to 0.4 across the field. On the left panel, the locations of the background sources are shown and the sizes of the dots are proportional to the fluxes (at 3.6\,$\mu$m) of the sources (the brightness of S7 is not shown but its location is indicated by a \textit{star symbol} ($\ast$) as it is far brighter than the other sources and partly saturated). On the right panel, the sizes of the dots are proportional to the 3.4$\,\mu$m optical depths along their lines of sight. There is no correlation between fluxes and optical depths, indicating no obvious bias in the determination of the latter, though we note that care must be taken in interpretation given the low number of irregularly spaced points (15) used in the map's creation. For the Galactic Centre region, maps obtained with four different sets of stars were compared to each other and found to be consistent. However, we could not apply this test to Field C as there is an insufficient number of sources. \begin{figure*} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,scale=0.42]{figure3a.eps}} & {\includegraphics[angle=0,scale=0.42]{figure3b.eps}} \\ \end{tabular} \caption{Map of the optical depth of the 3.4$\,\mu$m absorption feature for Field C in celestial coordinates (J2000). Colour bars and contours indicate the $\tau_{3.4\,\mu m}$ levels.
On the left panel the locations of the sources are shown by black dots whose sizes are proportional to the flux values (we note that the brightness of S7 is not shown but its location is indicated by a star ($\ast$) as it is far brighter than the other sources and partly saturated). On the right panel, the sizes are proportional to the 3.4$\,\mu$m optical depth.} \label{fig5} \end{center} \end{figure*} \begin{figure*} \begin{center} \begin{tabular}{cc} {\includegraphics[angle=0,scale=0.46]{figure7a1.eps}} & {\includegraphics[angle=0,scale=0.46]{figure7a2.eps}} \\ {\includegraphics[angle=0,scale=0.46]{figure7b1.eps}} & {\includegraphics[angle=0,scale=0.46]{figure7b2.eps}} \\ {\includegraphics[angle=0,scale=0.46]{figure7c1.eps}} & {\includegraphics[angle=0,scale=0.46]{figure7c2.eps}} \\ \end{tabular} \caption{Left panels: maps of aliphatic hydrocarbon column density in galactic coordinates. Colour bars and contours indicate the column density ($\times$10$^{18}$ cm$^{-2}$). Right panels: maps of the aliphatic hydrocarbon abundance relative to hydrogen (assuming A$_{V}$ is invariant in each field). Colour bars and contours indicate the aliphatic hydrocarbon abundance levels (ppm). Field A, Field B and Field C are presented in the upper, middle and lower panels, respectively.} \label{fig6} \end{center} \end{figure*} \section{Results} \label{Section5} Together with the previous results for Field A, our study of Field B and Field C provides the broadest perspective to date on the amount and distribution of solid phase hydrocarbons. We present the outcomes of this work below. \subsection{Aliphatic Hydrocarbon Column Densities} We calculated aliphatic hydrocarbon column densities for Fields B \& C based on the 3.4\,$\mu$m optical depth values in Table~\ref{tab:4} and Table~\ref{tab:5} by applying Eq.~\ref{eq:1} and using the aliphatic hydrocarbon absorption coefficient of \textit{A} = $4.7\times10^{-18}$\,cm\,group$^{-1}$ (as determined in Paper 1).
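As a numerical check on the values quoted below, the column density conversion can be sketched in a few lines of Python, assuming Eq.~\ref{eq:1} takes the standard form $N = \tau \, \Delta\tilde{\nu} / A$ with the 62\,cm$^{-1}$ narrow-band filter width adopted in this work (the function name is ours, for illustration only):

```python
# Sketch of the column density estimate N = tau * delta_nu / A, using the
# 62 cm^-1 narrow-band filter width in place of the feature's equivalent
# width (constants from the text; function name is illustrative).
A_COEFF = 4.7e-18        # aliphatic hydrocarbon absorption coefficient [cm group^-1]
FILTER_WIDTH = 62.0      # 3.4 um narrow-band filter width [cm^-1]

def column_density(tau, width=FILTER_WIDTH, a=A_COEFF):
    """Aliphatic hydrocarbon column density [cm^-2] from the 3.4 um optical depth."""
    return tau * width / a

# Mean optical depths for Fields A, B and C (Table 7)
for field, tau in [("A", 0.20), ("B", 0.36), ("C", 0.27)]:
    print(f"Field {field}: N = {column_density(tau):.2e} cm^-2")
```

With the mean optical depths of 0.20, 0.36 and 0.27, this reproduces the average column densities of $\sim$2.6, $\sim$4.7 and $\sim$3.6 $\times 10^{18}$\,cm$^{-2}$ quoted for Fields A, B and C, to within the rounding of the mean optical depths.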
Since the 3.4\,$\mu$m optical depths were obtained by spectrophotometry, we used the 3.4\,$\mu$m narrow-band filter width of 62\,cm$^{-1}$, instead of the equivalent width of the 3.4\,$\mu$m absorption feature (108.5\,cm$^{-1}$, see Paper 1), to calculate the aliphatic hydrocarbon column densities. Therefore, the resultant aliphatic hydrocarbon column densities are lower limits to those that would be obtained spectroscopically. The resultant maps of aliphatic hydrocarbon column densities are shown in galactic coordinates for Fields B and C in Figure~\ref{fig6}, in comparison with the aliphatic hydrocarbon column density map of Field A (Paper 2). We found that there is a rise in hydrocarbon column density towards the Galactic plane in Field B, similar to that in Field A. \subsection{Relation with ISD Distribution} For the Galactic Centre fields, to investigate whether there are similarities between the distribution of the aliphatic hydrocarbons in the ISD and that of the total dust, we examined maps of the extinction and/or reddening in the visible and near infrared regions. However, a map with suitable resolution and coverage showing the extinction and/or reddening through the line of sight of the GC ($\sim$8 kpc) has not been found in the literature \citep{Sumi2004, Marshall2006, Schodel2007, Schodel2010, Nogueras-Lara2018, Green2019, Chen2019}, as the sightline is obscured by intervening opaque clouds at visible and near infrared wavelengths and there is a lack of photometric data for suitable background sources. There are many luminous background sources in the near infrared region; however, the majority of them are cool red giants. The GC is totally obscured in the ultraviolet and visible wavelength regions. Thus the effects of high extinction and low temperature on IR photometry cannot be easily separated, and highly obscured blue, young massive stars can hardly be distinguished from less obscured red, old low-mass stars \citep{Geballe2010}.
We tried to probe the dust distribution using maps prepared from the thermal emission of ISD in the far infrared region \citep{Schlegel1998, Peek2010, Schlafly2011, PlanckCollaboration2014}, where the extinction can be sufficiently low to enable us to investigate the ISM through the line of sight of the GC. However, we were also unable to find a map with comparable resolution and coverage to analyse the interstellar dust distribution at longer wavelengths. Finally, we decided to analyse the reddening by using the Two Micron All Sky Survey Catalog \citep{Skrutskie2006} and Spitzer GLIMPSE Catalog \citep{Ramirez2007} data sets used in \citealt{Geballe2010}, which were kindly made available by the authors for our use. We prepared colour excess maps that cover the GC fields at a resolution that enables a meaningful comparison. We used photometric brightness measurements at 2MASS J-band (1.25\,$\mu$m), H-band (1.65\,$\mu$m), K-band (2.17\,$\mu$m) and Spitzer IRAC Ch1-band (3.6\,$\mu$m), Ch2-band (4.5\,$\mu$m), Ch3-band (5.8\,$\mu$m), Ch4-band (8\,$\mu$m) and obtained 2MASS J$-$K, 2MASS/Spitzer K$-$L (Ch1) and Spitzer IRAC Ch1$-$IRAC Ch2 colour excesses of all bright L--band sources (m$_{L}$ \textless 8$^{m}$) in the GC region. We plotted the colour excess maps and compared them with the aliphatic hydrocarbon column density maps to check for similar trends. We present the 2MASS J$-$K, 2MASS/Spitzer K$-$L (Ch1) and Spitzer L$-$M (Ch2) colour excess maps in Figure \ref{fig7} (note that the colour excess levels of the maps are different: highest for J$-$K and lowest for L$-$M). We also indicate the locations of the sources by black dots whose sizes are proportional to their fluxes. However, we find no relation between the fluxes and the colour excesses. We note that in the colour excess maps presented here, there may be a bias due to the extinction and effective temperature degeneracy mentioned above.
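The colour excesses used for these maps follow the standard definition, observed colour minus intrinsic colour; a minimal sketch is given below (the intrinsic colour used here is an illustrative placeholder, not a value adopted in this work):

```python
# Illustrative colour-excess computation, of the kind underlying Figure 7:
# E(B1 - B2) = (m_B1 - m_B2)_observed - (B1 - B2)_intrinsic.
def colour_excess(m_blue, m_red, intrinsic_colour):
    """Colour excess in magnitudes from two apparent magnitudes."""
    return (m_blue - m_red) - intrinsic_colour

# e.g. a hypothetical source with J = 15.2, K = 9.1 and an assumed
# intrinsic J-K colour of 1.0 mag (placeholder values)
e_jk = colour_excess(15.2, 9.1, 1.0)
print(f"E(J-K) = {e_jk:.1f} mag")
```

The same two-band subtraction applies to each of the J$-$K, K$-$L and L$-$M pairs used above; only the intrinsic colour appropriate to the stellar type changes.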
There are different methods to overcome this degeneracy \citep{Indebetouw2005, Marshall2006, Gonzalez2012, Hanson2014}. However, an extensive study of extinction through the GC sightlines is beyond the scope of this study. There is also a bias and uncertainty due to circumstellar dust around the mass-losing asymptotic giant and ascending giant branch stars: they appear redder, and this reddening can be mistaken for ISD \citep{Marshall2006, Nogueras-Lara2018}. The matching trends between the colour excess maps presented in Figure \ref{fig7} show that they adequately reflect the distribution of interstellar matter. However, we could not detect any matching trends between the colour excess maps and the aliphatic hydrocarbon column density maps. \begin{figure*} \begin{center} \begin{tabular}{c} {\includegraphics[angle=0,scale=0.46]{figure8a.eps}} \\ {\includegraphics[angle=0,scale=0.46]{figure8b.eps}} \\ {\includegraphics[angle=0,scale=0.46]{figure8c.eps}} \\ \end{tabular} \caption{Colour excess maps: 2MASS J$-$K (upper panel), 2MASS/Spitzer K$-$L (Ch1) (middle panel), and Spitzer L$-$M (Ch2) (bottom panel). Field A is on the left and Field B is on the right. Colour bars and contours indicate the colour excess levels in magnitude. } \label{fig7} \end{center} \end{figure*} \subsection{Aliphatic Hydrocarbon Abundances} We converted our results into aliphatic hydrocarbon abundance ratios, in order to compare them with carbon abundances (C/H) reported in the literature. Normalised aliphatic hydrocarbon abundances (ppm) were estimated based on the gas-to-extinction ratio $N(H)/A_{V} = 2.04 \times 10^{21}$\,cm$^{-2}$\,mag$^{-1}$ \citep{Zhu2017}. The average extinction toward the central parsec in the GC is reported to be A$_{Ks}$$\sim$2.5 mag and A$_{V}$$\sim$30 mag \citep{Scoville2003, Nishiyama2008, Fritz2011, Schodel2010}. \cite{RiekeLebofsky1985} found an A$_{V}$ to A$_{K}$ ratio of 9, but more recent studies indicate even higher values of $\sim$29 \citep{Gosling2006}.
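The abundance estimate can be sketched as follows; this is a sketch under the stated assumptions of an invariant $A_{V}$ and the \citealt{Zhu2017} gas-to-extinction ratio (function and constant names are ours, for illustration only):

```python
# Sketch of the abundance estimate: N(H) = 2.04e21 cm^-2 mag^-1 * A_V,
# abundance = N_aliphatic / N(H) in ppm, and the aliphatic fraction of
# the 358 ppm total interstellar carbon (Sofia et al. 2001).
GAS_TO_EXT = 2.04e21      # N(H)/A_V [cm^-2 mag^-1] (Zhu et al. 2017)
C_TOTAL_PPM = 358.0       # total ISM carbon abundance [ppm] (Sofia et al. 2001)

def abundance_ppm(n_aliphatic, a_v):
    """Aliphatic hydrocarbon abundance relative to hydrogen, in ppm."""
    return n_aliphatic / (GAS_TO_EXT * a_v) * 1e6

# Average column densities (cm^-2) and assumed A_V for Fields A, B and C
for field, n_col, a_v in [("A", 2.64e18, 30), ("B", 4.78e18, 30), ("C", 3.56e18, 20)]:
    ppm = abundance_ppm(n_col, a_v)
    print(f"Field {field}: {ppm:.0f} ppm, {100 * ppm / C_TOTAL_PPM:.0f}% of total C")
```

Run on the average column densities with $A_{V}=30$ mag for Fields A and B and $A_{V}=20$ mag for Field C, this reproduces the 43, 78 and 87 ppm abundances and the 12\%, 22\% and 24\% aliphatic carbon fractions quoted in this work.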
Extinction towards the GC also varies on scales of arcseconds \citep{Scoville2003, Schodel2007, Schodel2010}. However, the extinction maps in the literature have very different resolutions or coverage or both, which limits their use in obtaining the gas density distribution in each field. Therefore, we assumed the extinction to be invariant when estimating normalised aliphatic hydrocarbon abundances. For Field A, we previously assumed A$_{V}$$\sim$30 mag (see Paper 1 and Paper 2). In this study, we followed the same approximation for Field B as it is in the GC region. Since there is a debate on the extinction towards Field C, to estimate the lowest abundance level we assumed A$_{V}$$\sim$20 mag, which is the highest value for the DISM reported by \cite{Godard2012} and \cite{Vig2017}. The resultant aliphatic hydrocarbon abundance levels (ppm), estimated based on an invariant A$_{V}$, are shown in galactic coordinates for Fields C and B in Figure~\ref{fig6} in comparison with Field A (for details see Paper 2). \subsection{Summary} For completeness, we summarise and compare the results for Fields A, B \& C in Table \ref{tab:7}, which lists average, minimum and maximum 3.4\,$\mu$m optical depth values, aliphatic hydrocarbon column densities and abundances for each field, together with an estimate of the fraction of the aliphatic carbon in the ISM that lies in the ISD\@. \begin{table*} \begin{center} \footnotesize \caption{Minimum, maximum and average 3.4\,$\mu$m optical depth ($\tau_{3.4\,\mu m}$) values, with the corresponding aliphatic hydrocarbon column densities ($\times$\,10$^{18}$ cm$^{-2}$) and abundances (ppm) for Fields A, B and C\@. For the optical depth values the standard deviations are given. We also note the number of sources measured in each field in the footnote to this Table.} \label{tab:7} \centering \begin{tabular}{| c | c | c | c | c | } \hline && Field A & Field B & Field C \\ \hline &Min.
& 0.07 & 0.18 & 0.10 \\ Optical depth ($\tau_{3.4\,\mu m}$) & Max. & 0.43 & 0.62 & 0.44 \\ & Average & 0.20 & 0.36 & 0.27 \\ & Sigma & 0.06 & 0.09 & 0.10 \\ \hline & Min. & 0.92 & 2.40 & 1.36 \\ Column density ($\times$\,10$^{18}$ cm$^{-2}$) & Max. & 5.67 & 7.16 & 5.83 \\ & Average & 2.64 & 4.78 & 3.56 \\ \hline & Min. & 15 & 39 & 33 \\ Abundance (ppm) & Max. & 93 & 117 & 143 \\ & Average & 43 & 78 & 87 \\ \hline Aliphatic C ($\%$) && 12 & 22 & 24 \\ \hline \end{tabular} \end{center} \begin{footnotesize} Note that the numbers of sources measured in Fields A, B and C are 200, 180 and 15, respectively. \end{footnotesize} \end{table*} For Field A, the optical depth at 3.4\,$\mu$m ranges from 0.07 to 0.43, the column density of aliphatic hydrocarbon rises from $\sim$$9.2\times10^{17}$\,cm$^{-2}$ to $\sim$$5.7\times10^{18}$\,cm$^{-2}$, and the corresponding abundances with respect to hydrogen range from $\sim$15\,ppm to $\sim$93\,ppm. The mean value of $\tau_{3.4\,\mu m}$ $\sim 0.2$ corresponds to a typical column density of aliphatic hydrocarbon of $2.6\times10^{18}$\,cm$^{-2}$ and 43\,ppm aliphatic hydrocarbon abundance. For Field B, the optical depth at 3.4\,$\mu$m ranges from 0.18 to 0.62, the column density of aliphatic hydrocarbon rises from $\sim$$2.4\times10^{18}$\,cm$^{-2}$ to $\sim$$7.2\times10^{18}$\,cm$^{-2}$, and the corresponding abundances with respect to hydrogen range from $\sim$39\,ppm to $\sim$117\,ppm. The mean value of $\tau_{3.4\,\mu m}$ $\sim$ 0.36 corresponds to a typical column density of aliphatic hydrocarbon of $4.8\times10^{18}$\,cm$^{-2}$ and 78\,ppm aliphatic hydrocarbon abundance. For Field C, the optical depth at 3.4\,$\mu$m ranges from 0.10 to 0.44, the column density of aliphatic hydrocarbon rises from $\sim$$1.4\times10^{18}$\,cm$^{-2}$ to $\sim$$5.8\times10^{18}$\,cm$^{-2}$, and the corresponding abundance with respect to hydrogen ranges from $\sim$33\,ppm to $\sim$143\,ppm.
The mean value of $\tau_{3.4\,\mu m}$ $\sim$ 0.27 corresponds to a typical column density of aliphatic hydrocarbon of $3.6\times10^{18}$\,cm$^{-2}$ and 87\,ppm aliphatic hydrocarbon abundance. The final line of Table \ref{tab:7} lists the percentage of the total carbon abundance in aliphatic form under the assumption that the total carbon abundance for the ISM is 358\,ppm \citep{Sofia2001}. We obtain average aliphatic hydrocarbon abundances of 43\,ppm, 78\,ppm and 87\,ppm for Field A, Field B and Field C, respectively. These solid-phase amounts, which are in addition to the gas-phase abundances, correspond to 12\%, 22\% and 24\% of the cosmic carbon abundance. We conclude that at least 10--20\% of the carbon along these sightlines in the Galactic plane lies in aliphatic form. \section{Discussion and Conclusions} \label{Section6} With this work we have shown that, for three fields in the Galactic disk, a significant fraction of the cosmic carbon is in aliphatic hydrocarbon form in the ISD. We have also shown that the spectrophotometric method is applicable to other fields in the Galactic plane. We showed in Paper 2 that the optical depth at 3.4\,$\mu$m for a field in the Galactic Centre (Field A) can be reliably measured using flux measurements made through a series of narrow band filters that are able to sample the spectrum. We have applied this method to two new fields lying in the Galactic plane: $\sim 0.2^{\circ}$ to the East (Field B) and $\sim 35^{\circ}$ to the West (Field C). In all cases the optical depths could be measured and were found to lie in the range $\sim 0.1 - 0.6$. We found a larger optical depth along the line of sight of Field B compared to Field A, based on the calibration fluxes we obtained from the spectroscopic measurements of a source (GCIRS 7) in Field A. However, caution needs to be taken since the sources in Field B lack spectroscopic information. We also found that, along the line of sight of Field C (IRAS 18511+0146 cluster), there is a larger optical depth than towards Field A.
This result was checked by cross-calibration tests, using the spectroscopic fluxes of Field C sources from the literature to calibrate the Field A data, which yielded consistent results. Complete consistency cannot be expected, since the spectroscopic studies involve different steps that we were not able to implement in this study, such as polynomial continuum fitting (which requires higher resolution spectra) or normalisation of the spectra with the star's blackbody curve (which requires measuring the temperatures of all sources in the field of view). Since we applied linear continua to low-resolution spectra, which are smoother than the real spectrum, our results may yield minimum values for the 3.4$\,\mu$m absorption optical depths. The 3.4$\,\mu$m optical depths are measured simultaneously for each field, so the relative variability of column densities within each field is more reliable than the measurements reported by different spectroscopic studies in the literature (for more details see Paper 2). The other advantage of the method is that it allows us to trace abundances across a large FoV, compared to single point or long-slit spectroscopy, which cannot provide spatial information across the FoV. Although \textit{integral field spectroscopy} (IFS) has recently become an important alternative to long-slit spectroscopy, narrow-band imaging can still be preferred for obtaining spatially resolved spectral data over larger FoVs. Using the spectrophotometric method, we measured the optical depth across fields covering 163 arcsec $\times$ 163 arcsec for Fields A and B, and 137 arcsec $\times$ 137 arcsec for Field C. While the range between the minimum and maximum 3.4\,$\mu$m optical depths may appear relatively large (for Field A from 0.07 to 0.43, for Field B from 0.18 to 0.62 and for Field C from 0.10 to 0.44), the majority of sources lie within 0.1 of the mean 3.4\,$\mu$m optical depth measured in their field.
For both Fields A and B there is a mild gradient in the 3.4$\,\mu$m optical depth running in the same direction across the 2 arcmin field, rising by about 50\% towards $b=0^{\circ}$. The pattern is similar between these fields. There are too few sources measured in Field C, however, to draw any conclusions as to whether a gradient exists across this field. As we showed by dividing the data into quartiles for the Galactic Centre fields, where the number of sources is sufficiently large, the general optical depth trends in each quartile are found to be consistent. Although interpolation of the data caused some structural variations in the quartile maps, owing to the locations of the sources in use, the 3.4\,$\mu$m optical depths are found to be reasonably uniform across the fields, without large source-to-source variations. However, source-to-source variations might arise where there is large uncertainty in the spectrophotometric measurements of individual sources, due to low S/N and uncertainties in continuum determination \citep{Chiar2002, Moultaka2004, Godard2012}, or where intrinsic spectral features are present along some lines of sight. There might also be a bias in the measurements, since some of the GC sources are known to have thick circumstellar dust shells \citep{Roche1985, Tanner2005, Viehmann2005, Pott2008, Moultaka2009} and there are also many sources that are problematic to classify \citep{Buchholz2009, Hanson2014, Dong2017, Nogueras-Lara2018}. However, the uniformity in our measurements suggests that the aliphatic hydrocarbon absorption is dominated by material distributed along the sightline in the DISM rather than local to each source, and so our method measures large-scale properties of the foreground molecular medium. We also sought to compare the aliphatic hydrocarbon optical depth maps obtained in this study with the maps in the literature.
\cite{Moultaka2005} and \cite{Moultaka2015} provided maps of the optical depth of the 3.4\,$\mu$m absorption feature for the GC sightline. However, the coverage of these 3.4\,$\mu$m absorption maps is considerably smaller than that of the maps obtained in this study, so a meaningful comparison is not possible. They also argued that there is a residual 3.4\,$\mu$m absorption produced by the local medium of the central parsec of the GC. Through the line of sight of the GC, different components of the ISM (diffuse and dense ISM, circumnuclear ring, mini spiral etc.) are superimposed \citep{Muzic2007, Moultaka2009, Ferriere2012, Sale2014, Yusef-Zadeh2017, Moultaka2019, Murchikova2019, Geballe2021}. However, the 3.4\,$\mu$m hydrocarbon absorption is assumed to arise predominantly in the DISM \citep{Chiar2002}. Of course, hydrocarbons can be found in all ISM components, but their observable properties change with dust evolution (i.e., ISD could be mixed with or covered by ices of the volatiles in the dense ISM or in molecular clouds) \citep{Jones2012a, Jones2012b, Jones2012c, Chiar2013, Jones2019, Potapov2021}. The 3.4$\,\mu$m absorption feature can also be masked by, or blended with, other features arising from the different components of the ISM, such as the long wavelength wing of the broad 3.1$\,\mu$m H$_{2}$O ice absorption band ([I02], [C02], [M04] and [G12]). We also tried to explore whether there is a correlation between the aliphatic hydrocarbon column density and the ISD by comparing our maps with extinction and reddening maps, although a relation between the two cannot necessarily be expected. Extinction arises from the combined effect of absorption and scattering of light and can be used to probe the total column of gas, ice and dust together. Although extinction curves are shaped by the optical properties of ISD \citep{Draine1984, Cardelli1989, Fitzpatrick1999}, they are not sufficient to reveal the chemical composition of the ISD.
Since our measurement is focused on the 3.4\,$\mu$m feature, the resultant maps reflect the distribution of a specific chemical group, the aliphatic hydrocarbons, in the ISD. Importantly, we showed that this distribution is independent of the overall ISD distribution. In addition to the spectrophotometric measurements of the 3.4\,$\mu$m feature, spectrophotometric measurements of the 9.8\,$\mu$m feature have the potential to reveal the siliceous dust distribution, but this has not been implemented yet. Spectrophotometric measurements of the 3.4\,$\mu$m aliphatic hydrocarbon and 9.8\,$\mu$m silicate absorption features have potential for future applications, e.g. with the James Webb Space Telescope (JWST), as discussed in \citealt{Gordon2019}. A comparison of the carbonaceous and siliceous dust maps can help to reveal the abundance distribution of the major dust-forming elements (i.e. C, O, Si) in the ISM \citep{Kim1996, Cardelli1996, Henning2004, WangLi2015, Zuo2021a, Zuo2021b, HensleyDrain2021, Gordon2021, DraineHensley2021}. Gas-phase abundances are often assumed to be invariant with respect to the local interstellar conditions, although recent studies imply that there are variations \citep{DeCiaNature2021, Zuo2021a, Zuo2021b}. \cite{DeCiaNature2021} argue for the presence of line-of-sight inhomogeneities in elemental abundances, and measured column densities could be affected by the ISM being composed of individual clouds with very different depletion strengths and/or abundances. In this study, the aliphatic hydrocarbon optical depth is found to be a little higher for Field B than for Field A. While this variation occurs within $0.2^{\circ}$ in the centre of the Galaxy, the optical depth in Field C in the Galactic plane is similar to that of the Galactic Centre. While three fields is, of course, a limited number from which to draw conclusions, this is consistent with reasonably constant levels of absorption at 3.4\,$\mu$m across the Galactic plane.
The resultant aliphatic hydrocarbon column densities have been obtained independently of any assumed extinction values. For the three fields, the average aliphatic hydrocarbon column density was found to be several $\rm \times 10^{18}\,cm^{-2}$. Accordingly, we obtained aliphatic hydrocarbon abundances of several tens of ppm for these sightlines. These statements assume a relatively constant extinction and total carbon abundance, and so caution needs to be applied in interpreting them. In addition, there might possibly be larger amounts of aliphatic hydrocarbons in the ISD, as we used the 3.4\,$\mu$m narrow-band filter width of 62\,cm$^{-1}$ instead of the $3.4\,\mu$m aliphatic hydrocarbon absorption feature equivalent width of 108.5\,cm$^{-1}$ (Paper 1). Therefore we conclude that the overall fractional abundance of carbon in aliphatic form is at least 10--20\% in the ISM\@. We also note that the cosmic carbon abundances are still under debate, as different studies are not yet fully consistent \citep{DraineHensley2021, Zuo2021a, Zuo2021b}. Using spectroscopic measurements along different Galactic sightlines, \cite{Parvathi2012} found higher carbon abundance levels (i.e. $\sim$$464$\,ppm towards HD206773) than the cosmic carbon abundance estimates. However, \cite{DeCiaNature2021} implied that the interstellar abundances of refractory elements in the local ISM may be $\sim$55\% of solar, which puts new limitations on elemental abundances in the ISD. Importantly, the average aliphatic hydrocarbon abundances found in this study do not exceed the solid phase carbon abundance levels obtained by recent studies (i.e.
\citealt{Parvathi2012, Jones2013, Mishra2017, Zuo2021b}) and imply that some of the solid carbon is available in aromatic, olefinic and other forms in the ISD to produce all observable features \citep{Jones2013}, in particular the 2175\,\AA\, bump \citep{Stecher1965}, which is the strongest extinction feature produced by electronic transitions in carbonaceous material \citep{Mathis1977, Kwok2009, Li2019, Dubosq2020, Xing2020}. Supported by laboratory studies, which allow us to estimate the aliphatic hydrocarbon / total carbon ratios in the ISD, the $3.4\,\mu$m aliphatic hydrocarbon maps can play an important role in solving the carbon crisis and in understanding the interstellar carbonaceous material cycle and the chemical evolution of the Galaxy. The interstellar matter cycle provides the raw material for the formation of stars and planets \citep{Oberg2021}. Stars and planets are formed deep inside dense clouds, where the siliceous and carbonaceous ingredients of dust are covered by ices \citep{Jones2016a, Jones2016b, Oberg2021, Potapov2021}. The gravitational collapse of an interstellar cloud leads to the formation of a protoplanetary disk \citep{Andrews2020, Oberg2021}, and dust and ice play a role in the formation of planetesimals (planets, asteroids, and comets) as they collide and stick \citep{Weidenschilling1980}. Assuming the carbon-to-silicon abundance ratio of the solar photosphere and the ISM is similar (C/Si $\sim$10) \citep{Anderson2017, Asplund2021}, we can estimate that there would be a significant amount of aliphatic hydrocarbon available during the planetesimal formation stage. Therefore, besides the role of siliceous / mineral dust and ice (i.e. 
\citealt{SalterFraser2009, Fraser2010, HillFraser2015, Demirci2019, Musiolik2021}), the possible role of organic material and aliphatic hydrocarbons in dust aggregation and pebble formation \citep{DominikTielens1997, Kazuaki2019, Bischoff2020, Anders2021}, which leads to planetesimal formation, needs to be further investigated using primordial material in planetesimal samples and analogue materials \citep{schmidtgunay2019} in laboratory experiments. Hydrocarbon groups in the ISD are useful for tracing the reservoir of organic material, and thus of prebiotic molecules, in the ISM. Some of the prebiotic molecules have likely been preserved in carbonaceous dust in the planetesimal formation regions \citep{Ehrenfreund2010, Kwok2016, vanDishoeck2020, Ehrenfreund2021, Oberg2021}. The chemical compositions of planet-forming disks determine their hospitality to life \citep{Bergin2015, Oberg2021}. With the advent of new techniques and telescopes, we are able to observe planet-forming disks and galaxies with great resolution. Future applications of our method will enhance our understanding of the distribution of carbonaceous dust, organic and prebiotic material in space. \section*{Acknowledgments} We would like to thank Dr. Tom Geballe and Dr. Takeshi Oka for their support and for supplying data. BG would like to thank The Scientific and Technological Research Council of Turkey (T\"{U}B\.{I}TAK) for their support of this work through the 2214/A International Research Fellowship Programme. TWS is supported by the Australian Research Council (CE170100026 and DP190103151). The University of New South Wales (UNSW) seeded this work through the award of a Faculty interdisciplinary grant. We also wish to thank the staff at the UKIRT telescope for their help in gathering the data used for this paper through their service programme, in particular Watson Varricatt and Tom Kerr, who undertook the observations. 
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. \section*{Data Availability} The data used in this study will be made available by the corresponding authors upon request. \bibliographystyle{mn2e}
\section{Introduction: towards superconductivity with orbital degrees of freedom} \label{sec:intro} The significance of the discovery of high-$T_c$ superconductivity by Bednorz and M\"uller \cite{Bed86} for recent progress in the quantum many-body theory cannot be overestimated --- it triggered a huge amount of innovative research on quantum materials and unconventional superconductors, both in experiment and in theory. In spite of reaching a qualitative understanding of the nature of the superconducting (SC) state in different transition metal compounds, several open problems remain. Some are related to cuprates, such as the astonishing complexity of the phase diagram and the physical origin of the temperature $T^*$ observed well above $T_c$ itself \cite{Lee06,Oga08,Voj09,Bia12,Kei15}; others are more general and include questions about the origin of pairing \cite{Pag10,Hir11,Sca12,Si16,Fer17}, optimal conditions for the onset of the SC state \cite{Cav94,Kas98,Ima98,Mac03}, variation under pressure \cite{Krz16}, and the actual orbital symmetry at the Fermi surface \cite{Gra09}. In this short review we concentrate on the last question. After the discovery of high-$T_c$ superconductivity in cuprates it was believed that lifting the degeneracy of the $e_g$ orbitals was important to obtain an SC state with a high value of $T_c$ \cite{Oht91,Pav01,Sak10}. Indeed, in cuprates this degeneracy is typically strongly lifted, and the relevance of the Jahn-Teller effect is controversial. It is under discussion whether the effective model for cuprates has nondegenerate orbitals and may be represented by an extended Hubbard model with on-site Coulomb interaction and further-neighbor hopping \cite{Fei96}. 
Yet, this model is derived for the case of a large splitting of the $e_g$ orbitals. Orbital degrees of freedom might nevertheless play an important role in the SC instability, and the multiorbital Hubbard model is a standard model for all high-$T_c$ superconductors in general, where the dome of $T_c$ occurs by driving the chemical potential into the proximity of a Lifshitz transition \cite{Bia18}. We shall not address here the role played by the electron-phonon coupling, which is expected to contribute in cuprates and is the driving force of the recently discovered superconductivity in H$_3$S \cite{Dro15,Cap17}. In particular, since degenerate orbital degrees of freedom necessarily come along with strong Jahn-Teller coupling \cite{Mul99,Kel08}, the electron-phonon coupling seems to be essential for SC instabilities in all systems with orbital degeneracy. Nevertheless, to keep this review focused, we shall limit ourselves to the consequences of the orbital degeneracy for models containing solely electronic degrees of freedom --- thus leaving the interplay of the electron-phonon coupling and the orbital degeneracy in the high-$T_c$ superconductors for another work. The outline of this paper is as follows. One early idea in the theory of cuprate superconductivity was that orbital excitations could contribute to the pairing mechanism, as discussed in section 2.1, but this was not supported by more recent developments. The usual situation is that the pairing occurs for holes in a single molecular orbital of $x^2-y^2$ symmetry, see section 2.2. An interesting question is then whether allowing for the presence of holes in both $e_g$ orbitals would lead to enhanced SC instabilities. This idea has its roots in the Jahn-Teller physics in cuprates \cite{Mul99,Kel08}, as well as in the observation that the propagation of a hole in a Mott (or charge-transfer) insulator is much richer when both $e_g$ orbitals can participate \cite{Zaa93}. 
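For concreteness, the degenerate two-orbital ($e_g$) Hubbard model invoked here is usually written in the standard Hubbard-Kanamori form. The following is a generic sketch in our own notation; the parameters are symbols, not values taken from the cited works:
\begin{equation}
\begin{split}
H = &\sum_{ij,\alpha\beta,\sigma} t^{\alpha\beta}_{ij}\,
c^{\dagger}_{i\alpha\sigma}c^{}_{j\beta\sigma}
+ U\sum_{i\alpha} n_{i\alpha\uparrow}n_{i\alpha\downarrow}
+ \Big(U-\tfrac{5}{2}J_H\Big)\sum_{i,\alpha<\beta} n_{i\alpha}n_{i\beta} \\
&- 2J_H\sum_{i,\alpha<\beta}\vec{S}_{i\alpha}\!\cdot\!\vec{S}_{i\beta}
+ J_H\sum_{i,\alpha\neq\beta}
c^{\dagger}_{i\alpha\uparrow}c^{\dagger}_{i\alpha\downarrow}
c^{}_{i\beta\downarrow}c^{}_{i\beta\uparrow},
\end{split}
\end{equation}
with $\alpha,\beta\in\{x^2-y^2,3z^2-r^2\}$, Hund's exchange $J_H$, and the rotationally invariant choice $U'=U-2J_H$ for the interorbital Coulomb element; a crystal-field term $\propto\Delta\sum_i(n_{i,x^2-y^2}-n_{i,3z^2-r^2})$ lifts the orbital degeneracy when present, and the single-band limit is recovered for large $\Delta$.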
Partial filling of both $e_g$ orbitals could be realized in the highly overdoped CuO$_2$ monolayer grown on Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (Bi2212) \cite{Zho16,Jia18}. We remark that the symmetry of the SC phase in Bi2212 has been extensively discussed in the literature \mbox{(see, e.g. \cite{Li99,Mis02,Hoo03,Lat04,Klemm,Zhu19})}. Furthermore, we emphasize that two-dimensional (2D) systems are special, and a possible SC instability was predicted for a layered geometry of NiO$_2$ planes in LaNiO$_3$/LaMO$_3$ superlattices \cite{Cha08}. We follow this idea and briefly discuss the remarkable similarity between overdoped cuprates and nickelates in sections 3.1 and 3.2. Furthermore, superconductivity occurs also in metallic systems with $t_{2g}$ degrees of freedom, i.e., in the planar ruthenate Sr$_2$RuO$_4$ (section 3.3) and in Fe-based superconductors (section 3.4). In the latter systems orbital fluctuations are expected to contribute \cite{Hir11,Kno11}. The former system is of particular interest, as there the spin-orbit interaction entangles the spin and orbital degrees of freedom and the orbital states become mixed \cite{Vee14}. The planar iridate Sr$_2$IrO$_4$, with an even stronger spin-orbit coupling, shows much of the cuprate phenomenology \cite{Ber19}, but no superconductivity has been reported so far. This review is summarized in section 4. \section{The role of orbitals in superconducting cuprates} \label{sec:exci} \subsection{Earlier theoretical proposals} \label{sec:specu} Following the idea of Jahn-Teller physics in cuprates \cite{Mul99,Kel08}, the question arises of how many orbital symmetries should be included in a minimal realistic model for cuprates. Already in the early years of high-$T_c$ research, the idea of going beyond the single-band picture by including O($2p$) orbitals in the three-band model emerged \cite{Var87,Eme87,Ole87}. Whether or not the oxygen orbitals can be fully integrated out is still not fully resolved \cite{Arr09,Han10,Ebr14}. 
Leaving this issue open, we shall present here an overview of the role played by the copper orbital degrees of freedom. It was discussed early on that multiple orbitals contribute to the physical properties of YBa$_2$Cu$_3$O$_{7-x}$ \cite{Biapc,Bia88,Nuc95} and La$_{2-x}$Sr$_x$CuO$_4$ \cite{Pompa,Che92}. The coupling to the lattice was recognized as being sensitive to the orbital content of the wave functions --- multiorbital components were deduced from uniaxial and hydrostatic pressure effects on the value of $T_c$ \cite{Sak12}, and from the effect of rhombic distortion on the polarized x-ray absorption spectra in high-$T_c$ superconductors \cite{Sei90}. Further research on stripes and electron-lattice interactions suggested the presence of a pseudo-Jahn-Teller effect in cuprates \cite{Bia98,Bia00,Bia00jp,Bia01}. These phenomena follow in a natural way from orbital pseudo-degeneracy \cite{Ber13}, and these ideas were further developed in \cite{Mul00,BuHol,Mul14,Zho95,Ber97}. Support for the relevance of the electron-phonon interaction comes from the observation of the isotope effect on the pseudogap temperature $T^*$ \cite{Lan99}, from the strong renormalization of certain phonons by doped holes \cite{McQ99}, and from recently observed phonon anomalies in charge density wave states in \mbox{cuprates \cite{Tacon,Mia18}}. Already shortly after the discovery of high-$T_c$ cuprates, it was suggested by Weber \cite{Web88} that an orbital excitation could be responsible for the pairing. In a typical copper oxide the nearest neighbor Coulomb interaction between holes in the oxygen $p$ orbitals and the copper $3z^2-r^2$ orbital is substantially smaller than that between holes in the oxygen $p$ orbitals and the Cu $x^2-y^2$ orbital --- such an aspherical Coulomb interaction is estimated to be of the order of 0.3-0.5 eV in the cuprates \cite{Web88}. 
Consequently, an excitonic pairing mechanism was proposed: two oxygen holes can gain energy provided that the first one excites the Cu $d$ hole from the $x^2-y^2$ to the $3z^2-r^2$ orbital and the second one follows, forming a pair. This mechanism was later improved by Jarrell, Cox, and others \cite{Jar88,Cox89} by including the superexchange processes between the nearest neighbor oxygen $p$ orbitals and both Cu $e_g$ orbitals. This latter mechanism was estimated to roughly triple the strength of the coupling between the orbital exciton and the holes on oxygen. While the above proposal is appealing, unfortunately it is not very realistic: the crucial role played by the copper spins is completely neglected, and it is implicitly assumed that the doped holes go to the $\pi$-bonding oxygen orbitals. A more realistic two-orbital $e_g$ model was proposed in \cite{Zaa93}, and the onset of superconductivity in various versions of the two-orbital Hubbard models was studied in more detail, e.g. in \cite{Fei92,Bud94,Buc95,Sak14}. Nevertheless, in order to verify whether these models could be relevant to cuprate superconductivity, it is crucial to include the size of the crystal-field splitting between the Cu $3z^2-r^2$ and $x^2-y^2$ orbitals --- as discussed in the next subsection. \subsection{Orbital excitations in cuprates} \label{sec:ctm} In a `typical' high-$T_c$ cuprate, the CuO$_6$ octahedra are elongated along the $c$ axis due to apical oxygen displacements, and the degeneracy of the $e_g$ orbitals is lifted. It has been established that both electron-doped and hole-doped copper oxides are strongly correlated electron systems in the vicinity of the metal to charge-transfer insulator transition \cite{Web10}. One also finds a large splitting of the copper $3d$ states, and only a single Cu($3d$)--O($2p$) hybridized band crosses the Fermi surface in doped systems. 
This band has the $x^2-y^2$ symmetry and becomes half-filled in the undoped charge-transfer insulator La$_2$CuO$_4$. The superexchange then stabilizes antiferromagnetic (AF) order \cite{Zaa88} in this and other compounds of the cuprate family \cite{Lee06,Oga08}. Doping generates a Fermi surface originating from a single band made of oxygen $2p$ and copper $3d$ states for small hole doping $x<0.3$, as discussed by Zhang and Rice \cite{Zha88}, see Fig. 1. In this regime a $d$-wave SC phase is found, and a high value of $T_c$ is obtained when the orbital splitting is large \cite{Oht91,Pav01,Sak10}. \begin{figure*}[t!] \centering \includegraphics[width=14.5cm]{fig1} \caption{Schematic phase diagram for cuprates as a function of hole doping $x$, showing the route from the single-band SC phase with holes in $x^2-y^2$-type molecular orbitals, realized in bulk cuprates (left), to the two-orbital nodeless SC phase in the hole-rich CuO$_2$/Bi2212 monolayer (right). The insets show the corresponding Fermi surface for the one-orbital (left) and two-orbital $\{x^2-y^2,3z^2-r^2\}$ (right) SC phase. Image is reproduced from \cite{Geb09}; the right inset for the hole-rich CuO$_2$ monolayer is reproduced from \cite{Mai18}.} \label{fig:Cu} \end{figure*} Interestingly, it has taken almost 20 years for both experiment and theory to unequivocally agree on the energies of the (local) orbital excitations on the copper ion of the undoped cuprates ("$d-d$ excitations" \cite{Zaa93}). It was long believed that, while the ground state orbital is of ${x^2-y^2}$ character, the energy of the ${3z^2-r^2}$ orbital excitation is rather low, for instance of the order of $\sim 0.4$ eV --- as suggested by the optical and resonant inelastic x-ray scattering (RIXS) measurements of La$_2$CuO$_4$ \cite{Per93,Ghi04} as well as by optical absorption for Sr$_2$CuO$_2$Cl$_2$ \cite{Per93}. 
The latter result is particularly striking, for the most recent understanding (see Table I and the discussion below) is that the ${3z^2-r^2}$ orbital excitation is not even the lowest-lying orbital excitation in Sr$_2$CuO$_2$Cl$_2$. However, even earlier this result was challenged by Lorenzana and Sawatzky \cite{Lor95}, who suggested a different interpretation of the optical absorption spectra, in which the 0.4 eV feature was assigned to magnetic excitations. Finally, these results did not agree with the theoretical estimates giving a ca. 0.9 eV energy for such an orbital excitation, according to density-functional theory \cite{Sak10}, or with the x-ray absorption spectroscopy results showing a very weak $3z^2-r^2$ character in the ground state of the hole-doped cuprates \cite{Che92}. \begin{table}[b!] \caption{The energies of orbital excitations in various undoped cuprates, as obtained from the RIXS experiment \cite{Mor11} (quantum-chemistry calculations \cite{Hoz11}). Note that a classical magnetic exchange energy $2J\simeq 0.26$ eV is subtracted from all given energy values. Table adapted from \cite{Hoz11}.} \centering \begin{ruledtabular} \begin{tabular}{cccc} \textbf{Cu($3d$) orbital} & \textbf{La$_2$CuO$_4$} & \textbf{Sr$_2$CuO$_2$Cl$_2$} & \textbf{CaCuO$_2$} \\ \hline ${3z^2-r^2}$ & 1.44 (1.37) & 1.71 (1.75) & 2.39 (2.38) \\ ${xy}$ & 1.54 (1.43) & 1.24 (1.16) & 1.38 (1.36) \\ ${xz/yz}$ & 1.86 (1.78) & 1.58 (1.69) & 1.69 (2.02) \\ \end{tabular} \end{ruledtabular} \label{tab} \end{table} The recent years have, however, led to substantial advancements both in the resolution of the RIXS experiments on the cuprates \cite{Ame11} and in \textit{ab initio} quantum chemistry calculations \cite{Hoz11}, and allowed for obtaining results for cuprates \cite{Hoz11,Mor11} in remarkable agreement between theory and experiment, as presented in Table I. 
We emphasize that the above-mentioned low values of the orbital excitation energies are nowhere to be found, as all excitations have energies substantially above 1 eV. Finally, it is only in La$_2$CuO$_4$ that the lowest-energy orbital excitation has a $3z^2-r^2$ character; otherwise this excitation has the {\it highest} energy. In general, a more detailed study of various other copper oxides, \mbox{with / without} apical ligands, suggests that the energy of the $3z^2-r^2$ orbital correlates with the out-of-plane Cu-ligand distance $h$, although this relation is rather complex (cf. Fig.~1 of \cite{Hoz11}). \begin{figure*}[t!] \centering \includegraphics[width=14.5cm]{fig2} \caption{Electronic structure for the LaNiO$_3$/LaAlO$_3$ heterostructure without strain obtained in the LDA: (\textbf{a}) bands of $e_g$ symmetry (black to yellow shading for orbital character), and (\textbf{b}) the corresponding cross section of the Fermi surface with the $k_z=0$ plane. Images are reproduced from \cite{Han09}.} \label{fig:Ni} \end{figure*} This situation is somewhat more subtle in doped cuprates. Again, in the early years of high-$T_c$ research, it was expected that the occupancy of the low-lying orbital $3z^2-r^2$ would increase upon doping \cite{Bia88,Bia89,Nuc89,Kho91}. However, as just discussed, in `most' of the cuprates this orbital turns out to be the highest-lying one in the undoped crystals, so such a scenario now seems no longer relevant. Instead, (typically) the lowest-lying $xy$ orbital either hardens by ca. 50 meV~\cite{Ell15}, or softens by 150 meV~\cite{Fum19} with doping, depending on the compound. Moreover, in both cases the lowest-lying $d-d$ excitation has an energy ca. 1.5 eV higher than the ground state orbital, even in SC samples \cite{Kan19}, in agreement with Table~I. 
The above discussion shows that the orbital excitations in the cuprates have relatively high energies, suggesting that the electrons close to the Fermi surface are clearly of a single $3d$-band character. Surprisingly, however, some important signatures of the so-called \textit{orbital physics} \cite{Kug82,Fei97,Tok00,Ole05,Kha05,Nor08,Karlo,Sch12,Bis15,AMO12,Woh11,Woh13,Brz15,Brz19} are visible there: clear experimental signatures of the collective orbital excitation, the orbiton, have been observed in quasi-one-dimensional (quasi-1D) copper oxides \cite{Sch12,Bis15}. To a large extent this turned out to be possible due to the strong crystal-field splitting of the orbital excitations \cite{Woh11,Woh13}. The search for similar phenomena in the quasi-2D (i.e., high-$T_c$) cuprates is ongoing~\cite{Comin}. \section{Superconductivity with orbital degrees of freedom} \label{sec:orbi} \subsection{Beyond the cuprates with the pairing in a single $x^2-y^2$ orbital} \label{sec:Cu} A way to enhance the value of $T_c$ in compounds isostructural with La$_2$CuO$_4$ was found by Uchida {\it et al.} \cite{Uch06}: higher hole doping in the CuO$_2$ planes is realized by replacing La ions by Sr ions in Sr$_2$CuO$_{4-\delta}$. The SC transition temperature $T_c=95$ K is then almost doubled for the hole doping $x\simeq 0.8$, see Fig.~1. In this regime another, nodeless SC phase arises, and the symmetry in orbital space is partly restored. In fact, the band of $3z^2-r^2$ symmetry is also partly filled, and the pairing occurs jointly in the B$_{1g}$ and A$_{1g}$ channels in Sr$_2$CuO$_{4-\delta}$ and also in Ba$_2$CuO$_{4-\delta}$ \cite{Mai18}. This possibility was discussed a decade ago \cite{Geb09}, and was realized, \textit{inter alia}, in a recently discovered superconductor with $T_c>70$~K \cite{Uch18}. The CuO$_6$ octahedra are compressed here and the $3z^2-r^2$ orbitals contribute at the Fermi surface. 
The pairing strength is large relative to that found in a similar calculation for an optimally doped Hubbard model. The theory predicts that the pairing strength depends on the shape of the Fermi surface \cite{Mai18}, see the right inset of Fig. 1, with nearly square shapes of both electron and hole bands responsible for the enhanced pairing. These results seem to challenge the early view that $T_c$ is optimized when a single band with $x^2-y^2$ symmetry crosses the Fermi surface \cite{Oht91,Pav01,Sak10}, see also section 2. Another way of increasing the hole density in Cu($3d$) orbitals was realized for a CuO$_2$ monolayer grown on the Bi2212 cuprate \cite{Zho16}. Here the CuO$_2$ monolayer is heavily overdoped by charge transfer at the interface and has a short bond between Cu and the apical O in the substrate. A minimal two-orbital model indeed predicts a nodeless high-$T_c$ SC state with $s^{\pm}$ pairing \cite{Jia18}, arising from the spin-orbital exchange \`a la the Kugel-Khomskii model \cite{Kug82}. \begin{figure*}[t!] \centering \includegraphics[width=16.2cm]{Fig3} \caption{(\textbf{a}) \& (\textbf{b}) Spin-orbital entanglement in the Sr$_2$RuO$_4$ ruthenate: (\textbf{a}) Fermi surface for the bands $\{\alpha,\beta,\gamma\}$ at $k_z=0$ calculated without (thin black) and with (thick, color-coded) spin-orbit $\langle\vec{l}\cdot\vec{s}\rangle$ coupling, and (\textbf{b})~momentum-dependent Ru($4d$) orbital projection of the wave function for the $\beta$ band at selected momentum locations along the 3D Fermi surface. The orbital color represents the momentum-dependent spin $\langle s^z\rangle$ expectation value; blue/red correspond to spin ${\uparrow}/{\downarrow}$ for one state of the Kramers-degenerate pair, see the color scale at bottom right. The strongly mixed colors indicate momentum-dependent spin-orbital entanglement. Images (\textbf{a}) and (\textbf{b}) are reproduced from~\cite{Vee14}. 
(\textbf{c})~Orbital excitations in Sr$_2$IrO$_4$ between the entangled spin-orbital ground state (with $J_{\rm eff}=1/2$) and the excited doublet (with $J_{\rm eff}=3/2$), with an energy much lower than in cuprates (cf. Table 1).} \label{fig:Ru} \end{figure*} \subsection{Nickelates} \label{sec:Ni} One expects that another route to effectively higher hole doping of the $3d$ orbitals is realized in Ni oxides, and indeed a Fermi surface very similar to that of the overdoped cuprates was observed in Eu$_{2-x}$Sr$_x$NiO$_4$ \cite{Uch11}. This Fermi surface is also similar to the one found for the LaNiO$_3$/LaAlO$_3$ heterostructure, where both $e_g$ symmetries contribute to the band structure obtained in the local density approximation (LDA) \cite{Han09}, see Fig. 2. Although it was argued that the $3z^2-r^2$ symmetry would be eliminated by electron correlations, both bands were observed in the experiment \cite{Uch11}. The theoretical study of the electronic structure for the LaNiO$_3$/LaAlO$_3$ heterostructure suggests that \textit{orbital engineering} using heterostructuring is in principle possible. The electronic structure obtained with the LDA for the heterostructure without strain, see Fig. 2(a), has a Fermi surface at $k_z=0$ very similar to that of the overdoped CuO$_2$ monolayer, cf. Fig. 2(b) and the right inset of Fig. 1. Hence, the theory predicts an SC heterostructure, if this system could be synthesized in the future. So far, widely tunable orbital configurations were indeed realized in strongly correlated systems: the LaTiO$_3$/LaNiO$_3$/LaAlO$_3$ heterostructure \cite{Dis15} and nickelate superlattices \cite{Wu13,Hep14}. Also in another square-planar nickelate, Pr$_4$Ni$_3$O$_8$, both $e_g$ orbitals contribute to the Fermi surface \cite{Zha17}. X-ray absorption shows that a low-spin configuration with $x^2-y^2$ character of the hole states is realized there \cite{Bot17}, making these states remarkably similar to those in hole-doped cuprates. 
Hence, these compounds may also be considered promising candidates for unconventional superconductivity. Recently, a family of Ni-based compounds which contain [Ni$_2$M$_2$O]$^{2-}$ (M = chalcogen) layers with an antiperovskite structure constructed from mixed-anion Ni complexes has been suggested as possible high-$T_c$ superconductors \cite{Le18}. Here again both $e_g$ symmetries should contribute, and one expects strong competition between $s$-wave and $d$-wave pairing symmetries. More complicated situations are also possible, and we mention here KNi$_2$S$_2$ as an example of a Ni-based superconductor with three different orbital symmetries contributing at the Fermi surface and a small value of $T_c=0.46$ K \cite{Lu13}. The electronic structure is described by the multiorbital Hubbard model, and this system has a certain similarity to the iron-based superconductors discussed in section 3.4. \subsection{Superconducting ruthenate Sr$_2$RuO$_4$} \label{sec:Ru} The SC state of the strontium ruthenate Sr$_2$RuO$_4$ was discovered in 1994 by Maeno and his collaborators after they had succeeded in synthesizing high-quality samples of the material \cite{Mae94}. It was soon realized that different orbital symmetries contribute to the SC state in this unique ruthenium oxide \cite{Agt97}. The value of $T_c=1.5$ K is not particularly high, but the ruthenate structure suggests a possible relationship to the high-$T_c$ cuprate superconductors. Yet, in spite of their structural similarity, doped La$_2$CuO$_4$ and Sr$_2$RuO$_4$ are quite different \cite{Mae01}. In conventional superconductors, in high-$T_c$ copper oxides, and in Fe pnictides, the Cooper pairs have even parity. In contrast, Sr$_2$RuO$_4$ is a candidate for odd-parity superconductivity \cite{Mac03}. However, the pairing symmetry in this material is not yet fully established (see, e.g. \cite{Sud09,Pus19}). 
Ruthenates have active $t_{2g}$ orbital degrees of freedom \cite{Wan13,Ima13}, with degenerate $yz$ and $zx$ orbital states. An additional complication in the theory is the presence of a sizeable spin-orbit coupling ($\zeta\sim 90$ meV at the $\Gamma$ point), which is much smaller than the full bandwidth of $\sim 3$ eV but nonetheless leads to important consequences by mixing the low-energy $t_{2g}$ spin-orbital states. As a result, the Fermi surface, consisting of the three $\{\alpha,\beta,\gamma\}$ bands \cite{Ake19}, changes qualitatively, see Fig. 3(a). In particular, the crossing points (accidental degeneracies) between the electronic $\beta$ and $\gamma$ bands vanish and the Fermi surface of both $\{\beta,\gamma\}$ bands becomes more circular \cite{Vee14}, while there is no phase space for mixing of the hole-like $\alpha$ band. For further discussion we focus on the representative $\beta$ band; the mixing of the $t_{2g}$ orbital states remains similar for the other bands. The most important consequence of the finite spin-orbit coupling $\zeta$ is the momentum-dependent spin-orbital entanglement \cite{Vee14} of the eigenstates near the Fermi surface. It is illustrated in Fig. 3(b) for the $\beta$ band and three representative values of the momentum $k_z$. Going along the Fermi surface for a fixed value of $k_z$, one observes that not only does the orbital character change, but both components of the $s=1/2$ spin also mix. The latter mixing of spin components is somewhat weaker for the large value $k_z=3\pi/4$. This mixing plays an important role, and the challenging theoretical problem is to include orbital fluctuations in the theory of superconductivity in Sr$_2$RuO$_4$. When the spin-orbit coupling is large enough, it can unify the spin and orbital subspaces locally, forming the effective pseudospin $J_{\rm eff}=1/2$ states that dominate the low-energy physics \cite{Kha04,Kha05,Jac09}. 
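The formation of these pseudospin states follows from standard single-ion physics of the $t_{2g}$ shell, which carries an effective orbital angular momentum $l_{\rm eff}=1$. As a schematic reminder (not a calculation taken from the cited works), the local spin-orbit Hamiltonian and the resulting level splitting read
\begin{equation}
H_{\rm SO}=\zeta\sum_i \vec{l}_i\cdot\vec{s}_i,
\qquad
\big|E(J_{\rm eff}{=}\tfrac{3}{2})-E(J_{\rm eff}{=}\tfrac{1}{2})\big|
=\tfrac{3}{2}\,\zeta,
\end{equation}
i.e., the six $t_{2g}$ spin-orbital states split into a $J_{\rm eff}=1/2$ doublet and a $J_{\rm eff}=3/2$ quartet; the sign reversal of the effective coupling within the $t_{2g}$ shell places the single hole of a $d^5$ ion in the $J_{\rm eff}=1/2$ doublet. For $\zeta\sim 90$ meV in Sr$_2$RuO$_4$ this splitting is small compared to the bandwidth, so only partial, momentum-dependent entanglement results, whereas the much larger spin-orbit coupling of the $5d$ iridates produces well-separated pseudospin states.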
This case is particularly relevant for iridium oxides, and indeed the perovskite compound Sr$_2$IrO$_4$ was found to host pseudospin $J_{\rm eff}=1/2$ antiferromagnetism \cite{Kim08,Kim09,Kim12}, with quasi-2D magnon excitations similar to those of spin $S=1/2$ in cuprates. It was recently found that the spin-orbit entangled magnetism in iridates and ruthenates is strongly influenced by the electron-lattice coupling, via the pseudo-Jahn-Teller effect \cite{Liu19}. The remarkable analogy between layered iridates and cuprates holds also upon doping --- a single-band Fermi surface, a (pseudo)gap, and Fermi arcs have been detected \cite{Kim14}. However, long-range coherent superconductivity has not yet been found in iridates. Whether this is related to the fact that the spin-orbital excitation energies of 0.6 eV \cite{Kim12} in iridates, see Fig. 3(c), are much lower than in cuprates, or to the Mott insulating nature of iridates, in contrast to the charge-transfer insulating cuprates, remains an open question. For a detailed discussion of the similarities and differences between iridates and cuprates, we refer to the recent review article \cite{Ber19}. \subsection{Iron-based superconductors} \label{sec:Fe} Similar to the cuprates, the Fe-based superconductors have 2D lattices of $3d$ transition metal ions as building blocks. However, while the oxygen ions lie in the same planes as the Cu ions in La$_2$CuO$_4$, the ions of As, P, or Se lie above or below the Fe plane, in positions close to tetrahedral. For this reason, the out-of-plane As orbitals hybridize well with the $t_{2g}$ orbitals of the Fe ions. In addition, there is a substantial overlap between the $3d$ orbitals \cite{Pag10,Hir11,Sca12,Si16,Fer17,Ima05,Boe11,Roser}. Under these circumstances, the minimal models for pnictide superconductors contain at least two \cite{Raghu} or three \cite{Dag10} $t_{2g}$ orbitals per Fe atom. 
In contrast to the undoped cuprates, the parent compounds of Fe-based superconductors are metallic \cite{Pag10}, and pairings of different symmetry compete in a two-band model \cite{Fer17,Fer10,Nic11,Nic12}. The coexistence of hole-like and electron-like bands at the Fermi surface is quite generic \cite{Pag10,Hir11,Boe11}. Lifshitz transitions can then develop with increasing external magnetic field. Such transitions were indeed found in a two-band model with intra-band pairing \cite{Pto17} and could explain the experimental observations in FeSe and in Co-doped BaFe$_2$As$_2$ compounds. The competition between different pairing symmetries in KFe$_2$As$_2$ causes a change of symmetry at the critical pressure, from $d$-wave to $s_{++}$-wave symmetry \cite{Taf13}. Before discussing the nature of the pairing, it is interesting to look at the strong-coupling model for pnictides, which includes magnetic frustration \cite{Qi08}. A spin-orbital model for interacting Fe ions in intermediate $S=1$ spin states was derived \cite{Zaa09} in the regime of strong electron correlations. It highlights the role of Hund's exchange in the electron correlations, recently observed experimentally \cite{Fink}. The magnetic and orbital instabilities are here far richer than in cuprates, and one finds the experimentally observed spin-stripe state, which could be accompanied by three different types of orbital order. This is another manifestation of the substantial magnetic frustration of the superexchange \cite{Qi08}, which may partly explain why magnetic instabilities are sometimes absent, e.g. in FeSe and LiFeAs. The spin-orbital model \cite{Zaa09}, however, does not contain the biquadratic exchange which was found to be crucial for the magnetism of Fe-based superconductors \cite{Yar09,Wys11,Yu12,Val15}. This coupling may originate from spin-state fluctuations \cite{Cha13} typical for Fe ions in systems with strong $p-d$ covalency. 
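The biquadratic coupling mentioned above is usually added to an effective $S=1$ spin model of the frustrated $J_1$--$J_2$ type. The following is a generic schematic form; the couplings are symbols, not values extracted from the cited works:
\begin{equation}
H_{\rm spin}=J_1\sum_{\langle ij\rangle}\vec{S}_i\cdot\vec{S}_j
+J_2\sum_{\langle\langle ij\rangle\rangle}\vec{S}_i\cdot\vec{S}_j
-K\sum_{\langle ij\rangle}\big(\vec{S}_i\cdot\vec{S}_j\big)^2,
\end{equation}
where $\langle ij\rangle$ and $\langle\langle ij\rangle\rangle$ denote nearest and next-nearest neighbor bonds of the square Fe lattice. For $J_2>J_1/2$ the frustrated Heisenberg part selects the $(\pi,0)$ spin-stripe state, and a positive biquadratic term $K$ stabilizes the collinear arrangement, thereby coupling the stripe magnetism to the nematic degrees of freedom discussed next.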
While large biquadratic exchange is quite unique and essential for the description of magnetic and nematic instabilities, its implications for SC instabilities belong to open problems in the field. Indeed, the SC phase in iron-based materials occurs in the vicinity of the two above instabilities: Not only does one have the usual magnetic phases \cite{Dai12}, but the nematic order may occur as well \cite{Sch12,Bea14,Wat15}. A detailed study shows that the Fermi surface of FeSe undergoes a spontaneous distortion from fourfold-symmetric to twofold-symmetric elliptical pockets, and next the SC phase emerges from the nematic electronic phase. The theory of Cooper pairing in Fe-based superconductors is rather involved and still far from complete \cite{Hir11,Sca12,Fer12}. The role played by orbital and spin fluctuations belongs to the challenging open problems in the theory. It was found that orbital fluctuations may give rise to a strong pairing interaction due to the cooperation of Coulomb and electron-phonon interactions \cite{Kon10,Ona12}. The theory also explains the famous empirical relation between $T_c$ and the As-Fe-As bond angle \cite{Sai10}. Altogether, the pairing in iron-based superconductors involves all five Fe($3d$) orbitals, and multiple orbital physics gives rise to various novel phenomena like orbital-selective Mott transition, nematicity, and orbital fluctuations that may support the SC phase. Recent theory treating spin and orbital fluctuations on equal footing predicts that, under certain conditions, a spontaneous orbital order sets in first, and then superconductivity follows \cite{Chu16}. The SC gap and low energy excitations in FeSe are dominated by a single $xz$ orbital \cite{Liu18}, which uncovers the orbital origin of the strongly anisotropic gap in the FeSe superconductor \cite{Has18,Spr17,Nic17,Ptok}.
In LiFe$_{1-x}$Co$_x$As spin excitations are orbital selective: low-energy spin excitations stem mostly from $xy$ orbitals, while high-energy spin excitations arise from the $yz$ and $zx$ orbitals \cite{Li16}. Such strongly orbital selective spin excitations in the LiFeAs family might play a role in the mechanism of orbital selective Cooper pairing as well. \section{Summary} \label{sec:summa} To conclude, we have presented the current status of high-$T_c$ superconductivity in the presence of orbital degrees of freedom. As this subject is very broad, in this review we limit the presentation solely to the electronic degrees of freedom, leaving aside a detailed discussion of the role played by their coupling to the lattice. In the case of cuprates, large hole doping and the removal of octahedral distortions are necessary to activate the $3z^2-r^2$ orbitals, and we gave examples of cuprates with two $e_g$ orbitals and higher values of $T_c$ than in La$_{2-x}$(Sr,Ba)$_x$CuO$_4$. These orbitals contribute to almost the same Fermi surface, consisting of hole and electron parts, also in doped nickelates. But for nickelates we cannot present anything more than a theoretical suggestion that the superconducting instabilities could occur as well. Finally, very interesting superconducting states emerge in Sr$_2$RuO$_4$ and in iron pnictides, where several $t_{2g}$ symmetries meet at the Fermi surface and participate in the Cooper pairing. The search for other transition metal compounds with SC instabilities continues --- for instance, recently AgF$_2$ was suggested as an excellent analogue to La$_2$CuO$_4$ \cite{Gaw19}, but no superconductivity was observed so far. \begin{quote} Summarizing, the presence of orbital degrees of freedom makes high-$T_c$ superconductors an even more exciting class of quantum materials where the competing quantum phases are of particular importance for superconductivity in layered compounds \cite{Jar19}.
It seems that orbital fluctuations could enhance the superconducting transition temperature $T_c$, but we emphasize that the role of orbital degrees of freedom in the phenomenon of pairing belongs to the open problems in the theory; in particular the interplay between orbital degeneracy and the Jahn-Teller coupling to the lattice --- the idea that guided Bednorz and M\"uller in their discovery of high-$T_c$ superconductivity --- has to be worked out in greater detail. \end{quote} \textbf{Authors' Contributions:} All authors selected the relevant information, participated in discussions, wrote the manuscript and contributed to the interpretation of the results. \textbf{Funding:} This research was funded by Narodowe Centrum Nauki (NCN, National Science Centre, Poland) under Projects Nos. 2016/22/E/ST3/00560 and 2016/23/B/ST3/00839. \textbf{Acknowledgments:} We would like to thank Mona Berciu, Antonio Bianconi, Lucio Braicovich, Jeroen van den Brink, Mario Cuoco, Maria Daghofer, Thomas P. Devereaux, Louis Felix Feiner, J\"org Fink, Andres Greco, Maurits Haverkort, Peter J. Hirschfeld, Peter Horsch, Liviu Hozoi, Huimei Liu, Andrzej Ptok, Roman Pu\'zniak, George A. Sawatzky, J\'ozef Spa\l{}ek, Hiroyuki Yamase, Alexander N. Yaresko, Jan Zaanen, and Roland Zeyher for many insightful discussions. A.~M.~Ole\'s is grateful for the Alexander von Humboldt Foundation Fellowship (Humboldt-Forschungspreis). \textbf{Conflicts of Interest:} The authors declare no conflicts of interest.
\section{Introduction} This work is devoted to nonparametric and parameter estimation problems based on observations of partially observed linear systems with small noise in the observations. Such models of observations, and more complex ones (multidimensional, nonlinear), were extensively studied in filtering theory, where approximate filters were proposed and studied in the asymptotics of small noise. In particular, the orders of the errors of approximation were obtained for a large diversity of problem statements, see \cite{KBS84}, \cite{FP89}, \cite{Pic91}, \cite{Pic93} and references therein. The statistical problems for partially observed linear and nonlinear systems were studied in \cite{Kut94}, Chapter 6. Note that in \cite{Kut94} it is supposed that the small noise enters both the observation and state equations. The parameter estimation problems in the case of small noise in the observations only were considered recently in \cite{Kut19}. The statistical problems below are treated in two steps. First we propose a nonparametric estimator of the quadratic variation of the derivative of the limit of the observed process. Then the obtained result is used in nonparametric estimation of the integral of the squared volatility of the unobservable component and in parameter estimation problems. Let us consider the linear two-dimensional partially observed system \begin{align} \label{2-1} {\rm d}Y_t&=-a\left(t\right)Y_t{\rm d}t+b\left(t\right){\rm d}V_t,\quad Y_0=0,\quad 0\leq t\leq T,\\ \label{2-2} {\rm d}X_t&=f\left(t\right)Y_t{\rm d}t+\varepsilon \sigma \left(t\right){\rm d}W_t,\qquad X_0=0,\quad 0\leq t\leq T, \end{align} where the Wiener processes $V_t,0\leq t\leq T$ and $W_t,0\leq t\leq T$ are supposed to be independent. The solution $Y^T=\left(Y_t,0\leq t\leq T\right)$ of the state equation \eqref{2-1} cannot be observed directly and we have at our disposal the observations $X^T=\left(X_t,0\leq t\leq T\right)$ only.
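A minimal simulation of the system \eqref{2-1}, \eqref{2-2} by the Euler--Maruyama scheme may help fix ideas. The sketch below (in Python) uses illustrative constant coefficients $a=b=f=\sigma=1$, which are in no way prescribed by the model, and also computes the limit process $x_t=\int_0^t f\left(s\right)Y_s\,{\rm d}s$, so that $X_t-x_t=\varepsilon\int_0^t\sigma\left(s\right){\rm d}W_s$ vanishes uniformly as $\varepsilon\rightarrow 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 10_000
dt = T / n
eps = 0.01                        # small observation noise

# illustrative constant coefficients (hypothetical choices, not from the model)
a_c, b_c, f_c, s_c = 1.0, 1.0, 1.0, 1.0

dV = rng.normal(0.0, np.sqrt(dt), n)   # increments of V
dW = rng.normal(0.0, np.sqrt(dt), n)   # increments of W, independent of V

# Euler-Maruyama for the unobserved state dY = -a Y dt + b dV, Y_0 = 0
Y = np.zeros(n + 1)
for i in range(n):
    Y[i + 1] = Y[i] - a_c * Y[i] * dt + b_c * dV[i]

# limit process x_t = int_0^t f Y ds and observation X = x + eps * int sigma dW
x = np.concatenate(([0.0], np.cumsum(f_c * Y[:-1] * dt)))
W = np.concatenate(([0.0], np.cumsum(dW)))
X = x + eps * s_c * W

dev = np.max(np.abs(X - x))       # here equals eps * sup|W|, so it vanishes with eps
```

With constant $\sigma=1$ the deviation $X-x$ is exactly $\varepsilon W$ on the grid, which illustrates the uniform convergence stated below.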
Here $a\left(\cdot \right), b\left( \cdot \right),$ $ f\left(\cdot \right)$ and $\sigma \left(\cdot \right)$ are some bounded functions and $\varepsilon \in (0,1]$ is a {\it small} parameter. Our first goal is to construct a consistent $\left(\varepsilon \rightarrow 0\right)$ estimator $\hat \Psi _{ \tau ,\varepsilon },0\leq \tau \leq T $ of the function \begin{align} \label{2-3} \Psi _\tau =\int_{0}^{\tau }f\left(s\right)^2b\left(s\right)^2{\rm d}s,\qquad \qquad 0<\tau \leq T. \end{align} Then we show that this estimator $\hat \Psi _{ \tau ,\varepsilon } $ can be useful in the construction of estimators of the parameter $\vartheta $ in the case of models \eqref{2-1}, \eqref{2-2} with $f\left(t\right)=f\left(\vartheta ,t\right)$ or $b\left(t\right)=b\left(\vartheta ,t\right)$. The construction of the estimators is based on the following properties of the model \eqref{2-1}, \eqref{2-2}. The observed process $X^T $ converges, uniformly in $t$ and with probability 1, to the random process $x^T=\left(x_t,0\leq t\leq T\right)$: \begin{align*} \sup_{0\leq t\leq T}\left|X_t-x_t \right|\leq C\,\varepsilon \sup_{0\leq t\leq T}\left|W_t\right|\rightarrow 0,\qquad x_t=\int_{0}^{t}f\left(s\right)Y_s\,{\rm d}s. \end{align*} We have \begin{align*} \frac{{\rm d}x_t}{{\rm d}t}=f\left(t\right)Y_t,\qquad x_0=0. \end{align*} Let us denote $N_t=f\left(t\right)Y_t$. This process has the stochastic differential \begin{align*} {\rm d}N_t=\left[f'\left(t\right)-a\left(t\right)f\left(t\right)\right]Y_t{\rm d}t+f\left(t\right)b\left(t\right){\rm d}V_t, \qquad N_0=0 \end{align*} and therefore, by the It\^o formula, $N_\tau^2$ can be written as \begin{align*} N_\tau^2=2\int_{0}^{\tau}N_s\,{\rm d}N_s+\int_{0}^{\tau}f\left(s\right)^2b\left(s\right)^2{\rm d}s. \end{align*} Hence \begin{align} \label{2-4} \Psi _\tau=N_\tau^2-2\int_{0}^{\tau}N_s\,{\rm d}N_s,\qquad 0\leq \tau\leq T.
\end{align} This relation is the basis for the construction of the estimator of $\Psi _\tau, 0\leq \tau\leq T$, i.e., we propose estimators of $N_\tau^2$ and of the integral in \eqref{2-4} and obtain the estimator of $\Psi _\tau$. Note that $x_t,0\leq t\leq T$ is the limit of the observed process $X_t,0\leq t\leq T$, its derivative is $N_t$ and $\Psi_t $ is the quadratic variation of $N_t$. Suppose that in the system \eqref{2-1}-\eqref{2-2} the functions $b\left(t\right)=b\left(\vartheta ,t\right)$ and $f\left(t\right)=f\left(\vartheta ,t\right)$, where $\vartheta \in\left(\alpha ,\beta \right)$ is an unknown parameter. The asymptotic properties of the maximum likelihood estimator (MLE) and the Bayesian estimator (BE) of this parameter were described in the work \cite{Kut19}. It was shown that these estimators are consistent, asymptotically normal and asymptotically efficient. This parametric model of observations has some unusual features. The Fisher information vanishes in the limit $\varepsilon \rightarrow 0$, \begin{align*} \int_{0}^{T}{\left[\dot f\left(\vartheta ,t\right)m\left(\vartheta ,t\right)+f\left(\vartheta ,t\right)\dot m\left(\vartheta ,t\right)\right]^2}{\sigma \left(t\right)^{-2}}\;{\rm d}t \longrightarrow 0, \end{align*} where $m\left(\vartheta ,t\right)=\Ex_\vartheta \left(Y_t|X_s,0\leq s\leq t\right)$, the dot denotes differentiation w.r.t. $\vartheta $, and for all $t\in (0,T]$ there is a weak convergence \begin{align*} \varepsilon ^{-1/2}\left[\dot f\left(\vartheta ,t\right)m\left(\vartheta ,t\right)+f\left(\vartheta ,t\right)\dot m\left(\vartheta ,t\right)\right]\Longrightarrow {\frac{\dot S\left(\vartheta ,t\right) \sqrt{\sigma \left(t\right)}}{\sqrt{2S\left(\vartheta ,t\right)}}}\; \xi _t. \end{align*} Here $S\left(\vartheta ,t\right)=f\left(\vartheta ,t\right)b\left(\vartheta ,t\right)$ and $\xi _t, t\in (0,T]$ are mutually independent Gaussian (${\cal N}\left(0,1\right)$) random variables. Here we propose a different construction of the estimator of this parameter, which is computationally simpler than the MLE.
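Relation \eqref{2-4} can be checked by a direct simulation. In a left-point (It\^o) discretization, $N_\tau^2-2\sum N\,\Delta N$ equals the realized quadratic variation $\sum\left(\Delta N\right)^2$ identically, and the latter approximates $\int_0^\tau f^2b^2\,{\rm d}s$. The sketch below uses illustrative constants $a=f=b=1$ (so $N_t=Y_t$ and $\Psi_T=T=1$); these choices are not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 200_000
dt = T / n
dV = rng.normal(0.0, np.sqrt(dt), n)

# with a = f = b = 1 we have N_t = Y_t and Psi_T = int_0^T f^2 b^2 ds = T
Y = np.zeros(n + 1)
for i in range(n):
    Y[i + 1] = Y[i] - Y[i] * dt + dV[i]
N = Y                                   # N_t = f(t) Y_t, here with f = 1

ito = np.sum(N[:-1] * np.diff(N))       # left-point sums for int_0^T N dN
Psi = N[-1] ** 2 - 2.0 * ito            # right-hand side of (2-4)
# Psi equals the realized quadratic variation sum((dN)^2), close to T = 1
```

The algebraic identity $N_T^2-2\sum N_i\Delta N_i=\sum\left(\Delta N_i\right)^2$ (with $N_0=0$) holds exactly on the grid; the statistical error comes only from the quadratic-variation approximation.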
Having the estimator $\hat\Psi _{\tau ,\varepsilon }$ we consider the case where the function $\Psi _\tau \left(\vartheta \right), \vartheta \in\left(\alpha ,\beta \right)$ is monotone and put $\check\vartheta _{\tau ,\varepsilon }=\Psi _\tau ^{-1}\left(\hat\Psi _{\tau ,\varepsilon } \right) $. Then under regularity conditions we show the consistency and describe the limit distribution of this estimator. Further, having the estimator $\check\vartheta _{\tau ,\varepsilon } $ we use the Fisher-score device and obtain the One-step MLE-process $\vartheta ^\star_{t,\varepsilon },\tau <t\leq T$. The asymptotic properties (consistency and asymptotic normality) of the estimator $\vartheta ^\star_{t,\varepsilon } $ are established and the possibility of the construction of adaptive Kalman-Bucy filtration equations is discussed. \section{Estimation of quadratic variation} The construction of the estimator of $\Psi _\tau$ is based on the representation \eqref{2-4}. Introduce the estimators \begin{align*} N_{t ,\varepsilon }&=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K\left(\frac{s-t }{\varphi _\varepsilon }\right){\rm d}X_s , \qquad \bar N_{\tau ,\varepsilon }=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right){\rm d}X_s ,\\ \hat\Psi _{\tau,\varepsilon}& =\bar N_{\tau ,\varepsilon } ^2-2\int_{0}^{\tau} N_{s ,\varepsilon }{\rm d}N_{s ,\varepsilon }=\frac{1}{\varphi _\varepsilon ^2}\left(\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right){\rm d}X_s\right)^2\\ &\qquad +\frac{2}{\varphi _\varepsilon ^3}\int_{0}^{\tau }\int_{0}^{\tau }K\left(\frac{t-s }{\varphi _\varepsilon }\right){\rm d}X_t\int_{0}^{\tau }K'\left(\frac{r-s }{\varphi _\varepsilon }\right){\rm d}X_r\,{\rm d}s, \end{align*} where the bandwidth $\varphi _\varepsilon\rightarrow 0 $.
The one-sided continuous kernels $K_*\left(\cdot \right) $, $K\left(\cdot \right)$ satisfy the usual conditions \begin{align*} &\int_{-1}^{0} K_*\left(u\right){\rm d}u=1,\qquad \int_{-1}^{0}u K_*\left(u\right){\rm d}u=0,\qquad K_*\left(u\right)=0,\;{\rm for}\; u\not\in \left[-1,0\right],\\ & \int_{0}^{1} K\left(u\right){\rm d}u=1,\qquad\; \int_{0}^{1}u K\left(u\right){\rm d}u=0,\qquad \quad K\left(u\right)=0,\;{\rm for}\; u\not\in \left[0,1\right] \end{align*} and the kernel $K\left(\cdot \right)$ is continuously differentiable. Introduce the random variables \begin{align*} &\hat W_\tau \sim {\cal N}\left(0,d_*^2\right),\qquad d_*^2=\int_{-1}^{0}K_*\left(u\right)^2{\rm d}u,\\ &\hat V_\tau \sim {\cal N}\left(0,d_{**}^2\right),\qquad d_{**}^2=\int_{-1}^{0}\int_{-1}^{0}K_*\left(u\right)K_*\left(v\right) \left(\left|u\right|\wedge \left|v\right|\right){\rm d}u{\rm d}v,\\ &\hat Q_\tau \sim {\cal N}\bigl(0,\hat d^2\bigr),\qquad \hat d^2=d_{**}^2\int_{0}^{\tau }f\left(t\right)^4b\left(t\right)^4{\rm d}t,\\ &\hat R_\tau \sim {\cal N}\bigl(0,\check d^2\bigr),\qquad \check d^2=d_{**}^2 \int_{0}^{\tau }f\left(t\right)^2b\left(t\right)^2\sigma\left(t\right)^2 {\rm d}t,\\ &Z_\tau =2f\left(\tau \right)Y_\tau \left[f\left(\tau \right)b\left(\tau \right)\hat V_\tau +\sigma \left(\tau \right)\hat W_\tau \right]+2\hat Q_\tau+2\hat R_\tau . \end{align*} The regularity condition ${\cal A}$ is the following.
{\it The functions $a\left(\cdot \right), b\left(\cdot \right)\in {\cal C}^{\left(1\right)}\left[0,\tau \right] $ and the functions $f\left(\cdot \right),\sigma \left(\cdot \right)\in {\cal C}^{\left(2\right)}\left[0,\tau \right]$.} \begin{theorem} \label{T1} Let the condition ${\cal A}$ be fulfilled. Then we have the convergence \begin{align} \label{2-5} \varepsilon ^{-1/2}\left(\hat\Psi _{\tau ,\varepsilon} -\Psi_\tau\right) \Longrightarrow Z_\tau, \end{align} the random variables $\hat V_\tau, \hat W_\tau,\hat Q_\tau,\hat R_\tau $ are independent, and for any $p>0$ there exists a constant $C>0$ such that \begin{align*} \varepsilon ^{-p/2}\Ex_{\vartheta _0}\left|\hat\Psi _{\tau ,\varepsilon}-\Psi _{\tau}\right|^p\leq C. \end{align*} \end{theorem} \begin{proof} We have \begin{align*} \bar N_{\tau,\varepsilon }&=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right)f\left(s\right)Y_s {\rm d}s+\frac{\varepsilon }{\varphi _\varepsilon }\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right)\sigma \left(s\right){\rm d}W_s.
\end{align*} We change the variable $u=\left(s-\tau\right)\varphi _\varepsilon ^{-1} $ in the stochastic integral \begin{align*} \frac{\varepsilon }{\varphi _\varepsilon }\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right)\sigma \left(s\right){\rm d}W_s&= \frac{\varepsilon }{\sqrt{\varphi _\varepsilon} }\int_{-1}^{0 }K_*\left(u\right)\sigma \left(\tau +\varphi _\varepsilon u\right){\rm d}\hat W_{\tau ,\varepsilon } \left(u\right)\\ &=\frac{\varepsilon }{\sqrt{\varphi _\varepsilon} }\sigma \left(\tau \right)\hat W_{\tau ,\varepsilon }\left(1+O_p\left(\varphi _\varepsilon\right)\right) , \end{align*} where we denoted $\hat W_{\tau ,\varepsilon } \left(u\right)=\varphi_\varepsilon ^{-1/2} \left[W_{\tau +\varphi _\varepsilon u}-W_{\tau}\right] $, \begin{align*} \hat W_{\tau ,\varepsilon }=\int_{-1}^{0 }K_*\left(u\right){\rm d}\hat W_{\tau ,\varepsilon } \left(u\right)\sim {\cal N}\left(0,d_*^2\right), \end{align*} and used the relation $\sigma \left(\tau +\varphi _\varepsilon u\right)=\sigma \left(\tau \right)\left(1+O\left(\varphi _\varepsilon \right)\right) $, which holds for $\tau /\varphi _\varepsilon \geq 1$.
For the ordinary integral the same change of variables gives us the representation \begin{align*} &\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right)f\left( s\right)Y_s {\rm d}s=\int_{-1}^{0 }K_*\left(u\right)N_{\tau +\varphi _\varepsilon u} {\rm d}u\\ &\qquad \qquad =N_{\tau}+ \varphi _\varepsilon f'\left(\tau \right)Y_\tau \int_{-1}^{0}uK_*\left(u\right){\rm d}u \left(1+O_p\left(\sqrt{\varphi _\varepsilon }\right)\right)\\ &\qquad \qquad \qquad -\varphi _\varepsilon f\left(\tau \right)a\left(\tau \right)Y_\tau \int_{-1}^{0}uK_*\left(u\right){\rm d}u \left(1+O_p\left(\sqrt{\varphi _\varepsilon }\right)\right)\\ &\qquad \qquad \qquad + \sqrt{\varphi _\varepsilon }\;f\left(\tau \right)b \left(\tau \right)\int_{-1}^{0}K_*\left(u\right) \hat V_{\tau ,\varepsilon } \left(u\right) {\rm d}u \left(1 +O_p\left(\sqrt{\varphi _\varepsilon }\right)\right)\\ &\qquad \qquad =N_{\tau}+\sqrt{\varphi _\varepsilon }\;f\left(\tau \right)b \left(\tau \right) \hat V_{\tau ,\varepsilon } \left(1 +O_p\left(\sqrt{\varphi _\varepsilon }\right)\right), \end{align*} where we denoted $\hat V_{\tau ,\varepsilon } \left(u\right)=\varphi_\varepsilon ^{-1/2} \left[V_{\tau +\varphi _\varepsilon u}-V_{\tau}\right]$, $ \hat V_{\tau ,\varepsilon } \sim {\cal N}\left(0,d_{**}^2\right)$, \begin{align*} & \hat V_{\tau ,\varepsilon } =\int_{-1}^{0}K_*\left(u\right) \hat V_{\tau ,\varepsilon } \left(u\right) {\rm d}u, \end{align*} and used relations $f\left( \tau +\varphi _\varepsilon u\right)=f\left(\tau \right)+\varphi _\varepsilon u f'\left(\tau \right)\left(1+o_p\left(1\right)\right) $ and \begin{align*} Y_{\tau +\varphi _\varepsilon u}&=Y_{\tau }-\int_{\tau }^{\tau +\varphi _\varepsilon u}a\left(s\right)Y_s{\rm d}s+\int_{\tau }^{\tau +\varphi _\varepsilon u}b\left(s\right){\rm d}V_s\\ &=Y_{\tau }-\varphi _\varepsilon u\,a\left(\tau \right)Y_\tau\left(1 +o_p\left(1\right)\right)+ \sqrt{\varphi _\varepsilon }\; b\left(\tau \right) \hat V_{\tau 
,\varepsilon } \left(u\right) \left(1 +o_p\left(1\right)\right). \end{align*} Hence we obtain the relation \begin{align*} \bar N_{\tau,\varepsilon }-N_{\tau}&=\sqrt{\varphi _\varepsilon }\;f\left(\tau \right)b \left(\tau \right) \hat V_{\tau ,\varepsilon } \left(1 +o_p\left(1\right)\right)+\frac{\varepsilon }{\sqrt{\varphi _\varepsilon} }\;\sigma \left(\tau \right)\hat W_{\tau ,\varepsilon }\left(1+o_p\left(1\right)\right), \end{align*} and for any $p>0$ \begin{align*} \varepsilon ^{-p/2}\Ex_{\vartheta _0}\left|\bar N_{\tau,\varepsilon }-N_{\tau} \right|^p=\Ex_{\vartheta _0}\left|f\left(\tau \right)b \left(\tau \right) \hat V_{\tau } + \sigma \left(\tau \right)\hat W_{\tau }\right|^p+ o\left(1\right), \end{align*} where we put $\varphi _\varepsilon=\varepsilon $. Recall that the random variables $\hat V_{\tau ,\varepsilon } $ and $\hat W_{\tau ,\varepsilon } $ are independent and their distributions do not depend on $\varepsilon $. Moreover, we obtain the limit in distribution \begin{align} \label{2-6} \varepsilon ^{-1/2}\left(\bar N_{\tau,\varepsilon }-N_{\tau}\right)\Longrightarrow f\left(\tau \right)b \left(\tau \right) \hat V_{\tau }+\sigma \left(\tau \right)\hat W_{\tau } \end{align} where $\hat V_{\tau } $ and $\hat W_{\tau } $ are independent random variables with Gaussian distributions ${\cal N}\left(0,d_{**}^2\right)$ and ${\cal N}\left(0,d_*^2\right)$ respectively.
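The one-sided smoother $\bar N_{\tau,\varepsilon}$ and the convergence \eqref{2-6} can be illustrated numerically. The sketch below is illustrative only: it takes $a=f=b=\sigma=1$ (so $N_t=Y_t$), a hypothetical cubic kernel $K_*$ satisfying the moment conditions, and a bandwidth $\varphi_\varepsilon$ kept above the grid step rather than equal to $\varepsilon$; the error $\bar N_{\tau,\varepsilon}-N_\tau$ is then of order $\sqrt{\varphi_\varepsilon}$.

```python
import numpy as np

rng = np.random.default_rng(2)
tau, n = 1.0, 100_000
dt = tau / n
t = np.linspace(0.0, tau, n + 1)
eps, phi = 1e-4, 1e-2            # small noise; bandwidth above the grid step

dV = rng.normal(0.0, np.sqrt(dt), n)
dW = rng.normal(0.0, np.sqrt(dt), n)

Y = np.zeros(n + 1)              # state with a = b = 1
for i in range(n):
    Y[i + 1] = Y[i] - Y[i] * dt + dV[i]
dX = Y[:-1] * dt + eps * dW      # increments of X with f = sigma = 1, so N_t = Y_t

def K_star(u):
    # hypothetical cubic kernel on [-1, 0] with int K_* = 1 and int u K_* = 0
    return np.where((u >= -1.0) & (u <= 0.0),
                    -u * (1.0 + u) * (36.0 + 60.0 * u), 0.0)

w = K_star((t[:-1] - tau) / phi) / phi
N_bar = np.sum(w * dX)           # \bar N_{tau, eps}
err = N_bar - Y[-1]              # error w.r.t. N_tau = Y_tau, of order sqrt(phi)
```

The kernel here does not satisfy the endpoint smoothness required of $K$ in the text; it is used only because its two moment conditions are easy to verify in closed form.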
Using standard arguments we obtain similar relations for the estimator $\bar N_{\tau ,\varepsilon }^2$ of $ N_{\tau }^2$: \begin{align*} &\varepsilon ^{-1/2}\left(\bar N_{\tau ,\varepsilon }^2-N_{\tau }^2\right)=2f\left(\tau \right)Y_{\tau } \left[f\left(\tau \right)b \left(\tau \right) \hat V_{\tau,\varepsilon }+\sigma \left(\tau \right)\hat W_{\tau,\varepsilon }\right]+o_p\left(1\right),\\ &\varepsilon ^{-p/2}\Ex_{\vartheta _0}\left|\bar N_{\tau ,\varepsilon }^2-N_{\tau }^2\right|^p=\left|2f\left(\tau \right)\right|^{p}\Ex_{\vartheta _0}\left|Y_{\tau }\right|^p\Ex_{\vartheta _0} \left|f\left(\tau \right)b \left(\tau \right) \hat V_{\tau}+\sigma \left(\tau \right)\hat W_{\tau }\right|^p\\ &\qquad \qquad \qquad \quad \quad \qquad +o\left(1\right), \end{align*} and \begin{align*} \varepsilon ^{-1/2}\left(\bar N_{\tau ,\varepsilon }^2-N_{\tau }^2\right)\Longrightarrow 2f\left(\tau \right)Y_{\tau } \left[f\left(\tau \right)b \left(\tau \right) \hat V_{\tau}+\sigma \left(\tau \right)\hat W_{\tau }\right]. \end{align*} Note that the Gaussian process $Y_\tau $ and the Gaussian variable $\hat V_{\tau ,\varepsilon }$ are asymptotically independent and $\Ex_{\vartheta _0} Y_\tau \hat V_{\tau ,\varepsilon }\rightarrow 0 $. We estimate the integral in \eqref{2-4} with the help of a different kernel, but first we remark that this integral admits the following representation \begin{align} \label{2-7} \int_{0}^{\tau }N_t{\rm d}N_t=\int_{0}^{\tau }f\left(t\right)Y_t^2 \left[f'\left(t\right)-f\left(t\right)a\left(t\right)\right] {\rm d}t+\int_{0}^{\tau }f\left(t\right)^2Y_tb\left(t\right){\rm d}V_t . \end{align} Therefore we will estimate the two integrals on the right-hand side of this expression.
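The decomposition \eqref{2-7} is obtained by substituting ${\rm d}N_t$, and in an Euler discretization it holds exactly, term by term, which gives a cheap check of the bookkeeping. A sketch with illustrative constants $a=f=b=1$ (so $f'=0$ and both sides reduce to $-\int Y^2{\rm d}t+\int Y{\rm d}V$); these constants are not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n = 1.0, 50_000
dt = T / n
dV = rng.normal(0.0, np.sqrt(dt), n)

# a = f = b = 1: N = Y and dN = -Y dt + dV exactly in the Euler scheme
Y = np.zeros(n + 1)
for i in range(n):
    Y[i + 1] = Y[i] - Y[i] * dt + dV[i]

lhs = np.sum(Y[:-1] * np.diff(Y))                      # int_0^T N dN
rhs = -np.sum(Y[:-1] ** 2) * dt + np.sum(Y[:-1] * dV)  # drift part + stochastic part
```

On the grid the two sides differ only by floating-point rounding, since $\Delta Y_i=-Y_i\,{\rm d}t+\Delta V_i$ by construction.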
We have \begin{align*} &\int_{0}^{\tau }N_{t,\varepsilon }{\rm d}N_{t,\varepsilon }=-\frac{1}{\varphi _\varepsilon ^3}\int_{0}^{\tau } \int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right){\rm d}X_s \int_{0}^{\tau }K'\left(\frac{r-t}{\varphi _\varepsilon }\right){\rm d}X_r \;{\rm d}t \\ &\quad =\int_{0}^{\tau }\left[\frac{1}{\varphi _\varepsilon} \int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)f\left(s\right)Y_s{\rm d}s+\frac{\varepsilon}{\varphi _\varepsilon} \int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)\sigma \left(s\right){\rm d}W_s \right] \\ &\qquad \times \left[-\frac{1}{\varphi _\varepsilon ^2} \int_{0}^{\tau }K'\left(\frac{r-t}{\varphi _\varepsilon }\right) f\left(r\right)Y_r {\rm d}r-\frac{\varepsilon}{\varphi _\varepsilon ^2}\int_{0}^{\tau }K'\left(\frac{r-t}{\varphi _\varepsilon }\right)\sigma \left(r\right){\rm d}W_r \right] \;{\rm d}t. \end{align*} Let us describe the asymptotics of these integrals. For the first integral and $t\in \left[0 , \tau-\varphi _\varepsilon \right]$ we already proved that \begin{align*} \frac{1}{\varphi _\varepsilon}\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right){\rm d}X_s=N_t+\sqrt{\varphi _\varepsilon }\left[f\left(t \right)b \left(t \right) \check V_{t,\varepsilon }+\sigma \left(t \right)\check W_{t,\varepsilon }\right] \left(1 +o_p\left(1\right)\right). \end{align*} Here \begin{align*} \check V_{t,\varepsilon }&=\int_{0}^{1} K\left(u\right)\frac{\left[V_{t+\varphi _\varepsilon u}-V_t\right]}{\sqrt{\varphi _\varepsilon }}{\rm d}u=\int_{0}^{1} K\left(u\right) V_{t,\varepsilon }\left(u\right) {\rm d}u\sim {\cal N}\left(0,d^2\right),\\ \check W_{t,\varepsilon }&=\int_{0}^{1} K\left(u\right)\frac{\left[W_{t+\varphi _\varepsilon u}-W_t\right]}{\sqrt{\varphi _\varepsilon }}{\rm d}u=\int_{0}^{1} K\left(u\right)W_{t,\varepsilon }\left(u\right) {\rm d}u\sim {\cal N}\left(0,d^2\right), \end{align*} where $d^2=\int_{0}^{1}\int_{0}^{1}K\left(u\right)K\left(v\right)\left(u\wedge v\right){\rm d}u\,{\rm d}v$. Note that \begin{align} \label{2-8} \Ex_{\vartheta _0} \check
V_{t,\varepsilon }&=0,\qquad \left|\Ex_{\vartheta _0}\check V _{t_1,\varepsilon }\check V_{t_2,\varepsilon }\right|\leq C\,\1_{\left\{\left|t_1-t_2\right|\leq \varphi _\varepsilon \right\}},\\ \Ex_{\vartheta _0} \check W_{t,\varepsilon }&=0,\qquad \left|\Ex_{\vartheta _0}\check W _{t_1,\varepsilon }\check W_{t_2,\varepsilon }\right|\leq C\,\1_{\left\{\left|t_1-t_2\right|\leq \varphi _\varepsilon \right\}}. \label{2-9} \end{align} Further, for $t\in \left[\varphi _\varepsilon , \tau \right]$ \begin{align} \label{2-10} &-\frac{1}{\varphi _\varepsilon ^2}\int_{0}^{\tau }K'\left(\frac{s-t}{\varphi _\varepsilon }\right) f\left(s\right)Y_s {\rm d}s=-\frac{1}{\varphi _\varepsilon}\int_{0}^{\tau } f\left(s\right)Y_s {\rm d}K\left(\frac{s-t}{\varphi _\varepsilon }\right)\nonumber\\ &\quad =\frac{1}{\varphi _\varepsilon}\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right) f'\left(s\right)Y_s {\rm d}s-\frac{1}{\varphi _\varepsilon}\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right) f\left(s\right)a\left(s\right)Y_s {\rm d}s\nonumber\\ &\qquad \qquad +\frac{1}{\varphi _\varepsilon}\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right) f\left(s\right)b\left(s\right) {\rm d}V_s. \end{align} The first two integrals on the right-hand side of \eqref{2-10} allow us to estimate the ordinary integral in \eqref{2-7}, and the last integral will give us the estimator of the stochastic integral in \eqref{2-7}.
We can write \begin{align*} &\frac{1}{\varphi _\varepsilon}\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right) f'\left(s\right)Y_s {\rm d}s=f'\left(t\right)Y_{t}+\sqrt{\varphi _\varepsilon }\;f'\left(t \right)b \left(t \right) \check V_{t ,\varepsilon } \left(1 +o_p\left(1\right)\right),\\ &\frac{1}{\varphi _\varepsilon}\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right) f\left(s\right)a\left(s\right)Y_s {\rm d}s =f\left(t\right)a\left(t\right)Y_{t}\\ &\qquad \qquad\qquad \qquad \qquad \qquad\qquad \qquad +\sqrt{\varphi _\varepsilon }\;f\left(t \right)a\left(t \right)b \left(t \right) \check V_{t ,\varepsilon } \left(1 +o_p\left(1\right)\right). \end{align*} To obtain the stochastic integral in \eqref{2-7} we change the order of integration as follows \begin{align*} &\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }f\left(t\right)Y_t\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)f\left(s\right)b\left(s\right){\rm d}V_s\;{\rm d}t \\ &\qquad =\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }f\left(s\right)b\left(s\right)\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)f\left(t\right)Y_t{\rm d}t\;{\rm d}V_s\\ & \qquad =-\int_{0}^{\tau }f\left(s\right)b\left(s\right)\int_{1}^{0} K\left(u\right)f\left(s-\varphi _\varepsilon u\right)Y_{s-\varphi _\varepsilon u}{\rm d}u\;{\rm d}V_s\\ & \qquad =\int_{0}^{\tau }f\left(s\right)^2b\left(s\right)Y_{s}\;{\rm d}V_s +\sqrt{\varphi _\varepsilon } \int_{0}^{\tau }f\left(s\right)^2b\left(s\right)^2 V_{s,\varepsilon } {\rm d}V_s+ O_p\left(\varphi _\varepsilon \right) , \end{align*} where \begin{align*} V_{s,\varepsilon }=\int_{0}^{1}K\left(u\right)\left(\frac{V_{s-\varphi _\varepsilon u}-V_s}{\sqrt{\varphi _\varepsilon }}\right){\rm d}u\sim {\cal N}\left(0,d^2\right). \end{align*} Remark that, since $u$ takes only nonnegative values, the random process $Y_{s-\varphi _\varepsilon u}$ is non-anticipative in the corresponding stochastic integral.
Further \begin{align*} \frac{\varepsilon}{\varphi_\varepsilon^2}\int_{0}^{\tau }K'\left(\frac{s-t}{\varphi _\varepsilon }\right)\sigma \left(s\right){\rm d}W_s&=\frac{\varepsilon}{\varphi_\varepsilon^{3/2}}\sigma \left(t\right)\int_{0}^{1 }K'\left(u\right){\rm d}\check W_{t,\varepsilon }\left(u\right)\left(1+ O_p\left(\varphi_\varepsilon\right)\right)\\ &=\frac{\varepsilon}{\varphi_\varepsilon^{3/2}}\sigma \left(t\right)\tilde W_{t,\varepsilon }\left(1+ O_p\left(\varphi_\varepsilon\right)\right) \end{align*} where $\tilde W_{t,\varepsilon }\Rightarrow \tilde W_{t }, 0<t\leq \tau $ and $\tilde W_{t }, 0<t\leq \tau $ are independent random variables. Remark as well that for any continuous function $h\left(\cdot \right)$ we have the limit in probability \begin{align} \label{2-11} \int_{0}^{\tau }h\left(t\right)Y_t\;\tilde W_{t,\varepsilon }\;{\rm d}t \longrightarrow 0 \end{align} because by \eqref{2-9} \begin{align*} &\Ex_{\vartheta _0}\left(\int_{0}^{\tau }h\left(t\right)Y_t\;\tilde W_{t,\varepsilon }\;{\rm d}t \right)^2\\ &\qquad \qquad =\int_{0}^{\tau }\int_{0}^{\tau }h\left(t\right)h\left(s\right) \Ex_{\vartheta _0}\left(Y_tY_s\right)\;\Ex_{\vartheta _0} \left(\tilde W_{t,\varepsilon }\tilde W_{s,\varepsilon }\right)\;{\rm d}t{\rm d}s \\ &\qquad \qquad \leq C\int_{0}^{\tau }\int_{0}^{\tau }\1_{\left\{\left|t-s\right|\leq \varphi _\varepsilon \right\}}\;{\rm d}t{\rm d}s \leq C\, \varphi _\varepsilon . 
\end{align*} Consider the normalized integral \begin{align*} &\varphi _\varepsilon ^{-1/2}\int_{0}^{\tau }f\left(t\right)\sigma \left(t\right)Y_t\,\tilde W_{t,\varepsilon }\,{\rm d}t =\varphi _\varepsilon ^{-1/2}\int_{0}^{\tau }f\left(t\right)\sigma \left(t\right)Y_t\,\int_{0}^{1}K'\left(u\right){\rm d}\check W_{t,\varepsilon }\left(u\right)\,{\rm d}t\\ &\qquad \qquad =\varphi _\varepsilon ^{-1}\int_{0}^{\tau }f\left(t\right)\sigma \left(t\right)Y_t\,\int_{0}^{\tau }K'\left(\frac{s-t}{\varphi _\varepsilon }\right){\rm d} W_{s }\,{\rm d}t\\ &\qquad \qquad =\varphi _\varepsilon ^{-1}\int_{0}^{\tau }\int_{0}^{\tau }f\left(t\right)\sigma \left(t\right)Y_t\,K'\left(\frac{s-t}{\varphi _\varepsilon }\right){\rm d}t\, {\rm d} W_{s}. \end{align*} For the ordinary integral here we have \begin{align*} &\frac{1}{\varphi _\varepsilon}\int_{0}^{\tau }f\left(t\right)\sigma \left(t\right)Y_t\,K'\left(\frac{s-t}{\varphi _\varepsilon }\right){\rm d}t=-\int_{0}^{\tau }f\left(t\right)\sigma \left(t\right)Y_t\,{\rm d}K\left(\frac{s-t}{\varphi _\varepsilon }\right)\\ &\qquad =\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)\left[f'\left(t\right)\sigma \left(t\right)+f\left(t\right)\sigma' \left(t\right)\right]Y_t\,{\rm d}t\\ &\qquad \qquad -\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)f\left(t\right)\sigma \left(t\right)a\left(t\right)Y_t\,{\rm d}t\\ &\qquad \qquad +\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)f\left(t\right)\sigma \left(t\right)b\left(t\right)\,{\rm d}V_t.
\end{align*} These integrals have the following asymptotics \begin{align*} &\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)\left[f'\left(t\right)\sigma \left(t\right)+f\left(t\right)\sigma' \left(t\right)-f\left(t\right)\sigma \left(t\right)a\left(t\right)\right]Y_t\,{\rm d}t\\ &\qquad =\varphi _\varepsilon \left[f'\left(s\right)\sigma \left(s\right)+f\left(s\right)\sigma' \left(s\right)-f\left(s\right)\sigma \left(s\right)a\left(s\right)\right]Y_s \left(1+O_p\left(\sqrt{\varphi _\varepsilon }\right)\right),\\ &\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)f\left(t\right)\sigma \left(t\right)b\left(t\right){\rm d}V_t=\sqrt{\varphi _\varepsilon }f\left(s\right)\sigma \left(s\right)b\left(s\right)V_{s,\varepsilon }\left(1+O_p\left(\sqrt{\varphi _\varepsilon }\right)\right). \end{align*} Due to \eqref{2-11} the main contribution to the error in the estimation of the integral in \eqref{2-4} is given by the two integrals \begin{align*} Q_{\tau ,\varepsilon }=\int_{0}^{\tau }f\left(s\right)^2b\left(s\right)^2 V_{s,\varepsilon } {\rm d}V_s, \qquad \quad R_{\tau ,\varepsilon }=\int_{0}^{\tau }f\left(s\right)\sigma \left(s\right)b\left(s\right)V_{s,\varepsilon }{\rm d}W_s. \end{align*} As $V_{s,\varepsilon }$ converges to independent Gaussian random variables we can show that \begin{align*} \int_{0}^{\tau }f\left(s\right)^4b\left(s\right)^4 V_{s,\varepsilon }^2{\rm d}s\longrightarrow d_{**}^2\int_{0}^{\tau }f\left(s\right)^4b\left(s\right)^4{\rm d}s=\hat d^2, \end{align*} and \begin{align*} \int_{0}^{\tau }f\left(s\right)^2b\left(s\right)^2\sigma \left(s\right)^2 V_{s,\varepsilon }^2{\rm d}s\longrightarrow d_{**}^2\int_{0}^{\tau }f\left(s\right)^2b\left(s\right)^2\sigma \left(s\right)^2{\rm d}s=\check d^2. \end{align*} Therefore, by the central limit theorem, $Q_{\tau ,\varepsilon }$ and $R_{\tau ,\varepsilon }$ are asymptotically normal.
Moreover $Q_{\tau ,\varepsilon }$ and $R_{\tau ,\varepsilon }$ are asymptotically independent because $\Ex_{\vartheta _0}Q_{\tau ,\varepsilon }R_{\tau ,\varepsilon }=0 $. Therefore we obtained the stochastic representation of the error of estimation $\varepsilon ^{-1/2}\left(\hat\Psi _{\tau ,\varepsilon }-\Psi _\tau \right)= Z_{\tau,\varepsilon }$, where $Z_{\tau,\varepsilon } $ can be written as follows \begin{align} \label{2-12} Z_{\tau,\varepsilon } =2f\left(\tau \right)Y_\tau \left[f\left(\tau \right)b\left(\tau \right)\hat V_{\tau,\varepsilon } +\sigma \left(\tau \right)\hat W_{\tau,\varepsilon } \right]+2\hat Q_{\tau,\varepsilon }+2\hat R_{\tau,\varepsilon }+o_p\left(1\right) . \end{align} \end{proof} \section{Nonparametric estimation} Suppose that we have the model of observations \eqref{2-1},\eqref{2-2}, where the functions $a\left(\cdot \right),b\left(\cdot \right),f\left(\cdot \right),\sigma \left(\cdot \right)$ are unknown and the condition ${\cal A}$ is fulfilled. Consider the problem of estimation of the function $\Psi _\tau ,0\leq \tau \leq T$. Then according to Theorem \ref{T1} the random function $\hat\Psi _{\tau,\varepsilon } ,0\leq \tau \leq T $ is a consistent estimator of $\Psi _\tau ,0\leq \tau \leq T$ and it converges with the rate $\sqrt{\varepsilon }$. Moreover, we have the limit distribution of the error of estimation. We conjecture that this rate of convergence is optimal. The obtained result allows us to study two other nonparametric estimation problems, concerning the functions \begin{align*} \hat \Psi _{\tau }=\int_{0}^{\tau }b\left(t\right)^2{\rm d}t,\qquad 0<\tau \leq T, \end{align*} and \begin{align*} \check \Psi _{\tau }=\int_{0}^{\tau }f\left(t\right)^2{\rm d}t,\qquad 0<\tau \leq T. \end{align*} To discuss these problems we first consider a slightly more general statement.
Suppose that the function $g\left(\cdot \right)\in {\cal C}^{\left(2\right)}\left[0,T\right]$ and introduce the random process $H_t=g\left(t\right)f\left(t\right)Y_t$, which satisfies the equation \begin{align*} H_\tau ^2=2\int_{0}^{\tau }H_t{\rm d}H_t+\int_{0}^{\tau }g\left(t\right)^2f\left(t\right)^2b\left(t\right)^2{\rm d}t. \end{align*} Hence we have \begin{align*} \Psi _\tau \equiv \int_{0}^{\tau }g\left(t\right)^2f\left(t\right)^2b\left(t\right)^2{\rm d}t=H_\tau ^2-2\int_{0}^{\tau }H_t{\rm d}H_t. \end{align*} The statistic \begin{align*} H_{t,\varepsilon }=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)g\left(s\right) {\rm d}X_s \end{align*} has the representation \begin{align*} H_{t,\varepsilon }&=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)g\left(s\right)f\left(s\right) Y_s{\rm d}s\\ &\qquad \qquad \qquad +\frac{\varepsilon }{\varphi _\varepsilon }\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)g\left(s\right)\sigma \left(s\right) {\rm d}W_s\\ &=g\left(t\right)f\left(t\right) Y_t+\sqrt{\varphi _\varepsilon }g\left(t\right)f\left(t\right)b\left(t\right) \int_{0}^{1 }K\left(u\right)V_{t,\varepsilon }\left(u\right){\rm d}u +O_p\left(\varphi _\varepsilon \right). \end{align*} The proof of Theorem \ref{T1} can be modified to show that the statistic \begin{align*} \Psi _{\tau ,\varepsilon }=\bar H_{\tau,\varepsilon } ^2-2\int_{0}^{\tau }H_{t,\varepsilon }{\rm d}H_{t,\varepsilon } \end{align*} has a representation similar to that given in \eqref{2-5}. \bigskip \noindent{\it Estimation of $\hat\Psi _\tau ,0\leq \tau \leq T$}.\\ Suppose that the function $f\left(\cdot \right)$ is known, has two continuous derivatives, and satisfies $\inf_{0\leq t\leq \tau } f\left(t\right)>0$. The functions $a\left(\cdot \right),b\left(\cdot \right),\sigma \left(\cdot \right)$ are unknown.
Introduce the set \begin{align*} \Theta \left(L\right)=\left\{a\left(\cdot \right), b\left(\cdot \right),\sigma \left(\cdot \right)\;:\; \sup_{0\leq t\leq T}\left(\left|a'\left(t\right)\right| +\left|b'\left(t\right)\right| + \left|\sigma '\left(t\right)\right| +\left|\sigma ''\left(t\right)\right| \right)\leq L\right\} \end{align*} and the statistics \begin{align*} \bar Y_{\tau ,\varepsilon }&=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right)f\left(s\right)^{-1} {\rm d}X_s,\\ Y_{t,\varepsilon }&=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)f\left(s\right)^{-1} {\rm d}X_s,\\ \hat \Psi _{\tau ,\varepsilon }&=\bar Y_{\tau,\varepsilon } ^2-2\int_{0}^{\tau }Y_{t,\varepsilon }{\rm d}Y_{t,\varepsilon }\\ &=\bar Y_{\tau,\varepsilon } ^2+\frac{2}{\varphi _\varepsilon ^3}\int_{0}^{\tau } \int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right)\frac{{\rm d}X_s}{f\left(s\right)} \, \int_{0}^{\tau }K'\left(\frac{r-t}{\varphi _\varepsilon }\right)\frac{{\rm d}X_r}{f\left(r\right)} \,{\rm d}t , \end{align*} i.e., we put $g\left(t\right)=f\left(t\right)^{-1}$. \begin{proposition} \label{P1} Let $a\left(\cdot \right),b\left(\cdot \right),\sigma \left(\cdot \right)\in\Theta \left(L\right) $. Then the estimator $\hat \Psi _{\tau ,\varepsilon }$ is consistent, converges in distribution \begin{align*} \varepsilon ^{-1/2}\left(\hat \Psi _{\tau ,\varepsilon }-\hat\Psi _\tau \right)\Longrightarrow \hat Z_\tau , \end{align*} where the limit variable $\hat Z_\tau $ is obtained from the representation \eqref{2-12} with $g\left(t\right)=f\left(t\right)^{-1}$, and for any $p>0$ and any $\delta >0$ there exists a constant $C>0$ such that \begin{align*} \sup_{a\left(\cdot \right),b\left(\cdot \right),\sigma \left(\cdot \right)\in\Theta \left(L\right) }\sup_{\delta \leq \tau \leq T-\delta }\varepsilon ^{-p/2}\Ex_{b} \left|\hat \Psi _{\tau ,\varepsilon }-\hat\Psi _\tau \right|^p\leq C. \end{align*} \end{proposition} The proof follows from the proof of Theorem \ref{T1}.
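The structure of the estimator $\hat \Psi _{\tau ,\varepsilon }=\bar Y_{\tau ,\varepsilon }^2-2\int_{0}^{\tau }Y_{t,\varepsilon }{\rm d}Y_{t,\varepsilon }$ isolates a quadratic variation. In discrete time this mechanism is transparent: with $H_0=0$ and left-point sums one has $H_n^2-2\sum_{i}H_i\Delta H_i=\sum_{i}\left(\Delta H_i\right)^2$ exactly, and for ${\rm d}H_t=b\left(t\right){\rm d}V_t$ the right-hand side approximates $\int_{0}^{\tau }b\left(t\right)^2{\rm d}t$. A minimal simulation sketch (the diffusion coefficient below is an illustrative choice):

```python
import math, random

random.seed(7)

# discrete illustration: for H_0 = 0 and left-point sums, the identity
#   H_n^2 - 2*sum_i H_i (H_{i+1}-H_i) = sum_i (H_{i+1}-H_i)^2
# holds exactly, and for dH_t = b(t) dV_t the right-hand side approximates
# int_0^tau b(t)^2 dt -- the quantity estimated in Proposition 1
b = lambda t: 1.0 + 0.5 * math.sin(t)   # illustrative diffusion coefficient
tau, n = 1.0, 100000
dt = tau / n

H = 0.0
stieltjes = 0.0   # sum_i H_i * dH_i
quad_var = 0.0    # sum_i (dH_i)^2
for i in range(n):
    dH = b(i * dt) * random.gauss(0.0, math.sqrt(dt))
    stieltjes += H * dH
    quad_var += dH * dH
    H += dH

lhs = H * H - 2.0 * stieltjes
true_psi = sum(b((i + 0.5) * dt) ** 2 * dt for i in range(n))
print(lhs, quad_var, true_psi)
```

The first two printed quantities coincide up to rounding, and both are close to $\int_{0}^{\tau }b\left(t\right)^2{\rm d}t$.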
\bigskip Another possibility, of course, is to put $g\left(t\right)=b\left(t\right)^{-1}$ ($b\left(\cdot \right)$ is known). Then, under the condition $\inf_{0\leq t\leq \tau } b\left(t\right)>0$, we obtain a consistent estimator of the function $ \check\Psi _{\tau },0\leq \tau \leq T$, and a proposition similar to the one given above can be formulated and proved for this estimator too. \section{Parameter estimation} Consider the partially observed linear system \begin{align} \label{3-1} {\rm d}Y_t&=-a\left(t\right)Y_t{\rm d}t+b\left(\vartheta ,t\right){\rm d}V_t,\qquad \;Y_0=0,\\ {\rm d}X_t&=f\left(\vartheta ,t\right)Y_t{\rm d}t+\varepsilon \sigma \left(t\right){\rm d}W_t,\qquad X_0=0. \label{3-2} \end{align} The process $X^T=\left(X_t,0\leq t\leq T\right)$ is observed and the Wiener processes $V^T=\left(V_t,0\leq t\leq T\right)$ and $W^T=\left(W_t,0\leq t\leq T\right)$ are independent. The functions $a\left(\cdot \right),b\left(\cdot ,\cdot \right),f\left(\cdot ,\cdot \right)$ and $\sigma \left(\cdot \right)$ are known. The parameter $\vartheta\in \Theta =\left(\alpha ,\beta \right) $ is unknown and has to be estimated from the observations $X^T$. \bigskip \noindent {\it Substitution estimator $\check \vartheta _{\tau ,\varepsilon } $ }. \\ We have the partially observed linear system \eqref{3-1},\eqref{3-2}, where the functions $f\left(\cdot ,\cdot \right)$, $b\left(\cdot ,\cdot \right)$ are supposed to be known and the functions $a\left(\cdot \right)$, $\sigma \left(\cdot \right)$ are unknown. Fix some $\tau $ and define the function \begin{align*} \Psi _\tau \left(\vartheta \right)=\int_{0}^{\tau }f\left(\vartheta ,t\right)^2b\left(\vartheta ,t\right)^2{\rm d}t ,\qquad \vartheta \in \left(\alpha ,\beta \right)=\Theta . \end{align*} We have \begin{align*} \dot\Psi _\tau \left(\vartheta \right)=2\int_{0}^{\tau }\left[\dot f\left(\vartheta ,t\right)b\left(\vartheta ,t\right)+f\left(\vartheta ,t\right)\dot b\left(\vartheta ,t\right)\right]f\left(\vartheta ,t\right)b\left(\vartheta ,t\right){\rm d}t. \end{align*} {\it Condition ${\cal B}$}.
${\cal B}_1.$ {\it The function $\Psi _\tau \left(\vartheta \right)$ has two continuous derivatives $\dot \Psi _\tau \left(\vartheta \right),\ddot\Psi _\tau \left(\vartheta \right)$ w.r.t. $\vartheta $.} ${\cal B}_2.$ {\it For a given $\tau $ we have } \begin{align} \label{3-3} \inf_{\vartheta \in\Theta }\left|\dot \Psi _\tau \left(\vartheta \right)\right|>0. \end{align} By condition \eqref{3-3} the function $\Psi _\tau \left(\cdot \right) $ is monotone. Without loss of generality we suppose that it is increasing. Introduce the notation \begin{align*} &\psi _m=\inf _{\vartheta \in\Theta }\Psi _\tau \left(\vartheta \right)=\Psi _\tau \left(\alpha \right),\qquad \psi _M=\sup _{\vartheta \in\Theta }\Psi _\tau \left(\vartheta \right)=\Psi _\tau \left(\beta \right), \\ &G\left(\psi \right)=\Psi _\tau ^{-1}\left(\psi \right),\quad \psi _m<\psi <\psi _M,\qquad \alpha <G\left(\psi \right)<\beta , \\ &\BB_m=\left\{\omega :\quad \hat\Psi_{\tau ,\varepsilon} \leq \psi _m \right\},\qquad\qquad \BB_M=\left\{\omega :\quad \hat\Psi_{\tau ,\varepsilon} \geq \psi _M \right\}, \\ &\BB=\left\{\omega :\quad \psi _m <\hat\Psi_{\tau ,\varepsilon} < \psi _M \right\},\qquad\; \eta _\varepsilon=G(\hat\Psi _{\tau ,\varepsilon} ),\\ &Z_\tau\left(\vartheta \right) =2f\left(\vartheta,\tau \right)Y_\tau \left[f\left(\vartheta,\tau \right)b\left(\vartheta,\tau \right)\hat V_\tau +\sigma \left(\tau \right)\hat W_\tau \right]+2\hat Q_\tau\left(\vartheta\right)+2\hat R_\tau \left(\vartheta\right), \end{align*} where the definitions of $\hat Q_\tau\left(\vartheta\right) $ and $\hat R_\tau\left(\vartheta\right)$ are the obvious analogues of those given above. The substitution estimator (SE) is introduced as follows \begin{align} \label{3-4} \check \vartheta _{\tau ,\varepsilon }&=\alpha \1_{\left\{\BB_m\right\}}+\eta _\varepsilon\1_{\left\{\BB\right\}}+\beta \1_{\left\{\BB_M\right\}} . \end{align} It has the following properties.
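Computationally, evaluating \eqref{3-4} only requires inverting the increasing function $\Psi _\tau \left(\cdot \right)$, e.g., by bisection, with clipping at the boundary of $\Theta $. A minimal sketch (the quadratic $\Psi $ below is a hypothetical stand-in, and the kernel estimate $\hat \Psi _{\tau ,\varepsilon }$ is replaced by an exact value for illustration):

```python
def substitution_estimate(psi_hat, Psi, alpha, beta, tol=1e-10):
    """Substitution estimator: invert the increasing map Psi on (alpha, beta),
    clipping to the boundary when psi_hat falls outside [Psi(alpha), Psi(beta)]."""
    if psi_hat <= Psi(alpha):
        return alpha
    if psi_hat >= Psi(beta):
        return beta
    lo, hi = alpha, beta
    while hi - lo > tol:          # bisection computes G(psi_hat) = Psi^{-1}(psi_hat)
        mid = 0.5 * (lo + hi)
        if Psi(mid) < psi_hat:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical increasing Psi_tau(theta) = c * theta**2 with c = 0.5
Psi = lambda th: 0.5 * th * th
print(substitution_estimate(0.5 * 4.0, Psi, 1.0, 3.0))   # recovers theta = 2
print(substitution_estimate(-1.0, Psi, 1.0, 3.0))        # clipped to alpha
```

In practice $\hat \Psi _{\tau ,\varepsilon }$ computed from the observations is passed as the first argument.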
\begin{proposition} \label{P2} Suppose that the conditions ${\cal A} $ and ${\cal B} $ are fulfilled and that for any (small) $\nu >0$ we have $g\left(\nu \right)>0$. Then the SE $\check \vartheta _{\tau ,\varepsilon } $ is uniformly consistent, converges in distribution \begin{align*} \frac{\check \vartheta _{\tau ,\varepsilon }-\vartheta _0}{\sqrt{\varepsilon }}\Longrightarrow \dot \Psi \left(\vartheta _0\right) ^{-1}Z_{\tau }\left(\vartheta _0\right), \end{align*} and for any $p>0$ there exists a constant $C=C\left(p\right)>0$ such that \begin{align} \label{3-5} \sup_{\vartheta _0\in\Theta }\varepsilon ^{-p/2}\Ex_{\vartheta _0}\left|\check \vartheta _{\tau ,\varepsilon }-\vartheta _0\right|^p\leq C . \end{align} \end{proposition} \begin{proof} Note that from the identity $G\left(\Psi \left(\vartheta \right)\right)=\vartheta $ we obtain \begin{align*} \frac{{\rm d}}{{\rm d}\vartheta }G\left(\Psi \left(\vartheta \right)\right)=\left.\frac{{\rm d}}{{\rm d}\psi }G\left(\psi \right)\right|_{\psi =\Psi \left(\vartheta \right)} \dot \Psi \left(\vartheta \right)=G'\left(\Psi \left(\vartheta \right)\right)\dot \Psi \left(\vartheta \right)=1, \end{align*} and therefore $\sup_{\psi }G'\left(\psi \right)\leq \kappa ^{-1}$, where $\kappa =\inf_{\vartheta }\dot \Psi \left(\vartheta \right) $. Hence for any $\nu $ such that $0<\nu \leq \left(\vartheta _0-\alpha\right)\wedge \left(\beta -\vartheta _0\right) $ we can write \begin{align*} \Pb_{\vartheta _0}\left(\left|\check \vartheta _{\tau ,\varepsilon }-\vartheta _0\right|\geq \nu \right)&\leq \Pb_{\vartheta _0} \left(\BB_m\right)+\Pb_{\vartheta _0} \left(\BB_M\right)+\Pb_{\vartheta _0} \left(\left|\eta _\varepsilon -\vartheta _0\right|\geq \nu , \BB\right).
\end{align*} Using the representation \eqref{2-12} we obtain the estimates \begin{align*} \Pb_{\vartheta _0} \left(\BB_m\right)&=\Pb_{\vartheta _0} \left( \hat\Psi_{\tau ,\varepsilon} \leq \Psi \left(\alpha \right) \right)=\Pb_{\vartheta _0} \left( \hat\Psi_{\tau ,\varepsilon}-\Psi \left(\vartheta _0\right) \leq \Psi \left(\alpha \right) -\Psi \left(\vartheta _0\right) \right)\\ &=\Pb_{\vartheta _0} \left( \Psi \left(\vartheta _0\right)-\hat\Psi_{\tau ,\varepsilon} \geq \Psi \left(\vartheta _0\right)-\Psi \left(\alpha \right) \right)\\ & \leq \Pb_{\vartheta _0} \left( \left|\Psi \left(\vartheta _0\right)-\hat\Psi_{\tau ,\varepsilon} \right|\geq \kappa \nu \right) = \Pb_{\vartheta _0} \left( \left|Z_{\tau ,\varepsilon } \right|\geq \frac{\kappa \nu }{\sqrt{\varepsilon }} \right)\\ & \leq \left(\frac{\varepsilon}{\kappa^2 \nu ^2 }\right) ^{p/2} \Ex_{\vartheta _0}\left|Z_{\tau ,\varepsilon } \right|^p\leq C\left(\frac{\varepsilon}{\kappa^2 \nu ^2 }\right) ^{p/2}\longrightarrow 0, \end{align*} and similarly \begin{align*} \Pb_{\vartheta _0} \left(\BB_M\right)\leq C\left(\frac{\varepsilon}{\kappa^2 \nu ^2 }\right) ^{p/2}\longrightarrow 0. \end{align*} Further \begin{align*} &\Pb_{\vartheta _0} \left(\left|\eta _\varepsilon -\vartheta_0\right|\geq \nu , \BB\right)=\Pb_{\vartheta _0} \left(\left|G(\hat\Psi_{\tau ,\varepsilon})- G\left(\Psi \left(\vartheta _0\right)\right)\right|\geq \nu , \BB\right) \\ &\qquad \qquad \leq \Pb_{\vartheta _0} \left(\left|\hat\Psi_{\tau ,\varepsilon}-\Psi \left(\vartheta _0\right) \right|G'_M\geq \nu \right)\leq \Pb_{\vartheta _0} \left(\left|Z_{\tau ,\varepsilon } \right|\geq \frac{\nu}{\sqrt{\varepsilon }G'_M} \right)\\ &\qquad \qquad\leq C\left(\frac{\varepsilon}{\kappa ^2 \nu ^2 }\right) ^{p/2}\longrightarrow 0. \end{align*} Here $G'_M=\kappa ^{-1}$. 
Therefore for any compact $\KK\subset\Theta $ we have \begin{align*} \sup_{\vartheta _0\in\KK}\Pb_{\vartheta _0}\left(\left|\check \vartheta _{\tau ,\varepsilon }-\vartheta _0\right|\geq \nu \right)\leq C\left(\frac{\varepsilon}{\kappa ^2 \nu ^2 }\right) ^{p/2}\longrightarrow 0 \end{align*} and we obtain the uniform consistency of the SE $\check\vartheta _{\tau ,\varepsilon }$. We have the relations \begin{align*} \frac{\check \vartheta _{\tau ,\varepsilon }-\vartheta _0}{\sqrt{\varepsilon }}&=\varepsilon ^{-1/2}\left(\hat\Psi_{\tau ,\varepsilon}-\Psi \left(\vartheta _0\right) \right)G'\left(\Psi \left(\vartheta _0\right) \right)\left(1+o_p\left(1\right)\right)\\ &= \dot \Psi \left(\vartheta_0\right) ^{-1}Z_{\tau ,\varepsilon } \left(1+o_p\left(1\right)\right)\Longrightarrow \dot \Psi \left(\vartheta _0\right) ^{-1}Z_{\tau }\left(\vartheta _0\right) \end{align*} and for any $p>0$ \begin{align*} \varepsilon ^{-p/2}\Ex_{\vartheta _0}\left|\check \vartheta _{\tau ,\varepsilon }-\vartheta _0\right|^p\leq \dot \Psi \left(\vartheta_0\right) ^{-p}\Ex_{\vartheta _0}\left(\left|Z_{\tau ,\varepsilon }\right|^p\left(1+\left|o_p\left(1\right)\right|\right)\right)\leq C. \end{align*} \end{proof} \noindent {\it Example 1. }\\ Suppose that we have the model of observations \eqref{3-1},\eqref{3-2}, where $f\left(\vartheta ,t\right)=\vartheta f\left(t\right), \vartheta \in\left(\alpha ,\beta \right), \alpha >0$, $b\left(\vartheta ,t\right)=b\left(t\right)$ and all the corresponding conditions are fulfilled. Then \begin{align*} \Psi _\tau \left(\vartheta \right)=\vartheta ^2\int_{0}^{\tau }f\left(t\right)^2b\left(t\right)^2{\rm d}t \end{align*} and the SE is \begin{align*} \check\vartheta _{\tau ,\varepsilon }=\sqrt{\Psi _{\tau,\varepsilon }}\left(\int_{0}^{\tau }f\left(t\right)^2b\left(t\right)^2{\rm d}t \right)^{-1/2}. \end{align*} This estimator is consistent and converges at the rate $\sqrt{\varepsilon }$ (Theorem \ref{T1}). \bigskip \noindent {\it Example 2.
}\\ Consider the model \eqref{3-1},\eqref{3-2}, where $b\left(\vartheta ,t\right)=\sqrt{h\left(t\right)+\vartheta g\left(t\right)}$ and $f\left(\vartheta ,t\right)=f\left(t\right)$. Suppose that the functions $h\left(\cdot \right)$ and $g\left(\cdot \right)$ are positive and $\alpha >0$. Then \begin{align*} \Psi _\tau \left(\vartheta \right)=\int_{0}^{\tau }f\left(t\right)^2\left[h\left(t\right)+\vartheta g\left(t\right) \right]{\rm d}t \end{align*} and \begin{align*} \vartheta =\left(\Psi _\tau \left(\vartheta \right)-\int_{0}^{\tau }f\left(t\right)^2 h\left(t\right) {\rm d}t \right)\left(\int_{0}^{\tau }f\left(t\right)^2 g\left(t\right) {\rm d}t\right)^{-1}. \end{align*} Hence the SE is \begin{align*} \check\vartheta _{\tau ,\varepsilon }=\left(\Psi _{\tau,\varepsilon }-\int_{0}^{\tau }f\left(t\right)^2 h\left(t\right) {\rm d}t \right)\left(\int_{0}^{\tau }f\left(t\right)^2 g\left(t\right) {\rm d}t\right)^{-1} \end{align*} and this estimator has the properties described in Theorem \ref{T1}. \noindent {\it One-step MLE-process $ \vartheta _{t,\varepsilon }^\star ,\tau <t\leq T$ }. \\ Suppose now that we have the slightly different partially observed system \begin{align} \label{3-6} {\rm d}Y_t&=-a\left(\vartheta ,t\right)Y_t{\rm d}t+b\left(\vartheta ,t\right){\rm d}V_t,\qquad \;Y_0=0,\\ {\rm d}X_t&=f\left(\vartheta ,t\right)Y_t{\rm d}t+\varepsilon \sigma \left(t\right){\rm d}W_t,\qquad \quad X_0=0. \label{3-7} \end{align} As before, the process $X^T=\left(X_t,0\leq t\leq T\right)$ is observable and $Y^T$ is hidden. All the functions $a\left(\cdot ,\cdot \right),b\left(\cdot ,\cdot \right),f\left(\cdot ,\cdot \right)$ and $\sigma \left(\cdot \right)$ are supposed to be known. The parameter $\vartheta\in \Theta =\left(\alpha ,\beta \right) $ is unknown and has to be estimated from the observations $X^T$.
One way is to use the MLE $\hat \vartheta _\varepsilon $ defined by the equation \begin{align} \label{3-8} L(\hat \vartheta _\varepsilon,X^T)=\sup_{\vartheta \in\Theta }L( \vartheta,X^T), \end{align} where the likelihood ratio function is \begin{align} \label{3-9} L\left( \vartheta,X^T\right)=\exp\left\{\int_{0}^{T}\frac{M\left(\vartheta ,t\right)}{\varepsilon ^2\sigma \left(t\right)^2}\;{\rm d}X_t-\int_{0}^{T}\frac{M\left(\vartheta ,t\right)^2}{2\varepsilon ^2\sigma \left(t\right)^2}{\rm d}t\right\},\qquad \vartheta \in\Theta . \end{align} Here $M\left(\vartheta ,t\right)=f\left(\vartheta ,t\right)m\left(\vartheta ,t\right) $ and the conditional expectation $m\left(\vartheta ,t\right)=$ $\Ex_\vartheta \left(\left.Y_t\right|X_s,0\leq s\leq t \right) $ satisfies the Kalman-Bucy filtering equation \begin{align} \label{3-10} {\rm d}m\left(\vartheta ,t\right)&=-a\left(\vartheta ,t\right)m\left(\vartheta ,t\right){\rm d}t\nonumber\\ &\qquad \qquad \qquad +\frac{\gamma \left(\vartheta ,t\right)f\left(\vartheta ,t\right)}{\varepsilon ^2\sigma \left(t\right)^2}\left[{\rm d}X_t-f\left(\vartheta ,t\right)m\left(\vartheta ,t\right){\rm d}t\right] \end{align} with initial value $m\left(\vartheta ,0\right)=0$. The function $\gamma \left(\vartheta ,t\right)=\Ex_\vartheta \left(Y_t-m\left(\vartheta ,t\right) \right)^2$ is the solution of the Riccati equation \begin{align} \label{3-11} \frac{\partial \gamma \left(\vartheta ,t\right)}{\partial t}&=-2a\left(\vartheta ,t\right)\gamma \left(\vartheta ,t\right)- \frac{\gamma \left(\vartheta ,t\right)^2f \left(\vartheta ,t\right)^2}{\varepsilon ^2\sigma \left(t\right)^2} +b\left(\vartheta ,t\right)^2 ,\qquad \qquad \end{align} with initial value $\gamma \left(\vartheta ,0\right)=0$. The asymptotic properties of the MLE for this model of observations were studied in \cite{Kut19}.
It was shown that this estimator is consistent, asymptotically normal, \begin{align*} \varepsilon ^{-1/2}\left(\hat \vartheta _\varepsilon-\vartheta _0\right)\Longrightarrow {\cal N}\left(0,{\rm I}\left(\vartheta _0\right)^{-1}\right),\qquad {\rm I}\left(\vartheta \right)= \int_{0}^{T}\frac{\dot S\left(\vartheta ,t\right)^2 }{2S\left(\vartheta ,t\right)\sigma \left(t\right)}{\rm d}t, \end{align*} and asymptotically efficient. The same asymptotic properties are shared by the Bayesian estimator \begin{align} \label{3-12} \tilde \vartheta _\varepsilon =\frac{\int_{\Theta }^{}\vartheta p\left(\vartheta \right)L( \vartheta,X^T){\rm d}\vartheta }{\int_{\Theta }^{} p\left(\vartheta \right)L( \vartheta,X^T){\rm d}\vartheta}. \end{align} Here $p\left(\vartheta \right),\vartheta \in\Theta $, is the prior density of the parameter $\vartheta $. This approach provides estimators with nice asymptotic properties, but the calculation of these estimators by the expressions \eqref{3-8}, \eqref{3-9} and \eqref{3-12} requires the numerical solution of the equations \eqref{3-10} and \eqref{3-11} for all $\vartheta \in\Theta $. To avoid these sometimes difficult numerical calculations, we consider an approach based on the Fisher-score device with the SE $\check \vartheta _{\tau ,\varepsilon }$ as a preliminary estimator; it allows us to obtain an estimator-process with good asymptotic properties which can be used for the construction of an adaptive filter. Suppose that for some fixed small value $\tau >0$ we have the SE $\check \vartheta _{\tau ,\varepsilon }$ constructed from the observations $X^\tau =\left(X_t,0\leq t\leq \tau \right)$.
Introduce the notation \begin{align*} {\rm I}_\tau ^t\left(\vartheta \right)&= \int_{\tau }^{t}\frac{\dot S\left(\vartheta ,s\right)^2 }{2S\left(\vartheta ,s\right)\sigma \left(s\right)}{\rm d}s,\qquad q_\varepsilon \left(\vartheta ,t\right)=a\left(\vartheta ,t\right)+ \frac{\gamma \left(\vartheta ,t\right)f\left(\vartheta ,t\right)^2}{\varepsilon ^2\sigma \left(t\right)^2},\\ \vartheta _{t,\varepsilon }^\star&=\check\vartheta _{\tau,\varepsilon }+\frac{1}{{\rm I}_\tau^t (\check\vartheta _{\tau,\varepsilon } )} \int_{\tau }^{t}\frac{\dot M(\check\vartheta _{\tau,\varepsilon },s)}{{\varepsilon} \sigma \left(s\right)^2} \left[{\rm d}X_s-M(\check\vartheta _{\tau,\varepsilon },s ){\rm d}s\right],\qquad \tau <t\leq T,\\ \xi _{t,\varepsilon }&=\frac{\vartheta _{t,\varepsilon }^\star-\vartheta _0}{\sqrt{\varepsilon }} ,\qquad \xi _t=\frac{1 }{{\rm I}_\tau^t (\vartheta _{0 } )}\int_{\tau }^{t}\frac{\dot S\left(\vartheta _0,s\right)}{\sqrt{2S\left(\vartheta _0,s\right)\sigma \left(s\right)}}{\rm d}w\left(s \right) ,\quad \tau <t\leq T. \end{align*} It remains to define the random processes $M(\check\vartheta _{\tau,\varepsilon },s )=f\left(\check\vartheta _{\tau,\varepsilon },s\right)m\left(\check\vartheta _{\tau,\varepsilon },s\right) $ and $\dot M(\check\vartheta _{\tau,\varepsilon },s )=\dot f\left(\check\vartheta _{\tau,\varepsilon },s\right)m\left(\check\vartheta _{\tau,\varepsilon },s\right)+f\left(\check\vartheta _{\tau,\varepsilon },s\right)\dot m\left(\check\vartheta _{\tau,\varepsilon },s\right) $.
We write the solution of equation \eqref{3-10} as follows \begin{align*} m\left(\vartheta ,t\right)&=e^{-\int_{0}^{t}q_\varepsilon \left(\vartheta ,v\right){\rm d}v}\int_{0}^{t}e^{\int_{0}^{s}q_\varepsilon \left(\vartheta ,v\right){\rm d}v} \frac{\gamma \left(\vartheta ,s\right)f\left(\vartheta ,s\right)}{\varepsilon ^2\sigma \left(s\right)^2}{\rm d}X_s \\ &=e^{-\int_{0}^{t}q_\varepsilon \left(\vartheta ,v\right){\rm d}v}\int_{0}^{t}D\left(\vartheta ,s\right){\rm d}X_s \end{align*} with obvious notation. We cannot substitute $\check\vartheta _{\tau,\varepsilon } $ directly in this integral because the function $D\left(\check\vartheta _{\tau,\varepsilon } ,s\right) $ is not integrable in the It\^o sense. We have the equality \begin{align*} \int_{0}^{t}D\left(\vartheta ,s\right){\rm d}X_s= D\left(\vartheta ,t\right)X_t-\int_{0}^{t}X_s D'\left(\vartheta ,s\right){\rm d}s, \end{align*} where $D'\left(\vartheta ,s\right)$ denotes the derivative of $D\left(\vartheta ,s\right)$ w.r.t. $s$, and we put \begin{align*} m\left(\check\vartheta _{\tau,\varepsilon } ,t\right)=e^{-\int_{0}^{t}q_\varepsilon \left(\check\vartheta _{\tau,\varepsilon } ,v\right){\rm d}v}\left[ D\left(\check\vartheta _{\tau,\varepsilon } ,t\right)X_t-\int_{0}^{t}X_s D'\left(\check\vartheta _{\tau,\varepsilon } ,s\right){\rm d}s \right]. \end{align*} A similar transformation can be used for the calculation of the random process $\dot m\left(\check\vartheta _{\tau,\varepsilon } ,t\right) $. \bigskip {\it Conditions} ${\cal C}$. ${\cal C}_1$. {\it For any $t_0\in (\tau ,T]$} \begin{align*} \inf_{\vartheta \in\Theta }{\rm I}_\tau ^{t_0}\left(\vartheta \right)>0. \end{align*} ${\cal C}_2$. {\it The functions $f\left(\cdot \right),\sigma \left(\cdot \right) $ are separated from zero and the functions $f\left(\cdot \right),b\left(\cdot \right) $ have two continuous derivatives w.r.t.
$\vartheta $.} \begin{proposition} \label{P3} Let the conditions ${\cal A}$,${\cal B}$,${\cal C}$ be fulfilled. Then the one-step MLE-process $\vartheta _{t,\varepsilon }^\star,\tau <t\leq T $ is consistent: for any $\nu >0$ \begin{align*} \Pb_{\vartheta _0}\left(\sup_{t_0\leq t\leq T} \left|\vartheta _{t,\varepsilon }^\star-\vartheta _0 \right|\geq \nu \right) \longrightarrow 0, \end{align*} and the random process $\xi _{t,\varepsilon },t_0\leq t\leq T$ converges in distribution in the measurable space $\left({\cal C}\left[t_0,T\right],{\scr B}\right)$ to the Gaussian process \begin{align*} \xi _{\cdot ,\varepsilon }\Longrightarrow \xi _{\cdot },\qquad \qquad \xi _{t }\sim {\cal N}\left(0,{\rm I}_\tau ^{t}\left(\vartheta_0 \right)^{-1}\right). \end{align*} \end{proposition} \begin{proof} The proof follows the same steps as the proof of the Proposition 3 in \cite{Kut20b}. The only difference is that in \cite{Kut20b} the function $f\left(\vartheta ,t\right)=f\left(t\right)$ does not depend on $\vartheta $. \end{proof} Note that if $\inf_{\vartheta \in\Theta } |\dot S\left(\vartheta ,\tau \right)|>0 $, then the condition ${\cal C}_1$ is fulfilled. \bigskip \noindent {\it Adaptive filtration}. \\ Let us consider one more possibility of using the SE $\check \vartheta _{\tau ,\varepsilon }$ and the corresponding one-step MLE-process $\vartheta _{t,\varepsilon }^\star,t_0\leq t\leq T $. Suppose that we have the partially observed two-dimensional linear stochastic system \eqref{3-6}, \eqref{3-7}. We cannot use the equations \eqref{3-10}, \eqref{3-11} because the true value $\vartheta _0$ is unknown.
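Recall the integration-by-parts device used above to define $m(\check\vartheta _{\tau ,\varepsilon },t)$; it is a pathwise identity and can be checked directly on a smooth path. A deterministic sketch (the functions $D$ and $X$ below are arbitrary smooth stand-ins of our choosing):

```python
import math

# smooth deterministic stand-ins for D(vartheta, s) and the observation path X_s
D = lambda s: math.cos(s)
Dp = lambda s: -math.sin(s)             # derivative of D in s
X = lambda s: math.sin(2.0 * s)
dX = lambda s: 2.0 * math.cos(2.0 * s)  # derivative of X (so dX_s = dX(s) ds)

t, n = 1.0, 100000
h = t / n
mid = [(i + 0.5) * h for i in range(n)]
lhs = sum(D(s) * dX(s) * h for s in mid)                 # int_0^t D(s) dX_s
rhs = D(t) * X(t) - sum(X(s) * Dp(s) * h for s in mid)   # integrated by parts
print(abs(lhs - rhs))  # the two sides agree up to discretization error
```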
Introduce the equations of adaptive filtration \begin{align*} {\rm d}\hat m\left(t\right)&=-q_\varepsilon \left(\vartheta _{t,\varepsilon }^\star ,t\right)\hat m\left(t\right){\rm d}t +\frac{\gamma \left(\vartheta _{t,\varepsilon }^\star ,t\right)f\left(\vartheta _{t,\varepsilon }^\star ,t\right)}{\varepsilon ^2\sigma \left(t\right)^2}{\rm d}X_t,\quad \hat m\left(0\right)=0,\\ \frac{\partial \gamma \left(\vartheta ,t\right)}{\partial t}&=-2a\left(\vartheta ,t\right)\gamma \left(\vartheta ,t\right)- \frac{\gamma \left(\vartheta ,t\right)^2f \left(\vartheta ,t\right)^2}{\varepsilon ^2\sigma \left(t\right)^2} +b\left(\vartheta ,t\right)^2 ,\; \gamma \left(\vartheta ,0\right)=0. \end{align*} Here the Riccati equation can be solved before the experiment. It is also possible to write this pair of equations in recurrent form \begin{align*} {\rm d}\tilde m\left(t\right)&=-q_\varepsilon \left(\vartheta _{t,\varepsilon }^\star ,t\right)\tilde m\left(t\right){\rm d}t +\frac{\hat \gamma \left(t\right)f\left(\vartheta _{t,\varepsilon }^\star ,t\right)}{\varepsilon ^2\sigma \left(t\right)^2}{\rm d}X_t,\quad \tilde m\left(0\right)=0,\\ \frac{\partial \hat \gamma \left(t\right)}{\partial t}&=-2a\left(\vartheta _{t,\varepsilon }^\star,t\right)\hat \gamma \left(t\right)- \frac{\hat \gamma \left(t\right)^2f \left(\vartheta _{t,\varepsilon }^\star ,t\right)^2}{\varepsilon ^2\sigma \left(t\right)^2} +b\left(\vartheta _{t,\varepsilon }^\star ,t\right)^2 ,\;\hat \gamma \left(0\right) =0. \end{align*} Note that the error of this approximation was described in \cite{Kut20b}, where it was shown that $\hat m\left(t\right)-m\left(\vartheta _0,t\right)=\varepsilon O_p\left(1\right) $.
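These recurrent equations admit a straightforward Euler implementation. In the sketch below the coefficients are constant illustrative values and the plug-in estimate is frozen at the true parameter, purely to expose the structure of the scheme; a full implementation would update $\vartheta _{t,\varepsilon }^\star$ along the way.

```python
import math, random

random.seed(1)

# Euler discretization of the system (3-6)-(3-7) together with the recurrent
# filtering equations; the plug-in value is frozen at the true parameter here
# (an illustrative simplification)
eps, T, n = 0.05, 1.0, 2000
dt = T / n
a = b = f = sigma = 1.0   # constant coefficients, illustrative choice

Y = m = gamma = 0.0
sq_err = sq_Y = 0.0
for i in range(n):
    dV = random.gauss(0.0, math.sqrt(dt))
    dW = random.gauss(0.0, math.sqrt(dt))
    dX = f * Y * dt + eps * sigma * dW          # observed increment
    gain = gamma * f / (eps ** 2 * sigma ** 2)
    m += -(a + gain * f) * m * dt + gain * dX   # filter step (q_eps = a + gain*f)
    gamma += (-2.0 * a * gamma
              - gamma ** 2 * f ** 2 / (eps ** 2 * sigma ** 2) + b ** 2) * dt
    Y += -a * Y * dt + b * dV                   # hidden state step
    sq_err += (m - Y) ** 2 * dt
    sq_Y += Y ** 2 * dt

print(sq_err, sq_Y)
```

With small $\varepsilon $ the integrated tracking error is of the order of the stationary value of $\gamma $, i.e., much smaller than the signal energy on average.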
\section{Possible generalizations} It is possible to consider the nonlinear partially observed system \begin{align*} {\rm d}Y_t&=-A\left(t,Y_t\right){\rm d}t+B\left(t,Y_t\right){\rm d}V_t,\qquad Y_0=0,\quad \\ {\rm d}X_t&=F\left(t,Y_t\right){\rm d}t+\varepsilon \sigma \left(t\right){\rm d}W_t,\qquad X_0=0,\quad 0\leq t\leq T. \end{align*} The limit (as $\varepsilon \rightarrow 0$) of $X_t$ has the derivative $N_\tau =F\left(\tau ,Y_\tau \right)$, whose quadratic variation is \begin{align*} \Psi _\tau =F\left(\tau ,Y_\tau \right)^2-2\int_{0}^{\tau }F\left(t,Y_t\right){\rm d}F\left(t,Y_t\right) =\int_{0}^{\tau }F'_y\left(t,Y_t\right)^2B\left(t,Y_t\right)^2{\rm d}t . \end{align*} The same kernel-type estimator of the derivative $N_\tau $ has the representation \begin{align*} \bar N_{\tau ,\varepsilon }&=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K_*\left(\frac{s-\tau }{\varphi _\varepsilon }\right){\rm d}X_s= \int_{-1}^{0 }K_*\left(u\right)F\left(\tau +\varphi _\varepsilon u, Y_{\tau +\varphi _\varepsilon u}\right) {\rm d}u \\ &\qquad +\frac{\varepsilon }{\varphi _\varepsilon }\int_{-1}^{0 }K_*\left(u\right)\sigma \left(\tau+\varphi _\varepsilon u \right){\rm d}W_{\tau +\varphi _\varepsilon u}=F\left(\tau , Y_{\tau }\right)\\ &\qquad +\sqrt{\varphi _\varepsilon }F'_y\left(\tau ,Y_\tau \right)B\left(\tau ,Y_\tau \right)\int_{-1}^{0 }K_*\left(u\right) v_{\tau ,\varepsilon }\left(u\right) {\rm d}u \left(1+o\left(1\right)\right)\\ &\qquad +\frac{\varepsilon}{ \sqrt{\varphi _\varepsilon }}\;\sigma \left(\tau \right) \int_{-1}^{0 }K_*\left(u\right) {\rm d}w_{\tau ,\varepsilon }\left(u\right) \left(1+o\left(1\right)\right), \end{align*} where \begin{align*} v_{\tau ,\varepsilon }\left(u\right)= \frac{V_{\tau +\varphi _\varepsilon u}-V_\tau }{\sqrt{\varphi _\varepsilon }},\qquad \quad w_{\tau ,\varepsilon }\left(u\right)=\frac{W_{\tau +\varphi _\varepsilon u}-W_\tau }{\sqrt{\varphi _\varepsilon }}.
\end{align*} Similar estimators \begin{align*} N_{t,\varepsilon }=\frac{1}{\varphi _\varepsilon }\int_{0}^{\tau }K\left(\frac{s-t}{\varphi _\varepsilon }\right){\rm d}X_s,\qquad \Psi _{\tau,\varepsilon }=\bar N_{\tau ,\varepsilon }^2-2\int_{0}^{\tau }N_{t,\varepsilon }{\rm d}N_{t,\varepsilon } , \end{align*} allow us, under regularity conditions, to prove the consistency $\Psi _{\tau,\varepsilon }\rightarrow \Psi _{\tau }$ and the weak convergence \begin{align*} \varepsilon ^{-1/2}\left(\Psi _{\tau,\varepsilon }-\Psi _{\tau } \right)\Longrightarrow Z_\tau ^* \end{align*} with the corresponding random variable $Z_\tau ^*$. The random process $\Psi _\tau,0\leq \tau \leq T $ depends on the unobservable component $Y^T$, and the knowledge of $\Psi _\tau$ cannot be used directly for the construction of the parameter estimates. That is why we considered the much simpler linear model \eqref{2-1}-\eqref{2-2}, where the applications to parameter estimation and the construction of the adaptive Kalman-Bucy filter can be carried out directly. Suppose now that we have the model of observations \eqref{3-1}-\eqref{3-2} and the parameter $\vartheta \in \Theta \subset {\cal R}^d, d>1$. Then we can take $0=\tau _0<\tau _1<\ldots<\tau _d $ and introduce the system of equations $\Psi _{\tau _0}^{\tau _1}\left(\vartheta \right)=\psi _1,\ldots, \Psi _{\tau _{d-1}}^{\tau _d}\left(\vartheta \right)=\psi _d $, where \begin{align*} \Psi _{\tau _k}^{\tau _{k+1}}\left(\vartheta \right)&=\Psi _{\tau _{k+1}}\left(\vartheta \right)-\Psi _{\tau _k}\left(\vartheta \right)\\ &= \int_{\tau _k}^{\tau _{k+1}}f\left(\vartheta ,t\right)^2b\left(\vartheta ,t\right)^2{\rm d}t,\qquad k=0,\ldots,d-1 . \end{align*} Suppose that this system of equations has a unique solution for all $\vartheta \in\Theta $; then we can once more define the corresponding substitution estimator $\check\vartheta _\varepsilon $ and, under regularity conditions, prove the consistency of this estimator and describe its limit distribution.
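For instance, with the hypothetical parametrization $f\left(\vartheta ,t\right)^2b\left(\vartheta ,t\right)^2=\vartheta _1+\vartheta _2t$ (our illustrative choice, $d=2$), the increments $\Psi _{\tau _k}^{\tau _{k+1}}\left(\vartheta \right)$ are linear in $\vartheta $ and the system can be solved explicitly:

```python
# hypothetical d = 2 parametrization with f(theta,t)^2 * b(theta,t)^2 = theta1 + theta2*t,
# so each increment Psi_{tau_k}^{tau_{k+1}} is linear in (theta1, theta2)
tau = [0.0, 0.5, 1.0]
theta = (1.5, 0.8)  # "true" values to be recovered

def increment(th, lo, hi):
    # int_lo^hi (theta1 + theta2*t) dt
    return th[0] * (hi - lo) + th[1] * (hi * hi - lo * lo) / 2.0

psi = [increment(theta, tau[k], tau[k + 1]) for k in range(2)]

# 2x2 linear system A theta = psi, solved by Cramer's rule
A = [[tau[k + 1] - tau[k], (tau[k + 1] ** 2 - tau[k] ** 2) / 2.0] for k in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
th1 = (psi[0] * A[1][1] - A[0][1] * psi[1]) / det
th2 = (A[0][0] * psi[1] - psi[0] * A[1][0]) / det
print(th1, th2)  # recovers theta = (1.5, 0.8) up to rounding
```

In practice the left-hand sides $\psi _k$ are replaced by the kernel estimates of the increments.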
{\bf Acknowledgment.} This research was supported by RSF project no 20-61-47043.
\section{On wave propagation in time modulated materials in the one-dimensional case} If the constituent materials have the same wave impedance, then there will be no reflected wave when the pulse encounters either a space or a time interface (see equations \eqref{ref_tran_time} and \eqref{ref_tran_space}), and its amplitude will not increase in time (see, e.g., \cite{Lurie:2006:WPE,Lurie:2009:MAW}). In the case of a space-time composite with the two constituent materials having different impedances, a sufficient condition to ensure that a pulse will maintain a constant amplitude as it propagates in the material is the following: the pulse has to encounter a space interface followed by a time interface between the same materials periodically. Geometrically, this means that any characteristic line in a space-time diagram will need to cross a horizontal interface after crossing a vertical interface. In order for such a design principle to be satisfied by both the wavefront and the scattered waves, one has to choose a field-pattern material \cite{Mattei:2017:FP,Mattei:2017:FPW,Mattei:2017:FPA,Movchan:2022:FWT}, for which the network of characteristic lines is locally periodic, so that the scattered waves interact according to a precise pattern. The only two field-pattern materials proposed in the literature (see \cite{Mattei:2017:FP,Mattei:2017:FPW,Mattei:2017:FPA,Movchan:2022:FWT}) for which a pulse propagates at constant amplitude, regardless of the impedance mismatch, are space-time checkerboards in which the constituent materials have the same wave speed. A generalization of such a geometry is the one shown in figure \ref{fig_Same_speed_geo_1d}, where $\delta\in[0,+\infty)$.
The right-going and left-going wavefronts will always have amplitude equal to the initial one when traveling through material 1 (gray), and will have amplitude equal to $2\gamma_1/(\gamma_1+\gamma_2)$ when traveling through material 2 (white), regardless of the impedance mismatch: the smaller the impedance mismatch, the smaller the difference in amplitude. The scattered waves will interfere in such a way that the oscillations in the wake of the wave will always have the same amplitude, which can be found analytically, as also shown in figure \ref{comparison_intensity_1d} (the numerical simulations are performed by using the algorithm in \cite{Gulizzi:2022:MWP}). The associated energy is depicted in red in figure \ref{comparison_energy_1d}. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{1d_same_speed} \caption{A field-pattern material where the two materials in white and gray have the same wave speed $c$. } \label{fig_Same_speed_geo_1d} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.95\textwidth]{1-time_history-1d} \caption{Snapshots of the amplitude of a Gaussian pulse of the form $u(x,0)=\exp(-100x^2)$, as it propagates through a homogeneous material with $\alpha=\beta=1$ (orange), a time laminate with $\alpha_1=\beta_1=1$, $\alpha_2=\beta_2=0.5$, and period of modulation $T=1$ (blue), and the space-time geometry in figure \ref{fig_Same_speed_geo_1d}, with $\delta=1$, $\alpha_1=\beta_1=1$ and $\alpha_2=\beta_2=0.5$ (red). } \label{comparison_intensity_1d} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{2-total_energy-1d} \caption{Energy associated with the wave propagation described in figure \ref{comparison_intensity_1d}.} \label{comparison_energy_1d} \end{figure} Another example of a field-pattern material with the two components having the same speed and satisfying our design principle is represented in figure \ref{fig_Same_spee_non_rec}. \begin{figure}[h!]
\centering \includegraphics[width=0.9\textwidth]{Non_rec_geo_1d} \caption{A field pattern material where the two materials in white and gray have the same wave speed $c$. } \label{fig_Same_spee_non_rec} \end{figure} A space-time geometry for which the wave propagates with constant amplitude regardless of both the wave impedance mismatch and the wave speed mismatch is the one depicted in figure \ref{fig_Gen_1d}, with \(\delta\in[0,\infty)\). \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{Gen_1d} \caption{A field pattern material where the gray material has wave speed $c_1$, the white material $c_2$, and the dark gray $c_3=c_1^2/c_2$. } \label{fig_Gen_1d} \end{figure} The wave speeds of the gray and white materials, $c_1$ and $c_2$, respectively, are arbitrary, while that of the dark gray material is given by $c_3= c_1^2/c_2$. If the initial amplitude of the right or left wavefront is $u_i$, then it will have amplitude equal to $u_i$ in material 1 (gray), \(2\gamma_1u_i/(\gamma_1+\gamma_2)\) in material 2 (white), and \(\gamma_1(\gamma_2+\gamma_3)u_i/(\gamma_2(\gamma_1+\gamma_2))\) in material 3 (dark gray). \begin{figure}[h!] \centering \includegraphics[width=0.95\textwidth]{3-time_history-1d} \caption{Snapshots of the amplitude of a Gaussian pulse of the form $u(x,0)=\exp(-100x^2)$, as it propagates through the space-time geometry in figure \ref{fig_Gen_1d}, with $\delta=0$, $\alpha_1=0.25$, $\beta_1=1$, $\alpha_2=\beta_2=1$, $\alpha_3=1$ and $\beta_3=4$. } \label{comparison_intensity_1d_general} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{4-total_energy-1d} \caption{Energy associated with the wave propagation described in figure \ref{comparison_intensity_1d_general}.} \label{comparison_energy_1d_general} \end{figure} This is the most general design that ensures propagation of the wave at constant amplitude, regardless of both the impedance mismatch and the wave speed mismatch.
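As a quick numerical check, the amplitude values quoted above can be encoded directly. This is a minimal sketch using the impedances $\gamma_i$ as inputs; the function names are ours and are not taken from the simulation algorithm of \cite{Gulizzi:2022:MWP}:

```python
def wavefront_amplitudes(u_i, g1, g2, g3=None):
    """Wavefront amplitude in each constituent material, given the initial
    amplitude u_i and the wave impedances g1, g2 (and g3 for the
    three-material design)."""
    amp1 = u_i                                        # material 1 (gray)
    amp2 = 2.0 * g1 * u_i / (g1 + g2)                 # material 2 (white)
    if g3 is None:
        return amp1, amp2
    amp3 = g1 * (g2 + g3) * u_i / (g2 * (g1 + g2))    # material 3 (dark gray)
    return amp1, amp2, amp3

def dark_gray_speed(c1, c2):
    """Wave speed required of the third material: c3 = c1^2 / c2."""
    return c1 ** 2 / c2
```

With matched impedances ($\gamma_1=\gamma_2=\gamma_3$) every amplitude reduces to $u_i$, consistent with the absence of scattering at impedance-matched interfaces.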
In the two-dimensional case, if the pulse is a unidirectional Gaussian, $u(x,y,0)=A\exp(-By^2)$, then the results obtained in the one-dimensional case trivially hold when the space-time geometries are the ones in figures \ref{fig_lam_check_2d} and \ref{fig_2d_diff_speed}. \begin{figure}[h!] \centering \begin{subfigure}{.45\textwidth} \includegraphics[width=0.95\textwidth]{lam_check_2d} \caption{} \label{fig_lam_check_2d} \end{subfigure} \begin{subfigure}{.45\textwidth} \includegraphics[width=0.95\textwidth]{2d_diff_speed} \caption{} \label{fig_2d_diff_speed} \end{subfigure} \caption{(a) A field pattern material where the two materials in white and gray have the same wave speed $c$. This is the two-dimensional extension of the geometry of figure \ref{fig_Same_speed_geo_1d} when $\delta=0$. (b) A field pattern material where the gray material has wave speed $c_1$, the white material $c_2$, and the dark gray $c_3=c_1^2/c_2$. This is the two-dimensional extension of the geometry of figure \ref{fig_Gen_1d} when $\delta=0$. } \end{figure} Propagation of a symmetrical Gaussian, $u(x,y,0)= A\exp(-B(x^2+y^2))$, in either one of the space-time geometries illustrated in figures \ref{fig_lam_check_2d} and \ref{fig_2d_diff_speed}, will result in a wavefront with constant amplitude in each material and a wake whose amplitude increases in time, due to the spatial asymmetry of the geometry, as shown in figure \ref{fig:11-time_history-2d}. \begin{figure} \centering \includegraphics[width = 0.9\textwidth]{11-time_history-2d.pdf} \caption {Amplitude of a Gaussian pulse of the form $u(x,y,0)= \exp(-100(x^2+y^2))$, as it propagates through a homogeneous material with $\alpha=\beta=1$ (orange), and the space-time geometry in figure \ref{fig_lam_check_2d}, with $\alpha_1=\beta_1=1$ and $\alpha_2=\beta_2=0.5$ (red).
} \label{fig:11-time_history-2d} \end{figure} However, the wake will have constant amplitude if one considers a symmetric space-time geometry like the ones illustrated in figures \ref{fig_Gen_2d_same_speeds} and \ref{fig_2d_checker}; see figure \ref{fig:3-time_history-2d}. Figure \ref{fig:4-total_energy-2d} shows the associated energy. These are the first two-dimensional field-pattern materials ever proposed. Given that the condition of PT-symmetry is always unbroken, they are promising candidates to create energy harvesting devices: as the wave travels without any instability in the material, the associated energy increases in time and can be harvested. \begin{figure}[h!] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{2d_full} \caption{} \label{fig_Gen_2d_same_speeds} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{2d_checker} \caption{} \label{fig_2d_checker} \end{subfigure} \caption{(a) A two-dimensional field pattern material where the matrix and the inclusions have the same wave speed $c$, and $\delta\in[0,\infty)$. The different shades of gray are only to emphasize the depth of the three-dimensional geometry. (b) The geometry in (a) when $\delta=0$.} \end{figure} \begin{figure} \centering \includegraphics[width = 0.9\textwidth]{9-time_history-2d.pdf} \caption {Amplitude of a Gaussian pulse of the form $u(x,y,0)= \exp(-100(x^2+y^2))$, as it propagates through a homogeneous material with $\alpha=\beta=1$ (orange), a time laminate with $\alpha_1=\beta_1=1$, $\alpha_2=\beta_2=0.5$, and modulation period $T=1$ (blue), and the space-time geometry in figure \ref{fig_2d_checker}, with $\alpha_1=\beta_1=1$ and $\alpha_2=\beta_2=0.5$ (red).
} \label{fig:3-time_history-2d} \end{figure} \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{10-total_energy-2d.pdf} \caption {Energy associated with the wave propagation described in figure \ref{fig:3-time_history-2d}.} \label{fig:4-total_energy-2d} \end{figure} Finally, given that the constant overall amplitude of the wave is guaranteed only if the time interfaces are applied at specific moments of time, we study the effects of noise. Following \cite{Apffel:2022:EIW}, we consider the time interfaces occurring at random times $T_n=n+\epsilon_n$, with $n$ integer and $\epsilon_n$ chosen independently and uniformly in $[-\sqrt{3}\sigma,\sqrt{3}\sigma]$, with $\sigma$ being the noise standard deviation. As shown in figures \ref{fig:noise_time_history-1d_laminate} and \ref{fig:noise_energy-1d_laminate} for a time laminate, the larger the disorder, the smaller the increase in the amplitude of the wave and in the energy, which is in agreement with \cite{Apffel:2022:EIW}. The opposite occurs when disorder is introduced in the space-time geometry illustrated in figure \ref{fig_Same_speed_geo_1d} (see figures \ref{fig:noise_time_history-1d_check} and \ref{fig:noise_energy-1d_check}), which suggests that the noise has to be kept to a minimum in order to maintain the overall amplitude of the wave constant and the growth of the energy slow. \begin{figure} \centering \includegraphics[width = \textwidth]{5-time_history-1d.pdf} \caption {Snapshots of the amplitude of a Gaussian pulse of the form $u(x,0)=\exp(-100x^2)$, as it propagates through a time laminate, with $\alpha_1=\beta_1=1$, $\alpha_2=\beta_2=0.5$, and the period of time modulation $T=1$, with various degrees of disorder.
} \label{fig:noise_time_history-1d_laminate} \end{figure} \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{6-total_energy-1d.pdf} \caption { The energy associated with the wave propagation described in figure \ref {fig:noise_time_history-1d_laminate}.} \label{fig:noise_energy-1d_laminate} \end{figure} \begin{figure} \centering \includegraphics[width = \textwidth]{7-time_history-1d.pdf} \caption {Snapshots of the amplitude of a Gaussian pulse of the form $u(x,0)=\exp(-100x^2)$, as it propagates through the space-time geometry of figure \ref{fig_Same_speed_geo_1d}, with $\alpha_1=\beta_1=1$ and $\alpha_2=\beta_2=0.5$, with various degrees of disorder. } \label{fig:noise_time_history-1d_check} \end{figure} \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{8-total_energy-1d.pdf} \caption {The energy associated with the wave propagation described in figure \ref {fig:noise_time_history-1d_check}.} \label{fig:noise_energy-1d_check} \end{figure} \section*{Acknowledgments} OM thanks the National Science Foundation for support through grant DMS-2008105. \bibliographystyle{abbrv}
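The randomized interface times used in the disorder study above can be sampled directly. This is a sketch of the noise model only (the function name is ours), not of the full simulation algorithm of \cite{Gulizzi:2022:MWP}:

```python
import math
import random

def interface_times(n_interfaces, sigma, rng):
    """Times T_n = n + eps_n, with eps_n drawn independently and uniformly
    from [-sqrt(3)*sigma, sqrt(3)*sigma], so that sigma is the standard
    deviation of the noise (following Apffel et al.)."""
    a = math.sqrt(3.0) * sigma
    return [n + rng.uniform(-a, a) for n in range(1, n_interfaces + 1)]
```

The factor $\sqrt{3}$ makes $\sigma$ the standard deviation of the uniform perturbation, so different noise distributions can be compared at equal $\sigma$.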
\section{Introduction} \label{sec:intro} In many simulation, graphics, simulated-annealing, cryptographic and Monte Carlo/Las Vegas programs, a substantial fraction of the time is used in generating pseudo-random numbers from the uniform, normal or other distributions, so methods of generating such numbers have received much \hbox{attention}. This paper is dedicated to the memory of Chris Wallace, and our intention is to outline Wallace's substantial contribution to several aspects of random number generation, both in hardware and software. In \S\ref{sec:hardware} we consider hardware random number generators (RNGs), and in \S\ref{sec:uniform} we mention (software) uniform RNGs. In \S\ref{sec:normal} we consider ``conventional'' normal random number generators, and in \S\ref{sec:fastnorm} we consider Wallace's new ``maximum entropy'' idea for normal RNGs that do not depend in an essential way on a source of uniform RNGs. This idea is aesthetically appealing (why bother to generate uniform random numbers just in order to transform them by some time-consuming process into normal random numbers?) and has the potential to give extremely fast normal RNGs. \section{Hardware RNGs} \label{sec:hardware} In some cryptographic applications it is important for the numbers to be genuinely random, in the sense of being unpredictable, and not merely ``pseudo-random'', in the sense of passing various statistical tests. For example, this is the case when generating ``one-time pads'', or when constructing random primes whose products are to be made public for use with the Rivest, Shamir and Adleman ``RSA'' public-key cryptosystem~\cite{RSA}, or when constructing exponents to be used in the Diffie-Hellman key exchange protocol or the El~Gamal public-key cryptosystem~\cite{MOV}. Wallace in~\cite{Wallace90} described a simple hardware device that could provide a stream of unpredictable 32-bit numbers at a rate of 64 Mbit/sec, using a 4~MHz clock. 
The device was connected to the memory-mapped I/O bus of a multiprocessor computer, and appeared to a software process as a single 32-bit word of memory whose content was different (and unpredictable) every time it was read. Technology has advanced since 1990 so an implementation using similar ideas could now use a much faster clock. This could, for example, be used in the implementation of Rabin's ``everlasting encryption'' scheme~\cite{ADR,AR,Maurer92,Rabin83}, which depends on the availability of a high-volume stream of random and unpredictable bits. \section{Uniform RNGs} \label{sec:uniform} In~\cite{Wallace89}, Wallace considered several ways of obtaining uniform pseudo-random number generators with period close to $2^{64}$ on machines with 32-bit words. There are many ways to accomplish this~\cite{rpb132,Ferrenberg92,Knuth,Marsaglia85,Marsaglia91}, but most of them require a large amount of state information. This can be a problem if several independent streams of random numbers are required simultaneously. With Wallace's proposal, only two 32-bit words of state information are required. \section{Conventional Normal RNGs} \label{sec:normal} Most popular algorithms for generating normally distributed pseudo-random numbers are based on some variant of the rejection method, pioneered by von Neumann~\cite{vonNeumann}. More recent references are~\cite{AD72,rpb023,Devroye,Forsythe,Kinderman,Leva92}. Wallace~\cite{Wallace76} contributed some elegant and efficient generators of this class. Rejection methods for normally distributed pseudo-random numbers require on average some number $U > 1$ of uniformly distributed numbers per normally distributed number. Thus, they can not be faster than the uniform random number generator, and are typically several times slower. 
Rejection methods for the normal distribution usually (though not always~\cite{rpb023,Forsythe}) involve the computation of functions such as $\log$, $\sin$, $\cos$, which is slow compared to the time required to generate a uniform pseudo-random number. Leva~\cite[Table~1]{Leva92} compared several of the better methods and found that they are at least five times slower than a fast uniform generator on the same machine. \section{Maximum Entropy Normal RNGs} \label{sec:fastnorm} Wallace~\cite{Wallace96} revolutionised normal random number generation by his discovery of a class of methods that do not depend in an essential way on uniform generators. Similar ideas can be used to generate pseudo-random numbers with some other distributions. In Wallace's paper~\cite{Wallace96} the uniform, Gaussian (normal) and exponential distributions are considered as maximum-entropy distributions subject to the following constraints: \begin{enumerate} \item[] Uniform: $0 \le x \le 1$ \item[] Gaussian: ${\rm{E}}(x^2) = 1$ \item[] Exponential: ${\rm{E}}(x) = 1$, $x \ge 0$. \end{enumerate} The idea of a maximum-entropy distribution is most easily seen in the discrete case of $N$ possibilities with probabilities $p_1, \ldots, p_N$. Subject to the constraints $p_j \ge 0$ and $\sum p_j = 1$, the uniform distribution $p_1 = \cdots = p_N = 1/N$ maximises the entropy $S = -\sum p_j \log p_j$. This can be proved using Lagrange multipliers. Similarly, the continuous distribution on $[0,1]$ that maximises $-\int_0^1 f(x) \log f(x) dx$ is the uniform distribution, and the continuous distribution on $(-\infty,+\infty)$ that maximises $-\int_{-\infty}^{+\infty} f(x) \log f(x) dx$ subject to $\int_{-\infty}^{+\infty} x^2 f(x) dx = 1$ is the Gaussian distribution. These statements can be proved using the calculus of variations. For the reader unfamiliar with Bayesian and maximum entropy methods, a good introduction is Jaynes~\cite{Jaynes1986a}. 
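The Lagrange-multiplier argument for the discrete case can be written out in one line:

```latex
% Maximize S = -\sum_j p_j \log p_j subject to \sum_j p_j = 1.
\mathcal{L}(p,\lambda) = -\sum_{j=1}^{N} p_j \log p_j
  - \lambda\Bigl(\sum_{j=1}^{N} p_j - 1\Bigr), \qquad
\frac{\partial \mathcal{L}}{\partial p_j} = -\log p_j - 1 - \lambda = 0
\;\Longrightarrow\; p_j = e^{-(1+\lambda)}.
```

Since the right-hand side is independent of $j$, the normalization constraint forces $p_j = 1/N$, the uniform distribution.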
An annotated bibliography is available at~\cite{Jaynes1994}. In the following we restrict our attention to the Gaussian case, since that is where Wallace's idea gives the most significant speedup over conventional methods. For example, Wallace's own implementation {\em FastNorm} is reported in~\cite[\S5]{Wallace96} to be only 13~percent slower than a generalised Fibonacci uniform random number generator on a RISC workstation. Wallace proposed his method in a Technical Report in 1994, and a revision of this Report appeared two years later~\cite{Wallace96} along with an implementation {\em fastnorm}. Some changes in the implementation were made in 1998, resulting in an improved implementation {\em fastnorm2}~\cite{fastnorm2}. There is a more recent and probably better implementation {\em fastnorm3}~\cite{fastnorm3}, but it was not available when our tests were performed, so we restrict our comments to {\em fastnorm2}. Wallace~\cite{Wallace96} describes two implementations~-- one using integer arithmetic, and the other using floating-point arithmetic. On the workstation on which he tested them, the integer version was faster, but this might not be true on more recent machines with faster floating-point hardware. Traditional normal RNGs are inefficient on vector processors. In 1993 the author compared various normal RNGs on vector processors and concluded that careful implementations of old methods such as the 1958 {\em Polar} method of Box, Muller and Marsaglia (see Knuth~\cite[Algorithm~P]{Knuth}) and the 1959 Box-Muller method~\cite{Knuth,Muller} were faster than more recent methods~\cite{Leva92} on vector processors produced by companies such as Cray, Fujitsu, and NEC: see~\cite{rpb141,Petersen88}. When Wallace's maximum-entropy idea appeared, it was clear that the landscape had changed, although the published implementation {\em fastnorm} was not intended to be efficient on vector processors.
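For concreteness, the Polar rejection method mentioned above can be sketched in a few lines. This is the standard textbook formulation (Knuth's Algorithm~P), not Wallace's code:

```python
import math
import random

def polar_pair(rng):
    """One step of the 1958 Polar (Box-Muller-Marsaglia) method: draw a
    point uniformly in the unit disc, then transform it into two
    independent N(0,1) variates.  On average more than one uniform is
    consumed per normal variate, illustrating the U > 1 cost of
    rejection methods."""
    while True:
        u = 2.0 * rng.random() - 1.0
        v = 2.0 * rng.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:                        # reject points outside the disc
            f = math.sqrt(-2.0 * math.log(s) / s)
            return u * f, v * f
```

The acceptance probability is $\pi/4$, so roughly $8/\pi \approx 2.55$ uniforms are used per pair of normals, on top of the $\log$ and $\sqrt{}$ evaluations.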
Thus, the author implemented an efficient vectorised version {\em rann4}~\cite{rpb170,rpb185} of Wallace's maximum-entropy idea. {\em rann4} and a more recent implementation {\em rannw}~\cite{rannw} are more than three times faster than the methods previously thought to be the most efficient on vector processors. \subsection{Wallace's {\em fastnorm} algorithm} Many uniform random number generators generate one or more new uniform variables from a set of previously-generated uniform variables. Wallace's idea is to apply the same principle to normal random number generators. Given a set of normally distributed random variables, we can generate a new set of normally distributed random variables by applying a linear transformation that respects the ``maximum entropy'' constraint. This avoids the time-consuming conversion of uniform to normal variables that is required in conventional normal random number generators (see \S\ref{sec:normal}). The key idea is: if $x$ is an $n$-vector of independent, identically distributed $N(0,1)$ random variables $x_1, \ldots, x_n$, and $Q$ is any $n \times n$ orthogonal matrix, then $y = Qx$ is another $n$-vector of independent, identically distributed $N(0,1)$ random variables. (Of course, the components $y_i$ of $y$ are dependent on the components $x_j$ of $x$.) To prove the claim, observe that the component $x_j$ has probability density $(2\pi)^{-1/2}\exp(-x_j^2/2)$, so the vector $x$ has probability density $(2\pi)^{-n/2}\exp(-r^2/2)$, where $r = \Vert x \Vert_2$. This density depends only on $r$, the distance of $x$ from the origin. However, since $Q$ is orthogonal, $\Vert y \Vert_2 = \Vert Qx \Vert_2 = \Vert x \Vert_2 = r$. Suppose that the $n$-vector $x$ is a pool of $n$ pseudo-random numbers that (we hope) are independent and normally distributed. We can generate a new pool $y = Qx$ by applying an orthogonal transformation~$Q$. However, several problems arise.
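The invariance $\Vert Qx \Vert_2 = \Vert x \Vert_2$ is easy to demonstrate numerically. The following is a pure-Python illustration in which $Q$ is a product of random plane rotations; it shows the invariance only, and is not the {\em fastnorm} inner loop:

```python
import math
import random

def refresh_pool(pool, rng, sweeps=4):
    """Replace a pool of (nominally) N(0,1) variates by an orthogonally
    transformed pool.  Q is built as a product of random plane (Givens)
    rotations, each of which is orthogonal, so the sum of squares of the
    pool is invariant: ||Q x|| = ||x||."""
    n = len(pool)
    for _ in range(sweeps * n):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        theta = rng.uniform(0.0, 2.0 * math.pi)
        c, s = math.cos(theta), math.sin(theta)
        pool[i], pool[j] = c * pool[i] + s * pool[j], -s * pool[i] + c * pool[j]
    return pool
```

Each rotation touches only two pool entries, so the cost per output value stays bounded, which is the point of using structured rather than general orthogonal matrices.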
\subsection{Undesirable correlations} \label{subsec:correlations} $y_i$ is correlated with $x_j$. In fact, $y_i = q_{i,j}x_j + \cdots$, so ${\rm{E}}(y_ix_j) = {\rm{E}}(q_{i,j}x_j^2) = q_{i,j}$. This problem can be overcome by applying several different orthogonal transformations $Q_1, Q_2, \ldots$ with a random choice of signs, so that when averaged over all transformations ${\rm{E}}(q_{i,j}) \approx 0$. \subsection{Cost of transformations} It is too expensive to apply a general $n \times n$ orthogonal transformation $Q$ to produce $n$ new random numbers. This would involve of order $n$ multiplications (and a similar number of additions) per random number generated. To overcome this problem, we can take $Q$ to have a special form, e.g.~in {\em rann4} we use a product of plane rotations of the form \[ R(\theta) = \left[\begin{array}{cc} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array} \right]\;, \] where $\theta$ varies, but is held constant within each inner loop. We do not need to compute trigonometric functions, since $\sin \theta = 2t/(1+t^2)$ and $\cos \theta = (1-t^2)/(1+t^2)$, where $t = \tan (\theta/2)$ varies; the angle $\theta$ is defined only for mathematical convenience and is never computed. In his implementation {\em fastnorm}, Wallace preferred to use $4 \times 4$ orthogonal matrices $A_1, A_2, A_3, A_4$, where \[ A_1 = \frac{1}{2}\left[\begin{array}{rrrr} 1 & 1 & -1 & 1 \\ 1 & -1 & 1 & 1 \\ 1 & -1 & -1 & -1 \\ -1 & -1 & -1 & 1 \end{array} \right]\;, \] and $A_2, A_3, A_4$ are similar. The advantage (on a machine with slow floating-point multiplication) is that multiplication of a $4$-vector by $A_1$ requires only seven additions and one division by two (for details see~\cite[\S2.1]{Wallace96}). The inner loop of the implementation is similar to the inner loop for the popular ``generalised Fibonacci'' uniform random number generators~\cite{Anderson,rpb132,Green,James,Knuth,Marsaglia85,Petersen94,Reiser}.
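The matrix $A_1$ can be checked directly: it is orthogonal, and a multiply by it reduces to a small butterfly of additions, subtractions and one halving. The grouping below is our own illustration of this multiplication-free evaluation; Wallace's exact operation count (seven additions, one division by two) is reported in \cite[\S2.1]{Wallace96}:

```python
# Wallace's first 4x4 orthogonal matrix (1/2 times a +/-1 matrix).
A1 = [[ 0.5,  0.5, -0.5,  0.5],
      [ 0.5, -0.5,  0.5,  0.5],
      [ 0.5, -0.5, -0.5, -0.5],
      [-0.5, -0.5, -0.5,  0.5]]

def apply_A1(x):
    """Multiply a 4-vector by A1 using only additions/subtractions
    and halving (a butterfly evaluation; no general multiplications)."""
    x1, x2, x3, x4 = x
    a, b = x1 + x4, x1 - x4
    c, d = x2 + x3, x2 - x3
    return [(a + d) / 2, (a - d) / 2, (b - c) / 2, -(b + c) / 2]
```

Because $A_1$ is orthogonal, this transformation preserves the sum of squares of the 4-vector, exactly as required by the maximum-entropy constraint.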
Wallace's implementation of {\em fastnorm} on a RISC workstation is about as fast as a good uniform random number generator on the same workstation. \subsection{Mixing} \label{subsec:mixing} As Wallace observes~\cite[\S2.2]{Wallace96}, it is desirable that any value in the pool should eventually contribute to every value in the pools formed after several passes. In other words, the transformation from one pool to the next should be strongly ``mixing''. In our experience this is a tricky aspect of the implementation of generators based on Wallace's idea~-- several attempts which appeared plausible did not produce acceptable random numbers (after transformation to uniform variates they failed various statistical tests in Marsaglia's Diehard package~\cite{Diehard}). In {\em fastnorm}, Wallace ensures mixing by regarding the pool of 1024 values as a $256 \times 4$ matrix which is (implicitly) transposed at each pass; an additional ad hoc permutation is applied by stepping some row indices with an odd stride (mod~256). For details see~\cite[\S2.2]{Wallace96}. In {\em rann4} we effectively apply permutations of the form $\pi_1(j) = \alpha j + \gamma {\;\bmod\;} n$, $\pi_2(j) = \beta j + \delta {\;\bmod\;} n$, where $\gcd(\alpha,n) = \gcd(\beta,n) = 1$. Since $n$ is a power of~2, any odd $\alpha$ and $\beta$ can be chosen. For details see~\cite[\S3]{rpb170}. Although the mixing transformations used in {\em fastnorm} and {\em rann4} appear satisfactory, they seem {\em ad hoc} and there is little helpful theory here~-- all we can do is apply empirical tests. \subsection{Chi-squared correction} Because $Q$ is orthogonal, $\Vert Qx \Vert_2 = \Vert x \Vert_2$, so the sum of squares of numbers in a pool remains constant. This is unsatisfactory, because if $x_1, \ldots, x_n$ were independent samples from the normal $N(0,1)$ distribution, we would expect $\sum_{1 \le i \le n}x_i^2$ to have a chi-squared distribution $\chi_\nu^2$, where $\nu = n$ is the pool size.
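The affine mixing permutations used in {\em rann4} are a one-liner, and their bijectivity condition $\gcd(\alpha,n)=1$ can be verified directly (a sketch; the function name is ours):

```python
import math

def affine_perm(n, alpha, gamma):
    """The mixing permutation pi(j) = alpha*j + gamma (mod n): a bijection
    on {0, ..., n-1} whenever gcd(alpha, n) = 1, so for n a power of two
    any odd alpha will do."""
    assert math.gcd(alpha, n) == 1
    return [(alpha * j + gamma) % n for j in range(n)]
```

Two such permutations with different $(\alpha,\gamma)$ pairs, applied on input and output indices, spread each pool value across well-separated positions in subsequent pools.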
To overcome this defect, Wallace suggests that one pseudo-random number from each pool should not be returned to the user, but should be used to approximate a random sample $S$ from the $\chi^2_\nu$ distribution. A scaling factor can be introduced to ensure that the sum of squares of the $\nu$ values in the pool (of which $\nu-1$ are returned to the user) is~$S$. If the routine is written to provide random numbers with mean $\mu$ and variance $\sigma^2$, then scaling by $S^{1/2}$ can be done at the same time as scaling by $\sigma$, so it is essentially free. There are several approximations to the $\chi^2_\nu$ distribution for large $\nu$. For example, the one used in {\em rann4} is \[ 2\chi^2_\nu \;\simeq\; \left(x + \sqrt{2\nu - 1}\right)^2\;, \] where $x$ is $N(0,1)$. It would not be much more expensive to use the (more accurate) Wilson-Hilferty approximation~\cite{Wilson} \[ \chi^2_\nu \;\simeq\; \nu\;\left(\left(\frac{2}{9\nu}\right)^{1/2}x \;+\; \left(1 - \frac{2}{9\nu}\right)\right)^3\;. \] Even better is \[ \chi^2_\nu \;\simeq\; A(x^2 - 1) \;+\; (2(\nu - A^2))^{1/2}\;x \;+\; \nu\;, \] where \[ A = 2\sqrt{\nu}\;\sin\left(\frac{1}{3}\arcsin\frac{1}{\sqrt{\nu}}\right) = \frac{2}{3} + O\left(\frac{1}{\nu}\right) \] satisfies the cubic equation $A^3 - 3\nu A + 2\nu = 0$. We can assume that $\nu$ is large ($\nu = 1024$ in {\em fastnorm}; $\nu$ depends on the size of the buffer provided by the user in {\em rann4/rannw}), so all of these approximations are sufficiently accurate. A slow but exact $\chi_\nu^2$ algorithm, such as that of Ahrens and Dieter~\cite{AD82}, is not required. In the above approximations to $\chi_\nu^2$, the variable $x$ was supposed to have a normal distribution. If only $n-1$ values are returned to the user from a pool of $n$ values, the remaining (scaled) value $x$ can be used to approximate $\chi_\nu^2$ for the next pool. This is a point where the implementations of {\em fastnorm} and {\em fastnorm2} differ.
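The three approximations can be compared numerically (a sketch; the function names are ours):

```python
import math

def chi2_fisher(nu, x):
    """2*chi2_nu ~ (x + sqrt(2*nu - 1))^2, the approximation used in rann4."""
    return 0.5 * (x + math.sqrt(2.0 * nu - 1.0)) ** 2

def chi2_wilson_hilferty(nu, x):
    """Wilson-Hilferty: chi2_nu ~ nu*(sqrt(2/(9 nu)) x + 1 - 2/(9 nu))^3."""
    h = 2.0 / (9.0 * nu)
    return nu * (math.sqrt(h) * x + (1.0 - h)) ** 3

def chi2_cubic(nu, x):
    """chi2_nu ~ A(x^2 - 1) + sqrt(2(nu - A^2)) x + nu, with
    A = 2 sqrt(nu) sin(arcsin(1/sqrt(nu))/3), a root of A^3 - 3 nu A + 2 nu = 0."""
    A = 2.0 * math.sqrt(nu) * math.sin(math.asin(1.0 / math.sqrt(nu)) / 3.0)
    return A * (x * x - 1.0) + math.sqrt(2.0 * (nu - A * A)) * x + nu
```

For $\nu = 1024$ one finds $A \approx 2/3$, and all three expressions evaluated at $x=0$ lie within one unit of the mean $\nu$ of the $\chi^2_\nu$ distribution.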
In {\em fastnorm}, $x$ is taken from the current pool, but in {\em fastnorm2} it is taken from the previous pool. The choice used in {\em fastnorm} is undesirable because a large value of $x$, and hence a large scaling factor from the $\chi_\nu^2$ approximation, is correlated with a small sum of squares of the remaining values in the pool (since the sum of squares including $x$ is invariant). \subsection{More subtle correlations} \label{subsec:squared} In \S\ref{subsec:correlations} we saw how, by using several orthogonal transformations, we could ensure that ${\rm{E}}(y_ix_j) \approx 0$. However, more subtle correlations persist. Consider the simplified model \[ \left[\begin{array}{c} y_1 \\ y_2 \end{array}\right] = R(\theta) \left[\begin{array}{c} x_1 \\ x_2 \end{array}\right]\;, \] where $R(\theta)$ is a plane rotation as above, and $\theta$ is distributed uniformly in $[0, 2\pi)$. We write $c = \cos \theta$, $s = \sin \theta$. Thus $y_1 = cx_1 + sx_2$, $y_2 = -sx_1 + cx_2$. Suppose that $x_1$ and $x_2$ are independent and normally distributed, with zero mean and unit variance. Then \[ E(x_1^2 y_1^2) = E(c^2)E(x_1^4) + E(s^2)E(x_1^2 x_2^2) = 2 \ne E(x_1^2)E(y_1^2)\;. \] In {\em fastnorm/fastnorm2} and {\em rann4/rannw}, similar effects occur, although the undesirable correlations are small and they occur between well-separated outputs (the separations are of the order of the pool size) because of the permutations used to provide mixing ({\em{cf}}~\S\ref{subsec:mixing}). \subsection{Other finite pool size effects} \label{subsec:finite} Chris Wallace~\cite{trouble} has observed a phenomenon that, like the one discussed in~\S\ref{subsec:squared}, becomes less significant as the pool size increases, but never disappears entirely for any finite pool size~$n$. Consider a rare event such as the occurrence of a large normal variate $x$ that is expected to occur say once in every $10n$ samples, i.e.~once in every 10 pools. 
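The value ${\rm E}(x_1^2 y_1^2) = 2$ derived above is easy to confirm by Monte Carlo (an illustration of the simplified two-variable model only):

```python
import math
import random

def estimate_x2y2(n_samples, rng):
    """Monte Carlo estimate of E(x1^2 y1^2) for y1 = cos(t) x1 + sin(t) x2,
    with t uniform on [0, 2*pi) and x1, x2 independent N(0,1).
    The exact value is 2, whereas E(x1^2) E(y1^2) = 1."""
    acc = 0.0
    for _ in range(n_samples):
        x1, x2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        t = rng.uniform(0.0, 2.0 * math.pi)
        y1 = math.cos(t) * x1 + math.sin(t) * x2
        acc += (x1 * y1) ** 2
    return acc / n_samples
```

The gap between the estimate and the product of the marginal second moments is exactly the subtle dependence discussed above.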
The ``energy'' $x^2$ is distributed over only a small number (four) of variables in the next pool. Thus we can expect one or more of these variables to be unusually large. Although the distribution of values considered over many pools is correct, it is more likely that rare events will occur in adjacent pools. It is possible to devise statistical tests that \hbox{detect} this behaviour and/or the correlations described in~\S\ref{subsec:squared}. However, we have not obtained any statistically significant \hbox{results} with a sample size of less than $10^4 n$. Clearly, one way to reduce (though not eliminate) the significance of such effects is to increase the pool size (easy for {\em rann4/rannw}). Another way is to discard some of the numbers produced by the random number generator~-- e.g.~we could use every third value, or the values in every third pool. This has an obvious effect on the speed of the generator, but because the underlying algorithm is so fast we can afford to do it and still have a random number generator that is faster than more conventional generators ({\em{cf}}~\S\ref{sec:normal}). \subsection{Use of uniform RNGs} Although normal generators based on the maximum entropy idea do not use uniform random numbers in any {\em essential} way, it is convenient to use a uniform RNG for purposes such as initialisation, selection of orthogonal transformations, etc. The advantage of the maximum entropy methods is that the number $U$ of uniformly distributed numbers required per normally distributed number is very small (of the order of $1/n$ for pool size $n$), whereas for rejection methods $U > 1$. If we choose a uniform random number generator with known long period, and use it at least once for each pool of normal random numbers (e.g. to select from a set of possible orthogonal transformations), then it is easy to guarantee that the period of the normal random number generator is at least as great as that of the uniform random number generator.
Thus, although any use of a uniform random number generator might be considered contrary to the spirit of the maximum entropy method, it does have the practical benefit of guaranteeing a long period. If (as is certainly possible) we avoided using a uniform generator except perhaps for initialisation, then we could not {\em guarantee} a long period, although a short period would be extremely unlikely, since it would require an implausible coincidence in the initialisation. \subsection{Summary} Although care needs to be taken in the implementation of normal random number generators like {\em fastnorm}, and the end-user should be aware of the small but unavoidable \hbox{defects} discussed in \S\S\ref{subsec:squared}-\ref{subsec:finite}, these generators have such a performance advantage over more conventional generators that they can not be ignored in applications where the speed of generation of pseudo-random numbers is critical. \section*{Acknowledgements} Comments by David Dowe and by Chris Wallace himself, on earlier versions of this paper, are gratefully acknowledged.
\section{Introduction} The electrical transport, notably the interlayer transport, of the layered high-$T_c$ superconductors (HTS) shows anomalous properties related to the quasi-two-dimensional structure, which have been studied very extensively in recent years. In the normal state the interlayer conductivity gives information on the quasiparticle properties \cite{Morozov00}. This quasiparticle behavior is anomalous in the normal state of the HTS, and understanding it is important for elucidating the still unexplained mechanism of superconductivity. The understanding of the fundamental interlayer transport properties of HTS is a challenging physical problem in its own right. One of the unusual features of the normal-state properties is the coexistence of a metallic-like temperature dependence of the in-plane resistivity $\rho _{ab}$ and a semiconducting-like behavior for the out-of-plane resistivity $\rho _c$ (see e.g. Refs. [\onlinecite{Martin,Brinceno,Foro}]). The very different behavior of the resistivities $\rho _{ab}$ and $\rho _c$ implies a 2D confinement and is \textit{a~priori} incompatible with a Fermi-liquid behavior \cite{Ando96}. Over the last few years, many theoretical and experimental investigations have been devoted to the transport properties of HTS. In particular, in the temperature region showing the semiconducting-like $c$-axis resistivity, most compounds reveal a negative out-of-plane magnetoresistance: the Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta }$ (Bi2212) \cite{Nakao,Yan,Wahl,Heine,Ando99,Morozov00}, the La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) \cite{Kimura,Hussey}, and the La-doped Bi$_2$Sr$_{2-x}$La$_x$CuO$ _{6+\delta }$ (BSLCO) system \cite{Ando96,Yoshizaki}.
The observed semiconducting-like $\rho _c(T)$ and negative out-of-plane magnetoresistance have been discussed in terms of different models, such as $c$-axis tunneling with a strong suppression by charge fluctuations excited in the process of tunneling \cite{Leggett}, $c$-axis hopping with interplanar scattering \cite{Hussey}, a reduction of the density of states due to superconducting fluctuations \cite{Nakao,Wahl,Heine}, and a pseudogap and/or spin gap opening in the density of states \cite{Yan,Kimura}. This is another striking feature of HTS. Of particular interest in the physics of carriers in strongly correlated and disordered systems to which HTS belong is the coexistence of superconductivity and localization. The latter phenomenon is one further peculiarity of HTS. Disorder in a metallic system can cause localization of the electronic states and lead to a metal-insulator transition \cite{Anderson}. The metal-insulator transition has been observed in the superconducting systems LSCO \cite{Boebinger} and Pr$_{2-x}$Ce$_x$CuO$_{4+\delta }$ \cite{Fournier} at optimal doping and BSLCO well inside the underdoped regime \cite{Ono00}. The insulating behavior in these systems is characterized by an in-plane resistivity $\rho _{ab}(T)$ which increases as $\log (1/T)$. These results demonstrate that the metal-insulator crossover in cuprates should not be universally associated with doping but rather with the observation of a unified $\log (1/T)$ temperature dependence of the resistivity suggesting a peculiar charge localization in the above mentioned cuprates \cite{Ono00}. It is difficult to obtain an overall picture of the metal-insulator transition in cuprates because only three systems have been studied, with strikingly different results obtained for the metal-insulator crossover for BSLCO when compared to those for LSCO and Pr$_{2-x}$Ce$_x$CuO$_{4+\delta }$. 
The anomalous transport should be more noticeable in the vicinity of the metal-insulator transition and in the $T\rightarrow 0$ limit, suggesting the existence of a close link between charge transport and strong electron correlation. However, up to now the behavior of cuprates in the normal state in the $T\rightarrow 0$ limit remains an open issue. One of the unresolved but all-important issues of high-temperature superconductivity is the connection of the normal-state correlations cited above, referred to as a pseudogap, to the origin of the high $T_c$ \cite{Ding96}. Many experiments (e.g. nuclear magnetic resonance \cite{Berthier}, photoemission \cite{Ding96}, tunneling \cite{Renner}) have provided evidence that in the normal state of underdoped HTS, a pseudogap exists in the electronic excitation spectra below a temperature $T^{*}>T_c$. This leads to a semiconducting-like behavior of the $c$-axis resistivity below $T^{*}$. Photoemission experiments (ARPES) have revealed d-wave symmetry in the pseudogap structure \cite{Ding96}. In scanning tunneling measurements on Bi2212, Renner \textit{et al.} \cite{Renner} have found this pseudogap to be present both in underdoped and overdoped samples, and to scale with the superconducting gap. Several groups have proposed that the pseudogap in the normal state can be seen as a precursor of superconductivity in which the superconducting phase coherence is suppressed by thermal or quantum fluctuations, e.g. Refs.~[\onlinecite{Emery,Hotta,Maly}]. More recently, from interlayer tunneling spectroscopy in the Bi2212 system, evidence for a definite difference between the superconducting gap and the pseudogap has been obtained \cite{Suzuki}.
This result is further reinforced by nuclear magnetic resonance measurements \cite{Zheng} on the underdoped cuprate YBa$_2$Cu$_4$O$_8$ ($T_c=74$ K), which showed that a magnetic field of 23 T, while reducing $T_c$ by 23\%, has no effect on the pseudogap, suggesting that it has an origin distinct from that of the superconductivity. In the case of a non-superconducting origin, a pseudogap can be formed in the spin part of the excitation spectrum in the context of spin-charge separation. Studies of the magnetic field dependence of the spin gap in near optimally doped YBa$_2$Cu$_3$O$_7$ in the normal state \cite{Gorny,Mitrovic}, probed using the spin-lattice relaxation rate, have yielded contradictory results. On the one hand, in an intensive study of the anisotropic transport in the Bi2212 system \cite{Wantanabe97}, the authors found that the onset of semiconducting-like $\rho _c(T)$ does not coincide with the opening of the spin gap seen in the in-plane resistivity $\rho _{ab}(T)$. On the other hand, the pseudogap opening temperature coincides with the onset of the semiconducting-like behavior observed in $\rho _c(T)$ in the YBa$_2$Cu$_3$O$_7$ system. Since the normal-state properties of the high-$T_c$ superconductors are known to depend strongly on the carrier concentration, the reported transport and magnetotransport data in the normal state cannot be easily categorized to form a common picture. There is currently no consensus concerning the temperature at which the pseudogap opens \cite{Batlogg}. An experimental investigation of the possible correlation between the pseudogap and the out-of-plane magnetoresistance in layered HTS at high magnetic fields is therefore of crucial importance. In previous measurements \cite{Vedeneev00a} we have studied the $c$-axis magnetoresistance in La-free Bi$_{2+x}$Sr$_{2-x}$Cu$_{1+y}$O$_{6+\delta }$ (Bi2201) single crystals with $T_c=9.5$ K under magnetic fields up to 28 T and over a temperature range $6-100$ K.
The observed isotropic behavior of the normal-state magnetoresistance with respect to the orientation of the magnetic field (perpendicular and parallel to the CuO$_2$ planes) shows that only the effect of the magnetic field on the spins (Zeeman effect) is important in the normal state. Such a result makes it difficult to explain the negative magnetoresistance with models based on superconductivity involving superconducting fluctuations or a pseudogap as a precursor of complete superconductivity. Shibauchi \textit{et al.} \cite{Shibauchi01} have reported $c$-axis resistivity measurements in fields up to $60$ T in underdoped and overdoped Bi2212 crystals, from which they made a first evaluation of the pseudogap closing field $H_{pg}$. These results again indicate the predominant role of spins over orbital effects in the formation of the pseudogap. However, because of the high $T_c=67-78$ K and the very high upper critical field $H_{c2}$ of Bi2212 crystals, the available $60$ T field was insufficient to suppress superconductivity at low temperatures, and to evaluate $H_{pg}$ the authors \cite{Shibauchi01} were forced to extrapolate their data. Direct measurements of $H_{pg}$ were performed only at $T>95$ K. Since little is known about the effect of the magnetic field, the $H$ dependence of the pseudogap in HTS remains highly controversial. In this paper we present, to our knowledge, the first measurements of the temperature dependence of both the in-plane $\rho _{ab}$ and the out-of-plane $\rho _c$ resistivities, and of the magnetoresistivities $\rho _{ab}(H)$ and $\rho _c(H)$, in the hole-doped La-free Bi2201 cuprate at underdoped and optimal doping concentrations, over a wide range of temperature down to $40$ mK. Due to the lack of a sufficient amount of Bi2201 single crystals, and especially of crystals with different doping levels, the transport properties of this system have not previously been investigated in detail.
Owing to the low critical temperature of Bi2201, $25$ T magnetic fields are sufficient to suppress superconductivity in these samples in the $T\rightarrow 0$ limit, even at optimal doping \cite{Vedeneev99}. We have suppressed superconductivity in single crystals using a 28~T resistive magnet at the Grenoble High Magnetic Field Laboratory, in order to measure the in-plane $R_{ab}$ and the out-of-plane $R_c$ resistances in the normal state in magnetic fields applied perpendicular and parallel to the $ab$-plane. \section{Experiment} \begin{table*} \caption{\label{tab:Table1}Summary of the properties of the investigated single crystals determined as described in the text: the carrier concentration per Cu atom ($p$), actual cationic compositions (Bi:Sr:Cu), ratios Bi/Sr, critical temperature ($T_c$), lattice parameter ($c$), disorder parameter ($k_Fl$), pseudogap closing field ($H_{pg}$), and the functional form of the magnetic field dependence of $\rho_c(H)$.} \begin{ruledtabular} \begin{tabular}{cccccccc} $p$ & Bi:Sr:Cu & Bi/Sr & $T_c$ (K) & $c$ ($\AA$) & $k_Fl$ & $H_{pg}$ (T) & Functional form of $\rho_c(H)$\\ \hline 0.12 & 2.66:1.33:0.85 & 2.0 & 2.3 & 24.57 & 0.6 & $\geq30$ & $\rho _c(H) \simeq \rho_{c0}+a_{1}H$\\ 0.13 & 2.62:1.38:0.87 & 1.9 & 3 & 24.575 & 7 & $\geq30$ & $\rho _c(H) = \rho_{c0}+a_{2}H+b_{2}H^2$\\ 0.16 & 2.39:1.61:1.02 & 1.48 & 9 & 24.59 & 20 & $\simeq 21$ & $\rho _c(H,T)=\rho_{c0}+a_{3}\exp (-H/b_{3}T)$\\ 0.17\footnote{Complete $\rho_{ab}(H)$ and $\rho_c(H)$ data are unavailable for this sample, so we are unable to estimate all parameters.} & 2.31:1.69:1.12 & 1.37 & 9.6 & 24.61 & - & - & -\\ 0.2 & 2.10:1.90:1.14 & 1.1 & 6.7 & 24.63 & 49 & $\simeq 16$ & $\rho _c(H,T)=\rho_{c0}+a_{4}\exp (-H/b_{4}T)$\\ \end{tabular} \end{ruledtabular} \end{table*} It is known that the stoichiometric composition Bi2201 is an insulating phase, and that single-phase superconducting crystals can be obtained by replacing Sr with either Bi or La \cite{Maeda}.
In these compounds the usual cation states of Sr, La and Bi are Sr$^{2+}$, La$^{3+}$ and Bi$^{3+}$, respectively. Therefore, the substitution of trivalent La or Bi for divalent Sr in the BSLCO or in the La-free Bi2201 samples reduces the hole concentration in the CuO$_2$ planes. For the Bi2201 samples, Fleming \textit{et al.} \cite{Fleming} and Harris \textit{et al.} \cite{Harris} found that as the Bi/Sr ratio increases and one moves toward the bottom of the phase diagram of the solid solution, the number of holes doped into the system decreases, thus pushing the system towards the hole-underdoped regime. The lower $T_c$, together with the larger residual resistivity, of Bi2201 in comparison with BSLCO (whose maximum $T_c$ is 38 K \cite{Ono00}) apparently suggests that the disorder due to (Sr,Bi) substitution is stronger in Bi2201 than the disorder due to (Sr,La) substitution \cite{Ono03}. We were able to grow high-quality single-phase superconducting Bi$_{2+x}$Sr$_{2-x}$Cu$_{1+y}$O$_{6+\delta }$ single crystals in the range $0.1<x<0.7$, provided that the Cu content was slightly increased \cite{Gorina94,Martovitsky}. The investigated Bi2201 single crystals were grown by a KCl-solution-melt free-growth method. A temperature gradient along the crucible results in the formation of a large closed cavity inside the solution-melt. In this case, the crystals are not in direct contact with the solidified melt in the crucible, thereby avoiding thermal stresses during cool-down. The crystals were grown in the temperature range $830 - 850~{}^{\circ}$C. The crystals had a platelet-like shape and mirror-like surfaces. The several tens of crystals grown in such a cavity, when characterized, are found to have almost identical properties. The quality of the crystals was systematically verified by measurements of the dc resistance, ac susceptibility, X-ray diffraction and scanning electron microscopy.
To summarize the properties of the investigated crystals, we have collected in Table \ref{tab:Table1} the values of $p$ (carrier concentration per Cu atom), the actual cationic compositions, the ratios Bi/Sr, $T_c$, and the lattice parameters $c$. The X-ray diffraction measurements were performed using a double-axis diffractometer. CuK$_{\alpha}$ radiation monochromatized by a pyrolytic graphite crystal was employed. Both $\theta$- and 2$\theta$-scans of the ($0 0 l 0$) sublattice reflections and the ($0 0 l \pm 1$) satellite reflections were used to assess structural perfection. These measurements were carried out before and after the low-temperature experiments in magnetic fields. The half-width of the sublattice reflections in the X-ray rocking curves for the optimally doped single crystals consisting of two or three blocks did not exceed $0.3^{\circ}$, whereas for the crystals consisting of a single block (with dimensions of only $0.3\times 0.3$~mm$^2$) it was less than $0.1^{\circ}$. This value is close to the resolution limit of the diffractometer. Both the ($\theta-2\theta$)- and $\theta$- X-ray diffraction profiles of the sublattice show no detectable structural defects. Thus, it can be concluded that even the sublattice contains no small-angle boundaries. For example, the half-width of both the main profile (0016) and the satellite reflections (00151), (00151)' in the X-ray rocking curves for the heavily underdoped single crystal with $p=0.13$ (with a large Bi excess) was about $0.2^{\circ}$. The composition of the crystals was studied using a Philips CM-30 electron microscope with a Link Analytical AN-95S energy-dispersive X-ray spectrometer. The actual cationic composition of each investigated crystal was measured at several different places on the crystal, and the scatter in the data was less than 7\%.
Complementary measurements of our Bi2201 single crystal composition performed at the Material Science Center, University of Groningen (The Netherlands) have shown that our crystals are slightly underdoped due to oxygen depletion. The dimensions of the crystals were $(0.4-0.8)$~mm $\times$ $(0.5-1)$~mm $\times$ $(3-10)~\mu$m. The $T_c$ value of the crystals formed by our free-growth method can be as high as $13$ K. However, we have found that the highest quality superconducting Bi2201 single crystals have a very narrow range of values of the lattice parameters $a=5.360-5.385~\AA$ and $c=24.57-24.63~\AA$. In this case the $T_c$ (midpoint) values of the crystals lie in the region $3.5-9.5$~K, in agreement with previous studies \cite{Sonder,Harris}. The transition width, defined by the $10\%$ and $90\%$ points of the superconducting transition, ranged from $0.5$ to $1.7$~K. It is known that overdoping or underdoping of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta }$ can be achieved by cation substitutions or by changes in the oxygen content \cite{Villard,Ooi}. However, in the low-$T_c$ Bi-based phase Bi2201, we have found that it is difficult to change the number of holes, because it is difficult to change the oxygen content. We made many attempts to change the $T_c$ of the single crystals, and hence the doping level, by means of annealing in oxygen or argon at different temperatures. However, a careful characterization of the annealed samples revealed that changes in $T_c$ greater than $\pm 1$ K were always accompanied by a severe degradation of the sample quality and the occurrence of phase inhomogeneity, in agreement with previous studies \cite{Sonder}. Most likely this is due to the fact that our crystals are close to the decomposition line. For this reason, in the following measurements, we used only high-quality \textit{as-grown} single crystals.
For the investigation, samples with different $T_c$ values were obtained by growing crystals with a different Bi content. A four-probe contact configuration, with symmetrical positions of the low-resistance contacts ($<1~\Omega$) on both $ab$-surfaces of the sample, was used for the measurements of the $R_{ab}$ and $R_c$ resistances. The temperature and magnetic field dependence of the resistances $R_{ab}(T,H)$ and $R_c(T,H)$ were measured using a lock-in amplifier driven at $\approx $10.7 Hz. The measured resistances were then transformed into the respective resistivities $\rho _{ab}$ and $\rho _c$ using the crystal dimensions and the ratio $R_2/R_1$ in the thin-sample limit of the Montgomery technique \cite{Logan}. For the low-temperature magnetotransport measurements, the crystals were placed directly inside the mixing chamber of a Kelvinox top-loading dilution refrigerator and studied with the magnetic field $H$ applied either parallel or perpendicular to the $c$-axis. Configurations with $\mathbf{H\perp J}$ and $\mathbf{H\parallel J}$ for the in-plane transport current $\mathbf{J}$ were used. For the out-of-plane transport current, the magnetic field $H$ was applied both parallel to the $c$-axis and parallel to the $ab$-plane in the longitudinal ($\mathbf{H\parallel c\parallel J}$) and transverse ($\mathbf{H\perp c\parallel J}$) configurations. The carrier concentration per Cu atom, $p$, in the Bi-based HTS cannot be unambiguously determined because the Bi ion does not have a fixed valency \cite{Idemoto}. However, Ando \textit{et al}. \cite{Ando00} have shown that the normalized Hall coefficients $R_HeN/V_0$ of various cuprates agree well in the temperature range 150 - 300 K, and that the data for La$_{2-x}$Sr$_x$CuO$_4$, for which $p$ is unambiguous, can be used to estimate the doping level in other systems. Here $e$, $N$ and $V_0$ are the electronic charge, the number of Cu atoms in the unit cell and the volume associated with each Cu atom, respectively.
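To set the scale of the quantities just defined, a crude single-band estimate can be useful: with $R_H=1/(ne)$ and $n=p/V_0$, one has $p\approx V_0/(eR_H)$. The sketch below is only an order-of-magnitude illustration of this relation; the lattice constants and the $R_H$ value in it are assumed numbers, and the actual doping estimates in this work rely on the comparison with the LSCO data described in the text, not on this formula.

```python
# Rough single-band estimate of the hole concentration per Cu atom,
# p ~ V0 / (e * R_H).  All numbers below (subcell lattice constants,
# Cu count per cell, R_H value) are assumptions for illustration only.
e = 1.602176634e-19                      # elementary charge (C)

# Assumed Bi2201 subcell: a ~ b ~ 5.37 A, c ~ 24.6 A, two Cu atoms per cell
a, b, c = 5.37e-10, 5.37e-10, 24.6e-10   # lattice constants (m)
n_cu = 2                                 # Cu atoms per unit cell (assumed)
V0 = a * b * c / n_cu                    # volume per Cu atom (m^3)

def hole_concentration(R_H):
    """Holes per Cu atom from a single-band Hall coefficient R_H (m^3/C)."""
    return V0 / (e * R_H)

# An R_H of ~1.4e-8 m^3/C would correspond to p ~ 0.16 holes per Cu
print(hole_concentration(1.4e-8))
```

Smaller $R_H$ (larger carrier density) maps to larger $p$, as expected for a single-band picture; the real analysis must correct for the temperature dependence of $R_H$ in the cuprates.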
In order to estimate the carrier concentration in our samples, following the method proposed by Ando \textit{et al}.\cite{Ando00}, we have measured the Hall coefficient $R_H$ in several crystals \cite{Vedeneev00b,Bel} and compared the magnitudes of the normalized Hall coefficient \cite{Bel} with the values reported for LSCO \cite{Hwang}. Subsequently, we estimated $p$ in the other samples using the empirical (nearly linear) relation between the excess Bi, $x$, and $p$. In the inset of Fig.~\ref{fig1}, we show the values of $T_c$ (closed circles) plotted vs $p$ for our Bi2201 single crystals (the dashed line is a guide to the eye). It was found that optimum doping occurs at $p\simeq 0.17$, below which $T_c(p)$ shows a rapid drop, as for the BSLCO system \cite{Ando00}. As can be seen in Fig.~\ref{fig1}, our samples lie mainly on the optimally doped and underdoped side of the phase diagram, and the data show the well-known parabolic behavior. \section{Metal-insulator crossover and absence of a $\mathbf{\log (1/T)}$ divergence in both $\rho _{ab}$ and $\rho _{c}$} \subsection{In-plane resistivity $\rho _{ab}$} In Fig.~\ref{fig1} (main panel) we show the temperature dependence of the in-plane resistivity $\rho _{ab}$ for five single crystals with $T_c=2.3$, $3$, $6.7$, $9.6$ and $9$ K (midpoint) at zero magnetic field, with $p$ values between $0.12$ and $0.2$. One can see that, as for other cuprates, the magnitude of $\rho _{ab}(T)$ increases with decreasing carrier concentration. The resistivity curves show an almost linear temperature dependence for the optimally doped sample, a positive curvature for the overdoped sample typical of other overdoped cuprates, and a linear temperature dependence for the underdoped samples with a characteristic upturn at low temperatures (``semiconducting'' behavior).
\begin{figure} \includegraphics[width=0.7\linewidth,angle=0,clip]{Fig1.eps} \caption{\label{fig1}In-plane resistivity $\rho_{ab}$ as a function of temperature for Bi2201 samples with different hole concentrations. The inset shows the superconducting critical temperature $T_c$, determined from $\rho_{ab}$, as a function of hole concentration.} \end{figure} \begin{figure} \includegraphics[width=0.7\linewidth,angle=0,clip]{Fig2.eps} \caption{\label{fig2} Semi-log plot of $\rho_{ab}$ versus temperature at various magnetic fields applied along the $c$-axis for four Bi2201 samples with different hole concentrations.} \end{figure} Fig.~\ref{fig2} shows a semi-logarithmic plot of $\rho _{ab}(T)$ at various fixed magnetic fields for selected samples from Fig.~\ref{fig1} in order to emphasize the low-temperature behavior. Because the 20 and 27.5 T data are almost identical, we believe that we are measuring the true normal-state resistivity at our highest magnetic fields. $\rho _{ab}$ for the two underdoped samples, $p=0.12$ (a) and $0.13$ (b), goes through a minimum and then, at temperatures $T\approx 30$ K (a) and $T\approx 10$ K (b), increases as $\log (1/T)$ as the temperature decreases, consistent with the onset of localization \cite{Jing91}. This behavior is in agreement with the results of Ono \textit{et al}. \cite{Ono00}, who found a logarithmic divergence of $\rho _{ab}(T)$ in underdoped BSLCO and LSCO samples. The $\log (1/T)$ dependence of $\rho _{ab}(T)$ reported by Ono \textit{et al}. \cite{Ono00} extended over temperatures from $30$ to $0.3$ K without any sign of saturation at low temperatures. However, as can be seen from Figs.~\ref{fig2}(a) and (b), $\rho _{ab}$ in Bi2201 shows a downward deviation from the $\log (1/T)$ dependence at ultra-low temperatures, $T=0.04-0.2$ K, in very high fields.
This deviation cannot be related to the proximity of the superconducting transition, since the behavior of $\rho _{ab}(T)$ in magnetic fields of $20$ T and $27.5$ T in Figs.~\ref{fig2}(a) and (b) is identical. Moreover, the data at $27.5$ T in Fig.~\ref{fig2}(b) actually lie below the $20$ T data. We interpret the observed onset of the saturation of $\rho _{ab}$ as a suppression of the localization by the magnetic field. \begin{figure} \includegraphics[width=0.7\linewidth,angle=0,clip]{Fig3.eps} \caption{\label{fig3} In-plane resistance as a function of magnetic field applied along the $c$-axis (a) and in the $ab$-plane (b), measured at different temperatures for the underdoped Bi2201 sample with $p=0.12$.} \end{figure} One can see in Fig.~\ref{fig2}(a) that in the most underdoped sample with $p=0.12$ at zero magnetic field there is a weak upturn in the region $3-4$ K, which we believe is a consequence of a competition between superconductivity and localization. To illustrate this, we show in Fig.~\ref{fig3} the magnetic-field dependence of $R_{ab}$ for the same sample with $p=0.12$ at temperatures from $40$ mK to nearly $6$ K for magnetic fields perpendicular (a) and parallel (b) to the $ab$-plane. The considerable difference in the $R_{ab}(H)$ curves between the two field orientations is a direct consequence of the anisotropy of the upper critical field in Bi2201, due to a difference in the orbital effect of the magnetic field on the one hand, and because the effect of the magnetic field on the localization is weaker in the parallel geometry on the other \cite{Lee}. As can be seen from Fig.~\ref{fig3}(b), at $T=3$ K and $2.1$ K a negative magnetoresistance appears, which results from the gradual suppression of localization effects by the magnetic field. This negative magnetoresistance also exerts some influence on the other $R_{ab}(H)$ curves at lower $T$; that is to say, the localization effects still persist.
In the perpendicular geometry, the magnetic field rapidly suppresses the superconductivity and the competition between superconductivity and localization is not observed, although the localization also exerts some influence on the $R_{ab}(H)$ curves in Fig.~\ref{fig3}(a) (the curves have pronounced break-points in their derivatives). Hence, we believe that the weak upturn in the zero-field $\rho_{ab}$ in Fig.~\ref{fig2}(a) is due to a competition between superconductivity and localization. Nevertheless, sample inhomogeneity on an atomic scale, owing to the heavy doping and the proximity of the insulating phase, cannot be ruled out at this composition. The negative magnetoresistance in the longitudinal geometry itself presents an additional difficulty for standard interaction theory. The same anomalous negative magnetoresistance in the longitudinal geometry at low temperatures has been observed previously in nonsuperconducting Bi2201 single crystals by Jing \textit{et al.}\cite{Jing91}. Since the authors \cite{Jing91} considered this phenomenon in detail, we will not discuss this topic further. However, it is important to note that in the second most underdoped sample, with $p=0.13$, the negative longitudinal magnetoresistance is not observed, in spite of the fact that $\rho _{ab}$ at $T < 10$ K in Fig.~\ref{fig2}(b) increases as $\log (1/T)$ and the localization persists. The data in Figs.~\ref{fig2} and \ref{fig3} show that the role of disorder in the field-induced normal state of underdoped cuprates remains an open question. Further experiments are needed to reliably determine the low-temperature behavior. In contrast, $\rho_{ab}(T)$ for the slightly underdoped and overdoped samples with $p=0.16$ [Fig.~\ref{fig2}(c)] and $0.2$ [Fig.~\ref{fig2}(d)] is constant below $5$ K and clearly shows a metallic behavior in the normal state. These data are in full agreement with the behavior of $\rho _{ab}(T)$ in the BSLCO and LSCO systems.
Thus, it seems likely that the metal-insulator transition in Bi2201 lies in the underdoped region ($p<0.16$), as for BSLCO. The observed metallic behavior gradually changes to an insulating behavior with decreasing carrier concentration. In a 2D system the disorder parameter $k_Fl$, where $k_F$ is the Fermi wave vector and $l$ the elastic scattering length, may serve as a measure of the disorder in the material \cite{Fiory}. From the residual resistivity $\rho _{ab}(T\rightarrow 0)$ in Fig.~\ref{fig2} and the lattice parameter $c$ we determined the in-plane disorder parameter $(k_Fl)_{ab}\simeq 0.6$, $7$, $20$, and $49$ for the samples with $p=0.12$, $0.13$, $0.16$, and $0.2$, respectively. For the samples with $p\geq 0.16$, $(k_Fl)_{ab}\gg 1$ and true metallic conduction in the CuO$_2$ layers takes place, whereas the sample with $p=0.12$ clearly shows $\log (1/T)$ behavior starting from $T\simeq 30$ K, where the value of $\rho _{ab}$ is consistent with $(k_Fl)_{ab}=1.3$ (it is important to note that the Mott limit corresponds to $k_Fl=1$). According to the optical data obtained by Tsvetkov \textit{et al}. \cite{Tsvetkov} on our Bi2201 single crystals, the effective mass in the $ab$-plane is $m^{\ast}=3m_{o}$, where $m_{o}$ is the free-electron mass. Using this value of $m^{\ast }$ together with the carrier density we can calculate $k_{F}$ and hence $l=60$ and $145~\AA$ at 10 K for the samples with $p = 0.17$ and $0.2$, respectively. This clearly indicates that the optimally doped and overdoped Bi2201 crystals are clean superconductors. For these calculations we have assumed a cylindrical Fermi surface with a highly anisotropic dispersion relation \cite{Kresin}. The large increase of $\rho _{ab}$ is striking when compared with the small change in $T_c$ as the hole doping $p$ is changed from $0.13$ to $0.12$. This phenomenon is not observed in the BSLCO system, which supports the suggestion of Ono \textit{et al}.
\cite{Ono03} that the disorder associated with (Sr,Bi) substitution is more harmful to the electronic system than the disorder due to (Sr,La) substitution. It is also possible that this results from the proximity of the insulating phase near the bottom of the phase diagram. \subsection{Out-of-plane resistivity $\rho _{c}$} Fig.~\ref{fig4} (main panel) shows the temperature dependence of the out-of-plane resistivity $\rho _c$ at zero magnetic field for four of the single crystals shown in Fig.~\ref{fig1}. The inset in Fig.~\ref{fig4} plots $\rho _c(T)$ on a semi-logarithmic scale, so that the behavior of all the samples can be compared on the same scale. As for the case of $\rho _{ab}(T)$, with decreasing $p$ the overall magnitude of $\rho _c$ increases, while its ``semiconducting'' temperature dependence becomes less marked. The exception is the overdoped sample with $p=0.2$, for which the $\rho _c$ value is larger than that of the sample with $p=0.16$ and which already shows a ``metallic'' temperature dependence of $\rho _c$ at high temperatures. Such behavior at high temperatures is often observed in overdoped cuprates. The larger value of $\rho _c$ in the overdoped Bi2201 sample is likely due to an excess of Bi and suggests a larger disorder in the electronic system compared to the in-plane disorder probed by $\rho _{ab}$ in the same sample. In all the underdoped crystals studied, we found that $\rho _c(T)$ at $H=0$~T varies as a power law $T^{-\alpha}$ over the temperature range $T=3-300$~K, with $\alpha =0.7-1.6$. \begin{figure} \includegraphics[width=0.7\linewidth,angle=0,clip]{Fig4.eps} \caption{\label{fig4} $c$-axis resistivity $\rho_{c}$ as a function of temperature for Bi2201 samples with different hole concentrations. The inset shows the same data plotted on a semi-log scale.} \end{figure} A $\log T$ plot of $\rho _c$ at various fixed magnetic fields for the samples from Fig.~\ref{fig4} is shown in Fig.~\ref{fig5} in order to emphasize again the low-temperature behavior.
A strong magnetic-field-induced suppression of the low-temperature upturn can be observed. In addition, $\rho _c(T)$ for the slightly underdoped and overdoped crystals shows a tendency to saturate. One can see that the $\log (1/T)$ behavior of $\rho _c$ in the normal state gradually changes to a metallic-like behavior with increasing carrier concentration. The onset of this behavior in $\rho _c(T)$ moves to higher temperatures with increasing carrier concentration. Our data in Fig.~\ref{fig5} are in striking contrast to the behavior of $\rho _c(T)$ reported for the underdoped LSCO samples \cite{Ando95} and the slightly overdoped BSLCO single crystals \cite{Ando96}, which exhibited a $\log (1/T)$ divergence in the normal state at $T\ll T_c$ (for temperatures down to 0.66 K). The metallic-like temperature dependence of the in-plane resistivity $\rho _{ab}$ and the semiconducting-like behavior of the out-of-plane resistivity $\rho _c$ reported by Ando \textit{et al}. \cite{Ando96} suggested that the $c$-axis transport is uncorrelated with the in-plane transport. On the other hand, the same $\log (1/T)$ divergence of $\rho _c(T)$ and $\rho _{ab}(T)$ in the underdoped LSCO samples gave the authors of Ref.~[\onlinecite{Ando95}] additional evidence against 2D localization. However, as is clear from Fig.~\ref{fig5}, we do not find any evidence for a $\log (1/T)$ divergence at low temperatures in underdoped Bi2201 single crystals, and the out-of-plane resistivity $\rho _c$ of the slightly underdoped and overdoped Bi2201 single crystals below $T_c$ in the highest applied fields shows almost no temperature dependence. This implies that the carrier-transport mechanism in the low-temperature limit, $T / T_c \to 0$, is the same for the $ab$ and $c$ directions. We note that Morozov \textit{et al.} \cite{Morozov00} have also observed a near saturation of $\rho _c$ for a Bi2212 crystal in the temperature region $22.5 - 30$~K at 55~T.
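The zero-field power-law exponents $\alpha$ quoted above ($\rho_c \propto T^{-\alpha}$, $\alpha = 0.7-1.6$) follow from straight-line fits in log-log coordinates. A minimal sketch of the procedure on synthetic data (the exponent, prefactor, and noise level below are invented purely for illustration, not measured values):

```python
# Extracting a power-law exponent alpha in rho_c ~ T^(-alpha) via a
# linear fit of log(rho_c) vs log(T); the slope is -alpha.  The data
# here are synthetic (alpha = 1.2 with 1% multiplicative noise),
# only to demonstrate the fitting procedure.
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(3.0, 300.0, 100)                    # temperature (K)
alpha_true = 1.2                                    # assumed exponent
rho_c = 5.0 * T**(-alpha_true) * rng.normal(1.0, 0.01, T.size)

slope, intercept = np.polyfit(np.log(T), np.log(rho_c), 1)
alpha_fit = -slope
print(alpha_fit)   # close to the assumed 1.2
```

With clean power-law data the recovered exponent is insensitive to the fitting window; in practice the window $T = 3-300$~K must exclude the region affected by superconducting fluctuations.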
\begin{figure} \includegraphics[width=0.7\linewidth,angle=0,clip]{Fig5.eps} \caption{\label{fig5} $\rho_{c}$ as a function of temperature, measured at various magnetic fields applied parallel to the $c$-axis, for Bi2201 samples with different hole concentrations.} \end{figure} A parameter more often used than $\rho_c$ to characterize the interlayer coupling is the resistivity anisotropy $\rho_c / \rho_{ab}$. The largest anisotropy ratio found here, $\rho _{c}/\rho _{ab}=2.2\times 10^{4}$, occurs just above $T_c$. We find that the zero-field anisotropy ratio is strongly temperature dependent for all samples except the most underdoped one with $p=0.12$, for which $\rho _{c}/\rho _{ab}$ is significantly smaller and depends only slightly on temperature, probably owing to the localization or enhanced disorder at this doping level. Such behavior is largely in agreement with the results of Wang \textit{et al.} \cite{Wang} and Ando \textit{et al.} \cite{Ando96,Ono03} previously reported for BSLCO samples, and implies that at high temperatures the mechanisms governing transport along and perpendicular to the CuO$_2$ planes are different. However, the normal-state anisotropy ratio $\rho_c / \rho_{ab}$ at low temperatures in very high magnetic fields becomes practically temperature independent for all samples. This behavior is in distinct contrast to Ref.~[\onlinecite{Ando96}], where $\rho_c / \rho_{ab}$ of BSLCO crystals continued to increase below $T_c$, providing evidence for the non-Fermi-liquid nature of the system. On the other hand, this result is consistent with the data for the underdoped LSCO samples reported in Refs.~[\onlinecite{Boebinger,Ando95}]. The saturation of the ratio $\rho_c / \rho_{ab}$ suggests that at low temperatures $\rho_{ab}$ and $\rho_c$ in very high magnetic fields are related, which is probably indicative of anisotropic three-dimensional charge transport induced by the magnetic field in this region.
In view of the remarkable difference between the temperature dependence of $\rho_c / \rho_{ab}$ in Bi2201, BSLCO and LSCO, we will not discuss this topic more fully here. \subsection{Pseudogap} \begin{figure*} \includegraphics[width=0.9\linewidth,angle=0,clip]{Fig6.eps} \caption{\label{fig6} $\rho_{c}$ as a function of magnetic field applied parallel to the $c$-axis at different temperatures for Bi2201 samples with various hole concentrations. The inset in (c) shows the relative variation $\Delta \rho_c / \rho_{c0} = [\rho_c(H,T) -\rho_c(0,T) ] / \rho_c(0,T)$ at different temperatures for both configurations at a magnetic field $H=28$~T. The inset in (d) shows the $c$-axis conductivity for the same sample ($p=0.2$).} \end{figure*} According to Ref.~[\onlinecite{Morozov00}], the interlayer transport results from a tunneling process, and quasiparticle tunneling dominates at higher fields. Since $\rho_c$ can give information about the quasiparticle density of states in the presence of a pseudogap, below we discuss the $\rho_c$ magnetoresistivity at high fields in our samples. The suppression of the semiconducting-like temperature dependence of $\rho_c(T)$ can be interpreted as the magnetic-field-induced suppression of the pseudogap, previously observed at temperatures above 5~K for slightly underdoped Bi2201 crystals with $T_c=9.5$~K \cite{Vedeneev00a} and in highly overdoped Bi2212 single crystals \cite{Shibauchi03} at $T > 20$~K ($T_c\approx 60$~K). In Fig.~\ref{fig6} we plot $\rho _c(H)$ versus magnetic field for four Bi2201 single crystals. For completeness, in Fig.~\ref{fig6}(c) we also display our data for the slightly underdoped ($p=0.16$) sample \cite{Vedeneev00a}. The inset in Fig.~\ref{fig6}(c) shows the relative variation $\Delta \rho_c / \rho_{c0} = [\rho_c(H,T) -\rho_c(0,T) ] / \rho_c(0,T)$ at different temperatures for both configurations at a magnetic field $H=28$~T.
After the onset of the magnetic-field-induced suppression of superconductivity, all samples show a positive magnetoresistance at low fields. The maximum in $\rho _c(H)$ observed at higher fields is followed by a region of negative magnetoresistance. Fig.~\ref{fig6} clearly shows the difference between the behavior of $\rho _c(H)$ in the underdoped and overdoped crystals. At low temperatures $\rho _c$ in the overdoped regime shows a much stronger negative magnetoresistance than that observed in the underdoped regime. Such a behavior of $\rho _c(H)$ has already been indicated in the Bi2212 system \cite{Shibauchi01}. However, such a large difference between the underdoped and overdoped regimes in the slope of the negative magnetoresistance, as seen in Fig.~\ref{fig6}, has not previously been observed. Furthermore, in the heavily underdoped sample ($p=0.12$), after an increase of $\rho _c$ at low fields due to the gradual suppression of superconductivity, $\rho _c$ decreases almost linearly with increasing magnetic field up to $\simeq 28$ T even at very low temperatures, in contrast to the power-law field dependence previously reported in Refs.~[\onlinecite{Morozov00,Shibauchi01}]. In Ref.~[\onlinecite{Shibauchi01}] it was found that the field at which the excess "semiconducting" resistivity $\Delta \rho _c(T)$ vanishes corresponds to the pseudogap closing field $H_{pg}$. A fit of a power-law dependence to $\Delta \rho _c(H)$ for magnetic fields above the maximum in $\rho _c(H)$ at different temperatures allowed the authors of Ref.~[\onlinecite{Shibauchi01}] to find the field at which $\Delta \rho _c$ vanishes and to evaluate $H_{pg}(T)$ beyond the available 60 T. Following this suggestion, we attempted linear fits to the nearly linear field dependences of $\Delta \rho_c(H)$ in a log-log plot for $p=0.12$ and $p=0.13$ in order to evaluate $H_{pg}$ at low temperatures in the underdoped samples. This evaluation gives unreasonably large values, $H_{pg}\approx 2000-3000$ T.
In Ref.~[\onlinecite{Shibauchi01}] it has also been found that $H_{pg}$ and $T^{*}$ are related through the Zeeman-like expression $g\mu _BH_{pg}=k_BT^{*}$, where $g=2$ is the electronic $g$-factor, $\mu _B$ the Bohr magneton, and $k_B$ the Boltzmann constant. In our case such an analysis leads to physically meaningless values $T^{*}=2700-4000$ K. Other polynomial extrapolation fits gave equally meaningless values of $H_{pg}$. These results probably indicate that the method suggested in Ref.~[\onlinecite{Shibauchi01}] for evaluating $H_{pg}$ is unsuccessful in the case of underdoped Bi2201 samples. We have also applied such an extrapolation fit to our $\rho _c(H)$ data for overdoped Bi2201. Fig.~\ref{fig7} shows a log-log plot of $\rho _c(H)$ at various fixed temperatures for the overdoped sample with $p=0.2$. It can be seen that the dashed straight lines, which are extensions of the linear dependences, point to the limiting value \cite{Shibauchi01} of $H_{pg}$, corresponding to the intersection at $25$ T. If $H_{pg}\simeq 25$ T, then using the Zeeman-like expression $T^{*}$ is found to be $\simeq 34$ K. In the overdoped Bi2212 samples, the negative magnetoresistance disappears at the same temperature at which the zero-field $\rho_c(T)$ deviates from its characteristic linear (metallic) high-temperature dependence \cite{Shibauchi01,Shibauchi03}. This temperature was identified in Ref.~[\onlinecite{Watanabe00}] as the pseudogap closing (opening) temperature $T^{*}$. However, as can be seen from Fig.~\ref{fig4}, the zero-field $\rho _c(T)$ of the sample with $p=0.2$ deviates from a metallic linear high-temperature dependence at $T\simeq 140$~K, so that $H_{pg}$ should be $\simeq 100$ T. The sample with $p=0.16$ does not show any linear $T$ dependence (metallic state) up to $270$ K (suggesting that $H_{pg}$ should be $>200$ T). This result is clearly inconsistent.
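Since the Zeeman-like conversion between $T^{*}$ and $H_{pg}$ recurs throughout this discussion, a minimal numerical sketch may be useful; the constants below are standard SI values, assumed here rather than quoted from the text:

```python
# Zeeman-like relation g * mu_B * H_pg = k_B * T*, with g = 2 as in the text.
K_B = 1.380649e-23       # Boltzmann constant, J/K (standard value, assumed)
MU_B = 9.2740100783e-24  # Bohr magneton, J/T (standard value, assumed)
G = 2.0                  # electronic g-factor

def h_pg_from_tstar(t_star):
    """Pseudogap closing field in tesla for a given T* in kelvin."""
    return K_B * t_star / (G * MU_B)

def tstar_from_h_pg(h_pg):
    """T* in kelvin for a given pseudogap closing field in tesla."""
    return G * MU_B * h_pg / K_B
```

For instance, `tstar_from_h_pg(25)` returns about 34 K, reproducing the estimate above.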
Moreover, in Fig.~\ref{fig7} one can see that even in the case of overdoped Bi2201 samples there is only a finite range of magnetic field over which the $\rho _c(H)$ data can be described by a power-law \cite{Shibauchi01,Shibauchi03} dependence $H^\alpha $ (dashed lines). This indicates that the method suggested in Ref.~[\onlinecite{Shibauchi01}] for evaluating $H_{pg}$ is unsuccessful in the case of overdoped Bi2201 samples as well. Once the magnetic field at which the negative magnetoresistance vanishes is identified with the pseudogap closing field $H_{pg}$, our results clearly show that in the Bi2201 samples investigated here the pseudogap closing temperature $T^{*}$ does not agree with the temperature at which the zero-field semiconducting-like temperature dependence of $\rho_c$ changes into a metallic dependence at higher temperatures, as it does in overdoped Bi2212 \cite{Shibauchi01}. Since the metallic-like linear temperature dependence of $\rho _c$ at $H=0$~T is a consequence of the high doping of the samples, which is inevitably accompanied by a severe degradation of sample quality, we are unable to reach an unambiguous conclusion concerning the relation of $H_{pg}$ to the deviation from the linear temperature dependence of $\rho_c$. \begin{figure} \includegraphics[width=0.7\linewidth,angle=0,clip]{Fig7.eps} \caption{\label{fig7} Log-log plot of $\rho_{c}$ versus magnetic field for various temperatures for the overdoped Bi2201 samples.} \end{figure} On the other hand, in our slightly underdoped ($p=0.16$) and overdoped ($p=0.2$) Bi2201 crystals the negative magnetoresistance vanishes and the magnetoresistance changes sign at $\approx 28$~K [the inset in Fig.~\ref{fig6}(c)] and at $\approx 22$~K [Fig.~\ref{fig6}(d)], respectively. Thus, $T^{*}$ should be close to these temperatures.
According to the Zeeman-like expression (Refs.~[\onlinecite{Shibauchi01,Shibauchi03}]), the pseudogap closing field scales with $T^{*}$ as $g\mu _BH_{pg}=k_BT^{*}$, which implies that $H_{pg}$ should be $\simeq 21$ T and $\simeq 16$ T, respectively. In the slightly underdoped ($p=0.16$) and overdoped ($p=0.2$) samples, the strong negative magnetoresistance rapidly weakens [Fig.~\ref{fig6}(c) and (d)] and clearly shows a saturation at high fields after more than a two-fold decrease. When the temperature-dependent data in Fig.~\ref{fig5}(c) and (d) are compared with Figs.~\ref{fig6}(c) and (d), it can be concluded that the observed negative magnetoresistance corresponds to a suppression of the semiconducting-like behavior in $\rho _c(T)$, which can in turn be interpreted as the magnetic-field-induced suppression of the pseudogap. In previous measurements \cite{Vedeneev99} we have shown that all $\rho _c(H)$ curves for a Bi2201 single crystal with $T_c\simeq 7$ K (overdoped) have a pronounced break-point in the derivative well above the $\rho _c(H)$ peak, which shifts to higher fields with decreasing temperature and disappears at $T\simeq T_c$. The field positions of these break-points in the derivative coincide with the $H_{c2}^{*}$ values determined from the $\rho _{ab}(H)$ curves. The values of $H$ at which the log-log plot of $\rho _c(H)$ deviates from a linear magnetic field dependence in Fig.~\ref{fig7} (shown by arrows) are in close agreement with the $H_{c2}^{*}$ values for a Bi2201 sample \cite{Vedeneev99} with $T_c\simeq 7$ K. As has been shown in Refs.~[\onlinecite{Shibauchi01,Shibauchi03}], $H_{pg}$ in Bi2212 does not depend on temperature for $T<T_c$, and as $T\rightarrow 0$, $H_{pg}$ and the upper critical field $H_{c2}$ coincide. This again suggests that the intersection point of the dashed straight lines in Fig.~\ref{fig7} is not $H_{pg}$ as observed in Bi2212.
On the other hand, if $H_{pg}$ is determined from the disappearance of the negative magnetoresistance then, as pointed out above, $H_{pg}\approx 21$ T ($p=0.16$) and $H_{pg}\approx 16$ T ($p=0.2$), so that $H_{pg}$ and $H_{c2}^{*}$ in Bi2201 are closely linked, as in Bi2212. Yurgens \textit{et al.}\cite{Yurgens} measured the intrinsic-tunneling spectra of La-doped Bi2201 ($T_c$=32 K) at $T$ = 4.5 - 300 K in order to determine the pseudogap phase diagram. Their phase diagrams show that for samples with $p \leq 0.16$ the pseudogap closing temperature $T^{*}$ is above 300 K. These temperatures lie outside the range of our measurements. For the overdoped sample with $p$=0.2, however, the value $T^{*}$=22 K found in our work agrees well with the pseudogap phase diagram of Yurgens \textit{et al.}\cite{Yurgens}. The negative magnetoresistance observed in our experiments shows a characteristic exponential decrease with magnetic field. Fig.~\ref{fig6}(c) and (d) show numerical fits (the dashed curves) calculated using the functional form $\rho _c(H,T)=\rho _{c0}+a\exp (-H/bT)$, where $a$ and $b$ are constants. Our data in the slightly underdoped, optimally doped and overdoped regimes are well described by such a functional form. The possibility of describing $\rho _c(H)$ by an expression exponential in $H/T$ implies that the magnetic field couples to the pseudogap via the Zeeman energy of the spin degrees of freedom \cite{Vedeneev00a,Shibauchi01}. In previous measurements of near optimally doped Bi2201 single crystals \cite{Vedeneev00a}, we found an isotropic behavior of the normal-state magnetoresistance with respect to the orientation of the magnetic field (perpendicular and parallel to the CuO$_2$ planes), which showed that only the effect of the magnetic field on the spins (Zeeman effect) is important in the normal state.
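At fixed $T$, the fit form $\rho_c(H,T)=\rho_{c0}+a\exp(-H/bT)$ can be linearised as $\ln(\rho_c-\rho_{c0})=\ln a - H/(bT)$. The sketch below is an illustrative least-squares implementation of this linearisation, not the fitting code actually used for the figures; $\rho_{c0}$ is assumed known, e.g. from the high-field saturation value:

```python
import math

def fit_exponential_mr(H, rho, T, rho_c0):
    """Fit rho(H) = rho_c0 + a*exp(-H/(b*T)) at fixed temperature T by
    ordinary least squares on ln(rho - rho_c0) versus H; returns (a, b)."""
    y = [math.log(r - rho_c0) for r in rho]
    n = len(H)
    hbar = sum(H) / n
    ybar = sum(y) / n
    slope = sum((h - hbar) * (yi - ybar) for h, yi in zip(H, y)) \
            / sum((h - hbar) ** 2 for h in H)
    a = math.exp(ybar - slope * hbar)   # intercept gives ln(a)
    b = -1.0 / (slope * T)              # slope gives -1/(b*T)
    return a, b
```

Feeding in synthetic data generated with known $a$ and $b$ recovers the input parameters.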
Here a negative magnetoresistance is observed for both geometries ($\mathbf{H\parallel c\parallel J}$ and $\mathbf{H\perp c\parallel J}$) in all investigated crystals. In contrast to the magnetoresistance in the superconducting state, the normal-state magnetoresistance of $\rho _c$ is independent of the field orientation with respect to the current direction. For the slightly underdoped sample ($p=0.16$), this behavior can be seen in the inset in Fig.~\ref{fig6}(c). We observed the same behavior for the heavily underdoped sample ($p=0.12$). The similarity of the normal-state data for the two field orientations probably excludes an explanation of the normal-state negative out-of-plane magnetoresistance in terms of superconductivity. It should especially be pointed out that in the slightly underdoped, optimally doped and overdoped samples, after the field-induced suppression of superconductivity and of the pseudogap in $\rho _c$ at high fields, the value of $\rho _c$ (after the saturation of the magnetoresistance) remains much higher than the expected un-gapped value. \emph{A semiconducting-like temperature dependence of the out-of-plane resistivity $\rho _c$ is partly conserved even after the suppression of the negative magnetoresistance at $H > H_{pg}$}. It seems reasonable to conclude that the semiconducting-like temperature dependence of $\rho _c$ is controlled not only by the magnetic-field-sensitive pseudogap. \begin{figure} \includegraphics[width=0.7\linewidth,angle=0,clip]{Fig8.eps} \caption{\label{fig8} $\rho _c$ and $\rho _{ab}$ measured as a function of magnetic field applied along the c-axis around $T=1.9$~K for the (a) underdoped and (b) optimally doped Bi2201 samples.} \end{figure} In previous measurements \cite{Vedeneev99} we have pointed out that in overdoped samples ($T_c=7$ K) the maxima of $\rho_c(H)$ coincide with the fields at which $\rho_{ab}(H) =0.4\rho_{ab}^n$, where $\rho _{ab}^n $ is the normal-state resistivity.
The observed maximum in $\rho_c(H)$ is a property of the mixed state and results from a competition between the ``semiconducting'' behavior of $\rho_c$ and the superconducting transition. In Fig.~\ref{fig8} we display the resistive transitions of the heavily underdoped [(a), p=0.12] and optimally doped [(b), p=0.17] samples in a magnetic field ${\bf H\Vert c\Vert J}$ at temperatures $\approx 1.9$ K. It can be seen that after the maximum in $\rho_c(H)$, the $\rho_{ab}(H)$ curves still show a strong positive magnetoresistance, clearly originating from the superconducting state. In this range of magnetic fields a superconducting gap still persists, and part of the current along the $c$-axis is a quasiparticle tunneling current. In the underdoped sample with lower $T_c$, the superconducting gap is small and closes rapidly when a magnetic field is applied. The resistive transition to the normal state is completed at $\approx 10$~T (the weak increase of the normal-state $\rho_{ab}$ resistivity is due to a magnetoresistance contribution at high magnetic fields). The negative magnetoresistance at $H > 10$ T displayed by $\rho_c(H)$ in Fig.~\ref{fig8}(a) is isotropic and is due to the magnetic-field-induced suppression of the pseudogap. Since in the underdoped samples the magnitude of the pseudogap is large, the effect of the magnetic field is small and the negative magnetoresistance does not saturate in the available magnetic field range [Fig.~\ref{fig6}(a),(b)]. In the optimally doped [Fig.~\ref{fig8}(b)], slightly underdoped and overdoped [Fig.~\ref{fig6}(c),(d)] samples, because $H_{c2}$ is large, the major contribution to the anisotropic negative magnetoresistance in the $\rho_c(H)$ curves is due to the gradual decrease of the superconducting gap (in Fig.~\ref{fig7}, the end of the superconducting transition is indicated by arrows).
Since in these samples the magnitude of the pseudogap is small, the negative magnetoresistance connected with the pseudogap rapidly saturates following the superconducting transition [Fig.~\ref{fig6}(c),(d)]. However, as previously pointed out, the value of $\rho _c$ remains much higher than the expected un-gapped value. Recently it has been shown that the negative magnetoresistance in the superconducting state can also be described by the Zeeman-like expression \cite{Pieri}. This explains why it is possible to describe all $\rho _c(H)$ curves in Fig.~\ref{fig6}(c),(d) completely, in both the superconducting and normal states, using an expression exponential in $H/T$. \section{Conclusion} We have presented the temperature dependences of both the in-plane $\rho _{ab}$ and out-of-plane $\rho _c$ resistivities and the magnetoresistivities $\rho _{ab}(H)$ and $\rho _c(H)$ of hole-doped La-free Bi2201 cuprate over a wide doping range and over a wide range of temperatures down to $40$ mK. We have shown that the temperature and magnetic field dependences of the in-plane and out-of-plane resistivities are determined by localization, the superconducting gap and the normal-state pseudogap. The data suggest that the metal-to-insulator crossover in Bi2201 lies in the underdoped region ($p<0.16$). The metallic behavior of $\rho _{ab}(T)$ gradually changes to insulating behavior with decreasing carrier concentration. We did not observe any evidence for a $\log (1/T)$ divergence of $\rho _{ab}$ and $\rho _{c}$ at very low temperatures in underdoped Bi2201 single crystals. The out-of-plane resistivity $\rho _c$ of the slightly underdoped and overdoped samples below $T_c$ in the highest applied fields shows almost no temperature dependence.
Our data strongly suggest that the negative out-of-plane magnetoresistance is governed by different mechanisms: the main contribution comes from the transition to the normal state, which gives rise to a strong magnetic field dependence, while the non-superconducting pseudogap shows a much weaker magnetic field dependence and therefore gives only a small contribution to the negative out-of-plane magnetoresistance. A semiconducting-like temperature dependence of the out-of-plane resistivity $\rho _c$ is conserved in part even after the suppression of the negative magnetoresistance and at magnetic fields above the pseudogap closing field $H_{pg}$. Our data support the conclusion that the pseudogap does not correlate with the superconducting gap. \begin{acknowledgments} We thank V.~P.~Martovitskii for the careful X-ray studies of the single crystals. This work has been partially supported by NATO grant PST.CLG.979896. \end{acknowledgments}
\section{When was the High Energy Physics born?} It was born twice. First time -- 100 years ago, when the radioactivity was discovered in 1896. Second time -- 50 years ago, after World War II, when the first large accelerators of charged particles started to create new elementary particles. At the dawn of the century -- when X-rays, radioactivity and the atomic nucleus were discovered -- high energies stretched from thousands to millions of electronvolts (from KeV's to MeV's). At present they stretch from billions to trillions of eV's (from GeV's to TeV's). Let me explain the notation: K -- kilo, M -- mega, G -- giga, T -- tera; $1 \mbox{\rm KeV} = 10^3$ eV, $1 \mbox{\rm MeV} = 10^6$ eV, $1 \mbox{\rm GeV} = 10^9$ eV, $1 \mbox{\rm TeV} = 10^{12}$ eV. $1 \mbox{\rm eV} = 1 e \times 1 \mbox{\rm V}$ is the energy which an electron acquires by crossing an accelerating potential difference of 1 Volt. Let me remind you that $1 \mbox{\rm Ampere} = 6 \cdot 10^{18}$ electrons/sec. Thus the values of energies under discussion may seem quite unimpressive. But they are very large, if one takes into account that they are carried by single particles! \section{Why do we need the highest energies?} In order to create and to study fundamental particles. You know, of course, the famous formula by Einstein, which literally shook the world: $$ E_0 = mc^2 \;\; , $$ where $E_0$ is the rest energy of a body (the index zero indicates that the body is at rest, its velocity being equal to zero), $m$ is the mass of the body, and $c= 3 \cdot 10^8$ m/sec is the velocity of light. In experiments on accelerators the kinetic energy of accelerated particles is transformed, during collisions, into the rest energy (mass) of created particles. The higher the energy of an accelerator, the heavier the particles which it can produce.
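As a quick numerical illustration of $E_0 = mc^2$, one can convert particle masses into electronvolts; the kilogram mass values below are standard textbook numbers, not quoted in this talk:

```python
C = 3.0e8        # speed of light, m/s (the value used in the text)
EV = 1.602e-19   # one electronvolt in joules

def rest_energy_ev(mass_kg):
    """Rest energy E0 = m * c^2, expressed in electronvolts."""
    return mass_kg * C ** 2 / EV

# Standard mass values (assumed, not from the talk)
M_ELECTRON = 9.109e-31   # kg
M_PROTON = 1.673e-27     # kg
```

Here `rest_energy_ev(M_ELECTRON)` gives about 0.51 MeV and `rest_energy_ev(M_PROTON)` about 0.94 GeV, the values quoted later in the talk.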
\section{Which particles are called fundamental (elementary)?} Those which, at the present level of knowledge, do not consist of more fundamental ones ("the smallest matreshkas"). Atoms are not elementary: they consist of electrons and nuclei. Nuclei consist of protons $p$ and neutrons $n$. Protons and neutrons are also not elementary; they consist of quarks of two types, $u$-quarks and $d$-quarks: $$ p = uud \; , \;\; n = ddu \;\; . $$ Quarks and electrons are elementary at the present level of knowledge. Photons -- the particles of light -- are elementary as well. Quarks and electrons belong to a family of particles called fundamental fermions. Photons belong to another family, that of fundamental bosons. \section{What is the difference between bosons and fermions?} They differ by the value of their spin. Particles are like miniature tops. Spin is the proper rotational (angular) momentum of a particle. Spin is measured in units of $\hbar$: $$ \hbar = 10^{-34} \mbox{\rm Joules} \cdot \mbox{\rm sec} = 10^{-34} \mbox{\rm kg} \cdot \mbox{\rm (m/sec)} \cdot \mbox{\rm m} $$ In order to visualize the value of $\hbar$, imagine a one gram weight fixed on a rotating 1 cm long stick so that its velocity is 1 cm/sec. And now reduce the mass by 27 orders of magnitude, the length by 10 orders, and increase the velocity by 10 orders. (In order to reduce a stick by 10 orders of magnitude, one has to break it into two halves, then to break, in the same way, one of the halves, and to repeat this procedure about 33 times "only".) The constant $\hbar$ is one of the most fundamental constants of nature. It was introduced by the German physicist Max Planck in 1900. Bosons (named after the Indian physicist Bose) have integer values of spin in units of $\hbar$. Fermions (named after the Italian physicist Fermi) have half-integer spin. The spin of the photon is $1 \hbar$, or simply 1; the spin of the electron is 1/2. Fermions are individualists: there can exist, in a given state, only one fermion of a given type.
This property explains the pattern of atomic levels and hence the Mendeleev Table. Bosons are collectivists: all bosons of a given type prefer to be in one state. This property is the basis of the laser. The amazing properties of bosons and fermions are connected with the basic principles of modern quantum physics. I do not know of any simple graphic explanation of these properties. Now we are ready to answer the key question of this talk. \section{What does the "Mendeleev Table" of fundamental particles look like?} The Table contains 16 particles: 4 bosons and 12 fermions. \newpage \begin{center} {\it Fundamental bosons.} \end{center} \vspace{2mm} The photon, $\gamma$, the $W$ and $Z$ bosons, and the gluon, $g$. All of them have spin 1. The photon, gluon and $Z$ boson are electrically neutral: their electric charge $Q$ is equal to zero. For the $W$ bosons $Q = \pm 1$. \vspace{3mm} \begin{center} {\it Fundamental fermions.} \end{center} \vspace{2mm} The twelve fermions are subdivided into three generations, with two quarks and two leptons in each. (The term lepton means the electron and its "relatives".) The first three columns of the following table represent the three generations of fundamental fermions. The fourth column shows their electric charges $Q$. \vspace{3mm} \begin{center} \begin{tabular}{|ll|c|c|c||c|} \hline generation & & 1st & 2nd & 3rd & $Q$ \\ \hline & upper & $u$ & $c$ & $t$ & +2/3 \\ quarks & & & & & \\ & lower & $d$ & $s$ & $b$ & -1/3 \\ \hline & neutrinos & $\nu_e$ & $\nu_{\mu}$ & $\nu_{\tau}$ & 0 \\ leptons & & & & & \\ & charged leptons & $e$ & $\mu$ & $\tau$ & -1 \\ \hline \end{tabular} \end{center} \vspace{3mm} As seen from the table, there are two types of quarks: "upper" and "lower". (The symbols $u$ and $d$ stem from "up" and "down", whilst $t$ and $b$ -- from "top" and "bottom"; $c$ and $s$ denote the so-called "charmed" and "strange" quarks.) As for the leptons, they are subdivided into neutral ones (neutrinos) and charged ones.
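The quark charges in the table can be checked against the composite particles mentioned earlier; a small sketch, using exact fractions to avoid rounding:

```python
from fractions import Fraction

# Electric charges, in units of e, taken from the table of fundamental fermions
Q = {
    'u': Fraction(2, 3), 'c': Fraction(2, 3), 't': Fraction(2, 3),
    'd': Fraction(-1, 3), 's': Fraction(-1, 3), 'b': Fraction(-1, 3),
}

def charge(quark_content):
    """Total electric charge of a hadron given as a string of quark symbols."""
    return sum(Q[q] for q in quark_content)
```

With the quark contents $p = uud$ and $n = ddu$ from the text, `charge('uud')` gives 1 and `charge('ddu')` gives 0, the known charges of the proton and the neutron.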
Each charged fermion has its antiparticle: $\bar{u}$, $\bar{d}$, $\bar{c}$, $\bar{s}$, $\bar{t}$, $\bar{b}$, $e^+$, $\mu^+$, $\tau^+$. For neutrinos this is still uncertain. One of the important open problems is to establish whether or not antineutrinos are identical to neutrinos. Let us note that 5 fundamental fermions ($c, b, t, \tau, \nu_{\tau}$) and 3 fundamental bosons ($g, W, Z$) have been discovered after 1973. \section{Why are the fundamental bosons needed?} The main role of the known fundamental bosons is to serve as mediators of forces. The exchange of photons creates the electromagnetic forces which underlie atomic and molecular physics, the physics of the solid state and of plasma, optics, acoustics, chemistry, and biology. The exchange of gluons creates the strong interactions which confine quarks inside protons, neutrons and hundreds of other particles which are built of quarks and gluons and which are called hadrons. The exchange of $W$ and $Z$ bosons creates the weak forces. Theorists have no doubt that gravity is produced by the exchange of gravitons, fundamental bosons with spin equal to 2. However, to observe these particles is extremely difficult. Even our great-grandchildren will be unable to detect them. \section{Why is the first generation of fermions needed?} The world around us and we ourselves are built from electrons and $u$- and $d$-quarks. Without weak interactions with the participation of $\nu_e$ there would be no stars, no sun, no life. A complex chain of nuclear reactions involving electron neutrinos inside the sun transforms hydrogen into helium and then into heavier elements. Protons "burn" with the release of energy and neutrinos: $$ 2 e^- + 4p \to ~^4He + 2\nu_e + 27 \; \mbox{\rm MeV} \;\; . $$ The flux of solar neutrinos is enormous: 70 billion per second per cm$^2$ on the earth. But we are practically transparent for them. Only special multikilotonne detectors can capture a few particles from this flux.
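The 27 MeV quoted in the burning reaction above can be recovered from rest masses; in this rough sketch the mass values (in MeV$/c^2$) are standard numbers not given in the talk, and the neutrinos are treated as massless:

```python
# Rest masses in MeV/c^2 (standard values, assumed here for illustration)
M_PROTON = 938.272
M_ELECTRON = 0.511
M_HE4 = 3727.379   # mass of the bare He-4 nucleus

# Energy balance of 2 e- + 4 p -> He-4 + 2 nu_e + Q
q_value = 4 * M_PROTON + 2 * M_ELECTRON - M_HE4
print(round(q_value))  # -> 27 (MeV)
```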
(One such detector operates in our country, in the Baksan valley.) The number of captured $\nu_e$'s turned out to be approximately a factor of two less than had been expected. Maybe, on their way from the center of the sun to the earth, the $\nu_e$'s partly transform into $\nu_{\mu}$'s or $\nu_{\tau}$'s, which leave no trace in detectors sensitive only to $\nu_e$'s? \section{Why are the second and the third generations of fermions needed?} We have no definite answer to this question. Maybe they are needed in order to preserve some amount of electrons and protons from the time of the "big bang". Otherwise the world would consist only of photons and neutrinos. (On average, per proton, there is one electron, one billion photons with energy $3\cdot 10^{-4}$ eV in the form of radiowaves, and approximately the same number of neutrinos.) \section{What is a collider? What has been discovered by using colliders?} A collider is a machine for accelerating, storing (not always) and colliding head-on two beams of particles. (In an ordinary accelerator, not a collider, there is only one beam, which hits a fixed target.) In the head-on collisions in colliders the kinetic energy is transformed into the rest energy of created particles in the most effective way. The particles discovered with colliders are: the $t$-quark and partly the $c$-quark, the $\tau$-lepton, the gluon, and the $W$ and $Z$ bosons. The masses of the $W$ and $Z$ bosons are 80 GeV and 91 GeV, respectively. These bosons were discovered at CERN on a proton-antiproton collider specially built for this purpose, with an energy of 270 GeV for the particles in each beam. 20 million $Z$ bosons have been created and detected at CERN, in 1989--1995, on the LEP I collider. In the circular tunnel of LEP I, of 27 km circumference, beams of electrons and positrons with energies of 45.5 GeV collided head-on. Their energy was fully used to create $Z$ bosons.
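The advantage of head-on collisions mentioned above can be made quantitative with the standard relativistic invariant-mass formulas; the sketch below (energies and masses in GeV) is an illustration, not part of the talk:

```python
import math

def sqrt_s_collider(E_beam):
    """Center-of-mass energy of two identical head-on beams of energy E each
    (beam masses negligible compared with E)."""
    return 2.0 * E_beam

def sqrt_s_fixed_target(E_beam, m_beam, m_target):
    """Center-of-mass energy when a beam of total energy E hits a target at rest."""
    return math.sqrt(m_beam ** 2 + m_target ** 2 + 2.0 * E_beam * m_target)
```

For LEP I, `sqrt_s_collider(45.5)` gives 91 GeV, exactly the $Z$-boson mass; the same 45.5 GeV electron beam on electrons at rest (mass 0.000511 GeV) would yield only about 0.2 GeV.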
In 1994, at the proton-antiproton collider Tevatron (USA), the heaviest of the known particles was created -- the $t$-quark, with a mass of about 175 GeV. The energy of the particles in each of the two beams of the Tevatron is about one TeV, hence the name of the collider. May I remind you that the mass of the proton is 0.94 GeV, while that of the electron is 0.5 MeV. \section{What do we expect from the future colliders?} In the same ring where LEP I was operating, a new collider started to run in the fall of 1999; in it the energies of $e^-$ and $e^+$ will reach 96 GeV. In the year 2000, the construction of a new machine -- the Large Hadron Collider (LHC) -- will start in the same tunnel. The colliding particles in this machine will be protons with an energy of 7 TeV. First of all, physicists expect that with the new colliders a new particle called the Higgs boson (or simply, the higgs) will be discovered. (P.~Higgs is a contemporary British theorist.) The spin of the higgs must be equal to zero. Its mass cannot be predicted in a definite way. Most probably, however, the higgs is heavier than the $W$ boson, but lighter than the top-quark. The higgs plays a central role in the modern theory of matter. According to the theory, all fundamental particles acquire their masses through their interaction with the higgs. The discovery of the higgs would allow physicists to come closer to understanding the nature of mass. Another promising direction is supersymmetry, according to which to each of the known particles there corresponds a "superpartner": a particle with spin differing by 1/2. Thus the superpartner of a fermion is a boson, whilst the superpartner of a boson is a fermion. Supersymmetry is strongly broken in nature. It is expected that the masses of the "superpartners" of the known particles lie mainly in the interval from 100 GeV to 1 TeV. A third direction is the possible compositeness of our fundamental particles, which may reveal itself at higher energies.
Finally, one must not forget about "expected surprises". In the past, many important phenomena were unexpectedly discovered at accelerators which had been built for other purposes and did not have these phenomena on their "to be discovered" lists. \section{Experiments not at the highest energies -- are they needed?} Yes! The idea that all efforts should be concentrated on the highest energy colliders is an erroneous one. The colliders represent the direction of the principal attack, but we are fighting simultaneously on several fronts. The answers to many crucial questions cannot, in principle, be obtained with colliders. They might be obtained either in non-accelerator experiments, or in experiments on low-energy fixed-target accelerators. (In the latter case they may be considered as "pockets of resistance".) Here are a few examples: \begin{itemize} \item Search for neutrino masses. \item Study of solar neutrinos (the neutrino monitoring of the sun could be exceptionally important from a practical point of view). \item Search for proton decay. \item Search for the transformation of neutrons into antineutrons in vacuum. \item Study of the asymmetry between particles and antiparticles. \item Study of light hadrons in order to understand the mechanism of the confinement of quarks in hadrons. \end{itemize} It should be stressed that for the study of light hadrons the existing accelerators in Russia could be highly effective. \section{To understand the fundamental particles -- why is it necessary for mankind?} In order to ascertain the basic principles of nature (ideally, a single principle from which all fundamental laws follow). In order to understand the birth of the universe and its future. (One TeV corresponds to the temperature of the universe at the age of one picosecond.) High Energy Physics is a tuning fork of the intellectual sphere of mankind.
There exists at present a unique community of engineers, experimental and theoretical physicists, which can solve these problems. It should not be lost! \end{document}
\section{Introduction} Recently, networks representing metabolic reactions in the cell and gene regulatory responses mediated by transcription factors have been elucidated, thanks to progress in experimental techniques and the accumulation of data in databases \cite{database2}. In addition, research characterizing the state of the cell as a complex network using these databases has been actively pursued \cite{han07,ferhat09,huang13,tran13}. Moreover, the deterministic discrete-time dynamics of discrete-state models with such network structures have been widely studied with regard to the properties of the attractors that represent cellular activity states. This is because the state space is finite, so the fixed points and periodic solutions can be found by exhaustive computer search. For example, Kauffman {\it et al.} modeled early cells before differentiation by the dynamics of such networks, and identified the types of attractors with the types of cells after differentiation \cite{kauffman69,iguchi07,kinoshita09,daniels18}. On the other hand, Li {\it et al.} discovered that in a model of the gene regulatory network related to the cell-cycle there is a fixed point with a very large basin size, and that the transition process to this fixed point corresponds to the expression pattern of the genes in each phase of the cell-cycle \cite{li04}. It should be noted that the networks of Kauffman {\it et al.} contain no self-regulating factor (self-loop), whereas in the model of Li {\it et al.} the existence of the self-loops influences the attractors. Very recently, in other systems such as the fission yeast cell cycle and the mammalian cell cycle, Boolean network models of the regulation have also been studied \cite{yang13,barberis16,luo17}.
In this study, using the same gene regulatory network of budding yeast as Li {\it et al.}, we clarify the relationship between the fixed points (point attractors) with large basin size and the presence of the self-loops in the network. It is found that removing the self-loops induces a simple division rule of the state space, and that the point attractor with the largest basin size (BS) is robust against changes of the self-loops. Similar results are obtained for the C.~{\it elegans} early embryonic cell cycle as well \cite{kinoshita18}. \section{Model} \label{sec:model} Here we give some basic properties of the self-loops in the cell-cycle network of budding yeast. Let us take a binary value $\{0, 1\}$ as the state $S_i$ of each node $i$, corresponding to the numbered genes given in Table 1. The states 1 and 0 correspond to expressed and unexpressed genes, respectively, and the attractors of the dynamics are associated with cell differentiation. The effect on the node $i$ from the other nodes $j (\neq i)$ is defined as \beq B_i =\sum_{j (\neq i)}^Na_{ij} S_j, \eeq where $N$ is the total number of the nodes, and $a_{ij}$ denotes the matrix element of the weighted adjacency matrix $A$ representing the interaction between the genes. We take $a_{ij}=+1$ when the node $j$ positively regulates the node $i$ (positive interaction), and $a_{ij}=-1$ when the node $j$ negatively suppresses the node $i$ (negative interaction). \begin{figure}[htbp] \begin{center} \includegraphics[width=8.0cm]{fig1.eps} \caption{ (Color online) Gene regulatory network of the cell-cycle of budding yeast \cite{li04}. Each circle represents a protein (cyclin or transcription factor) involved in the gene regulation. For the links connecting the respective proteins, the blue-solid lines represent the effect of the activation control, and the red-dashed lines represent the effect of the suppression control.
In addition, the self-loops drawn as green-dotted lines represent the effect of protein degradation (the ubiquitin-proteasome system) in the absence of external input. } \label{fig:yst_net} \end{center} \end{figure} \begin{table*}[t] \label{table:G0} \begin{center} \begin{tabular}{lcccccccccccc} & Cln3 & MBF & SBF & Cln1 & Cdh1 & Swi5 & Cdc20 & Clb5 & Sic1 & Clb1 & Mcm1 & \\ No. & 1 & 2 & 3 &4 & 5 & 6 &7 & 8 & 9 & 10 & 11 & BS \\ & $\circ$ & & & $\circ$ & & $\circ$ & $\circ$ & & & & $\circ$ & \\ \hline $A_1^{(0)}$ & 0 & 0 & 0 & 0 & 1& 0 & 0 & 0 & 1 & 0 & 0 & 1764 \\ $A_2^{(0)}$ & 0 & 0 & 1 & 1 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 151 \\ $A_3^{(0)}$ & 0 & 1 & 0 & 0 & 1& 0 & 0 & 0 & 1 & 0 & 0 & 109 \\ $A_4^{(0)}$ & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 1 & 0 & 0 & 9 \\ $A_5^{(0)}$ & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 7 \\ $A_6^{(0)}$ & 0 & 1 & 0 & 0 & 0& 0 & 0 & 0 & 1 & 0 & 0 & 7 \\ $A_7^{(0)}$ & 0 & 0 & 0 & 0 & 1& 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{tabular} \end{center} \caption{Seven attractors of the original gene regulatory network (all are point attractors). The mark $\circ$ in the third row indicates that the node carries a degenerate self-loop. In the decimal notation, each attractor is displayed as $A_1^{(0)}=68$, $A_2^{(0)}=384$, $A_3^{(0)}=580$, $A_4^{(0)}=4$, $A_5^{(0)}=0$, $A_6^{(0)}=516$, $A_7^{(0)}=64$. The last column (BS) represents the basin size of the attractors. Note that Cln1 stands for Cln1,2, Clb5 for Clb5,6, and Clb1 for Clb1,2. } \label{table:yst_atr} \end{table*} A node without a self-loop, i.e. $a_{ii}=0$, follows threshold dynamics from discrete time $t$ to $t+1$ ($t \in {\bf N}$) under the parallel updating scheme: \beq S_i(t+1) = \left\{ \begin{array}{ll} 0 & (B_i(t) <\theta_i) \\ 1 & (B_i(t) > \theta_i) \\ S_i(t) & (B_i(t) = \theta_i), \end{array} \right. \label{eq:rule-1} \eeq where $\theta_i$ denotes the threshold value of the node $i$.
A node carrying a self-loop is instead updated according to the sign of the self-loop when $B_i(t)=\theta_i$; the case $a_{ii}=-1$ models the effect of protein degradation, called ``degeneration'', which is distinguished from a simple inhibition effect: \begin{equation} S_i(t+1) = \left\{ \begin{array}{ll} 0 & (B_i(t)=\theta_i,\ a_{ii}=-1) \\ 1 & (B_i(t)=\theta_i,\ a_{ii}=+1). \end{array} \right. \label{eq:rule-2} \end{equation} The budding yeast cell-cycle network model (denoted by $G^{(0)}$) of Li {\it et al.} is special in the sense that every existing self-loop has $a_{ii}=-1$. The network is shown in Figure \ref{fig:yst_net}. We take the values $\theta_i=0$ for all $i$ in this report. Each regulatory factor is represented by a numbered node ($i = 1, 2, \ldots, 11$), and the activation effect ($a_{ij}=+1$) and suppression effect ($a_{ij}=-1$) are indicated by solid and dashed arrows between the nodes. There are self-degeneration loops on the five nodes Cln3, Cln1-2, Swi5, Cdc20/Cdc14, and Mcm1/SFF. Note that this rule is the same as that of Refs. \cite{tran13} and \cite{li04}, but it differs from that of \cite{goles13}. In this network, the total number of states is $W=2^{11}=2048$, and all steady states are seven point attractors, numbered ${\bf A}^{(0)}=\{ A^{(0)}_1,A^{(0)}_2,A^{(0)}_3,A^{(0)}_4,A^{(0)}_5,A^{(0)}_6,A^{(0)}_7 \}$. The point attractor with the largest basin size among these is $A^{(0)}_1=00001000100=68$, where the last number is in decimal notation. According to the study of Li {\it et al.}, the following facts are known. (i) The attractor with the largest basin, $A^{(0)}_1= 68$, corresponds to the stationary $G_1$ state in the cell-cycle of the budding yeast. (ii) In random network models of the same system size $N=11$, there is no attractor corresponding to $A^{(0)}_1$ with a very large basin size. (iii) One of the trajectories reaching the attractor $A^{(0)}_1$ coincides with the trajectory of the actual biological cell-cycle.
(iv) The trajectory corresponding to the biological cell-cycle leading to $A^{(0)}_1$ is stable against external perturbations. In addition, the basin sizes of the attractors in random networks with the same structural conditions as $G^{(0)}$ are given in Appendix \ref{app:random}. We confirmed that the occurrence probability of point attractors with a large basin size ($\geq 1700$) is less than 20 percent. This result is consistent with those in Ref.\cite{huang13}. These results may be due to the facts that all self-loops are degenerate and that all threshold values are zero, so that all the attractors are point attractors. In general, the threshold values are related to the addition of active self-loops at each node. Note that for the fission yeast cell-cycle model, which has a similar network structure, some limit cycles of period two appear as attractors because some of the threshold values are not zero \cite{goles13,maria09}. Further, notice that when an active self-loop is attached to a node, the state update rule becomes different from that of Tran {\it et al.} due to the existence of rule (\ref{eq:rule-2}). \section{Numerical results} In this section, we investigate the effect of the degenerate self-loops on the attractors of the original network $G^{(0)}$. We write $G^{(-k)}$ for the network obtained from $G^{(0)}$ by removing the degenerate self-loop of the $k$th node, and $G^{(+m)}$ for the network obtained by adding a self-activating loop to the $m$th node of $G^{(0)}$. Here, $k$ ranges over the nodes with a self-loop, and $m$ over the nodes without one. The attractor sets are denoted by ${\bf A}^{(-k)}=\{ A^{(-k)}_1,A^{(-k)}_2,\ldots, A^{(-k)}_{n_{-k}} \}$, ${\bf A}^{(+m)}=\{ A^{(+m)}_1,A^{(+m)}_2,\ldots, A^{(+m)}_{n_{+m}} \}$, and so on, respectively, where $n_{-k}$ and $n_{+m}$ denote the numbers of attractors of the networks $G^{(-k)}$ and $G^{(+m)}$, respectively.
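To make the dynamics concrete, the threshold update rules (\ref{eq:rule-1}) and (\ref{eq:rule-2}) can be sketched in a few lines of Python. This is an illustration we add here, not the authors' code, and the 3-node matrix \texttt{A\_toy} is a hypothetical example rather than the yeast network.

```python
# Illustrative sketch (not the authors' code) of the threshold update
# rules: parallel update of all nodes, with the self-loop sign a_ii
# deciding the tie B_i = theta_i (-1 degenerate, +1 active, 0 none).

def step(state, A, theta=None):
    """One parallel update; A[i][j] is the effect of node j on node i."""
    n = len(state)
    if theta is None:
        theta = [0] * n              # the paper takes theta_i = 0 for all i
    new = []
    for i in range(n):
        # input from the *other* nodes only (j != i), as in the text
        b = sum(A[i][j] * state[j] for j in range(n) if j != i)
        if b > theta[i]:
            new.append(1)
        elif b < theta[i]:
            new.append(0)
        elif A[i][i] == -1:          # degenerate self-loop: decay to 0
            new.append(0)
        elif A[i][i] == +1:          # active self-loop: switch to 1
            new.append(1)
        else:                        # no self-loop: keep the current state
            new.append(state[i])
    return tuple(new)

# Hypothetical 3-node example (NOT the yeast network): node 1 activates
# node 2, node 2 suppresses node 3, node 3 has a degenerate self-loop.
A_toy = [[0, 0, 0],
         [1, 0, 0],
         [0, -1, -1]]
```

Starting from the state $(1,0,1)$, one application of \texttt{step} gives $(1,1,0)$, which is a fixed point of this toy network.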
We can numerically determine all attractors and their basin sizes because the network has a state space of only $2^{11}=2048$ states. \subsection{Case of removing a degenerate self-loop } \label{subsec:removing} In Figure \ref{fig:yst_net} of the original network, degenerate self-loops are attached to the five control factors Cln3, Cln1-2, Swi5, Cdc20/Cdc14, and Mcm1/SFF, and Table \ref{table:yst_atr} shows the seven attractors. We show in Table \ref{table:yst_des} the eleven attractors of the gene regulatory network $G^{(-1)}$, obtained by removing the degenerate self-loop of Cln3 (the first node). \begin{table*}[t] \begin{center} \begin{tabular}{lcccccccccccc} No. & 1 & 2 & 3 &4 & 5 & 6 &7 & 8 & 9 & 10 & 11 & BS \\ & & & & $\circ$ & & $\circ$ & $\circ$ & & & & $\circ$ & \\ \hline $A_1^{(-1)}$ & 1 & 1 & 1 & 1 & 0& 1 & 1 & 1 & 0 & 1 & 1 & 888 \\ $A_2^{(-1)}$ & 0 & 0 & 0 & 0 & 1& 0 & 0 & 0 & 1 & 0 & 0 & 856 \\ $A_3^{(-1)}$ & 0 & 0 & 1 & 1 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 87 \\ $A_4^{(-1)}$ & 1 & 0 & 1 & 1 & 0& 1 & 1 & 0 & 0 & 1 & 1 & 61 \\ $A_5^{(-1)}$ & 0 & 1 & 0 & 0 & 1& 0 & 0 & 0 & 1 & 0 & 0 & 57 \\ $A_6^{(-1)}$ & 1 & 1 & 0 & 0 & 0& 1 & 1 & 1 & 0 & 1 & 1 & 52 \\ $A_7^{(-1)}$ & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 23 \\ $A_8^{(-1)}$ & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 1 & 0 & 0 & 9 \\ $A_9^{(-1)}$ & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 0 & 0 & 0 & 7 \\ $A_{10}^{(-1)}$ & 0 & 1 & 0 & 0 & 0& 0 & 0 & 0 & 1 & 0 & 0 & 7 \\ $A_{11}^{(-1)}$ & 0 & 0 & 0 & 0 & 1& 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{tabular} \end{center} \caption{ Eleven attractors of the gene regulatory network $G^{(-1)}$, obtained by removing the degenerate self-loop of Cln3 (the first node). (All are point attractors.) The last column (BS) represents the basin size of the attractors. In the decimal notation, each attractor is displayed as $A_1^{(-1)}=1979$, $A_2^{(-1)}=68$, $A_3^{(-1)}=384$, $A_4^{(-1)}=1459$, $A_5^{(-1)}=580$, $A_6^{(-1)}=1595$, $A_7^{(-1)}=1971$, $A_8^{(-1)}=4$, $A_9^{(-1)}=0$, $A_{10}^{(-1)}=516$, $A_{11}^{(-1)}=64$.
} \label{table:yst_des} \end{table*} \begin{figure}[htbp] \begin{center} \includegraphics[width=8.0cm]{fig2.eps} \caption{ (Color online) The point attractors and the basin structures of the network $G^{(-1)}$. The 7 red circles represent the point attractors common to $G^{(0)}$ and $G^{(-1)}$. The blue and green circles represent the attractors that newly appear in $G^{(-1)}$. The point attractor with the largest basin of $G^{(-1)}$ is indicated by the green circle. } \label{fig:Fig2} \end{center} \end{figure} We compare the attractors of the network $G^{(-1)}$ with those of $G^{(0)}$. It is found that $A^{(-1)}_{2}=A^{(0)}_{1}$, $A^{(-1)}_{3}=A^{(0)}_{2}$, $A^{(-1)}_{5}=A^{(0)}_{3}$, $A^{(-1)}_{8}=A^{(0)}_{4}$, $A^{(-1)}_{9}=A^{(0)}_{5}$, $A^{(-1)}_{10}=A^{(0)}_{6}$, $A^{(-1)}_{11}=A^{(0)}_{7}$. That is, the entire attractor set ${\bf A}^{(0)}$ of the original network $G^{(0)}$ is included in the attractor set ${\bf A}^{(-1)}$ of the network $G^{(-1)}$. Next, we focus on the change of the basin sizes. The basin size of the attractor $A^{(0)}_1$, which has the largest basin, is reduced by the elimination of the degenerate self-loop. The basin sizes of the other attractors are likewise reduced from those of $\bf{A}^{(0)}$. Figure \ref{fig:Fig2} shows the basin structure of the 2048 initial states flowing to the fixed points given in Table \ref{table:yst_des}. The red circles are the point attractors of $G^{(0)}$, and the blue and green circles indicate the four point attractors newly added in $G^{(-1)}$. For each attractor of $G^{(-1)}$ shared with $G^{(0)}$, the basin is smaller than in $G^{(0)}$, the difference being caused by branching off from the basin of $G^{(0)}$. Accordingly, it is also easy to understand that all attractors of the original network $G^{(0)}$ are included in the attractor set of $G^{(-1)}$.
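As a side note added here for the reader, the decimal labels appearing in the tables (e.g. $A_1^{(0)}=00001000100=68$) simply read the eleven gene states, node 1 first, as a binary integer; a short sketch (ours, not the authors' code):

```python
# Convert a tuple of 0/1 gene states (node 1 = most significant bit)
# to the decimal label used in the attractor tables.

def label(state):
    return int("".join(str(b) for b in state), 2)

g1_state = (0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0)   # the G1 state A_1^{(0)}
```

For instance, \texttt{label(g1\_state)} returns 68, matching the decimal notation of $A_1^{(0)}$ in Table \ref{table:yst_atr}.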
The attractors with large BS of $G^{(0)}$ correspond to attractors with relatively large BS of $G^{(-1)}$. In Figure \ref{fig:Fig5}, we show the basin structure of $G^{(0)}$ colored according to the basins of the attractors of $G^{(-1)}$. (Figure \ref{fig:Fig6} shows the same diagram with all states removed except the red ones, which belong to the basin of the attractor with the largest basin of $G^{(0)}$.) It is found that the newly appearing attractors of $G^{(-1)}$ are created by reconnecting leaf states to other leaf states of the original transition diagram. \begin{figure}[htbp] \begin{center} \includegraphics[width=8.0cm]{fig3.eps} \caption{ (Color online) The basin structure of $G^{(0)}$ classified by colors according to the basins of each attractor of $G^{(-1)}$. The states are color-coded so that the basins of the 11 attractors of $G^{(-1)}$ can be seen. } \label{fig:Fig5} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=8.0cm]{fig4.eps} \caption{ The basin structure obtained by removing from Fig.\ref{fig:Fig5} all color-coded states other than the red ones belonging to the attractor with the largest basin of $G^{(0)}$. } \label{fig:Fig6} \end{center} \end{figure} Although the above results are for the specific case in which the degenerate self-loop of Cln3 is removed, similar results also hold when the other degenerate self-loops are removed. Furthermore, applying this rule repeatedly in the process of removing the self-loops, we see that the above relations between the attractors and the basin sizes hold in general before and after each removal of a self-loop. \subsection{Case of adding an active self-loop } \label{subsec:adding} It is worth noting that in general networks in which both self-degeneration loops and self-activation loops exist, limit cycles can appear as attractors, as in the case of fission yeast.
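Both fixed points and such limit cycles can be enumerated exhaustively, since every trajectory in the finite state space must eventually repeat. The following brute-force sketch (our illustration, not the authors' code; \texttt{A\_toy} is a hypothetical 2-node network) iterates each of the $2^N$ initial states until a repetition occurs and records the resulting attractor and its basin size:

```python
from itertools import product

def step(state, A):
    # threshold update with theta_i = 0; the diagonal encodes self-loops
    n = len(state)
    new = []
    for i in range(n):
        b = sum(A[i][j] * state[j] for j in range(n) if j != i)
        if b > 0:
            new.append(1)
        elif b < 0:
            new.append(0)
        elif A[i][i] != 0:               # tie decided by the self-loop sign
            new.append(0 if A[i][i] < 0 else 1)
        else:                            # no self-loop: keep current state
            new.append(state[i])
    return tuple(new)

def attractors_and_basins(A):
    n = len(A)
    basin = {}   # attractor (frozenset of its states) -> basin size
    for s0 in product((0, 1), repeat=n):
        seen, s = [], s0
        while s not in seen:             # iterate until the orbit repeats
            seen.append(s)
            s = step(s, A)
        cycle = frozenset(seen[seen.index(s):])  # fixed point or limit cycle
        basin[cycle] = basin.get(cycle, 0) + 1
    return basin

# Hypothetical 2-node mutual-activation network (NOT the yeast network)
A_toy = [[0, 1],
         [1, 0]]
basins = attractors_and_basins(A_toy)
```

For this toy network the two fixed points $(0,0)$ and $(1,1)$ appear, with basin sizes 1 and 3; for an $N=11$ network the same loop scans all 2048 states.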
In networks in which a self-activation loop is added to the original network $G^{(0)}$, not only point attractors but also other types of periodic attractors exist. As an example, the attractor set $\bf{A}^{(+8)}$ of the network $G^{(+8)}$, obtained by adding an active self-loop to Clb5 (the 8th node) of $G^{(0)}$, is given in Table \ref{table:yst_add}. The attractors $A^{(+8)}_{1}=A^{(0)}_{1}$ and $A^{(+8)}_{5}=A^{(0)}_{4}$ exist also in the network $G^{(0)}$, and the limit cycle attractors of period 2, $A^{(+8)}_2$, $A^{(+8)}_3$, $A^{(+8)}_4$, newly emerge as attractors of the network $G^{(+8)}$. It also follows that many attractors of $G^{(0)}$ have disappeared, but the attractor with the largest basin size has survived. The basin structure of the attractors in Table \ref{table:yst_add} is shown in Figure \ref{fig:Fig4}. It is found that the limit cycles are formed by combining gene states with relatively small basin sizes. In such a case, limit cycles with large basin sizes do not occur. These features persist even if the self-activating loop is added to any of the other nodes without a self-loop. Furthermore, similar phenomena can also be confirmed by changing any of the degenerate self-loops of the five nodes to an active one. \begin{table*}[t] \begin{center} \begin{tabular}{lcccccccccccc} No.
& 1 & 2 & 3 &4 & 5 & 6 &7 & 8 & 9 & 10 & 11 & BS \\ &$\circ$ & & & $\circ$ & & $\circ$ & $\circ$ & $+$ & & & $\circ$ & \\ \hline $A_1^{(+8)}$ & 0 & 0 & 0 & 0 & 1& 0 & 0 & 0 & 1 & 0 & 0 & 1897 \\ $A_2^{(+8)}(LC_{P2})$ & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 110 \\ $ $ & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & \\ $A_3^{(+8)}(LC_{P2})$ & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 25 \\ $ $& 0 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & \\ $A_4^{(+8)}(LC_{P2})$ & 0 & 1 & 0 & 0 & 0& 1 & 0 & 0 & 1 & 0 & 1 & 9 \\ $ $ & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & \\ $A_5^{(+8)}$ & 0 & 0 & 0 & 0 & 0& 0 & 0 & 0 & 1 & 0 & 0 & 7 \\ \end{tabular} \end{center} \caption{ Five attractors of the gene regulatory network $G^{(+8)}$, in which an active self-loop is added to Clb5 (the 8th node). The three attractors $A_2^{(+8)}$, $A_3^{(+8)}$, $A_4^{(+8)}$ are limit cycles. $LC_{P2}$ means a limit cycle with period 2. The last column (BS) represents the basin size of the attractors. In the decimal notation, each attractor is displayed as $A_1^{(+8)}=68$, $A_2^{(+8)}=(933,956)$, $A_3^{(+8)}=(613,633)$, $A_4^{(+8)}=(549,572)$, $A_5^{(+8)}=4$. } \label{table:yst_add} \end{table*} \begin{figure}[htbp] \begin{center} \includegraphics[width=8.0cm]{fig5.eps} \caption{ The attractors and the basin structures of $G^{(+8)}$. The 2 red circles represent the point attractors. The 6 blue circles represent the states belonging to the three period-2 limit cycles, two each. } \label{fig:Fig4} \end{center} \end{figure} \section{Summary and discussion} \label{sec:discuss} In this short report, we investigated the influence of the degenerate self-loops on the attractors of the gene regulatory network model of the cell-cycle of budding yeast. In the case of networks with degenerate self-loops removed from the original network $G^{(0)}$, only point attractors appear because all of the remaining self-loops are degenerate.
The attractor set of the network without the degenerate self-loops includes all attractors of the original network $G^{(0)}$. In addition, when self-degeneration loops and self-activation loops coexist, limit cycles appear in addition to point attractors, and many attractors of $G^{(0)}$ are no longer included in the attractor set; nevertheless, the attractor with the largest basin size was relatively stable against the deletion and addition of self-loops. The above results apply directly to the Boolean genetic network model of the C. {\it elegans} early embryonic cell-cycle network, because the self-loops of that network are only self-inhibition loops and its attractors are only fixed points \cite{huang13,kinoshita18}. Note that a necessary and sufficient condition for the attractors of a network to be only fixed points, with no limit cycles, is not known yet \cite{tran13,goles13,richard15}. However, we expect that the result in Subsec.\ref{subsec:removing} holds at least when the attractors of a random network with only degenerate self-loops are all fixed points. There is a theorem in graph theory \cite{goles13}: {\it Consider a Boolean network such that each gene is governed by a threshold function. Then, if the associated incidence graph, without considering the diagonal elements, is a directed acyclic graph (DAG) and the thresholds are non-negative, $\theta_i \geq 0$, then the attractors are only fixed points. } The network of the budding yeast satisfies this sufficient condition for having only fixed points. The result of Subsec.\ref{subsec:adding} seems at first glance to contradict the above theorem. However, since the update rule (\ref{eq:rule-2}) differs from the one in Ref. \cite{goles13}, we see that there is in fact no contradiction.
\section{Introduction}\label{sec:intro} Let $K$ be a knot in the 3-sphere $S^3$. In our previous paper \cite{gs08}, we introduced a class of knots called {\it $($rationally$)$ homologically fibered knots} and studied their fundamental properties by using their Alexander invariants. A (rationally) homologically fibered knot $K$ is by definition a knot satisfying the property that the sutured manifold $M_R$ obtained from the exterior $E(K)$ of $K$ by cutting along a minimal genus Seifert surface $R$ is a (rational) homology product whose boundary is the union of two copies of $R$. For a rationally homologically fibered knot $K$ with a minimal genus Seifert surface $R$ of genus $g$, let $i_+, i_-:R \to \partial M_R$ denote the natural identifications of $R$ with the two sides of the boundary of $M_R$. We fix a basis of $H_1 (R;\mathbb{Q})$ giving rise to an isomorphism $H_1 (R;\mathbb{Q}) \cong \mathbb{Q}^{2g}$. Then, by using the invertibility (over $\mathbb{Q}$) of the Seifert matrix $S$, we can rewrite the definition $\Delta_K (t)=\det(S-tS^T)$ of the Alexander polynomial of $K$ and obtain a factorization \begin{equation}\label{eq1} \Delta_K(t) = \det (S) \det (I_{2g}-t \,\sigma (M_R)) \end{equation} \noindent of $\Delta_K (t)$. Here $\sigma(M_R):=S^{-1}S^T$ coincides with the representation matrix of the composite of isomorphisms \[\mathbb{Q}^{2g} \cong H_1 (R;\mathbb{Q}) \xrightarrow[i_-]{\cong} H_1 (M_R;\mathbb{Q}) \xrightarrow[i_+^{-1}]{\cong} H_1 (R;\mathbb{Q}) \cong \mathbb{Q}^{2g}.\] The matrix $\sigma(M_R)$ can be interpreted as a monodromy of $M_R$ from the viewpoint of rational homology. Regarding the formula $(\ref{eq1})$ as a basic case, in \cite{gs08} we gave its generalization under the framework of {\it higher-order Alexander invariants} due to Cochran \cite{coc}, Harvey \cite{har2} and Friedl \cite{fri}.
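As an elementary illustration of $(\ref{eq1})$, which we add here for concreteness (it is not part of the original argument), take the trefoil knot with a genus one Seifert surface whose Seifert matrix is \[S=\begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}, \qquad \sigma(M_R)=S^{-1}S^T=\begin{pmatrix} -1 & -1 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} -1 & 0 \\ 1 & -1 \end{pmatrix}=\begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix}.\] Then $\det(S)=1$ and \[\Delta_K(t)=\det(S)\det(I_2-t\,\sigma(M_R))=\det\begin{pmatrix} 1 & -t \\ t & 1-t \end{pmatrix}=t^2-t+1,\] the Alexander polynomial of the trefoil; since this is monic of degree $2g=2$, the trefoil is homologically fibered (indeed it is fibered). We now return to the generalization of $(\ref{eq1})$.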
In this procedure, the Seifert matrix $S$, the monodromy $\sigma (M_R)$ and $\Delta_K (t)$ are generalized to a certain Reidemeister torsion $\tau_\rho^+ (M_R)$, the Magnus matrix $r_\rho (M_R)$ and a higher-order (non-commutative) Alexander invariant $\tau_\rho (E(K))$ associated with a representation $\rho$ of the fundamental group of $M_R$. Then the generalized formula is given by \begin{equation}\label{eq_factor} \tau_\rho (E(K)) = \frac{\tau_\rho^+ (M_R) \cdot (I_{2g} -\rho (\mu) r_\rho (M_R))}{1-\rho(\mu)}, \end{equation} where $\mu \in \pi_1 (E(K))$ represents the meridian of $K$. To compare $(\ref{eq_factor})$ with $(\ref{eq1})$, recall Milnor's formula \cite{milnor} that $\displaystyle\frac{\Delta_K (t)}{1-t}$ represents a Reidemeister torsion associated with the abelianization homomorphism $\rho_1:\pi_1 (E(K)) \to \langle t \rangle$. For details of the formula, see Theorem \ref{thm:factorization}. The purpose of this paper is to investigate the factorization formula (\ref{eq_factor}) with explicit computational examples. In the theory of higher-order Alexander invariants, an important problem has been to find methods for computing the invariants and extracting topological information from them. This problem arises from the difficulty of working with the non-commutative rings involved in the definition. We now intend to understand the higher-order invariant $\tau_\rho (E(K))$ by looking at each of the constituents of the formula (\ref{eq_factor}). More specifically, in the latter half of this paper, we focus on the invariants associated with metabelian quotients of knot groups of homologically fibered knots. In this situation, although $\tau_\rho (E(K))$ itself lives in a non-commutative ring, both $\tau_\rho^+ (M_R)$ and $r_\rho (M_R)$ can be computed within the realm of commutative rings.
A sample calculation with details is given in Section \ref{sec:sample} and more examples are exhibited in Section \ref{sec:HFK12}, where we use $\tau_\rho^+ (M_R)$ to detect the non-fiberedness of all the non-fibered homologically fibered knots with 12 crossings. We remark that in the situation of Sections \ref{sec:sample} and \ref{sec:HFK12}, the torsion $\tau_\rho^+ (M_R)$ may be regarded as a special case of a decategorification of the sutured Floer homology as shown by Friedl-Juh\'asz-Rasmussen \cite{fjr}. In Section \ref{sec:magnus}, we study the Magnus matrix $r_\rho (M_R)$ and see that $r_\rho (M_R)$ is unchanged under {\it concordances of Seifert surfaces} introduced by Myers \cite{myers}. Using his result, we mention how to obtain more examples of explicit computations. The authors would like to thank Professor Ko Honda for helpful discussions and Professor Robert Myers for informing the authors about his paper \cite{myers}. They also thank the anonymous referee for his or her helpful comments to improve the previous version of this paper. \section{Homologically fibered knots and homology cylinders}\label{sec:HFK} First, we recall the definition of sutured manifolds given by Gabai \cite{gabai1}. We here use a special case of them. A {\it sutured manifold\/} $(M,\gamma)$ is a compact oriented 3-manifold $M$ together with a subset $\gamma \subset \partial M$ which is a union of finitely many mutually disjoint annuli. For each component of $\gamma$, an oriented core circle called a {\it suture\/} is fixed, and we denote the set of sutures by $s(\gamma)$. Every component of $R(\gamma)=\partial M-{\rm Int\,}\gamma$ is oriented so that the orientations on $R(\gamma)$ are coherent with respect to $s(\gamma)$, that is, the orientation of each component of $\partial R(\gamma)$ induced from that of $R(\gamma)$ is parallel to the orientation of the corresponding component of $s(\gamma)$. We denote by $R_{+}(\gamma)$ (resp.
$R_{-}(\gamma)$) the union of those components of $R(\gamma)$ whose normal vectors point out of (resp. into) $M$. \begin{example} For a knot $K$ in $S^3$ and a Seifert surface $\bar{R}$ of $K$, we set $R:=\bar{R} \cap E(K)$, which we also call a Seifert surface, where $E(K)= \overline{S^3-N(K)}$ is the complement of a regular neighborhood $N(K)$ of $K$. Then $(M_R, \gamma):=(\overline{E(K)-N(R)}, \overline{\partial E(K)-N(\partial R)})$ defines a sutured manifold. We call it the {\it complementary sutured manifold} for $R$. In this paper, we simply call it the sutured manifold for $R$. \end{example} \begin{definition}[\cite{gs08}]\label{def:HFknot} A knot $K$ in $S^3$ is called a {\it rationally homologically fibered knot} of genus $g$ if it has the following properties which are equivalent to each other: \begin{itemize} \item[(a)] The degree of the Alexander polynomial $\Delta_K (t)$ of $K$ is equal to twice the genus $g=g(K)$ of $K$; \item[(b)] For any minimal genus Seifert surface $R$ of $K$, its Seifert matrix $S$ is invertible over $\mathbb{Q}$; and \item[(c)] The sutured manifold $(M_R,\gamma)$ for any minimal genus Seifert surface $R$ is a rational homology product over $R$. \end{itemize} Moreover, when $\Delta_K (t)$ is monic (correspondingly, $S$ is invertible over $\mathbb{Z}$ and $M_R$ is a homology product), we say $K$ is a {\it homologically fibered knot}. \end{definition} \begin{remark} Aside from the name, the equivalence of the conditions (a), (b), (c) in Definition \ref{def:HFknot} was mentioned in Crowell-Trotter \cite{ct}. \end{remark} Next we recall the definition of {\it homology cylinders}, which can be regarded as a generalization of mapping classes of surfaces. We refer to Goussarov \cite{gou}, Habiro \cite{habiro}, Garoufalidis-Levine \cite{gl} and Levine \cite{levin} for their origin. Strictly speaking, the definition below is closer to that in \cite{gl} and \cite{levin}.
Let $\Sigma_{g,1}$ be a compact connected oriented surface of genus $g\ge 0$ with a connected boundary. We fix a cell decomposition of $\Sigma_{g,1}$ consisting of one vertex $p$, edges $\gamma_1, \gamma_2, \ldots, \gamma_{2g}, \zeta$ and one face as in Figure \ref{fig:spine1}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\textwidth]{surface2.eps} \end{center} \caption{Cell decomposition of $\Sigma_{g,1}$} \label{fig:spine1} \end{figure} \begin{definition} A {\it homology cylinder\/} $(M,i_{+},i_{-})$ {\it over} $\Sigma_{g,1}$ consists of a compact oriented 3-manifold $M$ with two embeddings $i_{+}, i_{-}: \Sigma_{g,1} \hookrightarrow \partial M$, called {\it markings\/}, such that: \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $i_{+}$ is orientation-preserving and $i_{-}$ is orientation-reversing; \item $\partial M=i_{+}(\Sigma_{g,1})\cup i_{-}(\Sigma_{g,1})$ and $i_{+}(\Sigma_{g,1})\cap i_{-}(\Sigma_{g,1}) =i_{+}(\partial\Sigma_{g,1})=i_{-}(\partial\Sigma_{g,1})$; \item $i_{+}|_{\partial \Sigma_{g,1}}=i_{-}|_{\partial \Sigma_{g,1}}$; and \item $i_{+},i_{-} : H_{*}(\Sigma_{g,1};\mathbb Z)\to H_{*}(M;\mathbb Z)$ are isomorphisms. \end{enumerate} Similarly, the definition of a {\it rational homology cylinder\/} is obtained by replacing (iv) with the condition that (iv') $i_{+},i_{-} : H_{*}(\Sigma_{g,1};\mathbb Q)\to H_{*}(M;\mathbb Q)$ are isomorphisms. \end{definition} Two homology cylinders $(M,i_+,i_-)$ and $(N,j_+,j_-)$ over $\Sigma_{g,1}$ are said to be {\it isomorphic} if there exists an orientation-preserving diffeomorphism $f:M \xrightarrow{\cong} N$ satisfying $j_+ = f \circ i_+$ and $j_- = f \circ i_-$. We denote by $\mathcal{C}_{g,1}$ the set of all isomorphism classes of homology cylinders over $\Sigma_{g,1}$. 
By using markings, we can endow $\mathcal{C}_{g,1}$ with a monoid structure whose product is given by \[(M,i_+,i_-) \cdot (N,j_+,j_-) :=(M \cup_{i_- \circ (j_+)^{-1}} N, i_+,j_-)\] for $(M,i_+,i_-)$, $(N,j_+,j_-) \in \mathcal{C}_{g,1}$. The unit of this monoid is given by \[(M,i_+,i_-)=(\Sigma_{g,1} \times [0,1], \mathrm{id} \times 1, \mathrm{id} \times 0),\] where collars of $i_+ (\Sigma_{g,1})$ and $i_- (\Sigma_{g,1})$ are stretched half-way along $(\partial \Sigma_{g,1}) \times [0,1]$. The monoid $\mathcal{C}_{g,1}^\mathbb{Q}$ of all isomorphism classes of rational homology cylinders over $\Sigma_{g,1}$ is defined similarly. For each diffeomorphism $\varphi$ of $\Sigma_{g,1}$ which fixes $\partial \Sigma_{g,1}$ pointwise, we can construct a homology cylinder as a {\it mapping cylinder} \[(\Sigma_{g,1} \times [0,1], \mathrm{id} \times 1, \varphi \times 0)\] of $\varphi$. Constructing a homology cylinder from a given homologically fibered knot involves an ambiguity arising from the choice of a minimal genus Seifert surface and of a pair of markings. \begin{proposition}\label{prop:unique} Let $R_1$ and $R_2$ be $($possibly parallel\/$)$ minimal genus Seifert surfaces of a homologically fibered knot of genus $g$ and let $M_{R_1}$ and $M_{R_2}$ be their sutured manifolds. For any markings $i_\pm$ and $j_\pm$ of $\partial M_{R_1}$ and $\partial M_{R_2}$, there exists another homology cylinder $N \in \mathcal{C}_{g,1}$ such that \[(M_{R_1},i_+,i_-) \cdot N = N \cdot (M_{R_2},j_+,j_-)\] holds as elements of $\mathcal{C}_{g,1}$. \end{proposition} \begin{proof} First we assume that $R_1$ and $R_2$ are disjoint in $E(K)$. Cut $E(K)$ along $R_1$ and $R_2$. Then we obtain two submanifolds $N$ and $N'$ of $E(K)$, where $N$ (resp. $N'$) may be regarded as a surface cobordism between $i_+ (\Sigma_{g,1})$ and $j_- (\Sigma_{g,1})$ (resp. $j_+ (\Sigma_{g,1})$ and $i_- (\Sigma_{g,1})$). We can easily check that $(N,i_+,j_-)$ and $(N',j_+,i_-)$ are homology cylinders over $\Sigma_{g,1}$.
Then the equality $M_{R_{1}} \cup_{R_1} N=N \cup_{R_2} M_{R_2}$ holds and it shows our claim in this case. For the general case, we can use a theorem of Scharlemann-Thompson \cite{st}. It says that there exists a sequence of minimal genus Seifert surfaces $R_1=S_1\rightarrow S_2 \rightarrow \cdots \rightarrow S_n=R_2$ such that $S_i$ and $S_{i+1}$ are disjoint in $E(K)$ for $i=1,2,\ldots , n-1$. Using the above argument repeatedly, we have the conclusion. \end{proof} \noindent This proposition can be seen as a generalization of the fact that a fibered knot determines an element of the mapping class group of a surface uniquely up to conjugation. \begin{remark} Unlike fibered knots, a homologically fibered knot does not necessarily have a unique minimal genus Seifert surface. Indeed, it was shown by Eisner \cite{eisner} that the connected sum of two non-fibered knots has infinitely many non-isotopic minimal genus Seifert surfaces. Hence the connected sum of two non-fibered homologically fibered knots, which is again a homologically fibered knot, gives such an example. The authors do not know whether there exists a homologically fibered knot which has minimal genus Seifert surfaces whose complements are not homeomorphic. \end{remark} \section{Higher-order Alexander invariants}\label{sec:alexander} From the factorization $(\ref{eq1})$, we see that if a rationally homologically fibered knot has a non-trivial $\det (S)$-part, that is, $|\det (S)| \neq 1$, then this knot is not fibered. However, this argument is useless for homologically fibered knots, since $|\det (S)|= 1$. In this section, we give a generalization of the factorization $(\ref{eq1})$ by using the framework of {\it higher-order Alexander invariants} originally due to Cochran \cite{coc} and Harvey \cite{har2} together with their interpretations as Reidemeister torsions given by Friedl \cite{fri}. We will see later that this generalized factorization works well for homologically fibered knots.
We begin by summarizing our notation. For a matrix $A$ with entries in a group ring $\mathbb{Z} G$ (or its quotient field) for a group $G$, we denote by $\overline{A}$ the matrix obtained from $A$ by applying the involution induced from $(x \mapsto x^{-1},\ x \in G)$ to each entry. For a module $M$, we write $M^n$ for the module of column vectors with $n$ entries. For a finite cell complex $X$, we denote by $\widetilde{X}$ its universal covering. We take a base point $p$ of $X$ and a lift $\widetilde{p}$ of $p$ as a base point of $\widetilde{X}$. $\pi:=\pi_1 (X,p)$ acts on $\widetilde{X}$ from the {\it right} through its deck transformation group, so that the lift of a loop $l \in \pi$ starting from $\widetilde{p}$ reaches $\widetilde{p} \, l^{-1}$. Then the cellular chain complex $C_{\ast} (\widetilde{X})$ of $\widetilde{X}$ becomes a right $\mathbb{Z} \pi$-module. For each left $\mathbb{Z}\pi$-algebra $\mathcal{R}$, the twisted chain complex $C_{\ast} (X;\mathcal{R})$ is given by the tensor product of the right $\mathbb{Z}\pi$-module $C_{\ast} (\widetilde{X})$ and the left $\mathbb{Z}\pi$-module $\mathcal{R}$, so that $C_{\ast} (X;\mathcal{R})$ and $H_{\ast} (X;\mathcal{R})$ are right $\mathcal{R}$-modules. In the definition of higher-order Alexander invariants, PTFA groups play important roles, where a group $\Gamma$ is said to be {\it poly-torsion-free abelian $($PTFA$)$} if it has a sequence \[\Gamma=\Gamma_0 \triangleright \Gamma_1 \triangleright \cdots \triangleright \Gamma_n = \{1\}\] whose successive quotients $\Gamma_i/\Gamma_{i+1}$ $(i \ge 0)$ are all torsion-free abelian. An advantage of using PTFA groups is that the group ring $\mathbb{Z} \Gamma$ (or $\mathbb{Q} \Gamma$) of $\Gamma$ is known to be an {\it Ore domain} so that it can be embedded into the field (skew field in general) \[\mathcal{K}_\Gamma:= \mathbb{Z}\Gamma (\mathbb{Z}\Gamma - \{0\})^{-1} =\mathbb{Q}\Gamma (\mathbb{Q}\Gamma - \{0\})^{-1}\] called the {\it right field of fractions}.
We refer to \cite{coc} and \cite{pa} for generalities of PTFA groups and localizations of their group rings. A typical example of PTFA groups is $\mathbb{Z}^n$, where $\mathcal K_{\mathbb{Z}^n}$ is isomorphic to the field of rational functions with $n$ variables. For a rationally homologically fibered knot $K$, we take a {\it non-trivial} homomorphism $\rho:G(K) \to \Gamma$ to a PTFA group $\Gamma$, where $G(K)$ denotes the knot group $\pi_1 (E(K))$. We can regard $\mathcal{K}_\Gamma$ as a local coefficient system on $E(K)$ through $\rho$. Using arguments in Cochran-Orr-Teichner \cite[Section 2]{cot} and Cochran \cite[Section 3]{coc}, we have: \begin{lemma}\label{lem:vanish} For any non-trivial homomorphism $\rho:G(K) \to \Gamma$ to a PTFA group $\Gamma$, we have $H_\ast (E(K); \mathcal{K}_\Gamma)=0$. \end{lemma} \noindent By this lemma, we can define the Reidemeister torsion \[\tau_{\rho} (E(K)):= \tau(C_\ast (E(K); \mathcal{K}_{\Gamma})) \in K_1(\mathcal{K}_\Gamma)/\pm\rho(G(K))\] for the acyclic complex $C_\ast (E(K); \mathcal{K}_{\Gamma})$. We refer to Milnor \cite{milnor} for generalities of torsions. By higher-order Alexander invariants for $K$, we here mean this torsion $\tau_{\rho} (E(K))$. We now describe a factorization of $\tau_{\rho} (E(K))$ generalizing (\ref{eq1}). For that we use two kinds of invariants for rational homology cylinders from \cite{sakasai08} and \cite{gs08}. Let $(M_R,i_+,i_-) \in \mathcal{C}_{g,1}^\mathbb{Q}$ be the rational homology cylinder obtained as the sutured manifold for a minimal genus Seifert surface $R$ of $K$. We use the same notation $\rho: \pi_1 (M_R) \to \Gamma$ for the composition $\pi_1 (M_R) \to G(K) \xrightarrow{\rho} \Gamma$. By applying Cochran-Orr-Teichner \cite[Proposition 2.10]{cot}, we have: \begin{lemma}\label{lem:vanish2} $i_+, i_-: H_\ast (\Sigma_{g,1},p ;i_{\pm}^\ast \mathcal{K}_\Gamma) \to H_\ast (M_R,p ;\mathcal{K}_\Gamma)$ are isomorphisms as right $\mathcal{K}_\Gamma$-vector spaces. 
Equivalently, $H_\ast (M_R,i_\pm(\Sigma_{g,1}); \mathcal{K}_\Gamma)= 0$. \end{lemma} \noindent This lemma provides the following two kinds of invariants for $M_R$. \bigskip \noindent {\bf The Magnus matrix} \ Let $X \subset \Sigma_{g,1}$ be the union of $2g$ loops $\gamma_1, \ldots , \gamma_{2g}$ (see Figure \ref{fig:spine1}). $X$ is a deformation retract of $\Sigma_{g,1}$ relative to $p$. Therefore, for $\pm \in \{+,-\}$, we have \[H_1 (\Sigma_{g,1},p;i_{\pm}^\ast \mathcal{K}_\Gamma) \cong H_1 (X,p;i_{\pm}^\ast \mathcal{K}_\Gamma) = C_1 (\widetilde{X}) \otimes_{\pi_1 (\Sigma_{g,1})} i_{\pm}^\ast \mathcal{K}_\Gamma \cong \mathcal{K}_\Gamma^{2g}\] with a basis \[\{ \widetilde{\gamma}_1 \otimes 1, \ldots , \widetilde{\gamma}_{2g} \otimes 1\} \subset C_1 (\widetilde{X}) \otimes_{\pi_1 (\Sigma_{g,1})} i_{\pm}^\ast \mathcal{K}_\Gamma\] as a right $\mathcal{K}_\Gamma$-vector space. Here we fix a lift $\widetilde{p}$ of $p$ as a base point of $\widetilde{X}$, and denote by $\widetilde{\gamma}_i$ the lift of the oriented loop $\gamma_i$ starting from $\widetilde{p}$. \begin{definition}\label{def:Mag2} For $M_R=(M_R,i_+,i_-) \in \mathcal{C}_{g,1}^\mathbb{Q}$, the {\it Magnus matrix} \[r_\rho (M_R) \in GL(2g,\mathcal{K}_\Gamma)\] of $M_R$ is defined as the representation matrix of the right $\mathcal{K}_\Gamma$-isomorphism \[\mathcal{K}_\Gamma^{2g} \cong H_1 (\Sigma_{g,1},p;i_-^\ast \mathcal{K}_\Gamma) \xrightarrow[i_-]{\cong} H_1 (M_R,p;\mathcal{K}_\Gamma) \xrightarrow[i_+^{-1}]{\cong} H_1 (\Sigma_{g,1},p;i_+^\ast \mathcal{K}_\Gamma) \cong \mathcal{K}_\Gamma^{2g},\] where the first and the last isomorphisms use the bases mentioned above. \end{definition} \noindent The matrix $r_\rho(M_R)$ can be interpreted as a monodromy of $M_R$ from the viewpoint of twisted homology with coefficients in $\mathcal{K}_\Gamma$.
\bigskip \noindent {\bf $\Gamma$-torsion} \ Since the relative complex $C_\ast (M_R,i_+(\Sigma_{g,1});\mathcal{K}_\Gamma)$ obtained from any cell decomposition of $(M_R, i_+(\Sigma_{g,1}))$ is acyclic by Lemma \ref{lem:vanish2}, we can define the following: \begin{definition} For $M_R=(M_R,i_+,i_-) \in \mathcal{C}_{g,1}^\mathbb{Q}$, the $\Gamma$-{\it torsion} $\tau_{\rho}^+ (M_R)$ of $M_R$ is defined by \[\tau_{\rho}^+ (M_R):= \tau(C_\ast (M_R,i_+(\Sigma_{g,1});\mathcal{K}_\Gamma)) \in K_1 (\mathcal{K}_\Gamma)/\pm \rho (\pi_1 (M_R)).\] \end{definition} \bigskip A method for computing $r_\rho(M_R)$ and $\tau_{\rho}^+ (M_R)$ is given in \cite[Section 4]{gs08}, which is based on Kirk-Livingston-Wang's method \cite{klw} for invariants of string links, and we now recall it briefly. An {\it admissible presentation} of $\pi_1 (M_R)$ is defined to be a presentation of the form \begin{align}\label{admissible} \langle i_- (\gamma_1),\ldots,i_- (\gamma_{2g}), z_1 ,\ldots, z_l, i_+ (\gamma_1),\ldots,i_+ (\gamma_{2g}) \mid r_1, \ldots, r_{2g+l} \rangle \end{align} for some integer $l$. That is, it is a finite presentation with deficiency $2g$ whose generating set contains $i_- (\gamma_1),\ldots,i_- (\gamma_{2g}), i_+ (\gamma_1),\ldots,i_+ (\gamma_{2g})$ and is ordered as above. Such a presentation always exists (see \cite[Section 4]{gs08}).
For any admissible presentation, define $2g \times (2g+l)$, $l \times (2g+l)$ and $2g \times (2g+l)$ matrices $A,B,C$ over $\mathbb{Z} \Gamma$ by \[A=\overline{ \sideset{^{\rho}\!}{} {\mathop{\left({\displaystyle \frac{\partial r_j}{\partial i_-(\gamma_i)} } \right)}\nolimits} }_{\begin{subarray}{c} {}1 \le i \le 2g\\ 1 \le j \le 2g+l \end{subarray}}, \ \ B=\overline{ \sideset{^{\rho}\!}{} {\mathop{\left({\displaystyle \frac{\partial r_j}{\partial z_i} } \right)}\nolimits} }_{\begin{subarray}{c} {}1 \le i \le l\\ 1 \le j \le 2g+l \end{subarray}}, \ \ C=\overline{ \sideset{^{\rho}\!}{} {\mathop{\left({\displaystyle \frac{\partial r_j}{\partial i_+(\gamma_i)} } \right)}\nolimits} }_{\begin{subarray}{c} {}1 \le i \le 2g\\ 1 \le j \le 2g+l \end{subarray}}.\] \begin{proposition}[{\cite[Proposition 4.1]{gs08}}] \label{prop:MagnusFormula} As matrices with entries in $\mathcal{K}_\Gamma$, we have: \begin{itemize} \item[$(1)$] The square matrix $\begin{pmatrix} A \\ B \end{pmatrix}$ is invertible and $\tau_{\rho}^+ (M_R)=\begin{pmatrix} A \\ B \end{pmatrix}$; and \item[$(2)$] $r_\rho(M_R) = -C \begin{pmatrix} A \\ B \end{pmatrix}^{-1} \! \begin{pmatrix} I_{2g} \\ 0_{(l,2g)}\end{pmatrix}$. \end{itemize} \end{proposition} \begin{remark}\label{rem:Strebel} We see from Strebel \cite{strebel} that for a PTFA group $\Gamma$, every matrix with entries in $\mathbb{Z} \Gamma$ sent to an invertible matrix over $\mathbb{Q}$ by the augmentation map $\mathbb{Z} \Gamma \to \mathbb{Z}$ is invertible over $\mathcal{K}_\Gamma$. The first assertion of (1) follows from this fact. Indeed, $\begin{pmatrix} A \\ B \end{pmatrix}$ is sent to a representation matrix of $H_1 (M_R;\mathbb{Q})/ i_+ (H_1 (\Sigma_{g,1};\mathbb{Q}))=0$ by the augmentation map, which is invertible over $\mathbb{Q}$. 
\end{remark} \begin{remark}\label{rem:obstruction} If $K$ is fibered, the complementary sutured manifold for the unique minimal genus Seifert surface is a product sutured manifold, so that the $\Gamma$-torsion is trivial for any $\Gamma$. Therefore we can use the $\Gamma$-torsion as a fibering obstruction for homologically fibered knots. Note that we can also use the Magnus matrix (see \cite[Theorem 4.1]{gs08} and Section \ref{sec:magnus}). \end{remark} By using the above invariants, the factorization formula for $\tau_\rho (E(K))$ is given as follows, where the statement is simpler than that in \cite{gs08} because we are now considering only the knot case. \begin{theorem}\label{thm:factorization} Let $K$ be a rationally homologically fibered knot of genus $g$ and let $R$ be a minimal genus Seifert surface of $K$. For any non-trivial homomorphism $\rho:G(K) \to \Gamma$ to a PTFA group $\Gamma$, a loop $\mu$ representing the meridian of $K$ satisfies $\rho(\mu) \neq 1 \in \Gamma \subset \mathcal{K}_\Gamma$ and we have a factorization \begin{equation}\label{eq2} \tau_\rho (E(K)) = \frac{\tau_\rho^+ (M_R) \cdot (I_{2g} -\rho (\mu) r_\rho (M_R))}{1-\rho(\mu)} \quad \in K_1(\mathcal{K}_\Gamma)/\pm\rho(G(K)) \end{equation} of the torsion $\tau_\rho (E(K))$. \end{theorem} \begin{proof} \ First, by passing to the image if necessary, we may suppose that $\rho$ is onto. This is justified by the facts that any subgroup $\Gamma'$ of a PTFA group $\Gamma$ is again PTFA and that the torsion is invariant under the field extension $\mathcal{K}_{\Gamma'} \hookrightarrow \mathcal{K}_\Gamma$. By the definition of PTFA groups, we see that there exists a surjective homomorphism $\Gamma \to \mathbb{Z}$. Then the composite $G(K) \xrightarrow{\rho} \Gamma \to \mathbb{Z}$ is also surjective and it coincides with the abelianization map of $G(K)$ up to sign. Hence $\rho (\mu) \neq 1 \in \Gamma$.
The rest of the proof is almost identical to the argument in \cite[Section 5]{gs08} (see also the argument of Friedl \cite[Section 6]{fri}). For convenience, we repeat it here in a simplified form. Given an admissible presentation of $\pi_1 (M_R)$ as in (\ref{admissible}), we denote it briefly by \[\pi_1 (M_R) \cong \langle i_- (\overrightarrow{\gamma}), \overrightarrow{z}, i_+ (\overrightarrow{\gamma}) \mid \overrightarrow{r} \rangle.\] A standard computation gives \[G(K) \cong \langle i_- (\overrightarrow{\gamma}), \overrightarrow{z},i_+ (\overrightarrow{\gamma}),\mu \mid \overrightarrow{r}, i_- (\overrightarrow{\gamma}) \, \mu \, i_+(\overrightarrow{\gamma})^{-1} \mu^{-1} \rangle.\] From this presentation, we construct a 2-complex $X(K)$ consisting of one 0-cell, one 1-cell for each generator and one 2-cell for each relation, with an attaching map according to the word. We can check that $E(K)$ and $X(K)$ are simple homotopy equivalent (see \cite[Lemma 5.1]{gs08}). The $\mathcal{K}_\Gamma$-rank of $C_i (X(K);\mathcal{K}_\Gamma)$ and the $\mathbb{Z}$-rank of $C_i (X(K))$ coincide, and in degrees $0$, $1$, $2$ they are given by $1$, $4g+l+1$, $4g+l$, respectively. The map $\partial_2: C_2 (X(K);\mathcal{K}_\Gamma) \cong \mathcal{K}_\Gamma^{4g+l} \to C_1 (X(K);\mathcal{K}_\Gamma) \cong \mathcal{K}_\Gamma^{4g+l+1}$ is represented by the matrix \[D_2:= \begin{pmatrix} A & I_{2g} \\ B & 0_{(l,2g)} \\ C & -\rho (\mu)^{-1} I_{2g} \\ 0_{(1,2g+l)} & \ast \ \ast \ \cdots \ \ast \end{pmatrix}. \] Consider the matrix $D_2^\mu$ obtained from $D_2$ by deleting the last row.
By elementary transformations of matrices, we have \begin{align*} D_2^\mu &= \begin{pmatrix} A \ & I_{2g} \\ B \ & 0_{(l,2g)}\\ C \ & -\rho (\mu)^{-1} I_{2g} \end{pmatrix} \to \begin{pmatrix} A + \rho (\mu) C & 0_{2g} \\ B & 0_{(l,2g)}\\ C & -\rho (\mu)^{-1} I_{2g} \end{pmatrix}\\ &\to \begin{pmatrix} A + \rho (\mu) C & 0_{2g} \\ B & 0_{(l,2g)} \\ 0_{(2g,2g+l)} & -\rho (\mu)^{-1} I_{2g} \end{pmatrix} =:D. \end{align*} Here the above matrices are with entries in $\mathbb{Z} \Gamma$, and we apply the augmentation map $\mathbb{Z} \Gamma \to \mathbb{Z}$ to $D$. Then we have a matrix representing $\partial_2: C_2 (X(K)) \to C_1 (X(K))/ \langle \mu \rangle$, which is easily seen to be invertible over $\mathbb{Q}$. Hence $D$ (and also $D_2^\mu$) is invertible over $\mathcal{K}_\Gamma$ as mentioned in Remark \ref{rem:Strebel}. Now we compute $\tau_\rho (E(K))=\tau (C_\ast (X(K);\mathcal{K}_\Gamma))$. By the cell structure of $X(K)$, \[\tau_\rho (E(K)) = D_2^\mu \cdot (1-\rho (\mu)^{-1})^{-1}\] holds. Then as elements in $K_1(\mathcal{K}_\Gamma)/\pm\rho(G(K))$, we have \begin{align*} D_2^\mu &= D =\begin{pmatrix} A + \rho (\mu) C \\ B \end{pmatrix}=\begin{pmatrix} I_{2g} - \rho (\mu) r_{\rho} (M_R) & \ -\rho (\mu) Z \\ 0_{(l,2g)} & \ I_l \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix}\\ &=(I_{2g} - \rho (\mu) r_{\rho} (M_R)) \begin{pmatrix} A \\ B \end{pmatrix}, \end{align*} \noindent where we used \[\begin{pmatrix} A + \rho (\mu) C \\ B \end{pmatrix} =\begin{pmatrix} A \\ B \end{pmatrix} -\rho (\mu) \begin{pmatrix} r_{\rho} (M_R) \quad Z \\ 0_{(l,2g+l)} \end{pmatrix} \begin{pmatrix} A \\ B \end{pmatrix}\] at the third equality, where $Z$ is defined by the formula $(r_\rho (M_R) \quad Z)= -C \begin{pmatrix} A \\ B \end{pmatrix}^{-1}$ (see Proposition \ref{prop:MagnusFormula} (2)). This completes the proof. \end{proof} \noindent When we take the abelianization map $\rho_1: G(K) \to \langle t \rangle \subset \mathbb{Q} (t)$ as $\rho$, the formula (\ref{eq1}) is recovered.
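\noindent Indeed, since the determinant identifies $K_1$ of the commutative field $\mathbb{Q}(t)$ with $\mathbb{Q}(t)^{\times}$ and $\rho_1 (G(K))=\langle t \rangle$, in this case the factorization (\ref{eq2}) reads \[\tau_{\rho_1} (E(K)) = \frac{\tau_{\rho_1}^+ (M_R) \cdot \det \left( I_{2g} - t\, r_{\rho_1} (M_R) \right)}{1-t} \quad \in \mathbb{Q}(t)^{\times}/\pm t^{\mathbb{Z}},\] where $\tau_{\rho_1}^+ (M_R)$ is also regarded as an element of $\mathbb{Q}(t)^{\times}$ via the determinant.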
\begin{remark} Factorizations of (higher-order) Alexander invariants into some torsions and ``monodromy'' information appear in various contexts such as Morse-Novikov theory and the theory of string links. For example, see Hutchings-Lee \cite{hl,hl2}, Goda-Matsuda-Pajitnov \cite{GMP}, Kitayama \cite{kitayama} and Kirk-Livingston-Wang \cite{klw}. It would be interesting to compare these factorization formulas in an appropriate situation. \end{remark} \section{A sample calculation}\label{sec:sample} Although all the ingredients in the formula $(\ref{eq2})$ are determined by information on fundamental groups, it is difficult to compute them explicitly because of the non-commutativity of $\mathcal{K}_\Gamma$, except in some special cases including the following. Let $K$ be a homologically fibered knot with a minimal genus Seifert surface $R$ and let $M_{R}$ be the sutured manifold for $R$. Consider the group extension \begin{equation}\label{eq:seq} 1\longrightarrow G(K)'/G(K)'' \longrightarrow D_2 (K) \longrightarrow G(K)/G(K)'=H_1(E(K))\cong \mathbb{Z} \longrightarrow 1 \end{equation} associated with the metabelian quotient $D_2 (K) :=G(K)/G(K)''$ of $G(K)$. We have \[G(K)'/G(K)'' \cong H_{1}(R) \cong H_{1}(M_{R})\] since it coincides with the first homology of the infinite cyclic covering of $E(K)$, which can be seen as the product (as homology cylinders) of infinitely many copies of $M_R$. Let $\rho_2$ be the natural projection \[\rho_2: G(K) \longrightarrow D_2 (K). \] It is known that $D_2 (K)$ is PTFA (see Strebel \cite{strebel}), so that $\mathcal{K}_{D_2 (K)}$ is defined. Then, it follows from Proposition \ref{prop:MagnusFormula} that $\tau_{\rho_2}^+ (M_R)$ and $r_{\rho_2} (M_{R})$ can be computed by calculations in the {\it commutative} subfield $\mathcal{K}_{H_1 (M_R)}$ of $\mathcal{K}_{D_2 (K)}$, and therefore the computation can actually be carried out. Let us see an example of the calculation of our invariants.
Let $K$ be the knot obtained as the boundary of the Seifert surface $R$ illustrated in Figure \ref{fig:0057knot}. We can easily compute that $\Delta_K (t)=1-2t+3t^2-2t^3+t^4$ and the genus of $R$ is $2$. Hence $K$ is a homologically fibered knot and $R$ is of minimal genus. The graph $G$ on the right-hand side of Figure \ref{fig:0057knot} is obtained from $R$ by a deformation retraction. Thus $\pi_{1}(M_R)\cong \pi_{1}(S^3-\overset{\circ}{N}(G))$. Then $\pi_1(M_{R})$ has a presentation: \[{\small \left\langle \begin{array}{c|l} z_{1},z_{2}, \ldots, z_{10} \ & \ \begin{array}{ll} z_{1}z_{5}z_{6}^{-1},\,z_{2}z_{3}z_{4}z_{1},\,z_{3}z_{9}^{-1}z_{5}^{-1},\, z_{7}z_{4}z_{8}^{-1},\\ z_{8}z_{10}z_{6},\, z_{2}z_{5}z_{7}^{-1}z_{5}^{-1},\,z_{9}z_{4}z_{10}^{-1}z_{4}^{-1} \end{array} \end{array} \right\rangle. }\] We can drop the last relation $z_{9}z_{4}z_{10}^{-1}z_{4}^{-1}$ because it is derived from the others. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.8\textwidth]{0057knot.eps} \end{center} \caption{} \label{fig:0057knot} \end{figure} We take a spine of $R$ as in Figure \ref{fig:basis}, by which we can fix an identification of $\Sigma_{g,1}$ and $R$. \begin{figure}[h] \begin{center} \includegraphics[width=0.4\textwidth]{basis.eps} \end{center} \caption{} \label{fig:basis} \end{figure} A direct computation shows that \begin{align*} i_-(\gamma_1)&=z_{5}z_{1}, & i_-(\gamma_2)&=z_{2}^{-1}, & i_-(\gamma_3)&=z_{5}z_{7}^{-1}z_{8}^{-1}z_{4}^{-1}, & i_-(\gamma_4)&=z_{4}^{-1}, \\ i_+(\gamma_1)&=z_{5}, & i_+(\gamma_2)&=z_{6}z_{9}, & i_+(\gamma_3)&=z_{6}z_{5}^{-1}z_{3}z_{5}z_{7}^{-1}z_{4}^{-1}z_{6}^{-1}, & i_+(\gamma_4)&=z_{6}z_{7}z_{6}^{-1}. \end{align*} \noindent Here the darker color in $R$ indicates the $+$-side.
Then, we obtain an admissible presentation of $\pi_1 (M_{R})$: {\small \begin{center} \begin{tabular}{lcr} \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{10},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $z_{1}z_{5}z_{6}^{-1},\,z_{2}z_{3}z_{4}z_{1},\,z_{3}z_{9}^{-1}z_{5}^{-1},\, z_{7}z_{4}z_{8}^{-1},\,z_{8}z_{10}z_{6},\, z_{2}z_{5}z_{7}^{-1}z_{5}^{-1}, $ \\ & $i_{-}(\gamma_{1})z_{1}^{-1}z_{5}^{-1},\, i_{-}(\gamma_{2})z_{2},\, i_{-}(\gamma_{3})z_{4}z_{8}z_{7}z_{5}^{-1},\, i_{-}(\gamma_{4})z_{4},$ \\ & $i_{+}(\gamma_{1})z_{5}^{-1}, \,i_{+}(\gamma_{2})z_{9}^{-1}z_{6}^{-1},\, i_{+}(\gamma_{3})z_{6}z_{4}z_{7}z_{5}^{-1}z_{3}^{-1}z_{5}z_{6}^{-1},\, i_{+}(\gamma_{4})z_{6}z_{7}^{-1}z_{6}^{-1}$ \end{tabular} \end{center} } By sliding the edges $v_{1}$ and $v_{2}$ of $G$ as in Figure \ref{fig:homology}, we obtain a graph whose complement is a genus 4 handlebody. This means that the complement of $G$ (and hence $M_R$) is homeomorphic to a genus 4 handlebody. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.65\textwidth]{homology.eps} \end{center} \caption{} \label{fig:homology} \end{figure} Let $D_1,\ldots , D_4$ be the meridian disks of the handlebody as illustrated in the figure. Put $x_1:=z_{1}^{-1}$, $x_{2}:=z_{6}^{-1}$, $x_{3}:=(z_{6}z_{7})^{-1}$ and $x_{4}:=z_{4}$, where $x_i$ is a loop intersecting $D_i$ transversely in one point, running from the upper side to the lower side in Figure \ref{fig:homology}, and is disjoint from $D_{j}\,(i\neq j)$.
By using them, we have the following simplified admissible presentation of $\pi_1(M_{R})$: {\small \begin{center} \begin{tabular}{lcr} \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, x_{1},x_{2},x_{3} ,x_{4},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $i_{-}(\gamma_{1})x_{1}x_{2}x_{1}^{-1}$, \, $i_{-}(\gamma_{2})x_{1}x_{3}^{-1}x_{2}x_{1}^{-1}$, \, $i_{-}(\gamma_{3})x_{4}x_{2}x_{3}^{-1}x_{4}x_{2}x_{3}^{-1}x_{2}x_{1}^{-1}$,\\ & $i_{-}(\gamma_{4})x_{4}$, \,$i_{+}(\gamma_{1})x_{2}x_{1}^{-1},$ \, $i_{+}(\gamma_{2})x_{4}x_{3}^{-1}x_{2}$,\\ & $i_{+}(\gamma_{3})x_{2}^{-1}x_{4}x_{2}x_{3}^{-1} x_{2}x_{1}^{-1}x_{4}x_{3}^{-1}x_{2}$, \,$i_{+}(\gamma_{4})x_{2}^{-1}x_{3}$ \end{tabular} \end{center} } We write $r_1,\ldots ,r_8$ for these relations in order. Recall that $\mathcal K_{H_1 (M_R)}$ is isomorphic to the field of rational functions with variables $x_1,\ldots , x_4$, where we use the same notation for the image of $x_i$ by the abelianization map $\pi_1 (M_R) \to H_1 (M_R)$. Then we have \noindent \[\begin{pmatrix}A \\ B\\ C\end{pmatrix}= \begin{pmatrix} I_4 & 0_4 \\ G_1 & G_2 \\ 0_4 & I_4 \end{pmatrix},\] where $G_1 = \begin{pmatrix} g_{11}&g_{12}&g_{13}&g_{14}\\ g_{21}&g_{22}&g_{23}&g_{24}\\ g_{31}&g_{32}&g_{33}&g_{34}\\ g_{41}&g_{42}&g_{43}&g_{44} \end{pmatrix}$ and $G_2 = \begin{pmatrix} g_{15}&g_{16}&g_{17}&g_{18}\\ g_{25}&g_{26}&g_{27}&g_{28}\\ g_{35}&g_{36}&g_{37}&g_{38}\\ g_{45}&g_{46}&g_{47}&g_{48} \end{pmatrix}$ with $g_{ij}= \rho\left( \overline{\displaystyle\frac{\partial r_{j}}{\partial x_{i}}} \right)$. 
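The entries $g_{ij}$ are Fox free derivatives pushed into the commutative field $\mathcal{K}_{H_1 (M_R)}$, and they can be evaluated mechanically. The following is a minimal sketch, assuming SymPy is available (the helper \verb|fox_derivative| is ours, and neither the involution nor $\rho$ is applied here); it is checked on the relator $r_1 = i_{-}(\gamma_{1})x_{1}x_{2}x_{1}^{-1}$ above, with \verb|a| standing for $i_{-}(\gamma_1)$:

```python
import sympy as sp

def fox_derivative(word, gen, syms):
    """Fox free derivative d(word)/d(gen), evaluated in the abelianization.

    word: list of (generator name, exponent) pairs with exponent +1 or -1.
    Rules: d(uv)/dx = du/dx + u*dv/dx, d(x)/dx = 1, d(x^{-1})/dx = -x^{-1},
    and d(y)/dx = 0 for a generator y != x.
    """
    deriv = sp.Integer(0)
    prefix = sp.Integer(1)  # abelianized image of the prefix read so far
    for name, e in word:
        s = syms[name]
        if name == gen:
            deriv += prefix if e == 1 else -prefix / s
        prefix *= s**e
    return sp.expand(deriv)

# the relator r1 = i_-(gamma_1) x1 x2 x1^{-1}; "a" stands for i_-(gamma_1)
syms = {n: sp.Symbol(n) for n in ("a", "x1", "x2")}
r1 = [("a", 1), ("x1", 1), ("x2", 1), ("x1", -1)]

d_x1 = fox_derivative(r1, "x1", syms)  # a - a*x2
d_x2 = fox_derivative(r1, "x2", syms)  # a*x1
```

In the paper's convention one would further apply the involution $\overline{\;\cdot\;}$ and the coefficient map $\rho$ to each such derivative to obtain $g_{ij}$.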
Thus \[\tau_{\rho_2}^+ (M_{R}) = \begin{pmatrix}A \\ B\end{pmatrix} = \begin{pmatrix} I_4 & 0_4 \\ G_1 & G_2 \end{pmatrix}.\] As a torsion, it is equivalent to $G_2$, where \begin{alignat*}{3} g_{15} &= -1, & \quad g_{16} &= 0, & \quad g_{18} &= 0,\\ g_{25} &= x_{1}^{-1}x_{2}, & \quad g_{26} &= x_{2}, & \quad g_{28} &= -x_{3},\\ g_{35} &= 0, & \quad g_{36} &= -x_{2}, & \quad g_{38} &= x_{3},\\ g_{45} &= 0, & \quad g_{46} &= x_{2}x_{3}^{-1}x_{4}, & \quad g_{48} &= 0, \end{alignat*} \begin{align*} g_{17} &= -x_{2}x_{3}^{-1}x_{4}, \qquad g_{27} = x_{2}+x_{1}^{-1}x_{2}^{2}x_{3}^{-1}x_{4}+x_{1}^{-1}x_{2}^{3}x_{3}^{-2}x_{4}-x_{1}^{-1}x_{2}^{3}x_{3}^{-2}x_{4}^{2},\\ g_{37} &= -x_{2}-x_{1}^{-1}x_{2}^{2}x_{3}^{-1}x_{4}, \qquad g_{47} = x_{2}x_{3}^{-1}x_{4}+x_{1}^{-1}x_{2}^{3}x_{3}^{-2}x_{4}^{2}. \end{align*} Then we have \[\det (\tau_{\rho_2}^+ (M_{R}))= \det (G_2) = -\frac{x_{2}^{3}x_{4}^{2}}{x_{1}x_{3}^{2}}(x_{2}-x_{3}-x_{2}x_{4}). \] The Magnus matrix $r_{\rho_2} (M_R)$ can be computed by the formula in Proposition \ref{prop:MagnusFormula} (2); however, we omit it here. \begin{remark} From an admissible presentation, we can use the Mathematica program given in Section \ref{sec:program} for calculations of $\tau_{\rho_2}^+ (M_{R})$ and $r_{\rho_2} (M_{R})$. Note that the program uses $\{i_+(\gamma_1), i_+(\gamma_2),\ldots,i_+(\gamma_{2g})\}$ as a basis of $H_1 (M_R)$. In the above example, \[x_{1}=\gamma_{2}^{-2}\gamma_{3}, \quad x_{2}=\gamma_{1}^{-1}\gamma_{2}^{-2}\gamma_{3}, \quad x_{3}=\gamma_{1}^{-1}\gamma_{2}^{-2}\gamma_{3}\gamma_{4}^{-1}, \quad x_{4}=\gamma_{2}^{-1}\gamma_{4}^{-1},\] where $\gamma_j$ denotes $i_+ (\gamma_j)$, and we have $\det (\tau_{\rho_2}^+ (M_{R}))= \displaystyle\frac{\gamma_{3}}{\gamma_{1}^2 \gamma_{2}^5 \gamma_{4}} (1+\gamma_{2}-\gamma_{2}\gamma_{4})$. \end{remark} \section{Homologically fibered knots with 12 crossings}\label{sec:HFK12} It is known that, among prime knots with at most 11 crossings, all homologically fibered knots are fibered.
On the other hand, Friedl-Kim \cite{fk} showed, by using the twisted Alexander invariant, that there are 13 non-fibered homologically fibered knots with 12 crossings. See Figure \ref{fig:all} and Table \ref{table:nonfibered}. In this section, we list admissible presentations and the torsion $\tau^{+}_{\rho_2}$ for the sutured manifolds associated with the minimal genus Seifert surfaces illustrated in Figures \ref{fig:0210SG}--\ref{fig:0815SG}. As a by-product, we observe that $\tau^{+}_{\rho_2}$ can also detect the non-fiberedness of all these knots. In the forthcoming paper \cite{gs09}, we will obtain the same result by using Johnson homomorphisms as a fibering obstruction. It is easy to see that the complements of the Seifert surfaces for the knots 0210, 0214, 0382 and 0394 are handlebodies. (In \cite{cl}, a non-alternating prime knot with 12 crossings is denoted by $12n_{-}P$; we refer only to the number $P$ in this section.) Hence we take free generators corresponding to the disks $z_{i}$ in each figure, which run from the upper side to the lower side of the diagrams. As for the other knots, we obtain the admissible presentations by the same method as in Section \ref{sec:sample}.
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.99\textwidth]{all.eps} \end{center} \caption{Non-fibered homologically fibered knots with 12 crossings} \label{fig:all} \end{figure} \begin{table}[htbp] \begin{tabular}{|c|c|c|} \hline Knot & Genus & Alexander polynomial \\ \hline\hline 0057 & 2 & $1-2t+3t^2-2t^3+t^4$\\ \hline 0210, 0214 & 3 & $1-t-t^2+3t^3-t^4-t^5+t^6$\\ \hline 0258, 0464, 0483 & 2 & $1-4t+5t^2-4t^3+t^4$ \\ \hline 0279, 0394 & 2 & $1-6t+11t^2-6t^3+t^4$\\ \hline 0382, 0801 & 2 & $1-5t+7t^2-5t^3+t^4$\\ \hline 0535 & 2 & $1-7t+11t^2-7t^3+t^4$\\ \hline 0650 & 2 & $1-4t+7t^2-4t^3+t^4$\\ \hline 0815 & 2 & $1-2t+t^2-2t^3+t^4$\\ \hline \end{tabular} \caption{Non-fibered homologically fibered knots with 12 crossings} \label{table:nonfibered} \end{table} The following are admissible presentations and the determinant of the torsion $\tau^{+}_{\rho_2} (M_R)$, where we use $\{i_+(\gamma_1), i_+(\gamma_2),\ldots, i_+(\gamma_{2g})\}$ as a basis of $H_1 (M_R)$ and denote $i_+(\gamma_j)$ by $\gamma_j$ for simplicity. Note that the example in Section \ref{sec:sample} is about the knot 0057, and we omit it here.
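Computations of this kind are easy to cross-check symbolically. For instance, the two expressions for $\det (\tau_{\rho_2}^+ (M_{R}))$ of the knot 0057 given in Section \ref{sec:sample}, in the variables $x_i$ and in the basis $\{i_+(\gamma_j)\}$, agree; a minimal sketch assuming SymPy is available, with \verb|g1|, \ldots, \verb|g4| standing for $\gamma_1,\ldots,\gamma_4$:

```python
import sympy as sp

g1, g2, g3, g4 = sp.symbols("g1 g2 g3 g4", positive=True)

# change of variables from the remark in Section 4 (knot 0057):
x1 = g3 / g2**2              # x1 = gamma_2^{-2} gamma_3
x2 = g3 / (g1 * g2**2)       # x2 = gamma_1^{-1} gamma_2^{-2} gamma_3
x3 = g3 / (g1 * g2**2 * g4)  # x3 = gamma_1^{-1} gamma_2^{-2} gamma_3 gamma_4^{-1}
x4 = 1 / (g2 * g4)           # x4 = gamma_2^{-1} gamma_4^{-1}

# det of the torsion in the x-variables ...
det_x = -(x2**3 * x4**2) / (x1 * x3**2) * (x2 - x3 - x2 * x4)
# ... and the expression in the basis {i_+(gamma_j)}
det_g = g3 / (g1**2 * g2**5 * g4) * (1 + g2 - g2 * g4)

assert sp.simplify(det_x - det_g) == 0
```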
\begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0210}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_{1}),\ldots, i_{-}(\gamma_{6}),\, z_{1},\ldots z_{6},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{6})$ \\ \multicolumn{1}{c}{Relations} & $ i_{-}(\gamma_{1})z_{3}^{-1}z_{4},\, i_{-}(\gamma_{2})z_{3}^{-2}z_{2},\, i_{-}(\gamma_{3})z_{5}^{-1}z_{3}^{-1}z_{2},\, i_{-}(\gamma_{4})z_{2}^{-1}z_{1}z_{6}^{-1}z_{5}z_{6}^{-1}z_{5},\, $ \\ & $ i_{-}(\gamma_{5})z_{5}^{-1}z_{6}z_{5}^{-1}z_{1}z_{6}^{-1}z_{5}z_{6}^{-1}z_{5},\, i_{-}(\gamma_{6})z_{5}^{-1}z_{6}z_{5}^{-1}z_{1}z_{3}^{-1}z_{5}z_{6}^{-1}z_{5},\, $ \\ & $ i_{+}(\gamma_{1})z_{4}, \, i_{+}(\gamma_{2})z_{4}z_{3}^{-1}z_{2}z_{3}^{-1},\, i_{+}(\gamma_{3})z_{6}^{-1}z_{2}z_{3}^{-1},\, i_{+}(\gamma_{4})z_{5}z_{2}^{-1}z_{1}z_{6}^{-1}z_{5},\, $ \\ & $ i_{+}(\gamma_{5})z_{5}^{-1}z_{6}z_{2}^{-1}z_{1}z_{6}^{-1}z_{5},\, i_{+}(\gamma_{6})z_{5}^{-1}z_{6}z_{3}^{-1}z_{5}z_{6}^{-1}z_{5}\, $ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{-\frac{\gamma_1^5\gamma_3^3\gamma_5^4\gamma_6^7}{\gamma_2^6\gamma_4^6} +\frac{\gamma_1^6\gamma_3^4\gamma_5^4\gamma_6^7}{\gamma_2^7\gamma_4^6} -\frac{\gamma_1^6\gamma_3^4\gamma_5^4\gamma_6^8}{\gamma_2^7\gamma_4^6}}$\\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0210SG.eps} \caption{0210} \label{fig:0210SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0214}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_{1}),\ldots, i_{-}(\gamma_{6}),\, z_{1},\ldots z_{6},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{6})$ \\ \multicolumn{1}{c}{Relations} & $ i_{-}(\gamma_{1})z_{2}z_{3}^{-1}z_{2}^{-1},\, i_{-}(\gamma_{2})z_{2}z_{1}^{-1}z_{2},\, i_{-}(\gamma_{3})z_{5}^{-1}z_{1}^{-1}z_{2},\, $ \\ & $ i_{-}(\gamma_{4})z_{6}^{-1}z_{1}z_{3}^{-1}z_{5},\, i_{-}(\gamma_{5})z_{5}^{-1}z_{4}z_{3}^{-1}z_{1}z_{3}^{-1}z_{5},\, i_{-}(\gamma_{6})z_{5}^{-1}z_{4},\, $ \\ & $ i_{+}(\gamma_{1})z_{2}^{2}z_{3}^{-1}z_{2}^{-1}, \, 
i_{+}(\gamma_{2})z_{2}^{2}z_{6}^{-1},\, i_{+}(\gamma_{3})z_{1}^{-1}z_{2}z_{6}^{-1},\, $ \\ & $ i_{+}(\gamma_{4})z_{5}z_{3}^{-1}z_{1},\, i_{+}(\gamma_{5})z_{3}^{-1}z_{5}z_{3}^{-1}z_{1},\, i_{+}(\gamma_{6})z_{3}^{-1}z_{4}\, $ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{\frac{1}{\gamma_2\gamma_4^2\gamma_6} -\frac{\gamma_1}{\gamma_2\gamma_4^2\gamma_6} +\frac{\gamma_1}{\gamma_2\gamma_4\gamma_5\gamma_6}}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0214SG.eps} \caption{0214} \label{fig:0214SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0258}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{7},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $z_{1}z_{2}z_{3}z_{4}$, $z_{1}z_{2}z_{4}z_{6}^{-1}z_{7}^{-1}$, $z_{7}z_{6}z_{5}$, $i_{-}(\gamma_{1})z_{7}z_{6}z_{7}^{-1}$, \\ & $i_{-}(\gamma_{2})z_{7}z_{6}z_{5}^{-1}z_{4}z_{6}^{-1}z_{7}^{-1}$, $i_{-}(\gamma_{3})z_{1}z_{2}^{2}z_{4}^{2}z_{6}^{-1}z_{7}^{-1}$, $i_{-}(\gamma_{4})z_{1}z_{2}^{2}z_{1}^{-2},$ \\ & $i_{+}(\gamma_{1})z_{7}^{-1}, \,i_{+}(\gamma_{2})z_{6}z_{4},\, i_{+}(\gamma_{3})z_{2}z_{1}^{-1}z_{4},\, i_{+}(\gamma_{4})z_{2}z_{1}^{-2}\,$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{-\frac{\gamma_2^{5}\gamma_4^{7}}{\gamma_1^{6}\gamma_3^{12}} +\frac{\gamma_2^{6}\gamma_4^{8}}{\gamma_1^{7}\gamma_3^{13}} -\frac{\gamma_2^{6}\gamma_4^{9}}{\gamma_1^{7}\gamma_3^{14}}}$\\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0258SG.eps} \caption{0258} \label{fig:0258SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0279}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{9},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ z_{1}z_{2}z_{4},\,z_{1}z_{3}^{-1}z_{2}z_{9}^{-1},\,z_{5}z_{8}^{-1}z_{6}^{-1},\, 
z_{6}z_{7}z_{8}z_{9},\,z_{2}^{-1}z_{3}z_{2}z_{5}^{-1},$ \\ & $i_{-}(\gamma_{1})z_{5}z_{8}z_{2}z_{9}^{-1}z_{5}^{-1},\, i_{-}(\gamma_{2})z_{5}z_{6}^{-1}z_{5}^{-1},\, i_{-}(\gamma_{3})z_{9}^{-1}z_{6}^{-1}z_{5}^{-1},\, i_{-}(\gamma_{4})z_{2}^{-1}z_{3}z_{1}z_{2}^{2},$ \\ & $i_{+}(\gamma_{1})z_{5}z_{2}z_{9}^{-1}z_{5}^{-1}, \, i_{+}(\gamma_{2})z_{5}z_{9}z_{6}^{-1},\, i_{+}(\gamma_{3})z_{2}^{-1}z_{6}^{-1},\, i_{+}(\gamma_{4})z_{2}^{-1}z_{1}z_{2}^{2}\,$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{-\frac{\gamma_3^2\gamma_4^5}{\gamma_2^5} +\frac{\gamma_3^2\gamma_4^5}{\gamma_1\gamma_2^5} +\frac{\gamma_3^2\gamma_4^6}{\gamma_2^5}}$ \\ \end{tabular} \begin{figure*}[htbp] \includegraphics[width=0.99\textwidth]{0279SG.eps} \caption{0279} \label{fig:0279SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0382}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_{1}),\ldots, i_{-}(\gamma_{4}),\, z_{1},\ldots z_{4},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ i_{-}(\gamma_{1})z_{2}z_{1}^{-1}z_{3}z_{2}^{-1},\, i_{-}(\gamma_{2})z_{2}z_{3}^{-1}z_{2}z_{1}^{-2}z_{4}z_{2}^{-1},\, i_{-}(\gamma_{3})z_{4}^{-1}z_{1}^{-1}z_{4}z_{2}^{-1},\,$ \\ & $ i_{-}(\gamma_{4})z_{2}^{2}z_{1}^{-1}z_{4},\, $ \\ & $ i_{+}(\gamma_{1})z_{3}z_{2}^{-1}, \, i_{+}(\gamma_{2})z_{2}z_{1}^{-2}z_{4}z_{1}^{-1},\, i_{+}(\gamma_{3})z_{1}^{-1},\, i_{+}(\gamma_{4})z_{4}z_{2}z_{1}^{-1}z_{4}\, $ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{\frac{1}{\gamma_1\gamma_2\gamma_4} +\frac{1}{\gamma_1\gamma_3^2\gamma_4} -\frac{1}{\gamma_1\gamma_3\gamma_4}}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0382SG.eps} \caption{0382} \label{fig:0382SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0394}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_{1}),\ldots, i_{-}(\gamma_{4}),\, z_{1},\ldots z_{4},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ 
\multicolumn{1}{c}{Relations} & $ i_{-}(\gamma_{1})z_{1}^{-1}z_{2}^{-1}z_{3},\, i_{-}(\gamma_{2})z_{3}^{-1}z_{4}z_{2}z_{3}z_{2}^{-1}z_{1},\, i_{-}(\gamma_{3})z_{4}z_{2}z_{3}z_{2}^{-1}z_{1},\, i_{-}(\gamma_{4})z_{4},\, $ \\ & $ i_{+}(\gamma_{1})z_{2}^{-1}z_{3}, \, i_{+}(\gamma_{2})z_{3}^{-1}z_{1}z_{3}^{-1}z_{4}z_{2}z_{3}z_{2}^{-1},\, i_{+}(\gamma_{3})z_{2}z_{3}z_{2}^{-1},\, i_{+}(\gamma_{4})z_{2}z_{4}\, $ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{\frac{1}{\gamma_1\gamma_2\gamma_3^2\gamma_4} +\frac{1}{\gamma_1^2\gamma_2\gamma_3\gamma_4} -\frac{1}{\gamma_1\gamma_2\gamma_3\gamma_4}}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0394SG.eps} \caption{0394} \label{fig:0394SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0464}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{10},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ z_{1}z_{2}z_{6}z_{7},\,z_{2}z_{9}z_{7},\,z_{3}z_{4}z_{5}z_{10}^{-1},\, z_{4}z_{5}z_{8},\,z_{1}z_{2}z_{3}^{-1}z_{2}^{-1},\, z_{8}z_{6}z_{8}^{-1}z_{9}^{-1}, $ \\ & $ i_{-}(\gamma_{1})z_{2}z_{10}z_{5}^{-1}z_{9}^{-1}z_{2}^{-1},\, i_{-}(\gamma_{2})z_{2}z_{10}z_{5}^{-1}z_{3}^{-1}z_{2}^{-1},\, i_{-}(\gamma_{3})z_{2}z_{8}^{-1}z_{2}^{-1},\, i_{-}(\gamma_{4})z_{2}z_{1},$ \\ & $i_{+}(\gamma_{1})z_{2}z_{9}^{-1}z_{2}^{-1}, \, i_{+}(\gamma_{2})z_{2}z_{5}^{-1}z_{2}^{-1},\, i_{+}(\gamma_{3})z_{1}^{-1}z_{8}^{-1}z_{9}^{-1}z_{2}^{-1}z_{1},\, i_{+}(\gamma_{4})z_{1}^{-1}z_{7}^{-1}z_{1}\,$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{-\frac{\gamma_1^3\gamma_4^3}{\gamma_3}-\gamma_1^2\gamma_4^4 +\gamma_1^3\gamma_4^4}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0464SG.eps} \caption{0464} \label{fig:0464SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0483}\\ \multicolumn{1}{c}{Generators} & 
$i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{9},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ z_{8}^{-1}z_{1}z_{4}z_{9}z_{4}^{-1},\, z_{5}z_{6}z_{7}^{-1}z_{6}^{-1}z_{8},\, z_{2}z_{3}z_{2}^{-1}z_{1},\, z_{3}^{-1}z_{2}z_{3}z_{5}^{-1},\, z_{4}z_{9}^{-1}z_{4}^{-1}z_{3},\,$ \\ & $ i_{-}(\gamma_{1})z_{1}z_{2}^{-1}z_{1}^{-1},\, i_{-}(\gamma_{2})z_{1}z_{4}^{-1}z_{8}^{-1},\, i_{-}(\gamma_{3})z_{6}^{-1},\, i_{-}(\gamma_{4})z_{6}^{-1}z_{3},$ \\ & $i_{+}(\gamma_{1})z_{4}^{-1}z_{2}^{-1}, \, i_{+}(\gamma_{2})z_{4}^{-1},\, i_{+}(\gamma_{3})z_{5}z_{6}^{-1}z_{8},\, i_{+}(\gamma_{4})z_{8}^{-1}z_{3}$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{\frac{1}{\gamma_1\gamma_3\gamma_4^2} -\frac{\gamma_2}{\gamma_1^2\gamma_3\gamma_4^2} -\frac{1}{\gamma_1\gamma_3\gamma_4}}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0483SG.eps} \caption{0483} \label{fig:0483SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0535}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{10},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ z_{1}z_{2}z_{3},\,z_{2}z_{6}z_{7}^{-1},\, z_{10}^{-1}z_{4}z_{5}z_{8}^{-1}z_{7},\, z_{1}z_{10}z_{9},\,z_{2}z_{3}z_{2}^{-1}z_{4}^{-1},\, z_{2}z_{6}^{-1}z_{2}^{-1}z_{5}^{-1},$ \\ & $ i_{-}(\gamma_{1})z_{10}^{-1},\, i_{-}(\gamma_{2})z_{10}^{-1}z_{1}^{-1}z_{3}^{-1}z_{1}^{-1}z_{10},\, i_{-}(\gamma_{3})z_{7}^{-1}z_{1}^{-1}z_{10},\, i_{-}(\gamma_{4})z_{6}z_{3}^{-1}z_{7},$ \\ & $i_{+}(\gamma_{1})z_{7}^{-1}z_{9}, \, i_{+}(\gamma_{2})z_{7}^{-1}z_{1}^{-1}z_{3}^{-1}z_{10}z_{7},\, i_{+}(\gamma_{3})z_{7}^{-1}z_{3}z_{10}z_{7},\, i_{+}(\gamma_{4})z_{7}^{-1}z_{6}z_{3}^{-1}z_{7}$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{-\frac{1}{\gamma_1^{11}\gamma_2^6\gamma_3^6\gamma_4^{15}} +\frac{1}{\gamma_1^{10}\gamma_2^5\gamma_3^6\gamma_4^{15}} 
-\frac{1}{\gamma_1^{10}\gamma_2^5\gamma_3^6\gamma_4^{14}}}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0535SG.eps} \caption{0535} \label{fig:0535SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0650}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{11},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ z_{2}z_{3}z_{1}^{-1}z_{4}z_{1},\, z_{2}z_{6}z_{8}^{-1}z_{6}^{-1}z_{11}^{-1},\, z_{1}z_{5}^{-1}z_{1}^{-1}z_{4},\, z_{3}z_{6}z_{9}^{-1}z_{6}^{-1},\, z_{9}z_{8}^{-1}z_{7}z_{8},\,$ \\ & $ z_{8}z_{7}z_{10}^{-1}z_{7}^{-1},\, z_{10}z_{6}^{-1}z_{11}z_{6}, $ \\ & $ i_{-}(\gamma_{1})z_{2}z_{6}^{-1}z_{2}^{-1},\, i_{-}(\gamma_{2})z_{2}z_{7}z_{6}^{-1}z_{2}^{-1},\, i_{-}(\gamma_{3})z_{6}z_{8}z_{6}^{-1}z_{2}^{-1},\, i_{-}(\gamma_{4})z_{2}z_{3}z_{1}^{-1},$ \\ & $i_{+}(\gamma_{1})z_{11}z_{6}^{-1}z_{2}^{-1}, \, i_{+}(\gamma_{2})z_{2}z_{3}^{-1}z_{2}^{-1},\, i_{+}(\gamma_{3})z_{1}z_{6}z_{8}z_{6}^{-1},\, i_{+}(\gamma_{4})z_{1}^{-1}$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{\frac{1}{\gamma_1\gamma_2^3\gamma_3^2\gamma_4^2} -\frac{1}{\gamma_1\gamma_2^3\gamma_3\gamma_4} +\frac{1}{\gamma_1\gamma_2^2\gamma_3\gamma_4}}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0650SG.eps} \caption{0650} \label{fig:0650SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0801}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{9},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ z_{1}^{-1}z_{6}z_{7}z_{8}^{-1}z_{9}^{-1},\, z_{3}z_{4}z_{9}z_{6}^{-1},\, z_{2}z_{4}z_{5},\, z_{2}z_{6}z_{7}^{-1}z_{6}^{-1},\, z_{2}z_{3}^{-1}z_{2}^{-1}z_{1}, $ \\ & $ i_{-}(\gamma_{1})z_{6}z_{7}z_{8}^{-1}z_{6},\, i_{-}(\gamma_{2})z_{1}z_{2}z_{8}z_{7}^{-1}z_{6}^{-1},\, i_{-}(\gamma_{3})z_{9}z_{6}^{-1}z_{2}^{-1},\, 
i_{-}(\gamma_{4})z_{5}^{-1}z_{9}^{-1}z_{5}^{-1},$ \\ & $i_{+}(\gamma_{1})z_{6}z_{9}, \, i_{+}(\gamma_{2})z_{6}z_{2}z_{6}z_{9}^{-1}z_{6}^{-1},\, i_{+}(\gamma_{3})z_{5}z_{9}z_{6}^{-1},\, i_{+}(\gamma_{4})z_{4}z_{9}^{-1}z_{5}^{-1}$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $-\gamma_1^2\gamma_3^2\gamma_4 +\gamma_1^2\gamma_2\gamma_3^2\gamma_4 -\gamma_1^2\gamma_2\gamma_3^3\gamma_4^2$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0801SG.eps} \caption{0801} \label{fig:0801SG} \end{figure*} \end{center} \begin{center} \begin{tabular}{lcr} \multicolumn{2}{c}{0815}\\ \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\ldots, i_{-}(\gamma_4),\, z_{1},\ldots ,z_{11},\, i_{+}(\gamma_{1}),\ldots,i_{+}(\gamma_{4})$ \\ \multicolumn{1}{c}{Relations} & $ z_{1}z_{9}z_{6},\, z_{1}z_{2}^{-1}z_{4}^{-1},\, z_{4}z_{11}^{-1}z_{5},\, z_{10}^{-1}z_{5}^{-1}z_{6}z_{7}z_{8},\, z_{8}^{-1}z_{6}^{-1}z_{9}z_{6},\,$ \\ & $ z_{7}^{-1}z_{6}^{-1}z_{3}z_{6},\, z_{4}z_{3}^{-1}z_{4}^{-1}z_{10}, $ \\ & $ i_{-}(\gamma_{1})z_{4}z_{3}^{-1}z_{4}^{-1},\, i_{-}(\gamma_{2})z_{4}z_{11},\, i_{-}(\gamma_{3})z_{9},\, i_{-}(\gamma_{4})z_{2}^{-1}z_{9}^{-1},$ \\ & $i_{+}(\gamma_{1})z_{2}^{-1}z_{3}^{-1}z_{4}^{-1}, \, i_{+}(\gamma_{2})z_{11}z_{1},\, i_{+}(\gamma_{3})z_{9}z_{3}^{-1}z_{1},\, i_{+}(\gamma_{4})z_{9}z_{2}^{-1}z_{9}^{-1}$ \\ \multicolumn{1}{c}{Torsion $\tau^{+}_{\rho_2}$} & $\displaystyle{-\frac{\gamma_1^3\gamma_2^5}{\gamma_4^6} +\frac{\gamma_1^2\gamma_2^4}{\gamma_4^5} +\frac{\gamma_1^3\gamma_2^5}{\gamma_4^5}}$ \\ \end{tabular} \begin{figure*}[h] \includegraphics[width=0.99\textwidth]{0815SG.eps} \caption{0815} \label{fig:0815SG} \end{figure*} \end{center} \begin{remark} According to \cite{bg} and \cite{juhasz2}, these knots have unique minimal genus Seifert surfaces. 
\end{remark} \section{Magnus matrix and concordances of Seifert surfaces}\label{sec:magnus} Not only the torsion $\tau_\rho^+$ but the Magnus matrix $r_\rho$ can be used as a fibering obstruction of homologically fibered knots. In fact, for a fibered knot $K$ with its unique minimal genus Seifert surface $R$ of genus $g$, the sutured manifold $M_R$ is given by a mapping cylinder $(\Sigma_{g,1} \times [0,1], \mathrm{id} \times 1, \varphi \times 0)$ of $\varphi$, which is an element of the mapping class group of $\Sigma_{g,1}$ and is uniquely (not up to conjugation) determined after fixing an identification of $R$ with $\Sigma_{g,1}$. Then the Magnus matrix $r_\rho (M_R)$ associated with a homomorphism $\rho:G(K) \to \Gamma$ is given by $\overline{ \sideset{^{\rho \circ i_+}\!}{} {\mathop{\left({\displaystyle\frac{\partial \varphi(\gamma_j)}{\partial \gamma_i}} \right)}\nolimits} }_{1 \le i,j \le 2g}$. In particular, all the entries are elements of $\mathbb{Z} \Gamma$. Therefore, for the detection of non-fiberedness of a non-fibered homologically fibered knot $K$, it suffices to find a minimal genus Seifert surface $R$, an identification of $R$ with $\Sigma_{g,1}$ and a homomorphism $\rho:G(K) \to \Gamma$ to a PTFA group $\Gamma$ whose Magnus matrix has an entry not contained in $\mathbb{Z} \Gamma$. \begin{example} If we continue the computation for the knot 0057 in Section \ref{sec:sample}, we can see that the $(1,3)$-entry of $r_{\rho_2} (M_R)$ is $\displaystyle\frac{\gamma_4}{1+\gamma_2-\gamma_2 \gamma_4}$, not an element of $\mathbb{Z} D_2 (K)$. This shows that the knot 0057 is not fibered. \end{example} In the usage of $r_\rho$ as a fibering obstruction, its invariance under homology cobordisms of homology cylinders is convenient. 
We first recall the definition of homology cobordisms of homology cylinders: \begin{definition} Two homology cylinders $M=(M,i_+,i_-), N=(N,j_+,j_-) \in \mathcal{C}_{g,1}$ are said to be {\it $($rational\/$)$ homology cobordant} if there exists a smooth compact oriented 4-manifold $W$ such that: \begin{enumerate} \item $\partial W = M \cup (-N) /(i_+ (x)= j_+(x) , \, i_- (x)=j_-(x)) \quad x \in \Sigma_{g,1}$; and \item the inclusions $M \hookrightarrow W$, $N \hookrightarrow W$ induce isomorphisms on the (rational) homology, \end{enumerate} where $-N$ denotes $N$ with opposite orientation. \end{definition} \begin{proposition} \label{prop:h-inv} Suppose that $M=(M,i_+,i_-), N=(N,j_+,j_-) \in \mathcal{C}_{g,1}^\mathbb{Q}$ are rational homology cobordant by a rational homology cobordism $W$. Let $\rho: \pi_1 (W) \to \Gamma$ be a homomorphism to a PTFA group. Then the Magnus matrices $r_\rho (M)$ and $r_\rho (N)$ associated with $\pi_1 (M) \to \pi_1 (W) \xrightarrow{\rho} \Gamma$ and $\pi_1 (N) \to \pi_1 (W) \xrightarrow{\rho} \Gamma$ are defined, and $r_\rho (M) = r_\rho (N)$ holds. \end{proposition} \begin{proof} We can apply the argument of \cite[Section 3.1]{sakasai08} and we omit the details. \end{proof} To interpret the homology cobordant relation in terms of homologically fibered knots, we introduce {\it concordances of Seifert surfaces} defined by Myers. \begin{definition}[Myers \cite{myers}] Seifert surfaces $R$, $R'$ of genus $g$ for knots $K$, $K'$ in $S^3$ are said to be {\it concordant} if there is a smooth embedding $I: \Sigma_{g,1} \times [0,1] \to S^3 \times [0,1]$ such that $I (\Sigma_{g,1} \times \{ 0 \}) = R$ and $I (\Sigma_{g,1} \times \{ 1 \}) = R'$. \end{definition} Using this terminology, we have the following relationship between concordances of Seifert surfaces for homologically fibered knots and homology cylinders. 
\begin{proposition}\label{prop:concordant} Let $K$ be a $($rationally\/$)$ homologically fibered knot with a minimal genus Seifert surface $R$ of genus $g$. Suppose $R$ is concordant to another Seifert surface $R'$ of a knot $K'$. Then $K'$ is also a $($rationally\/$)$ homologically fibered knot of genus $g$ with a minimal genus Seifert surface $R'$ such that $M_R$ and $M_{R'}$ are $($rational\/$)$ homology cobordant as $($rational\/$)$ homology cylinders. \end{proposition} \begin{proof} Let $W$ be a manifold obtained from $S^3 \times [0,1]$ by cutting along the image of an embedding $I: \Sigma_{g,1} \times [0,1] \to S^3 \times [0,1]$ which connects $R$ and $R'$. Then it is straightforward to check our assertions by observing the Mayer-Vietoris exact sequence of $S^3 \times [0,1]=W \cup I(\Sigma_{g,1} \times [0,1])$ with the intersection homeomorphic to $(\partial M_R) \times [0,1] = (\Sigma_{g,1} \cup (-\Sigma_{g,1})) \times [0,1]$. We omit the details. \end{proof} The following theorem enables us to produce infinitely many Seifert surfaces which are concordant to a given one. \begin{theorem}[Myers \cite{myers}]\label{thm:myers} If a Seifert surface $R$ of a knot $K$ is not a disk, then $R$ is concordant to $R'$ such that: \begin{itemize} \item[$(1)$] $K'=\partial R'$ is hyperbolic; and \item[$(2)$] $S^3 - K'$ has arbitrarily large hyperbolic volume. \end{itemize} \end{theorem} For $(M,i_+,i_-) \in \mathcal{C}_{g,1}$, consider a homomorphism \[\rho_M: \pi_1 (M) \to H_1 (M) \xrightarrow{i_+^{-1}} H_1 (\Sigma_{g,1}).\] We can easily check that if $W$ is a homology cobordism between $M$ and $N$ in $\mathcal{C}_{g,1}$, then there exists an extension $\rho_W:\pi_1 (W) \to H_1 (\Sigma_{g,1})$ of $\rho_M$ and $\rho_N$. Note that $\rho_M$ can be regarded as a restriction of $\rho_2$ when $M$ is obtained from a homologically fibered knot (recall the exact sequence (\ref{eq:seq})). 
Consequently we can combine Theorem \ref{thm:myers} with Proposition \ref{prop:h-inv} as follows: \begin{theorem}\label{thm:detect} Let $K$ be a homologically fibered knot with a minimal genus Seifert surface $R$. If $K$ is shown to be non-fibered by using $r_{\rho_2} (M_R) (=r_{\rho_{M_R}} (M_R))$, then $K'=\partial R'$ is also non-fibered for any Seifert surface $R'$ concordant to $R$. Moreover, there exist infinitely many such $K'= \partial R'$. \end{theorem} We may take $K$ to be a homologically fibered knot in Section \ref{sec:sample}. Then Theorem \ref{thm:detect} shows that there does exist infinitely many homologically fibered knots whose non-fiberedness are detected by the Magnus matrices. \begin{example}\label{ex:concordance} Let $K$ be the knot as the boundary of the Seifert surface $R$ illustrated in Figure \ref{fig:concordance}. \begin{figure}[htbp] \begin{center} \includegraphics[width=.8\textwidth]{concordant2.eps} \end{center} \caption{Concordant Seifert surfaces} \label{fig:concordance} \end{figure} $R$ is concordant to the minimal genus Seifert surface $R'$ of the trefoil knot, which is fibered. Proposition \ref{prop:concordant} shows that $K$ is a homologically fibered knot and $R$ is of minimal genus. 
An admissible presentation of $\pi_1 (M_R)$ is given by {\small \begin{center} \begin{tabular}{lcr} \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\,i_{-}(\gamma_2),\, z_{1},\ldots ,z_{9},\, i_{+}(\gamma_{1}),\,i_{+}(\gamma_2)$ \\ \multicolumn{1}{c}{Relations} & $z_{1}z_{2}z_{3},\,z_{1}z_{9}z_{8},\, z_{4}z_{5}z_{4}^{-1}z_2^{-1},\, z_{4}^{-1}z_{5}z_{3}^{-1}z_5^{-1},\, z_{3}z_{6}z_{3}^{-1}z_4,\, z_{7}z_{5}z_{8}z_{5}^{-1}$,\\ & $z_{7}^{-1}z_{9}z_{7}z_{5}^{-1},\, i_{-}(\gamma_{1})z_{1}z_7z_{4}^{-1}z_2z_5^{-1}z_3z_8^{-1}z_5,\, i_{-}(\gamma_{2})z_{8}^{-1}z_7z_4^{-1}z_1^{-1},$ \\ & $i_{+}(\gamma_{1})z_7 z_{4}^{-1} z_2 z_5^{-1} z_3 z_8^{-1} z_5, \ i_{+}(\gamma_{2})z_{7}z_4^{-1}z_{1}^{-1}.$ \end{tabular} \end{center} } \noindent From this, we have \begin{align*} \det (\tau_{\rho_2}^+ (M_R)) &= 3-\frac{1}{\gamma_1}-\gamma_1 -\frac{\gamma_1}{\gamma_2}+\frac{\gamma_1^2}{\gamma_2} +\frac{\gamma_2}{\gamma_1^2}-\frac{\gamma_2}{\gamma_1},\\ r_{\rho_2} (M_R) &= \begin{pmatrix} 1 & \gamma_2^{-1}\\ -\gamma_1^{-1} \gamma_2 & 1-\gamma_1^{-1} \end{pmatrix}. \end{align*} \noindent On the other hand, an admissible presentation of $\pi_1 (M_{R'})$ is given by {\small \begin{center} \begin{tabular}{lcr} \multicolumn{1}{c}{Generators} & $i_{-}(\gamma_1),\,i_{-}(\gamma_2),\, z_{1},\,z_2,\,z_3,\, i_{+}(\gamma_{1}),\,i_{+}(\gamma_{2})$ \\ \multicolumn{1}{c}{Relations} & $z_{1}z_{2}z_{3},\,i_-(\gamma_1)z_{3}^{-1},\, i_-(\gamma_2)z_3^{-1}z_1^{-1},\,i_+(\gamma_1) z_2,\, i_+(\gamma_2)z_1^{-1}$ \end{tabular} \end{center} } \noindent and we have \begin{align*} \det (\tau_{\rho_2}^+ (M_R)) &= \frac{1}{\gamma_2},\\ r_{\rho_2} (M_R) &= \begin{pmatrix} 1 & \gamma_2^{-1}\\ -\gamma_1^{-1} \gamma_2 & 1-\gamma_1^{-1} \end{pmatrix}. \end{align*} \end{example} \begin{remark} As seen in Example \ref{ex:concordance}, $\Gamma$-torsion generally changes under homology cobordisms. 
However, Cha-Friedl-Kim \cite{cfk} recently found a way to extract homology cobordant invariants from the torsion $\tau_{\rho_2}^+$ by taking a certain quotient of the target. Then they applied it to the {\it homology cobordism group of homology cylinders} and showed that this group has $\mathbb{Z}_2^\infty$ as an abelian quotient. By Proposition \ref{prop:unique}, we may regard this abelian quotient as an invariant of homologically fibered knots. In fact, it is unchanged under concordances of Seifert surfaces. \end{remark} \section{MATHEMATICA program}\label{sec:program} The following is a MATHEMATICA program which calculates the invariants discussed in Sections \ref{sec:sample}, \ref{sec:HFK12} and \ref{sec:magnus}. \medskip {\scriptsize \begin{verbatim} h1Class = {}; h1Monodromy = {}; torsionMatrix = {}; magnusMatrix = {}; invariants[g_, z_, RELATIONS_] := Module[{reindexedRel, h1Matrix, i, alex}, GENUS = g; Ztotal = z; reindexedRel = Map[reindexing, RELATIONS, {2}]; h1Matrix = -Map[Take[#, -2 GENUS] &, homologyComputation[reindexedRel]]; h1Class = Join[Map[monomialExpression, h1Matrix], Table[ToExpression[ToString[SequenceForm["\[Gamma]", i]]], {i, 2 GENUS}]]; Print["Homology classes of generators = ", h1Class // DisplayForm]; h1Monodromy = Transpose[Take[h1Matrix, 2 GENUS]]; Print["Homological monodromy = ", h1Monodromy // MatrixForm]; alex = Transpose[makeAlexanderMatrix[reindexedRel]]; torsionMatrix = Take[alex, 2 GENUS + Ztotal]; Print["torsion matrix = ", torsionMatrix // MatrixForm]; Print["det(torsion) = ", Expand[Det[torsionMatrix]]]; magnusMatrix = Simplify[Transpose[ Take[Transpose[-Drop[alex, 2 GENUS + Ztotal].Inverse[ torsionMatrix]], 2 GENUS]]]; Print["Magnus matrix = ", magnusMatrix // MatrixForm] ]; reindexing[num_] := Module[{numString, sg}, If[NumberQ[num], num + 2 GENUS*Sign[num], numString = ToString[num]; sg = If[StringTake[numString, 1] == "-", 1, 0]; If[StringTake[numString, {1 + sg}] == "m", ((-1)^sg)*ToExpression[StringDrop[numString, 1 
+ sg]], ((-1)^sg)*(ToExpression[StringDrop[numString, 1 + sg]] + 2 GENUS + Ztotal)]] ]; homologyComputation[rel_] := Module[{i, j}, RowReduce[Table[Count[rel[[i]], j] - Count[rel[[i]], -j], {i, 1, 2 GENUS + Ztotal}, {j, 1, 4 GENUS + Ztotal}]]]; monomialExpression[list_] := Module[{i, prod = 1}, For[i = 1, i <= 2 GENUS, i++, prod = prod*(ToExpression[ToString[SequenceForm["\[Gamma]", i]]]^list[[i]])]; prod]; makeAlexanderMatrix[rel_] := Module[{i, j}, Table[foxDer[rel[[i]], j], {i, 1, Length[rel]}, {j, 1, 4 GENUS + Ztotal}]]; foxDer[word_, var_] := Module[{entry = 0, i}, For[i = 1, i <= Length[word], i++, Which[word[[i]] == var, entry = entry + (makeMonomial[Take[word, i - 1]]^(-1)), word[[i]] == -var, entry = entry - (makeMonomial[Take[word, i]]^(-1))]]; entry]; makeMonomial[list_] := Module[{prod = 1}, For[i = 1, i <= Length[list], i++, prod = prod*(h1Class[[Abs[list[[i]]]]]^Sign[list[[i]]])]; prod]; \end{verbatim} } A computation by this program goes as follows. Let $(M,i_+,i_-) \in \mathcal{C}_{g,1}$ with an admissible presentation \[\langle i_- (\gamma_1),\ldots,i_- (\gamma_{2g}), z_1 ,\ldots, z_l, i_+ (\gamma_1),\ldots,i_+ (\gamma_{2g}) \mid r_1, \ldots, r_{2g+l} \rangle\] of $\pi_1 (M)$. The main function in the program is $\mathtt{invariants}$ having three slots as the input. These slots correspond to the genus $g$, the number $l$ of $z$-generators and the list of relations. For each word in the relations, we make a list by replacing $i_-(\gamma_j)^{\pm 1}$, $z_j^{\pm 1}$ and $i_+(\gamma_j)^{\pm 1}$ by $\pm \mathtt{mj}$, $\pm \mathtt{j}$ and $\pm \mathtt{pj}$. By lining up them, we obtain the list of relations. 
When we compute the case of the knot 0815 with an admissible presentation of $\pi_1 (M_R)$ of the sutured manifold $M_R$ as in Section \ref{sec:HFK12}, for example, the input is: {\small \begin{verbatim} invariants[2, 11, {{1, 9, 6}, {1, -2,-4}, {4,-11, 5}, {-10, -5, 6, 7, 8}, {-8, -6, 9, 6}, {-7, -6, 3, 6}, {4, -3, -4, 10}, {m1, 4, -3, -4}, {m2, 4, 11}, {m3, 9}, {m4, -2, -9}, {p1, -2, -3, -4}, {p2, 11, 1}, {p3, 9, -3, 1}, {p4, 9, -2, -9}}] \end{verbatim}} Then the function returns homology classes of generators in terms of $\mathtt{\gamma j}:=i_+(\gamma_j) \in H_1 (M_R)$, the homological monodromy matrix $\sigma (M_R)$, the torsion matrix $\tau_{\rho_2}^+ (M_R)$ and the Magnus matrix $r_{\rho_2}(M_R)$. These data can be referred as the variables $\mathtt{h1Class}$, $\mathtt{h1Monodromy}$, $\mathtt{torsionMatrix}$ and $\mathtt{magnusMatrix}$. \bibliographystyle{amsplain}
1,314,259,996,775
arxiv
\section{Introduction} \blfootnote{Northern Lights Deep Learning Workshop, Tromso, Norway, January 2019.} With the rising costs of conventional sources of energy, the world is moving towards sustainable energy sources including wind energy. Wind turbines consist of several electrical and mechanical components and experience an enormous amount of irregular loads, making their operational behaviour at times inconsistent. Operations and Maintenance (O\&M) is a key factor in monitoring such inconsistent behaviour of the turbines in order to predict and prevent any incipient faults which may occur in the near future. Machine learning has been applied to the domain of wind energy over the last decade for analysing, diagnosing and predicting wind turbine faults. In particular, we follow the idea of modelling a turbine's performance as a power curve where any power outputs that fall off the curve can be seen as performance errors. Existing work using this idea \cite{paper1} has used data from a turbine's Supervisory Control \& Acquisition (SCADA) system to filter and analyse fault \& alarm data using regression techniques. In \cite{paper2}, the authors showed that traditional supervised learning techniques, e.g. support vector regression, are able to make reliable predictions from a power curve. In this study, we investigate the applicability of deep learning techniques to turbine fault prediction from power curves. Deep learning \cite{Goodfellow-et-al-2016} is relatively new in its application to wind energy. We use the open access NREL Western Wind Dataset\footnote{\tiny \url{https://www.nrel.gov/grid/western-wind-data.html}} for the year 2012 to first predict the wind power output from turbines and secondly identify operational faults based on data points that fall off the optimal performance curve. In contrast to previous work, we explore how deep learning can be applied to fault prediction from open access meteorological data only. 
\begin{figure} \centering \includegraphics[scale = 0.2]{powercurve_final_new.png} \caption{Power Curve for the Wind Turbine where any instance in Region 1 or Region 3 signifies a fault. Actual power curve is modelled based on the labelled data (black) while the predicted power (yellow) is modelled using a neural network} \label{fig:powercurve_neuralnet} \end{figure} \section{Modelling of Power Curve} A power curve is an integral part of wind turbines' O\&M, showing the relation between the wind speed and the turbine's power output. A turbine is said to be operating normally when the wind speed is above the cut-in speed---which is the minimum wind speed at which the turbine generates power, or when the wind speed is below the cut-out speed---which is the maximum wind speed, beyond which turbine should stop operating to prevent failures. The ideal turbine operates at the rated speed---which is the speed at which turbine generates its rated power. Whenever a turbine enters regions 1 and 3 of the power curve (i.e. below cut-in wind speed or above cut-out), it can be interpreted as an impending fault in the turbine. See Figure \ref{fig:powercurve_neuralnet} for an illustration. In the absence of openly available SCADA data on turbine faults, we use the 29,736 data samples in the NREL database to model faults based on the power curve and meteorological data only. Features include timestamps (month, day, hour, min), wind speed (m/s), air temperature (Kelvin), air pressure (Pascals), wind direction (Degrees) and density at hub height (Kg/$ m^3 $). We use a 70\%-30\% split of the data set of training and testing and find that a Medium Gaussian Kernel based Support Vector Regression (SVR) model achieves a minimal square error (MSE) of 0.098063 using 5-fold cross-validation. This is a much better MSE obtained than \cite{paper2}, in which SVR with a radial basis function Kernel yields an MSE of 0.6696. 
Figure \ref{fig:powercurve_neuralnet} shows the resulting power curve including predicted faults. It can clearly be enunciated from the graph that the actual and predicted power is quite close, and the actual power curve resembles the predicted power curve in most of the context. The deviations from the curve, which are demonstrated by the curve showing extreme asymptotic spikes pertains to the turbine moving over to above cut-out speed (Region 3) or below cut-in speed (Region 1). \section{Prediction of Faults} We compare various neural network (NN) models for identifying and predicting faults in the NREL dataset. We treat fault situations as those where the turbine operates in Region 1 or 3 of the power curve. Accordingly, the entire dataset is annotated with binary labels---1 for a fault and 0 for normal operation. The dataset is split into training (70\%), validation (15\%) and test (15\%) sets and we use MSE as an optimisation criterion and early stopping. Table \ref{tab:results} shows a comparison of neural networks, alongside performance parameters. We compared a feedforward NN, recurrent NN, convolutional NN, sparse autoencoder and dynamic time series Non Linear Autoregressive (NAR) NN. Results show that the recurrent neural net performs best (by 10\%) but needs longest to train (by 9\%) compared to sparse autoencoder (2\textsuperscript{nd} best). 
\begin{table} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{r|lll} \toprule Neural Network Type & Epochs & Time & MSE \\ \midrule Feedforward Network &114 &15.00 sec &0.048979 \\ Recurrent Neural Network (RNN) &138 &22.83 sec &0.026299 \\ Convolutional Neural Network (CNN) &129 &18.00 sec &0.047950 \\ Sparse Autoencoder &135 &20.53 sec &0.029314 \\ Dynamic Time Series Non Linear Autoregressive (NAR) &131 &19.18 sec &0.031452 \\ \bottomrule \end{tabular}% \caption {\label{tab:results} Various Neural Network Models used and their performance evaluation} } \end{table} \section{Conclusion and Future Work} As is evident from existing literature most work on O\&M of wind turbines is based on traditional machine learning algorithms but has not yet explored deep learning. We present a feasibility study demonstrating that deep learning is promising for application in the wind energy domain, outperforming traditional regression models. In addition, we have explored the possibility of predicting impeding turbine faults from meteorological data --- thus circumventing the problem that machine fault data is often treated as commercially sensitive and therefore not readily openly available. In future work we seek to obtain such data and confirm our results, paving the way for predicting incipient faults in the electrical and mechanical components of wind turbines. \paragraph{Acknowledgement} We are thankful to the University of Hull's High Performance Computing facility for providing us access to MATLAB on Viper. \bibliographystyle{abbrv}
1,314,259,996,776
arxiv
\section{Introduction} \label{sec:intro} The eXtreme Multi-label Classification (XMC\xspace) task is to find relevant labels from an enormous output space of candidate labels, where the size is in the order of millions or more~(\cite{yu2014large,bhatia16,tagami2017annexml,babbar2017dismec,jain2016extreme}; etc.) This problem is of interest in both academia and industry: for instance, tagging a web page given its contents from tens of thousands of categories~\cite{partalas2015lshtc}; finding a few products that a customer will purchase from among an enormous catalog on online retail stores~\cite{medini2019extreme}; or recommending profitable keywords given an item/product from millions of advertisement keywords~\cite{prabhu2018parabel}. The XMC\xspace problem is challenging not only because of the data scalability (e.g., the number of instances, features, and labels are of the scale of millions or more), but also due to the label sparsity issue where there is little training signal for long-tailed labels. To tackle these issues, most prevailing XMC\xspace algorithms use a \textit{partition-based} approach. Instead of ranking the entire set of millions of labels, they partition the label space into smaller mutually-exclusive clusters. Each instance is only mapped to one or a few label clusters based on a matching model, and then ranking is only conducted within the smaller subset of labels. Some exemplars are Parabel\xspace~\cite{prabhu2018parabel}, eXtremeText\xspace~\cite{wydmuch2018no}, AttentionXML\xspace~\cite{you2019attentionxml}, {XR-Linear}\xspace~\cite{yu2020pecos} and {X-Transformer}\xspace~\cite{chang2020xmctransformer}. Partitioning labels into mutually-exclusive clusters may not be ideal. When labels are semantically complex and multi-modal, it is more natural to assign a label to multiple semantic clusters. 
In product categorization, for instance, the tag ``belt'' can be related to a vehicle belt (under ``vehicle accessories'' category), or a man's belt (under ``clothing'' category). Assigning ``belt'' to just one of the clusters but not the other is likely to cause a mismatch for certain queries. To solve this problem, we reformulate the label clustering step as an assignment problem, where each label can be assigned to multiple clusters to allow disentanglement of mixed semantics. Further, we formulate learning optimal assignments by maximizing the precision as an optimization problem, and propose efficient solvers that automatically learn a good label assignment based on the current matching models. With this formulation, we apply our algorithm to alternatively refine label assignments and matching model in existing partition-based XMC methods to boost their performance. Our contributions can be summarized below: \begin{itemize}[noitemsep,topsep=0pt,parsep=8pt,partopsep=0pt,leftmargin=*] \item We propose a novel way to obtain label assignments that disentangle multi-modal labels to multiple clusters. \item Unlike previous methods that partition the label set before training, we propose an optimization-based framework that allows optimizing label assignment with the matching and ranking modules inside a partition-based XMC solver. Our method is plug-and-play; it is orthogonal to the models so most of the partition-based methods can benefit from our method. \item Our proposed solution yields consistent improvements over two leading partition-based methods, {XR-Linear}\xspace~\cite{yu2020pecos} and {X-Transformer}\xspace~\cite{chang2020xmctransformer}. Notably, with the concatenation of tfidf\xspace features and {X-Transformer}\xspace embeddings, we achieve new SOTA results on four XMC\xspace benchmark datasets. 
\end{itemize} \section{Related Work} \subsection{XMC\xspace literature} \paragraph{Sparse Linear Models} Sparse linear one-versus-reset~(OVR) methods such as DiSMEC~\cite{babbar2017dismec}, ProXML\xspace~\cite{babbar2019data}, PDSparse~\cite{yen2016pd}, PPDSparse~\cite{yen2017ppdsparse} explore parallelism to speed up the algorithm and reduce the model size by truncating model weights to encourage sparsity. OVR approaches are also building blocks for many other XMC\xspace approaches. For example, in Parabel\xspace~\cite{prabhu2018parabel}, SLICE\xspace~\cite{jain2019slice}, {X-Transformer}\xspace~\cite{chang2020xmctransformer}, linear OVR classifiers with negative sampling are used. \par \paragraph{Partition-based Methods} The efficiency and scalability of sparse linear models can be further improved by incorporating different partitioning techniques on the label spaces. For instance, Parabel\xspace~\cite{prabhu2018parabel} partitions the labels through a balanced 2-means label tree using label features constructed from the instances. Other approaches attempt to improve on Parabel\xspace, for instance, eXtremeText\xspace~\cite{wydmuch2018no}, Bonsai\xspace~\cite{khandagale2019bonsai}, NAPKINXC\xspace~\cite{jasinskakobus2020probabilistic}, and {XR-Linear}\xspace~\cite{yu2020pecos} relax two main constraints in Parabel\xspace by: 1) allowing multi-way instead of binary partitions of the label set at each intermediate node, and 2) removing strict balancing constraints on the partitions. More recently, AttentionXML\xspace~\cite{you2019attentionxml} uses BiLSTMs and label-aware attention to replace the linear functions in Parabel\xspace, and warm-up training the models with hierarchical label trees. In addition, AttentionXML\xspace considers various negative sampling strategies on the label space to avoid back-propagating the entire bottleneck classifier layer. 
\par \paragraph{Graph-based Methods} SLICE\xspace~\cite{jain2019slice} uses an approximate nearest neighbor (ANN) graph as an indexing structure over the labels. For a given instance, the relevant labels can be found quickly via ANN search. SLICE\xspace then trains linear OVR classifiers with negative samples induced from ANN. Graph-based partitions can be viewed as an extension of tree-based partitions, where at each layer of the tree, random edges are allowed to connect two leaf nodes to further improve connectivity. Nevertheless, such construction of overlapping tree structures is fully unsupervised, and is agnostic to the data distribution or training signals of the downstream XMC\xspace problem. \subsection{Overlapped Clustering} Finding overlapped clusters has been studied in the unsupervised learning literature~\cite{banerjee2005model,cleuziou2008extended,lu2012overlapping,whang2019non}. For example, Cleuziou \emph{et al.}~\cite{cleuziou2008extended} extend $K$-means with multi-assignments based on coverage, but tends to yield imbalance clusters. This issue was later improved by Lu \emph{et al.}~\cite{lu2012overlapping} with sparsity constraints. Recently, Whang \emph{et al.}~\cite{whang2019non} propose variants of $k$-mean objectives with flexible clustering assignment constraints to trade-off the degree of overlapping. However, all these unsupervised approach cannot be optimized with existing partition-based XMC\xspace methods. \begin{figure*}[htb] \centering \includegraphics[width=0.85\linewidth]{./figs/xrlinear.png} \caption{ An illustration of partition-based XMC\xspace models. The matcher is a tree structured model similar to Parabel or XR-Linear. 
In this diagram, the hierarchical label tree has a depth of $3$, branching factor of $2$, number of clusters $K=4$, and number of labels $L=16$.} \label{fig:tree-based-model} \vspace{-0.75em} \end{figure*} \section{Background of partition-based XMC\xspace} The XMC\xspace problem can be characterized as follows: given an input instance ${\bm{x}} \in {\mathbb{R}}^d$ and a set of labels ${\mathbb{L}}=\{1, 2, \ldots, L\}$, find a model that retrieves top relevant labels in ${\mathbb{L}}$ efficiently. The model parameters are estimated from the training dataset $\{({\bm{x}}_i, {\bm{y}}_i): i=1,\ldots,n\}$ where ${\bm{y}}_i \in \{0,1\}^{L}$ denotes the relevant labels for ${\bm{x}}_i$ from the label space ${\mathbb{L}}$. We further denote ${\bm{X}}\in{\mathbb{R}}^{n\times d}$ as the feature matrix and ${\bm{Y}}\in{\mathbb{R}}^{n\times L}$ as the label matrix. Partition-based XMC methods rely on label space partitioning to screen out irrelevant labels before running the ranking algorithm. They often have the following three components (depicted in Figure~\ref{fig:tree-based-model}): \begin{itemize}[noitemsep,topsep=0pt,parsep=4pt,partopsep=0pt,leftmargin=*] \item The \textbf{cluster assignments}, where a set of labels are assigned to each cluster. Assuming there are $K$ clusters, we use $\mathcal{S}_i=\{\ell_1^i, \ell_2^i,\dots\}$ to denote the labels assigned to cluster $i$, and $\bigcup_{i=1}^K\mathcal{S}_i={\mathbb{L}}$. The clusters are constructed by $K$-means on label features, such that the clusters $\{\mathcal{S}_i\}_{i=1}^K$ are mutually exclusive and unaware of the matcher $\mathcal{M}$ and ranker $\mathcal{R}$, since clustering is typically performed only once \emph{before} the matcher and ranker are trained. On the contrary, as we will see shortly, our new method provides a way to refine cluster assignments based on the matcher and is able to effectively unveil multimodal labels. 
\item The \textbf{matcher}, which matches the input data ${\bm{x}}_i\in{\mathbb{R}}^d$ to a small set of candidate label clusters \begin{equation}\label{eq:matcher} \mathcal{M}: {\bm{x}}_i\mapsto \{\mathcal{S}_1,\mathcal{S}_2,\dots,\mathcal{S}_b\},\quad b\le K. \end{equation} Each label set $\mathcal{S}_k$ contains a small fraction of the labels (a few hundred), and $b$ is called the beam size. One recursive implementation of the matcher appears in XR-Linear~\cite{yu2020pecos}, where the matcher contains a tree constructed by recursive $K$-means clustering. At each level of the tree, at most $b$ nodes are selected according to scores from linear models; the children of these $b$ nodes are then expanded at the next level, and the process repeats recursively until $b$ leaf nodes are obtained, which yields the match set $\{\mathcal{S}_1, \mathcal{S}_2,\dots, \mathcal{S}_b\}$. See the ``Matcher'' block in Figure~\ref{fig:tree-based-model}. \item The \textbf{ranker}, which ranks the candidate labels collected from the matcher, with $\succ$ denoting the ranking order: \begin{equation}\label{eq:ranker} \begin{aligned} \mathcal{R}: &\mathcal{M}({\bm{x}}_i)\mapsto \ell_{(1)}\succ\ell_{(2)}\succ\dots\succ\ell_{(w)},\\ \text{where}\quad &\{\ell_{(1)}, \ell_{(2)}, \dots, \ell_{(w)} \}=\bigcup_{i=1}^b\mathcal{S}_i. \end{aligned} \end{equation} Lastly, the top-$k$ labels are returned as the final prediction. See the ``Ranker'' block of Figure~\ref{fig:tree-based-model}. \end{itemize} Partition-based XMC typically assumes that labels in the same cluster are similar; thus, when training the ranker, these methods focus on distinguishing the labels (and the corresponding samples) within each cluster. Taking the widely used linear ranker as an example, a weight vector is assigned to each label.
The weight vector ${\bm{w}}_{\ell}$ for label $\ell$ can be obtained by the following training objective: \begin{equation}\label{eq:linear-loss} \min_{{\bm{w}}_{\ell}} \sum_{i\in\mathcal{D}_{\text{positive}}} \mathcal{L}\big({\bm{x}}_i^{\top} {\bm{w}}_{\ell}; +1\big) \ + \sum_{i\in\mathcal{D}_{\text{negative}}} \mathcal{L}\big({\bm{x}}_i^{\top} {\bm{w}}_{\ell}; -1 \big), \end{equation} where $\mathcal{L}(\cdot,\cdot)$ is the loss function (e.g., squared-hinge loss); $\mathcal{D}_{\text{positive}}$ and $\mathcal{D}_{\text{negative}}$ are the positive and negative samples for label $\ell$. In XMC, it is not efficient to use all non-positive data $\mathcal{D}\setminus\mathcal{D}_{\text{positive}}$ as negatives when training ${\bm{w}}_{\ell}$; instead, one often samples ``hard negatives,'' a tiny subset $\mathcal{D}_{\text{negative}}\subset \mathcal{D}\setminus\mathcal{D}_{\text{positive}}$. Different negative-sampling strategies change the loss function and the final results. For instance, in Parabel~\cite{prabhu2018parabel} and XR-Linear~\cite{yu2020pecos}, $\mathcal{D}_{\text{negative}}$ consists of the examples having similar labels but not $\ell$ itself, based on the intuition that learning can be more efficient by separating similar yet different labels. In other words, \begin{equation}\label{eq:pos-neg} \mathcal{D}_{\text{positive}}=\{{\bm{x}}_i~|~\ell\in{\bm{y}}_i\},\quad \mathcal{D}_{\text{negative}}=\{{\bm{x}}_i~|~\ell\not\in{\bm{y}}_i \text{ but ``similar''} \}. \end{equation} The label similarity information is hidden in the cluster assignments $\{\mathcal{S}_1,\mathcal{S}_2,\dots,\mathcal{S}_K\}$, under the assumption that similar labels are clustered together. \vspace{-5pt} \section{Proposed Method} \vspace{-3pt} \subsection{Motivation} The central idea of partition-based XMC methods is to partition labels into disjoint clusters, such that labels in each cluster share similar semantics.
This relies on the \textit{unimodal assumption} that each label represents a single, uniform semantic across all positive samples. However, we observe in practice that this assumption may not hold in general. Given labels that have multi-modal semantics, it is natural to seek a method to \emph{disentangle} their semantics from each other, treat each semantic as a separate label, and further assign them to different clusters. To achieve this, we relax the requirement that label clusters under leaf nodes be mutually exclusive ($\mathcal{S}_i\cap\mathcal{S}_j=\varnothing$) to a more suitable, limited form of overlap: any label $\ell\in\{1,2,\dots, L\}$ may appear \emph{at most} $\lambda$ times among $\{\mathcal{S}_1,\mathcal{S}_2,\dots, \mathcal{S}_K\}$. \par Although allowing label clusters to overlap does not explicitly disentangle the semantics of a label, it paves the way for learning multiple versions of weights for the same label. Taking the linear model in Eq.~\eqref{eq:linear-loss} for simplicity, suppose label $\ell$ appears in both cluster $\mathcal{S}_A$ and cluster $\mathcal{S}_B$; then we have two weight vectors ${\bm{w}}_A$ and ${\bm{w}}_B$ for the same label. ${\bm{w}}_A$ and ${\bm{w}}_B$ are trained separately via Eq.~\eqref{eq:linear-loss}, with different negative examples $\mathcal{D}_{\text{negative}}^A$ and $\mathcal{D}_{\text{negative}}^B$. Recall from Eq.~\eqref{eq:pos-neg} that the negative part of the data relies on the label similarity induced from clusters $\mathcal{S}_A$ and $\mathcal{S}_B$. Therefore, ${\bm{w}}_A$ and ${\bm{w}}_B$ will eventually converge to two different solutions -- this is how the semantics are disentangled when we assign a label to multiple clusters. \vspace{-5pt} \subsection{Optimization-based Label Assignment Approaches} How should a label be assigned to multiple clusters? Previous methods consider clustering as a preprocessing step and apply unsupervised methods, such as $K$-means, to partition the label space.
In contrast, we formulate label assignment as an optimization problem that maximizes the precision rate for XMC, which allows learning a matcher-aware clustering assignment as well as alternative updates between the cluster assignments and the matcher. As mentioned before, our goal is to learn $\mathcal{S}_1, \dots, \mathcal{S}_K$ to cover all the labels, and the sets may overlap. To find the best label assignments, it is natural to maximize the precision rate. Many previous methods in information retrieval obtain ranking functions by optimizing (surrogates of) Precision/Recall~\cite{kar2015surrogate,menon2019multilabel}. Here, however, our goal is to find a good combination of the {\bf matcher} and the {\bf cluster assignments} in the context of partition-based XMC, so the objective is fundamentally different. To formally define precision, we first define the output of the matcher, represented by the matrix ${\bm{M}}\in{\mathbb{R}}^{n\times K}$: \begin{equation}\label{eq:match-matrix} {\bm{M}}_{ij}=\begin{cases} 1, & \text{if ${\bm{x}}_i$ is matched to leaf cluster $S_j$},\\ 0, & \text{otherwise}. \end{cases} \end{equation} For beam size $b$, every row of ${\bm{M}}$ has exactly $b$ nonzero entries. At the same time, the cluster assignments $\mathcal{S}_1, \dots, \mathcal{S}_K$ can be parameterized by the clustering matrix ${\bm{C}}\in\{0, 1\}^{L\times K}$ such that ${\bm{C}}_{ij}=1$ if and only if label $i$ appears in cluster $\mathcal{S}_j$, and ${\bm{C}}_{ij}=0$ otherwise. With this notation, the candidate labels generated by the matcher are given by $\hat{{\bm{Y}}}:=\text{Binary}({\bm{M}}{\bm{C}}^{\top}) \in \{0,1\}^{n \times L}$, where $\text{Binary}({\bm{A}})={\bm{I}}_{A}$ is the element-wise indicator function. \par Now consider the number of true positives among the top-$k$ predictions (TP$@k$); it is upper bounded by the overlap between the matcher predictions $\hat{{\bm{Y}}}$ and the ground truth ${\bm{Y}}$, i.e.,
\begin{equation}\label{eq:precision} \text{TP}@k = |\text{Top-}k(\hat{{\bm{Y}}})\odot {\bm{Y}}| \overset{(i)}{\le} | \hat{{\bm{Y}}}\odot {\bm{Y}}| \overset{(ii)}{=} \mathsf{Tr}({\bm{Y}}^{\top}\hat{{\bm{Y}}}), \end{equation} where $|\cdot|$ denotes the number of nonzero elements of a sparse matrix; the inequality $\overset{(i)}{\le}$ is due to the ranker error hidden in $\text{Top-}k(\cdot)$; and $\overset{(ii)}{=}$ follows from the fact that both $\hat{{\bm{Y}}}$ and ${\bm{Y}}$ are binary. In this paper, we consider the following scenarios, which simplify our analysis: \begin{itemize}[noitemsep,topsep=0pt,parsep=4pt,partopsep=0pt,leftmargin=*] \item Perfect ranker: the ranker does not make any mistake, so the gap in Eq.~\eqref{eq:precision} vanishes. \item Probably correct ranker: any true positive label $\ell^+\in\hat{{\bm{Y}}}$ is ranked into the top-$k$ with a \emph{constant} probability $p$, i.e., ${\mathbb{P}}(\ell^+\in\text{Top-}k(\hat{{\bm{Y}}})|\ell^+\in\hat{{\bm{Y}}})=p$. Then we have $\text{TP}@k=p\cdot\mathsf{Tr}({\bm{Y}}^\top\hat{{\bm{Y}}})$. \end{itemize} In both cases, the precision is proportional to $\mathsf{Tr}({\bm{Y}}^\top\hat{{\bm{Y}}})$. We can then formulate the problem of learning the best cluster assignment as \begin{small} \begin{equation}\label{eq:opt-2} \mathop{\mathrm{maximize}}_{ \substack{ \mathcal{S}_1,\mathcal{S}_2,\dots, \mathcal{S}_K\\ \bigcup_{i=1}^K\mathcal{S}_i=\{1,2,\dots, L\}}} \mathsf{Tr} \big( {\bm{Y}}^{\top} \text{Binary}({\bm{M}}{\bm{C}}^{\top}) \big), \quad\text{s.t. } \sum_{i=1}^K{\bm{I}}_{\ell\in\mathcal{S}_i}\le \lambda,\forall\ell\in\{1,2,\dots,L\}. \end{equation} \end{small} Note that $\{\mathcal{S}_1,\ldots, \mathcal{S}_K\}$ are hidden in ${\bm{C}}$. The first constraint ensures that we cover the whole label set, and the second constraint prevents degenerate clusters in which some $\mathcal{S}_i$'s contain too many labels, which would significantly increase the time complexity.
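To make the objective in Eq.~\eqref{eq:opt-2} concrete, the quantity $\mathsf{Tr}({\bm{Y}}^{\top}\text{Binary}({\bm{M}}{\bm{C}}^{\top}))$ can be evaluated directly on small binary matrices. The following NumPy sketch (with illustrative shapes and, for simplicity, a non-overlapping ${\bm{C}}$; it is not part of our implementation) only illustrates the notation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, K, b = 6, 8, 4, 2  # instances, labels, clusters, beam size

# Ground-truth label matrix Y (n x L), binary.
Y = (rng.random((n, L)) < 0.3).astype(int)

# Matcher output M (n x K): each row has exactly b nonzeros (Eq. match-matrix).
M = np.zeros((n, K), dtype=int)
for i in range(n):
    M[i, rng.choice(K, size=b, replace=False)] = 1

# Cluster matrix C (L x K): C[l, j] = 1 iff label l is assigned to cluster S_j.
# Here: a plain non-overlapping partition, one cluster per label.
C = np.zeros((L, K), dtype=int)
C[np.arange(L), rng.integers(0, K, size=L)] = 1

# Candidate labels retrieved by the matcher: Y_hat = Binary(M C^T).
Y_hat = (M @ C.T > 0).astype(int)

# Objective: number of true-positive candidates, Tr(Y^T Y_hat) = |Y_hat * Y|.
objective = np.trace(Y.T @ Y_hat)
assert objective == (Y_hat * Y).sum()
```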
The parameter $\lambda$ is the only hyperparameter to be tuned. In practice, we find that $\lambda$ is stable across datasets, and for simplicity we select $\lambda=2$ in all the experiments. We also show the sensitivity of our algorithm with respect to $\lambda$ in Section~\ref{sec:hyperparameter}. The optimization problem~\eqref{eq:opt-2} is combinatorial and hard to solve. In fact, we show that it is NP-complete: \begin{theorem} \label{thm:npcomplete} Problem \eqref{eq:opt-2} is NP-complete. \end{theorem} This can be proved by a polynomial-time reduction from the set cover problem to Problem~\eqref{eq:opt-2}; the proof is deferred to Appendix~\ref{apx:npcomplete-proof}. To develop an efficient algorithm, we approximate the objective function of \eqref{eq:opt-2} with a continuous, ReLU-like function \begin{equation}\label{eq:binary-to-relu} \text{Binary}({\bm{M}}{\bm{C}}^{\top})\approx \max({\bm{M}}{\bm{C}}^{\top}, \bm{0}) = {\bm{M}}{\bm{C}}^{\top}, \end{equation} where the approximation comes from replacing the binary indicator with a ReLU, and the second equality holds because all entries of ${\bm{M}}$ and ${\bm{C}}$ are nonnegative. A special case is beam size $b=1$: then ${\bm{M}}{\bm{C}}^{\top}$ is itself a binary matrix, so $\text{Binary}({\bm{M}}{\bm{C}}^{\top})={\bm{M}}{\bm{C}}^{\top}$ exactly. We then consider the following simplified problem: \begin{small} \begin{equation}\label{eq:projection-C} \mathop{\mathrm{maximize}}_{ \substack{ \mathcal{S}_1,\mathcal{S}_2,\dots, \mathcal{S}_K\\ \bigcup_{i=1}^K\mathcal{S}_i=\{1,2,\dots, L\} }}\mathsf{Tr}\big({\bm{Y}}^{\top} {\bm{M}}{\bm{C}}^{\top} \big), \quad\text{s.t. } \sum_{i=1}^K{\bm{I}}_{\ell\in\mathcal{S}_i}\le \lambda,\forall\ell\in\{1,2,\dots,L\}.
\end{equation} \end{small} This problem admits an efficient closed-form solution: \begin{theorem} Problem~\eqref{eq:projection-C} has the closed-form solution \begin{equation}\label{eq:PGD} {\bm{C}}^*= \text{Proj}({\bm{Y}}^{\top}{\bm{M}}), \end{equation} where the $\text{Proj}(\cdot)$ operator selects the top-$\lambda$ elements in each row of the matrix. \end{theorem} This result follows easily since the objective function is linear and the constraint is a row-wise $\ell_0$-norm constraint; the detailed proof can be found in Appendix~\ref{thm:l0-constraint}. But is~\eqref{eq:PGD} also a good solution to the original problem~\eqref{eq:opt-2}? In fact, we show that this solution, despite not being optimal for \eqref{eq:opt-2}, is provably better than any non-overlapping cluster partitioning, which is what almost all existing partition-based XMC methods use. \begin{theorem} For any clustering matrix ${\bm{C}}$ corresponding to a non-overlapping partition of the label set, we have $\mathsf{Tr}({\bm{Y}}^{\top} \text{Binary}({\bm{M}}({\bm{C}}^*)^{\top})) > \mathsf{Tr}({\bm{Y}}^{\top} \text{Binary}({\bm{M}}{\bm{C}}^{\top}))$. \end{theorem} The proof can be found in Appendix~\ref{apx:thm3-proof}. This theorem implies that our partitioning achieves higher precision than any existing non-overlapping clustering, for any ranker. \par \paragraph{Practical Implementation} After ${\bm{C}}$ (or equivalently $\{\mathcal{S}_i\}_{i=1}^K$) is determined, we finetune the matcher $\mathcal{M}$ to accommodate the new deployment of label clusters. Note that most existing partition-based XMC methods start from an unsupervised label clustering; on top of such an initialization, we alternate between computing a new (overlapping) clustering and finetuning the matcher. Our algorithm can thus be plugged into most of these methods by adding one or more loops of alternative cluster-assignment and matcher updates.
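For illustration, the row-wise top-$\lambda$ selection performed by $\text{Proj}(\cdot)$ in Eq.~\eqref{eq:PGD} can be sketched in NumPy as follows (function and variable names are illustrative, not from our implementation):

```python
import numpy as np

def proj_top_lambda(A, lam):
    """Row-wise top-lam selection: C[l, j] = 1 iff A[l, j] is among the
    lam largest entries of row l (the Proj operator in the closed form)."""
    L, K = A.shape
    C = np.zeros((L, K), dtype=int)
    # argpartition places the lam largest column indices at the end of each row
    top = np.argpartition(A, K - lam, axis=1)[:, K - lam:]
    C[np.arange(L)[:, None], top] = 1
    return C

# Example with scores A = Y^T M of shape (L=2, K=4), keeping lam = 2 per label.
scores = np.array([[5., 1., 3., 0.],
                   [0., 2., 2., 7.]])
C_star = proj_top_lambda(scores, lam=2)
# Row 0 keeps clusters 0 and 2; row 1 keeps cluster 3 plus one of the tied 2's.
```

Each row of the resulting matrix has exactly $\lambda$ nonzeros, so the row-wise $\ell_0$ constraint is satisfied by construction.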
Given the current matcher, the cluster assignments are updated based on the proposed formulation \eqref{eq:PGD}, and then the matcher is retrained following the same procedure used in the original partition-based XMC solver. The whole routine is summarized in Algorithm~\ref{alg:algorithm}. Since we use a balanced $K$-means label clustering as initialization, we find that after the one-step update in Line 6 of Algorithm~\ref{alg:algorithm}, $\{\mathcal{S}_i\}_{i=1}^K$ remains reasonably balanced in our experiments. It is possible to add cluster-size constraints in Problem \eqref{eq:projection-C} to enforce a balanced label partition, as discussed in Appendix~\ref{apx:unbalance-cluster}, but we do not find a practical need for such constraints. \begin{algorithm} \caption{Our proposed framework.} \label{alg:algorithm} \begin{algorithmic}[1] \STATE \textbf{Input:} training data $\langle{\bm{X}}, {\bm{Y}}\rangle$, any partition-based XMC algorithm $\texttt{XMC-part}$. \STATE \textbf{Output:} the trained model (i.e., matcher $\mathcal{M}$, ranker $\mathcal{R}$, label clusters $\{\mathcal{S}_i\}_{i=1}^K$). \STATE \textbf{Initialize} $\{\mathcal{S}_i\}_{i=1}^K$ with balanced $K$-means using label features. \STATE \textbf{Initialize} $ \mathcal{M}\gets \texttt{XMC-part}(\{\mathcal{S}_i\}_{i=1}^K)$ \STATE Compute the matcher prediction matrix ${\bm{M}}=\mathcal{M}({\bm{X}})$ by Eq.~\eqref{eq:match-matrix}. \STATE Update the label clustering $\{\mathcal{S}_i\}_{i=1}^K$ by Eq.~\eqref{eq:PGD}. \STATE \texttt{\textit{\%\% The following two lines are called the ``alternative update'' hereafter.}} \STATE Finetune the matcher given the new clusters: $\mathcal{M} \gets \texttt{XMC-part}(\{\mathcal{S}_i\}_{i=1}^K)$. \STATE Train the ranker with the updated clusters and matcher: $\mathcal{R}\gets\texttt{XMC-part}(\{\mathcal{S}_i\}_{i=1}^K, \mathcal{M})$.
\end{algorithmic} \end{algorithm} \par \textbf{Deduplication at inference time.} To perform inference with the new model, we need to deduplicate the scores of the same label appearing in different clusters. This happens because we use beam search to efficiently find $b$ paths from the root node to the leaves. Refer to Figure~\ref{fig:tree-based-model}, where $b=2$ ($\mathcal{S}_1$ and $\mathcal{S}_3$ selected): if both $\mathcal{S}_1$ and $\mathcal{S}_3$ contain the same label $\ell_{\text{duplicate}}$ with different scores $s_1$ and $s_3$, we average them into the final score $s(\ell_{\text{duplicate}})=\frac{1}{2}(s_1+s_3)$. In this sense, our algorithm can be interpreted as a finer-grained ensemble, which ensembles the scores inside a single tree rather than ensembling multiple, independently trained trees. \vspace{-4pt} \section{Experimental Results} \vspace{-2pt} \label{sec:exp} Our proposed framework serves as a \textit{generic plugin} for any partition-based XMC\xspace method, and we show its efficacy in multiple experimental setups. First, we verify that the proposed method improves over the baselines on synthetic datasets where we simulate labels with mixed semantics. Next, we test the sensitivity of the hyperparameter $\lambda$, followed by experiments on real-world XMC\xspace benchmark datasets. We end this section with an ablation study. \par \textbf{Datasets.} We consider four publicly available XMC\xspace benchmark datasets~\cite{bhatia16,you2019attentionxml} for our experiments. See Table~\ref{tab:datasets} for data statistics. To obtain state-of-the-art (SOTA) results in Table~\ref{tab:real-data-compare}, we concatenate the dense neural embeddings (from fine-tuned {X-Transformer}\xspace~\cite{chang2020xmctransformer,yu2020pecos}) and sparse TF-IDF features (from AttentionXML\xspace~\cite{you2019attentionxml}) as the input features to train the models.
\par \textbf{Evaluation Metric.} We measure performance with precision metrics (P@k) as well as Propensity-based scores (PSP@k)~\cite{jain2016extreme}, which are widely used in the XMC literature~\cite{liu2017deep,prabhu2018parabel,reddi2018stochastic,jain2019slice,chang2020xmctransformer,yu2020pecos}. Specifically, for a predicted score vector $\hat{\mathbf{y}} \in \mathbb{R}^L$ and a ground truth label vector $\mathbf{y} \in \{0,1\}^L$, $\text{P}@k = \frac{1}{k} \sum_{l \in \text{rank}_k(\hat{\mathbf{y}})} \mathbf{y}_l$ and $\text{PSP}@k=\frac{1}{k}\sum_{l=1}^k\frac{\mathbf{y}_{\text{rank}(l)}}{\mathbf{p}_{\text{rank}(l)}}$; the latter focuses more on the \textit{tail labels}. \par \textbf{Models.} Since we introduce a generally applicable technique rather than a standalone model, our method must be combined with existing partition-based XMC algorithms. We take XR-Linear~\cite{yu2020pecos} as the backbone for all experiments except Section~\ref{sec:real-data}. To obtain SOTA results on large-scale real datasets, we change the backbone to {X-Transformer}\xspace~\cite{chang2020xmctransformer} in Section~\ref{sec:real-data}. The implementation details for combining our method with XR-Linear and {X-Transformer}\xspace are discussed in Appendix~\ref{app:implementations}. \par \textbf{Hyper-parameters.} Our technique depends on the hyperparameter $\lambda$, which is studied in Section~\ref{sec:hyperparameter}. For hyperparameters of the XMC model, we mostly follow the default settings of the corresponding software. The details about hyperparameters are listed in Appendix~\ref{app:hyperparams}.
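For concreteness, the metric definitions above can be computed for a single instance as in the following NumPy sketch (the propensity vector $\mathbf{p}$ is assumed given; in practice it is estimated as in~\cite{jain2016extreme}):

```python
import numpy as np

def precision_at_k(y_true, y_score, k):
    """P@k: fraction of the k highest-scored labels that are relevant."""
    topk = np.argsort(-y_score)[:k]
    return y_true[topk].sum() / k

def psp_at_k(y_true, y_score, propensity, k):
    """PSP@k: propensity-scored precision, which up-weights tail labels."""
    topk = np.argsort(-y_score)[:k]
    return (y_true[topk] / propensity[topk]).sum() / k

y_true  = np.array([1, 0, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.1, 0.6, 0.2])
p       = np.array([0.9, 0.5, 0.1, 0.5, 0.5])  # tail label 2 has low propensity

precision_at_k(y_true, y_score, k=3)  # 1/3: only label 0 is in the top-3 {0, 1, 3}
```

Because the relevant tail label 2 is ranked low here, P@3 is small; PSP@k would reward a model that surfaces such tail labels.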
\begin{table*}[tb] \centering \begin{tabular}{crrrrrr} \toprule Dataset & $n_{\text{trn}}$ & $n_{\text{tst}}$ & $L$ & $\bar{L}$ & $\bar{n}$ & $d_{\text{tfidf}}$\\ \midrule Wiki10-31K & 14,146 & 6,616 & 30,938 & 18.64 & 8.52 & 101,938 \\ AmazonCat-13K & 1,186,239 & 306,782 & 13,330 & 5.04 & 448.57 & 203,882 \\ Amazon-670K & 490,449 & 153,025 & 670,091 & 5.45 & 3.99 & 135,909 \\ Amazon-3M & 1,717,899 & 742,507 & 2,812,281 & 36.04 & 22.02 & 337,067 \\ \bottomrule \end{tabular} \caption{XMC\xspace data statistics. $n_{\text{trn}}$ and $n_{\text{tst}}$ are the numbers of instances in the training and testing splits. $L$ is the number of labels and $\bar{L}$ is the average number of labels per instance. $\bar{n}$ is the average number of positive instances per label. $d_{\text{tfidf}}$ is the sparse TF-IDF feature dimension. These four datasets and the sparse TF-IDF features are downloaded from \url{https://github.com/yourh/AttentionXML}, the same as used in AttentionXML\xspace~\cite{you2019attentionxml} and {X-Transformer}\xspace~\cite{chang2020xmctransformer}. \vspace{-10pt} } \label{tab:datasets} \end{table*} \subsection{Synthetic datasets by grouping true labels} In this experiment, we create synthetic datasets derived from public datasets by deliberately entangling some labels. The motivation is to test the capability of disentangling different semantics from fused labels. Specifically, we design three ways to mix up labels, of easy, medium, and hard difficulty. The grouping is done without replacement: suppose a dataset has $L$ labels and we entangle every $k$ labels into one fused label; after this process we obtain an artificial dataset with $\ceil{\frac{L}{k}}$ labels.
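As a concrete illustration, the random variant of this grouping can be sketched as follows (a hypothetical helper, not our actual data-generation script):

```python
import numpy as np

def fuse_labels(Y, k, rng):
    """Randomly entangle every k labels into one fused label, without
    replacement: L original labels -> ceil(L / k) fused labels."""
    n, L = Y.shape
    perm = rng.permutation(L)
    group_of = np.empty(L, dtype=int)
    group_of[perm] = np.arange(L) // k       # original label -> fused label id
    L_new = -(-L // k)                       # ceil(L / k)
    Y_new = np.zeros((n, L_new), dtype=int)
    rows, cols = np.nonzero(Y)
    Y_new[rows, group_of[cols]] = 1          # fused label positive if any member is
    return Y_new

rng = np.random.default_rng(0)
Y = (rng.random((10, 7)) < 0.4).astype(int)
Y_fused = fuse_labels(Y, k=3, rng=rng)       # 7 labels -> 3 fused labels
```

The easier variants differ only in how the groups are chosen (via clustering rather than a random permutation), not in this relabeling step.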
We evaluate the following three implementations: \begin{itemize}[noitemsep,topsep=0pt,parsep=4pt,partopsep=0pt,leftmargin=*] \item Easy mode: we run a balanced $\ceil{\frac{L}{k}}$-means clustering on the original label space of size $L$, so that each cluster contains about $k$ labels. After that, we represent each cluster with a single label, whose embedding is the cluster centroid. \item Medium mode: we run a balanced $\ceil{\frac{L}{32k}}$-means clustering on the label space. Different from the setting above, each cluster now contains $32k$ labels. Next, we \emph{randomly} group the $32k$ labels into $32$ subsets, with $k$ labels each. \item Hard mode: the most difficult mode randomly shuffles labels into $\ceil{\frac{L}{k}}$ subsets, with $k$ labels in each subset. \end{itemize} We rank the difficulty of the three modes based on the idea that when similar labels are entangled, it is less of an issue for the baseline model, so the accuracy drop is negligible; whereas when unrelated labels are bundled together, our method starts to shine. See Figure~\ref{fig:three-modes} for the relative performance improvement over XR-Linear. Several conclusions can be drawn from this figure: first, the improvements are all positive, meaning our method consistently beats the baseline XR-Linear model in all scenarios; second, by comparing Wiki10-31K with Amazon-670K, we see that Amazon-670K shows a larger performance gain. This is mainly because the larger label space of Amazon-670K renders the problem even harder after combining labels, so our method exhibits a bigger edge over the baselines. \begin{figure}[tb] \centering \includegraphics[width=0.4\textwidth]{rel_perf/wiki10.pdf} \includegraphics[width=0.4\textwidth]{rel_perf/amazon670k.pdf} \caption{The relative precision@5 gain over the baseline XR-Linear model when adding our proposed overlapping clusters.
The precision values are shown in Appendix~\ref{app:three-modes}.} \label{fig:three-modes} \end{figure} How are the labels disentangled into different clusters? We show a random example from our dataset in Table~\ref{tab:fused_label}. This example shows that our method can indeed separate labels with fused semantics, improving the quality of both the matcher and the ranker. \subsection{Hyperparameter sensitivity ($\lambda$)} \label{sec:hyperparameter} In real applications, we never know the number of semantics a label has -- either because the label space is huge or because inspecting every label is uneconomical. In this section, we check the sensitivity of model performance to the hyperparameter $\lambda$. To this end, we consider four different datasets and choose $\lambda\in\{1,2,\ldots, 6\}$. The results are exhibited in Figure~\ref{fig:sensitivity}. From this figure, we observe that the performance generally improves as we increase $\lambda$ -- our prior on the number of different meanings of each label. However, since the benefit diminishes quickly, we choose $\lambda=2$ for all the following experiments in this paper. \begin{minipage}{\textwidth} \vspace{.5em} \begin{minipage}{0.49\textwidth} \begin{tabularx}{\linewidth}{l|X} \toprule \multicolumn{2}{l}{Fused label: ``tanks'' and ``fashion scarves''}\\ \toprule Cluster ID & Input text \\ \midrule \multirow{4}{*}{96} & \makecell[Xt]{\small Chiffon Butterfly Print on Black - {\color{blue}Silk Long Scarf 21x}; (Clearance): ...
offers you one of delightful styles as you desire.}\\ \midrule \multirow{5}{*}{103} & \makecell[Xt]{\small This Scuba Yoke Fill Station device allows a person to fill high pressure nitrogen/compressed {\color{blue}air tanks} from a scuba tank...} \\ \bottomrule \end{tabularx} \captionof{table}{An example of how a fused label is disentangled into two labels, which fall into different clusters (cluster ids 96 and 103) and are finally predicted correctly despite the label fusion.} \label{tab:fused_label} \end{minipage} \hfill \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1\textwidth]{./figs/prec_at_5.pdf} \captionof{figure}{Precision@5 (y-axis) versus different values of $\lambda$. We generally see that higher $\lambda$ translates to better performance, but the gain plateaus around $\lambda=4$.} \label{fig:sensitivity} \end{minipage} \end{minipage} \vspace{-10pt} \subsection{Real datasets} \label{sec:real-data} The above experiments showed that our method is indeed better at disentangling complex meanings from labels. In real scenarios, it is certainly not the case that every label has tens of underlying semantics; rather, real datasets behave more like a mix of unimodal and multimodal labels. In this experiment, we show that our method remains favorable. We conduct experiments to show that our method can successfully boost the performance of existing partition-based XMC models, including XR-Linear~\cite{yu2020pecos} and {X-Transformer}\xspace~\cite{chang2020xmctransformer,yu2020pecos}.
\begin{table} \centering \scalebox{0.675}{ \begin{tabular}{c|cccccc|cccccc} \toprule Methods & P@1 & P@3 & P@5 & PSP@1 & PSP@3 & PSP@5 & P@1 & P@3 & P@5 & PSP@1 & PSP@3 & PSP@5 \\ \midrule & \multicolumn{6}{c}{Wiki10-31K} & \multicolumn{6}{c}{AmazonCat-13K} \\ \midrule AnnexML~\cite{tagami2017annexml} & 86.46 & 74.28 & 64.20 & 11.86 & 12.75 & 13.57 & 93.54 & 78.36 & 63.30 & 51.02 & 65.57 & 70.13 \\ DiSMEC~\cite{babbar2017dismec} & 84.13 & 74.72 & 65.94 & 10.60 & 12.37 & 13.61 & 93.81 & 79.08 & 64.06 & 51.41 & 61.02 & 65.86 \\ Parabel~\cite{prabhu2018parabel} & 84.19 & 72.46 & 63.37 & 11.69 & 12.47 & 13.14 & 93.02 & 79.14 & 64.51 & 50.92 & 64.00 & 72.10 \\ eXtremeText~\cite{wydmuch2018no} & 83.66 & 73.28 & 64.51 & - & - & - & 92.50 & 78.12 & 63.51 & - & - & - \\ Bonsai~\cite{khandagale2019bonsai} & 84.52 & 73.76 & 64.69 & 11.85 & 13.44 & 14.75 & 92.98 & 79.13 & 64.46 & 51.30 & 64.60 & 72.48 \\ XML-CNN~\cite{liu2017deep} & 81.41 & 66.23 & 56.11 & 9.39 & 10.00 & 10.20 & 93.26 & 77.06 & 61.40 & \second{52.42} & 62.83 & 67.10 \\ AttentionXML~\cite{you2019attentionxml}& 87.47 & 78.48 & 69.37 & \first{15.57} & \second{16.80} & 17.82 & 95.92 & 82.41 & 67.31 & \first{53.76} & \first{68.72} & 76.38 \\ XR-LINEAR~\cite{yu2020pecos} & 85.13 & 74.96 & 66.05 & - & - & - & 94.54 & 79.87 & 64.67 & - & - & - \\ {X-Transformer}\xspace~\cite{yu2020pecos} & \second{88.26} & \second{78.51} & \second{69.68} & 15.12 & 16.52 & \second{17.99} & \first{96.48} & \second{83.41} & \second{68.19} & 50.36 & 66.32 & \second{76.45} \\ \midrule Ours & \first{88.85} & \first{79.52} & \first{70.67} & \second{15.23} & \first{16.81} & \first{18.42} & \first{96.48} & \first{83.55} & \first{68.36} & 50.46 & \second{66.76} & \first{77.01} \\ \bottomrule & \multicolumn{6}{c}{Amazon-670K} & \multicolumn{6}{c}{Amazon-3M}\\ \midrule AnnexML~\cite{tagami2017annexml} & 42.09 & 36.61 & 32.75 & 21.46 & 24.67 & 27.53 & 49.30 & 45.55 & 43.11 & 11.69 & 14.07 & 15.98 \\ DiSMEC~\cite{babbar2017dismec} & 44.78 & 39.72 & 
36.17 & 26.26 & 30.14 & 33.89 & 47.34 & 44.96 & 42.80 & - & - & - \\ Parabel~\cite{prabhu2018parabel} & 44.91 & 39.77 & 35.98 & 26.36 & 29.95 & 33.17 & 47.42 & 44.66 & 42.55 & 12.80 & 15.50 & 17.55 \\ eXtremeText~\cite{wydmuch2018no} & 42.54 & 37.93 & 34.63 & - & - & - & 42.20 & 39.28 & 37.24 & - & - & - \\ Bonsai~\cite{khandagale2019bonsai}& 45.58 & 40.39 & 36.60 & 27.08 & 30.79 & 34.11 & 48.45 & 45.65 & 43.49 & 13.79 & 16.71 & 18.87 \\ XML-CNN~\cite{liu2017deep} & 33.41 & 30.00 & 27.42 & 17.43 & 21.66 & 24.42 & - & - & - & - & - & - \\ AttentionXML~\cite{you2019attentionxml} & 47.58 & 42.61 & 38.92 & 30.29 & 33.85 & 37.13 & 50.86 & \second{48.04} & \second{45.83} & 15.52 & 18.45 & 20.60 \\ XR-LINEAR~\cite{yu2020pecos} & 42.51 & 37.32 & 33.60 & - & - & - & 46.65 & 43.38 & 41.05 & - & - & - \\ {X-Transformer}\xspace~\cite{chang2020xmctransformer,yu2020pecos} & \second{48.07} & \second{42.96} & \second{39.12} & \second{36.06} & \second{38.38} & \second{41.04} & \second{51.20} & 47.81 & 45.07 & \second{18.64} & \second{21.56} & \second{23.65} \\ \midrule Ours & \first{50.70} & \first{45.40} & \first{41.55} & \first{36.39} & \first{39.15} & \first{41.96} & \first{52.70} & \first{49.92} & \first{47.71} & \first{18.79} & \first{21.90} & \first{24.10} \\ \bottomrule \end{tabular}} \caption{Comparison of Precision@k (P@k) and Propensity-based scores (PSP@k) for $k=1, 3, 5$ on four datasets. First place is marked \first{red}; second place is marked \second{blue}. Our method is trained with the same concatenation of dense neural and sparse TF-IDF features, where the former is from fine-tuned {X-Transformer}\xspace~\cite{chang2020xmctransformer,yu2020pecos}. Used as a plugin on top of existing {X-Transformer}\xspace models, our proposed framework achieves new SOTA results on three out of four datasets.
} \label{tab:real-data-compare} \end{table} In Table~\ref{tab:linear-experiment}, we combine the proposed method with the competitive linear model XR-Linear on four XMC\xspace benchmark datasets. We assume only TF-IDF features are given and consider the single-model setting without any ensemble. Our method consistently improves the performance over the baseline XR-Linear. Moreover, we test how the alternative update improves accuracy (by comparing No/Yes in the ``Alternative update?'' column). The results show that it is indeed necessary to train both the model parameters and the cluster map in an alternative fashion (i.e., see Algorithm~\ref{alg:algorithm}). \par Next, we repeat the same process based on the current SOTA {X-Transformer}\xspace model. The results are shown in Table~\ref{tab:real-data-compare}. Notice that our method reuses the same dense+sparse features as the {X-Transformer}\xspace model, and the results are the ensemble of $9$ models~\cite{chang2020xmctransformer}. From Table~\ref{tab:real-data-compare}, we observe that our method achieves new SOTA precision results by a significant margin while still maintaining good PSP scores, compared with other strong algorithms such as Parabel, AttentionXML, XR-Linear, and {X-Transformer}\xspace. Notably, the numbers show that our method is more suitable for large label spaces (such as Amazon-3M) than for smaller ones (e.g., AmazonCat-13K). \par To benchmark the training and inference overhead due to the repeated clustering and training in our method, we report the relative training and inference times under different $\lambda$'s in Table~\ref{tab:runtime_cmp}. We remark that our method is more economical when the underlying model is inherently slow to train. On Amazon-3M, for example, our framework introduces only $1.1\%$ training overhead and $21.1\%$ inference overhead over the {X-Transformer}\xspace models.
\par In Appendix~\ref{app:random-baseline}, we provide one more experiment on the real-world datasets that compares our method with a random baseline, constructed by duplicating the same number of labels into random clusters, as opposed to using our precision-maximization algorithm. \begin{minipage}{\textwidth} \vspace{5pt} \begin{minipage}[t]{0.48\textwidth} \scalebox{0.7}{ \begin{tabular}{ccccc} \toprule Method & Alternative update? & Prec@1 & Prec@3 & Prec@5 \\ \midrule \multicolumn{5}{c}{Wiki10-31K}\\ XR-Linear & N/A & 84.14 & 72.85 & 64.09\\ +Ours & No & 84.52 & 74.23 & 65.41 \\ +Ours & Yes & \textbf{84.57} & \textbf{74.55} & \textbf{65.59} \\ \midrule \multicolumn{5}{c}{AmazonCat-13K} \\ XR-Linear & N/A & 92.53 & 78.45 & 63.85\\ +Ours & No & 90.63 & 78.12 & 63.86\\ +Ours & Yes & \textbf{92.90} & \textbf{79.04} & \textbf{64.35}\\ \midrule \multicolumn{5}{c}{Amazon-670K} \\ XR-Linear & N/A & 44.17 & 39.14 & 35.38 \\ +Ours & No & 43.22 & 38.19 & 34.10 \\ +Ours & Yes & \textbf{44.82} & \textbf{39.93} & \textbf{36.34} \\ \midrule \multicolumn{5}{c}{Amazon-3M} \\ XR-Linear & N/A & 46.74 & 43.88 & 41.72\\ +Ours & No & 46.95 & 44.24 & 42.18\\ +Ours & Yes & \textbf{47.51} & \textbf{44.76} & \textbf{42.67}\\ \bottomrule \end{tabular}} \captionof{table}{Experiments with XR-Linear. Our method is denoted ``XR-Linear+Ours''; we also test the finetuning step detailed in Algorithm~\ref{alg:algorithm} (L7).
When the joint-training option is disabled, the matcher $\mathcal{M}$ is not updated despite the creation of a new cluster map.} \label{tab:linear-experiment} \end{minipage} \hfill \begin{minipage}[t]{0.5\textwidth} \scalebox{0.75}{ \begin{tabular}{ccc} \toprule Method & Extra training time & Extra inference time \\ \midrule \multicolumn{3}{c}{Wiki10-31K \quad (baseline: $T_{trn}=0$m$30$s, $T_{tst}=0.7$ms/pts)}\\ +Ours ($\lambda=1$) & 2.32$\times$ & 2.40$\times$ \\ +Ours ($\lambda=2$) & 5.58$\times$ & 4.80$\times$ \\ \midrule \multicolumn{3}{c}{AmazonCat-13K \quad (baseline: $T_{trn}=1$m$37$s, $T_{tst}=0.2$ms/pts)}\\ +Ours ($\lambda=1$) & 3.05$\times$ & 0.71$\times$ \\ +Ours ($\lambda=2$) & 4.54$\times$ & 1.59$\times$ \\ \midrule \multicolumn{3}{c}{Amazon-670K \quad (baseline: $T_{trn}=1$m$30$s, $T_{tst}=0.4$ms/pts)}\\ +Ours ($\lambda=1$) & 0.47$\times$ & 0.07$\times$ \\ +Ours ($\lambda=2$) & 0.96$\times$ & 0.60$\times$ \\ \midrule \multicolumn{3}{c}{Amazon-3M \quad (baseline: $T_{trn}=13$m$49$s, $T_{tst}=0.5$ms/pts)} \\ +Ours ($\lambda=1$) & 0.33$\times$ & 1.12$\times$ \\ +Ours ($\lambda=2$) & 1.63$\times$ & 2.51$\times$ \\ \bottomrule \end{tabular} } \captionof{table}{Extra training and inference time (ms/pts denotes milliseconds per data point) of our method compared to the baseline XR-Linear. Our method adds negligible overhead for big models. For {X-Transformer}\xspace on Amazon-3M, we only incur $1.1\%$ training overhead and $21.1\%$ inference overhead. } \label{tab:runtime_cmp} \end{minipage} \end{minipage} \section{Conclusion} In this paper, we have proposed a simple way to disentangle the semantics residing in the labels of XMC\xspace problems. We have shown how to do this indirectly by building overlapping label clusters. We proposed an optimization algorithm that alternately updates the model components (the matcher and the ranker) and the cluster assignments.
In experiments covering a variety of settings, the results indicate that our method is especially effective when the label space is huge (such as Amazon-3M) or when labels carry many different meanings (such as in the artificial data we created). \bibliographystyle{unsrt}
\section{Introduction} All quantum systems are open, i.e., they interact with an environment. The interaction with the environment leads to dissipation and decoherence due to a flow of energy and/or information from the system to the environment \cite{breuer&petruccione, weiss}. The dynamics of dissipation and decoherence in an open quantum system depends on the properties of the environment, and therefore can be altered by modifying characteristics of the environment such as its spectrum \cite{Bollinger09,Maniscalco04a}. A common example of an environment is the quantized electromagnetic field, typically modeled as an infinite chain of non-interacting quantum harmonic oscillators. The coupling of the quantum system to the environment is described by the spectral density function. If the system couples to all modes of the environment in an equal way, the spectrum of the reservoir is flat. If, instead, the spectral density function strongly varies with the frequency of the environmental oscillators, the environment is said to be structured. Structured environments arise in many physical situations, e.g., in photonic band-gap materials and lossy optical cavities \cite{lambropoulos, haroche}. In these systems the reservoir memory effects induce a feedback of information from the environment into the system. We call these systems non-Markovian \cite{JPRL09}. In this paper we study, to second order in perturbation theory with respect to the system-reservoir coupling, the non-Markovian dynamics of a driven two-state system in the presence of a structured reservoir with a generic spectral density. One of the first studies on the general dynamical properties of this model dates back almost twenty years, to when Lewenstein and Mossberg studied a driven atom inside both an optical cavity with a Lorentzian spectral density and a microwave cavity with a step function spectral density \cite{lewenstein}.
Further studies on the laser-induced modification of the spontaneous decay of an atom embedded in a structured reservoir are given in Refs. \cite{brinati,janowicz,fanchini,budini,tanas1,tanas2}. We extend these results in several ways. First of all, we present the time-local non-Markovian master equation for the dynamics and its solution, valid in the limit of weak coupling between the system and the environment. The memory effects due to the reservoir structure are contained in time-dependent decay rates. We show that, for short initial times, the dependence of the decay rates on the driving laser is more complicated than the one presented in Ref. \cite{lewenstein}. For these short initial times, indeed, the spontaneous decay of the atom can not only be suppressed or enhanced, but also partly reversed, when non-Markovian oscillations induced by reservoir memory effects are present. The importance of the driven two-state model is especially pronounced in quantum computation and quantum technologies, where one or more driven qubits constitute the basic building block of quantum logic gates \cite{nilsen}. Different implementations of qubits for quantum logic gates are subject to different types of environmental noise, i.e., to different environmental spectra \cite{Bollinger09}. In this study we focus on two examples of a structured reservoir, namely the Ohmic and the Lorentzian reservoirs. Owing to the microscopic approach that we adopt in this paper, we can compare the microscopic processes underlying the system dynamics for the two different reservoirs. This knowledge can aid the search for physical set-ups that best retain quantum properties under dissipative dynamics. We study the system dynamics with and without the widely used secular approximation, singling out its limits of validity.
The time-scale for nonsecular phenomena is typically small to intermediate compared with the time-scale for relaxation processes, and therefore the secular approximation may not be consistent with studies of non-Markovian dynamics. A recent article by Cummings and Hu further elucidates the importance of nonsecular studies on open quantum systems \cite{hu}. Our investigation of the effects of nonsecular terms on the dynamics of the driven two-state system brings to light the existence of nonsecular oscillations in the population of the two-state system. These oscillations are similar in nature to those observed in the entanglement dynamics in Ref. \cite{vasile}. Contrary to the oscillations due to non-Markovianity, nonsecular oscillations persist for times much longer than the reservoir correlation time. Moreover, our analysis shows that the nonsecular terms also affect the asymptotic long-time values of the Bloch vector components, i.e., their impact exceeds the time-scale of nonsecular oscillations. Finally, an important result we present in the paper is the analysis of conditions for complete positivity (CP) of the dynamics. All phenomenological or approximate non-Markovian master equations may lead to unphysical results for certain values of the parameters. In order to guarantee the physicality of the solution of the master equation, one needs to study the complete positivity of the dynamical map. This is by no means an easy task. Explicit conditions for CP have been up to now obtained only for very simple systems \cite{Maniscalco07,Breuer09}. Here we study for the first time the CP conditions for the non-Markovian driven two-state system and show that they have a clear physical interpretation in terms of the decay rates. The paper is organized as follows. In Sec.
\ref{the model} we introduce the microscopic Hamiltonian model and the non-Markovian nonsecular time-local master equation describing the dynamics of the driven two-level atom in a generic structured reservoir. In Sec. \ref{decay rates} we derive the analytic expressions of the non-Markovian time-dependent decay rates for the special cases of a Lorentzian and an Ohmic reservoir and we discuss the physical processes characterizing the system dynamics. These results are used to study the solutions of the optical Bloch equations in Sec. \ref{dynamics} in the secular and the non-secular regime. In Sec. \ref{cp} we derive the necessary and sufficient conditions for complete positivity. Finally, Sec. \ref{conclusions} summarizes the results and presents conclusions. \section{The microscopic model}\label{the model} We consider a two-level atom with Bohr frequency $\omega_A$ interacting with a driving laser of frequency $\omega_L$ almost resonant with the atomic transition, i.e., $|\Delta|=|\omega_A-\omega_L|\ll\omega_A$. The two-level atom is embedded in a zero-temperature bosonic reservoir modeled by an infinite chain of quantum harmonic oscillators.
In a frame rotating with the laser frequency $\omega_L$ the total Hamiltonian for this system, in units of $\hbar$, is given by \begin{equation} \label{eq:h} H=H_S+H_E+H_I, \end{equation} where \begin{eqnarray} \label{eq:hs} H_S&=&\frac{1}{2}(\Delta\sigma_z+\Omega\sigma_x),\\ \label{eq:he} H_E&=&\sum_k\omega_ka_k^{\dagger}a_k,\\ \label{eq:hi} H_I&=&\sum_k\left(g_ke^{-i\omega_L t}a_k^{\dagger}\sigma_-+g_k^*e^{i \omega_L t}a_k\sigma_+\right), \end{eqnarray} are the free Hamiltonians of the system and the environment and the interaction Hamiltonian, respectively, $\sigma_{x,y,z}$ are the Pauli operators, $\sigma_{\pm}$ the atomic inversion operators and $a_k$ the annihilation operator of quanta in the $k$-th reservoir mode.\\ The Rabi frequency $\Omega$ describes the strength of the interaction between the atom and the laser, and it is taken to be small compared to the atomic and laser frequencies, $\Omega\ll\omega_A,\omega_L$. The interaction strength between the two-level atom and the $k$-th mode of the reservoir is given by $g_k$. In the limit of a continuum of reservoir modes $\sum_k|g_k|^2\rightarrow\int d\omega J(\omega),$ where $J(\omega)$ is the spectral density function, characterizing the reservoir spectrum. In this paper we focus on structured reservoirs, i.e., reservoirs with a spectrum that varies significantly with the frequency of the environmental oscillators.\\ The description of a quantum system in a structured reservoir requires non-Markovian approaches since the reservoir correlation time is typically longer than other time-scales of the system dynamics. In the following subsection we present the microscopic non-Markovian master equation for the system introduced above. We will see how useful information on the system dynamics can be inferred simply by looking at the form of the master equation and, in particular, by studying the behavior of the time-dependent decay rates appearing in the equations.
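As a quick numerical sanity check (a sketch of ours, not part of the derivation), diagonalizing $H_S$ confirms that the splitting between its eigenstates, the dressed states introduced below, is $\sqrt{\Delta^2+\Omega^2}$:

```python
import numpy as np

# Pauli matrices entering H_S = (Delta*sigma_z + Omega*sigma_x)/2
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

def dressed_splitting(delta, rabi):
    """Gap between the eigenvalues of H_S; analytically sqrt(delta^2 + rabi^2)."""
    h_s = 0.5 * (delta * sigma_z + rabi * sigma_x)
    e_minus, e_plus = np.linalg.eigvalsh(h_s)  # eigenvalues in ascending order
    return e_plus - e_minus
```

For instance, $\Delta=0.3$ and $\Omega=0.4$ (in arbitrary units) give a splitting of $0.5$.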
\subsection{Time-local master equation} We use the time-convolutionless (TCL) projection operator technique to obtain the master equation for the driven two-level atom starting from the microscopic model of Eqs. (\ref{eq:h})-(\ref{eq:hi}) \cite{breuer&petruccione}. In the limit of weak coupling between the system and the environment the TCL generator is expanded to second order with respect to a coupling constant quantifying the strength of the interaction between the system and the environment. In Ref. \cite{haikka} one of us has demonstrated that the time-local non-Markovian master equation describing the system under study can be written in the form \begin{equation} \label{master equation} \frac{d\bar{\rho}(t)}{dt}=-i[\bar{H}_S+\bar{H}_{LS},\bar{\rho}(t)] +\mathcal{D}[\bar{\rho}(t)]+\mathcal{D}'[\bar{\rho}(t)], \end{equation} where bars indicate that the operators are given in the dressed state basis $|\psi_\pm\rangle=\pm\sqrt{C_{\pm}}|e\rangle+\sqrt{C_{\mp}}|g\rangle$, where $|e\rangle$ and $|g\rangle$ are the atomic excited and ground state, and the coefficients $C_\pm$ are \begin{equation} \label{cpm} C_{\pm}=\frac{\Delta\pm\omega}{2\omega},\quad C_0=\frac{\Omega}{2\omega}, \end{equation} with \begin{equation} \label{omega} \omega=\sqrt{\Delta^2+\Omega^2} \end{equation} the energy separation between the eigenstates of the driven atom. The unitary part of Eq. (\ref{master equation}) is governed by the Hamiltonians \begin{eqnarray} \label{dissipator} \bar{H}_S &=& \frac{\omega}{2}\bar{\sigma}_z, \\ \bar{H}_{LS} &=& \lambda_-(t)C_-^2\bar{\sigma}_-\bar{\sigma}_++\lambda_+(t)C_+^2\bar{\sigma}_+\bar{\sigma}_- \nonumber \\ &+&\lambda_0(t)C_0^2\bar{\sigma}_z^2, \label{eq:HLS} \end{eqnarray} namely the free Hamiltonian and the Lamb shift Hamiltonian, respectively. The latter one describes a small shift in the energy of the eigenstates of the two-level atom. 
This term has no qualitative effect on the dynamics of the system and therefore will be neglected in the following. \\ The dissipator in Eq. (\ref{master equation}) has been written as the sum of two terms, $\mathcal{D}$ and $\mathcal{D}'$. The first term is \begin{eqnarray} \label{secular dissipator} \mathcal{D}[\bar{\rho}(t)]&=&C_+^2\gamma_+(t)\left[\bar{\sigma}_-\bar{\rho}(t)\bar{\sigma}_+-\frac{1}{2}\{\bar{\sigma}_+\bar{\sigma}_-,\bar{\rho}(t)\}\right]\nonumber\\ &+&C_-^2\gamma_-(t)\left[\bar{\sigma}_+\bar{\rho}(t)\bar{\sigma}_--\frac{1}{2}\{\bar{\sigma}_-\bar{\sigma}_+,\bar{\rho}(t)\}\right]\nonumber\\ &+&C_0^2\gamma_0(t)\left[\bar{\sigma}_z\bar{\rho}(t)\bar{\sigma}_z-\frac{1}{2}\{\bar{\sigma}_z\bar{\sigma}_z,\bar{\rho}(t)\}\right]. \end{eqnarray} The second term has a more complicated form and contains the contribution of the so-called nonsecular terms, i.e., terms oscillating rapidly with respect to the dressed atom characteristic time $\tau_S=\omega^{-1}$, \begin{eqnarray} \mathcal{D}'[\bar{\rho}(t)]\!\!&=&\!\!\!\left[\frac{\gamma_-(t)}{2}-i\lambda_-(t)\right]\!\!\Big\{C_-C_0\big[\bar{\sigma}_+\bar{\rho}(t)\bar{\sigma}_z-\bar{\sigma}_z\bar{\sigma}_+\bar{\rho}(t)\big] \nonumber \\ &+& C_+C_-\big[\bar{\sigma}_+\bar{\rho}(t)\bar{\sigma}_+-\bar{\sigma}_+\bar{\sigma}_+\bar{\rho}(t)\big]\Big\}\nonumber\\ &+&\!\!\!\left[\frac{\gamma_+(t)}{2}-i\lambda_+(t)\right]\!\!\Big\{C_+C_0\big[\bar{\sigma}_-\bar{\rho}(t)\bar{\sigma}_z-\bar{\sigma}_z\bar{\sigma}_-\bar{\rho}(t)\big] \nonumber \\ &+&C_+C_-\big[\bar{\sigma}_-\bar{\rho}(t)\bar{\sigma}_--\bar{\sigma}_-\bar{\sigma}_-\bar{\rho}(t)\big]\Big\}\nonumber\\ &+&\!\!\!\left[\frac{\gamma_0(t)}{2}-i\lambda_0(t)\right]\!\!\Big\{C_-C_0\big[\bar{\sigma}_z\bar{\rho}(t)\bar{\sigma}_--\bar{\sigma}_-\bar{\sigma}_z\bar{\rho}(t)\big] \nonumber \\ &+&C_+C_0\big[\bar{\sigma}_z\bar{\rho}(t)\bar{\sigma}_+-\bar{\sigma}_+\bar{\sigma}_z\bar{\rho}(t)\big]\Big\}+ h.c.\label{dissipator'} \end{eqnarray} where $h.c.$ denotes Hermitian 
conjugation.\\ As for all time-local master equations, the non-Markovian effects are contained in the coefficients $\gamma_{\xi}(t)$ and $\lambda_\xi(t)$, with $\xi\in\{-,0,+\}$, which arise from the real and imaginary part of the reservoir correlation function, respectively \cite{haikka}. These coefficients read \begin{eqnarray} \label{rates} \gamma_{\xi}(t)&=&2\int_0^td\tau\int d\tilde{\omega}J(\tilde{\omega})\cos[(\omega_L+\xi\omega-\tilde{\omega})\tau],\\ \lambda_{\xi}(t)&=&\int_0^td\tau\int d\tilde{\omega}J(\tilde{\omega})\sin[(\omega_L+\xi\omega-\tilde{\omega})\tau]. \label{lambs} \end{eqnarray} For times longer than the reservoir correlation time $\tau_C$ the decay rates attain their stationary Markovian values $\gamma_{\xi}^M\equiv\lim_{t\rightarrow\infty}\gamma_{\xi}(t)$ and $\lambda_{\xi}^M\equiv\lim_{t\rightarrow\infty}\lambda_{\xi}(t)$. Therefore the first non-Markovian corrections to the dynamics of the driven two-level atom are visible only for small initial times $t \simeq \mathcal{O}(\tau_C)$.\\ We observe that the dynamics of the driven two-level atom comprises three distinct dynamical effects, each occurring on its own time-scale. Indeed, both dissipation and decoherence occur on a time-scale of the order of the relaxation time-scale $\tau_R$, which is defined by the properties of the reservoir. Nonsecular terms cause oscillations occurring over the typical time-scale of the driven two-level atom, $\tau_S=\omega^{-1}=(\Delta^2+\Omega^2)^{-1/2}$. Finally, as mentioned above, the non-Markovian memory effects happen for times shorter than or of the order of the reservoir correlation time-scale $\tau_C$.\\ \subsection{The secular approximation} Conventionally, the nonsecular terms, contained in the dissipator $\mathcal{D}'$, are neglected in the secular approximation when $\tau_S \ll \tau_R$ \cite{breuer&petruccione}.
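For a generic spectral density, the rates of Eq. (\ref{rates}) can be evaluated by numerical quadrature once the inner $\tau$-integral is done analytically, $2\int_0^t \cos[(a-\tilde{\omega})\tau]\,d\tau = 2\sin[(a-\tilde{\omega})t]/(a-\tilde{\omega})$ with $a=\omega_L+\xi\omega$. A minimal sketch (function names and the test spectrum are our own):

```python
import numpy as np

def gamma_rate(t, a, spectral_density, w_grid):
    """Evaluate gamma(t) of Eq. (rates) for a generic J(w) on a uniform
    frequency grid, using 2*int_0^t cos[(a-w)tau] dtau = 2 sin[(a-w)t]/(a-w)
    (which tends to 2t as w -> a)."""
    x = a - w_grid
    small = np.abs(x) < 1e-12
    safe_x = np.where(small, 1.0, x)           # avoid division by zero
    kernel = np.where(small, t, np.sin(safe_x * t) / safe_x)
    dw = w_grid[1] - w_grid[0]
    return 2.0 * np.sum(spectral_density(w_grid) * kernel) * dw
```

The sketch reproduces two generic features of Eq. (\ref{rates}): $\gamma_{\xi}(0)=0$, and positivity at very short times, where the kernel reduces to $2t\int J(\tilde{\omega})\,d\tilde{\omega}$.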
However, as one might expect, a non-Markovian description of the short-time dynamics is often incompatible with the secular approximation. In order to investigate the effects of the nonsecular terms on the non-Markovian dynamics, we focus instead on two regimes identified by the mutual relationship between the characteristic time-scale $\tau_S$ and the reservoir correlation time-scale $\tau_C$.\\ We call the first regime the {\sl secular regime}, characterized by the condition $\tau_S\ll\tau_C$. In this regime the nonsecular terms are negligible even at short non-Markovian time scales and we can make the secular approximation. The second regime is the {\sl nonsecular regime}, characterized by the opposite condition, i.e., $\tau_C\ll\tau_S$. In this case we must retain the nonsecular terms to correctly describe the non-Markovian dynamics.\\ When the secular approximation is valid, and the dissipator $\mathcal{D}'$ can be neglected, the coefficients $\lambda_\xi(t)$ appear only in the Lamb-shift Hamiltonian of Eq. (\ref{eq:HLS}), and therefore they describe a time-dependent renormalization of the dressed atomic energy. Moreover, the master equation is in time-dependent Lindblad form and the coefficients $\gamma_{\xi}(t)$ are proportional to the decay rates associated with transitions between the atomic dressed states described by the operators $\bar{\sigma}_+$, $\bar{\sigma}_-$ and $\bar{\sigma}_z$. In this case, the dynamics of the system can be inferred from the time evolution of the decay rates for different types of reservoirs in terms of direct and reversed quantum jumps between dressed states, as suggested by the non-Markovian quantum jump (NMQJ) method of Refs. \cite{nmqj,nmqj2}. We will discuss this point further in Sec. III.\\ When the secular approximation is not valid, the master equation is not in Lindblad-type form. Both $\gamma_{\xi}(t)$ and $\lambda_\xi(t)$ appear now in the dissipator $\mathcal{D}'$.
In this case it is not possible to extract from the master equation the jump operators, describing transitions between the dressed states, and the associated time-dependent decay rates. However, as we will see in Sec. IV, the nonsecular terms give rise to interesting effects not only at intermediate times but also in the asymptotic long time regime. \section{Time-dependent coefficients and NMQJ interpretation} \label{decay rates} In the following we specialize our study to two exemplary reservoir spectra widely used in the literature, namely the Lorentzian and the Ohmic spectra. Our aim is to investigate how both the system dynamics and the validity of the secular approximation depend on the properties of the reservoir spectrum. We give analytic expressions for the time-dependent coefficients and use the NMQJ method to compare the microscopic dynamics of the driven two-state system in the two exemplary reservoirs. \begin{figure*} \includegraphics[width=15cm]{fig1.eps} \caption{(Color online) Lorentzian time-dependent coefficients $\gamma_+(T)/\alpha^2$ (dot-dashed blue line), $\gamma_-(T)/\alpha^2$ (dashed red line) and $\gamma_0(T)/\alpha^2$ (solid black line) for $p=0.01, 1, 100$ and $s=0.1,1,10$. The insets show very short time scale dynamics. \label{Fig1}} \end{figure*} \subsection{Lorentzian reservoir}\label{sec:3a} As a first example we consider a Lorentzian spectral density characterizing, e.g., one quantized mode of the electromagnetic field inside a cavity, \begin{equation} \label{Jl} J_{Lor}(\omega)=\frac{\alpha^2}{2\pi}\frac{\lambda^2}{(\omega-\omega_0)^2+\lambda^2}, \end{equation} where $\omega_0$ is the frequency of the mode supported by the cavity and $\lambda$ is the width of the distribution quantifying the leakage of photons through the cavity mirrors. The reservoir correlation time is given by $\tau_{C}=\lambda^{-1}$.
The coupling constant $\alpha^2$ has the dimensions of a frequency; the limit of weak coupling between the system and the environment, assumed in this paper, is valid when $\alpha^2$ is smaller than the smallest relevant frequency in the system. \subsubsection{Time-dependent coefficients} The time-dependent coefficients for the two-level atom in a Lorentzian reservoir can be calculated explicitly using Eqs. (\ref{rates})-(\ref{lambs}), and are given by \begin{eqnarray} \label{ratel} \!\!\! \gamma_{\xi}(T)\!\!\!&=&\!\!\!\frac{\alpha^2 }{2(1+q_{\xi}^2)}\left(1-e^{-T}\cos q_{\xi}T+e^{-T} q_{\xi} \sin q_{\xi} T\right),\\ \!\!\! \lambda_{\xi}(T)\!\!\!&=&\!\!\!\frac{\alpha^2 }{1+q_{\xi}^2} \left(- q_{\xi} +e^{-T} q_{\xi} \cos q_{\xi} T+e^{-T} \sin q_{\xi} T\right),\nonumber\\ \end{eqnarray} where $T=\lambda t$ and $q_{\xi}=s-\xi p$ with $\xi=\{-,0,+\}$. We introduce two important parameters \begin{eqnarray} p&=&\frac{\tau_C}{\tau_S} =\frac{ \omega }{ \lambda},\\ s&=&\frac{\omega_0-\omega_L}{\lambda}. \end{eqnarray} The former parameter identifies the region of validity of the secular approximation: $p\gg1$ corresponds to the secular regime and $p\ll1$ to the nonsecular regime. \begin{figure} \includegraphics[width=7cm]{fig2.eps} \caption{(Color online) Ohmic time-dependent coefficients $\gamma_+(T)/\alpha^2$ (dot-dashed blue line), $\gamma_-(T)/\alpha^2$ (dashed red line) and $\gamma_0 (T)/\alpha^2$ (solid black line) for $p=0.01, 1, 5$. The insets show very short time scale dynamics.} \label{Fig2} \end{figure} The latter parameter $s=(\omega_0-\omega_L)/\lambda \approx (\omega_0-\omega_A)/\lambda$ indicates how far the peak of the Lorentzian spectrum is detuned from the atomic and/or laser frequency, in units of $\lambda$.\\ In Fig. 1 we plot, as an example, the dynamics of $\gamma_{\xi}(T)$ for different values of the parameters $p$ and $s$. We note in passing that the coefficient $\gamma_0(t)$ does not depend on $p$ but only on $s$.
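The closed-form rates of Eq. (\ref{ratel}) are straightforward to evaluate numerically. The short sketch below (names are ours) reproduces three features visible in Fig. 1: $\gamma_{\xi}(0)=0$, the stationary limit $\alpha^2/[2(1+q_{\xi}^2)]$, and the temporarily negative values that arise for large $|q_{\xi}|$:

```python
import numpy as np

def gamma_lor(T, q, alpha2=1.0):
    """Closed-form Lorentzian rate of Eq. (ratel); T = lambda*t, q = s - xi*p."""
    return (alpha2 / (2.0 * (1.0 + q ** 2))) * (
        1.0 - np.exp(-T) * np.cos(q * T) + np.exp(-T) * q * np.sin(q * T)
    )

# The rate vanishes at T = 0, relaxes to alpha2/(2(1+q^2)) for T >> 1, and
# for a far-detuned peak (large |q|) transiently dips below zero, e.g. near
# q*T = 3*pi/2 where the sin term dominates with a negative sign.
```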
A first look at the plots of Fig. 1 shows that for small values of $p$ (upper row), i.e., in the nonsecular regime, the time-dependent coefficients $\gamma_+(t)$, $\gamma_-(t)$ and $\gamma_0(t)$ coincide. Indeed, a power series expansion with respect to $p$ shows that $\gamma_{\pm}(t) \simeq \gamma_0(t)$ when $p\ll1$. In this case, however, the master equation contains the nonsecular dissipator of Eq. (\ref{dissipator'}) and hence is not in time-dependent Lindblad form, so we cannot describe the dynamics in terms of the NMQJ approach. We will numerically investigate the dynamics of this regime of parameters in Sec. IV. For intermediate and large values of $p$ the three coefficients are clearly distinct, as can be seen in the second and third row of Fig. 1. We now focus on the secular regime $p \gg 1$, described by the last row in Fig. 1. \subsubsection{Non-Markovian Quantum Jumps} In the secular regime the nonunitary dynamics is described only by the dissipator of Eq. (\ref{secular dissipator}). The decay rates associated with transitions between the dressed states are given by $C_{\pm}^2\gamma_{\pm}(t)$, while the decay rate associated with phase flips in the dressed state basis is given by $C_0^2\gamma_0(t)$. When the laser is resonant with the atomic transition, i.e., $\Delta=0$, then $C_+^2=C_-^2=C^2_0=1/4$. In this case Fig. 1 shows that, for all values of $s$, $\gamma_0(t)$ is the dominant decay rate, so the main contribution to the system dynamics is given by phase flips in the dressed states. A similar conclusion holds for $\Delta/\Omega \ll 1$, since in this case $C_+/C_0 \simeq C_-/C_0\simeq 1$. On the other hand, for $\Delta/\Omega \gg 1$, one gets $C_+/C_0 \simeq 2 \Delta/\Omega$ and $C_-/C_0 \simeq 0$. In this case the dynamics is dominated by transitions between dressed states, in particular those described by the jump operator $\bar{\sigma}_+$, occurring at a rate $C_+^2 \gamma_+(t) \simeq \gamma_+(t)$.
\\ We also note that, for increasing values of $s$, the stationary Markovian values of the time-dependent coefficients decrease, due to a smaller effective coupling with the reservoir. Moreover, high values of $s$ are characterized by oscillatory behavior of all three coefficients, independently of the value of $p$. For large enough values of $s$ the decay rates take temporarily negative values. This feature is typical of time-local non-Markovian open quantum systems and generally occurs when the characteristic frequency of the open quantum system, here $\omega_A \simeq \omega_L$, is detuned from the peak of the reservoir spectrum \cite{Maniscalco06,Maniscalco04b}. Negative values of the decay rates are interpreted, in the NMQJ formalism, as reversed jumps canceling jumps that previously occurred in the same channel when the decay rate was positive. So the presence of oscillations around zero in the decay rates indicates non-Markovian dynamics induced by the reservoir memory and describing the feedback of information and/or energy from the reservoir into the system \cite{JPRL09}. It is worth noticing that non-Markovian oscillations occur in $\gamma_{\pm}(t)$ for all values of $s$, in the secular regime $p \gg 1$. \\ In the next subsection we present the analytic expressions of the same coefficients for a different reservoir spectral density, namely the Ohmic one, and study their time evolution. Comparing the behavior of $\gamma_{\xi}(t)$ for the two types of spectra, we will see which features are common and which ones vary significantly when changing the spectrum. \subsection{Ohmic reservoir} We now focus on the Ohmic spectral density with exponential cut-off function \begin{equation} \label{Jo} J_{Ohm}(\omega)=\alpha^2\,\omega\exp\left[-\frac{\omega}{\omega_C}\right], \end{equation} where $\omega_C$ is the cut-off frequency and $\alpha$ is a dimensionless coupling constant; the limit of weak coupling between the system and the environment corresponds to $\alpha\ll1$.
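A small numerical check (with illustrative parameter values of our own choosing) confirms that this Ohmic spectrum is peaked at the cut-off frequency, $\omega=\omega_C$, where $dJ_{Ohm}/d\omega = \alpha^2 e^{-\omega/\omega_C}(1-\omega/\omega_C)$ vanishes:

```python
import numpy as np

def j_ohmic(w, alpha=0.1, w_c=1.0):
    """Ohmic spectral density with exponential cut-off, Eq. (Jo)."""
    return alpha ** 2 * w * np.exp(-w / w_c)

# locate the maximum of J on a fine grid; it sits at w = w_c
w = np.linspace(0.0, 10.0, 100001)
w_peak = w[np.argmax(j_ohmic(w))]
```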
The inverse of the cut-off frequency is the Ohmic reservoir correlation time $\tau_{C}=\omega_C^{-1}$. \subsubsection{Time-dependent coefficients} The non-Markovian time-dependent coefficients for the two-level atom in an Ohmic reservoir take the form \begin{eqnarray} \label{rateo} \gamma_{\xi}(T)&=&\alpha^2\,\omega_C \Big\{ \frac{2}{1+T^2}\Big[T\cos\left(q_o T\right)+\sin\left(q_o T\right)\Big] \\ &+& q_o e^{-q_o}\left(\pi-i\text{Ci}\bar{z}+i\text{Ci}z+\text{Si}\bar{z}+\text{Si}z\right) \Big\}, \nonumber\\ \lambda_{\xi}(T)&=&\frac{\alpha^2\omega_C}{2}\frac{1}{1+T^2}\Big\{ \Big[ \cos(q_o T)+T \sin(q_o T)-1-T^2\Big]\nonumber\\ &+& q_o e^{-q_o}\Big[2\text{Chi}(q_o)+2\text{Shi}(q_o)-\text{Ci}\bar{z}-\text{Ci}z \nonumber \\ &+& i\text{Si}\bar{z}+i\text{Si}z\Big] \Big\}, \label{rateo2} \end{eqnarray} where $T=\omega_C t$, $z=q_o (T+i)$, $q_o = s_O + \xi p_O$, and the parameters $s_O$ and $p_O$ correspond to the $s$ and $p$ parameters introduced in the previous subsection, but now adapted to the Ohmic reservoir spectral density, i.e., $s_O=\omega_L/\omega_C$ and $p_O=\omega/\omega_C$. Moreover, in Eqs. (\ref{rateo})-(\ref{rateo2}), the bar is used to denote complex conjugation, $\text{Ci}$ and $\text{Si}$ are the cosine and the sine integrals and $\text{Chi}$ and $\text{Shi}$ are the hyperbolic cosine and hyperbolic sine integrals, respectively.\\ Similarly to the Lorentzian case of the previous subsection, $p_O\gg1$ corresponds to the secular regime and $p_O\ll1$ corresponds to the nonsecular regime. We note, however, that in the Ohmic case, in contrast to the Lorentzian case, the parameters $p_O$ and $s_O$ are no longer independent. Our model of a driven two-level atom is valid when the Rabi frequency $\Omega$ and the detuning between the atom and the laser $|\Delta|=|\omega_A-\omega_L|$ are small compared to the atomic frequency $\omega_A$.
This imposes a restriction on the relative values of $p_O$ and $s_O$; in particular, we must have \begin{equation} \label{ } p_O \ll s_O. \end{equation} The Ohmic time-dependent coefficients for the secular, nonsecular and intermediate regimes are shown in Fig. 2. As in the Lorentzian case, in the nonsecular regime, the three decay rates coincide. In this case, therefore, similar considerations to those made in Sec. \ref{sec:3a} apply. Again, in the nonsecular and intermediate regimes, the master equation is not in the Lindblad form, hence little can be said about the dynamics from the behavior of the decay rates alone. \subsubsection{Non-Markovian Quantum Jumps} In the secular regime, $p_O\gg 1$, the Ohmic coefficients display oscillatory behavior, similarly to the Lorentzian case of Fig. 1 (last row). For the Ohmic reservoir, however, all three time-dependent coefficients are of similar order of magnitude. For short times they coincide, as one can see from the inset in Fig. 2, but as time passes they start oscillating out of phase. As a consequence, when the laser is resonant with the atomic transition, i.e., $\Delta=0$, both quantum jumps between dressed states and phase flip jumps contribute to the dynamics, in contrast to the Lorentzian case, where the phase flips between dressed states were dominant.\\ Summarizing, in this section we have explored the differences in the dynamics due to different reservoir spectra. We have seen that for both the Lorentzian and the Ohmic spectral density, in the nonsecular regime, the three coefficients $\gamma_{\xi}(t)$ appearing in the master equation have the same time dependence. \\ Nonetheless, different spectral distributions corresponding to different physical environments do give rise to noticeable differences.
As an example, we have seen that, in the secular regime and in the case of resonance $\Delta=0$, the types of quantum jumps occurring in the system depend on the spectral properties: in the Lorentzian case phase flips between different eigenstates dominate the dynamics, while in the Ohmic case quantum jumps between different dressed states also occur. Moreover, not all spectra are equally compatible with the assumptions on which our model relies. The Ohmic reservoir, for example, imposes some limitations on the values of the physical parameters characterizing the dynamics. \section{Bloch vector dynamics}\label{dynamics} An alternative way to describe the dynamics of a driven two-state system is by means of the Bloch vector $\mathbf{R}(t)$, whose components are defined as \begin{eqnarray} R_i(t)&=&Tr[\rho(t)\sigma_i], \label{bloch vector} \end{eqnarray} with $i=x,y,z$. The equation describing the dynamics of the Bloch vector, known as the optical Bloch equation, can be obtained straightforwardly from Eq. (\ref{master equation}) \begin{equation} \label{OBE} \frac{d\mathbf{R}(t)}{dt}=[D(t)+D'(t)]\mathbf{R}(t)+\mathbf{d}(t)+\mathbf{d'}(t), \end{equation} with $D(t)+D'(t)$ the damping matrix and $\mathbf{d}(t)+\mathbf{d'}(t)$ the drift vector, whose explicit forms are given in Appendix A. Note that, also in this case, we separate the contribution of the nonsecular terms, contained in the primed quantities, from the contribution of the secular terms. As we will see in Sec. V, the optical Bloch equations prove to be particularly useful for studying the conditions under which the dynamical map is completely positive. Moreover, they provide us with a clear physical picture of the dynamics in terms of dephasing and dissipation phenomena, as described later in this section. \subsection{Secular regime} \begin{figure} \includegraphics[width=7cm]{fig3.eps} \caption{Dynamics of the $z$-component of the Bloch vector as a function of $T=\lambda t$ for $p=100$ and $s=0.1$.
We have set $\alpha^2 / \omega_A=0.01$ and $\Omega / \omega_A=0.01$.} \label{Fig4} \end{figure} When $p\gg1$ the secular approximation is valid and the dynamics of the $z$-component of the Bloch vector $R_z$, in the dressed state basis, decouples from the $x$- and $y$-components. In this case the non-Markovian optical Bloch equations have a simple solution for any initial state $\mathbf{R}(0)=(x_0,y_0,z_0)$: \begin{eqnarray} \!\!\!\!R_x(t)\!\!\!&=&\!\!\!\exp[-\Gamma(t)](x_0\cos\omega t-y_0\sin\omega t),\label{rxsec} \\ \!\!\!\!R_y(t)\!\!\!&=&\!\!\!\exp[-\Gamma(t)](y_0\cos\omega t+x_0\sin\omega t),\label{rysec} \\ \!\!\!\!R_z(t)\!\!\!&=&\!\!\!e^{-\Lambda(t)}\!\! \left\{\!z_0\!+\!\!\!\int_0^t \!\!\!\! ds e^{\Lambda(s)}\!\left[ C_-^2 \gamma_-(s)-C_+^2 \gamma_+(s) \right]\!\! \right\} \label{highQsolution} \end{eqnarray} where \begin{eqnarray}\label{Gamma} \Gamma(t)\!\!\!&=&\!\!\!\frac{1}{2}\int_0^t ds \left[C_+^2\gamma_+(s)+C_-^2\gamma_{-}(s)+4C_0^2\gamma_0(s)\right],\\ \Lambda(t)\!\!\!&=&\!\!\!\int_0^t ds \left[ C_+^2\gamma_+(s)+C_-^2\gamma_{-}(s)\right], \label{Lambda} \end{eqnarray} and the time-dependent coefficients $\gamma_{\xi}(t)$, with $\xi=\{+,0,- \}$, are given by Eq. (\ref{rates}). \\ It should be stressed that the solution of the non-Markovian Bloch equations, given by Eqs. (\ref{rxsec})-(\ref{highQsolution}), is valid for any form of the spectral density function $J(\omega)$ and can therefore be used to describe the dynamics of a driven two-level system in any structured reservoir in the secular regime and in weak coupling. \\ As an example of the dynamics, we plot in Fig. 3 the time evolution of the $z$-component of the Bloch vector for the initial state $\mathbf{R}(0)=(0,0,1)$ in the Lorentzian case. For times of the order of the reservoir correlation time, the Markovian exponential decay of $R_z$ is replaced by rapid non-Markovian oscillations, occurring at the same frequency as the oscillations of $\gamma_+(t)$ and $\gamma_-(t)$.
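As a sanity check on Eq. (\ref{highQsolution}), the quadrature formula for $R_z(t)$ can be compared with a direct numerical integration of the equivalent rate equation $\dot{R}_z=-[C_+^2\gamma_+(t)+C_-^2\gamma_-(t)]R_z+C_-^2\gamma_-(t)-C_+^2\gamma_+(t)$. The sketch below uses illustrative, assumed values for $C_\pm^2$ and damped-oscillatory decay rates; none of these numbers are computed from Eq. (\ref{rates}).

```python
import math

# Assumed sample values (for illustration only, not derived from the model).
Cp2, Cm2 = 0.3, 0.7                                         # C_+^2, C_-^2
gp = lambda t: 1.0 + 0.5 * math.exp(-t) * math.cos(5 * t)   # gamma_+(t)
gm = lambda t: 0.8 + 0.4 * math.exp(-t) * math.sin(5 * t)   # gamma_-(t)

def rz_closed_form(t, z0=1.0, n=4000):
    """R_z(t) = exp(-Lam(t)) * (z0 + int_0^t exp(Lam(s)) [C-^2 g- - C+^2 g+] ds)."""
    dt = t / n
    Lam, integral = 0.0, 0.0
    for i in range(n):
        s = (i + 0.5) * dt
        Lam += (Cp2 * gp(s) + Cm2 * gm(s)) * dt           # accumulate Lambda(s)
        integral += math.exp(Lam) * (Cm2 * gm(s) - Cp2 * gp(s)) * dt
    return math.exp(-Lam) * (z0 + integral)

def rz_euler(t, z0=1.0, n=4000):
    """Direct Euler integration of the rate equation for R_z."""
    dt = t / n
    z = z0
    for i in range(n):
        s = (i + 0.5) * dt
        z += (-(Cp2 * gp(s) + Cm2 * gm(s)) * z
              + Cm2 * gm(s) - Cp2 * gp(s)) * dt
    return z

print(rz_closed_form(2.0), rz_euler(2.0))  # agree up to discretization error
```

The two routines agree up to the $O(dt)$ discretization error, which is the expected consistency check on the quadrature solution.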
Physically these oscillations correspond to a rapid exchange of energy and information between the two-level atom and the environment due to the reservoir memory. \subsubsection{Markovian limit} For times longer than the reservoir correlation time $\tau_C$ the time-dependent decay rates $\gamma_{\xi}(t)$ approach their Markovian stationary values $\gamma_{\xi}^M$ and the solution of the Bloch equations reduces to the well known Markovian one \cite{cohen} \begin{eqnarray} R_x(t)&=&e^{- t/{\tau_D}}(x_0\cos\omega t-y_0\sin\omega t),\label{Markovrxsec}\\ R_y(t)&=&e^{-t/\tau_D}(y_0\cos\omega t+x_0\sin\omega t), \label{Markovrysec}\\ R_z(t)&=&e^{-t/\tau_R}(z_0-z_{\infty})+z_\infty, \label{markovianhighQsolution} \end{eqnarray} where \begin{equation} \label{zinf} z_{\infty}=\frac{C_-^2 \gamma_-^M - C_+^2 \gamma_+^M}{C_-^2 \gamma^M_- + C_+^2 \gamma_+^M} \end{equation} is the $z$-component of the stationary Bloch vector $\mathbf{R}_{\infty}\equiv (0,0,z_\infty)$ and $\gamma^M_-$, $\gamma^M_+$ are the Markovian stationary values of $\gamma_-(t)$ and $\gamma_+(t)$, respectively. In Eqs. (\ref{Markovrxsec})-(\ref{markovianhighQsolution}), the Markovian decoherence time $\tau_D$ and relaxation time $\tau_R$, obtained as the stationary limits of Eqs. (\ref{Gamma})-(\ref{Lambda}), are \begin{eqnarray} \tau_D^{-1} &=&\frac{1}{2} \left[C_+^2\gamma_+^M+C_-^2\gamma_{-}^M+4C_0^2\gamma_0^M\right] \\ \tau_R^{-1} &=& C_+^2\gamma_+^M+C_-^2\gamma_{-}^M. \end{eqnarray} When the driven two-state system interacts with a structured environment, the relaxation and decoherence rates become time-dependent, the time dependence being determined by the form of the reservoir spectrum. \\ The equations above show that, in the Markovian limit, the well known relationship $2 \tau_R \ge \tau_D$ is satisfied. A close inspection of Eqs. (\ref{Gamma})-(\ref{Lambda}) shows that, even in the non-Markovian case, the time-dependent decay rates always satisfy the relation $2 \Gamma(t) \ge \Lambda (t) $, if the time-dependent coefficients $\gamma_{\xi}(t)$ are positive at all times.
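The remark above can be made concrete numerically: by Eqs. (\ref{Gamma})-(\ref{Lambda}), $2\Gamma(t)-\Lambda(t)=4C_0^2\int_0^t\gamma_0(s)\,ds$, so the sign of the accumulated $\gamma_0$ alone decides whether $2\Gamma(t)\ge\Lambda(t)$. A minimal sketch, with all coefficients and rate profiles chosen purely for illustration:

```python
import math

# Illustrative (assumed) parameters, not taken from the model.
Cp2, Cm2, C02 = 0.3, 0.7, 0.25
gp = lambda t: 1.0                   # gamma_+(t), kept positive
gm = lambda t: 0.8                   # gamma_-(t), kept positive
g0 = lambda t: math.cos(6.0 * t)     # gamma_0(t), oscillates below zero

def integrate(f, t, n=2000):
    """Midpoint-rule quadrature of f over [0, t]."""
    dt = t / n
    return sum(f((i + 0.5) * dt) for i in range(n)) * dt

def Gamma(t):   # (1/2) int [C+^2 g+ + C-^2 g- + 4 C0^2 g0]
    return 0.5 * integrate(lambda s: Cp2*gp(s) + Cm2*gm(s) + 4*C02*g0(s), t)

def Lambda(t):  # int [C+^2 g+ + C-^2 g-]
    return integrate(lambda s: Cp2*gp(s) + Cm2*gm(s), t)

# 2*Gamma - Lambda = 4 C0^2 * int g0: when the oscillating gamma_0 has
# accumulated a negative area, the inequality 2 Gamma >= Lambda fails.
for t in (0.3, 0.8):
    print(t, 2*Gamma(t) - Lambda(t), 4*C02*integrate(g0, t))
```

With this choice the accumulated $\gamma_0$ is positive at $t=0.3$ but negative at $t=0.8$, so the inequality holds at the first instant and is violated at the second.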
We have seen, however, that the time-dependent coefficients may oscillate, temporarily taking negative values (see Figs. 1 and 2). In this case it is not a priori guaranteed that the inequality $2 \Gamma(t) \ge \Lambda (t) $ holds. Stated another way, at certain time instants, the generalized time-dependent relaxation and decoherence times may violate the $2 \tau_R (t) \ge \tau_D (t)$ inequality. We will explore this issue in detail in Sec. \ref{cp}.\\ It should be noted that, in the secular regime and in the limit of weak coupling between the system and the environment, the non-Markovian effects are small and occur at a time-scale that is many orders of magnitude smaller than the relaxation time of the two-level system. However, as we will see in the following subsection, in the nonsecular regime, strong oscillations may characterize the dynamics of the Bloch vector also at longer time scales. \begin{figure*} \includegraphics[width=15 cm]{fig4.eps} \caption{The $z$-component of the Bloch vector in the Lorentzian case and in the nonsecular regime as a function of $T=\lambda t$ for $p=0.01$ and (a) $s=0.1$, (b) $s=10$. In each plot we have set $\alpha=0.01\omega_A$ and $\Omega=0.01\omega_A$. The insets show the dynamics in the short, non-Markovian time-scale.} \label{Fig5} \end{figure*} \begin{figure*} \includegraphics[width=15cm]{fig5.eps} \caption{The $x$- and $y$-components of the Bloch vector in the Lorentzian case and in the nonsecular regime as a function of $T=\lambda t$ for $p=0.01$ and (a) $s=0.1$, (b) $s=10$. In each plot we have set $\alpha=0.01\omega_A$ and $\Omega=0.01\omega_A$.} \label{Fig7} \end{figure*} \subsection{Nonsecular regime} When $p\ll1$ we cannot neglect the nonsecular terms in the dynamics of the driven two-level system. Due to the presence of these terms the equation of motion for the $z$-component of the Bloch vector no longer decouples from the equations for the $x$- and $y$-components.
Therefore, we can no longer obtain an analytical solution for the optical Bloch equations. Furthermore, the master equation of the driven two-state system is no longer in the time-dependent Lindblad form and the microscopic description of the dynamics of the system in terms of the NMQJ method is not straightforward. \\ A numerical study of the solution of the full non-Markovian optical Bloch equations shows, however, substantial differences in the dynamics compared to the secular regime. In Fig. 4 we plot the time evolution of $R_z(T)$ for $p=0.01$ when the reservoir spectrum is Lorentzian, for two exemplary values of $s$. Interestingly, while for small $s$ the component $R_z(T)$ decays monotonically, for large $s$ strong oscillations are present and last for long times. These oscillations are due to the nonsecular terms and have to be distinguished from the short-time non-Markovian oscillations. The latter, occurring on the time scale of the correlation time $\tau_C$, are shown in the inset of Fig. 4 (b). \\ The behavior of $R_z(T)$ can be traced back to the dynamics of the time-dependent coefficients $\gamma_{\xi}(T)$. We recall that, in the nonsecular regime, these three coefficients coincide, i.e., $\gamma_{\pm}(T)=\gamma_0(T)\equiv \gamma(T)$. As shown in Fig. 1 (first row), for $p=0.01$ and $s=0.1$, all three decay rates are positive, hence we do not expect short-time non-Markovian oscillations in the dynamics of $R_z(T)$. The initial quadratic decay of Fig. 4(a) is due to the fact that $\gamma(T)$, for $T \le 1$, is smaller than its Markovian stationary value and consequently the decay of $R_z(T)$ is slower than the one predicted by the Markovian theory. \\ For $p=0.01$ and $s=10$, on the contrary, Fig. 1 shows that $\gamma(T)$ oscillates, taking negative values. These oscillations are responsible for the nonmonotonic behavior of $R_z(T)$ at short times, see inset of Fig. 4 (b). The nonsecular oscillations occur on a time-scale comparable to the relaxation time.
Similar oscillations can be seen in the dynamics of the $x$- and $y$-components of the Bloch vector, see Fig. 5(b). \\ A second difference from the secular dynamics, clearly visible in Fig. 5, is that the stationary values of $R_x(t)$ and $R_y(t)$ are no longer zero, in contrast with the prediction of the secular equations (\ref{rxsec})-(\ref{rysec}). In the bare state basis this corresponds to a non-zero steady state value of all three components of the Bloch vector, as one can see from Eq. (\ref{eq:RB}). The nonzero stationary value of $R_z^B$, an effect known as vacuum-field dressed state pumping \cite{lewenstein}, has been observed experimentally in Ref. \cite{Zhu}. On the other hand, the nonzero stationary values of $R_x^B$ and $R_y^B$ indicate a stationary value of the atomic dipole moment different from zero, leading to substantial changes in the resonance fluorescence spectrum \cite{lewenstein}. \section{Complete positivity}\label{cp} Theoretical descriptions of non-Markovian open quantum systems are often based on a series of assumptions and approximations without which it would not be possible to tackle the problem of the description of the dynamics in simple analytic terms. In the case described in this paper, e.g., the main assumptions and approximations are the factorized initial condition, i.e., $\rho(t=0)=\rho_S(t=0) \otimes \rho_R(t=0)$, with $\rho_S$ and $\rho_R$ the system and reservoir density matrices, respectively, the weak coupling approximation and, in some cases, the secular approximation. \\ The non-Markovian master equation we have used in this paper is not in Lindblad form, since, even in the secular regime, the time-dependent coefficients $\gamma_{\xi}$ may temporarily take negative values.
Therefore, both positivity and complete positivity (CP) of the dynamical map, which, for Markovian systems in Lindblad form, are automatically guaranteed by the Lindblad-Gorini-Kossakowski-Sudarshan theorem \cite{lindblad, gorini}, can here be violated, indicating that our solution no longer describes a physical state of the system. \\ In this section we will present the first study of complete positivity for the non-Markovian driven two-state model. We will derive explicit conditions for CP, and therefore positivity, of the dynamical map and we will see how these conditions have a clear and important physical interpretation. Once again, in the following subsections we will distinguish between the secular and nonsecular regimes. \subsection{Secular regime} Let us begin with the case in which the secular approximation is valid and we can neglect the nonsecular damping matrix $D'(t)$ and the nonsecular drift vector $\mathbf{d}'(t)$ in Eq. (\ref{OBE}). In this case the damping matrix is in block diagonal form, see Eq. (\ref{damp}). In the secular regime we can directly use the CP conditions presented in Ref. \cite{hall}. The details of the calculation are presented in Appendix B.\\ We find that the necessary and sufficient condition for CP for the driven two-state system, in the secular regime and in weak coupling, is given by \begin{equation} \label{NScond} 2\Gamma(t)\geq \Lambda(t)\geq0. \end{equation} The physical meaning of Eq. (\ref{NScond}) is straightforward, since $\Gamma(t)$ and $\Lambda(t)$ play the roles of the time-integrated non-Markovian decoherence and relaxation rates, respectively. Therefore the necessary and sufficient condition for CP, in the secular approximation, is that, at each instant of time, twice the accumulated decoherence exceeds the accumulated relaxation. In the Markovian limit the condition of Eq. (\ref{NScond}) reduces to the well known condition \begin{equation} \label{markoviannecc} 2\,\tau_R\geq\tau_D\geq0.
\end{equation} Recall from Section IV A that the Markovian condition is always satisfied by the driven two-state system for any spectral density. Interestingly, inserting Eqs. (\ref{Gamma})-(\ref{Lambda}) into Eq. (\ref{NScond}) one sees that the necessary and sufficient condition for CP is equivalent to \begin{equation} \int_0^t ds\, \gamma_0(s) \ge 0. \label{SecCP} \end{equation} It is worth noting that the condition above does not depend on $\omega=\sqrt{\Delta^2+\Omega^2}$, as one can see from Eq. (\ref{rates}). \subsection{Nonsecular regime} The method used to study the CP condition in the secular regime is no longer applicable in the nonsecular regime, where the damping matrix is no longer in block diagonal form. We use a more general method, based on the positivity of the Choi matrix, and make use of the weak coupling limit \cite{cresser}. Again the details of the calculation are given in Appendix B.\\ We find that the necessary and sufficient condition for CP for the driven two-state system, in the nonsecular regime and in weak coupling, is given by \begin{equation} \Lambda(t)+2\Gamma(t)\geq0. \end{equation} We note in passing that in the nonsecular regime $\Lambda(t)$ and $\Gamma(t)$ can no longer be interpreted as non-Markovian relaxation and decoherence rates, since the form of the master equation is now much more complicated and no simple analytical solution of the optical Bloch equation can be found. However, recall that in the nonsecular regime the three decay rates coincide, i.e., $\gamma_\pm(t)=\gamma_0(t)\equiv\gamma(t)$. Therefore, we find that the CP condition takes a form very similar to the one valid in the secular regime, given by Eq. (\ref{SecCP}), \begin{equation} \label{ } \int_0^tds\,\gamma(s)\geq0. \end{equation} \section{Conclusions}\label{conclusions} In this paper we have studied the non-Markovian dynamics of a driven two-state system immersed in a structured environment.
We have derived the non-Markovian master equation and the optical Bloch equations both with and without the secular approximation, and we have presented the solution in terms of the Bloch vector dynamics for general reservoir spectra. \\ We have compared the dissipative dynamics of the driven two-state system for two different reservoirs, namely the Lorentzian and the Ohmic reservoir, and we have discovered that it is strongly influenced by the spectral properties. For example, in the secular regime and on resonance, in the Lorentzian case the dynamics is dominated by phase jumps in the eigenstate basis, while in the Ohmic case the dominant quantum jumps describe transitions between the dressed states. \\ We have discovered the existence of strong and long-lived nonsecular oscillations in all components of the Bloch vector in some regions of the parameter space. The nonsecular terms were also found to have a significant effect on the stationary quantum state of our system. An interesting open question we will consider next is whether the nonsecular oscillations describe a feedback of information/energy from the system into the environment as measured, e.g., by the non-Markovianity measure proposed in Ref. \cite{JPRL09}. \\ We have also studied the validity of the secular approximation and how it depends on the spectral properties. In particular, our results show that in the Ohmic reservoir the use of the secular approximation is more subtle than in the Lorentzian case. That is, in the Ohmic case, one cannot always perform the secular approximation whenever $p_O\gg1$, since the model is not valid for $p_O\gtrsim s_O$, but instead both conditions have to be met before the secular approximation can be applied. \\ Finally, we have investigated in detail the complete positivity condition in both the secular and the nonsecular regime.
We have discovered that this condition can be traced back to the behavior of the time-dependent coefficients appearing in the master equation and proportional, in the secular regime, to the decay rates of the system. Moreover, we have discovered that whenever the master equation is in time-dependent Lindblad form, i.e., when the secular approximation is valid, the CP condition consists of an inequality linking the non-Markovian decoherence and relaxation rates. This inequality is the non-Markovian generalization of the well known condition $2 \tau_R \ge \tau_D$.\\ The dissipative driven two-state system is one of the most fundamental models of the theory of open quantum systems. Since most of our results are independent of the specific form of the spectral distribution, they can be straightforwardly applied to many different physical contexts where non-Markovian approaches are necessary, e.g., for implementations of quantum computing and other quantum technologies.\\ \begin{acknowledgments} This work was supported by the Emil Aaltonen foundation, the Finnish Cultural foundation, and by the Turku Collegium of Science and Medicine (S.M.). We acknowledge stimulating discussions with J. Piilo.\\ \end{acknowledgments}
\section{Introduction} Superradiance is a classical phenomenon associated with the ergosphere of a rotating black hole \cite{zeldovich,bardeen,misner,starobinsky}. If the modes of an impinging bosonic field $\Phi\sim \exp\{-i\omega t+im\phi\}$, with frequency $\omega$ and angular momentum $m$, are scattered off the event horizon of the black hole, the condition for superradiant amplification is $0<\omega<m\Omega_H$, where $\Omega_H$ is the angular velocity of the event horizon. The superradiance phenomenon allows one to extract the rotational energy of the black hole efficiently. It was proposed by Press and Teukolsky \cite{press} to use the superradiance phenomenon to build a \emph{black-hole bomb}. The essence of the black-hole-bomb mechanism is to add a reflecting mirror outside the rotating black hole. Superradiant modes then bounce back and forth between the event horizon and the mirror, and the rotational energy extracted from the black hole by means of the superradiance process grows exponentially. This mechanism has recently been restudied by many authors \cite{cardoso2004bomb,Hod,Rosa,Lee}. When the reflecting mirror is not artificial, superradiant amplification of an impinging wave can lead to an instability of the black hole, which is called superradiant instability. This kind of instability has been studied extensively in recent years. For example, Kerr and Kerr-Newman black holes \cite{kerrunstable,detweiler,dolan,hodPLB2012} and the Kerr-Newman black hole immersed in a magnetic field \cite{konoplyaPLB} are all unstable against massive scalar field perturbations, where the mass terms of the perturbations play the role of reflecting mirrors. The five-dimensional boosted Kerr black string \cite{DiasPRD2006} is also unstable against massless scalar fields, where the Kaluza-Klein momentum works as the reflecting mirror. For rotating black holes in AdS space, the boundary at infinity can also work as the reflecting mirror.
The small Kerr-AdS black hole in four dimensions is unstable against massless scalar field \cite{cardoso2004ads} and gravitational field \cite{cardoso2006prd} perturbations. Contrary to the four-dimensional case, the superradiant instability of the five-dimensional rotating charged AdS black hole \cite{aliev} occurs only when the orbital quantum number is even. More recently, the superradiant instability of the small Reissner-Nordstr\"{o}m-anti-de Sitter black hole has been investigated analytically and numerically \cite{uchikata}. We have also studied the superradiant instability of a charged massive scalar field in the Kerr-Newman-anti-de Sitter black hole \cite{ranliplb}. In fact, besides AdS space, there are other cases where the boundary at asymptotic infinity provides the reflecting mirror. For example, the rotating linear dilaton black hole \cite{clement} and the charged Myers-Perry black hole in the G\"{o}del universe \cite{knopolya} are unstable due to superradiance. The superradiant instability of these black holes originates from the Dirichlet boundary condition satisfied by the perturbation fields at asymptotic infinity. As mentioned above, the superradiant instability of the charged rotating asymptotically G\"{o}del black hole has been found by using numerical methods \cite{knopolya}. In this paper, we will \emph{re-investigate} the same aspect of this kind of rotating black hole in the G\"{o}del universe by using analytical methods. We focus on the rotating asymptotically G\"{o}del black hole in five-dimensional minimal supergravity theory. This black hole is also called the Kerr-G\"{o}del black hole in the literature. Firstly, by considering the scalar field perturbation in this background, we find that the asymptotically G\"{o}del spacetime requires the wave equation to satisfy the Dirichlet boundary condition at asymptotic infinity.
Then, we divide the space outside the event horizon of the Kerr-G\"{o}del black hole into a near-region and a far-region, and employ the matched asymptotic expansion method to solve the wave equation of the scalar field perturbation. We only deal with the black hole in the limit of small rotation parameter $j$ of the G\"{o}del universe. The analysis of the complex quasinormal modes obtained by imposing the boundary conditions shows that the imaginary parts are positive in the regime of superradiance, which implies the growing instability of these modes. That is to say, the five-dimensional Kerr-G\"{o}del black hole is unstable against the scalar field perturbation. The reason for this instability is precisely the Dirichlet boundary condition at asymptotic infinity, which is similar to the case of rotating black holes in AdS space. The remainder of this paper is organized as follows. In Section 2, we give a brief review of the Kerr-G\"{o}del black hole in five-dimensional minimal supergravity theory. In Section 3, we investigate the classical superradiance phenomenon and the boundary condition of the scalar field perturbation. In Section 4, the approximate solution of the wave equation for the scalar field is obtained by using the matched asymptotic expansion method and the superradiant instability is explicitly shown. The last section is devoted to conclusions and discussion. \section{Five-dimensional Kerr-G\"{o}del black hole} The bosonic part of five-dimensional minimal supergravity theory consists of the metric and a one-form gauge field, which are governed by the Einstein-Maxwell-Chern-Simons (EMCS) equations of motion \begin{eqnarray} &&R_{\mu\nu}-\frac{1}{2}R g_{\mu\nu}=2\left( F_{\mu\alpha}F_{\nu}^{\;\;\alpha}-\frac{1}{4} g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\right) \;,\nonumber\\ &&D_\nu\left(F^{\mu\nu}+\frac{1}{\sqrt{3}\sqrt{-g}}\epsilon^{\mu\nu\alpha\beta\gamma} A_\alpha F_{\beta\gamma}\right)=0\;.
\end{eqnarray} The five-dimensional Kerr-G\"{o}del black hole is a solution to the EMCS equations of motion, the metric of which takes the form \cite{gimon} \begin{eqnarray}\label{metric} ds^2&=&-f(r)dt^2-q(r)r\sigma_L^3dt-h(r)r^2(\sigma_L^3)^2+ \frac{dr^2}{v(r)}\nonumber\\ &&+\frac{r^2}{4}(d\theta^2+d\psi^2+d\phi^2+2\cos\theta d\psi d\phi)\;, \end{eqnarray} where $\sigma_L^3=d\phi+\cos\theta d\psi$, and the metric functions are given by \begin{eqnarray} f(r)&=&1-\frac{2M}{r^2}\;,\nonumber\\ q(r)&=&2jr+\frac{2Ma}{r^3}\;,\nonumber\\ h(r)&=&j^2(r^2+2M)-\frac{Ma^2}{2r^4}\;,\nonumber\\ v(r)&=&1-\frac{2M}{r^2}+\frac{8jM(a+2jM)}{r^2}+\frac{2Ma^2}{r^4}\;. \end{eqnarray} The parameters $M$ and $a$ are related to the mass and angular momentum of the black hole. In this metric, the parameter $j$ defines the scale of the G\"{o}del background and is responsible for the rotation of the G\"{o}del universe \cite{godel}. When $a=0$, this solution reduces to the Gimon-Hashimoto solution, i.e., the Schwarzschild black hole in the G\"{o}del universe \cite{gimon}. The thermodynamics of this black hole has been studied in \cite{banichconpere,wupengprd}. The scalar field perturbation and the greybody factor of Hawking radiation of this kind of black hole have also been calculated in the limit of small $j$ in \cite{scalargodel}. This black hole has also been generalized to the charged case \cite{wu} and other forms. In this paper, we consider the non-extremal black hole case. The metric function $v(r)$ has two positive real roots $r_\pm$, which are given by \begin{eqnarray}\label{horizon} r_{\pm}^2=M-4jMa-8j^2M^2\pm\sqrt{\xi}\;, \end{eqnarray} where \begin{eqnarray} \xi=(M-4jMa-8j^2M^2)^2-2Ma^2\;. \end{eqnarray} Clearly, the non-extremal condition is $\xi>0$. The event horizon is located at the largest root $r_+$ of the function $v(r)$.
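The horizon structure quoted above is easy to verify numerically: the roots (\ref{horizon}) make $v(r)$ vanish, and, once $v(r_+)=0$, the two expressions for the horizon angular velocity given in the next paragraphs coincide. A minimal sketch with arbitrary sample parameters satisfying $\xi>0$:

```python
import math

# Arbitrary sample parameters (illustrative only) in the non-extremal regime.
M, a, j = 1.0, 0.2, 0.05

def v(r):
    """Metric function v(r) of the Kerr-Godel solution."""
    return 1 - 2*M/r**2 + 8*j*M*(a + 2*j*M)/r**2 + 2*M*a**2/r**4

b = M - 4*j*M*a - 8*j**2*M**2      # combination appearing in the root formula
xi = b**2 - 2*M*a**2
assert xi > 0                       # non-extremal case

r_plus = math.sqrt(b + math.sqrt(xi))
r_minus = math.sqrt(b - math.sqrt(xi))
print(v(r_plus), v(r_minus))        # both vanish at the horizons

# The two expressions for the horizon angular velocity quoted in the text;
# they agree because q^2 + f(1-4h) = v and v(r_+) = 0.
q = lambda r: 2*j*r + 2*M*a/r**3
h = lambda r: j**2*(r**2 + 2*M) - M*a**2/(2*r**4)
Omega1 = 2*q(r_plus)/(r_plus*(1 - 4*h(r_plus)))
Omega2 = (2*M - r_plus**2)/(M*a + j*r_plus**4)
print(Omega1, Omega2)
```

The check also confirms $r_+^2<2M$ for positive $a$ and $j$, so $\Omega_H>0$ for this choice of parameters.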
For later convenience, we also present the expression for the angular velocity of the event horizon \begin{eqnarray} \Omega_H&=&\frac{2q(r_+)}{r_+(1-4h(r_+))}\nonumber\\ &=&\frac{4(Ma+jr_+^4)}{r_+^4-4j^2r_+^4(r_+^2+2M)+2Ma^2}\;. \end{eqnarray} By employing the relation among the metric functions \begin{eqnarray} q^2(r)+f(r)(1-4h(r))=v(r)\;, \end{eqnarray} and noting that $v(r)$ vanishes at the horizon, another simple expression for the angular velocity can be derived \begin{eqnarray} \Omega_H=\frac{2M-r_+^2}{Ma+jr_+^4}\;. \end{eqnarray} In the present paper, we consider the case in which the rotation parameters $a$ and $j$ are both positive. From the expression for $r_+^2$ in (\ref{horizon}), one easily finds $r_+^2<2M$, which implies that the angular velocity $\Omega_H$ is always positive when the rotation parameters $a$ and $j$ are positive. \section{Superradiance and boundary condition} Now let us consider the wave equation of a massless scalar field perturbation in the background (\ref{metric}), which is given by the Klein-Gordon equation \begin{eqnarray}\label{waveequation} \nabla_\mu\nabla^\mu\Phi=\frac{1}{\sqrt{-g}} \partial_\mu(g^{\mu\nu}\sqrt{-g}\partial_\nu\Phi)=0\;. \end{eqnarray} Because the metric has the Killing vectors $\partial_t$, $\partial_\psi$, and $\partial_\phi$, we can take the ansatz for the scalar field \begin{eqnarray} \Phi=e^{-i\omega t+in\psi+im\phi}\Theta(\theta)R(r)\;.
\end{eqnarray} Substituting this ansatz into the wave equation (\ref{waveequation}) and separating the variables, we obtain the angular equation \begin{eqnarray}\label{angular} \frac{1}{\sin\theta}\partial_\theta\left[\sin\theta\partial_\theta\Theta(\theta)\right] -\frac{(n-m\cos\theta)^2}{\sin^2\theta}\Theta(\theta) +[l(l+1)-m^2]\Theta(\theta)=0\;, \end{eqnarray} and the radial equation \begin{eqnarray}\label{radial} \frac{1}{4r}\partial_r\left[r^3v(r)\partial_rR(r)\right] +\frac{r^2(1-4h(r))}{4v(r)} \left[\omega-\frac{2mq(r)}{r(1-4h(r))}\right]^2R(r) \nonumber\\ -\left[l(l+1)+\frac{4m^2h(r)}{(1-4h(r))}\right]R(r)=0\;. \end{eqnarray} Obviously, the angular equation (\ref{angular}) is independent of the black hole parameters and is exactly solvable. The solutions of the angular equation are just the spin-weighted spherical harmonic functions, where the integers $l = 0, 1, 2, \cdots$ are the separation constants and the modes $m = 0,\pm 1, \cdots ,\pm l$. In the following, we specify the appropriate boundary conditions for the instability problem. At the horizon, the third set of terms in the radial wave equation (\ref{radial}) can be neglected and the equation reduces to the form \begin{eqnarray}\label{radial1} v\partial_r(v\partial_rR(r))+(1-4h(r_+))(\omega-m\Omega_H)^2R(r)=0\;. \end{eqnarray} Near the horizon, we use the approximation $v(r)\cong 2(r_+^2-r_-^2)(r-r_+)/r_+^3$. Then the solution of equation (\ref{radial1}) satisfying the ingoing boundary condition at the horizon is given by \begin{eqnarray} R(r)\sim (r-r_+)^{-i\varpi}=e^{-i\varpi\ln(r-r_+)}\;, \end{eqnarray} where we have defined \begin{eqnarray}\label{varpi} \varpi=\frac{r_+(r_+^4+2Ma^2-4j^2r_+^6-8j^2Mr_+^2)^{1/2}} {2(r_+^2-r_-^2)}(\omega-m\Omega_H)\;. \end{eqnarray} This solution gives us the superradiance condition for the five-dimensional Kerr-G\"{o}del black hole. When the frequency of the wave is such that $\varpi$ is negative, i.e.
$\omega<m\Omega_H$, one is in the superradiant regime, and the amplitude of an ingoing bosonic field is amplified after scattering off the event horizon. Meanwhile, for the present purpose, it is enough to consider positive frequency $\omega$, which gives the superradiance condition \begin{eqnarray} 0<\omega<m\Omega_H\;. \end{eqnarray} From this condition, one can see that superradiance occurs only for positive $m$. In the following, we will therefore work only with positive $m$. At infinity, the radial wave equation (\ref{radial}) is dominated by \begin{eqnarray} r\partial_r^2 R(r)+3\partial_r R(r)-4j^2\omega^2 r^3 R(r)=0\;. \end{eqnarray} The solution is given by \begin{eqnarray} R(r)\sim \frac{1}{r^2}e^{-j\omega r^2}\;, \end{eqnarray} where we have used the analogy with AdS backgrounds and imposed the Dirichlet boundary condition at spatial infinity. With the ingoing-wave boundary condition at the horizon and the Dirichlet boundary condition at infinity, one can solve for the complex quasinormal modes of the massless scalar field in the Kerr-G\"{o}del background. If the imaginary part of a quasinormal mode is negative, the system is stable against this kind of perturbation; instability means that the imaginary part is positive. In the next section, we will calculate the quasinormal modes by using the matching technique. It is shown that, in the regime of superradiance, the imaginary part of the quasinormal mode is positive. In other words, the superradiant instability of the five-dimensional Kerr-G\"{o}del black hole can be found analytically. \section{Analytical calculation of superradiant instability} In this section, we present an analytical calculation of the superradiant instability for the massless scalar perturbation. We adopt the so-called matched asymptotic expansion method to solve the radial wave equation (\ref{radial}). It turns out to be convenient to use the new variable $x$ defined by $x=r^2$.
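Before changing variables, one can quickly confirm by finite differences that the asymptotic solution $R\sim r^{-2}e^{-j\omega r^2}$ quoted in the previous section indeed solves the dominant equation $r\partial_r^2R+3\partial_rR-4j^2\omega^2r^3R=0$ at infinity; the parameter values in the sketch below are arbitrary illustrative choices.

```python
import math

jpar, w = 0.05, 0.3            # assumed sample values of j and omega

def R(r):
    """Asymptotic solution R = r^{-2} exp(-j w r^2)."""
    return math.exp(-jpar * w * r**2) / r**2

# Central finite differences for R' and R'' at a sample large radius.
r, eps = 7.0, 1e-4
R1 = (R(r + eps) - R(r - eps)) / (2 * eps)
R2 = (R(r + eps) - 2 * R(r) + R(r - eps)) / eps**2

# Residual of r R'' + 3 R' - 4 j^2 w^2 r^3 R, which should vanish.
residual = r * R2 + 3 * R1 - 4 * jpar**2 * w**2 * r**3 * R(r)
print(residual / R(r))   # ~ 0 up to discretization error
```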
Then the radial wave equation can be transformed into \begin{eqnarray}\label{radialx} \Delta\partial_x\left(\Delta\partial_xR(x)\right)+\frac{x^3}{4}(1-4h(x))\left[ \omega-\frac{2mq(x)}{\sqrt{x}(1-4h(x))}\right]^2R(x) \nonumber\\ -\Delta\left[l(l+1)+\frac{4m^2h(x)}{1-4h(x)}\right]R(x)=0\;, \end{eqnarray} where we have used $\Delta=x^2v(x)=(x-x_+)(x-x_-)$ with $x_{\pm}=r_{\pm}^2$. In order to employ the matched asymptotic expansion method, we take the assumption $\omega M\ll 1$ and divide the space outside the event horizon into two regions, namely, a near-region, $x-x_+\ll 1/\omega$, and a far-region, $x-x_+\gg M$. The approximate solution can be obtained by matching the near-region solution and the far-region solution in the overlapping region $M\ll x-x_+\ll1/\omega$. Previous numerical works \cite{konoplyaads,knopolya} on the spectrum of asymptotically G\"{o}del black holes show a number of features in common with the spectrum of AdS spacetime, where the rotation parameter $j$ of the G\"{o}del universe plays the role of the inverse AdS radius $\ell$. Inspired by the work of \cite{cardoso2004ads}, where small AdS black holes are considered, we will in the following deal with the rotating asymptotically G\"{o}del black hole in the limit of small rotation parameter $j$. The small AdS black hole condition implies that $r_+/\ell\ll 1$. For the small G\"{o}del black hole, we assume that $jr_+\ll 1$. With these assumptions, we can analyze the properties of the solution and study the stability of the black hole against the perturbation by imposing the appropriate boundary conditions obtained in the last section. \subsection{Near-region solution} First, let us focus on the near-region in the vicinity of the event horizon, $\omega(x-x_+)\ll 1$. For the small $j$ black holes, this means $jr_+\ll 1$.
The radial wave equation (\ref{radialx}) in the near-region can be reduced to the form \begin{eqnarray} \Delta\partial_x(\Delta\partial_xR(x)) +\left[(x_+-x_-)^2\varpi^2 -l(l+1)\Delta\right]R(x)=0\;. \end{eqnarray} Note that the last term in Eq. (\ref{radialx}) has been neglected because we only consider the case $m\sim\omega$. Introducing the new coordinate variable \begin{eqnarray} z=\frac{x-x_+}{x-x_-}\;, \end{eqnarray} the near-region radial equation can be written in the form \begin{eqnarray} z\partial_z(z\partial_z R(z)) +\left[\varpi^2-l(l+1)\frac{z}{(1-z)^2}\right]R(z)=0\;, \end{eqnarray} with \begin{eqnarray} \varpi=\frac{r_+(r_+^4+2Ma^2)^{1/2}}{2(r_+^2-r_-^2)}(\omega-m\Omega_H)\;. \end{eqnarray} This expression for $\varpi$ coincides with the expression given in (\ref{varpi}) in the small $j$ limit. By defining \begin{eqnarray} R=z^{i\varpi}(1-z)^{l+1}F(z)\;, \end{eqnarray} the near-region radial wave equation becomes \begin{eqnarray} z(1-z)\partial_z^2F(z)+[c-(1+a+b)z]\partial_zF(z)-abF(z)=0\;, \end{eqnarray} with the parameters \begin{eqnarray} a&=&l+1+2i\varpi\;,\nonumber\\ b&=&l+1\;,\nonumber\\ c&=&1+2i\varpi\;. \end{eqnarray} In the neighborhood of $z=0$, the general solution of the radial wave equation is given in terms of the hypergeometric function \begin{eqnarray} R&=&Az^{-i\varpi}(1-z)^{l+1}F(l+1,l+1-2i\varpi,1-2i\varpi,z) \nonumber\\ &&+Bz^{i\varpi}(1-z)^{l+1}F(l+1,l+1+2i\varpi,1+2i\varpi,z)\;. \end{eqnarray} It is obvious that the first term represents the ingoing wave at the horizon, while the second term represents the outgoing wave at the horizon. Because we are considering the classical superradiance process, the ingoing boundary condition at the horizon should be employed, so we have to set $B=0$. The physical solution of the radial wave equation corresponding to the ingoing wave at the horizon is then given by \begin{eqnarray} R=Az^{-i\varpi}(1-z)^{l+1}F(l+1,l+1-2i\varpi,1-2i\varpi,z)\;.
\end{eqnarray} In order to match the far-region solution that will be obtained in the next subsection, we should study the large $r$, $z\rightarrow 1$, behavior of the near-region solution. For this purpose, we can use the $z\rightarrow 1-z$ transformation law for the hypergeometric function \begin{eqnarray} F(a,b,c,z)&=&\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-a)\Gamma(c-b)} F(a,b,a+b-c+1,1-z)\nonumber\\ &&+(1-z)^{c-a-b} \frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(a)\Gamma(b)}\nonumber\\ &&\times F(c-a,c-b,c-a-b+1,1-z)\;\;. \end{eqnarray} By employing this formula and using the property of the hypergeometric function $F(a,b,c,0)=1$, we can get the large $r$ behavior of the near-region solution as \begin{eqnarray}\label{nearsolutionlarge} R&\sim& A\Gamma(1-2i\varpi)\left[\frac{(r_+^2-r_-^2)^{-l}\Gamma(2l+1)} {\Gamma(l+1)\Gamma(l+1-2i\varpi)}r^{2l}\right. \nonumber\\&&\left. +\frac{(r_+^2-r_-^2)^{l+1}\Gamma(-2l-1)} {\Gamma(-l)\Gamma(-l-2i\varpi)}r^{-2l-2}\right]\;, \end{eqnarray} where the variable $x$ has been restored to $r$ for later convenience. This solution should be matched with the small $r$ behavior of the far-region solution obtained in the next subsection. \subsection{Far-region solution} In the far-region, $x-x_+\gg M$, we can neglect the effects induced by the black hole, i.e. we take $a\sim 0$ and $M\sim 0$. The metric functions can then be approximated as $v(x)=f(x)=1$, $h(x)=j^2x$, and $q(x)=2j\sqrt{x}$. One can deduce the far-region radial wave equation \begin{eqnarray} \partial_x^2(xR)+\left[-j^2\omega^2+\frac{\omega(\omega-8mj)}{4x}-\frac{l(l+1)}{x^2}\right] (xR)=0\;. \end{eqnarray} By defining the new variable $\zeta=2j\omega x$, the far-region radial wave equation can be reduced to \begin{eqnarray} \partial_\zeta^2(\zeta R)+\left[-\frac{1}{4}+\frac{\rho}{\zeta} -\frac{l(l+1)}{\zeta^2}\right](\zeta R)=0\;, \end{eqnarray} with the parameter $\rho=(\omega-8mj)/8j$.
This is a standard Whittaker equation $\partial_\zeta^2 W+ [-1/4+\rho/\zeta+(1/4-\mu^2)/\zeta^2]W=0$ with $W=\zeta R$ and $\mu=l+1/2$. The general solution is given by $W=\zeta^{\mu+1/2} e^{-\zeta/2}[\alpha M(\tilde{a},\tilde{b},\zeta)+\beta U(\tilde{a},\tilde{b},\zeta)]$, where $M$ and $U$ are Kummer's confluent hypergeometric functions with $\tilde{a}=1/2+\mu-\rho$ and $\tilde{b}=1+2\mu$. So the far-region solution of the radial wave equation is given by \begin{eqnarray} R=\zeta^l e^{-\zeta/2}\left[\alpha M(l+1-\rho,2l+2,\zeta) +\beta U(l+1-\rho,2l+2,\zeta)\right]\;. \end{eqnarray} Now we want to impose the boundary condition at asymptotic infinity. We are interested in the superradiance region with $0<\omega<m\Omega_H$, so we have $\zeta=2j\omega r^2\rightarrow +\infty$ when $r\rightarrow+\infty$. When $\zeta\rightarrow+\infty$, by using the asymptotic behavior of the Kummer functions $M(\tilde{a},\tilde{b},\zeta) \sim\zeta^{\tilde{a}-\tilde{b}}e^{\zeta}\Gamma(\tilde{b})/\Gamma(\tilde{a})$ and $U(\tilde{a}, \tilde{b},\zeta)\sim \zeta^{-\tilde{a}}$, one can get the large $r$ behavior of the far-region solution as \begin{eqnarray} R\sim \alpha\frac{\Gamma(2l+2)}{\Gamma(l+1-\rho)}(2j\omega r^2)^{-1-\rho}e^{j\omega r^2} +\beta(2j\omega r^2)^{\rho-1}e^{-j\omega r^2}\;. \end{eqnarray} Obviously the first term is divergent at asymptotic infinity. To match the Dirichlet boundary condition at infinity, we have to set $\alpha=0$. Thus the far-region solution with the Dirichlet boundary condition at asymptotic infinity is given by \begin{eqnarray} R=\beta(2j\omega)^l r^{2l} e^{-j\omega r^2} U(l+1-\rho,2l+2,2j\omega r^2)\;. \end{eqnarray} This solution is just the solution of the scalar field wave equation in the background of the pure five-dimensional G\"{o}del spacetime \cite{konoplyaads,Hiscock,tachyon}. We assume for a moment that we have no black hole, and calculate the real frequencies that can propagate in the pure five-dimensional G\"{o}del spacetime.
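The two asymptotic forms just quoted can be verified numerically. The following is a small sketch (my addition; it assumes Python with the mpmath library, and the parameter values are arbitrary test values rather than anything from the text):

```python
# Numerical check of the large-argument asymptotics quoted above for the
# confluent hypergeometric (Kummer) functions:
#   M(a, b, z) ~ Gamma(b)/Gamma(a) * exp(z) * z^(a-b),   U(a, b, z) ~ z^(-a).
# The values of a, b, z below are arbitrary test values, not from the paper.
from mpmath import mp, hyp1f1, hyperu, gamma, exp

mp.dps = 25
a, b, z = mp.mpf('1.3'), mp.mpf('2.5'), mp.mpf('60')

m_ratio = hyp1f1(a, b, z) / (gamma(b) / gamma(a) * exp(z) * z**(a - b))
u_ratio = hyperu(a, b, z) * z**a

print(m_ratio, u_ratio)  # both close to 1, up to O(1/z) corrections
```

Both ratios approach 1 as $z$ grows, with the expected $O(1/z)$ corrections.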
In this setup, the spacetime geometry is horizon-free, and the solution of the scalar field perturbation in the background of the pure five-dimensional G\"{o}del spacetime should be regular at the origin $r=0$. When $\zeta\rightarrow 0$, using the small-argument behavior of the Kummer function $U(\tilde{a},\tilde{b},\zeta)\sim\zeta^{1-\tilde{b}}\Gamma(\tilde{b}-1)/\Gamma(\tilde{a})$, one can get the small $r$ behavior of the far-region solution as \begin{eqnarray} R\sim \beta(2j\omega)^{-l-1}\frac{\Gamma(2l+1)}{\Gamma(l+1-\rho)}r^{-2l-2}\;. \end{eqnarray} So, when $r\rightarrow 0$, $r^{-2l-2}\rightarrow\infty$, and the solution diverges. To have a regular solution at the origin $r=0$, we must demand that $\Gamma(l+1-\rho) \rightarrow\infty$. This occurs when the argument of the gamma function is a non-positive integer. Therefore, we have the condition \begin{eqnarray} l+1-\rho=-N,\;\;\;N=0,1,2,\cdots\;. \end{eqnarray} So the requirement of regularity of the wave solution at the origin selects the frequencies of the scalar field that might propagate in the pure five-dimensional G\"{o}del spacetime \begin{eqnarray}\label{normalmode} \omega_N=8j(N+l+m+1)\;. \end{eqnarray} Now let us come back to the Kerr-G\"{o}del black hole case. In the spirit of \cite{cardoso2004ads}, we expect that there will be a small imaginary part $\delta$ in the allowed frequencies induced by the black hole event horizon \begin{eqnarray} \omega=\omega_N+i\delta\;. \end{eqnarray} From $\Psi\sim e^{-i\omega t} $, one can see that the small imaginary part $\delta$ describes a slowly growing instability of the modes when $\delta>0$. Our task is to prove that $\delta$ is positive in the regime of superradiance. Inserting this expression for the frequency $\omega$, one can get the far-region solution of the radial wave equation as \begin{eqnarray}\label{farsolution} R=\beta(2j\omega)^l r^{2l} e^{-j\omega r^2} U(-N-i\delta/8j,2l+2,2j\omega r^2)\;.
\end{eqnarray} In order to match the far-region solution with the near-region solution, we need to find the small $r$ behavior of the far-region solution. It is known that the Kummer function $U(\tilde{a},\tilde{b},\zeta)$ can be expressed in terms of the Kummer function $M(\tilde{a},\tilde{b},\zeta)$. By inserting this relation into the far-region solution (\ref{farsolution}), we can show that the far-region solution can be rewritten as \begin{eqnarray} R=\beta(2j\omega)^l r^{2l} e^{-j\omega r^2} \frac{\pi}{\sin\pi(2l+2)}\left[ \frac{M(-N-i\delta/8j,2l+2,2j\omega r^2)} {\Gamma(-N-2l-1-i\delta/8j)\Gamma(2l+2)} \right.\nonumber\\ \left.-(2j\omega)^{-2l-1}r^{-4l-2} \frac{M(-N-2l-1-i\delta/8j,-2l,2j\omega r^2)} {\Gamma(-N-i\delta/8j)\Gamma(-2l)} \right] \;. \end{eqnarray} Applying the functional relations of the gamma function \begin{eqnarray} \Gamma(n+1)&=&n!\;,\nonumber\\ \Gamma(z)\Gamma(1-z)&=&\frac{\pi}{\sin\pi z}\;, \end{eqnarray} it is easy to show that \begin{eqnarray} \frac{1}{\Gamma(-2l)}=-\frac{\sin\pi(2l+2)}{\pi}(2l)!\;, \end{eqnarray} and, for small $\delta$, \begin{eqnarray} \frac{1}{\Gamma(-N-2l-1-i\delta/8j)}&=&(-1)^N \frac{\sin\pi(2l+2)}{\pi} (N+2l+1)!\;,\nonumber\\ \frac{1}{\Gamma(-N-i\delta/8j)}&=&(-1)^{N+1}N!i\delta/8j\;. \end{eqnarray} Then, by using the property of the Kummer function $M(\tilde{a},\tilde{b},0)=1$, one can get the small $r$ behavior of the far-region solution as \begin{eqnarray}\label{farsolutionsmall} R=\beta(-1)^N(2j\omega_N)^l \left[ \frac{(N+2l+1)!}{(2l+1)!}r^{2l} -i\delta \frac{(2l)!N!}{2^{2l+4}j^{2l+2}\omega_N^{2l+1}}r^{-2l-2} \right] \;. \end{eqnarray} \subsection{Matching condition: the unstable modes} By comparing the large $r$ behavior of the near-region solution with the small $r$ behavior of the far-region solution, one can conclude that there exists the overlapping region $M\ll x-x_+\ll1/\omega$ where the two solutions should match.
In this region, the matching of the near-region solution in the large $r$ region (\ref{nearsolutionlarge}) and the far-region solution in the small $r$ region (\ref{farsolutionsmall}) yields the allowed values of the small imaginary part $\delta$ in the frequency $\omega$ \begin{eqnarray} \delta\cong -\sigma(\omega_N-m\Omega_H) r_+(r_+^4+2Ma^2)^{1/2}(r_+-r_-)^{2l}j^{2l+2}\;, \end{eqnarray} where \begin{eqnarray} \sigma&=&2^{2l+4}\omega_N^{2l+1} \frac{(l!)^2(2l+1+N)!}{((2l)!(2l+1)!)^2N!} \left[\prod_{k=1}^{l}(k^2+4\varpi^2)\right]\;, \end{eqnarray} with $\varpi=(\omega_N-m\Omega_H)r_+(r_+^4+2Ma^2)^{1/2}/2(r_+^2-r_-^2)$. So, we have \begin{eqnarray} \delta\propto -(Re[\omega]-m\Omega_H)\;. \end{eqnarray} It is easy to see that, in the superradiance regime, $Re[\omega]-m\Omega_H<0$, the imaginary part of the complex frequency satisfies $\delta>0$. The scalar field has the time dependence $e^{-i\omega t}=e^{-i\omega_N t}e^{\delta t}$, which implies the exponential amplification of the superradiance modes. This will lead to the instability of these modes. From the normal modes in the pure five-dimensional G\"{o}del spacetime (\ref{normalmode}), we can see that $Re[\omega]\sim j$. We have assumed that $\omega M\ll 1$. So we have $jM\ll 1$, which is consistent with the small G\"{o}del black hole assumption $jr_+\ll 1$ because $M\sim r_+^2$. That is to say, the two assumptions we have made in this section are consistent with each other. Let us make a qualitative comparison of our analytical results with the numerical ones in \cite{knopolya}. From Eq.(46), we can see that the growth rate $\delta$ of the superradiant modes is proportional to $j^{2l+2}$. This implies that a larger $j$ corresponds to a higher superradiant instability growth rate, which supports the numerical conclusion in \cite{knopolya}. Next, the superradiant condition $Re[\omega]-m\Omega_H<0$ places a further limit on the parameter space where superradiance can occur.
From Eq.(38), we can see that the superradiance condition for the mode with $l=m=1$ and $N=0$ becomes $24j<\Omega_H$. Because we are working in the parameter space where $jr_+^2\ll 1$, we can further take the limit $jM\ll 1$ in the expression for the event horizon Eq.(4) and the angular momentum Eq.(8). So the limit on the parameter $j$ can be approximated by the following expression \begin{eqnarray} j\sqrt{M}<\frac{1-\sqrt{1-2(a/\sqrt{M})^2}}{24(a/\sqrt{M})}\;, \end{eqnarray} where the parameter $a/\sqrt{M}$ takes values in the region $(0,0.71)$; otherwise there will be no black hole in the spacetime. By substituting $a/\sqrt{M}=0.71$ into the inequality, one can see that for $j\sqrt{M}\sim 0.059$ there is no superradiance. The corresponding numerical value in \cite{knopolya} is $j\sqrt{M}\sim 0.075$. The superradiance region in the parameter space obtained with the analytical method thus roughly coincides with the numerical results in \cite{knopolya}. Comparing our result with the numerical one in \cite{knopolya}, it can be seen that our results are approximate and reproduce the numerical conclusions only in part; the analytical method cannot resolve the superradiant instability as precisely as the numerical method. Finally, we conclude that the five-dimensional small Kerr-G\"{o}del black hole is unstable against the massless scalar field perturbation. This instability is caused by the superradiance of the scalar field. \section{Conclusion} This paper is devoted to an analytical study of the superradiant instability of the five-dimensional small Kerr-G\"{o}del black hole. This instability was previously found numerically by R. A. Konoplya and A. Zhidenko \cite{knopolya}. Generally, superradiant instability naturally arises when two conditions are satisfied: (1) the black hole rotates; (2) there is a natural reflecting mirror outside the black hole.
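The quantitative statements of this section can be cross-checked with a short script (my addition; it assumes Python with sympy, and is only an illustrative consistency check). It re-derives the normal-mode spectrum from the regularity condition, specializes to the $l=m=1$, $N=0$ mode to recover the $24j$ appearing in the superradiance condition, and evaluates the bound on $j\sqrt{M}$ at the extremal ratio $a/\sqrt{M}=1/\sqrt{2}\approx 0.71$:

```python
# Consistency checks for this section (illustrative, not from the paper):
# (i)   the regularity condition l + 1 - rho = -N, rho = (omega - 8*m*j)/(8*j),
#       gives omega_N = 8*j*(N + l + m + 1);
# (ii)  for l = m = 1, N = 0 this is 24*j, as in the superradiance condition;
# (iii) the bound on j*sqrt(M) at a/sqrt(M) = 1/sqrt(2) is sqrt(2)/24 ~ 0.059.
from math import sqrt
import sympy as sp

omega, j, m, l, N = sp.symbols('omega j m l N', positive=True)
rho = (omega - 8*m*j) / (8*j)
omega_N = sp.solve(sp.Eq(l + 1 - rho, -N), omega)[0]
print(sp.simplify(omega_N - 8*j*(N + l + m + 1)))   # 0
print(sp.simplify(omega_N.subs({l: 1, m: 1, N: 0})))  # 24*j

def j_bound(x):
    # right-hand side of the inequality on j*sqrt(M); x = a/sqrt(M)
    return (1 - sqrt(1 - 2*x*x)) / (24*x)

print(j_bound(1/sqrt(2)))  # ~0.0589
```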
In the present case, the Dirichlet boundary condition at infinity for the asymptotically G\"{o}del black hole, which is obtained in section 3 by analogy with the AdS background, plays the role of the reflecting mirror. We have adopted the analytical method used in \cite{cardoso2004ads} to study the superradiant instability of small Kerr-AdS black holes. We assume that the energy of the scalar field perturbation is low and that the scale of the G\"{o}del black hole is small. Then we divide the space outside the event horizon into the near-region and the far-region and employ the matched asymptotic expansion method to solve the scalar field wave equation. The analysis of the complex quasinormal modes explicitly shows that the imaginary parts are positive in the regime of superradiance, which implies the growing instability of these modes. That is to say, the five-dimensional small Kerr-G\"{o}del black hole is unstable against scalar field perturbations. As is well known, gravitational perturbations will also undergo a superradiant amplification when scattered by the event horizon. Unlike the simply rotating Kerr-AdS black hole, where the superradiant instability of a massless scalar field implies the gravitational instability \cite{kodama}, we cannot obtain a similar conclusion for the asymptotically G\"{o}del black hole directly. So it will be interesting to study the gravitational (in)stability of the five-dimensional Kerr-G\"{o}del black hole. Because of the complexity of the metric and the field equations, it will be a challenging project for future work. \section*{Acknowledgement} The author would like to thank Ming-Fan Li for reading the manuscript and useful comments. This work was supported by NSFC, China (Grant No. 11147145 and No. 11205048).
\section*{Abstract} {An inequality on torsional rigidity is established. For tangential polygons this inequality is stronger than an inequality of Polya and Szego for convex domains. (A survey of related work, not in the journal submission, is presented.) } \section{Introduction}\label{sect:Intro} In the most general situation $\Omega$ is a simply-connected plane domain. However, as we wish to compare our inequalities with earlier results proved for convex domains, notably the Polya-Szego inequality (involving equation~(\ref{eqt:Bdef})), we have in mind convex domains. Although Theorem~\ref{thm:thm1} concerns more general domains, our main result derived from it, Theorem~\ref{thm:thm2}, concerns tangential polygons. We will denote the area of $\Omega$, $|\Omega|$, by $A$, and its perimeter $|\partial\Omega|$ by $L$. \subsection{The torsion pde, Problem (P(0))}\label{subsect:P0} The elastic torsion problem is to find a function $u_0$ satisfying $$ - \frac{\partial^2 u_0}{\partial x^2}- \frac{\partial^2 u_0}{\partial y^2} = 1\, {\rm in}\,\Omega ,\,\qquad u_0 = 0 \, {\rm on}\,\partial\Omega \, . \eqno({\rm P}(0)) $$ Define the {\it torsional rigidity} of $\Omega$ as \begin{equation} Q_0 := \int_\Omega u_0 . \label{eqt:Qdef} \end{equation} (This differs by a factor of 4 from definitions elsewhere, e.g.\ in~\cite{PoS51}.) There are many identities and inequalities concerning $Q_0$. For example, one identity is \begin{equation} Q_0 = \frac{1}{4}\, \int_{\partial\Omega} (x\cdot{n}) \left(\frac{\partial u_0}{\partial n}\right)^2 . \label{eqt:bndryQ} \end{equation} Define (for convex $\Omega$), in the notation of~\cite{PoS51}, \begin{equation} B_\Omega = \int_{\partial\Omega} \frac{1}{x\cdot{n}} .
\label{eqt:Bdef} \end{equation} An inequality, which we will call the Polya-Szego inequality, is \begin{equation} Q_0 \ge \frac{A^2}{4\ B_\Omega} ( = {\rm{ \ for\ tangential\ polygons\ }} \frac{A^3}{2L^2} ) . \label{in:PS} \end{equation} Define $$ J(v) = \int_\Omega \left( 2v - |\nabla v|^2\right) .$$ The solution $u_0$ maximizes $J$ over functions vanishing on the boundary $\partial\Omega$. (More precisely, over functions $v\in{{\mathring W}^1_2(\Omega)}$.) A quantity occurring in our Theorem~\ref{thm:thm1} is \begin{equation} Q_1 = \int_{\partial\Omega} \left(\frac{\partial u_0}{\partial n}\right)^2 . \label{eqt:Q1def} \end{equation} For tangential polygons, from equation~(\ref{eqt:bndryQ}) we note that there is a very simple equation relating $Q_1$ and $Q_0$: this is needed for our Theorem~\ref{thm:thm2}. \subsection{Problem (P($\infty$))}\label{subsect:Pinf} Problem (P($\infty$)) was defined in the statement of Theorem 2.2 of~\cite{KM93}. Our notation here is as in that paper, where $u_\infty$ solves $$ - \frac{\partial^2 u_\infty}{\partial x^2}- \frac{\partial^2 u_\infty}{\partial y^2} = 1\, {\rm in}\,\Omega ,\, \frac{\partial u_\infty}{\partial n} = -\frac{|\Omega|}{|\partial\Omega|} \, {\rm on}\,\partial\Omega \, {\rm and}\, \int_{\partial\Omega} u_\infty = 0 . \eqno({\rm P}(\infty)) $$ Define, as in~\cite{KM93} equations~(4.9) and (4.11), \begin{equation} \Sigma_\infty = \int_\Omega u_\infty , \qquad {\rm and\ \ } \Sigma_1= - \int_{\partial\Omega} u_\infty^2 , \label{eqt:SigDef} \end{equation} with $u_\infty$ satisfying Problem (P($\infty$)). Once again there is a variational characterisation of the solutions. This time $u_\infty$ is the maximizer of $J(v)$ as one varies over functions $v$ for which the integral of $v$ over the boundary $\partial\Omega$ is zero. The variational approach can be used to establish the inequality \begin{equation} \Sigma_\infty \ge Q_0 . \label{KM934p9} \end{equation} (See also~\cite{KM93} equation~(4.9) for a different approach.)
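As a concrete illustration of the two problems (my addition; a sketch assuming Python with sympy): for the unit disk the function $u_0=(1-r^2)/4$ solves (P(0)) and also satisfies the boundary conditions of (P($\infty$)), so inequality~(\ref{KM934p9}) holds with equality, $\Sigma_\infty=Q_0=\pi/8$:

```python
# For the unit disk, u0 = (1 - r^2)/4 solves both P(0) and P(infinity):
# -Laplacian(u0) = 1, u0 = 0 on r = 1, du0/dn = -1/2 = -|Omega|/|dOmega|,
# and the boundary integral of u0 vanishes.  Hence Sigma_inf = Q0 = pi/8.
import sympy as sp

r = sp.symbols('r', positive=True)
u0 = (1 - r**2) / 4

# radial Laplacian of a radial function in 2D: u'' + u'/r
lap = sp.diff(u0, r, 2) + sp.diff(u0, r) / r
Q0 = sp.integrate(u0 * 2 * sp.pi * r, (r, 0, 1))

print(sp.simplify(lap))           # -1
print(u0.subs(r, 1))              # 0   (Dirichlet condition of P(0))
print(sp.diff(u0, r).subs(r, 1))  # -1/2 (Neumann condition of P(infinity))
print(Q0)                         # pi/8
```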
The solution $u_\infty$ when $\Omega$ is a tangential polygon is given in~\cite{KM93} equations~(2.14) and (2.15). \section{Relations between $u_0$ and $u_\infty$} We already have~(\ref{KM934p9}) relating $\Sigma_\infty$ and $Q_0$. The next result involves $\Sigma_1$ and $Q_1$. (We haven't explored to what extent the boundary smoothness might be relaxed. We need to apply the Divergence Theorem.) \begin{theorem}{\label{thm:thm1} For (convex) domains $\Omega$ with Lipschitz boundary which is also piecewise $C^1$, the following inequality is satisfied: \begin{equation} \frac{A^2}{L}-\frac{(\Sigma_\infty-Q_0)^2}{\Sigma_1}\le Q_1 . \label{eqt:QQ1R1a} \end{equation} } \end{theorem} \par\noindent Recall that $\Sigma_1<0$, so both terms on the left-hand side are positive. \smallskip \par\noindent{\it Proof.} Considering ${\rm div}(u_0\nabla u_\infty)$ the Divergence Theorem gives $$ Q_0 = \int_\Omega \nabla u_0\, \cdot\, \nabla u_\infty .$$ Considering ${\rm div}(u_\infty\nabla u_0)$ the Divergence Theorem gives $$\int_{\partial\Omega} u_\infty \frac{\partial u_0}{\partial n} = - \Sigma_\infty + \int_\Omega \nabla u_0\, \cdot\, \nabla u_\infty . $$ These combine to give $$\Sigma_\infty - Q_0 = \int_{\partial\Omega} u_\infty\left(- \frac{\partial u_0}{\partial n}\right) .$$ We now introduce an arbitrary constant $c$ and subtract $c\,A$ from both sides, giving \begin{equation} (\Sigma_\infty - Q_0) -cA = \int_{\partial\Omega} (u_\infty-c)\, \left(- \frac{\partial u_0}{\partial n}\right) . \label{eqt:ceq} \end{equation} We now use the Cauchy-Schwarz inequality on the right-hand side to give $$\left((\Sigma_\infty - Q_0) -cA\right)^2 \le Q_1 \, \int_{\partial\Omega} (u_\infty-c)^2 , $$ where we have used the definition~(\ref{eqt:Q1def}) of $Q_1$ for one of the integrals on the right-hand side.
Now, on using that the integral of $u_\infty$ around the boundary is zero, we have $$ \int_{\partial\Omega} (u_\infty-c)^2 = -\Sigma_1 + c^2 L .$$ Thus, for all real $c$, \begin{equation} \frac{\left((\Sigma_\infty - Q_0) -cA\right)^2}{-\Sigma_1 + c^2 L} \le Q_1 . \label{in:cin} \end{equation} The function of $c$ on the left is clearly nonnegative and bounded, tending to $A^2/L$ as $c$ tends to both plus and minus infinity. It has two critical points: the one making the function 0 is clearly the minimum. The other is at $c=c_*$ where $$ c_* = \frac{A\Sigma_1}{L (\Sigma_\infty - Q_0)} . $$ Substituting $c_*$ for $c$ in inequality~(\ref{in:cin}) yields the result of the theorem. \hfill$\square$ \medskip \section{Tangential polygons}\label{sect:Tang} \subsection{Geometric results}\label{subsect:tangGeom} A tangential polygon, also known as a circumscribed polygon, is a convex polygon that contains an inscribed circle (also called an incircle), a circle that is tangent to each of the polygon's sides. For any (convex) tangential polygon, the area $A$, perimeter $L$ and inradius $\rho$ are related by $$ A=\frac{1}{2} \rho L . $$ We always choose the origin of our coordinate system to be at the incentre. There is some literature on tangential polygons, e.g.~\cite{AM04,RP01}. Any triangle is a tangential polygon. Quadrilaterals which are tangential include kites and hence rhombi. Concerning $B$, defined in~\cite{PoS51} and here at equation~(\ref{eqt:Bdef}), another simple identity for (convex) tangential polygons is $$ B = \frac{L}{\rho} ,$$ (and $B\ge{2\pi}$ with equality only for the disk, and for any triangle, $B\ge{6\sqrt{3}}$): see~\cite{Ai58}. The quantities $\Sigma_\infty$ and $\Sigma_1$ can be expressed in terms of boundary moments $i_{2k}$ -- moments about the incentre -- defined by $$ i_{2k} = \int_{\partial\Omega} (x^2+y^2)^k . $$ We remark that the Cauchy-Schwarz inequality for the integrals implies that $i_4\ge{i_2^2/L}$.
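As a worked example of these boundary moments (my addition; a Python sketch for the equilateral triangle of area $\sqrt{3}$, which has side 2, $L=6$ and $\rho=1/\sqrt{3}$):

```python
# Boundary moments of the equilateral triangle of area sqrt(3) about its
# incentre.  Each side has half-length h = 1 and lies at distance rho from
# the incentre, so along a side r^2 = rho^2 + t^2 with t in [-h, h].
from math import sqrt, isclose

rho, h, n_sides = 1/sqrt(3), 1.0, 3
L = n_sides * 2 * h
A = rho * L / 2                  # A = rho*L/2 for tangential polygons
B = L / rho                      # B = L/rho; here 6*sqrt(3), the triangle minimum

# closed-form side integrals of (rho^2 + t^2)^k for k = 1, 2
i2 = n_sides * 2 * (rho**2 * h + h**3 / 3)
i4 = n_sides * 2 * (rho**4 * h + 2 * rho**2 * h**3 / 3 + h**5 / 5)

print(A, B, i2, i4)       # = sqrt(3), 6*sqrt(3), 4, 16/5 respectively
print(i4 >= i2**2 / L)    # True: the Cauchy-Schwarz remark
```

The values $i_2=4$ and $i_4=16/5$ computed here are the ones used for the equilateral triangle later in the text.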
\medskip {For methods to calculate $i_{2k}$ in terms of the tangential polygon's inradius and angles, see~\cite{Ke20i}.} \subsection{The new inequality for tangential polygons}\label{subsect:tangNew} Since on the boundary $\partial\Omega$ of a tangential polygon we have $x\cdot{n}=\rho$, equation~(\ref{eqt:bndryQ}) yields, for tangential polygons, \begin{equation} Q_0 = \frac{\rho}{4}\int_{\partial\Omega} \left( \frac{\partial u_0}{\partial n}\right)^2 . \label{eqt:QQ0t} \end{equation} This enables us to rewrite the preceding theorem as follows. \begin{theorem}{\label{thm:thm2} For any tangential polygon $Q_0$ satisfies the inequality, quadratic in $Q_0$, \begin{equation} \frac{A^2}{L}-\frac{(\Sigma_\infty-Q_0)^2}{\Sigma_1}\le \frac{4}{\rho} Q_0 . \label{in:QQ1R1at} \end{equation} } \end{theorem} In our application we treat~inequality~(\ref{in:QQ1R1at}) as a quadratic inequality in $Q_0$ and it is satisfied if $$ Q_{0-} \le Q_0 \le Q_{0+} , $$ where, with $$ \delta = -\frac{\Sigma_1}{L}\left( 2A L^2\Sigma_\infty -L^3\Sigma_1 -A^4 \right) , $$ $$ Q_{0\pm} = \frac{1}{A}\left(-L\Sigma_1+A\Sigma_\infty \pm\sqrt{\delta} \right) . $$ \begin{comment} dA = (L^3*Sig1^2 - 2*A*L^2*Sig1*SigInf+Sig1*A^4)/L Q0m = (-L*Sig1+A*SigInf-Sqrt[dA])/A \end{comment} \subsection{$\Sigma_\infty$ and $\Sigma_1$}\label{subsect:tangSigma} For a tangential polygon \begin{equation} u_\infty = c_0 -\frac{1}{4} r^2\qquad {\rm where \ } r^2 = x^2+ y^2 , \ {\rm and\ \ } c_0 = \frac{i_2}{4 L} . \label{eqt:c0def} \end{equation} $c_0$ is such that the boundary integral is zero. This was noted at equations~(2.14), (2.15) of~\cite{KM93}. We find \begin{eqnarray} \Sigma_\infty &=& \frac{1}{16}\rho i_2 = \frac{A i_2}{8 L}, \label{eqt:tangPSigInf}\\ \Sigma_1 &=& -c_0^2\, L + \frac{1}{2} c_0 i_2 -\frac{1}{16} i_4 =\frac{1}{16}\left( \frac{i_2^2}{L} - i_4 \right) . 
\label{eqt:tangPSig1} \end{eqnarray} We can now readily rewrite our inequality~(\ref{in:QQ1R1at}) in terms of the geometric quantities $i_2$ and $i_4$. Using the expressions~(\ref{eqt:tangPSigInf},\ref{eqt:tangPSig1}) in terms of $i_2$, $i_4$ we find that the inequality of~(\ref{in:QQ1R1at}) is satisfied iff the following inequality, quadratic in $Q_0$, is satisfied \begin{equation} f(Q_0) := 32 AQ_0^2 - \frac{4Q_0}{\rho}\left(2A i_4 -\rho i_2^2 + A i_2 \rho^2\right) + A\rho\, (A i_4 -\frac{3}{8} i_2^2\rho) \le{0} \ . \label{eqt:fDef} \end{equation} Inequalities on the domain functionals $A$, $\rho$, $i_2$ and $i_4$ can be used to establish that both roots of $f(Q_0)=0$ are positive. Denote the smaller root by $Q_{0-}$ and the larger by $Q_{0+}$. The inequalities $Q_{0-}\le Q_0\le Q_{0+}$ improve, for tangential polygons, some well known inequalities such as the Polya-Szego inequality~(\ref{in:PS}), $$ Q_0 \ge \frac{A^2}{4\ B_\Omega} ( = {\rm{ \ for\ tangential\ polygons\ }} \frac{A^3}{2L^2} =\frac{\rho^2 A}{8} ) . $$ We now comment on the quadratic $f$. Consider first a disk of radius 1: $$ A_\odot=\pi,\ \rho_\odot=1, \ i_{2,\odot}=i_{4,\odot}= 2\pi , $$ $$\leqno{{\rm so}}\qquad\qquad\qquad f_\odot(Q_0) = 32\pi Q_0^2 - 4Q_0 (2\pi^2) +\frac{1}{2}\pi^3 = 32\pi(Q_0-\frac{\pi}{8})^2 . $$ This agrees with the known value $Q_{0,\odot}=\pi/8$. Next consider an equilateral triangle, $$ A_\Delta=\sqrt{3},\ \rho_\Delta=\frac{1}{\sqrt{3}}, \ i_{2,\Delta}=4,\ i_{4,\Delta}= \frac{16}{5} , $$ $$\leqno{{\rm so}}\qquad\qquad\qquad f_\Delta(Q_0) = 32\sqrt{3} Q_0^2 - 4Q_0\sqrt{3}(\frac{12\sqrt{3}}{5}) + \frac{6\sqrt{3}}{5} = 32\sqrt{3} (Q_0-\frac{\sqrt{3}}{20})(Q_0 - \frac{\sqrt{3}}{4}) , $$ which is consistent with $Q_{0,\Delta}=\sqrt{3}/20$. Consider next general tangential polygons. Define $Q_B =\rho^2 A/8$, and recall the Polya-Szego inequality $Q_0\ge Q_B$. We have $$ f(Q_B) = \frac{\rho^2 A}{2} \left(\rho A-\frac{1}{2} i_2\right)^2 .
$$ Thus the inequality $Q_0\ge Q_{0-}$ improves on $Q_0\ge Q_B$. Consider next the upper bound $Q_0\le{I_O/4}$ where $I_O$ is the polar moment of inertia about the incentre $O$, which can also be written $Q_0\le\Sigma_\infty$. Then $I_O=4\Sigma_\infty=\rho i_2/4$ and $$f(\frac{1}{16}\rho i_2) = -\frac{1}{2}\left(\frac{i_2}{2}-\rho A\right)\, \left(2A i_4-\rho i_2^2\right) .$$ Since both terms in parentheses are positive, one concludes that $\rho i_2/16\le Q_{0+}$, so the bound $Q_0\le Q_{0+}$ is weaker than the earlier bound. Summarizing, we have $$ Q_B=\frac{1}{8}\rho^2 A \le Q_{0-} \le Q_0 \le \frac{1}{16}\rho i_2 \le Q_{0+} .$$ Using the calculations of $\Sigma_\infty$ and $\Sigma_1$ for isosceles triangles with area $\sqrt{3}$ given in Part~II, we show in Figure~\ref{fig:plIsos} how the new inequality compares with earlier inequalities. As another check we note that perhaps the most studied isosceles triangle other than the equilateral is the right isosceles triangle. Let $\alpha$ be the apex angle of the isosceles triangle and $\sigma=\tan(\alpha/4)$, so $0<\sigma<1$. For this triangle $\sigma=\sqrt{2}-1\approx{0.414214}$ and for area $\sqrt{3}$ its torsional rigidity is approximately 0.07827 (see~\cite{PoS51}), which is, as it must be, larger than $Q_{0-}$ which, at this $\sigma$, is 0.076511. However, at just 2.5\% difference, it is too close to the curve to plot usefully. \section{Conclusion, and open problems} There are many questions. We do not know if inequality~(\ref{eqt:QQ1R1a}) is implied by some other known inequality for torsional rigidity. Inequality~(\ref{in:QQ1R1at}) becomes an equality for circular disks and equilateral triangles. It may be that these are the only shapes which achieve this. For isosceles triangles with a given area, as indicated in Figure~\ref{fig:plIsos}, $Q_{0-}$ is maximized (and $Q_{0+}$ minimized) by the equilateral triangle. We believe that this will be true for all triangles.
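The factorizations of $f$ and the summarizing chain above can be reproduced numerically; the following sketch (my addition, assuming Python) is one such check:

```python
# Numerical check of the quadratic f: its roots for the unit disk and the
# equilateral triangle of area sqrt(3), and, for the triangle, the chain
# Q_B <= Q_{0-} <= Q_0 <= rho*i2/16 <= Q_{0+}.
from math import pi, sqrt, isclose

def f(Q, A, rho, i2, i4):
    return (32*A*Q**2
            - (4*Q/rho) * (2*A*i4 - rho*i2**2 + A*i2*rho**2)
            + A*rho * (A*i4 - (3/8) * i2**2 * rho))

def roots(A, rho, i2, i4):
    a2 = 32*A
    a1 = -(4/rho) * (2*A*i4 - rho*i2**2 + A*i2*rho**2)
    a0 = A*rho * (A*i4 - (3/8) * i2**2 * rho)
    disc = max(a1*a1 - 4*a2*a0, 0.0)   # guard: the disk is a double root
    d = sqrt(disc)
    return (-a1 - d) / (2*a2), (-a1 + d) / (2*a2)

# unit disk: f has a double root at the known value Q0 = pi/8
print(roots(pi, 1.0, 2*pi, 2*pi))

# equilateral triangle of area sqrt(3): roots sqrt(3)/20 and sqrt(3)/4
A, rho, i2, i4 = sqrt(3), 1/sqrt(3), 4.0, 16/5
Q0m, Q0p = roots(A, rho, i2, i4)
Q0, Q_B = sqrt(3)/20, rho**2 * A / 8   # here Q0 = Q0m: the equality case

chain = (Q_B <= Q0m + 1e-9 and abs(Q0m - Q0) < 1e-9
         and Q0 <= rho*i2/16 and rho*i2/16 <= Q0p)
print(chain)  # True
```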
What happens with quadrilaterals, and more generally $n$-gons, is not known. It would also be possible to check how well the inequality agrees with computed torsional rigidities for polygons, a few references being~\cite{Ha03,RSNC16,SC65}. Perhaps checking for the regular polygons, where the data is given in Table 1 of Part~II, would be the easiest place to start. To date the work has been on domains for which $u_\infty$ is a quadratic polynomial. (\cite{KW20} treats rectangles.) Outside this class of domains, there are other domains for which all the functionals in~inequality~(\ref{eqt:QQ1R1a}) can be found; for example, the semi-circle has elementary function solutions for both $u_0$ and $u_\infty$ and the various functionals can be found (using Maple to sum the series). \newpage \section*{Structure of the remainder of this document} The preceding part of this document, called Part I from now on, has been accepted, subject to revision, for publication in {\it IMA Journal of Applied Mathematics}.\\ One referee was very positive and concluded:\\ ``This paper too is very well written, clear, and easy to follow. I strongly recommend publication in IMA Journal of Applied Mathematics"\\ The other referee wrote of the paper:\\ ``the work was not put into context and background/other related studies were not discussed .... I think it’d be good to discuss previous work on this topic, so that it is clear what the new contribution of this work is. The derived inequalities will be interesting once a discussion is given (e.g of other inequalities which have appeared in the literature, why are these important etc.). "\\ The same referee also suggested:\\ ``computing torsional rigidities for specific cases and comparing to other studies".\\ This supplement is intended to address this, while leaving the journal paper short and focussed on the new research.
\\ Numerics for the torsional rigidities of regular polygons and of isosceles triangles (in the papers submitted to IMA) are repeated here in Parts I and IIa. Additional numerics for rhombi are given near the end of Part III. \newpage \begin{itemize} \item The earlier part of this document, Part I, is a preprint form of the IMA paper. Some items addressing a referee's concerns with the original form of the paper are in Part Ib. \item Part II concerns geometric matters relating to tangential polygons.\\ Part IIa is adapted from material used in a different context in the paper~\cite{Ke20i}.\\ Part IIb contains geometric items not in the IMA paper(s).\\ The first topics are related to $n$-gons, including `isoperimetric inequalities': when, with the number of vertices $n$ fixed and some parameter (e.g. area) fixed, regular polygons optimize some other parameter (e.g. minimize the perimeter).\\ A second topic is inequalities, sometimes not involving $n$ or at least allowing $n$ to range over positive integers, for `circumgons' and `circum-$n$-gons', i.e. shapes in which part of the boundary is an arc of the circle with radius the inradius. One of these is the `single-cap', the convex hull of the disk and a single point outside it: a circum-1-gon. Another is the `symmetric double-cap': a circum-2-gon. We will see these in connection with Blaschke-Santalo diagrams. \item In Part III we return to considerations of torsional rigidity. The emphasis is on bounds for convex domains, in particular convex polygons, especially tangential polygons. Some sections are devoted to triangles, especially isosceles, and tangential quadrilaterals, especially rhombi. \item The remaining parts are only slightly connected to the torsion problem.\\ Part IV concerns replacing the Dirichlet b.c. with a Robin b.c..\\ Part V notes some other pde problems where tangential polygons are mentioned. \end{itemize} The treatment is very uneven. I have not checked the more advanced recent pde papers.
Some of the geometry papers cited in Part II are very elementary. The suggestion (by Buttazzo) that I look at Blaschke-Santalo diagrams has led to items at present poorly integrated with the study of my bound $Q_{0-}$ (with just Part III~\S\ref{sec:BSQ} indicating one direction). I hope a later version of this supplement will correct some of these defects. \newpage \begin{center} {\large{{\textsc{ Part Ib }}}} \end{center} \section*{Numerics for the isosceles triangle} Numerics for the isosceles triangle were described earlier, but here is some amplification. \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{plIsos.jpg}} \caption{For an isosceles triangle with area $\sqrt{3}$. $\sigma$ is tan of 1/4 of the apex angle. Blue is $Q_B$, red is the new lower bound $Q_{0-}$, black is $Q_\Delta$, orange is $\rho i_2/16$, green is $Q_{0+}$. } \label{fig:plIsos} \end{figure} Another lower bound on $Q_0$, as in~\cite{Sol92}, is that, amongst triangles with a given inradius, the equilateral triangle has the smallest $Q_0$. Thus \begin{equation} Q_0\ge Q_{\rm Sol} = \frac{9\sqrt{3}}{20} \rho^4 . \label{eqt:Sol92} \end{equation} For isosceles triangles this lower bound improves on our $Q_{0-}$ when the apex angle is slightly less than $\pi/3$. See Figure~\ref{fig:IIasolQ0m}. \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{IIasolQ0m.pdf}} \caption{A plot, against $\sigma$, of the difference $Q_{0-}-Q_{\rm Sol}$, the latter term defined in equation~(\ref{eqt:Sol92}). } \label{fig:IIasolQ0m} \end{figure} \clearpage \section*{Speculation on equality in inequality~(\ref{in:QQ1R1at}).} Inequality~(\ref{in:QQ1R1at}) becomes an equality for circular disks and equilateral triangles. I don't have a proof, but it might be that one only gets equality for these shapes. There is just one inequality used in the proof of the theorems.
It is a Cauchy-Schwarz Inequality and it will be an equality iff $$ u_\infty-c_* = {\rm const}\, \frac{\partial u_0}{\partial n} , $$ all the way around the boundary. The lhs is a quadratic function of $(x,y)$. Consider next a genuine polygon, a tangential $n$-gon. Without loss of generality consider a side parallel to the $x$-axis as the interval $-1\le{x}\le{1}$, and $y=-h$, with the $n$-gon being in the half-plane $\{ y>-h\}$. The only possibility is $$ \frac{\partial u_0}{\partial y} = -\frac{1}{4} (1-x^2) , \qquad{\rm on \ } y=-h,\ \ -1<x<1 ,$$ as the gradient of $u_0$ is zero at the corners. This has implications for the values of $c_0$ in $u_\infty$ and $c_*$. Further information might be obtained by considering the sides adjoining the $y=-h$ side, with angles $\alpha_-$ at $x=-1$ and $\alpha_+$ at $x=1$. One might get other restrictions on the $c_0$ and $c_*$ values, and on the $\alpha_\pm$. Even if the disk and equilateral triangle are the only bounded domains for which we get equality, it might be difficult to show this. If one considers an infinite wedge as a `tangential polygon' this might provide a counterexample. Consider a wedge, apex at the origin and symmetric about the $x$-axis, $\theta=0$. Suppose that the wedge has angle $\alpha$. Then $$u_{0,{\rm wedge}} = -\frac{r^2}{4}\left( 1- \frac{\cos(2\theta)}{\cos(\alpha)}\right) + {\rm const}\; r^{\pi/\alpha}\cos(\pi\frac{\theta}{\alpha}) , $$ solves the torsion equation and has zero Dirichlet boundary data. Here $$ \frac{\partial u_0}{\partial n} = \frac{1}{r} \frac{\partial u_0}{\partial\theta} . $$ It might be that we can set the constant to 0 to obtain a counterexample. (The solution in a sector is given in~\cite{KCT97}.) \newpage \begin{center} {\large{{\textsc{ Part IIa: $\Sigma_\infty$ and $\Sigma_1$ for tangential polygons }}}} \end{center} \section*{Abstract for Part IIa} Items useful in calculation for tangential polygons in general, and for particular cases, are here extracted from~\cite{Ke20i}.
\section{Outline of Part IIa}\label{sec:OutlineIIa} In this part the functionals $\Sigma_\infty$ and $\Sigma_1$ are calculated for various tangential polygons. The starting point is equations~(2.14) and (2.15) from the 1993 paper~\cite{KM93} -- repeated at appropriate times in this document -- which give $u_\infty=c_0-\frac{1}{4}(x^2+y^2)$ for tangential polygons, with $c_0$ such that the boundary integral of $u_\infty$ is 0. \begin{itemize} \item In \S\ref{sec:Tang} we find that starting from $u_\infty$ the functionals $\Sigma_\infty$ and $\Sigma_1$ can be expressed in terms of various moments of inertia. \item In \S\ref{sec:xtremeShapes} we treat the circular disk and the equilateral triangle. \item In \S\ref{sec:regn} we study regular $n$-gons. \item Any triangle is a tangential polygon.\\ In \S\ref{sec:isos} we find the functionals for any isosceles triangle. \item In~\S\ref{sec:tangQuad} we note papers relevant to work on tangential quadrilaterals. \end{itemize} We will use `tangential polygon' as defined before. \begin{itemize} \item Genuine $n$-sided polygons for which every side is tangent to the incircle (and hence for which the intersection of the boundary with the incircle is $n$ points, the points of tangency) will be called {\it tangential $n$-gons}. \item When a tangential polygon is the convex hull of the incircle and $n$ points outside it we will call it a {\it circum-$n$-gon}. (This terminology differs slightly from that in~\cite{AM04}.) \end{itemize} Tangential $n$-gons are particular cases of circum-$n$-gons. The union over all $n$ from 0 to $\infty$ of circum-$n$-gons gives all tangential polygons (the $n=0$ case being considered as the disk). A circum-1-gon is also called a 1-cap, a circum-2-gon is also called a 2-cap: we will see these in Part IIb. We use established terminology with a circum-4-gon being called a tangential quadrilateral, a circum-6-gon a tangential hexagon, etc.
\section{Tangential polygons}\label{sec:Tang} \subsection{Geometric results}\label{subsec:tangGeom} Here we continue from the basic geometric definitions and results given in Part IIa~\S\ref{sec:OutlineIIa}. \begin{comment} A tangential polygon, also known as a circumscribed polygon, is a convex polygon that contains an inscribed circle (also called an incircle), a circle that is tangent to each of the polygon's sides. For any (convex) tangential polygon, the area $A$, perimeter $L$ and inradius $\rho$ are related by $$ A=\frac{1}{2} \rho L . $$ We always choose the origin of our coordinate system to be at the incentre. Concerning $B$, defined in~\cite{PoS51} and here at equation~(\ref{eq:Bdef}), another simple identity for (convex) tangential polygons is $$ B = \frac{L}{\rho} ,$$ (and $B\ge{2\pi}$ with equality only for the disk, and for any triangle, $B\ge{6\sqrt{3}}$): see~\cite{Ai58}. \end{comment} There are various well-known or elementary observations: \begin{itemize} \item Of the polygons with fixed perimeter and angles, the tangential polygon has the greatest area. \item Given two (convex) tangential polygons with the same incircle, their intersection is also a (convex) tangential polygon with the same incircle. \end{itemize} Papers concerning tangential polygons include~\cite{AM04,RP01} (and many more, particularly concerning tangential $n$-gons for $3\le{n}\le{6}$, will be given at appropriate places later in this document). There is a literature on (convex) tangential polygons, one fact being that, considering the polygons as linkages touching the incircle, if the number of sides is odd the polygon is rigid, but not if the number of sides is even. Entertaining as such facts are, they do not appear to be relevant to our pde exercise. We will need boundary moments $i_{2k}$ -- moments about the incentre -- defined by $$ i_{2k} = \int_{\partial\Omega} (x^2+y^2)^k , $$ and the polar area moments about the incentre $$ I_{2k} = \int_\Omega (x^2+y^2)^k . $$ (Caution.
There are many results concerning moments about the centroid. For example, as in~\cite{PoS51}, the moments about the centroid are minimized over $n$-gons with a given fixed area by the regular $n$-gon. We haven't checked in general if it is the case that the moments about the incentre are minimized over $n$-gons by the regular $n$-gon, but when $n=3$ and one minimizes over isosceles triangles, the equilateral triangle is the minimizer.) We remark that the Cauchy-Schwarz inequality for the integrals implies that $i_4\ge{i_2^2/L}$ (and $I_4\ge{I_2^2/A}$). \medskip \subsubsection{$I_{2k}=\rho\,i_{2k}/(2k+2)$.} A result which we first noticed in two special cases is the following. \smallskip {\par\noindent}{\bf Result.} {\it For any tangential polygon $A=\rho\,L/2$, $I_2=\rho\,i_2/4$ and, more generally, $I_{2k}=\rho\,i_{2k}/(2k+2)$.} {\par\noindent}{\it Proof.} The Divergence Theorem can be used to give a boundary integral which equals $I_2$. Consider polar coordinates, radial coordinate $r$, and the Laplacian of a function of $r$: $$ \Delta u = \frac{1}{r}\frac{\partial}{\partial r} r \frac{\partial u }{\partial r} . $$ Also, for use in the following, for any tangential polygon $x\cdot n=\rho$, here $x$ being the vector to a point on $\partial\Omega$. To avoid using $x$ as a vector, equally $ r\, e_r\cdot n=\rho$, with $e_r$ the unit vector in the $r$-direction. With $u=r^2$, so $\Delta r^2 = 4$, the Divergence Theorem gives $$ 2\rho L = \int_{\partial\Omega} 2 r\, e_r\cdot n = \int_\Omega 4 = 4 A . $$ Similarly, with $u=r^4$, so $\Delta r^4 = 16 r^2$, the Divergence Theorem gives $$ 4\rho\, i_2 = \int_{\partial\Omega}( \nabla r^4)\cdot n =\int_\Omega 16\, r^2 = 16 I_2 . $$ {\par\noindent}{\it Alternative Proof.} Start with a tangential polygon $T(1)$, origin at the incentre, with inradius 1. Define $T(\rho)=\rho\, T(1)$ with second area moment $I_2(\rho)$ and second boundary moment $i_2(\rho)$. Along with $T(\rho)$ consider a similar scaled polygon $T(\rho+\Delta\rho)$ with $\Delta\rho$ small.
Then $$I_2(\rho+\Delta\rho)-I_2(\rho) = \Delta\rho\, i_2(\rho)+O((\Delta\rho)^2) . $$ But, dimensionally, $i_2(\rho)=\rho^3\, i_2(1)$ and $I_2(\rho)=\rho^4\, I_2(1)$. Taking limits in the displayed equation gives the central part of $$ 4\rho^3 I_2(1) = \frac{d I_2(\rho)}{d\rho}= i_2(\rho) = \rho^3 i_2(1). $$ This gives $i_2(1)=4I_2(1)$ and hence, more generally, $I_2(\rho)=\rho\,i_2(\rho)/4$ as asserted. The same methods give $I_{2k}=\rho\,i_{2k}/(2k+2)$, and the case $k=0$ is $A=\rho\,L/2$. \subsubsection{Methods to calculate $i_{2k}$.} Denote with a bar just the contributions from a vertical side, at $x=\rho$, extending from $y=-\eta_-$ to $y=\eta_+$. Then $${\bar i}_{2k} = \int_{-\eta_-}^{\eta_+} (\rho^2+y^2)^k\, dy .$$ Hence \begin{eqnarray*} {\bar i}_0 &=& \eta_+ + \eta_- ,\\ {\bar i}_2 &=& \rho^2 {\bar i}_0 +\frac{1}{3} (\eta_+^3 + \eta_-^3) ,\\ {\bar i}_4 &=& -\rho^4 {\bar i}_0 + 2\, \rho^2 {\bar i}_2 +\frac{1}{5} (\eta_+^5 + \eta_-^5) . \end{eqnarray*} Assuming the tangential polygon has $n$ sides, this gives \begin{eqnarray} { i}_0=L &=& \sum_{k=1}^n (\eta _{k+} + \eta _{k-} ), \label{eq:i0eta}\\ { i}_2 &=& \rho^2 { i}_0 +\frac{1}{3} \sum_{k=1}^n (\eta _{k+}^3 + \eta _{k-}^3) , \label{eq:i2eta}\\ { i}_4 &=& -\rho^4 { i}_0 + 2\, \rho^2 { i}_2 +\frac{1}{5} \sum_{k=1}^n (\eta _{k+}^5 + \eta _{k-}^5) . \label{eq:i4eta} \end{eqnarray} There is the geometrically evident fact that $\eta _{k+}=\eta _{k+1,-}$ as one traverses from one of the polygon's sides to the next. \medskip The easiest case to consider is the regular $n$-gon, side $s_n=2\rho\tau_n$ where $\tau_n=\tan(\pi/n)$.
Then \begin{eqnarray} { i}_0=L &=& n\, s_n = 2n \rho\tau_n , \label{eq:i0reg} \\ { i}_2 &=& \rho^2 { i}_0 +\frac{n}{12} s_n^3 = n\, s_n \left(\rho^2 + \frac{1}{12} s_n^2 \right) =\frac{2}{3} n \rho^3 \tau_n (3 + \tau_n^2), \label{eq:i2reg} \\ { i}_4 &=& -\rho^4 { i}_0 + 2\, \rho^2 { i}_2 +\frac{n}{80} s_n^5 = n\, s_n \left( \rho^4 +\frac{\rho^2 s_n^2}{6} +\frac{s_n^4}{80}\right) , \\ &=& \frac{2}{15} n \rho^5 \tau_n\left( 15 + 10\, \tau_n^2 + 3\, \tau_n^4 \right) . \label{eq:i4reg} \end{eqnarray} These had been calculated independently from first principles, as reported in~\S\ref{subsec:Generaln}, before our observations concerning general tangential polygons. We now consider how $\eta_+$ and $\eta_-$ might be found in terms of the vertex angles of the tangential polygon. Recall that in any tangential polygon the angle bisector at any vertex passes through the incentre. Suppose side $k$ lies between vertices $k$ and $k+1$. Let the angle at vertex $k$ be $\alpha_k$. (If there are $n$ sides the sum over all the $\alpha_k$ is $(n-2)\pi$.) Then, with \begin{equation} T_k = \frac{1}{\tan\frac{\alpha_k}{2}}=\tan(\frac{\pi-\alpha_k}{2}), \label{eq:Tkdef} \end{equation} $$ \eta_{k-}= {\rho} T_k,\qquad \eta_{k+}= \rho T_{k+1}. $$ Summing over $k$, \begin{equation} A= \rho^2 \sum_k T_k, \qquad L= 2\rho \sum_k T_k . \label{eq:i0Tgen} \end{equation} (This checks with our previous $A=\rho L/2$.) Another well-known identity is $$ \frac{L^2}{4A} = \sum_k T_k. $$ Similarly \begin{eqnarray} { i}_2 &=& \rho^2 { i}_0 +\frac{2\rho^3}{3} \sum_{k=1}^n T_k^3 = 2\rho^3\, \sum_{k=1}^n\left(T_k+\frac{ T_k^3}{3}\right), \label{eq:i2Tgen}\\ { i}_4 &=& -\rho^4 { i}_0 + 2\, \rho^2 { i}_2 +\frac{2\rho^5}{5} \sum_{k=1}^n T_k^5 \nonumber\\ &=& 2\rho^5\, \sum_{k=1}^n\left(T_k+\frac{ 2T_k^3}{3}+\frac{ T_k^5}{5}\right).
\label{eq:i4Tgen} \end{eqnarray} \subsection{$\Sigma_\infty$ and $\Sigma_1$}\label{subsec:tangSigma} For a tangential polygon $$ u_\infty = c_0 -\frac{1}{4} r^2\qquad {\rm where \ } r^2 = x^2+ y^2 , $$ and $c_0$ is such that the boundary integral is zero: \begin{equation} c_0 = \frac{i_2}{4 L} , \label{eq:c0def} \end{equation} where $i_{2k}$ are the boundary moments defined previously. This was noted at equations~(2.14) and (2.15) of~\cite{KM93}. Hence \begin{eqnarray} \Sigma_\infty &=& \frac{A\, i_2}{4 L}- \frac{1}{4} I_2 =\frac{1}{16}\rho i_2 = \frac{A i_2}{8 L}, \label{eq:tangPSigInf}\\ \Sigma_1 &=& -c_0^2\, L + \frac{1}{2} c_0 i_2 -\frac{1}{16} i_4 =\frac{1}{16}\left( \frac{i_2^2}{L} - i_4 \right), \label{eq:tangPSig1} \end{eqnarray} and the notation, as before, has $I$ for area moments, and $i$ for boundary moments. \section{Equilateral triangle and disk}\label{sec:xtremeShapes} The results for these domains are well known: see e.g.~\cite{MK94}. For the unit disk $$ A=\pi,\ L_\odot=2\pi,\ Q_{\odot,0}=\Sigma_{\odot,\infty}=\frac{\pi}{8},\ \Sigma_{\odot,1}=0, \ i_{\odot,2}=i_{\odot,4}=2\pi,\ I_{\odot,2}=\frac{\pi}{2}.$$ For an equilateral triangle with side $2a=s_3$, $$A_{\Delta}= a^2\,\sqrt{3},\ L_{\Delta}=6a,\ Q_{\Delta,0} = \frac{\sqrt{3} a^4}{20} = \frac{A_{\Delta}^2}{20\sqrt{3}},\ $$ $$ \Sigma_{\Delta,\infty} = \frac{a^4}{4\sqrt{3}} = \frac{A_{\Delta}^2}{12\sqrt{3}},\ \Sigma_{\Delta,1}= -\frac{A_{\Delta}^{5/2}}{90\, 3^{1/4}} .$$ $$i_{\Delta,2}= \frac{4}{3} \, 3^{1/4}\, A_{\Delta}^{3/2},\ i_{\Delta,4}= \frac{16}{45} \, 3^{3/4}\, A_{\Delta}^{5/2} ,\ I_{\Delta,2}= \frac{\sqrt{3}}{9}\, A_{\Delta}^2.$$ For triangles and disks, both with area $\pi$, the St Venant isoperimetric inequality is consistent with $$0.3927= Q_{\odot,0} =\frac{\pi}{8}>Q_{\Delta,0}=\frac{\pi^2}{20\sqrt{3}}=0.28491\ .$$ The inequality for the $\Sigma_\infty$ is $$0.3927= \Sigma_{\odot,\infty} =\frac{\pi}{8}< \Sigma_{\Delta,\infty} =\frac{\pi^2}{12\sqrt{3}}=0.47485\ .
$$ \section{Regular $n$-gons}\label{sec:regn} \subsection{General $n$}\label{subsec:Generaln} We denote the inradius by $\rho$, the side by $s$, area by $A$, perimeter by $L$ and the angle at the centre subtended by a single side by $\gamma$. For the (regular) $n$-gon, simple geometry gives $\gamma_n=2\pi/n$ so $$ \frac{s_n}{2\rho_n}=\tau_n \qquad{\rm where\ } \tau_n=\tan(\frac{\gamma_n}{2}) =\tan(\frac{\pi}{n}), $$ and $$A_n = \frac{n s_n\rho_n}{2}= \frac{n s_n^2}{4\tau_n} = n\rho_n^2 \tau_n .$$ We will wish to specify the area (to be $\pi$), so we note $$ s_n^2 =\frac{4 A_n \tau_n}{n} \qquad{\rm and\ } L_n = n s_n =2\sqrt{n A_n \tau_n}.$$ The inradius $\rho_n$ occurs in some formulae, so we note $$\rho_n^2 = \frac{A_n}{n \tau_n} . $$ We are not aware of any simple formula for the torsional rigidity $Q_{0}(n)$ for a regular $n$-gon, but there have been many numerical studies (and theoretical studies starting from Schwarz-Christoffel conformal mapping). Some numerical results will be given for particular instances later. Formulae for $i_0=L$, $i_2$ and $i_4$ have been presented earlier at equations~(\ref{eq:i0reg}), (\ref{eq:i2reg}) and~(\ref{eq:i4reg}). In view of their central role and the detailed algebraic manipulations in their derivation we record here an independent derivation. Using polar coordinates, and considering the side with $x=\rho_n=r\cos(\theta)$ for which the polar angle at the centre lies between $\theta=-\pi/n$ and $\theta=\pi/n$, $$ i_2(n) = n \int_{-\pi/n}^{\pi/n} \left(\frac{\rho_n}{\cos(\theta)}\right)^2 \, \frac{dy}{d\theta}\, d\theta ,$$ in which $y=\rho_n\tan(\theta)$. Thus \begin{eqnarray*} i_2(n) &=& n\rho_n^3 \int_{-\pi/n}^{\pi/n} \frac{1}{\cos(\theta)^4}\, d\theta ,\\ &=& \frac{2\,n\rho_n^3 \, \left(1+2\cos(\frac{\pi}{n})^2\right)\, \sin(\frac{\pi}{n})}{3\,\cos(\frac{\pi}{n})^3} , \\ &=& \frac{2}{3} n \rho_n^3 \tau_n (3+\tau_n^2) ,\\ &=& \frac{2}{3} \sqrt{\frac{A^3}{n \tau_n}} (3+\tau_n^2) . 
\end{eqnarray*} Similarly \begin{eqnarray*} i_4(n) &=& n\rho_n^5 \int_{-\pi/n}^{\pi/n} \frac{1}{\cos(\theta)^6}\, d\theta ,\\ &=& \frac{2\,n\rho_n^5 \, \left(3+4\cos(\frac{\pi}{n})^2+8\cos(\frac{\pi}{n})^4\right)\, \sin(\frac{\pi}{n})}{15\,\cos(\frac{\pi}{n})^5} ,\\ &=& \frac{2}{15} n \rho_n^5 \tau_n (15+10\tau_n^2+3\tau_n^4) . \end{eqnarray*} The area moment of inertia $I_2(n)$ is similarly calculated from that of the isosceles triangle with apex at the origin. Before noting the general relation at equation~(\ref{eq:tangPSigInf}) a short calculation -- just for regular polygons -- established that, for a regular polygon, \begin{equation} I_2(n) =\frac{\rho_n}{4}\, i_2(n) . \label{eq:i2I2} \end{equation} Using equations~(\ref{eq:i0reg}), (\ref{eq:i2reg}) and~(\ref{eq:i4reg}), equation~(\ref{eq:tangPSigInf}) becomes \begin{equation} \Sigma_\infty(n) = i_2(n)\, \left( \frac{A}{4 L_n}- \frac{\rho_n}{16} \right) = \frac{ A^2\, (3+\tau_n^2)}{24 n\tau_n} . \label{eq:tangPSigInf2} \end{equation} (From~(\ref{eq:tangPSigInf2}), $\Sigma_\infty>\Sigma_{\odot,\infty}=Q_{\odot,0}=\pi/8$ when $A=\pi$.) Also \begin{equation} \Sigma_1(n) = - \frac{1}{16 L}\left( Li_4 -i_2^2\right) = -\frac{1}{90}\sqrt{\frac{ A^5\, \tau_n^5}{n^3}} . \label{eq:tangPSig12} \end{equation} The original motivation for assembling this particular list of quantities was that they were needed for a bound associated with slip flow down a pipe with cross-section $\Omega$: see~\cite{Ke20i}. Numeric values for the torsional rigidity are available in several references. The entries for $J/A^2$, where $J=4Q_0$, in the following table are taken from~\cite{Ha03}. See also~\cite{RSNC16} and, for $n=3,\ 4$ and $6$, also~\cite{PoS51}. In Table~\ref{tbl:tbl1} we take our polygons to have area $\pi$. In Table~\ref{tbl:tbl2} we take our polygons to have circumradius 1. 
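The closed forms~(\ref{eq:tangPSigInf2}) and~(\ref{eq:tangPSig12}) are easy to spot-check numerically against the tables. A minimal sketch (the helper names are ours, not from any of the cited papers) for the area-$\pi$ normalisation:

```python
from math import pi, tan, sqrt

def sigma_inf(n, A=pi):
    # Sigma_infty for a regular n-gon of area A, eq. (tangPSigInf2)
    t = tan(pi / n)
    return A**2 * (3 + t**2) / (24 * n * t)

def sigma_1(n, A=pi):
    # Sigma_1 for a regular n-gon of area A, eq. (tangPSig12)
    t = tan(pi / n)
    return -sqrt(A**5 * t**5 / n**3) / 90

# n = 4 row of Table 1: Sigma_infty = pi^2/24, -Sigma_1 ~ 0.024296
print(round(sigma_inf(4), 5), round(-sigma_1(4), 6))   # -> 0.41123 0.024296
```

The $n=3$ values, $\Sigma_\infty=\pi^2/(12\sqrt{3})\approx0.47485$ and $-\Sigma_1\approx0.14769$, are reproduced in the same way.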
\begin{table}[h] \begin{center} \begin{tabular}{|| c | c | c | c| c | c ||} \hline $n$& $4Q_0/A^2$& $Q_0$ & $L$ & $\Sigma_\infty$& $-\Sigma_1$ \\ 3& 0.11547& 0.28492& 8.0806& 0.47485& 0.14769 \\ 4& 0.14058& 0.34687& 7.0898& 0.41123& 0.024296 \\ 5& 0.14943& 0.36870& 6.7565& 0.39936& 0.007822 \\ 6& 0.15340& 0.37850& 6.5978& 0.39571& 0.003349 \\ 7& 0.15546& 0.38358& 6.5086& 0.39426& 0.001689\\ 8& 0.15664& 0.38649& 6.4530& 0.39359& 0.0009485 \\ 9& 0.15736& 0.38827& 6.4159& 0.39325& 0.0005754 \\ 10&0.15783& 0.38943& 6.3899& 0.39306& 0.0003699 \\ 11&0.15815& 0.39022& 6.3709& 0.39294& 0.0002489 \\ 12&0.15837& 0.39076& 6.3566& 0.39287& 0.0001738 \\ $\infty$& 0.15915 &0.3927 & 6.2832& 0.3927&0 \\ \hline \end{tabular} \caption{Regular polygons with area $\pi$} \label{tbl:tbl1} \end{center} \end{table} \begin{comment} 3 1.50000 0.11547 4 1.35063 0.14058 5 0.66839 1.27393 0.14943 6 0.65875 1.22608 0.15340 7 0.64977 1.19300 0.15546 8 0.64198 1.16863 0.15664 9 0.63532 1.14986 0.15736 10 0.62962 1.13492 0.15783 11 0.62472 1.12274 0.15815 12 0.62048 1.11261 0.15837 13 0.61677 1.10404 0.15854 14 0.61351 1.09670 0.15866 15 0.61063 1.09033 0.15875 16 0.60805 1.08476 0.15882 17 0.60575 1.07984 0.15887 18 0.60367 1.07546 0.15891 19 0.60179 1.07154 0.15895 20 0.60007 1.06801 0.15898 30 0.58882 1.04557 0.15910 40 0.58293 1.03428 0.15913 50 0.57931 1.02748 0.15914 70 0.57510 1.01967 0.15915 100 0.57188 1.01380 0.15915 infty 0.56419 1.00000 0.15915 \end{comment} \begin{table}[h] \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{|| c | c | c | c| c | c | c | c ||} \hline $n$& $4Q_0/A^2$&$A_n$& $Q_0$ & $L_n$ & $\Sigma_\infty$& $-\Sigma_1$& $\rho_n$ \\ \hline 3& $\sqrt{3}/15$& $3\sqrt{3}/4$ &$9\sqrt{3}/320$& $3\sqrt{3}$ & $3\sqrt{3}/64$& $3\sqrt{3}/320$& $1/2$\\ & 0.11547& 1.2990 & 0.0487 & 5.1962 & 0.0812& 0.0162 & 0.5\\ \hline 4& 0.14058& $2$ & 0.14058& $4\sqrt{2}$ & $1/6$& $\sqrt{2}/180$& $1/\sqrt{2}$\\ & & 2 & & 5.6569 & 0.1667 & 0.007857& 0.707107 \\ \hline 6& 0.15340& 
$3\sqrt{3}/2$ &0.2589 &$6$ & $5\sqrt{3}/32$& $1/480$&$\sqrt{3}/2$ \\ & &2.5981& & 6 & 0.270633& 0.0020833& 0.866025 \\ \hline $n$& & $\frac{n}{2}\sin(\frac{2\pi}{n})$ & &$2n\sin(\pi/n)$ & (\ref{eq:tangPSigInf2}) &(\ref{eq:tangPSig12}) &$\cos(\pi/n)$\\ \hline $\infty$& $1/(2\pi)$ & $\pi$ &$\pi/8$ & $2\pi$& $\pi/8$&0 & 1\\ & 0.180043& & & & & 0& 1\\ \hline \end{tabular} } \caption{Regular polygons with circumradius 1} \label{tbl:tbl2} \end{center} \end{table} \clearpage \subsection{The square, $n=4$}\label{subsec:square} For a square with side $s_4$, $$A_\square= s_4^2, \ L_\square= 4s_4,\ I_{\square,2}=\frac{A_\square^2}{6},\ \rho_4=\frac{1}{2}s_4, \ i_{\square,2}=\frac{4}{3} s_4^3 .$$ Hence, consistent with equation~(\ref{eq:tangPSigInf}), we have $$\Sigma_{\square,\infty} = \frac{s_4^4}{24} =\frac{A_\square^2}{24} .$$ For a square and a disk each of area $\pi$ $$0.3927= \Sigma_{\odot,\infty} =\frac{\pi}{8}< \Sigma_{\square,\infty} =\frac{\pi^2}{24}= \ 0.41123. $$ Calculation for a rectangle reported in~\cite{KW20} gives (with $a=b$) $$ \Sigma_{\square,1} = -\frac{s_4^5}{720} = - \frac{A_\square^{5/2}}{720}. $$ \section{Triangles, especially isosceles}\label{sec:isos} \subsection{Geometric preliminaries and checks}\label{subsec:isosGeom} \subsubsection{$A$, $L$, $\Sigma_\infty$ and $\Sigma_1$ for isosceles triangles} Consider the isosceles triangle whose incentre is at the origin, whose base is length $2a$ and whose apex angle is $\alpha$. Denote by $\rho$ the inradius and by $h$ the height of the triangle. The triangle's vertices are at $(a,-\rho)$, $(0,h-\rho)$ and $(-a,-\rho)$. The case where $h=a\sqrt{3}$, $\rho=h/3$ corresponds to an equilateral triangle. The area of the triangle is denoted by $A$. Changing notation, what was formerly denoted $T_k$ will now be denoted $T_A$, $T_B$ and $T_C$. It is convenient to define $$ \sigma = \tan(\frac{\alpha}{4}) .
$$ We have $$T_A=\tan(\frac{\pi-\alpha}{2})=\frac{1-\sigma^2}{2\sigma},\ T_B=T_C=\tan(\frac{\pi+\alpha}{4})= \frac{1+\sigma}{1-\sigma} . $$ We remark that an alternative parameter to $\sigma$ is $a$, where $2a$ is the length of the base. Then \begin{eqnarray*} \frac{h}{a} &=&\cot(\frac{\alpha}{2}) = \frac{1-\sigma^2}{2\sigma} ,\\ \frac{\rho}{a} &=&\tan(\frac{\pi-\alpha}{4}) = \frac{1-\sigma}{1+\sigma} , \qquad \rho=\frac{2A}{L} , \end{eqnarray*} and, using equation~(\ref{eq:i0Tgen}) or directly, \begin{eqnarray*} A &=& a\,h = a^2\, \frac{1-\sigma^2}{2\sigma} ,\\ L &=& 2\left(a+\sqrt{a^2+h^2}\right) = a\frac{(1+\sigma)^2}{\sigma} ,\\ B &=& \frac{L}{\rho}=\frac{L^2}{2A} =\frac{(1+\sigma)^3}{\sigma(1-\sigma)}. \end{eqnarray*} There are several identities relating these geometric quantities. A relation between the area, perimeter, and base is \begin{equation} 4 L a^3 - L^2 a^2 + 4 A^2 = 0 . \label{eq:AaL} \end{equation} This is consistent with the geometrically obvious $$ L = 2 \left(a+\sqrt{a^2+(\frac{A}{a})^2 } \right) , $$ which can be regarded as an example of using $a$ as a parameter. ($L(a)$ is a positive convex function with, when $A=\sqrt{3}$, a minimum of 6 at $a=1$.) Less immediately useful in this paper is the circumradius $R_V$. The radius of the circumcircle (the circle through the 3 vertices) is: $$ R_V= {\frac {a\,(a^{2}+(\frac{A}{a})^2)}{2A}}. $$ The centre of the circle lies on the symmetry axis of the triangle, this distance below the apex. An isosceles triangle with given area $A$ is uniquely determined if $\sigma$ is given between 0 and 1 or if $a$ is given between 0 and infinity, in the latter case with $$\sigma = -\frac{A}{a^2}+\sqrt{\frac{A^2}{a^4} + 1} . $$ There are various formulae for the inradius $\rho$ including \begin{eqnarray*} \frac{\rho}{a} &=& \frac{ \sqrt{a^2+h^2}- a}{h} ,\\ &=& \frac{L-4a}{2h}.
\end{eqnarray*} {\bf Result.} {\it At fixed area, $B$ and $L$ are positive convex functions of $\sigma$ for $0<\sigma<1$, each having its minimum at $\sigma=2-\sqrt{3}$. Over isosceles triangles with given area, the equilateral triangle minimizes each of $B$ and $L$.\\ At fixed area, $\rho$ is a positive concave function of $\sigma$ for $0<\sigma<1$, having its maximum at $\sigma=2-\sqrt{3}$. Over isosceles triangles with given area, the equilateral triangle maximizes $\rho$.} Integration gives, for the area moment, $$ I_2(A,\sigma) = \frac{A^2}{12}\, \frac{(1-\sigma)^6+12\sigma^2(1-\sigma)^2+16\sigma^3}{\sigma(1-\sigma)(1+\sigma)^3} . $$ {\bf Result.} {\it $I_2(A,\sigma)$ is a convex function of $\sigma$ for $0<\sigma<1$ with its minimum at $\sigma=2-\sqrt{3}$. Over isosceles triangles with given area, the equilateral triangle minimizes the area moment of inertia about the incentre.} There are various identities for $I_2$, e.g. \begin{equation} 4 I_2-\frac{1}{6} A L^2 + \frac{8 A^3}{L a} - \frac{16 A^3}{L^2} +2 A a^2 = 0 . \label{eq:Aai2} \end{equation} Eliminating $L$ gives $$ 36 a^4 A I_2^2 -12 a^2 (12 a^8 +11 a^4 A^2 +A^4) I_2 + A (24 a^{12}+33 A^2 a^8 + 6A^4 a^4 +A^6) = 0 . $$ {\bf ToDo.} The discriminant of the quadratic for $I_2$ is nice, and the 2 solutions for $I_2$ are reasonably simple (and possibly tidier than the expressions in $\sigma$). However surely only one is relevant: which one?\\ Might $i_2=4I_2/\rho$ (with $\rho=a^2(L-4a)/(2A)$) be neater than $I_2$? See if there is an equation like~(\ref{eq:AaL}) involving just $i_2$, $A$ and $a$. Equation~(\ref{eq:AaL}) involves $i_0$, $A$ and $a$. If there is, it might be that $i_2$ considered as a function of $a$ might be tidier than it is as a function of $\sigma$.
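The displayed formula for $I_2(A,\sigma)$ is easy to spot-check numerically. A small sketch (the helper name is ours), verifying that it reproduces the equilateral value $I_{\Delta,2}=\frac{\sqrt{3}}{9}A^2$ at $\sigma=2-\sqrt{3}$ and increases away from that minimum:

```python
from math import sqrt

def I2(A, s):
    # Area moment about the incentre of an isosceles triangle,
    # s = tan(alpha/4), from the displayed formula for I_2(A, sigma).
    num = (1 - s)**6 + 12 * s**2 * (1 - s)**2 + 16 * s**3
    den = s * (1 - s) * (1 + s)**3
    return A**2 / 12 * num / den

s_eq = 2 - sqrt(3)          # equilateral: alpha = pi/3
A = sqrt(3)                 # half-base a = 1
print(I2(A, s_eq))          # -> 0.57735... = sqrt(3)/9 * A^2
print(I2(A, 0.2) > I2(A, s_eq))   # -> True: s_eq is the minimizer
```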
\bigskip For the boundary moment we split the integral into the part over the base and the parts over the two equal sides: $$ i_{2k} = i_{2k}(base) + 2\,i_{2k}(side) .$$ Once again, as in equation~(\ref{eq:i2I2}), we find, at $k=1$, $$ I_2 =\frac{\rho}{4}\, i_2 . $$ {\bf Result.} {\it At fixed area, $i_2$ is a positive convex function of $\sigma$ for $0<\sigma<1$, with its minimum at $\sigma=2-\sqrt{3}$. Over isosceles triangles with given area, the equilateral triangle minimizes $i_2$, the boundary moment about the incentre.} Equation~(\ref{eq:tangPSigInf}) becomes, with $i_2$ calculated either directly or from~(\ref{eq:i2Tgen}), \begin{eqnarray} \Sigma_\infty(A,\sigma) &=& i_2\, \left( \frac{A}{4 L}- \frac{\rho}{16} \right)=\frac{1}{16}\rho i_2 = \frac{1}{4} I_2 , \nonumber\\ &=& \frac{ A^2}{48} \, \frac{((1-\sigma)^6+12\sigma^2(1-\sigma)^2+16\sigma^3)}{\sigma(1-\sigma)(1+\sigma)^3} . \label{eq:isosPSigInf2} \end{eqnarray} Restating the preceding Result for $I_2$:\\ {\bf Result.} {\it $\Sigma_\infty(A,\sigma)$ is a convex function of $\sigma$ for $0<\sigma<1$ with its minimum at $\sigma=2-\sqrt{3}$. Over isosceles triangles with given area, the equilateral triangle minimizes $\Sigma_\infty$.}\\ The formulae for $i_4$ and $\Sigma_1$ are more elaborate. See~(\ref{eq:i4Tgen}). Define $$ p_1(\sigma) =(1-\sigma)^{12}+9 \sigma (1-\sigma)^{10}-40 \sigma^3 (1-\sigma)^6+144 \sigma^5 (1-\sigma)^2+256 \sigma^6 .$$ One can show that $p_1$ is positive on $0\le\sigma\le{1}$.\\ The negative quantity $\Sigma_1$ is found as in~(\ref{eq:tangPSig1}):\\ \begin{eqnarray} \Sigma_1 &=& \frac{1}{16}\left( \frac{i_2^2}{L} - i_4 \right), \nonumber\\ &=& -\frac{1}{360}\,\frac{A^3}{L\,(\sigma (1-\sigma) (1+\sigma))^3}\ p_1(\sigma) . \label{eq:isosS1} \end{eqnarray} Our main test cases are the equilateral triangle, which has $\sigma=2-\sqrt{3}$, and the right isosceles triangle, which has $\sigma=\sqrt{2}-1$.
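As a consistency check on~(\ref{eq:isosS1}), a short sketch (the helper name is ours) evaluating $\Sigma_1$ for the two test cases; the equilateral case agrees with the regular-polygon formula~(\ref{eq:tangPSig12}), which gives $-1/30$ when $A=\sqrt{3}$:

```python
from math import sqrt

def sigma1_isos(A, s):
    # Sigma_1 for an isosceles triangle via eq. (isosS1); s = tan(alpha/4).
    a = sqrt(2 * s * A / (1 - s**2))      # half-base from A = a^2 (1-s^2)/(2s)
    L = a * (1 + s)**2 / s                # perimeter
    p1 = ((1 - s)**12 + 9 * s * (1 - s)**10 - 40 * s**3 * (1 - s)**6
          + 144 * s**5 * (1 - s)**2 + 256 * s**6)
    return -A**3 * p1 / (360 * L * (s * (1 - s) * (1 + s))**3)

print(sigma1_isos(sqrt(3), 2 - sqrt(3)))   # -> -0.0333... = -1/30
print(sigma1_isos(1.0, sqrt(2) - 1))       # right isosceles, area 1
```

The second value matches $-(131-91\sqrt{2})/90\approx-0.0256285$ from the table below.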
(Numerical values for the torsional rigidities of other isosceles triangles are available, for example in~\cite{PSH54}.) It would, of course, be possible to produce tables, as done in \S\ref{subsec:Generaln} in the different context of regular polygons, for a range of vertex angles for the isosceles triangles. The starting point for this would be existing results for $Q_0$, combined with our formulae for $L$, $\Sigma_\infty$~(\ref{eq:isosPSigInf2}) and $\Sigma_1$~(\ref{eq:isosS1}). \newpage \subsection{The right isosceles triangle}\label{subsec:rightIsos} \begin{table}[h] \begin{center} \resizebox{\textwidth}{!}{ \begin{tabular}{|| c | c | c | c| c | c | c ||} \hline $\alpha$& $4Q_0/A^2$&$A_n$& $Q_0$ & $L_n$ & $\Sigma_\infty$& $-\Sigma_1$\\ \hline $\pi/3$& $\sqrt{3}/15$& $3\sqrt{3}/4$ &$9\sqrt{3}/320$& $3\sqrt{3}$ & $3\sqrt{3}/64$& $3\sqrt{3}/320$ \\ $\rho=1/2$ & 0.11547& 1.2990 & 0.0487 & 5.1962 & 0.0812& 0.0162\\ \hline $\pi/2$& 0.10436 & $1$ & 0.02609& $2+2\sqrt{2}$ & $(3-2\sqrt{2})/3$& $(131-91\sqrt{2})/90$\\ $\rho=\sqrt{2}-1$ & & 1 & & 4.8284 & 0.0572 & 0.0256285 \\ \hline \end{tabular} } \caption{Isosceles triangles with circumradius 1} \label{tbl:tblisos} \end{center} \end{table} \section{Tangential quadrilaterals, especially kites and rhombi}\label{sec:tangQuad} A quadrilateral is tangential if and only if the sums of the lengths of each pair of opposite sides are equal. Examples of tangential quadrilaterals are the kites, which include the rhombi, which in turn include the squares. (A bicentric kite is an orthogonal kite: a bicentric rhombus is a square.) Torsional rigidities have been found numerically for rhombi in~\cite{RI54,SC65}. The other quantities occurring in the lower bound $R$ are easily found. For example, the relevant quantities for a rhombus are as follows. Consider a rhombus with inradius $\rho$, area $A$. Let $\alpha$ be the angle at an acute vertex.
The points of tangency of the incircle with the sides of the rhombus divide each side into a smaller part $\eta_-$ and a larger part $\eta_+$. Denote $\tan(\alpha/2)$ by $\tau$. Then $\eta_-=\rho\tau$ and $\eta_+=\rho/\tau$. From equation~(\ref{eq:i0eta}) $$ L=i_0 =4\rho\left( \tau +\frac{1}{\tau}\right) ,\quad A =\frac{1}{2}\rho L = 2\rho^2\left( \tau +\frac{1}{\tau}\right), \quad \rho =\sqrt{\frac{A}{2\left( \tau +\frac{1}{\tau}\right)}} . $$ (At fixed $A$, $L$ is minimized for the square, $\tau=1$.) We have, from~(\ref{eq:i2eta},\ref{eq:i4eta}), \begin{eqnarray*} i_2 &=& \rho^3\left( 4\left( \tau +\frac{1}{\tau}\right)+ \frac{4}{3} \left( \tau^3 +\frac{1}{\tau^3}\right) \right) ,\\ i_4 &=& \rho^5\left( 4\left( \tau +\frac{1}{\tau}\right)+ \frac{8}{3} \left( \tau^3 +\frac{1}{\tau^3}\right)+ \frac{4}{5} \left( \tau^5 +\frac{1}{\tau^5}\right) \right) . \end{eqnarray*} These check, in the case $\tau=1$, with the quantities given in~\S\ref{subsec:square}. Equations~(\ref{eq:tangPSigInf},\ref{eq:tangPSig1}) give $\Sigma_\infty$ and $\Sigma_1$ in terms of $A$, $L$, $i_2$ and $i_4$. The results for general rhombi are used in Part~IIb~\S\ref{subsubsec:BStangQuad} and in Part~III~\S\ref{sec:RhombiQ}. There are many geometric results concerning tangential quadrilaterals. A tangential quadrilateral is bicentric if and only if its inradius (hence area) is greater than that of any other tangential quadrilateral having the same sequence of side lengths. There may be similar results for some other domain functionals. \begin{comment} One of many examples, concerning bicentric quadrilaterals, is\\ {\small https://demonstrations.wolfram.com/PonceletsPorismForQuadrilaterals/ } \end{comment} Amongst all quadrilaterals with a given area, that which\\ minimizes the perimeter $L$,\\ maximizes $Q_0$ (or similarly $\dot r$)\\ is the square: see~\cite{PoS51} p159. The proof in~\cite{PoS51} involves symmetrisation, with kites to rhombi to rectangles then kites, etc.
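The rhombus moments above reduce, at $\tau=1$, to the square values of \S\ref{subsec:square}; a quick numerical confirmation (the helper name is ours), recovering $\Sigma_{\square,\infty}=A^2/24$ and $\Sigma_{\square,1}=-A^{5/2}/720$:

```python
def rhombus_moments(rho, tau):
    # Boundary moments of a rhombus with inradius rho, tau = tan(alpha/2),
    # from eqs. (i0eta), (i2eta), (i4eta) specialised as in the text.
    S1 = tau + 1 / tau
    S3 = tau**3 + 1 / tau**3
    S5 = tau**5 + 1 / tau**5
    L  = 4 * rho * S1
    i2 = rho**3 * (4 * S1 + 4 * S3 / 3)
    i4 = rho**5 * (4 * S1 + 8 * S3 / 3 + 4 * S5 / 5)
    return L, i2, i4

rho = 0.5                                  # tau = 1: square of side 1, area 1
L, i2, i4 = rhombus_moments(rho, 1.0)
A = rho * L / 2
sigma_inf = rho * i2 / 16                  # eq. (tangPSigInf)
sigma_1 = (i2**2 / L - i4) / 16            # eq. (tangPSig1)
print(sigma_inf, A**2 / 24)                # both equal 1/24
print(sigma_1, -A**2.5 / 720)              # both equal -1/720
```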
Perhaps because of curiosity about how the successive symmetrisations perform, there have been numerical studies of the torsion problem for rectangles, kites and rhombi. For rhombi an early example is~\cite{RI54}. \newpage \begin{center} {\large{{\textsc{ Part IIb: More geometry for tangential polygons }}}} \end{center} \section*{Abstract for Part IIb} Further items on tangential polygons, additional to those in Part IIa (which are taken from~\cite{Ke20i}), are collected here. In particular some Blaschke-Santalo diagrams for some geometric functionals are presented. \section{Outline of Part IIb}\label{sec:OutlineIIb} Inasmuch as the main focus of these notes should be the lower bound $Q_{0-}$ of Part I, we remark that Parts IIa and IIb only provide results on the geometric quantities entering the formula for $Q_{0-}$, namely $\rho$, $A$, $L$, $i_2$, $i_4$. We defer further treatment of $\Sigma_1$ and $Q_{0-}$ to Parts III and IV. \medskip There are two main, and different, sorts of geometries. \begin{itemize} \item One, especially prominent in~\S\ref{sec:BlSa}, especially~\S\ref{subsec:cap} and~\S\ref{subsec:rLR}, involves general tangential polygons, convex circumgons. (When we count the extreme points outside the incircle these are, if there are $n$ such points, called circum-$n$-gons.) \item The other concerns genuine $n$-gons. Their boundaries consist solely of straight line segments. The hope is that one can show that regular $n$-gons optimize appropriate domain functionals over subsets of, sometimes all, tangential $n$-gons. The hope is sometimes realized, an easy example (in~\S\ref{subsec:triGenz}) being as follows:\\ {\bf Result.}{\it Let $\Omega_0$ be a tangential $n$-gon with inradius $\rho$, and vertex angles $\alpha_k$. Let $\Omega_1$ be a tangential $n$-gon with the same inradius $\rho$ and the same vertex angles except that the angles $\alpha_i$ and $\alpha_j$ are each replaced by their average $(\alpha_i+\alpha_j)/2$.
Then each of $L$, $A$, $i_2$, $i_4$ and $d_O$ is reduced in the change from $\Omega_0$ to $\Omega_1$, strictly so if $\alpha_i\ne{\alpha_j}$.}\\
The `easy' above refers to the proof. It wasn't immediately obvious to this author before the proof. By way of contrast, the following seems almost self-evident.\\
{\bf Corollary.} {\it Amongst all tangential $n$-gons with a fixed inradius, the regular $n$-gon minimizes each of $L$, $A$, $i_2$, $i_4$ and $d_O$.}\\
We have yet to check the possibility that $Q_{0-}$ behaves, in this respect, the same way as~\cite{Sol92} Theorem 1 gives for $Q_0$. Discussion of this is deferred to Part III.
\end{itemize}
Here is an outline of this part.
\begin{itemize}
\item In \S\ref{sec:Transformations} we consider operations involving tangential polygons.
\item In \S\ref{sec:Construction} we indicate how we coded the construction of tangential polygons in order later to compute domain functionals for them.
\item In \S\ref{sec:Duality} considerations of `duality' direct the study.
\item In \S\ref{sec:propIIb} we collect a somewhat miscellaneous set of inequalities and geometric facts.
\item In \S\ref{sec:IIbTri} we consider triangles; in \S\ref{sec:quad}, tangential quadrilaterals.
\item In \S\ref{sec:Bicentric} we note a few facts concerning bicentric polygons. All triangles are bicentric. All regular polygons are bicentric.
\item In \S\ref{sec:BlSa} we present some information about Blaschke-Santalo diagrams for some geometric functionals.
\end{itemize}
Blaschke-Santalo results can sometimes lead on to proving isoperimetric results. Here is the style of an example with $\cal F$ some domain functional.
\begin{itemize}
\item Amongst triangles with fixed $\rho$ and $L$ that which $<$ optimizes $\cal F$ $>$ is $<$ squat $|$ tall $>$ isosceles.
\item Amongst \ldots isosceles triangles at fixed $A=\rho L/2$ that which $<$ optimizes $\cal F$ $>$ is equilateral.
\end{itemize}
As an example consider $d_O$ (defined and treated extensively in~\S\ref{sec:BlSa}).
At fixed $\rho$ and $A=\rho L/2$, squat isosceles triangles minimize $d_O$. With this preliminary, when considering triangles with given $A$ and minimizing $d_O$, we need only consider isosceles triangles with that area. (For $Q_{0-}$: I have seen in Part I that, amongst isosceles triangles with a given area, that which maximizes $Q_0$ is the equilateral.) See~\S\ref{sec:IIbTri} for more formal treatment, but for now the following very informal discussion might make the second step plausible. Consider now squat isosceles triangles with given incentre and base vertices on the circle of radius $d_O$. It is eminently plausible that increasing the inradius of this family of triangles will increase the area, suggesting that at fixed $d_O$ one gets maximum $A$. Conversely one expects at fixed $A$ to get minimum $d_O$ at the equilateral triangle.
\section{Transformations involving tangential polygons}\label{sec:Transformations}
Changing scale by some factor $t$ changes a tangential polygon with inradius $\rho$ to one with inradius $t\rho$. Mostly we fix the inradius, and always have the origin of coordinates at the centre of the incircle. In this situation, as we have already noted in Part~IIa~\S\ref{subsec:tangGeom}, given two (convex) tangential polygons with the same incircle, their intersection is also a (convex) tangential polygon with the same incircle. (For tangential $n$-gons the number of vertices could increase.)
\subsection{Convex $n$-gon to tangential $n$-gon}
Given a convex $n$-gon, and hence its sequence of angles, all less than $\pi$, then one can define a tangential $n$-gon with the same sequence of angles and the same area. This transformation reduces the perimeter. (See~\cite{AM04}.)
\subsection{Tangential $n$-gons to tangential $m$-gons, $m\ge{n}$}
As a particular case of the intersection of tangential polygons being a tangential polygon we mention ``corner cutting''.
Given a tangential polygon $\Omega$ and a half-plane $H$ containing the incircle of $\Omega$, then $H\cap{\Omega}$ is a tangential polygon. Both $L$ and $A$ are decreased while $\rho=2A/L$ stays constant. The number of vertices increases.
Let $\Omega$ be a tangential polygon. Let $\sigma(l,.)$ be reflection through a line $l$ through the origin. Then $\sigma(l,\Omega)$ is a tangential polygon and so is its intersection with $\Omega$.\\
In the case of tangential $n$-gons, the number of vertices changes. For example, if $\Omega$ is an equilateral triangle and $l$ is parallel to a side, the intersection is a hexagon.
\subsection{Tangential $n$-gons to tangential $m$-gons, $m\le{n}$}
Begin with a tangential polygon with $n\ge{4}$ vertices. Moving a tangency point to an adjacent tangency point will result in an edge disappearing. While the inradius stays the same, the new tangential polygon might not be bounded: an example of this arises when the starting polygon is a square.
\subsection{Permutations of angles of tangential $n$-gons}
If one permutes the entries of the sequence of angles of a tangential polygon, keeping the inradius the same, one has another tangential polygon with the same $\rho$, $A$, $L$, $i_2$, $i_4$, $d_O$. As an example consider reflection about any `diagonal': the incentre moves, but the reflected polygon is tangential.
\subsection{Tangential $2m$-gons to 2-special $2m$-gons}
Another transformation which at least takes tangential quadrilaterals to tangential quadrilaterals is $m$-descendant mapping: see~\cite{Wo81}. This paper also defines an $n$-gon with $n=2m$ even to be {\it 2-special} if the sum of the lengths of the even-indexed sides is equal to that of the odd-indexed sides. Also the {\it 2-descendant map} of an $n$-gon with sides $s_{{\rm in},j}$ is the $n$-gon with sides
$$ s_{{\rm out},j} =\frac{1}{2}(s_{{\rm in},j}+s_{{\rm in},j+1} ). $$
The 2-descendant map of any 2-special $2m$-gon is 2-special.
In particular the 2-descendant map of a tangential quadrilateral is a tangential quadrilateral. Repeated application of 2-descendant maps to an initial tangential $2m$-gon would take one ever closer to a regular $2m$-gon. We remark that the doubly stochastic circulant matrix $\frac{1}{2}M(n)$, defined in~\S\ref{subsec:Circulant}, is here applied to ${\mathbf s}_{\rm in}$. The effect of many successive operations with the map leading to the regular $n$-gon corresponds to the fact that the matrix powers tend to $1/n$ times the matrix $E$ all of whose entries are 1:
$$ \left(\frac{1}{2}M(n)\right)^k \rightarrow \frac{1}{n} E(n)\qquad {\rm as\ \ } k\rightarrow\infty . $$
{\bf ToDo.} Check out the guess that applying a 2-descendant map to a tangential hexagon may not lead to a tangential hexagon. (It is known that being 2-special is necessary but not sufficient for a hexagon to be tangential.)
\medskip
One can also consider $2m$-gons as linkages. Again a tangential $2m$-gon can move as a linkage to another 2-special configuration. For tangential quadrilaterals the linkage remains, when convex, a tangential quadrilateral (but this will not be the case for 6-gons).
\subsection{Tangential $n$-gons to Tangential $n$-gons, preserving $\rho$}
\begin{figure}[ht]
\centerline{\includegraphics[height=10cm,width=14cm]{IIbTilt.pdf}}
\caption{Diagram for `tilting transformation'}
\label{fig:IIbTilt}
\end{figure}
A transformation, called a `tilting transformation', is now defined. In this context refer to Figure~\ref{fig:IIbTilt}. A side is `tilted', its tangent contact point moved, so that it becomes parallel to the line joining the contact points of adjacent sides. All tangent points except one remain fixed. Suppose the initial configuration is as shown in the figure. The red and green tangent lines remain fixed, but suppose the tangent point of the black side is considered movable.
The fixed tangent points either side of the movable one are at
$$ \zeta_- = (-X,h), \qquad \zeta_+ = (X,h) .$$
The coordinates of the movable tangent point might be taken as
$$ \zeta = \rho \left(\frac{1-t^2}{1+t^2},\frac{2t}{1+t^2}\right) \ \ {\rm\ and\ w.l.o.g.\ } \rho=1 .$$
The area $A(t)$ of the quadrilateral obtained from intersecting the tangential $n$-gon with the half-space $\{y>h\}$ can be found, as can the length $L(t)$ of the three line segments in the upper part of its perimeter. Finding the formulae for $A(t)$ and $L(t)$ is not too onerous, with one check being that $2A(t)-L(t)$ is independent of $t$. Both $A(t)$ and $L(t)$ are minimized at $t=1$, which has the topmost (black) tangent line parallel to $y=h$ and the two consecutive angles of the tangential $n$-gon equal.
\medskip
Repeated application of the tilting transformation, for varying sides, will, in the limit, get one to a tangential $n$-gon with all angles equal. If a tangential $n$-gon has all angles equal, it is regular. Thus, amongst all tangential $n$-gons with inradius $\rho$, that which has the smallest area (and perimeter $L=2A/\rho$) is the regular $n$-gon.
\medskip
Similarly, amongst all tangential $n$-gons with inradius $\rho$, that which has the smallest $d_O$ is the regular $n$-gon.
\medskip
The `tilting transformation' is easy to visualize geometrically. Less obvious is that at fixed $\rho$ one can reduce $A$, $L$, $i_2$, $i_4$, $d_O$ by averaging any two angles (not necessarily adjacent angles as above). The reduction of $L$ is a consequence of the convexity of the $\cot$ function on $(0,\pi/2)$, the quantities $T_k$ defined in Part IIa equation~(\ref{eq:Tkdef}), and the representation of $L$ as $2\rho$ times the sum of the $T_k$. The other quantities $i_2$, etc. are treated similarly. More details are given in~\S\ref{subsec:triGenz}.
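The angle-averaging reduction of $L$ can be illustrated numerically. The Python sketch below (names mine) assumes the representation with $T_k=\cot(\alpha_k/2)$ and $L=2\rho\sum_k T_k$; convexity of $\cot$ on $(0,\pi/2)$, applied to the half-angles, gives the strict decrease.

```python
import math

def T(alpha):
    """T_k = cot(alpha/2) for a vertex angle alpha."""
    return 1.0/math.tan(alpha/2)

def L_over_rho(angles):
    """Perimeter over inradius, assuming L = 2*rho*sum(T_k)."""
    return 2*sum(T(a) for a in angles)

# a tangential pentagon: vertex angles summing to (5-2)*pi
angles = [0.5*math.pi, 0.7*math.pi, 0.4*math.pi, 0.8*math.pi, 0.6*math.pi]
assert abs(sum(angles) - 3*math.pi) < 1e-12

# replace two unequal angles (not necessarily adjacent) by their average
i, j = 1, 3
averaged = list(angles)
averaged[i] = averaged[j] = 0.5*(angles[i] + angles[j])

assert L_over_rho(averaged) < L_over_rho(angles)   # strict reduction of L
```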
\subsection{Circum-$n$-gons to circum-$m$-gons, $m\le{n}$} Let $\Omega$ be a tangential polygon with inradius $\rho_0$ and with the maximum distance of the boundary to the origin $d_O$. Then, for any ball $B$ centred at the origin, the convex hull ${\rm conv}(B\cup{\Omega})$ is a tangential polygon.\\ {\bf ToDo.} Prove this and investigate the behaviour of $L/\rho$ as the radius of $B$ increases from $\rho_0$ to $d_O$. \subsection{Minkowski sums} The Minkowski sum of convex polygons is a convex polygon. The Minkowski sum of two line segments is a parallelogram.\\ The Minkowski sum of two triangles is a hexagon (usually not tangential).\\ Any convex polygon is the Minkowski sum of triangles and line segments. A reference for this (which I have yet to check) is page 177 of \\ I. M. Yaglom, V. G. Boltyanskii, (1961) {\it Convex Figures}, New York: Holt, Rinehart and Winston.\\ This leads on to the following. \noindent {\bf Questions.} (i) Is {\it any} convex hexagon the Minkowski sum of 2 triangles?\\ (ii) Is, for $n\ge{6}$, any convex $n$-gon with origin at the centroid the Minkowski sum of a small number (perhaps just 2) of tangential $m$-gons (with $m\le{n/2}$), now (unlike everywhere else in this document) all with centroid at the origin?\\ A positive answer to a question like this might suggest, from properties established for all tangential $m$-gons, corresponding properties for convex $n$-gons. And if this were to be the case, one can imagine establishing some domain-functional property for tangential $n$-gons and then getting something from the concavity of the domain-functional under Minkowski sum. \subsection{Rearrangements???} For the purely geometric functionals $A$, $L$, $i_2$, $i_4$, $Q_{0-}$, $d_O$ studied in this document rearrangements might not be needed. For functionals, like conformal inradius, transfinite diameter, torsional rigidity, fundamental frequency -- functionals involving integrals of gradient squared, etc. 
-- rearrangements are an appropriate tool. \cite{PoS51} used Steiner symmetrization to establish, for triangles and quadrilaterals, that, at given area, the regular $n$-gon optimizes. These are the equilateral triangle and the square respectively. However, Steiner symmetrizing polygons with more vertices typically increases the number of vertices. \cite{SZ04,Sol92,SZ10} manage to use some sort of rearrangement preserving the number of vertices of a convex $n$-gon. Dissymmetrization?\\
{\bf ToDo.} Try to understand this. See also~\cite{Ba06}.
\medskip
Polarizations seem to be building blocks for the rearrangements with which I am more familiar, namely those used in~\cite{PoS51}. Polarizations, alone, are not likely to be a tool for rearranging tangential polygons. The following guesses and questions began with drawing sketches.
\begin{itemize}
\item The polarization of a triangle about an angle bisector just reflects the triangle.
\item Can anything beyond the incircle staying fixed be said about the polarization of a triangle about any line through the incentre? Similar question for any tangential polygon $\Omega$ about any line through the incentre? Non-convex circumgons?
\end{itemize}
\cite{Sol92}\\
(i) presents results of the kind that regular $n$-gons optimize over all $n$-gons with the same area;\\
(ii) uses tangential polygons (see Part III~\S\ref{sec:BoundsQIso}).\\
A process called `dissymmetrization' gets used. This, and polarization, get a mention in~\cite{SZ10}.
\subsection{Spaces of polygons?}
See~\cite{GPT17}.
\section{Construction of tangential polygons}\label{sec:Construction}
With just a few exceptions (on 1-cap, etc.), to date our computations have been for tangential $n$-gons. Our first method provided data for actually drawing up the polygon and required an initial specification of the inradius. After this prescribe $n$ (which in our computations so far is just $n\le{6}$).
Then choose $n$ increasing values of $\theta_k$ in $(-\pi,\pi)$ with the maximum difference of consecutive $\theta_k$ less than $\pi$. This yields $\exp(i\theta_k)$ on the unit circle as tangent points. (The restriction on the separation of the $\theta_k$ is in order that a convex polygon is constructed.) From each pair of consecutive tangent points, find the point of intersection of the tangent lines. These give the vertices of the tangential polygon, and there are standard formulae for perimeter $L$, area $A$, in terms of the coordinates, but it is easier to note that with the tangent lengths one can find the $T_k$ and use these to find $L$, $i_2$, $i_4$, etc.
\medskip
If one doesn't need to draw the polygon and is interested in functionals that stay constant under change of scale, e.g. $L/\rho$, one can use the $T_k$ as defined in Part IIa equation~(\ref{eq:Tkdef}). The tangent lengths are given by $\eta_k=\rho\,T_k$. Simple formulae to find the other functionals $L$, $i_2$, $i_4$ are given in Part IIa~\S\ref{subsec:tangGeom}, and another, $d_O$, in~\S\ref{sec:BlSa}.
We start from the result given at the beginning of~\S\ref{sec:Transformations}:\\
{\bf Existence Theorem.} {\it Given a convex $n$-gon, and hence its sequence of angles, all less than $\pi$, then one can define a tangential $n$-gon whose incentre is at the origin, with the same sequence of angles. }\\
Clearly, once one has one, one can rescale by any factor, preserving the properties.
\medskip
\noindent{\bf Corollary. }{\it Given a set $S$ of $n$ numbers $0<\alpha_k<\pi$ summing to $(n-2)\pi$, then the different tangential $n$-gons arising from the different permutations of $S$ all have the same values for
$$\frac{L}{\rho},\ \ \frac{i_2}{\rho^3},\ \ \frac{i_4}{\rho^5},\ \ \frac{d_O}{\rho}. $$
}
\medskip
In any event, for many calculations later in this part, one can start directly with the tangent lengths, or with the $\alpha_k$ or with the $T_k$: it is not always necessary to calculate the vertex coordinates.
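The Corollary can be checked numerically. The Python sketch below (names mine) uses the tangent-length formulae in the forms $L=2\rho\sum_k T_k$, $i_2=2\rho^3\sum_k(T_k+T_k^3/3)$, $i_4=2\rho^5\sum_k(T_k+2T_k^3/3+T_k^5/5)$, my transcription of the Part IIa formulae, consistent with the rhombus case given earlier.

```python
import math
from itertools import permutations

def functionals(angles, rho=1.0):
    """(L, i2, i4) for a tangential n-gon from its vertex angles.
    Tangent lengths eta_k = rho*T_k with T_k = cot(alpha_k/2); each eta_k
    bounds two boundary segments along which r^2 = rho^2 + t^2."""
    T = [1.0/math.tan(a/2) for a in angles]
    L  = rho    * 2*sum(T)
    i2 = rho**3 * 2*sum(t + t**3/3 for t in T)
    i4 = rho**5 * 2*sum(t + 2*t**3/3 + t**5/5 for t in T)
    return L, i2, i4

# a tangential pentagon: angles sum to (5-2)*pi
angles = [0.5*math.pi, 0.7*math.pi, 0.4*math.pi, 0.8*math.pi, 0.6*math.pi]
ref = functionals(angles)
# every permutation of the angle sequence gives the same L, i2, i4
for p in set(permutations(angles)):
    assert all(abs(v - r) < 1e-9 for v, r in zip(functionals(p), ref))
```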
In some contexts moving between tangential $n$-gons by permuting angles may lose properties. In particular, permuting angles of a bicentric $n$-gon will, in general, result in a tangential $n$-gon which is not bicentric. (The simplest example would be any bicentric quadrilateral with 4 different angles. Permuting the angles must lose the property that the sum of opposite angles is $\pi$.) While $d_O$, the distance from the incentre origin to a vertex, doesn't need the vertex coordinates and is invariant, at fixed $\rho$, under permutations of the angles, this may not be the case for the circumradius $R$.
\section{Duality} \label{sec:Duality}
\begin{comment}
(* PonceletCentinamma.txt Poncelet’s porism: a long story of renewed discoveries, I Andrea Del Centina *)
(* For triangles, given inradius rho and Rv, here denoted R, the radius of the circle through vertices, how are the sides related? *)
perim = s1+s2+s3;
arearho = rho*perim/2;
(* R circumradius of vertices *)
areaR = s1*s2*s3/(4*R);
ph= perim/2;
AsqrHeron = ph*(ph-s1)*(ph − s2)*(ph − s3)
(* sSol= Solve[{arearho==A, areaR=A, AsqrHeron=A^2},{s1,s2,s3}] long and Length[sSol] is 6 *)
Reduce[arearho == A && areaR == A && AsqrHeron == A^2 && L == s1 + s2 + s3, {s1, s2, s3}]
(* gives that the sides are roots of cubic *)
\end{comment}
Denote the inner product for plane vectors with a dot. Define {\it polar-reciprocation} $\cal P$ of a point $x$ by
$$ {\cal P}(x) = \{ z\ |\ z\cdot{x} = 1 \} .$$
$\cal P$ takes a point to a line. (This differs from the most common definition of polar in convex geometry in which one has $z\cdot{x}\le{1}$ so points map to half-planes.) Next continue the definition. Let $D$ be a set in the plane. Define ${\cal P}(D)$ by
$$ {\cal P}(D) = \{ z\ |\ z\cdot{x} = 1 \ \forall x\in{D} \} .$$
$\cal P$ takes the unit circle to itself.
$\cal P$ takes lines (not through the origin) to points: in particular $\cal P$ takes a line tangent to the unit circle to its point of tangency with the unit circle. See\\
\verb$https://en.wikipedia.org/wiki/Pole_and_polar$\\
\verb$https://en.wikipedia.org/wiki/Dual_polygon$
Thus the boundary lines of a tangential polygon map to the vertices of a cyclic polygon and vice-versa.
\bigskip
\noindent {\bf `Vertex-side' duality, adapted from wikipedia}
As an example of the side-angle duality of polygons we compare properties of the cyclic and tangential polygons, especially quadrilaterals.
{\small
\begin{center}
\begin{tabular}{|c|c|}
\hline
Cyclic $n$-gon& Tangential $n$-gon\\
\hline \hline
Circumscribed circle& Inscribed circle\\
\hline
Perpendicular bisectors of the sides are& Angle bisectors are\\
concurrent at the circumcentre& concurrent at the incentre\\
\hline
$n$ even: The sums of the two pairs& $n$ even: The sums of the two pairs\\
of opposite/alternate angles& of opposite/alternate sides\\
are equal& are equal\\
\hline
\end{tabular}
\end{center}
}
For a cyclic $2m$-gon the sum of the alternate angles is $(m-1)\pi$. We remark that for quadrilaterals, $n=4$, $m=2$, the converse is true but it is false for $m\ge{3}$. The same is true for tangential $2m$-gons: for $m\ge{3}$ being `2-special' is necessary but not sufficient for a $2m$-gon to be tangential.
For $n=6$, $m=3$, Brianchon's theorem states that the three main diagonals of a tangential hexagon are concurrent.\\
The polar reciprocal and projective dual of the conics version of Brianchon's theorem give Pascal's theorem.
Duality is evident again when comparing an isosceles trapezoid to a kite.
{\small \begin{center} \begin{tabular}{|c|c|} \hline Isosceles trapezoid& Kite\\ \hline \hline Two pairs of equal adjacent angles& Two pairs of equal adjacent sides\\ \hline One pair of equal opposite sides& One pair of equal opposite angles\\ \hline An axis of symmetry through& An axis of symmetry through\\ one pair of opposite sides& one pair of opposite angles\\ \hline Circumscribed circle& Inscribed circle\\ \hline \end{tabular} \end{center} } Let $P(n)$ be an ordered list of $n$ points $\zeta_k$ on a circle, $P(n+1)=P(n)\cup\{\zeta_{n+1}\}$ with $\zeta_{n+1}$ after $\zeta_{n}$ and before $\zeta_{1}$.\\ Let $TP(P(n))$ be the tangential $n$-gon with the points of $P(n)$ its tangent points.\\ Let $CP(P(n))$ be the cyclic $n$-gon with the points of $P(n)$ its vertices. The first entry in the table below indicates how areas change on introducing the additional point on the circle. \begin{center} \begin{tabular}{|c|c|} \hline $|TP(P(n+1))|\le|TP(P(n))|$& $|CP(P(n+1))|\ge|CP(P(n))|$\\ \hline A tangential $2m$-gon has all sides equal iff& A cyclic $2m$-gon has all angles equal iff \\ the alternate angles are equal.& the two sets of alternate sides are equal.\\ \hline Equilateral tangential for $n=2m$ even& Equiangular cyclic for $n=2m$ even\\ Opposite angles equal if $n/2$ is even& Opposite sides equal if $n/2$ is even\\ \hline \end{tabular} \end{center} See~\cite{deV11}. \subsection{Tangential and cyclic polygons, continued} The set of all convex sets is a lattice under operations of intersection, $\Omega_1\cap\Omega_2$, and convex-hull of union, ${\rm conv}(\Omega_1\cup\Omega_2)$.\\ The set of tangential polygons with incentre at the origin and given inradius $\rho$ is closed under intersection.\\ The set of cyclic polygons with circumcentre at the origin and given circumradius $R_V$ is closed under convex-hull-union. 
(A cyclic polygon is the convex hull of its extreme points, all of which lie on the circle of radius $R_V$.)\\
In the case of $n$-gons the number of vertices can increase.
\medskip
Let the coordinates of the vertices of a convex $n$-gon be $(x_k,y_k)$ with the vertices traversed in order (and vertex 1 identified with vertex $n+1$).\\
The polygon is cyclic with circumcentre O and circumradius 1 if the distance of every vertex from O is 1:
$$x_k^2 + y_k^2=1\qquad \forall k . $$
The polygon is tangential with incentre O and inradius 1 if the distance of every side from O is 1:
$$(x_{k+1}-x_k)^2 + (y_{k+1}-y_k)^2= (x_{k+1} y_k - y_{k+1} x_k)^2 \qquad \forall k . $$
The tangency point on each line, the closest point to O, is
$$ x_t = \frac{y_{k+1}-y_k}{x_k\, y_{k+1}-y_k\, x_{k+1}}, \qquad y_t = \frac{x_{k+1}-x_k}{x_k\, y_{k+1}-y_k\, x_{k+1}} . $$
\begin{comment}
e=(x-x1)/(x2-x1)-(y-y1)/(y2-y1) e1 = Collect[e, {x, y}, Factor] e0 = e1 /. {x -> 0, y -> 0} e2 = Collect[e/e0, {x, y}, Factor] s = Coefficient[e2, y]; c = Coefficient[e2, x] check = Simplify[c^2 + s^2] Simplify[e2 /. {y -> -s, x -> -c}]
\end{comment}
\medskip
In establishing, by Steiner symmetrisation, isoperimetric properties of $n$-gons, for $n=3$ and $n=4$, sequences of polygons which alternate between tangential and cyclic occur. Here is a quote from~\cite{PoS51}:
\begin{quote}
{\bf Of all quadrilaterals with a given $A$, the square has the smallest $L$, $I_c$ (polar moment of inertia about the centroid), $\overline r$, \ldots but the largest $\dot r$ and $Q_0$.}
\ldots\ldots it is sufficient to indicate a sequence of symmetrizations which transform, ultimately, a given quadrilateral into a square. Symmetrizing a given quadrilateral with respect to a perpendicular to one of its diagonals, we change it into a quadrilateral having a diagonal as axis of symmetry. Symmetrizing this new quadrilateral with respect to a perpendicular to its axis of symmetry, we change it into a rhombus.
Symmetrizing the rhombus with respect to a perpendicular to one of its sides, we change it into a rectangle. Symmetrizing the rectangle with respect to a perpendicular to one of its diagonals, we obtain another rhombus. Repeating the last two steps in succession, we obtain an infinite sequence in which rhombi alternate with rectangles.
\end{quote}
Rhombi are tangential polygons (with equal sides): \\
rectangles are cyclic (with equal angles).
\medskip
Steiner symmetrization is not applicable to showing that regular $n$-gons optimize when $n\ge{5}$. If one (initially at least) focuses on geometric quantities like
$$\frac{A}{L^2},\ \frac{i_2}{L^3},\ \frac{i_4}{L^5} , \qquad{\rm and\ }\ \frac{Q_{0-}}{A^2} ,$$
it may be possible to devise other transformations between $n$-gons\\
which alternate between tangential and cyclic,\\
which change functionals like those immediately above monotonically and\\
which converge to the regular $n$-gon.
\medskip
\section{Miscellaneous properties of tangential polygons}\label{sec:propIIb}
\subsection{Circulant matrices and tangential $n$-gons}\label{subsec:Circulant}
This subsection treats questions like the following:\\
Given a set of $n$ positive side lengths $(s_j)$, how can we recognize if there could be a tangential polygon with these side lengths?
We begin with a connection between tangential polygons and circulant matrices presented near the beginning of the wikipedia page on tangential polygons. Let $P$ and $M=I+P$ be the $n\times{n}$ circulant matrices as follows. $P$ is the $n\times{n}$ cyclic permutation:
\begin{equation*}
P =
\begin{pmatrix}
0 & 1 & 0& \cdots & 0 \\
0& 0 & 1 & \cdots & 0 \\
\vdots&\vdots&\vdots& \ddots & \vdots \\
1 & 0 & 0& \cdots & 0
\end{pmatrix}
\end{equation*}
The matrix $\frac{1}{2}M$ is doubly stochastic.
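A minimal construction of $P$ and $M=I+P$ (Python sketch, names mine), checking that $\frac{1}{2}M$ is doubly stochastic and, anticipating a determinant recorded below, that $M(n)$ is invertible exactly when $n$ is odd.

```python
import numpy as np

def P(n):
    """Cyclic permutation matrix with P[i, (i+1) mod n] = 1."""
    return np.roll(np.eye(n, dtype=int), 1, axis=1)

def M(n):
    return np.eye(n, dtype=int) + P(n)

for n in range(3, 9):
    Mn = M(n)
    # (1/2) M(n) is doubly stochastic: every row and column of M(n) sums to 2
    assert (Mn.sum(axis=0) == 2).all() and (Mn.sum(axis=1) == 2).all()
    # det M(n) = 1 - (-1)^n, so M(n) is invertible iff n is odd
    assert round(np.linalg.det(Mn)) == 1 - (-1)**n
```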
The wikipedia page states:
\begin{quote}
There exists a tangential polygon of $n$ sequential sides $s_1,\ldots, s_n$ if and only if the system of equations
\begin{equation}
M{\mathbf \eta} = {\mathbf s} ,\label{eq:Metas}
\end{equation}
has a solution $(\eta_1,\dots, \eta_n)$ in positive reals. If such a solution exists, then $(\eta_1,\dots, \eta_n)$ are the tangent lengths of the polygon (the lengths from the vertices to the points where the incircle is tangent to the sides).
\end{quote}
Once one has the $\eta$ and $\rho$, the angles $\alpha_k$ are determined from
$$\eta_k (=\eta_{k-}) = \rho T_k \qquad{\rm where\ \ } T_k = \frac{1}{\tan(\frac{\alpha_k}{2})} .$$
See Part IIa, equation~(\ref{eq:Tkdef}). That the sum of the $\alpha_k$ is $(n-2)\pi$ leads to one further equation which we record, as an aside, in the next subsubsection.
\subsubsection{Some relations between the $T_k$}
As before, consider $n$-gons and denote the angle at vertex $k$ by $\alpha_k$, with $k$ increasing as one goes around the convex $n$-gon in counterclockwise direction. The sum over all the $\alpha_k$ is $(n-2)\pi$. Fix the inradius $\rho$ as 1. Suppose the points of tangency of the tangential $n$-gon are $\zeta_j=\exp(i\phi_j)$. Again $j$ increases as one goes around the convex $n$-gon in counterclockwise direction. The angle at O formed by the lines $O\zeta_j$ and $O\zeta_{j+1}$ is $\phi_{j+1}-\phi_j$.
\vspace{1cm}
Then, with
\begin{equation}
T_k = \frac{1}{\tan\frac{\alpha_k}{2}}=\tan(\frac{\pi-\alpha_k}{2}),
\label{eqBS:Tkdef}
\end{equation}
$T_k>0$ since the polygon is convex. Since we know the sum of the $\alpha_k/2$:
\begin{equation}
\sum_{k=1}^n \frac{\alpha_k}{2} =\sum_{k=1}^n {\rm arccot}(T_k) = (n-2)\frac{\pi}{2} .
\label{eqBS:arccot}
\end{equation}
Some, but not all, of the information in this can be expressed in equations involving just rational functions of the $T_k$, i.e. without the transcendental arccot function.
We denote the elementary symmetric polynomial of degree $k$ by
$$ {\rm SymmetricPolynomial}(k, \ldots ) ,$$
and define
$$ e_k = {\rm SymmetricPolynomial}(k,[\frac{1}{T_1},\frac{1}{T_2},\ldots,\frac{1}{T_n}]) .$$
For tangential $n$-gons, we first treat $n$ odd, then $n$ even.\\
When $n$ is odd, $\cos((n-2)\pi/2)=0$, so the cosine of the sum of all the $\alpha_k/2$ is $0$, and we have
\begin{equation}
e_0 -e_2 + e_4 - e_6 \dots = 0 .
\label{eqBS:elPec}
\end{equation}
When $n$ is even, $\sin((n-2)\pi/2)=0$, so the sine of the sum of all the $\alpha_k/2$ is $0$, and we have
\begin{equation}
e_1 -e_3 + e_5 - e_7 \dots = 0 .
\label{eqBS:elPes}
\end{equation}
We will need the formulae for perimeter, i.e. $L=i_0$, for $i_2$ and for $i_4$ as given in Part IIa~\S\ref{subsec:tangGeom}, namely equations~(\ref{eq:i0Tgen}), (\ref{eq:i2Tgen}) and~(\ref{eq:i4Tgen}).
\subsubsection{Examples at $n=3$ or $4$}
For a triangle $ A={\sqrt{\frac{L}{2}(\frac{L}{2}-a)(\frac{L}{2}-b)(\frac{L}{2}-c)}}$ and, since $L=2\sum\eta_k$,
$$\leqno{\rm tang3gon:}\qquad A= \sqrt{(\eta_1+\eta_2+ \eta_3)\eta_1\eta_2\eta_3 } .$$
For a tangential quadrilateral wikipedia gives
$$\leqno{\rm tang4gon:} A= \sqrt{(\eta_1+\eta_2+ \eta_3+\eta_4) (\eta_1\eta_2\eta_3 + \eta_2\eta_3\eta_4 + \eta_3\eta_4\eta_1 +\eta_4\eta_1\eta_2)} . $$
\begin{figure}[ht]
\centerline{\includegraphics[height=11cm,width=9cm]{IIbBrianchonTheoremsvg.pdf}}
\vspace{-4cm}
\caption{From wikipedia. Brianchon's Theorem. Diagonals of a tangential hexagon are concurrent. }
\label{fig:Brianch}
\end{figure}
\subsubsection{Non-negative solution for $\mathbf\eta$ given $\mathbf s$?}
Return now to the linear equations~(\ref{eq:Metas}). Denote the dependence of $M$ on $n$ by writing $M(n)$. There is very different behaviour when $n$ is odd from when $n$ is even as, for example,
$$ {\rm det}(M(n))= 1-(-1)^n .$$
Thus $M(n)$ is invertible when $n$ is odd, but not when $n$ is even.
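A worked example assembling the pieces above (Python sketch, names mine): for the 3--4--5 triangle, solving $M\boldsymbol\eta=\mathbf s$ gives the tangent lengths, the tang3gon formula gives the area, $\rho=2A/L$, and (with $\rho=1$ here, so $T_k=\eta_k$) the $n$ odd identity $e_0-e_2=0$ can be verified.

```python
import numpy as np

n = 3
P = np.roll(np.eye(n), 1, axis=1)
M = np.eye(n) + P                      # row k reads eta_k + eta_{k+1} = s_k

s = np.array([3.0, 4.0, 5.0])          # the 3-4-5 right triangle
eta = np.linalg.solve(M, s)            # tangent lengths
assert np.allclose(eta, [2.0, 1.0, 3.0])

A = np.sqrt(eta.sum()*eta.prod())      # tang3gon area formula
rho = 2*A/s.sum()
assert np.isclose(A, 6.0) and np.isclose(rho, 1.0)

T = eta/rho                            # T_k = cot(alpha_k/2)
e2 = 1/(T[0]*T[1]) + 1/(T[1]*T[2]) + 1/(T[2]*T[0])
assert np.isclose(1.0 - e2, 0.0)       # e_0 - e_2 = 0 for n odd
```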
There being many questions I have, as yet, been unable to answer, and as properties of $M(n)$ might ultimately be useful, I have collected several properties of $M(n)$ here, though uses for some of them have yet to be found.
Let ${\mathbf e}$ be the vector all of whose entries are 1. Then, for all $n$,
$$ M(n)\, {\mathbf e} = 2 \, {\mathbf e} .$$
Geometrically this corresponds to a regular $n$-gon with side 2 and tangent lengths 1.
The eigenvalues of $P$ are the roots of unity, and the eigenvalues of $M$ are obtained by just adding 1 to these. The eigenvectors are, of course, the same.
$${\rm CharacteristicPolynomial}(M(n),\lambda) = -(-1)^n (1 - (\lambda - 1)^n) . $$
The transpose $M^T(n)$ has exactly the same properties as described for $M(n)$ in the preceding paragraph.
If one ignores the nonnegativity requirement, clearly there is, for any rhs $\mathbf s$, a unique `solution' to the linear equations when $n$ is odd. When $n$ is even, there is only a `solution' when the rhs is orthogonal to the nullspace of $M^T(n)$, i.e.\ only when
\begin{equation}
\sum_{k\ {\rm odd}} s_k = \sum_{k\ {\rm even}} s_k ,
\label{eqIIb:evod}
\end{equation}
and, also, `solutions' are not unique.
$M(n)$ is normal: it commutes with its transpose. As $M\,M^T$ is symmetric and Toeplitz, it is centrosymmetric.
A {\it persymmetric matrix} is a square matrix which is symmetric with respect to the northeast-to-southwest diagonal. $M(n)$ is persymmetric and, as such, satisfies
$$ M(n)\, J = J\, M^T(n),\qquad {\rm where\ } J\ {\rm is\ the\ exchange \ matrix} . $$
$J$ is the matrix with 1s on its northeast-to-southwest diagonal and 0s elsewhere.
Define next
$$M_i(n) = (I(n)+\sum_{k=1}^{n-1} (-1)^k P(n)^k )/2.$$
We have
$$ M(n)\, M_i(n) = \frac{1-(-1)^n}{2} I(n) . $$
For $n$ odd $M_i(n)$ is the inverse of $M(n)$.
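The identity $M(n)\,M_i(n)=\frac{1-(-1)^n}{2}I(n)$ is easy to check numerically (Python sketch, names mine): $M_i(n)$ inverts $M(n)$ for odd $n$, while the product vanishes for even $n$.

```python
import numpy as np

def P(n):
    """Cyclic permutation matrix with P[i, (i+1) mod n] = 1."""
    return np.roll(np.eye(n), 1, axis=1)

def M(n):
    return np.eye(n) + P(n)

def M_i(n):
    """M_i(n) = (I(n) + sum_{k=1}^{n-1} (-1)^k P(n)^k)/2."""
    acc, Pk = np.eye(n), np.eye(n)
    for k in range(1, n):
        Pk = Pk @ P(n)
        acc = acc + (-1)**k * Pk
    return acc/2

for n in range(3, 10):
    prod = M(n) @ M_i(n)
    assert np.allclose(prod, (1 - (-1)**n)/2 * np.eye(n))
# for odd n, M_i(n) is the inverse of M(n); for even n, M(n) M_i(n) = 0
```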
Also (via Cayley-Hamilton Theorem)
$$ \sum_{k=1}^{n} (-1)^k {n\choose k} M(n)^k = -(1-(-1)^n)\, I(n) .$$
The obvious next question is `what conditions on the sides ensure there is a {\em nonnegative} solution for $\mathbf \eta$'?
\smallskip
\noindent {\bf Farkas Lemma.} {\it Exactly one of the following two assertions is true:\\
1. There exists an ${\mathbf \eta}\in{\mathbb{R}^n}$ such that $M{\mathbf \eta}={\mathbf s}$ and ${\mathbf {\eta}} \geq 0$.\\
2. There exists a $\mathbf {y} \in \mathbb {R} ^{n}$ such that $\mathbf {M} ^{\mathsf {T}}\mathbf {y} \geq 0$ and $\mathbf {s} ^{\mathsf {T}}\mathbf {y} <0$.}\\
We have yet to devise a good use for this lemma but suspect it will be relevant to conditions for general $n\ge{3}$.
\medskip
We do not attempt here to find the conditions for general $n$; rather we attempt to find necessary and sufficient conditions on $\mathbf s$ for the existence of nonnegative $\mathbf\eta$, separately, for each of $n=3$, 4, 5 and 6.
\medskip
\noindent{\bf Triangles.}
When $n=3$,
$$ M(3)^{-1}= \frac{1}{2} \left( \begin{array}{ccc} 1 & -1 & 1 \\ 1 & 1 & -1 \\ -1 & 1 & 1 \\ \end{array} \right) , $$
so one has nonnegative solutions for $\eta$ iff the nonnegative $s$ are such that the sum of any two is greater than (or equal to) the remaining side's length. This accords with the fact that any triangle is tangential (indeed bicentric).
\medskip
\noindent{\bf Tangential pentagons.}
When $n=5$,
$$ M(5)^{-1}= \frac{1}{2} \left( \begin{array}{ccccc} 1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & 1 & -1 \\ -1 & 1 & 1 & -1 & 1 \\ 1 & -1 & 1 & 1 & -1 \\ -1 & 1 & -1 & 1 & 1 \\ \end{array} \right) . $$
Consider triples of sides in which just one pair of sides is adjacent. There are five such triples. There are nonnegative solutions for $\eta$ iff the nonnegative $\mathbf s$ are such that for any of these five triples the sum of those in the triple is greater than (or equal to) the sum of the remaining two sides' lengths.
\bigskip
Let $n=2m$ for $m\ge{2}$.
Define $$ {\mathbf{nv}} = \left( \ (-1)^k \ \right) ,$$ which spans the nullspace of $M(n)$ (and also of $M^T(n)$). Suppose $\mathbf s$ satisfies equation~(\ref{eqIIb:evod}) so that `solutions', albeit without the nonnegativity condition imposed, exist. These solutions are \begin{equation} {\mathbf \eta}_{\rm gen} = {\rm PseudoInverse}(M(n))\,{\mathbf s} +c\, {\mathbf{nv}} , \label{eq:etaGen} \end{equation} but it remains to investigate the restrictions on $\mathbf s$ and $c$ needed so that amongst the ${\mathbf \eta}_{\rm gen}$ there is at least one with ${\mathbf \eta}\ge{0}$. In this document we will attempt this only for $n=4$ and $n=6$. Before that, however, we consider $n=2m$ in general. By standard properties $$ M(n)\, {\rm PseudoInverse}(M(n))\, M(n) = M(n) .$$ Using the fact that when $n=2m$ the matrix $M(n)$ has the simple block structure $$ M(n) =\begin{pmatrix} U& L\\ L& U \end{pmatrix} , $$ it is easy to find $A$ and $B$ so that $$ {\rm PseudoInverse}(M(n)) =\begin{pmatrix} A& B\\ B& A \end{pmatrix} . $$ The matrix $L$ has just one 1, in its bottom left corner, and $U=M(m)-L$. We have \begin{eqnarray*} A &=&\frac{1}{2} ({\rm PseudoInverse}(U+L)+{\rm PseudoInverse}(U-L) ),\\ B &=&\frac{1}{2} ({\rm PseudoInverse}(U+L)-{\rm PseudoInverse}(U-L) ) . \end{eqnarray*} We also have that $L\,U\,L$ is the zero matrix and that $L\,U^{-1}\,L=-L$. \medskip \noindent{\bf Tangential quadrilaterals.} It is already known that no further restriction beyond $$ s_1+s_3=\frac{L}{2}=s_2+s_4 $$ is needed to ensure that the quadrilateral is tangential. So it remains just an exercise to show that the system of equations $$ M(4){\mathbf \eta} = \begin{pmatrix} s_1\\ s_2\\ \frac{L}{2}-s_1\\ \frac{L}{2}-s_2 \end{pmatrix} $$ has, for all $0<s_1<L/2$ and $0<s_2<L/2$, a positive $\eta$ solution. When $n=4$, $$ {\rm PseudoInverse}(M(4)) = \frac{1}{8} \left( \begin{array}{cccc} 3 & -1 & -1 & 3 \\ 3 & 3 & -1 & -1 \\ -1 & 3 & 3 & -1 \\ -1 & -1 & 3 & 3 \\ \end{array} \right) .
$$ There is no loss of generality in setting $L=2$ and considering $1/2\le{s_1}<1$ and $1/2\le{s_2}<s_1$. If we try the formula~(\ref{eq:etaGen}) we are led to consider the function $$\phi(s_1,s_2,c) ={\rm min}(1 + 2 s_1 - 2 s_2-c, -1 + 2 s_1 + 2 s_2+c, 1 - 2 s_1 + 2 s_2-c, 3 - 2 s_1 - 2 s_2+c), $$ over the triangle in $(s_1,s_2)$-space defined in the preceding sentence. Considering the final entry in the min defining $\phi$, we have $\phi(s,s,0)<0$ for $3/4<s<1$, so we need to choose $c$ (which can depend on $\mathbf s$) appropriately. We find that, over the triangle in $(s_1,s_2)$-space, $\phi(s_1,s_2,2s_1+2s_2-3)=0$, i.e. the last entry is zero while the other three entries are nonnegative. Except for a positive multiple, the first three entries are $$ 1 - s_2, -1 + s_1 + s_2, 1 - s_1 . $$ \noindent{\bf Tangential hexagons.} Unlike the situation with $n=4$, extra conditions on the sides are needed. The corresponding problem for cyclic hexagons is mentioned in~\cite{deV16} (and the same author has other papers involving tangential and cyclic hexagons,~\cite{deV02},~\cite{deV11}). After writing the above, I found that~\cite{BS05} gives the following. \noindent {\bf Theorem.} {\it There will be a tangential hexagon with given side lengths $s_1,s_2,...,s_6$ if and only if the equality $$ s_1 +s_3 +s_5 =s_2 +s_4 +s_6 $$ and the following nine inequalities are satisfied: $$s_1 >0,s_2 >0,...,s_6 >0 , $$ \begin{eqnarray*} s_1 - s_2 + s_3 &>& 0, \\ s_3 - s_4 + s_5 &>& 0, \\ s_5 - s_6 + s_1 &>& 0. \end{eqnarray*} } \noindent In words, the length of any side is less than the sum of the lengths of the adjacent sides. \medskip \cite{BS05} also gives the conditions on $\mathbf s$ for an octagon to be tangential. When $n=6$, $$ {\rm PseudoInverse}(M(6)) = \frac{1}{12} \left( \begin{array}{cccccc} 5 & -3 & 1 & 1 & -3 & 5 \\ 5 & 5 & -3 & 1 & 1 & -3 \\ -3 & 5 & 5 & -3 & 1 & 1 \\ 1 & -3 & 5 & 5 & -3 & 1 \\ 1 & 1 & -3 & 5 & 5 & -3 \\ -3 & 1 & 1 & -3 & 5 & 5 \\ \end{array} \right) .
$$ (On looking at the corresponding outputs for large $n=2m$ one finds that $2n$ times ${\rm PseudoInverse}(M(n))$ has as entries $\pm$ odd integers less than $n$. And, as noted before, there is a block matrix structure too.) \noindent{\bf ToDo.} Check that the conditions on $\mathbf s$ given in~\cite{BS05} are necessary and sufficient to ensure that $\eta$ is nonnegative. \subsection{Geometric isoperimetric inequalities}\label{subsec:GeomIsoper} Let's begin with the first, historic, instance of an isoperimetric inequality in which the regular $n$-gon optimizes over all $n$-gons. The following argument is from geometers in ancient Greece, perhaps around 200 BC. {\bf L-A Isoperimetric Result.} Amongst convex $n$-gons with given perimeter, that which has the largest area is the regular $n$-gon. Start with a convex $n$-gon, $\Omega_0$. Suppose $P_{i-1}$, $P_i$, $P_{i+1}$ are consecutive vertices. (i) Show that among all isoperimetric triangles with the same base the isosceles triangle has maximum area. Thus, changing $\Omega_0$ by moving the point $P_i$ to $P_i'$ so that the triangle $P_{i-1}$, $P_i'$, $P_{i+1}$ is isosceles, the area of the changed polygon is increased. Apply this process to all triples of consecutive vertices. By iteration, one finds that the optimal polygon must be equilateral: call it $\Omega_1$.\\ (ii) Show that if the polygon $\Omega_1$ is not equiangular, its area may be increased by redistributing perimeter from a pointy to a blunt angle until the two angles are the same. An alternative to (ii) is to use the facts that, among all $n$-gons with given side lengths, the cyclic $n$-gon has the largest area, and that a cyclic $n$-gon with equal sides is regular. \smallskip (For this and related isoperimetric inequalities see~\cite{AHM09}. See also~\cite{deV11} concerning equiangular and equilateral considerations.)
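Step (i) admits a quick numerical check: fixing the base and the total length of the other two sides constrains the apex to an ellipse with the base endpoints as foci, and the area is largest when the apex sits directly above the midpoint of the base, i.e. for the isosceles triangle. A minimal sketch (the side-sum $s=3$ is an arbitrary choice):

```python
import numpy as np

base = 2.0                    # base endpoints (-1, 0) and (1, 0)
s = 3.0                       # fixed total length of the two remaining sides
a = s / 2                     # semi-major axis of the ellipse of apex positions
b = np.sqrt(a**2 - 1)         # semi-minor axis (the foci are at distance 1)

t = np.linspace(0.01, np.pi - 0.01, 2001)   # parametrize the upper half-ellipse
apex_y = b * np.sin(t)                      # apex height above the base line
area = 0.5 * base * apex_y                  # area = base * height / 2

k = np.argmax(area)
assert abs(t[k] - np.pi / 2) < 2e-3   # maximum with apex above the midpoint,
assert abs(area[k] - b) < 1e-9        # i.e. for the isosceles triangle
```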
\medskip By ${\cal D}$ we shall mean a domain functional, and we are interested in pairs of these for which one has a result of the form\\ {\it For tangential $n$-gons with fixed ${\cal D}_1$ the regular $n$-gon $<$maximizes$|$minimizes$>$ ${\cal D}_2$.}\\ Table~\ref{tbl:tbl1ar} below presents some: \begin{table}[h] \begin{center} \begin{tabular}{|| c | c | c | c ||} \hline ${\cal D}_1$& min/max & ${\cal D}_2$& Remark \\ \hline area & min& perimeter& \\ area & max& inradius& $A=\rho L/2$ \\ inradius& min& perimeter& Jensen inequality\\ inradius& min& area& " \\ inradius& min& $i_2=\frac{16\Sigma_\infty}{\rho}$& " \\ inradius& min& $i_4$& " \\ \hline \end{tabular} \caption{Tangential $n$-gons} \label{tbl:tbl1ar} \end{center} \end{table} The Jensen inequality/convexity concerns convex functions $\phi$ on some interval $I$: $$ {\rm with}\ x_i\in{I},\ \sum a_i =1\ {\rm and}\ a_i\ge{0},\qquad \phi\Big(\sum a_i x_i\Big)\le \sum a_i\phi(x_i) . $$ The inequality is reversed for concave $\phi$. By using the formula for the perimeter of an $n$-gon given at equation~(\ref{eq:i0Tgen}) and the convexity of $\cot(\cdot/2)$ on $(0,\pi)$ we have the first entry of the following. \begin{itemize} \item We use $\cot(\alpha/2)$ convex for $\alpha\in(0,\pi)$ and (\ref{eq:i0Tgen}). We have $$\frac{L}{2\rho n} = \sum\frac{1}{n}\cot(\frac{\alpha_k}{2}) \ge \cot\left( \sum \frac{1}{n}\frac{\alpha_k}{2}\right) =\cot(\frac{(n-2)\pi}{2 n}) = \frac{L_n}{2\rho n} , $$ where $L_n$ is the perimeter of the regular $n$-gon with inradius $\rho$. This establishes the entry in the table corresponding to fixed $\rho$, minimizing the perimeter (and, since $A=\rho\,L/2$, also minimizing $A$). \item Using that $$ \cot(\frac{\alpha}{2}) + \frac{\cot(\frac{\alpha}{2})^3}{3}\ \ {\rm is\ convex\ for\ } \alpha\in(0,\pi), $$ and the formula (\ref{eq:i2Tgen}), the Jensen inequality approach above gives that, at fixed $\rho$, $i_2$ is minimized by the regular polygon with the same number of sides.
\item Starting from formula (\ref{eq:i4Tgen}), the result on $i_4$ follows in the same way. \end{itemize} {\it For cyclic $n$-gons with fixed ${\cal D}_1$ the regular $n$-gon $<$maximizes$|$minimizes$>$ ${\cal D}_2$.}\\ Table~\ref{tbl:tbl2ar} below presents some: \begin{table}[h] \begin{center} \begin{tabular}{|| c | c | c | c ||} \hline ${\cal D}_1$& min/max & ${\cal D}_2$& Remark \\ \hline area & min& perimeter& \\ area & max& inradius& \\ circumradius& max& perimeter& Jensen inequality\\ circumradius& max& area& " \\ \hline \end{tabular} \caption{Cyclic polygons} \label{tbl:tbl2ar} \end{center} \end{table} There is a huge literature even restricting to convex sets.\\ {\small \verb$https://math.stackexchange.com/questions/749528/isoperimetric-inequality-isodiametric-inequality-hyperplane-conjecture-what$ } Also the `stability' of the isoperimetric inequalities is studied in many different ways, sometimes involving `isoperimetric deficit', sometimes `Fraenkel asymmetry'. \medskip If $L$ is the perimeter of a convex polygon $\Omega$, $A$ its area, $\rho$ the inradius, and $s$ the length of any chord through the centre of a largest inscribed circle, then $$ L^2 -4\pi A \ge\frac{\pi^2}{4} (s-2\rho)^2 .$$ This is a sharpened isoperimetric inequality for convex polygons.\\ H. Hadwiger, {\it Comment. Math. Helv.} {\bf 16} (1944), 305--309.\\ For tangential polygons $A=\rho L/2$, so the inequality becomes $$ L (L-2\pi\rho) \ge \frac{\pi^2}{4} (s-2\rho)^2 .$$ In notation as in the list at the beginning of~\S\ref{sec:BlSa}, $$ s \ge d_O+\rho\qquad{\rm so}\ \ L (L-2\pi\rho) \ge \frac{\pi^2}{4} (d_O-\rho)^2 .
$$ With, as in~\S\ref{subsec:rLdO}, $x=L/d_O$, $y=\rho/d_O$, $$ x \, (x-2\pi y) \ge \frac{\pi^2}{4} (1-y)^2 ,$$ \begin{comment} Solve[x*(x-2*Pi*y)- (Pi*(1-y)/2)^2 ==0, y]; ParametricPlot[{(Pi*(2*y + Sqrt[1 - 2*y + 5*y^2]))/2, y}, {y, 0, 1}] (* a bit left of the upper left 1-cap curve of fig:IIbtangQuadxy *) \end{comment} with equality on the curve $$ x = \frac{1}{2} \pi \left(\sqrt{5 y^2-2 y+1}+2 y\right) . $$ This gives a curve a bit to the left of the upper left 1-cap curve of Figure~\ref{fig:IIbtangQuadxy}. \bigskip There is a considerable literature concerning moments of inertia {\it about the centroid}. (There are, of course, situations involving appropriate symmetries when the centroid and incentre will coincide.) \\ {\bf Results.} {\it The equilateral triangle minimizes the moment of inertia, among all convex curves with given perimeter.}\\ References include~\cite{KH87}, \cite{Ti63}, \cite{MG74}. See also~\S\ref{sec:centroids}. \subsection{Further geometric items} For every convex domain $$ \pi \rho+ \frac{|\Omega|}{\rho} \le |\partial\Omega| \le 2\, \frac{|\Omega|}{\rho} . $$ See~\cite{CFG02} equation (8). For tangential polygons the right-hand side is an equality and the left-hand side reduces to $L\ge{2}\pi\rho$.\\ There may be some use for Bonnesen inequalities, see~\cite{Oss79}, and stronger forms for tangential polygons. \medskip We repeat here, for the third time(!), the existence statement of~\S\ref{sec:Transformations} and of~\S\ref{sec:Construction}. Let there be given a convex polygon, and hence its sequence of angles, all less than $\pi$. Then one can define a tangential polygon with the same sequence of angles and the same area. \medskip A tangential polygon has a larger area than any other convex polygon with the same perimeter and the same interior angles in the same sequence.
(See~\cite{AM04,Kn44,YJ19}.)\\ Amongst all convex polygons with the same area and with the same interior angles in the same sequence\\ (i) those which have the smallest perimeter are tangential polygons, and\\ (ii) those which have the largest inradius are tangential polygons. \medskip Amongst all tangential quadrilaterals with a given sequence of side lengths, that which maximizes the area is bicentric.\\ One starting point is the more general result:\\ Amongst all quadrilaterals with given side lengths, that which has maximum area is cyclic.\\ (Proof: Use Bretschneider's formula.)\\ (A very easy special case is that the area of a kite with given sides is maximized by the right kite.) \begin{comment} (* sides a b, diagonals p q *) q = Sqrt[a^2-(p/2)^2]+Sqrt[b^2-(p/2)^2]; area = p*q/2; Solve[D[area,p]==0,p] pSol = (2*a*b)/Sqrt[a^2 + b^2]; Simplify[q /. p->pSol,Assumptions -> {a>0,b>0}] (* Sqrt[a^2 + b^2] so right kite *) \end{comment} There may be generalizations to $2m$-gons, in particular hexagons.\\ \verb$https://mathworld.wolfram.com/CyclicHexagon.html$\\ gives area in terms of sides. \noindent{\bf Theorem.} {\it For any quadrilateral with given edge lengths, there is a cyclic quadrilateral with the same edge lengths.} \noindent{\bf Theorem.} {\it The cyclic quadrilateral has the largest area of all quadrilaterals with sides of the same length.} \subsection{Cheeger constant} For tangential polygons $\Omega$, the Cheeger constant is $$ h_\Omega = \frac{|\partial\Omega|+\sqrt{4\pi|\Omega|}}{2|\Omega|}. $$ Clearly, at fixed area, since perimeter is minimized by the regular $n$-gon, the regular $n$-gon minimizes $h_\Omega$ over tangential $n$-gons with given area. Much more has been established. {\it Among all simple polygons with a given area and at most $n$ sides, the regular $n$-gon minimizes the Cheeger constant.} (See~\cite{BF}.)
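This minimality is easy to check numerically: for a tangential polygon $|\Omega|=\rho\,|\partial\Omega|/2$, so the Cheeger constant reduces to $h_\Omega = 1/\rho + \sqrt{\pi/|\Omega|}$, and at fixed area minimizing $h_\Omega$ amounts to maximizing $\rho$. A sketch in Python (the formula for $h_\Omega$ is as above, taken on trust; the random sampling over valid angle sequences is purely illustrative):

```python
import numpy as np

def cheeger_tangential(angles, A=1.0):
    # assumes h = (L + sqrt(4*pi*A)) / (2*A); for a tangential n-gon with
    # interior angles alpha_k and area A, rho from A = rho^2 * sum cot(alpha_k/2)
    S = np.sum(1.0 / np.tan(np.asarray(angles) / 2))
    rho = np.sqrt(A / S)
    L = 2 * A / rho
    return (L + np.sqrt(4 * np.pi * A)) / (2 * A)

n = 5
reg = cheeger_tangential([(n - 2) * np.pi / n] * n)   # regular pentagon

# random valid angle sequences: each in (0, pi), summing to (n-2)*pi
rng = np.random.default_rng(0)
for _ in range(1000):
    w = rng.uniform(0.2, 1.0, n)
    ang = (n - 2) * np.pi * w / w.sum()
    if np.all(ang < np.pi):
        assert cheeger_tangential(ang) >= reg - 1e-12
```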
{\it If $\Omega$ is a convex polygon, and we denote by $\Omega_*$ the (unique up to rigid motions) circumscribed polygon which has the same area as $\Omega$ and whose angles are the same as those of $\Omega$, then $$ h(\Omega) \ge h(\Omega_*) ,$$ with equality if and only if $\Omega=\Omega_*$ (up to rigid motions).} \begin{comment} Lemma 1. Let $\rho>0$. Among all plane convex sets $\Omega$ with fixed positive area $A$ (with $A>\pi\rho^2$) that contain a disk of radius $\rho$ around the origin, $i_2$ is maximized if $\Omega$ is the single-cap, the convex hull of a disk of radius $\rho$ and a point (the point being, up to rotation invariance, uniquely defined by the area constraint). Theorem: If a bicentric quadrilateral has an incircle and a circumcircle with radii $\rho$ and $R$ respectively, then its area $A$ satisfies $$ A\ge 2\rho\sqrt{2\rho(\sqrt{4R^2+\rho^2}-\rho)} ,$$ where equality holds if, and only if, the quadrilateral is also an isosceles trapezium. M. Josefsson, Maximal area of a bicentric quadrilateral, Forum Geom., 12 (2012) pp. 237-241. 2. M. Radić, Certain inequalities concerning bicentric quadrilaterals, hexagons and octagons, Journal of Inequalities in Pure and Applied Mathematics, Volume 6, Issue 1, Article 1 (2005). On principal frequencies, volume and inradius in convex sets Lorenzo Brasco and Dario Mazzoleni Nonlinear Differential Equations and Applications NoDEA volume 27, Article number: 12 (2020) https://doi.org/10.1007/s00030-019-0614-2 \end{comment} \section{Triangles}\label{sec:IIbTri} Given the area $A$ and the angles $\alpha$, $\beta$ and $\gamma=\pi-\alpha-\beta$ of the triangle, one can determine the inradius and thence, if needed, the sides.
We have $$ A = \rho^2\left(\cot\frac{\alpha}{2}+\cot\frac{\beta}{2}+\cot\frac{\gamma}{2}\right) .$$ The semiperimeter $s=(a+b+c)/2$ is found from $$ s= \frac{A}{\rho} = \rho\,\left(\cot\frac{\alpha}{2}+\cot\frac{\beta}{2}+\cot\frac{\gamma}{2}\right) = \sqrt{A\, \,\left(\cot\frac{\alpha}{2}+\cot\frac{\beta}{2}+\cot\frac{\gamma}{2}\right)}.$$ Now, defining $f$ from $$\frac{\sin(\alpha)}{a}=\frac{\sin(\beta)}{b}=\frac{\sin(\gamma)}{c}=\frac{1}{f} ,$$ we have $$s=\frac{a+b+c}{2}=\frac{f}{2}\, \left( \sin(\alpha)+\sin(\beta)+\sin(\gamma)\right) ,$$ which determines $f$ and hence all the sides. \medskip Using $$ T_A=\frac{1}{\tan(\frac{\alpha}{2})},\quad T_B=\frac{1}{\tan(\frac{\beta}{2})},\quad T_C= \frac{T_A+T_B}{T_A T_B-1} , $$ $i_2$ and $i_4$ can be found from equations~(\ref{eq:i2Tgen}) and~(\ref{eq:i4Tgen}). From these $\Sigma_\infty$ and $\Sigma_1$ can be found: see Part III,~\S\ref{sec:GenTri}. With apologies for leaving the write-up in code, all the reasonable results are readily proved for triangles. The code also produces an example of a Blaschke-Santalo diagram, shown in Figure~\ref{fig:rLdOtri}, of the kind we will see later for other tangential polygons. {\small \begin{verbatim} (* An isoperimetric result Amongst triangles with a given inradius, taken as 1, that which has the smallest perimeter is the equilateral triangle. The allowed values of TA>0 and TB>0 are those for which TA*TB>1 *) TCfn[TA_,TB_]:= (TA+TB)/(TA*TB-1); Lfn[TA_,TB_]:= TA+TB+TCfn[TA,TB]; (* Lfn is half the perimeter *) (* The hessian is positive definite: positive diagonal entries and Det>0 on using TA*TB>1, etc.
*) hess = Map[Simplify, D[Lfn[TA, TB], {{TA, TB}, 2}]]; Factor[Det[hess]] (* Find minimum by checking where gradient is 0 *) g = Map[Simplify, Grad[Lfn[TA, TB], {TA, TB}]] Solve[{g[[1]] == 0, g[[2]] == 0}, {TA, TB}] (* gives (TA,TB) = (Sqrt[3],Sqrt[3]) which is equilateral triangle *) (* Define also *) dOfn[TA_,TB_]:= Max[{TA,TB,TCfn[TA,TB]}]; (* One can also show that at given inradius, the triangle which minimizes the distances from incentre to vertices is equilateral. *) (* One can also imagine fixing not just the inradius but also dO and with these TWO constraints finding (i) the shape with the maximum perimeter, (ii) the shape with the minimum perimeter. And find that they come out to be the obvious different sorts of isosceles triangles. *) (* Think of A as the apex of the triangle. Expect to see different behaviour for TA large - small apex angle - to what one gets when TA is small - big apex angle. Let B, the angle I will vary, be <= Pi/2 so TB >= 1. Actually TB also gets restricted to be TB >= Max[1,1/TA]. This restriction ensures TC>0 *) (* fix TA, vary TB. Lfn is a convex function and its first derivative is zero when TB=TC, i.e. triangle is isosceles. So, at fixed rho, the perimeter of a triangle with one angle fixed is minimized when the triangle is isosceles having TB=TC= (1+Sqrt[1+TA^2])/TA *) BCequal = Factor[TCfn[TA, TB] - TB] Solve[BCequal == 0, TB] Factor[D[Lfn[TA, TB], TB]/BCequal] (* clearly nonzero *) Factor[D[Lfn[TA, TB], {TA, 2}]] (* clearly positive *) Plot[Lfn[TA, (1 + Sqrt[1 + TA^2])/TA], {TA, 0.1, 8}] (* convex function, minimum at TA=Sqrt[3] *) tmp = Together[Simplify[D[Lfn[TA,(1 + Sqrt[1 + TA^2])/TA], TA]]] u = Sqrt[1 + TA^2]; Simplify[tmp - (-2 + u)*(1 + u)^2/(TA^2*u)] (* 0 *) Simplify[tmp /. TA -> Sqrt[3]]
(* 0 *) ppu =ParametricPlot[{Lfn[TA,(1+Sqrt[1+TA^2])/TA]/dOfn[TA,(1+Sqrt[1+TA^2])/TA], 1/dOfn[TA,(1+Sqrt[1+TA^2])/TA]},{TA,0.01,Sqrt[3]},PlotStyle->Red]; ppl =ParametricPlot[{Lfn[TA,(1+Sqrt[1+TA^2])/TA]/dOfn[TA,(1+Sqrt[1+TA^2])/TA], 1/dOfn[TA,(1+Sqrt[1+TA^2])/TA]},{TA,Sqrt[3],128},PlotStyle->Green]; pairs[AB_] := {Lfn[AB[[1]], AB[[2]]]/dOfn[AB[[1]], AB[[2]]], 1/dOfn[AB[[1]], AB[[2]]]}; rdm[Npts_] := Module[{k} , Map[pairs, Table[{1 + RandomReal[{0, 32}], 1 + RandomReal[{0, 32}]}, {k, 1, Npts}]]]; lp = ListPlot[rdm[20000], PlotRange -> All]; rLdOtri = Show[{ppu, ppl, lp}, PlotRange -> All] \end{verbatim} } \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{rLdOtri.pdf}} \caption{Triangles: $x=L/(2 d_O)$, $y=\rho/d_O$ with $\rho=1$.} \label{fig:rLdOtri} \end{figure} The dots in the figure are from random choices for two of the $T$ values (corresponding to two angles). The large number of dots at large values of $d_O$ comes from using a uniform random distribution for the $T$ which allows large values (so thin, or squat, triangles). The red curve ($T_A<\sqrt{3}$) is from squat isosceles triangles; the green curve ($T_A>\sqrt{3}$) is from tall thin isosceles triangles. \clearpage \subsection{Generalizing, angle-averaging}\label{subsec:triGenz} We have also used Mathematica {\tt Minimize} on these tasks, and it worked well for $n=3$ and $n=4$, with {\tt NMinimize} for larger $n$. However, having already proved the result using Jensen's inequality in \S\ref{subsec:GeomIsoper}, one learnt (very) little from the exercise. One way to learn a little more, though, is to use the convexity of $\cot$ to `average two angles', as remarked upon in the `tilting transformation' treated in~\S\ref{sec:Transformations}. We keep $\rho$ fixed, say $\rho=1$. Choose two vertices, with angles $\alpha_i$ and $\alpha_j$.
Let $$\sigma_i=\tan(\frac{\alpha_i}{4})\qquad{\rm so\ } \ \ T_i = \frac{1-\sigma_i^2}{2\sigma_i} .$$ Then $$\tan(\frac{\alpha_i+\alpha_j}{4}) = \frac{\sigma_i + \sigma_j}{1-\sigma_i \sigma_j} .$$ The convexity of the $\cot$ function on $(0,\pi/2)$ gives \begin{eqnarray*} 2\cot(\frac{\alpha_i+\alpha_j}{4}) &\le& \cot(\frac{\alpha_i}{2}) + \cot(\frac{\alpha_j}{2}) , \\ \frac{2(1-\sigma_i \sigma_j)}{\sigma_i + \sigma_j} &\le& T_i + T_j . \end{eqnarray*} From this, and the representation of $L$ as a sum of the $T_k$, we see that $L$ is reduced by averaging two of the angles. \medskip The same argument works for $i_2$ and for $i_4$ on using the convexity of $\cot^3$ and of $\cot^5$. \medskip It is also easy to work out by how much the quantities $L$, $i_2$, etc.\ decrease. {\small \begin{verbatim} (* Write T[1]=1/tan(alpha1/2) and T[2] in terms of u1 = tan(alpha1/4) and u2 resp. *) T[1] = (1-u1^2)/(2*u1); T[2] = (1-u2^2)/(2*u2); (* After angle averaging/ optimal tilting when adjacent *) tAve= (1-u1*u2)/(u1+u2); Tout[1]= tAve; Tout[2]= tAve; Ldifference = Factor[2*(T[1]+T[2]-2*tAve)] (* (((u1 - u2)^2*(1 - u1*u2))/(u1*u2*(u1 + u2))) *) i2STfn[TT_]:= (TT+TT^3/3); i2difference = Factor[2*(i2STfn[T[1]]+i2STfn[T[2]]-2*i2STfn[tAve])] (* long expression - an obviously positive expression * Ldifference^3 *) \end{verbatim} } \section{Tangential quadrilaterals}\label{sec:quad} We begin with the context: \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{IIbQuadrilateral_hierarchy_svg.pdf}} \caption{wikipedia; attribution below.} \label{fig:wikQuad} \end{figure} Attribution for Figure~\ref{fig:wikQuad}: By Alexgabi, jlipskoch\\ \verb$https://commons.wikimedia.org/wiki/File:Laukien_sailkapena.svg,$\\ CC BY-SA 3.0\\ \verb$https://commons.wikimedia.org/w/index.php?curid=34027107$ \clearpage There is a huge literature on tangential quadrilaterals.
See~\cite{CS00,Gr08,Haj08,Ho11,JoBiMinArea,Jo10,Jo11,JoBiMaxArea,Mi09,Mi12}. In a tangential quadrilateral the two diagonals and the two tangency chords are concurrent. \noindent {\bf Theorem.} {\it Let $ABCD$ be a tangential quadrilateral and $O_d$ be the point of intersection of its diagonals. Invert, using $O_d$ as pole, each of the vertices, the inverse of $A$ denoted by $A_1$, etc. Let $A_1B_1C_1D_1$ be the quadrilateral obtained by these inversions. Then $A_1B_1C_1D_1$ is a tangential quadrilateral.} See~\cite{Mi12}. Here is some additional comment. The intersections of the diagonals of the two quadrilaterals coincide. M\"obius maps take, in general, lines to circles, but lines through the pole are preserved. Thus the angles between the diagonals at $O_d$ are the same (as the diagonals are). The conformal map $z\rightarrow\frac{1}{z}$ {\it locally} preserves angles between lines, and so does its conjugate. I noticed some qualitative similarity between inputs and outputs. In particular, inputs like kites produced outputs like kites. We already have that if the diagonals of the input cross at right angles so will those of the output. \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=11cm]{IIbinvQuad.pdf}} \caption{Inverse map. See~\cite{Mi12}.} \label{fig:invQuad} \end{figure} The image under the inverse map with the incentre as pole seems often to take tangential quadrilateral vertices to points that look somewhat like vertices of a rhombus. \clearpage \section{Bicentric polygons}\label{sec:Bicentric} In our calculations we often use the $T_k$ as input variables, or equivalently $\eta_k=\rho\,T_k$. There are additional identities for bicentrics beyond those that one has for tangential $n$-gons.
Wikipedia gives, for bicentric quadrilaterals, $$\leqno{\rm bicentric4gon:}\qquad \rho^2 = \eta_1\eta_3=\eta_2\eta_4 .$$ \cite{Ra05} studies bicentric hexagons and gives $$\leqno{\rm bicentric6gon:}\qquad \rho^2 = \eta_1\eta_3 +\eta_3\eta_5 + \eta_5\eta_1 =\eta_2\eta_4 +\eta_4\eta_6+\eta_6\eta_2.$$ These identities have been used in subsequent calculations. The challenge put to me, admittedly in connection with $Q_{0-}$ rather than the geometric functionals treated in this Part IIb, is: investigate Blaschke-Santalo diagrams for tangential polygons. One triple that can be so investigated is $(\rho,L,R_V)$ for bicentric polygons. For triangles, see the later section on Blundon's inequality, e.g.\ inequalities~(\ref{in:Blund}). There is a huge literature on bicentric polygons dating back to the 19th century. The topics include Fuss's Theorem, Poncelet's porism and more. Some of this is impressive, so it is presented below, with nothing original of mine, but with the hope that some might be of use in future attempts to produce Blaschke-Santalo diagrams. \medskip \subsection{Fuss's Theorem, Poncelet's Porism} We would like to establish (polynomial) relations involving $(\rho,L,R_V)$. Fuss's theorem(s) involve $(\rho,R_V,d)$ where $d$ is the distance between the circumcentre and incentre of the bicentric polygon. For triangles, the quantity $d$ can be recognized in Blundon's inequality~(\ref{in:Blund}) and is, as in the first displayed equation below, $d=\sqrt{R_V(R_V-2\rho)}$. The following is a (slightly adapted) quote from\\ \verb$https://mathworld.wolfram.com/PonceletsPorism.html$ \begin{quote} The three numbers $(\rho,R_V,d)$ will not be arbitrary and, along with $n$, they will have to satisfy certain relations. For the case of a triangle, one such relation is sometimes called the Euler triangle formula: $$ R_V^2 - 2 R_V\rho - d^2 = 0.
$$ One popular notation for such relations (which are necessary and sufficient for the existence of a bicentric polygon) can be given in terms of the quantities $$ a=\frac{1}{R_V+d},\ \ b=\frac{1}{R_V-d},\ \ c=\frac{1}{\rho} .$$ For a triangle, the Euler formula has the form: $$a + b = c, $$ for a bicentric quadrilateral $$a^2 + b^2 = c^2 .$$ The relationship for a bicentric pentagon is $$ 4(a^3 +b^3 +c^3) = (a+b+c)^3 .$$ Let \begin{eqnarray*} E_1 &=& -a^2+b^2+c^2 ,\\ E_2 &=& a^2-b^2+c^2 ,\\ E_3 &=& a^2+b^2-c^2 . \end{eqnarray*} The relationship for a bicentric hexagon is $$ \frac{1}{E_1} + \frac{1}{E_2} = \frac{1}{E_3} . $$ \end{quote} \subsection{Triangles, again} The sides $s_1$, $s_2$, $s_3$ of a triangle are the roots of the cubic \begin{equation} y^3 - L y^2+(\frac{L^2}{4}+\rho^2+4\rho\,R_V)y - 2 L\rho R_V=0. \label{eq:triCubic} \end{equation} \noindent{\bf ToDo.} Find conditions on the coefficients of this cubic in order that it have three positive roots with the largest root less than the sum of the other two. Sturm sequences might be useful. Do Blundon's inequalities come from this? \subsection{Bicentric quadrilaterals} The wikipedia article `Bicentric quadrilateral' contains many items. There is a quartic equation with coefficients in terms of $L$, $\rho$ and $R_V$ whose solutions are the sides of a bicentric quadrilateral: \begin{equation} y^{4} - L y^{3}+(\frac{L^{2}}{4}+2\rho^{2}+ 2\rho {\sqrt{4R_V^{2}+\rho^{2}}})y^{2} - \rho L({\sqrt{4R_V^{2}+\rho ^{2}}}+\rho ) y+\rho ^{2}\frac{L^{2}}{4}=0 . \label{eq:biQuadQuartic} \end{equation} \noindent{\bf ToDo.} Find conditions on the coefficients of this quartic in order that it have four positive roots with the largest root less than the sum of the other three. Sturm sequences might be useful. \bigskip There are a huge number of inequalities.
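The cubic~(\ref{eq:triCubic}) is easy to sanity-check numerically: pick a triangle, compute $L$, $\rho$ and $R_V$ from its sides, and confirm that the sides are roots. A minimal sketch (the triangle with sides $4,5,6$ is an arbitrary choice):

```python
import numpy as np

a, b, c = 4.0, 5.0, 6.0                        # an arbitrary valid triangle
L = a + b + c
s = L / 2
A = np.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
rho = A / s                                    # inradius
R_V = a * b * c / (4 * A)                      # circumradius through the vertices

def cubic(y):
    # left-hand side of the cubic whose roots should be the sides
    return (y**3 - L * y**2
            + (L**2 / 4 + rho**2 + 4 * rho * R_V) * y
            - 2 * L * rho * R_V)

for side in (a, b, c):
    assert abs(cubic(side)) < 1e-9
```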
\noindent{\bf Theorem.} {\it If a bicentric quadrilateral has an incircle and a vertex-circumcircle with radii $\rho$ and $R_v$ respectively, then its area $A$ satisfies $$ \frac{1}{2}\rho\, L=A\ge 2\rho\sqrt{2\rho(\sqrt{4R_v^2+\rho^2}-\rho)} ,$$ where equality holds if, and only if, the quadrilateral is also an isosceles trapezium.}\\ See~\cite{JoBiMinArea}.\\ For a square, $\rho=1$, $R=R_v=\sqrt{2}$, $A=4$ gives equality in the preceding inequality. \medskip For a bicentric quadrilateral $$\frac{1}{32}(L^2-16A) \le R_v^2-2\rho^2$$ with equality if and only if the quadrilateral is a square.\\ \subsection{Bicentric hexagons} \noindent For a bicentric hexagon $$\frac{1}{36}(L^2-8\sqrt{3}\,A) \le R_V^2-\frac{4}{3}\rho^2$$ with equality if and only if the hexagon is regular. For bicentric $n$-gons see~\cite{Mao96}. \subsection{$R=R_v=d_O$ for regular polygons} In a regular $n$-gon, the side $s_n=2\rho\tau_n$ where $\tau_n=\tan(\pi/n)$. The circumradius $R$ and $d_O$ coincide. We have $$ 4(R^2-\rho^2) = s_n^2$$ so $$ R=d_O = \sqrt{\rho^2+\frac{s_n^2}{4}}=\rho\sqrt{1+\tau_n^2} . $$ See Part IIa~\S\ref{sec:regn} for the formulae for $L=i_0$, $i_2$ and $i_4$. \begin{comment} Also \begin{eqnarray} L={ i}_0 &=& n\, s_n = 2n \rho\tau_n , \label{eqBS:i0reg} \\ { i}_2 &=& \rho^2 { i}_0 +\frac{n}{12} s_n^3 = n\, s_n \left(\rho^2 + \frac{1}{12} s_n^2 \right) =\frac{2}{3} n \rho^3 \tau_n (3 + \tau_n^2), \label{eqBS:i2reg} \\ { i}_4 &=& -\rho^4 { i}_0 + 2\, \rho^2 { i}_2 +\frac{n}{80} s_n^5 = n\, s_n \left( \rho^4 +\frac{\rho^2 s_n^2}{6} +\frac{s_n^4}{80}\right) , \\ &=& \frac{2}{15} n \rho^5 \tau_n\left( 15 + 10\, \tau_n^2 + 3\, \tau_n^4 \right) . \label{eqBS:i4reg} \end{eqnarray} \end{comment} \subsection{Bicentrics from regular} Given any regular $n$-gon, any choice of 3 of its vertices gives a bicentric polygon, as all triangles are bicentric. Given a regular 8-gon, selection of its alternate vertices gives a bicentric 4-gon, a square.
Given a regular 9-gon, there is a selection of 5 of its vertices giving a bicentric 5-gon. See~\cite{To07,To15}. \section{Blaschke-Santalo diagrams for tangential polygons}\label{sec:BlSa} \subsection{Definitions for Blaschke-Santalo diagrams} Where we feel it helps the exposition there will be some repetition of material already presented in Part IIa. We always choose the origin of our coordinate system to be at the incentre. The inradius $\rho$ and perimeter $L$ will occur in our diagrams, and there will be various choices of a third geometric quantity which, for the present, we denote by $q$. In our Blaschke-Santalo diagrams the horizontal axis is usually $x=L/q$ and the vertical axis $y=\rho/q$. (Further work in which $x=L/\rho$ and, as before, $y=\rho/q$ might be undertaken, motivated by having the same $x$ for the different $q$, e.g. $q$ being $i_2^{1/3}$, $Q_{0-}^{1/4}$, etc.) The area of a tangential polygon is $A=\rho\,L/2$. The quantity $x/y=L/\rho$ is denoted by $B$ in~\cite{PoS51}. We have $B\ge{2\pi}$, with equality only for the disk, and, for any triangle, $B\ge{6\sqrt{3}}$: see~\cite{Ai58}. The perimeter and inradius of a regular $n$-gon are related by $$\rho = \frac{L}{2n}\cot(\frac{\pi}{n}) , $$ and for a tangential $n$-gon $$B\ge{2n\tan(\pi/n)} . $$ The other domain functionals are as follows.\\ (1) There is a first set of geometric quantities:\\ $\bullet$ circumradius $R$ (radius of the smallest disk containing the region);\\ $\bullet$ the maximum distance from the incentre to the boundary, $d_O$;\\ (2) There are moments about the incentre:\\ $\bullet$ the second boundary moment $i_2$;\\ $\bullet$ the fourth boundary moment $i_4$;\\ (3) There are quantities leading to $Q_{0-}$:\\ $\bullet$ $\Sigma_\infty=\rho\, i_2/16$;\\ $\bullet$ $0>\Sigma_1=(i_2^2/L -i_4)/16$;\\ $\bullet$ $Q_{0-}$, the lower bound on torsional rigidity treated in Part I, i.e.~\cite{Ke20tIMA}.\\ We defer discussion of these last items, leading to $Q_{0-}$, until Part III.
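The bound $B\ge 2n\tan(\pi/n)$ for tangential $n$-gons follows from $L=2\rho\sum_k\cot(\alpha_k/2)$ and Jensen's inequality, and can be checked numerically. A sketch (the choice $n=7$ and the sampling scheme are arbitrary):

```python
import numpy as np

def B_tangential(angles):
    # B = L / rho = 2 * sum_k cot(alpha_k / 2) for a tangential n-gon
    return 2 * np.sum(1.0 / np.tan(np.asarray(angles) / 2))

n = 7
bound = 2 * n * np.tan(np.pi / n)
assert bound > 2 * np.pi                  # the polygon bound exceeds the disk bound

# the regular n-gon attains the bound
assert abs(B_tangential([(n - 2) * np.pi / n] * n) - bound) < 1e-9

# random valid angle sequences: each in (0, pi), summing to (n-2)*pi
rng = np.random.default_rng(1)
for _ in range(1000):
    w = rng.uniform(0.2, 1.0, n)
    ang = (n - 2) * np.pi * w / w.sum()
    if np.all(ang < np.pi):
        assert B_tangential(ang) >= bound - 1e-12
```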
\medskip In connection with bicentric polygons, including triangles, the radius of the circle through the vertices is denoted $R_V$, and we always have $R\le{R_V}$. \medskip An outline of the remainder of this section follows. \begin{itemize} \item In~\S\ref{subsec:cap} we define cap domains which enter the account of the $(\rho,L,R)$ diagram in the next subsection. \item In~\S\ref{subsec:dOngon} we present formulae for $d_O$ for various tangential $n$-gons. \item In~\S\ref{subsec:rLR} we begin by noting that the account for general convex domains in~\cite{BCS03} has on one of its boundaries the 2-cap. We have done some computations, and believe that regular $n$-gons occur on one of the boundaries. However, the circumradius $R$ doesn't seem to be a good lead-in for computations related to ($i_2$, $i_4$ and) $Q_{0-}$. \item \S\ref{subsec:rLdO} is the main subsection. The quantity $d_O$ is a function easily defined in terms of the $T_k$ occurring in the expressions for $L$, $i_2$ and $i_4$, and hence in $Q_{0-}$. The expressions for $L$, $i_2$ and $i_4$ are given in Part IIa. The only Blaschke-Santalo diagrams presented here are those for $(\rho,L,d_O)$. Future work may involve $(\rho,L,i_2)$ and $(\rho,L,i_4)$, but for now there is just occasional comment on relevant inequalities for particular tangential $n$-gons. \item Though we believe it to be an aside to our main purpose of investigating functionals related to $Q_{0-}$ over all tangential polygons, in~\S\ref{subsec:rLRV} we propose to treat, for bicentric polygons, the triple $(\rho,L,R_V)$. So far, the main result is just that already in the literature for triangles. Some items in~\S\ref{sec:Bicentric} may be useful in future efforts. \end{itemize} \subsection{Some extremals amongst circumgons: the 1-cap and symmetric 2-cap}\label{subsec:cap} Because our long-term motivation concerns $Q_{0-}$, and this involves $i_2$ (and $i_4$), we begin with the following. \par\noindent {\it Let $\rho>0$.
Among all plane convex sets $\Omega$ with fixed positive area $A$ (with $A>\pi\rho^2$) that contain a disk of radius $\rho$ around the origin, $i_2$ is maximized if $\Omega$ is the single-cap (1-cap), the convex hull of a disk of radius $\rho$ and a point (the point being, up to rotation invariance, uniquely defined by the area constraint).}\\ The proof is elementary. See~\cite{HS20} where it is a Lemma needed prior to establishing gradient bounds for the torsion function. Exactly the same proof gives the corresponding result for $i_4$. \medskip Consider tangential polygons with $\rho$ and $d_O>\rho$ fixed. Amongst these the 1-cap\\ $\bullet$ minimizes $A$, $L$, $i_2$ and $i_4$ and \\ $\bullet$ minimizes $Q_0$.\\ This is because of domain monotonicity of these functionals.\\ {\bf Question.} {\it Given two tangential polygons $\Omega_0$ and $\Omega_1$ (with the same incentre? and) with $\Omega_0\subset\Omega_1$, is $Q_{0-}(\Omega_0)\le Q_{0-}(\Omega_1)$?} \begin{figure}[h] $${\includegraphics[height=5cm,width=5cm]{gr1.pdf}} \qquad\qquad {\includegraphics[height=5cm,width=5cm]{gr2.pdf} }$$ \caption{Left: the symmetric 2--cap. Right: the 1-cap. } \label{fig:tsquareplbNe0} \end{figure} We can easily calculate the perimeter of the 1-cap. Denote by $\alpha$ the angle at its vertex on the circle of radius $d_O$. Then $\sin(\alpha/2)=\rho/d_O$ and \begin{eqnarray*} L &=& (\pi+\alpha)\rho + 2\sqrt{d_O^2-\rho^2} ,\\ \frac{L}{d_O} &=& \left( \pi +2\arcsin(\frac{\rho}{d_O})\right)\,\frac{\rho}{d_O} + 2\sqrt{1-(\frac{\rho}{d_O})^2} . \end{eqnarray*} We will see this occurring as the lower right boundary in diagrams with $x=L/d_O$, $y=\rho/d_O$. The limiting cases are \begin{itemize} \item $d_O\rightarrow\rho$, $y\rightarrow{1}$, $x\rightarrow{2\pi}$ corresponding to a disk; \item $d_O\rightarrow\infty$, $y\rightarrow{0}$, $x\rightarrow{2}$ corresponding to long flat shapes like the 1-cap.
\item $d_O\rightarrow\infty$, $y\rightarrow{0}$, $x\rightarrow{4}$ corresponding to long flat shapes like the 2-cap.\\ \item The quadrilateral might have one side tending to zero, say as a symmetric tangential trapezium tending to an isosceles triangle. When the squat isosceles triangle becomes very thin, $y\rightarrow{0}$, $x\rightarrow{4}$. \end{itemize} \bigskip We will see the 2-cap in~\S\ref{subsec:rLR}. \clearpage \subsection{$d_O$ for various tangential $n$-gons}\label{subsec:dOngon} The distances from the incentre to the vertices of a tangential $n$-gon are given by $$\sqrt{\rho^2 +\eta_k^2} = \rho\sqrt{1+T_k^2} . $$ \subsubsection{$R$, $d_O$ for regular polygons} In a regular $n$-gon, the side $s_n=2\rho\tau_n$ where $\tau_n=\tan(\pi/n)$. The circumradius $R$ and $d_O$ coincide. We have $$ 4(R^2-\rho^2) = s_n^2$$ so $$ R=d_O = \sqrt{\rho^2+\frac{s_n^2}{4}}=\rho\sqrt{1+\tau_n^2} . $$ See Part IIa~\S\ref{sec:regn} for the formulae for $L=i_0$, $i_2$ and $i_4$. \begin{comment} Also \begin{eqnarray} L={ i}_0 &=& n\, s_n = 2n \rho\tau_n , \label{eqBS:i0reg} \\ { i}_2 &=& \rho^2 { i}_0 +\frac{n}{12} s_n^3 = n\, s_n \left(\rho^2 + \frac{1}{12} s_n^2 \right) =\frac{2}{3} n \rho^3 \tau_n (3 + \tau_n^2), \label{eqBS:i2reg} \\ { i}_4 &=& -\rho^4 { i}_0 + 2\, \rho^2 { i}_2 +\frac{n}{80} s_n^5 = n\, s_n \left( \rho^4 +\frac{\rho^2 s_n^2}{6} +\frac{s_n^4}{80}\right) , \\ &=& \frac{2}{15} n \rho^5 \tau_n\left( 15 + 10\, \tau_n^2 + 3\, \tau_n^4 \right) . \label{eqBS:i4reg} \end{eqnarray} \end{comment} \subsubsection{$R$, $d_O$ for triangles} For an equilateral triangle $$ L= 6\sqrt{3}\rho, \qquad R = d_O = 2\rho . $$ In general $R$ and $d_O$ differ. The circumradius of an acute angled triangle is the radius of the circle through the three vertices ($R=R_V$). For a triangle with one interior angle measuring more than $\pi/2$, an obtuse triangle, the circumradius is half the length of the longest side (and $R<R_V$).
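The regular-polygon relations $4(R^2-\rho^2)=s_n^2$ and $R=d_O=\rho\sqrt{1+\tau_n^2}$ can be cross-checked by constructing the polygon explicitly. A minimal Python sketch (names ours), using only the chord relation $s=2R\sin(\pi/n)$:

```python
import math

def regular_ngon_check(n):
    s = 1.0                                   # fix the side length
    R = s / (2 * math.sin(math.pi / n))       # chord relation s = 2 R sin(pi/n)
    verts = [(R * math.cos(2 * math.pi * k / n),
              R * math.sin(2 * math.pi * k / n)) for k in range(n)]
    (x0, y0), (x1, y1) = verts[0], verts[1]
    rho = math.hypot((x0 + x1) / 2, (y0 + y1) / 2)   # apothem = inradius
    side = math.hypot(x1 - x0, y1 - y0)
    tau = math.tan(math.pi / n)
    return {"4(R^2-rho^2)-s^2": abs(4 * (R**2 - rho**2) - side**2),
            "R-rho*sqrt(1+tau^2)": abs(R - rho * math.sqrt(1 + tau**2))}

for n in (3, 4, 6, 12):
    assert max(regular_ngon_check(n).values()) < 1e-12
```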
For triangles equation~(\ref{eqBS:elPec}) is $$ 1-\frac{1}{T_1 T_2} -\frac{1}{T_2 T_3} -\frac{1}{T_3 T_1} = 0. $$ As in Part IIa, most of our calculations have, to date, concentrated on isosceles triangles. See Part IIa~\S\ref{sec:isos} for the formulae for $L=i_0$, $i_2$ and $i_4$. \medskip We return to triangles in~\S\ref{subsubsec:BStriangles}. For now it is sufficient to record that $$\leqno{\rm triangle:}\qquad d_O = \rho\sqrt{1 +{\rm max}(T_1,T_2,T_3)^2} . $$ \subsubsection{$d_O$ for tangential quadrilaterals} For tangential quadrilaterals equation~(\ref{eqBS:elPes}) is $$ \frac{1}{T_1} +\frac{1}{T_2} +\frac{1}{T_3} +\frac{1}{T_4}- \frac{1}{T_1 T_2 T_3}-\frac{1}{T_1 T_2 T_4} - \frac{1}{ T_1 T_3 T_4 }-\frac{1}{T_2 T_3 T_4} = 0. $$ We return to tangential quadrilaterals in~\S\ref{subsubsec:BStangQuad}. For now it is sufficient to record that $$\leqno{\rm tang\ quad:}\qquad d_O = \rho\sqrt{1 +{\rm max}(T_1,T_2,T_3,T_4)^2} . $$ \bigskip \subsection{$(\rho,L,R)$}\label{subsec:rLR} For any plane convex set $$\rho\le{R},\ L\le{2\pi\,R},\ 2\pi\rho\le{L},\ 4R\le{L} . $$ There is some discussion of how to compute $R$ at\\ {\small \verb$https://mathematica.stackexchange.com/questions/121987/how-to-find-the-incircle-and-circumcircle-for-an-irregular-polygon$ } We can scale our shapes so there is no loss of generality in setting $\rho=1$. Define $$ y = \frac{1}{R}\ \ {\rm and}\ \ x=\frac{L}{R} , $$ (with our $x$ a factor of $2\pi$ greater than that in~\cite{BCS03}). $$ x=2\pi,\ y=1\qquad{\rm for\ a\ disk}. $$ \cite{BCS03} establish, for convex sets, $$4\left(\sqrt{1-y^2} +\ y\,{{\rm arcsin}(y)} \right) \le x=\frac{L}{R} \le4\left(\sqrt{1-y^2} +{\rm arcsin}(y) \right) . $$ The lower bound at the left corresponds to values for a symmetric 2-cap, which is a circumgon (a generalization of a tangential polygon). However, the right-hand, upper bound is for a convex shape which is {\it not} a tangential polygon.
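A quick Python check (our own names) that the regular polygons do lie between the two bounds of~\cite{BCS03} above: with $\rho=1$ one has $y=\cos(\pi/n)$ and $x=L/R=2n\tan(\pi/n)\cos(\pi/n)=2n\sin(\pi/n)$.

```python
import math

def xy_regular(n):
    """(x, y) = (L/R, rho/R) for the regular n-gon, taking rho = 1."""
    y = math.cos(math.pi / n)                 # rho/R = cos(pi/n)
    x = 2 * n * math.sin(math.pi / n)         # L/R = 2n tan(pi/n) cos(pi/n)
    return x, y

def bcs_bounds(y):
    lo = 4 * (math.sqrt(1 - y * y) + y * math.asin(y))
    hi = 4 * (math.sqrt(1 - y * y) + math.asin(y))
    return lo, hi

for n in range(3, 100):
    x, y = xy_regular(n)
    lo, hi = bcs_bounds(y)
    assert lo - 1e-9 < x < hi + 1e-9, (n, lo, x, hi)
```

For $n\ge4$ the regular polygon sits strictly inside both bounds, consistent with the remark that the extremal shape for the upper bound is not a tangential polygon.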
Our computations suggest that amongst the tangential polygon shapes which occur on the right-hand upper bound are the regular polygons. In Figure~\ref{fig:IIbrLR} the right-most blue curve is the shape from~\cite{BCS03} which is {\it not} a tangential polygon. The upper-left green curve is from the symmetric 2-cap. The red dots are some regular polygons, the rightmost upper dot being the circular disk. The lowest is the equilateral triangle, and the next up, joining the upper blue curve, is the square. The scatter of blue dots is from tangential quadrilaterals, and the upper blue line from rhombi. \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{IIbrLR.pdf}} \caption{Except for the lower curve, some results on $(L/R,\rho/R)$ pairs for tangential polygons. See description in text.} \label{fig:IIbrLR} \end{figure} \clearpage \subsection{$(\rho,L,d_O)$}\label{subsec:rLdO} \subsubsection{Triangles}\label{subsubsec:BStriangles} See also Part IIa \S\ref{sec:isos}. \smallskip \noindent {\it Amongst all isosceles triangles with given inradius \begin{itemize} \item the equilateral triangle minimizes $d_O$, \item the equilateral triangle minimizes $A$ and $L=2A/\rho$ and $L^2/A$, and \item the equilateral triangle minimizes $i_2$ and $i_2/(A\,L)$.
\end{itemize}} \begin{comment} rho = 1 a = rho*(1+sigma)/(1-sigma) h = a*(1-sigma^2)/(2*sigma) A = a*h; L = 2*A/rho; Solve[D[A,sigma]==0,sigma] tau = 2*sigma/(1-sigma^2) b = 2*Sqrt[A*tau] check = Simplify[b-2*a,Assumptions-> {sigma>0, sigma<1}] SigInf = ((A^2)/48)*((1-sigma)^6+12*sigma^2*(1-sigma)^2 +16*sigma^3)/(sigma*(1-sigma)*(1+sigma)^3); i2= 16*SigInf/rho i2Aai2 = Factor[(A*L^2/6-8*A^3/(L*a)+16*A^3/L^2 -2*A*a^2)/rho] check = Simplify[i2-i2Aai2] Plot[i2, {sigma, 0, 1}, PlotRange -> {{0, 1}, {0, 100}}] tmp= Factor[D[Numerator[i2],sigma],Extension->Sqrt[3]] Solve[tmp==0,sigma] Plot[{A, i2, i2/(A*L)}, {sigma, 0, 1}, PlotRange -> {{0, 1}, {0, 50}}, PlotStyle -> {Red, Green, Blue}] \end{comment} \medskip \noindent {\it At given $\rho$ and $A$ greater than the area of the equilateral triangle of the same inradius, amongst isosceles triangles \begin{itemize} \item $d_O$ is maximised by tall isosceles triangles (apex angle $\alpha<\pi/3$ small, $\sigma<2-\sqrt{3}$ small) \item $d_O$ is minimised by short squat isosceles triangles (apex angle $\alpha>\pi/3$ near $\pi$, $\sigma>2-\sqrt{3}$ near 1). \end{itemize}} (The above may be true for all triangles.) \medskip The calculations for $i_2$ for the following are yet to be done.\\ {\bf Conjecture} {\it At given $\rho$ and $A$ greater than the area of the equilateral triangle of the same inradius, amongst isosceles triangles \begin{itemize} \item $i_2$ is maximised by tall isosceles triangles (apex angle $\alpha<\pi/3$ small, $\sigma<2-\sqrt{3}$ small) \item $i_2$ is minimised by short squat isosceles triangles (apex angle $\alpha>\pi/3$ near $\pi$, $\sigma>2-\sqrt{3}$ near 1). \end{itemize}} \subsubsection{Tangential quadrilaterals}\label{subsubsec:BStangQuad} \noindent{\bf Rhombi} \noindent {\it Amongst all rhombi with given side, \begin{itemize} \item the square minimizes $d_O$, \item the square maximizes $A$ and $\rho=2A/L$, and \item the square minimizes $i_2/(A\,L)$.
\end{itemize}} \noindent These are easily established, as follows. $$ A = 2\rho^2 (T+\frac{1}{T}) \qquad L= 4\rho (T+\frac{1}{T}) . $$ Hence $$\frac{L^2}{A} = 8 (T+\frac{1}{T}) , $$ which is minimized when $T=1$, the square.\\ Next \begin{eqnarray*} i_2 &=& 4\rho^3\left( T+\frac{1}{T}+\frac{1}{3}(T^3+\frac{1}{T^3})\right) ,\\ &=& \frac{4\rho^3}{3} (T+\frac{1}{T})^3 , \end{eqnarray*} so that $$\frac{i_2}{A L}=\frac{1}{6}(T+\frac{1}{T}) , $$ which is minimized at $T=1$. (At $T=1$ this agrees with the regular 4-gon value $i_2=\rho^2 i_0+\frac{n}{12}s_n^3=\frac{32}{3}\rho^3$.) \medskip We also have the following. \noindent {\it Amongst all rhombi with given inradius \begin{itemize} \item the square minimizes $d_O$, \item the square minimizes $A$ and $L=2A/\rho$ and $L^2/A$, and \item the square minimizes $i_2$ and $i_2/(A\,L)$. \end{itemize}} \medskip A plot of $y=\rho/d_O$ against $x=L/d_O$ is shown in Figure~\ref{fig:IIbRhombxy}. \begin{comment} plIIbRhombxy= ParametricPlot[ {4*(T+1/T)/Sqrt[1+Max[T,1/T]^2],1/Sqrt[1+Max[T,1/T]^2]}, {T,0.125,8},PlotStyle->Black] Export["IIbRhombxy.pdf",plIIbRhombxy] \end{comment} \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{IIbRhombxy.pdf}} \caption{For rhombi, plot of $y=\rho/d_O$ against $x=L/d_O$. The top right corner is $(4\sqrt{2},1/\sqrt{2})$ corresponding to a square.} \label{fig:IIbRhombxy} \end{figure} \clearpage \noindent{\bf Kites} \noindent {\it Amongst kites with given area $A$ and distance $d$ between apexes, the rhombus has \begin{itemize} \item the smallest perimeter $L$, \item the largest inradius $\rho=2A/L$, and \item the smallest $4I_2=\rho\,i_2$. \end{itemize}} A proof uses Steiner symmetrisation. \medskip One can easily recover results which, in a more general form, are given in the next subsubsection. \noindent {\it Amongst kites with given sides $s_1$ and $s_2$ the right kite has the largest area.} \noindent For ease of writing, suppose $s_1<s_2$. The rhombus case can be treated separately. Let $h$ be the distance between the vertices adjacent to unequal sides.
Let $\alpha_1$ be the angle at the vertex where the two sides of length $s_1$ meet. Denote by $\beta$, the repeated angle, the angle at the vertices adjacent to the unequal sides. $$A =\frac{1}{2}d h\ {\rm which\ can\ be\ written\ }\ A = s_1 s_2 \sin(\beta). $$ The formula at the right can be deduced from that on the left using the trigonometry: $$ h = 2 s_1\sin(\alpha_1/2), \qquad d= s_2 \frac{\sin(\beta)}{\sin(\alpha_1/2)}, $$ the latter from the triangle sine rule. Thus, since $A= s_1 s_2 \sin(\beta)$, the area is maximized at $\beta=\pi/2$. We remark \begin{eqnarray*} \rho &=& \frac{ s_1 s_2\sin(\beta)}{s_1+s_2} ,\\ \frac{L}{\rho} &=& \frac{2(s_1+s_2)^2}{s_1 s_2\sin(\beta)} . \end{eqnarray*} Also, on using $s_1<s_2$, $$\frac{d_O}{\rho} =\sqrt{1+{\rm max}(\cot(\frac{\beta}{2}),\cot(\frac{\alpha_1}{2}) )^2} , $$ and the sine rule for triangles gives $\alpha_1$ in terms of $s_1$, $s_2$ and $\beta$. \noindent{\bf Other tangential quadrilaterals} For a tangential quadrilateral with angles $\alpha_k$ and sequence of sides $[s_1, s_2, L/2-s_1,L/2-s_2]$, the area is $$ A = \sqrt{s_1\, s_2\, (L/2-s_1)\,(L/2-s_2)} \sin(\frac{\alpha_1+\alpha_3}{2}) . $$ For any quadrilateral, since the sum of the angles is $2\pi$, $$ \sin(\frac{\alpha_1+\alpha_3}{2}) = \sin(\frac{\alpha_2+\alpha_4}{2}) .$$ The formula for the area $A$ above gives the following. \smallskip \noindent {\it Amongst all tangential quadrilaterals with given sequence of sides, the bicentric quadrilateral maximizes $A$ and $\rho=2A/L$.\\ A kite is bicentric iff it is a right kite.} \medskip In general, amongst all kites with a given pair of sides it is {\it not} the case that $i_2/(A\,L)$ is minimized by a right kite. (Let $\beta$ be the repeated angle in the kite. The quantity $i_2/(A\,L)$ plotted as a function of $1/\tan(\beta/2)$ appears to have a unique minimum, which for a rhombus is at $\beta=\pi/2$, but in general is not.)
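The rhombus and kite formulae above are easy to check numerically. In the Python sketch below (our own construction, not from Part IIa) the kite is built as two congruent triangles with sides $s_1$, $s_2$ and included angle $\beta$ glued along the axis diagonal $d$; the shoelace area is compared with $\frac12 dh$ and $s_1 s_2\sin(\beta)$, and the rhombus quantity $L^2/A=8(T+1/T)$ is minimized at the square:

```python
import math, random

def rhombus_L2_over_A(T, rho=1.0):
    """L^2/A for the rhombus with inradius rho and T = tan(half vertex angle)."""
    A = 2 * rho**2 * (T + 1/T)
    L = 4 * rho * (T + 1/T)
    return L**2 / A          # = 8*(T + 1/T)

def kite_area_three_ways(s1, s2, beta):
    # glue two congruent triangles (sides s1, s2, included angle beta)
    # along the axis diagonal d; h = 2y is the cross diagonal
    d = math.sqrt(s1**2 + s2**2 - 2 * s1 * s2 * math.cos(beta))
    x = (s1**2 + d**2 - s2**2) / (2 * d)      # foot of a beta-vertex on the axis
    y = math.sqrt(s1**2 - x**2)
    verts = [(0, 0), (x, y), (d, 0), (x, -y)]
    A = 0.5 * abs(sum(verts[i][0] * verts[(i + 1) % 4][1]
                      - verts[(i + 1) % 4][0] * verts[i][1] for i in range(4)))
    return A, 0.5 * d * (2 * y), s1 * s2 * math.sin(beta)

# L^2/A is minimized at T = 1, the square
Ts = [0.2 + 0.01 * k for k in range(300)]
assert abs(min(Ts, key=rhombus_L2_over_A) - 1.0) < 0.02

# the three kite area expressions agree
random.seed(0)
for _ in range(100):
    s1 = random.uniform(0.5, 2.0)
    s2 = s1 + random.uniform(0.1, 2.0)
    beta = random.uniform(0.3, math.pi - 0.3)
    A, half_dh, ssin = kite_area_three_ways(s1, s2, beta)
    assert abs(A - half_dh) < 1e-9 and abs(A - ssin) < 1e-9
```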
\noindent{\bf More on quadrilaterals} \cite{KKLY17} treat quadrilaterals with diagonals intersecting at right angles. If the side lengths are denoted $s_1$, $s_2$, $s_3$, $s_4$ these satisfy $$ s_1^2 + s_3^2 = s_2^2 + s_4^2 .$$ If a quadrilateral is tangential and has its diagonals intersecting at right angles it is a kite. (See also~\cite{KK20}.) \newpage \noindent{\bf Bicentric quadrilaterals, $(\rho,L,d_O)$} \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{plBicenQuadIIb.pdf}} \caption{$x=L/d_O$, $y=\rho/d_O$ for bicentric quadrilaterals} \label{fig:plBicenQuad} \end{figure} The upper left boundary curve in Figure~\ref{fig:plBicenQuad} (red) corresponds to right kites: the lower right curve (green) corresponds to tangential isosceles trapeziums. The dots are just from randomly generated tangential quadrilaterals. The uppermost point $(4\sqrt{2},1/\sqrt{2})$ corresponds to a square. The bottom boundary is approached by long thin shapes, and the left-most point is $(2,0)$ corresponding to the limit of thin shapes where $L\sim{2d_O}$. \medskip Another approach to the boundary curves involves working with the tangent lengths $\eta_j$. For a bicentric quadrilateral with $\rho=1$ we have $$ \eta_1\eta_3=\rho^2=1= \eta_2\eta_4\ {\rm so}\ \ L= 2\left(\eta_1+\frac{1}{\eta_1}+\eta_2+\frac{1}{\eta_2}\right) .$$ Choose $\eta_1$ to be the largest of the $\eta_k$, so $\eta_1\ge{1}$ and $d_O=\sqrt{1+\eta_1^2}$. Now $\eta_2$ must be less than or equal to $\eta_1$ and greater than or equal to $1/\eta_1$, and consequently $$ L_-=2\left(2+\eta_1+\frac{1}{\eta_1}\right)\le L \le 4\left(\eta_1+\frac{1}{\eta_1}\right) = L_+ .$$ Plotting $y=1/d_O$ against $x=L_-/d_O$ gives one of the boundary curves, and the other is from $y=1/d_O$ against $x=L_+/d_O$.
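These perimeter bounds, and the side-length formula for $L$ given as an aside below, can be checked by generating bicentric quadrilaterals directly from tangent lengths $(\eta_1,\eta_2,1/\eta_1,1/\eta_2)$ and computing $L$ as the sum of the sides $s_k=\eta_k+\eta_{k+1}$; consistency with the square's point $(4\sqrt{2},1/\sqrt{2})$ pins the normalization. A Python sketch (names ours):

```python
import math, random

def bicentric_quad_sides(e1, e2):
    """Sides s_k = eta_k + eta_{k+1} for tangent lengths (e1, e2, 1/e1, 1/e2),
    which satisfy eta_1 eta_3 = eta_2 eta_4 = 1 = rho^2."""
    etas = [e1, e2, 1 / e1, 1 / e2]
    return [etas[k] + etas[(k + 1) % 4] for k in range(4)]

random.seed(1)
for _ in range(200):
    e1 = random.uniform(1.0, 10.0)            # eta_1 the largest tangent length
    e2 = random.uniform(1.0, e1)
    L = sum(bicentric_quad_sides(e1, e2))
    # perimeter bounds in terms of eta_1 alone
    assert 2 * (2 + e1 + 1/e1) - 1e-9 <= L <= 4 * (e1 + 1/e1) + 1e-9
    # side-length formula: s1 the largest side, s2 the larger of the other pair
    s1, s2 = e1 + e2, e1 + 1 / e2
    Lf = s1 * s2 * (s1 + s2 + math.sqrt(4 + (s1 - s2)**2)) / (s1 * s2 - 1)
    assert abs(Lf - L) < 1e-8
```

For the square ($\eta_1=\eta_2=1$) both bounds collapse to $L=8$, giving $x=8/\sqrt{2}=4\sqrt{2}$ as in the figure.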
\begin{comment} ppp = ParametricPlot[{2*(e1 + 1/e1 + 2)/Sqrt[1 + e1^2], 1/Sqrt[1 + e1^2]}, {e1, 1, 16}, PlotStyle -> Red] ppm = ParametricPlot[{4*(e1 + 1/e1)/Sqrt[1 + e1^2], 1/Sqrt[1 + e1^2]}, {e1, 1, 16}, PlotStyle -> Green] Show[{ppp, ppm}, PlotRange -> {{2, 6}, {0, 0.8}}] \end{comment} As an aside we remark that for a bicentric quadrilateral for which $\rho=1$ and $s_1$ is the largest side (so $s_1\ge{2}$) and $s_2$ is the larger of the other pair (so $s_2>1$), $$L = \frac{s_1 s_2\left(s_1+s_2+\sqrt{4+(s_1-s_2)^2}\right)}{s_1 s_2-1} .$$ \clearpage \noindent{\bf General tangential quadrilaterals} In Figure~\ref{fig:IIbtangQuadxy} the scatter of (blue) points towards the left upper are from $T_k$ values with the first 3 not all that far from 1. (Those blue dots that are near the bottom come from those with $T_4$ large, such as occurs if the first 3 of the $T_k$ were near $1/\sqrt{3}$, the $\alpha_k$ near $2\pi/3$.) The red dots are from choosing the first 3 $T_k$ randomly in the interval $(0,1000)$. Large values of $T_k$ cause $d_O$ to be large, hence the cluster close to $y=0$. We haven't definitively defined the boundaries. However we suspect that on the upper left boundary, at least for small enough $y$ ($d_O$ large), the minimum-$L$ shapes are somewhat like the 1-cap, perhaps kites with a small apex angle at distance $d_O$. \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{IIbtangQuadxy.pdf}} \caption{$x=L/d_O$, $y=\rho/d_O$ for tangential quadrilaterals} \label{fig:IIbtangQuadxy} \end{figure} The dashed curve is the bound from the 1-cap, discussed near the beginning of this section. \clearpage \subsubsection{Tangential pentagons} \vspace{1cm} \noindent{\bf Bicentric pentagons} \par\noindent An elaborate formula for the area of a cyclic pentagon in terms of side lengths is given in~\cite{Ro95}.
\vspace{0.5cm} \subsubsection{Tangential hexagons} \vspace{1cm} \noindent{\bf Bicentric hexagons} \par\noindent A question that arises is: what conditions, other than $$s_1+s_3+s_5 = s_2+s_4+s_6 =\frac{L}{2} ,$$ are satisfied by the sides, or tangent lengths $\eta_j$, of a bicentric hexagon? We have, for tangential hexagons, $$ s_1= \eta_1+\eta_2,\ s_2= \eta_2+\eta_3,\ s_3 = \eta_3+\eta_4, $$ $$ s_4= \eta_4+\eta_5,\ s_5= \eta_5+\eta_6,\ s_6= \eta_6+\eta_1,$$ or, in the notation of~\S\ref{subsec:Circulant}, $$ M(6)\, {\mathbf \eta} = {\mathbf s} .$$ From~\cite{RK09} equations~(2.29),~(2.30) $$\rho \sqrt{\frac{\eta_1 \eta_3 \eta_5}{\eta_1+\eta_3+\eta_5}} =\eta_2 \eta_5 = \eta_1 \eta_4 = \eta_3 \eta_6 , $$ and, from~\cite{RK09} equation~(2.26) $$ \eta_1 \eta_3 + \eta_3 \eta_5 + \eta_5 \eta_1 =\eta_2 \eta_4 + \eta_4 \eta_6 + \eta_6 \eta_2 =\rho^2 . $$ \vspace{0.5cm} \noindent{\bf Cyclic hexagons} An elaborate formula for the area of a cyclic hexagon in terms of side lengths is given in~\cite{Ro95}. \goodbreak \subsection{$(\rho,R_V,L)$ and generalizing Blundon's inequality}\label{subsec:rLRV} We hope to treat bicentric quadrilaterals in the future, but, for now, here are results for triangles. \subsubsection{Triangles} One of the many entries in Wikipedia's list of triangle inequalities is the following, which in much of the literature is known as Blundon's inequality: {\small \begin{equation} 2R_V^{2}+10R_V\rho-\rho^{2}-2\sqrt{R_V}(R_V-2\rho)^{3/2} \leq \frac{L^{2}}{4} \leq 2R_V^{2}+10R_V\rho-\rho^{2}+2\sqrt{R_V}(R_V-2\rho)^{3/2} . \label{in:Blund} \end{equation} } Quoting from Wikipedia: \begin{quote} Here the expression $$\sqrt {R_V^{2}-2R_V\rho} ={\rm dist(incentre,circumcentre)} .$$ In the double inequality~(\ref{in:Blund}), the first part holds with equality if and only if the triangle is isosceles with an apex angle of at least $\pi/3$, and the last part holds with equality if and only if the triangle is isosceles with an apex angle of at most $\pi/3$.
Thus both are equalities if and only if the triangle is equilateral. \end{quote} Let $x=L/R_V$, $y=\rho/R_V$. The allowed $(x,y)$ pairs form the region between the curves given by $$4 (2+10 y - y^2 -2(1-2 y)^{3/2}) \le x^2 \le 4 (2+10 y - y^2 +2(1-2 y)^{3/2}) , $$ and shown in Figure~\ref{fig:IIbBlund}. \begin{comment} ppBlund1= ParametricPlot[{2*Sqrt[(2+10 y - y^2 -2(1-2 y)^(3/2))],y}, {y,0,1/2},PlotStyle->Red]; ppBlund2= ParametricPlot[{2*Sqrt[(2+10 y - y^2 +2(1-2 y)^(3/2))],y}, {y,0,1/2},PlotStyle->Red]; IIbBlund = Show[{ppBlund1,ppBlund2}] Export["IIbBlund.pdf",IIbBlund] \end{comment} \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{IIbBlund.pdf}} \caption{The lhs of the Blundon inequality is shown in red, the rhs in blue dashed. } \label{fig:IIbBlund} \end{figure} \begin{comment} plBlunL = ParametricPlot[{Sqrt[4*(2+10*y - y^2 -2*(1-2*y)^(3/2))],y}, {y,0,2},PlotStyle->Red] plBlunU = ParametricPlot[{Sqrt[4*(2+10*y - y^2 +2*(1-2*y)^(3/2))],y}, {y,0,2},PlotStyle->{Blue,Dashed}] IIbBlund= Show[{plBlunL,plBlunU}, AspectRatio -> 1] \end{comment} Some of the behaviour of isosceles triangles for the various Blaschke-Santalo diagrams is indicated in the table below. \begin{center} \begin{tabular}{|c|c|c|} \hline & squat & tall\\ \hline $L/R$ & $\sim{4}$&$\sim{4}$\\ $\rho/R$& $\rightarrow{0}$& $\rightarrow{0}$\\ \hline $L/d_O$ & $\sim{4}$&$\sim{2}$\\ $\rho/d_O$& $\rightarrow{0}$& $\rightarrow{0}$\\ \hline $L/R_V$ & $\sim{0}$&$\sim{4}$\\ $\rho/R_V$& $\rightarrow{0}$& $\rightarrow{0}$\\ \hline \end{tabular} \end{center} \clearpage \section{Moments about the centroid, $I_c$}\label{sec:centroids} The moment of inertia about the centroid, $I_c$, is, amongst $n$-gons with a given area, minimized by the regular $n$-gon: \begin{eqnarray*} I_c &\ge& I_c({\rm reg-n-gon}) ,\\ &=&\frac{1}{4} \rho({\rm reg-n-gon})\, i_2({\rm reg-n-gon}) , \\ &=&\frac{A^2 (3+\tau_n^2)}{6 n \tau_n}\ {\rm where\ } \ \tau_n = \tan(\frac{\pi}{n}) .
\end{eqnarray*} (The calculations for the equations above are in Part IIa.) The St Venant inequality is the left-hand inequality below $$ \frac{Q_0}{A^2}\le \frac{1}{8\pi} \le \frac{I_c}{4 A^2} \le \frac{I_2}{4 A^2} ,$$ where, as before, $I_2$ is the moment about the incentre, and, for tangential polygons, $I_2=\rho i_2/4$. We can combine this with the inequality near the end of Part I~\S\ref{subsect:tangNew}: for any tangential polygon $$\frac{1}{8}\rho^2 A \le Q_{0-}\le Q_0\le\frac{1}{4}I_c\le\frac{1}{4}I_2\le Q_{0+} .$$ \subsection{Formulae for $I_2-I_c=A |z_c-z_I|^2 $} The identity in the subsection heading is from the Parallel Axis Theorem.\\ \verb$https://en.wikipedia.org/wiki/Parallel_axis_theorem$ Denote by $z_c$ the coordinates of the centroid, and $z_I$ those of the incentre. Take, as elsewhere, the incentre to be the origin. Write $$ d_{IG} = |z_c-z_I| . $$ \medskip \noindent{\bf Triangles} \noindent For triangles $d_{IG}$ is given in terms of the side lengths in\\ \verb$https://mathworld.wolfram.com/Incenter.html$\\ Let the sides be denoted by $s_k$ and define \begin{eqnarray*} S_1 &=& s_1+s_2+s_3 , \\ S_2 &=& s_1 s_2+s_2 s_3+s_3 s_1 ,\\ S_3 &=& s_1 s_2 s_3 . \end{eqnarray*} Then \begin{equation} d_{IG}^2 =\frac{ 5 S_1 S_2 -S_1^3 -18 S_3}{9 S_1} . \label{eq:dIG3} \end{equation} \begin{comment} (* d_{IG}^2 is IGsqr *) IGsqr=-(a^3 - 2*a^2*b - 2*a*b^2 + b^3 - 2*a^2*c + 9*a*b*c - 2*b^2*c - 2*a*c^2 - 2*b*c^2 + c^3)/ (9*(a + b + c)); (* below is 0 *) Simplify[IGsqr + (18*a*b*c + (a + b + c)^3 - 5*(a + b + c)*SymmetricPolynomial[2, {a, b, c}])/(9*(a + b + c))] (* 0 *) SymmetricPolynomial[2, {a, b, c}] (* a b + b c + c a *) \end{comment} \medskip \noindent{\bf Quadrilaterals} For a rhombus the centroid and incentre coincide at the point of intersection of the diagonals. We have yet to derive, for tangential quadrilaterals or special cases of these, formulae analogous to~(\ref{eq:dIG3}).
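Formula~(\ref{eq:dIG3}) is easily checked numerically: place the triangle in coordinates, compute the centroid and the incentre (the side-length-weighted vertex average) directly, and compare. A Python sketch (our own names; for the 3-4-5 triangle both give $d_{IG}^2=1/9$):

```python
import math, random

def dIG_formula(a, b, c):
    """Squared incentre-centroid distance from side lengths, as in (eq:dIG3)."""
    S1, S2, S3 = a + b + c, a*b + b*c + c*a, a*b*c
    return (5 * S1 * S2 - S1**3 - 18 * S3) / (9 * S1)

def dIG_direct(a, b, c):
    # place the triangle: A = (0,0), B = (c,0); side a opposite A, b opposite B
    Cx = (b*b + c*c - a*a) / (2 * c)
    Cy = math.sqrt(b*b - Cx*Cx)
    A, B, C = (0.0, 0.0), (c, 0.0), (Cx, Cy)
    G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
    # incentre = side-length-weighted average of the vertices
    I = ((a*A[0] + b*B[0] + c*C[0]) / (a + b + c),
         (a*A[1] + b*B[1] + c*C[1]) / (a + b + c))
    return (G[0] - I[0])**2 + (G[1] - I[1])**2

random.seed(2)
for _ in range(100):
    a, b = random.uniform(1, 3), random.uniform(1, 3)
    c = random.uniform(abs(a - b) + 0.1, a + b - 0.1)   # triangle inequality
    assert abs(dIG_formula(a, b, c) - dIG_direct(a, b, c)) < 1e-9

assert abs(dIG_formula(1, 1, 1)) < 1e-12   # equilateral: incentre = centroid
```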
See~\cite{KKLY17,KK20} for results concerning centroids, and characterizations of kites, etc. See also~\cite{My06,HS09}. \begin{comment} s1=e1+e2; s2=e2+e3; s3=e3+e4; s4=e4+e1; Simplify[s1*s3-s2*s4] (* (-e1 + e3) (e2 - e4) *) Simplify[s1^2 + s3^2 - (s2^2 + s4^2)] (* twice this *) \end{comment} \begin{comment} If the incircle is tangent to the sides AB, BC, CD, DA at T1, T2, T3, T4 respectively, and if N1, N2, N3, N4 are the isotomic conjugates of these points with respect to the corresponding sides (that is, AT1 = BN1 and so on), then the Nagel point of the tangential quadrilateral is defined as the intersection of the lines N1N3 and N2N4. Both of these lines divide the perimeter of the quadrilateral into two equal parts. More importantly, the Nagel point N, the "area centroid" G, and the incenter I are collinear in this order, and NG = 2GI. This line is called the Nagel line of a tangential quadrilateral. \end{comment} \newpage \begin{center} {\large{{\textsc{ Part III: Other inequalities and properties for $Q_0$,\\ isoperimetric inequalities, etc. }}}} \end{center} \section*{Abstract for Part III} A referee asked for a bit more on `context' and more numerics. There is a huge literature on bounds for torsional rigidity and in this part I focus on that most relevant to convex polygons. \begin{itemize} \item I also return to more on triangles, especially isosceles triangles for which, unsurprisingly, if one makes use of information specific to triangles, one can improve on my $Q_{0-}$ bound of Part I. \item Numerics for rhombi again indicate that $Q_{0-}$ is quite close to the actual torsional rigidity. \end{itemize} \section{Introduction to Part III}\label{sect:IntroIII} A famous open problem is as follows.\\ {\it Amongst all simple polygons with $n$ vertices with a given area, does the regular $n$-gon have the greatest torsional rigidity?}\\ \cite{PoS51} establishes this for $n = 3$ and $n = 4$ but the question is open for $n > 4$.
It remains open for tangential $n$-gons too.\\ It will be easier to investigate this question, for tangential $n$-gons, for the lower bound $Q_{0-}$ rather than the actual torsional rigidity $Q_0$. The results in this direction are, so far, very slight (just isosceles triangles). We have yet to establish it even for general triangles ($n=3$). \medskip An outline for this part is as follows. \begin{itemize} \item In~\S\ref{sec:BoundsQIso} we present some previously published bounds. We also introduce the style of isoperimetric inequalities which will be studied in later sections. \item In~\S\ref{sec:genQ0} we review some bounds on $Q_0$. \item In~\S\ref{sec:MoreIsos} we study isosceles triangles. There are many questions on how the geometry affects the domain functionals. We have, in~\S\ref{app:isosIsop}, some isoperimetric results: some functionals are optimized, over isosceles triangles with given area, by the equilateral triangle. \item In~\S\ref{sec:GenTri} (yet to be written!) we will, very briefly, consider extending the work on $Q_{0-}$ to general triangles. \item In~\S\ref{sec:RhombiQ} we show, numerically for rhombi, how close $Q_{0-}$ is to $Q_0$.\\ This parallels the work on isosceles triangles presented at the end of Part I. \item In~\S\ref{sec:BSQ} we consider Blaschke-Santalo diagrams involving $Q_0$, or $Q_{0-}$, for tangential polygons. \item In~\S\ref{sec:QFurtherQs} I give further questions in connection with $Q_{0-}$. Some of these are of the form: if some property has been established for $Q_0$, does $Q_{0-}$ have the same property? Domain monotonicity is one such property, and one question is whether it remains, for $Q_{0-}$, under corner cutting (see Part IIb~\S\ref{sec:Transformations}). Another, not discussed there, is whether, amongst tangential $n$-gons with a given inradius, the regular $n$-gon minimizes $Q_{0-}$. \end{itemize} \section{Bounds on $Q_0$, esp.
isoperimetric}\label{sec:BoundsQIso} Other papers involving the torsional rigidity of tangential polygons include~\cite{CFG02}\S{2.2} involving web functions and~\cite{Sal18}. Web functions are particularly simple for tangential polygons, and align with the similar level curves of~\cite{PoS51}, level curves which are the same shape as the boundary. In~\cite{Sal18} the bounds are in terms of $$ I(q,\partial\Omega) = \int_\Omega {\rm dist}(z,\partial\Omega)^q . $$ (The notation is that of~\cite{Ke07} as, in this Supplement, there are already other uses for $I$ and $i$. The capital $I$ is, as with our other use, an integral over the domain.) For tangential polygons $I(0,\partial\Omega)=|\Omega|$ and $$ I(q,\partial\Omega) = \frac{(p+1)(p+2)}{(q+1)(q+2)} I(p,\partial\Omega)\rho(\Omega)^{q-p} = \frac{2}{(q+1)(q+2)}\, |\Omega|\, \rho(\Omega)^q .$$ We specialise a much more general theorem of \cite{Sal18} to the following.\\ {\bf Theorem~\cite{Sal18}}. {\it Let $\Omega$ be a tangential polygon. Then $$Q_0(\Omega) \ge \frac{1}{16} (p + 1)(p + 2)I(p,\partial\Omega)\,\rho(\Omega)^{2-p} \qquad{\rm where}\ -1 \le p < \infty.$$ Equality holds if $\Omega$ is a disk.}\\ In particular, with $p=2$ $$ \frac{1}{2} |\Omega|\rho^2 = 3 I(2,\partial\Omega) \le 4Q_0(\Omega) , $$ which recovers~\cite{PoS51}\S5.8 equation~(7) on p100. See inequality~(\ref{in:PS}) in Part I. \cite{Sal18} is largely concerned with how this might generalize to domains other than tangential polygons. \medskip The famous open problem stated at the beginning of this part is as follows.\\ {\it Amongst all simple polygons with $n$ vertices with a given area, does the regular $n$-gon have the greatest torsional rigidity?}\\ Repeating from there: \cite{PoS51} establishes this for $n=3$ and $n=4$ but the question is open for $n>4$. Of course the question can be asked for smaller classes, e.g. convex polygons or tangential polygons or bicentric polygons or ... To date there are no answers for the torsional rigidity.
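Returning to the distance-function integrals: the displayed value of $I(q,\partial\Omega)$ for tangential polygons follows from the layer-cake formula, since $|\{z:{\rm dist}(z,\partial\Omega)>t\}|=A(1-t/\rho)^2$ for $0\le t\le\rho$. As a numeric check on the $p=2$ case, for the square $[-1,1]^2$ ($\rho=1$, $A=4$) midpoint-rule quadrature of ${\rm dist}^2$ should give $I(2,\partial\Omega)=A\rho^2/6=2/3$, i.e. $3I(2,\partial\Omega)=\frac12 A\rho^2$:

```python
# midpoint-rule quadrature of dist(z, boundary)^2 over the square [-1,1]^2,
# which has inradius rho = 1 and area A = 4
m = 400
h = 2.0 / m
total = 0.0
for i in range(m):
    x = -1.0 + (i + 0.5) * h
    for j in range(m):
        y = -1.0 + (j + 0.5) * h
        d = min(1.0 - abs(x), 1.0 - abs(y))    # distance to the boundary
        total += d * d * h * h
# layer-cake with |{dist > t}| = A(1 - t/rho)^2 gives I(2) = A rho^2/6 = 2/3
assert abs(total - 2.0 / 3.0) < 1e-3
```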
There are, however, answers for some other domain functionals: see Part IIb for purely geometric ones. The conformal inradius and related radii are other quantities that, at fixed area, are extremized by the regular $n$-gon (see~\cite{SZ04}). By ${\cal D}$ we shall mean a domain functional, and we are interested in pairs of these for which one has a result of the form\\ {\it For tangential $n$-gons with fixed ${\cal D}_1$ the regular $n$-gon $<$maximizes$|$minimizes$>$ ${\cal D}_2$.}\\ Table~\ref{tbl:tbl1arIso} below presents some such results (repeating some entries from Table~\ref{tbl:tbl1ar} of Part IIb): \begin{table}[h] \begin{center} \begin{tabular}{|| c | c | c | c ||} \hline ${\cal D}_1$& & ${\cal D}_2$& Remark \\ \hline area & min& perimeter& \\ area & max& inradius& $A=\rho L/2$ \\ inradius& min& perimeter& Jensen inequality\\ inradius& min& area& " \\ inradius& min& $Q_0$& Solynin~\cite{Sol92,SZ10}\\ & & &see below\\ \hline \end{tabular} \caption{Tangential polygons} \label{tbl:tbl1arIso} \end{center} \end{table} In~\cite{Sol92} the result, at fixed inradius $\rho$, that $Q_0$ is minimized at the regular $n$-gon is first given, in Theorem 1, for tangential $n$-gons, and, after that, in Theorem 2, for more general $n$-gons. (I have yet to check the proofs.) \vspace{0.5cm} \subsection{Bounds on $Q_0$ for convex $\Omega$} We have, in Part I, reported the results, from~\cite{PoS51} $$ \frac{A^2}{4B}\le Q_0 \le \frac{A^2}{8\pi} .$$ The left-hand inequality is in Part I, inequality~(\ref{in:PS}): the right-hand inequality is the St Venant Inequality. There is equality in both for disks. \cite{PoS51} p99 gives, for convex domains, $$ B\le \frac{2A}{\rho^2} ,$$ which is an equality for tangential polygons. Combining these gives, for convex domains, the left-hand inequality in $$\frac{1}{8}\rho^2 A \le Q_0 \le c \rho^2 A . $$ The left hand inequality is from~\cite{PoS51}\S5.8~equation~(7), and equality occurs for $\Omega$ a disk.
The right hand inequality with $c=\frac{1}{3}$ is one of several due to Makai, and equality is approached by rectangles as they become long and slender, leaving the question as to the best constant $c$ when restricted to tangential polygons. From~\cite{PoS51} p254, for a thin rectangle $$ Q_0 \sim\frac{1}{3} \rho^2 A \ \ {\rm for}\ \ \rho\rightarrow{0} . $$ For thin isosceles triangles, with small vertex angle, $$ Q_0 \sim \frac{1}{6} \rho^2 A \ \ {\rm for}\ \ \rho\rightarrow{0} . $$ For the lower bound found in Part I $$Q_{0-} \sim \frac{21}{128} \rho^2 A \ \ {\rm for}\ \ \rho\rightarrow{0} .$$ This checks with $Q_{0-}\le Q_0$. Another inequality for convex domains is $$\frac{1}{3}\frac{A^3}{L^2} \le Q_0 \le \frac{2}{3}\frac{A^3}{L^2} . $$ The left inequality is approached by thin rectangles. The right inequality is approached by thin acute isosceles triangles. (It is curious that thin rectangles occur as upper bounds in one inequality in this section, and lower bounds in another.) However, inequalities for convex sets, $$ \frac{\rho}{2}\le \frac{A}{L}\le \rho(1-\frac{\pi\rho}{L})\le \rho ,$$ (in which the central inequality is equality for a disk,~\cite{SA00}) are consistent with $$\frac{1}{3}\frac{A^3}{L^2} \le Q_0 \le \frac{1}{3}\rho^2 A . $$ For tangential polygons the inequalities above give $$\frac{1}{2}\frac{A^3}{L^2}=\frac{1}{8}\rho^2 A \le Q_0 \le \frac{1}{6} \rho^2 A = \frac{2}{3}\, \frac{A^3}{L^2} . $$ The extreme domains are the disk (left) and thin isosceles triangles (right). See also~\cite{BBP20}. \subsection{Cheeger constant and $Q_0$} We have $$ Q_0(\Omega) h(\Omega)^4 \ge Q_0(B) h(B)^4 = 2\pi. $$ Equality occurs only for disks. See~\cite{LMR18}, and specialize to 2-dimensions.
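The disk value above is immediate from $Q_0(B_r)=A^2/(8\pi)=\pi r^4/8$ and $h(B_r)=2/r$; the product $Q_0\,h^4$ is scale invariant. A trivial Python check (names ours):

```python
import math

def disk_Q0(r):       # Q0 = A^2/(8 pi) = pi r^4/8 for a disk of radius r
    return math.pi * r**4 / 8

def disk_cheeger(r):  # Cheeger constant of a disk of radius r
    return 2.0 / r

# the scale-invariant product Q0 * h^4 equals 2*pi for every disk
for r in (0.5, 1.0, 3.0):
    assert abs(disk_Q0(r) * disk_cheeger(r)**4 - 2 * math.pi) < 1e-12
```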
Thus for our tangential polygons $$Q_0(\Omega)\ge \frac{32\pi A^4}{(L+\sqrt{4\pi A})^4} .$$ \begin{comment} hFn[A_,L_]:= (L+Sqrt[4*Pi*A])/(2*A) Simplify[2*Pi/hFn[Pi, 2*Pi]^4] (* returns Pi/8 as it should *) Qh4Equilat = Simplify[(Sqrt[3]/20)*hFn[Sqrt[3],6]^4] (* after loading isosQ0Bndsmma.txt *) QhGuess = Qh4Equilat/hFn[Sqrt[3],L3s]^4; plQhGuess = Plot[QhGuess,{sigma,0,1},PlotStyle->{Green,Dashed}] plQ0LBcube2 = Show[{plIsos, plQ0LBcub, plQhGuess}, PlotRange -> {{0.1, 0.55}, {0.06, 0.09}}, AxesOrigin -> {0.2, 0.06}] (* The green dashed curve is higher than the others near equilat *) \end{comment} \vspace{4cm} Much is known about the elastic torsion function and, in particular, its properties in convex domains. It is known that in convex $\Omega$, the square root of the torsion function $\sqrt{u_0}$ is concave. In some proofs of this one uses that, for solutions of the torsion equation, $\log(1-4H)$, with $H$ the Hessian determinant, is harmonic. $u_0$ is, itself, not concave in any domain with corners. However one can ask on what subset $\Omega_1$ of $\Omega$ is $u_0$ concave. For an equilateral triangle (and for a disk), $\Omega_1$ contains the incircle. (See~\S{11} of~\cite{KMtorsion}.) One wonders what additional conditions, if any, are needed for $\Omega_1$ to contain the incircle in other tangential polygons. Improvements on the St Venant inequality involving Fraenkel asymmetry are given in~\cite{BP16}. \medskip There is a Urysohn-type inequality which we denote by (U).\\ (U): among convex sets with given mean width, the torsional rigidity is maximized by balls.\\ The mean width $w$ of any compact shape $\Omega$ in two dimensions is $L/\pi$, where $L$ is, as before, the perimeter of the convex hull of $\Omega$. So $w$ is the diameter of a circle with the same perimeter as the convex hull. So in (U) one is just maximizing at fixed perimeter. Inequality (U) is weaker than St Venant.
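The statement $w=L/\pi$ (Cauchy's formula) can be illustrated by averaging the directional widths of a square, here in a small Python sketch (names ours):

```python
import math

def mean_width_square(a, m=100000):
    """Average directional width of the square [0,a]^2 (width has period pi)."""
    tot = 0.0
    for k in range(m):
        t = math.pi * (k + 0.5) / m
        tot += a * (abs(math.cos(t)) + abs(math.sin(t)))   # width in direction t
    return tot / m

a = 1.0
assert abs(mean_width_square(a) - 4 * a / math.pi) < 1e-6   # w = L/pi, L = 4a
```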
There are inequalities involving the polar moment of inertia about the centroid.\\ {\it For convex domains $Q_0 I_c A^{-4}$ is maximized by the equilateral triangle.} (Related results, if not exactly this, are in~\cite{Pol55}.)\\ We conjecture:\\ {\it For tangential polygons $Q_0 I_2 A^{-4}$ is maximized by the equilateral triangle.} Sperb's bound gives $u_{\rm max}\le\rho^2$; for an infinite strip of half-width $\rho$, $u_0 = (\rho^2-x^2)/2$, so that $u_{\rm max}=\rho^2/2$. \begin{comment} \section{Isoperimetric inequalities for polygons}\label{sec:genIsop} Especially after~\cite{PoS51} there has been extensive interest in isoperimetric inequalities. In this connection consider sets of simply-connected domains with a given area: \begin{eqnarray*} {\cal A}(A) &=& \{\Omega\ | \ {\rm simply-connected\ with\ }|\Omega|=A\} ,\\ {\cal A}_n(A) &=& \{\Omega_n\ | \ {\rm polygon\ with\ }n\ {\rm sides\ and\ }|\Omega_n|=A\}. \end{eqnarray*} Further isoperimetric results - equilateral triangles maximizing over ${\cal A}_3(A)$ - are given in later sections. \end{comment} \section{Some bounds on $Q_0$}\label{sec:genQ0} Recall the well-known estimates \begin{eqnarray*} Q_0 &\le& \frac{\pi}{8}\, \left(\frac{A}{\pi}\right)^2 = Q_{\odot,0} ,\\ Q_0 &\le& \Sigma_\infty . \end{eqnarray*} Define, in the notation of~\cite{PoS51}, \begin{equation} B_\Omega = \int_{\partial\Omega} \frac{1}{x\cdot{n}}\, ds . \label{eq:Bdef} \end{equation} Some well known lower bounds are: \begin{eqnarray*} Q_0 &\ge& \frac{A^2}{4\ B_\Omega} \quad (= \frac{A^3}{2L^2}\ {\rm for\ tangential\ polygons}) ,\\ Q_0 &\ge& \frac{\pi}{8}\, {\dot{r}}^4 , \end{eqnarray*} where ${\dot{r}}$ is the maximum interior mapping radius of $\Omega$. \section{More on isosceles triangles}\label{sec:MoreIsos} \subsection{Isoperimetric inequalities for isosceles triangles}{\label{app:isosIsop}} Numerical data on functionals associated with isosceles triangles were given in~\S\ref{sec:isos}. Our main emphasis in the following will be isosceles triangles rather than general triangles.
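As a sanity check of the estimates of the preceding section, take the equilateral triangle of area $\sqrt{3}$ (side 2, $L=6$), for which $Q_0=\sqrt{3}/20$ as recorded elsewhere in these notes; a short numerical sketch:

```python
from math import pi, sqrt

# Equilateral triangle, side 2: A = sqrt(3), L = 6, Q_0 = sqrt(3)/20.
A, L = sqrt(3), 6.0
Q0 = sqrt(3) / 20

upper_stvenant = (pi / 8) * (A / pi)**2   # Q_0 <= (pi/8)(A/pi)^2
lower_B = A**3 / (2 * L**2)               # Q_0 >= A^2/(4B) = A^3/(2L^2), tangential case
```

Here the lower bound evaluates to $\sqrt{3}/24 \approx 0.0722$ and the St Venant upper bound to $3/(8\pi) \approx 0.1194$, sandwiching $Q_0 \approx 0.0866$.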
However, some general statements are available. Recall that ${\cal A}$ is all (simply-connected) domains, ${\cal A}_n$ is all $n$-gons. \begin{itemize} \item $Q_0/A^2$ is maximized over ${\cal A}_3$ (triangles) for an equilateral triangle. \item $B=L/\rho$ is minimized over ${\cal A}_3$ (triangles) for an equilateral triangle. \item ${\dot r}/\sqrt{A}$ is maximized over ${\cal A}_3$ (triangles) for an equilateral triangle. \end{itemize} \subsubsection{$B$}\label{app:BIsosIsop} $B=L/\rho$ for any tangential polygon, so for isosceles triangles in particular.\\ The disk minimizes $B$ and $Q_0 B/A^2$ over tangential polygons.\\ The equilateral triangle minimizes $B$ and $Q_0 B/A^2$ over triangles. \begin{tabular}{|| c | c | c | c | c | c ||} \hline shape&$n$& $8\pi Q_0/A^2$& $Q_0/(A/\pi)^2$&$B=\frac{L}{\rho}$&$4Q_0 B/A^2$ \\ \hline disk&$\infty$& 1& $\pi/8$& $2\pi$& $1$ \\ hexagon&6& 0.9643& 0.3786& $4\sqrt{3}$&$1.063$ \\ square& 4& 0.8834& 0.3469& $8$&$1.125$ \\ \hline equilateral $\Delta$&3& 0.7255& 0.2849& $6\sqrt{3}$&$1.200$ \\ right isosceles& &0.6557& 0.2575& $4(1+\sqrt{2})$&$1.217$ \\ \hline \end{tabular} \subsubsection{$\dot r$}\label{app:rdotIsosIsop} \bigskip The calculation of $\dot r$ for polygons often involves the use of Schwarz-Christoffel conformal mappings. Some exact values are given on p.~273 of~\cite{PoS51}. \begin{itemize} \item (Aside, except for $n=3$.) For a regular polygon with $n$ sides, and perimeter $L_n$, $$ {\dot r}_n =\frac{\Gamma(1-\frac{1}{n})}{2^{1-2/n}\, \Gamma(\frac{1}{2})\, \Gamma(\frac{1}{2}-\frac{1}{n})}\, L_n . $$ \item Again citing~\cite{PoS51} p.~158, amongst all triangles with a given area, that which maximizes $\dot r$ is equilateral. \item For (regular polygons and) triangles we have $$\pi {\dot r} {\bar r} = A, $$ $A$ being the area and $\bar r$ the transfinite diameter.
\item Working from earlier more general results (Haegi 1951) in~\cite{Fi14rdot} it is given that, for an isosceles triangle with vertex angle $\alpha$ and base $2\sin(\alpha/2)$, the transfinite diameter, denoted there by $\kappa$, is $$\kappa(\alpha) = \frac{\sqrt{\pi+\alpha}}{8\pi^{5/2}} \left(\frac{\pi+\alpha}{4\alpha}\right)^{\alpha/(2\pi)} \, \frac{\sin(\alpha)^2}{\sin(\alpha/2)}\, \Gamma(\frac{\alpha}{\pi})\, \Gamma(\frac{\pi-\alpha}{2\pi})^2 . $$ \end{itemize} For isosceles triangles with area $A$ and vertex angle $\alpha$, the base is $2\sqrt{A\tan(\alpha/2)}$, so scaling gives $${\bar r} = \frac{\sqrt{A\tan(\alpha/2)}}{\sin(\alpha/2)}\, \kappa \ \ {\rm so\ \ } {\dot r} = \frac{A}{\pi{\bar r}} = \sqrt{A}\, \frac{\sin(\alpha/2)}{\pi\sqrt{\tan(\alpha/2)}}\, \frac{1}{\kappa} . $$ Some values of $\dot r$, as given in~\cite{PoS51}, are copied in the following table. \begin{tabular}{|| c | c | c | c | c | c ||} \hline shape&$ $& $\dot r$ (exact)& $\dot r$ (numerical)&$\frac{\dot r}{\sqrt{A}}$& $8Q_0/(\pi{\dot r}^4)$ \\ \hline disk&radius $a$& $a$& $a$& $0.56419$& $1$ \\ hexagon&side $s_6$& $\frac{2^{5/3}\, \sqrt{3}\pi}{\Gamma(1/3)^3}\, s_6$& $0.89850 s_6$& $0.55744$&$1.011$ \\ square&side $s_4$& $\frac{4\sqrt{\pi}}{\Gamma(1/4)^2}\,s_4$&$0.53935 s_4$& $0.53935 $& $1.058$ \\ \hline equilateral $\Delta$&side $s_3$ & $\frac{2\pi}{\Gamma(1/3)^3}\, s_3$& $0.3268\, s_3$ & $0.49665$&$1.209$ \\ right isosceles& equal sides $a$&$\frac{4\sqrt{2\pi}}{3^{3/4}\, \Gamma(1/4)^2} \, a$& $0.33462 a$& $0.47320$&$1.325$ \\ \hline \end{tabular} \subsubsection{Simple comments on torsion for isosceles triangles}\label{subsec:isosQ0} As noted before, for an equilateral triangle the torsion function is a cubic polynomial in $x$ and $y$, the product of three linear terms, each linear term being 0 on one side of the triangle. There appear to be no other simple solutions for isosceles triangles, though there is a series formula for the torsional rigidity of the right isosceles triangle.
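Several of the table entries, and the $\kappa$ formula, can be cross-checked numerically. At the equilateral case $\alpha=\pi/3$ the base is $2\sin(\pi/6)=1$, the area is $\sqrt{3}/4$, and $\pi\dot r\bar r = A$ forces $\bar r = A/(\pi\dot r)$, which should agree with $\kappa(\pi/3)$. A sketch:

```python
from math import pi, sin, gamma, sqrt

# Exact mapping radii from the table, unit side.
rdot_sq = 4 * sqrt(pi) / gamma(0.25)**2                  # square
rdot_eq = 2 * pi / gamma(1 / 3)**3                       # equilateral triangle
rdot_hex = 2**(5 / 3) * sqrt(3) * pi / gamma(1 / 3)**3   # regular hexagon

def kappa(alpha):
    """Transfinite diameter of the isosceles triangle with vertex angle
    alpha and base 2*sin(alpha/2), as quoted above from Haegi's result."""
    return (sqrt(pi + alpha) / (8 * pi**2.5)
            * ((pi + alpha) / (4 * alpha))**(alpha / (2 * pi))
            * sin(alpha)**2 / sin(alpha / 2)
            * gamma(alpha / pi) * gamma((pi - alpha) / (2 * pi))**2)

# Equilateral cross-check: base 2*sin(pi/6) = 1, area sqrt(3)/4.
rbar_eq = (sqrt(3) / 4) / (pi * rdot_eq)
```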
In the table below, in the second, third and fourth columns we take the base to be 2 and the height to be $h$, so that the area is $h$; in the fifth and sixth columns the area is $\sqrt{3}$, the height is $h$ and the base $2\sqrt{3}/h$. \bigskip \begin{tabular}{|l|c|c|c|c|c|} \hline infinitely acute & $h \rightarrow\infty $&$A\sim{h}$& $Q_0\sim \frac{1}{6} h $& $h \rightarrow\infty $& $Q_0\sim\frac{\sqrt{3}}{2h^2}$\\ equilateral & $h=\sqrt{3}$& $A=\sqrt{3}$ &$Q_0=\frac{\sqrt{3}}{20}$& $h=\sqrt{3}$& $Q_0=\frac{\sqrt{3}}{20}$\\ right isosceles &$h=1$& $A=1$&$Q_0 = 0.026091$ & $h=3^{1/4}$& $Q_0 = 0.07827$ \\ infinitely flat & $h\rightarrow{0}$&$A\sim{h}$& $Q_0\sim \frac{1}{24} h^3$& $h\rightarrow{0}$& $Q_0\sim\frac{\sqrt{3}}{24}h^2$ \\ \hline \end{tabular} \medskip \goodbreak \subsubsection{More variational bounds on $Q_0$ for isosceles triangles} \noindent The bounds on $Q_0$ we report here are often very old. There are, of course, variational formulations of Problem (P(0)). For positive, differentiable functions $v$, let $$ E(v, 0, k) = \int_\Omega v^k \qquad{\rm and\ \ } E(v,1,k) = \int_\Omega |\nabla v|^k \ . $$ With \begin{equation} Q_{0,\rm LB}(v)= E(v,0,1)^2 / E(v,1,2) \label{eq:e2} \end{equation} it can be shown that for any smooth function $v$ vanishing on the boundary of $\Omega$, the expression $Q_{0,\rm LB}(v)$ provides a lower bound on the torsional rigidity, with equality for the torsion function: $Q_0=Q_{0,\rm LB}(u_0)$. The theory for this is given in~\cite{PoS51}. For isosceles triangles, base $2a$, height $h$, a simple trial function $v$ to substitute into (\ref{eq:e2}) is \begin{equation} v_{\rm cub} = (y+\rho) (1-(x/a)-((y+\rho)/h)) (1+(x/a)-((y+\rho)/h)) .
\label{eq:e3} \end{equation} Evaluating the integrals gives \begin{equation} Q_{0\rm{cub}}=Q_{0,\rm LB}(v_{\rm cub} )= \frac{a^3 h^3}{(30 a^2 + 10 h^2)} = \frac{A^2}{30\tau +10/\tau}, \label{eq:e4} \end{equation} with $\tau=a/h=\tan(\alpha/2)$ as before, and the expression on the right of (\ref{eq:e4}) is, in fact, an approximation given in Roark's tables, valid for the vertex angle $\alpha$ of the isosceles triangle lying between 40 and 80 degrees. It is exact for the equilateral case where the vertex angle is 60 degrees. It would, we think, be an improvement to Roark's tables to have noted, after defining this rational function of $a$ and $h$, that $Q_0>Q_{0,\rm LB}(v_{\rm cub})$ for all positive values of $a$ and $h$, and then to have noted the range of $h/a$ over which it provides a good approximation to $Q_0$. We can compare this lower bound with our earlier $Q_{0-}$ as shown in Figure~\ref{fig:plIsos}. In the next figures the bound of~(\ref{eq:e4}) is shown brown, dashed. Unsurprisingly it is good for triangles near equilateral, improving on $Q_{0-}$ there, but it is worse than $Q_{0-}$ when not near equilateral. For example, for a right isosceles triangle with area $\sqrt{3}$ we found $$Q_0 =0.07827,\qquad Q_{0-}=0.07651,\qquad Q_{0\rm{cub}} = \frac{3}{40}= 0.075 . $$ \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{plQ0LBcube1.jpg}} \caption{For an isosceles triangle with area $\sqrt{3}$. $\sigma$ is the tangent of one quarter of the apex angle. Blue is $Q_B$, red is the new lower bound $Q_{0-}$, black is $Q_\Delta$, brown dashed is the cubic bound under discussion. } \label{fig:plQ0LBcube1} \end{figure} \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{plQ0LBcube2.jpg}} \caption{For an isosceles triangle with area $\sqrt{3}$. $\sigma$ is the tangent of one quarter of the apex angle. Blue is $Q_B$, red is the new lower bound $Q_{0-}$, black is $Q_\Delta$, brown dashed is the cubic bound under discussion.
} \label{fig:plQ0LBcube2} \end{figure} \clearpage \bigskip Define $v_{\rm quad}$ to be even in $x$ and $$v_{\rm quad} = y\, (1-(x/a)-(y/h)) \ \ {\rm for}\ \ 0 < x < a. $$ Good lower bounds on $Q_0$ when $h$ is small can be found using this as a test function. \begin{comment} The curve which we have shown joining to b=0, f(0)=1/4 , is that associated with using $Q_{LB}(v_{\rm quad})$ instead of the $S$ we used in defining $f$ Again, the true values of $S$ must lie above this curve. \end{comment} See Part I, near Figure~\ref{fig:IIasolQ0m}, for another lower bound on $Q_0$. Many other bounds are available. See, for example, the Appendix by Helfenstein to the paper~\cite{PSH54}. {\bf ToDo.} Check the Helfenstein work. \medskip \noindent{\bf Stretching in one direction.} Define $$D(1,t)= \{ (tx,y) | (x,y) \in D \}. $$ In particular, $D(1,1)=D$. Domain monotonicity gives that $Q_0(D(1,t))$ is increasing in $t$. In~\cite{PSH54} it is shown that $t/Q_0(D(1,t))$ is increasing and concave in $t^2$. The appendix to their paper by Helfenstein makes good use of this in connection with isosceles triangles, finding upper and lower bounds on $Q_0$ which differ by no more than 12\%. These are, unfortunately, rather elaborate, and with cheap numerical computing, it is perhaps better to use the stretching result as yet another check on numerics. {\bf ToDo.} Main interest is in $Q_0(D(1,t))/|D(1,t)|^2$ where the maximum occurs for $t$ giving an equilateral triangle. \bigskip \subsection{Miscellaneous other bounds} Consider an isosceles triangle with a base of length 2 and height of length $h$, and angles $\beta$, $\beta$, and $\pi-2\beta$ respectively. Since $\tan(\beta)=h$, and the area $A$ of the triangle is $h$, it can be shown that an inequality from~\cite{BFNT20} is $$Q_0 \le \frac{1}{8} (1+A^2)^2\, (A-\arctan(A)) . $$ This inequality is, in~\cite{BFNT20}, used with triangles which are thin, and, while satisfied for equilateral, it is weak there.
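The evaluation leading to~(\ref{eq:e4}) is routine but fiddly; it can be reproduced symbolically (a sketch, with the triangle translated so that its base lies on $y=0$, which leaves the Rayleigh-type ratio unchanged):

```python
import sympy as sp

x, y, a, h = sp.symbols('x y a h', positive=True)

# Cubic trial function: product of the three side linear forms for the
# isosceles triangle with base from (-a,0) to (a,0) and apex (0,h).
v = y * (1 - x / a - y / h) * (1 + x / a - y / h)
half_width = a * (1 - y / h)   # half-width of the triangle at height y

E0 = sp.integrate(v, (x, -half_width, half_width), (y, 0, h))
grad_sq = sp.diff(v, x)**2 + sp.diff(v, y)**2
E1 = sp.integrate(grad_sq, (x, -half_width, half_width), (y, 0, h))

Q_lb = sp.cancel(E0**2 / E1)   # the lower bound Q_{0,LB}(v_cub)
```

For the equilateral case ($h=a\sqrt{3}$) the trial function is proportional to the exact torsion function, so the bound is attained there.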
\begin{comment} van den Berg, M., Ferone, V., Nitsch, C., and Trombetti, C. (2020).\\ On a Polya functional for rhombi, isosceles triangles, and thinning convex sets. \\ Revista Matematica Iberoamericana, 36(7), 2091-2105. \\ \verb$https://doi.org/10.4171/rmi/1192$\\ When $A$ is small, the rhs above asymptotes to $A^3/24$ equilateral Q0equilat=Sqrt[3]/20; Aequilat=Sqrt[3]; rhsAbove = 16*(Sqrt[3]-Pi/3)/8; N[{Q0equilat,rhsAbove}] and one finds {0.087 , 1.37} bound is very far from close \end{comment} \section{General triangles}\label{sec:GenTri} From the calculations of $L$, $i_2$ and $i_4$ in~Part IIb~\S\ref{sec:IIbTri}, $\Sigma_\infty$ and $\Sigma_1$ can be found. \section{Rhombi}\label{sec:RhombiQ} The formulae for $A$, $L$, $i_2$, $i_4$, $\Sigma_\infty$, $\Sigma_1$ are given in Part~IIa~\S\ref{sec:tangQuad}. The numerical values of $Q_0$ are from~\cite{SC65} (and checked with Mathematica NDSolve). \begin{figure}[ht] \centerline{\includegraphics[height=10cm,width=14cm]{IIIplRhomb.jpg}} \caption{ Rhombus } \label{fig:plRhomb} \end{figure} \clearpage \section{Blaschke-Santalo for $Q_0$}\label{sec:BSQ} If one were to assemble, for tangential polygons with given $\rho$ and $d_O$, a Blaschke-Santalo diagram, I would expect that the one with $Q_0$ would be similar to that with the area (or equivalently the perimeter) given earlier. In particular, at given $\rho$ and $d_O$, \begin{itemize} \item the 1-cap would minimize $Q_0$ thereby providing the left-upper boundary in the diagram, and \item the regular polygons would be amongst the maximizing shapes thereby providing a scatter of points on the right-lower boundary. \end{itemize} There may be independent interest in curves, or other subsets, in the region, e.g. for \begin{itemize} \item triangles \item kites/rhombi \item other quadrilaterals. \end{itemize} A much easier task is to find the diagram for the lower bound $Q_{0-}$. I expect everything stated above for $Q_0$ would occur in this simpler situation.
\section{Further questions}\label{sec:QFurtherQs} \noindent {\bf Question.} Does `corner cutting' (defined in~\S\ref{sec:Transformations}) increase $Q_0/A^2$?\\ It would be (much) easier to begin with $Q_{0-}$ and isosceles triangles each having its apex corner cut to form a symmetric tangential trapezium. Given the area of a regular $n$-gon, its inradius is, as in Part IIa~\S\ref{sec:regn}, $$ \rho_n^2 = \frac{A}{n\tau_n} \qquad {\rm with\ \ } \tau_n =\tan(\frac{\pi}{n}) . $$ Formulae for $i_{2,n}$, $i_{4,n}$, $\Sigma_{1,n}$ and $\Sigma_{\infty,n}$ are also given in Part IIa~\S\ref{sec:regn}. Let $f_n(Q_0)$ be the quadratic in $Q_0$ defined at Part I, equation~(\ref{eqt:fDef}), using the values appropriate to the regular $n$-gon for $\rho$, $i_2$ and $i_4$. The leading coefficient, i.e.\ the coefficient of $Q_0^2$, is the same for the two polynomials. For an equilateral triangle with area $\sqrt{3}$, we found in Part I $$ f_3(Q_0) = f_\Delta(Q_0) = 32\sqrt{3}\, (Q_0-\frac{\sqrt{3}}{20})(Q_0 - \frac{\sqrt{3}}{4}) . $$ We wish to show that, for any tangential $n$-gon with the same area, $$ Q_{0-}\le Q_{0-,n} .$$ This suggests that we try to show $$ f(Q_{0,n})\le{0}, \qquad f_n(Q_0)\ge{0} . $$ This leads to the question of how roots of a quadratic change when its coefficients change. Both polynomials $f$ are of the form $$ 32 A(Q_0^2 +b\,Q_0 +c) ,$$ with two positive roots, so that $$ b<-2\sqrt{c}<0, \qquad 0<c<\frac{b^2}{4} .$$ If we could show that both coefficients $b$ and $c$ are largest for the regular $n$-gon case, this would establish that $Q_{0,n}$ was larger than that of any other tangential $n$-gon. It would seem prudent \begin{itemize} \item to begin with $n=3$, \item then $n=4$, initially with special cases (rhombi, kites, etc.) \item then $n=5$, \item then $n=6$, \item then general $n$. \end{itemize} \newpage \begin{center} {\large{{\textsc{ Part IV: Robin boundary conditions $Q(\beta)$,\\ isoperimetric inequalities, etc.
}}}} \end{center} \section{Robin boundary conditions} Part IV concerns a different problem from that of the earlier parts. The function $R$ of~\cite{Ke20i} is a function of a nonnegative parameter $\beta$ and the geometric functionals $A$, $L$, $\Sigma_\infty$, $\Sigma_1$ and $Q_0$. Knowledge of these for triangles (as in Part II) is essential for the following questions. \smallskip Let $\beta\ge{0}$. For solutions $u_\beta$ of Problem (P($\beta$)) of~\cite{Ke20i} in a triangle define $Q_{\rm triangle}(\beta)$ as the integral of $u_\beta$ over the triangle.\\ \noindent{\bf Question.} {\it Consider triangles of fixed area, say $\sqrt{3}$. Amongst these triangles, is that which has the greatest $Q(\beta)$ the equilateral triangle?}\\ (For $\beta=0$ this is the case, as proved in~\cite{PoS51}.) (A similar question, but for the fundamental frequency, is asked in~\cite{LS17}.) \medskip In~\cite{Ke20i} we presented a lower bound $R(\beta,\ldots)$ for $Q(\beta)$ and noted that it provided a good approximation to $Q(\beta)$. For triangles the only argument of $R$ that isn't a relatively simple function of the triangle's geometry is the torsional rigidity $Q_0$. This leads on to simpler questions (for which some of Part III might be relevant). \smallskip \noindent{\bf Question.} {\it Consider triangles of fixed area, say $\sqrt{3}$. What additional properties of the torsional rigidity $Q_{\rm triangle}(0)$ will ensure that, for these triangles, that which has the greatest $R(\beta,\ldots)$ is the equilateral triangle?} \smallskip \noindent{\bf Questions.} {\it Consider triangles of fixed area, say $\sqrt{3}$. Replacing, in $R(\beta,\ldots)$, $Q_{\rm triangle}(0)$ by one of its bounds or approximations, will this ensure that, for these triangles, that which has the greatest $R(\beta,\ldots,Q_{0,{\rm approx}})$ is the equilateral triangle?}\\ (The plural in `Questions' is because there are several possible bounds that might be used.)
\smallskip \noindent There is some indication via numerics that, restricting to isosceles triangles, equilateral triangles maximize. \bigskip More generally, similar questions can be asked for $n$-gons. For these, does the regular $n$-gon optimize? \newpage \begin{center} {\large{{\textsc{ Part V: Miscellaneous PDE and tangential polygons }}}} \end{center} Tangential polygons get a mention in other PDE papers. \bigskip Papers by Solynin include treatment of the fundamental frequency for tangential $n$-gons. \bigskip Consider evolution of temperature satisfying the heat equation, zero Dirichlet data on the boundary of a convex plane domain $C$, and unit initial value. In the convex domain $C$ there will, at each fixed time, be a unique point at which the solution is a maximum, the hot-spot. \cite{MS08} characterises the tangential polygons which have stationary hot-spots, basically as the regular polygons. We remark that the integral of the solution to the heat equation, integrated w.r.t.\ time $t$ from 0 to infinity, solves the torsion problem. So the unique maximum of the torsion function would have to be the stationary hot-spot; both must coincide with the unique critical point of the first eigenfunction. What further restrictions are needed on a tangential polygon in order that the incentre is a stationary hot-spot? \newpage
\section{Introduction} \label{sec:intro} Let $x_1, \ldots, x_n$ be indeterminates over the ring $\mathbb{Z}$ of integers and $S = \mathbb{Z}[x_1, \ldots, x_n]$. Let $p$ be zero or a prime number. For any field $\Bbbk$, the general linear group $\mathrm{GL}_n(\Bbbk)$ acts on $S \otimes_\mathbb{Z} \Bbbk$. Say that a monomial $S$-ideal $I$ is \define{$p$-Borel-fixed} if $I(S \otimes_\mathbb{Z} \Bbbk)$ is fixed under the action of the Borel subgroup of $\mathrm{GL}_n(\Bbbk)$ consisting of all the upper triangular invertible matrices over $\Bbbk$ for any infinite field $\Bbbk$ of characteristic $p$. (This definition does not depend on the choice of $\Bbbk$; see~Proposition~\ref{proposition:pBorel}.) Let $I$ be any monomial $S$-ideal. In Theorem~\ref{theorem:mainThm} we will show that for any prime number $p$, there exists a (monomial) $S$-ideal $J$ that is $p$-Borel-fixed and that, for any field $\ell$, there is a region (independent of $\ell$) in the multigraded Betti table of $J(S \otimes_\mathbb{Z} \ell)$ (as a module over $S \otimes_\mathbb{Z} \ell$) that is determined by the multigraded Betti table of $I(S \otimes_\mathbb{Z} \ell)$. This shows that, homologically, the class of Borel-fixed ideals in positive characteristic is as bad as the class of all monomial ideals. There is a combinatorial characterization of $p$-Borel-fixed $S$-ideals; see Proposition~\ref{proposition:pBorel}. It follows from this characterization that if $I$ is $0$-Borel-fixed, then $I(S \otimes_\mathbb{Z} \ell)$ is Borel-fixed for all fields $\ell$, irrespective of $\charact \ell$; the converse is not true. The Eliahou-Kervaire complex~\cite{ElKeMinimalReslns90}*{Theorem~2.1} gives $S$-free resolutions of $0$-Borel-fixed ideals in $S$, which specialize to minimal resolutions over any field $\ell$. In particular, the $\mathbb{N}^n$-graded Betti table (and, hence, the $\mathbb{N}$-graded Betti table) of a $0$-Borel-fixed $S$-ideal remains unchanged after passing to any field.
On the other hand, if we only assume that $I$ is $p$-Borel-fixed, with $p>0$, then little is known about minimal resolutions of $I(S \otimes_\mathbb{Z} \ell)$ over a field $\ell$, even when $\charact \ell = p$. A systematic study of Borel-fixed ideals in positive characteristic was begun by K.~Pardue~\cite{PardueThesis94}. In positive characteristic, Proposition~\ref{proposition:pBorel} was proved by him. He gave a conjectural formula for the (Castelnuovo-Mumford) regularity of principal $p$-Borel-fixed ideals. A.~Aramova and J.~Herzog \cite{ArHePrincipalpBorel97}*{Theorem~3.2} showed that the conjectured formula is a lower bound for regularity; Herzog and D.~Popescu \cite{HePoRegpBorel01}*{Theorem~2.2} finished the proof of the conjecture by showing that it is also an upper bound. V.~Ene, G.~Pfister and Popescu \cite{EnPfPoBettipStable00} determined Betti numbers and Koszul homology of a class of Borel-fixed ideals in $\Bbbk[x_1, \ldots, x_n]$, where $\charact \Bbbk = p > 0$, which they called `$p$-stable'. Our main result (Theorem~\ref{theorem:mainThm}) arose in the following way. It is known that the Eliahou-Kervaire resolution is cellular~\cite{MerminEKCellular10}. Using algebraic discrete Morse theory, M.~{J\"ollenbeck} and V.~Welker constructed minimal cellular free resolutions of principal Borel-fixed ideals in positive characteristic~\cite{JoWealgDMT09}*{Chapter~6}; see also~\cite{SinefBorel08}. We were trying to see whether this extends to more general $p$-Borel-fixed ideals when we realized the possibility of the existence of $p$-Borel-fixed ideals whose Betti tables might depend on the characteristic. As a corollary of our construction and the result of M.~Velasco~\cite {VelasNonCW08} that there are monomial ideals with a non-cellular minimal resolution, we conclude that there are $p$-Borel-fixed ideals that admit non-cellular minimal resolutions.
We remarked earlier that the $\mathbb{N}$-graded Betti table of a $0$-Borel-fixed $S$-ideal remains identical over any field. Pardue~\cite{PardueThesis94}*{Conjecture~V.4, p.~43} conjectured that this is true also for $p$-Borel-fixed ideals; see Conjecture~\ref{conjecture:pardue} for the statement. (This conjecture also appears in~\cite {PeStEKResln08}*{4.3}.) There has been some evidence that the conjecture is true. If $J$ is a $p$-Borel-fixed $S$-ideal, then the projective dimension of $J(S \otimes_\mathbb{Z} \ell)$ is determined by the largest $i$ such that $x_i$ divides some minimal monomial generator of $J$. The regularity of $J(S \otimes_\mathbb{Z} \ell)$ does not depend on $\ell$~\cite {PardueThesis94}*{Corollary~VI.9}; this is part of the motivation for Pardue to make this conjecture. Later, Popescu~\cite {PopescuExtremal05} showed that the extremal Betti numbers of $J(S \otimes_\mathbb{Z} \ell)$ do not depend on $\ell$. However, Example~\ref{examplebox:counterExamplePardue} shows that the conjecture is not true. We thank Ezra Miller and the anonymous referees for helpful comments. The computer algebra system \texttt{Macaulay2}~\cite{M2} provided valuable assistance in studying examples. \section{Preliminaries} \label{sec:prelims} We begin with some preliminaries on estimating the graded Betti numbers of monomial ideals and on $p$-Borel-fixed ideals. By $\mathbb{N}$ we denote the set of non-negative integers. When we say that $p$ is a prime number, we will mean that $p > 0$. By ${\mathbf e}_1, \ldots, {\mathbf e}_n$, we mean the standard vectors in $\mathbb{N}^n$. Let $A$ be an $\mathbb{N}^d$-graded polynomial ring (for some integer $d \geq 1$) over a field $\Bbbk$, with $A_{\mathbf 0} = \Bbbk$. Let $M$ be an $\mathbb{N}^d$-graded $A$-module. (All the modules that we deal with in this paper are ideals or quotients of ideals.)
The \define{$\mathbb{N}^d$-graded Betti numbers} of $M$ are $\beta_{i,{\mathbf a}}^A(M) := \dim_\Bbbk \tor_i^A(M,\Bbbk)_{\mathbf a}$. The \define{$\mathbb{N}^d$-graded Betti table} of $M$ is the element $(\beta_{i,{\mathbf a}}^A(M))_{i, {\mathbf a}} \in \mathbb{Z}^{\mathbb{N} \times \mathbb{N}^d}$. For ${\mathbf a} = (a_1, \ldots, a_d) \in \mathbb{N}^d$, we write $|{\mathbf a}| = a_1+ \cdots+ a_d$. \begin{notation} \label{notation:coeffIdeals} Let $A$ be a Noetherian ring and $z$ an indeterminate over $A$. Let $B = A[z]$; it is a graded $A$-algebra with $\deg z = 1$. For a graded $B$-ideal $I$, define $A$-ideals $I_{\langle i \rangle} = ((I:z^i) \cap A)$, for all $i \in \mathbb{N}$. \end{notation} Note that for all $i \in \mathbb{N}$, $I_{\langle i \rangle} \subseteq I_{\langle i+1 \rangle}$. Moreover, since $A$ is Noetherian, $I_{\langle i \rangle} = I_{\langle i+1 \rangle}$ for all $i \gg 0$. \begin{lemma} \label{lemma:BettiIAndIzI} Adopt Notation~\ref{notation:coeffIdeals}. Suppose that $A$ is an $\mathbb{N}^d$-graded polynomial ring (for some integer $d \geq 1$) over a field $\Bbbk$ of arbitrary characteristic, with $A_{\mathbf 0} = \Bbbk$. Let $I$ be a graded $B$-ideal (in the natural $\mathbb{N}^{d+1}$-grading of $B$). Then for all ${\mathbf a} \in \mathbb{N}^d$, \[ \beta_{i,({\mathbf a},j)}^B(I) = \begin{cases} 0, & \text{if}\; j < 0,\\ \beta_{i, {\mathbf a}}^A (I_{\langle 0 \rangle}), & \text{if}\; j = 0, \;\text{and}\\ \beta_{i-1, {\mathbf a}}^A (I_{\langle j \rangle}/I_{\langle {j-1} \rangle}), & \text{otherwise}. \end{cases} \] \end{lemma} \begin{proof} Fix ${\mathbf a} \in \mathbb{N}^d$. Let $M := I_{\langle 0 \rangle}B \oplus \bigoplus_{l\geq1} (I_{\langle l \rangle}/I_{\langle l-1 \rangle})\otimes_A B(-(\mathbf 0, l))$. We need to prove that $\beta_{i,({\mathbf a},j)}^B(I) = \beta_{i,({\mathbf a},j)}^B(M)$ for all $i,j$. Note that $z$ is a non-zero-divisor on $M$.
Moreover, $M/zM \simeq I_{\langle 0 \rangle} \otimes_A (B/zB) \oplus \bigoplus_{l\geq1} (I_{\langle l \rangle}/I_{\langle l-1 \rangle})\otimes_A (B/zB)(-(\mathbf 0, l)) \simeq I/zI$. Therefore there are two exact sequences \[ \xymatrix@R=1ex{ 0 \ar[r]& I(-(\mathbf 0,1)) \ar[r]^-z& I \ar[r]& I/zI \ar[r]& 0, \\ 0 \ar[r]& M(-(\mathbf 0,1)) \ar[r]^-z& M \ar[r]& I/zI \ar[r]& 0.} \] The maps $\tor_i^B(I(-(\mathbf 0,1)),\Bbbk) \stackrel{z}{\to} \tor_i^B(I,\Bbbk)$ and $\tor_i^B(M(-(\mathbf 0,1)),\Bbbk) \stackrel{z}{\to} \tor_i^B(M,\Bbbk)$ are zero. Therefore, for all $i$ and for all $j > 0$, \begin{equation} \label{equation:bettiIzI} \beta_{i,({\mathbf a},j)}^B(I) + \beta_{i-1,({\mathbf a},j-1)}^B(I) = \beta_{i,({\mathbf a},j)}^B(I/zI) = \beta_{i,({\mathbf a},j)}^B(M) + \beta_{i-1,({\mathbf a},j-1)}^B(M). \end{equation} Note that outside a bounded rectangle inside $\mathbb{Z}^2$, the functions $(i,j) \mapsto \beta_{i,({\mathbf a},j)}^B(I)$ and $(i,j) \mapsto \beta_{i,({\mathbf a},j)}^B(M)$ take the value zero. Therefore it follows from~\eqref{equation:bettiIzI} that $\beta_{i,({\mathbf a},j)}^B(I) = \beta_{i,({\mathbf a},j)}^B(M)$ for all $i,j$. \end{proof} \begin{definition} \label{definition:stretch} Adopt Notation~\ref{notation:coeffIdeals}. Let $d = (d_0 < d_1 < \cdots)$ be an increasing sequence of natural numbers. Define an operation $\Phi_{d}$ on graded $B$-ideals by setting $\Phi_{d}(I)$ to be the $B$-ideal generated by $\oplus_{i\in\mathbb{N}} I_{\langle i \rangle} z^{d_i}$. \end{definition} \begin{proposition} \label{proposition:stretchBetti} Adopt the hypothesis of Lemma~\ref{lemma:BettiIAndIzI}. Then \[ \beta_{i,({\mathbf a},j)}(\Phi_{d}(I)) = \begin{cases} \beta_{i, ({\mathbf a}, l)}(I), & \text{if}\; j=d_l \\ 0, & \text{otherwise}. \end{cases} \] \end{proposition} \begin{proof} This follows immediately by noting that, for all $j \in \mathbb{N}$, $(\Phi_{d}(I))_{\langle j \rangle} = I_{\langle l \rangle}$ where $l$ is such that $d_{l} \leq j < d_{l+1}$.
(If $d_0 > 0$, then $(\Phi_{d}(I))_{\langle j \rangle} = 0$ for all $0 \leq j < d_0$.) \end{proof} \subsection*{Borel-fixed ideals} For the duration of this paragraph and Proposition~\ref{proposition:pBorel}, assume that $p$ is zero or a positive prime number. Given two non-negative integers $a$ and $b$, say that $a \preccurlyeq_p b$ if $\binom{b}{a} \not\equiv 0 \pmod{p}$. Then there is the following characterization of Borel-fixed ideals; for positive characteristic, it was proved by Pardue~\cite {PardueThesis94}*{Proposition II.4}. For details, see~\cite{eiscommalg}*{Section~15.9.3}. \begin{proposition}[\cite{eiscommalg}*{Theorem~15.23}] \label{proposition:pBorel} Let $\Bbbk$ be an infinite field of characteristic $p$. An ideal $I$ of $\Bbbk[x_1,\dots,x_n]$ is Borel-fixed if and only if $I$ is a monomial ideal and, for all $i<j$ and for all minimal monomial generators $m$ of $I$, $(x_i/x_j)^s m \in I$ for all $s \preccurlyeq_p t$, where $t$ is the largest integer such that $x_j^t \mid m$. \end{proposition} \begin{conjecture} [\cite{PardueThesis94}*{Conjecture~V.4, p.~43}] \label {conjecture:pardue} Let $p$ be a prime number. Let $I$ be a $p$-Borel-fixed monomial $S$-ideal. Then the $\mathbb{N}$-graded Betti table of $I(S \otimes_\mathbb{Z} \ell)$ is independent of $\charact \ell$ (equivalently, $\ell$) for all fields $\ell$ (of arbitrary characteristic). \end{conjecture} \section{Construction} \label{sec:construction} Recall that $S = \mathbb{Z}[x_1, \ldots, x_n]$ and that $I$ is a monomial $S$-ideal. Fix a prime number $p$ and let $\Bbbk$ be any field of characteristic $p$. We now describe an algorithm that constructs an $S$-ideal $J$ such that $J(S \otimes_\mathbb{Z} \Bbbk)$ is Borel-fixed. \begin{construction} \label{construction:pBorelFixed} Input: A monomial $S$-ideal $I$. Set $i=1$ and $J_0 = I$.
\begin{asparaenum} \item \label{enum:upperBdForReg} Pick $r_{i}$, an upper bound for $\reg_{(S \otimes_\mathbb{Z} \ell)}(J_{i-1}(S \otimes_\mathbb{Z} \ell))$ that is independent of the field $\ell$. \item \label{enum:addNewGen} Pick a positive integer $e_{i}$ such that $p^{e_{i}} > r_{i}$. Let $d = (0 < p^{e_{i}} < 2p^{e_{i}} < 3p^{e_i} < \cdots)$. Set $J_i = \Phi_d(J_{i-1} + (x_i^{p^{e_i}}))$ with $A = \mathbb{Z}[x_1, \ldots, x_{i}, x_{i+2}, \cdots, x_n]$, $z = x_{i+1}$ and $B=S$ (Definition~\ref{definition:stretch}). Note that we are adding a large power of $x_i$ but modifying the resulting ideal with respect to $x_{i+1}$. \item \label{enum:checkAndRepeat} If $i=n-1$ then set $J =J_{i}$ and exit, else replace $i$ by $i+1$ and go to Step~\eqref{enum:upperBdForReg}. \end{asparaenum} Output: A monomial $S$-ideal $J$. \end{construction} Before we state our theorem, we need to identify a region of the $\mathbb{N}^n$-graded Betti table of $J(S \otimes_\mathbb{Z} \ell)$ that captures the $\mathbb{N}^n$-graded Betti table of $I(S \otimes_\mathbb{Z} \ell)$. Let ${\mathcal A} = \{{\mathbf a} : |{\mathbf a}| \le r_1\}$ (with $r_1$ as in Step~\eqref{enum:upperBdForReg}) and ${\mathcal B} = \{{\mathbf b} : b_j < p^{e_j}-1 \ \text{for all}\ 1 \leq j \leq n-1\}$. \begin{theorem} \label{theorem:mainThm} The ideal $J$ is $p$-Borel-fixed. Moreover, there is an injective map $\psi : {\mathcal A} \to {\mathcal B}$ such that for all fields $\ell$ (of arbitrary characteristic), for all $1 \leq i \leq n$, and for all ${\mathbf b} \in {\mathcal B}$, \[ \beta_{i,{\mathbf b}}^{S \otimes_\mathbb{Z} \ell}(J(S \otimes_\mathbb{Z} \ell)) = \begin{cases} \beta_{i,\psi^{-1}({\mathbf b})}^{S \otimes_\mathbb{Z} \ell}(I(S \otimes_\mathbb{Z} \ell)) , & \text{if}\; {\mathbf b} \in \image \psi, \\ 0, & \text{otherwise}. \end{cases} \] \end{theorem} Let us make some remarks about the construction.
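First, a remark on checking the criterion of Proposition~\ref{proposition:pBorel} in practice: by Lucas' theorem, $s \preccurlyeq_p t$ exactly when every base-$p$ digit of $s$ is at most the corresponding digit of $t$, so the condition is easy to test by machine. The following sketch (our own naming, with monomials encoded as exponent tuples; it is an illustration, not part of the construction) tests the criterion directly; the example is the Frobenius power $(x_1^2, x_2^2)$, which is $2$-Borel-fixed but not $3$-Borel-fixed:

```python
from math import comb

def leq_p(s, t, p):
    """s <=_p t  iff  C(t, s) is nonzero mod p; by Lucas' theorem this
    is a digitwise comparison in base p."""
    return comb(t, s) % p != 0

def is_p_borel_fixed(gens, p):
    """Test the combinatorial criterion: for every minimal generator m,
    every i < j, and every s <=_p t with t = exponent of x_j in m,
    the monomial (x_i/x_j)^s * m must lie in the ideal."""
    def in_ideal(e):
        return any(all(e[k] >= g[k] for k in range(len(g))) for g in gens)
    n = len(gens[0])
    for m in gens:
        for j in range(n):
            for i in range(j):
                for s in range(1, m[j] + 1):
                    if leq_p(s, m[j], p):
                        e = list(m)
                        e[i] += s
                        e[j] -= s
                        if not in_ideal(e):
                            return False
    return True

frob = [(2, 0), (0, 2)]   # the Frobenius power (x_1^2, x_2^2)
```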
In Step~\eqref{enum:upperBdForReg}, we may, for example, take $r_i$ to be the degree of the least common multiple of the minimal monomial generators of $J_{i-1}$; that this is a bound for regularity (independent of characteristic) follows from the Taylor resolution. There are stronger bounds, e.g., the largest degree of a minimal generator of the lex-segment ideal with the same Hilbert function as $J_{i-1}(S \otimes_\mathbb{Z} \ell)$. Additionally, one may insert a check at Step~\eqref{enum:checkAndRepeat} of whether $J_i(S \otimes_\mathbb{Z} \mathbb{Z}/p)$ is Borel-fixed using Proposition~\ref{proposition:pBorel}. The algorithm will then terminate before or at the stage $i=m-1$ where $m = \max \{i : x_i \;\text{divides a minimal monomial generator of}\; I\}$. The proofs of Theorem~\ref{theorem:mainThm} and Proposition~\ref{proposition:nonCellularMinRes} hinge on the following lemma. See~\cite {eiscommalg}*{Section~A3.12} for mapping cones and~\cite{MiStCCA05}*{Chapter~4} for cellular resolutions. In the proof of the theorem, we first describe the change in the $\mathbb{N}^n$-graded Betti table at Step~\eqref{enum:addNewGen}. Readers familiar with multigraded resolutions will be able to see that the Betti numbers of $J$ in the region ${\mathcal B}$ should be the Betti numbers of the ideal obtained from $I$ by replacing $x_i$ with $x_i^{p^{e_{i-1}}}$ and hence contain information about the Betti numbers of $I$. For the sake of readability, we will abbreviate, for monomial $S$-ideals ${\mathfrak a}$, $\beta_{i,{\mathbf b}}^{S \otimes_\mathbb{Z} \ell}({\mathfrak a}(S \otimes_\mathbb{Z} \ell))$ by $\beta_{i,{\mathbf b}}^{\ell}({\mathfrak a})$ and $\reg_{(S \otimes_\mathbb{Z} \ell)}({\mathfrak a}(S \otimes_\mathbb{Z} \ell))$ by $\reg_{\ell}({\mathfrak a})$, from here till the end of the proof of the theorem. \begin{lemma} \label{lemma:mappingCone} Let $1 \leq j \leq n$ and $\ell$ be any field.
\begin{asparaenum} \item \label{enum:mappingConeQuotientIsSat} $(J_{j-1} :_S x_j^{p^{e_j}}) = (J_{j-1} :_S x_j^\infty)$. \item \label{enum:mappingConeMinimal} Let $F_\bullet$ and $F'_\bullet$ be minimal $(S \otimes_\mathbb{Z} \ell)$-free resolutions of $(S/J_{j-1}) \otimes_\mathbb{Z} \ell$ and $(S/(J_{j-1} :_S x_j^{p^{e_j}})) \otimes_\mathbb{Z} \ell$, respectively.\\ Write $M_\bullet$ for the mapping cone of the comparison map $F'_\bullet(-x_j^{p^{e_j}}) \to F_\bullet$ that lifts the injective map $(S/(J_{j-1} :_S x_j^{p^{e_j}}) (-x_j^{p^{e_j}}) \stackrel{x_j^{p^{e_j}}}{\to} S/J_{j-1})\otimes_\mathbb{Z} \ell$. Then for each $i$, the set of degrees of homogeneous minimal generators of $F'_{i}(-x_j^{p^{e_j}})$ is disjoint from that of $F_i$. In particular, $M_\bullet$ is a minimal $(S \otimes_\mathbb{Z} \ell)$-free resolution of $(S/(J_{j-1} + (x_j^{p^{e_j}}))) \otimes_\mathbb{Z} \ell$. \end{asparaenum} \end{lemma} \begin{proof} \underline{\eqref{enum:mappingConeQuotientIsSat}}: Follows from the choice of ${e_j}$. \underline{\eqref{enum:mappingConeMinimal}}: The assertion about generating degrees follows from the choice of ${e_j}$. As a consequence, we see that the map $F'_i(-x_j^{p^{e_j}}) \to F_i$ is minimal, i.e., if we represent it by a matrix, all the entries are in the homogeneous maximal ideal. Therefore $M_\bullet$ is minimal, and hence a minimal resolution of $(S/(J_{j-1} + (x_j^{p^{e_j}}))) \otimes_\mathbb{Z} \ell$. \end{proof} \begin{proof}[Proof of the theorem] Without loss of generality, we may assume that $\Bbbk$ is infinite. Let $x_1^{a_1}\cdots x_n^{a_n}$ be a minimal monomial generator of $J$. For all $1 \le i \leq n-1$, $a_{i+1}$ is a multiple of $p^{e_i}$ and $x_i^{p^{e_i}} \in J$. Note that for all integers $l \geq 1$, if $m \preccurlyeq_p lp^{e_i}$ for some integer $m$, then $m$ is a multiple of $p^{e_i}$. By Proposition~\ref{proposition:pBorel}, $J$ is $p$-Borel-fixed; note that $e_1<e_2<\cdots$.
The assertion about the Betti numbers $\beta_{i,{\mathbf b}}^{\ell} (J)$ follows from the discussion below, repeatedly applying~\eqref{equation:dichotomy}. Fix $1 \leq j \leq n-1$. If $|{\mathbf b}| \ge i+p^{e_j}$ then $|{\mathbf b}| > i + \reg_{\ell}(J_{j-1})$, so the Betti numbers $\beta_{i,{\mathbf b}}^{\ell} (J_{j-1} + (x_j^{p^{e_j}}))$ are determined by the resolution of $(S/(J_{j-1} :_S x_j^\infty))(-p^{e_j}{\mathbf e}_j)$; hence, in particular, for such ${\mathbf b}$, if $\beta_{i,{\mathbf b}}^{\ell} (J_{j-1} + (x_j^{p^{e_j}})) \neq 0$, then $b_j \geq i+p^{e_j}$. Putting this together, we obtain the following: \[ \beta_{i,{\mathbf b}}^{\ell} (J_{j-1} + (x_j^{p^{e_j}})) = \begin{cases} \beta_{i,{\mathbf b}}^{\ell} (J_{j-1}), & \text{if}\; b_j < i+p^{e_j}, \\ \beta_{i-1,{\mathbf b}-p^{e_j}{\mathbf e}_j}^{\ell} (J_{j-1} :_S x_j^\infty), & \text{otherwise.} \end{cases} \] Proposition~\ref{proposition:stretchBetti} implies that for all ${\mathbf b} \in \mathbb{N}^n$, \begin{equation} \label{equation:indStepBetti} \beta_{i,{\mathbf b}}^{\ell} (J_{j}) = \begin{cases} \beta_{i,{\mathbf b}'}^{\ell} (J_{j-1}), & \text{if}\; p^{e_j} \mid b_{j+1} \;\text{and}\; b_j < i+p^{e_j}, \\ \beta_{i-1,{\mathbf b}''}^{\ell} (J_{j-1} :_S x_j^\infty), & \text{if}\; p^{e_j} \mid b_{j+1} \;\text{and}\; b_j \ge i+p^{e_j}, \\ 0, & \text{otherwise}, \end{cases} \end{equation} where ${\mathbf b}' = {\mathbf b} - (b_{j+1}-\frac{b_{j+1}}{p^{e_j}}) {\mathbf e}_{j+1}$ and ${\mathbf b}'' = {\mathbf b}' -p^{e_j}{\mathbf e}_j$. We can recover the $\mathbb{N}^n$-graded Betti table of $J_{j-1}$ from the $\mathbb{N}^n$-graded Betti table of $J_j$. To make this precise, suppose that $\beta_{i,{\mathbf b}}^{\ell} (J_{j}) \neq 0$.
Then the resulting dichotomous situation from~\eqref{equation:indStepBetti} has the following re-interpretation: \begin{equation} \label{equation:dichotomy} \begin{aligned} b_j < i+p^{e_j} &\quad\text{if and only if}\quad \beta_{i,{\mathbf b}}^{\ell} (J_{j}) = \beta_{i,{\mathbf b}'}^{\ell} (J_{j-1}),\\ b_j \ge i+p^{e_j} &\quad\text{if and only if}\quad \beta_{i,{\mathbf b}}^{\ell} (J_{j}) = \beta_{i-1,{\mathbf b}''}^{\ell} (J_{j-1} :_S x_j^\infty). \end{aligned} \end{equation} We will not explicitly construct the map $\psi$, but will observe that it can be obtained by putting together the changes at each stage $j$. \end{proof} \begin{proposition} \label{proposition:nonCellularMinRes} Let $p$ be any prime number, $\Bbbk$ a field of characteristic $p$ and $R := S \otimes_\mathbb{Z} \Bbbk = \Bbbk[x_1, \ldots, x_n]$. Let $I$ be any monomial $S$-ideal and $J$ be as in Construction~\ref{construction:pBorelFixed}. If $IR$ has a non-cellular minimal $R$-free resolution then so does $JR$. In particular, there exists a Borel-fixed $R$-ideal with a non-cellular minimal resolution. \end{proposition} \begin{proof} The second assertion follows from the first since there are monomial ideals that have non-cellular minimal resolutions~\cite{VelasNonCW08}; therefore we prove that if $IR$ has a non-cellular minimal resolution then so does $JR$. As the proposition does not involve looking at the behaviour of $I$ and $J$ in two different characteristics, for the duration of this proof we may assume that Construction~\ref{construction:pBorelFixed} is done over $R$ instead of $S$. Hereafter, we assume that $I$ and $J$ are $R$-ideals. Note that it suffices to show, inductively, that, in Construction~\ref{construction:pBorelFixed}, if $J_{i-1}$ has a non-cellular minimal resolution, then so does $J_i$.
It is immediate that $J_i$ has a cellular minimal resolution if and only if $(J_{i-1} + (x_i^{p^{e_i}}))$ has one; this is because the same CW-complex supports minimal resolutions of $(J_{i-1} + (x_i^{p^{e_i}}))$ and $J_i := \Phi_d(J_{i-1} + (x_i^{p^{e_i}}))$. Therefore, it suffices to show that if $J_{i-1}$ has a non-cellular minimal resolution then so does $(J_{i-1} + (x_i^{p^{e_i}}))$. This is an immediate consequence of the choice of $e_i$ and of Lemma~\ref{lemma:mappingCone}. Let $F_\bullet$ be a non-cellular minimal resolution of $J_{i-1}$. Let $F'_\bullet$ be any minimal resolution of $S/(J_{i-1} :_S x_i^{p^{e_i}})$. Then the mapping cone $M_\bullet$ is necessarily non-cellular: for, otherwise, if there is a CW-complex $X$ that supports $M_\bullet$, then for ${\mathbf b} = (p^{e_i}-1, \ldots, p^{e_i}-1)$, $X_{\leq {\mathbf b}}$ supports $F_\bullet$. \end{proof} \begin{examplebox}[Counter-examples to Conjecture~\ref{conjecture:pardue}] \label{examplebox:counterExamplePardue} Note that, since graded Betti numbers are upper-semicontinuous functions of characteristic, for an $S$-ideal $J$, the $\mathbb{N}$-graded Betti table of $J(S \otimes_\mathbb{Z} \ell)$ depends on $\charact \ell$ if and only if the $\mathbb{N}^n$-graded Betti table depends on $\charact \ell$. Let $I$ be any monomial $S$-ideal such that its $\mathbb{N}^n$-graded Betti table depends on $\charact \ell$. Let $p$ be any prime number and $\Bbbk$ any field of characteristic $p$. Let $J$ be the ideal from Construction~\ref{construction:pBorelFixed}. Then $J(S \otimes_\mathbb{Z} \ell)$ is Borel-fixed while its $\mathbb{N}^n$-graded Betti table depends on $\charact \ell$. As a specific example, we consider the minimal triangulation of the real projective plane~\cite{BrHe:CM}*{Section~5.3}. We have \begin{align*} S & = \mathbb{Z}[x_1, \ldots, x_6] \\ I & = (x_1x_2x_3, x_1x_2x_4, x_1x_3x_5, x_2x_4x_5, x_3x_4x_5, x_2x_3x_6, x_1x_4x_6, x_3x_4x_6, x_1x_5x_6, x_2x_5x_6).
\intertext{With $p=2$, $e_1 = 3$, $e_2 = 5$, $e_3 = 7$, $e_4 = 9$, and $e_5 = 11$, we obtain} J & = (x_1^8, x_2^{32}, x_1x_2^8x_3^{32}, x_3^{128}, x_1x_2^8x_4^{128}, x_4^{512}, x_1x_3^{32}x_5^{512}, x_2^8x_4^{128}x_5^{512}, x_3^{32}x_4^{128}x_5^{512}, \\ & \qquad x_5^{2048}, x_2^8x_3^{32}x_6^{2048}, x_1x_4^{128}x_6^{2048}, x_3^{32}x_4^{128}x_6^{2048}, x_1x_5^{512}x_6^{2048}, x_2^8x_5^{512}x_6^{2048}). \end{align*} Then the Betti numbers $\beta_{2,2729}^{S \otimes_\mathbb{Z} \ell}(J(S \otimes_\mathbb{Z} \ell))$ and $\beta_{3,2729}^{S \otimes_\mathbb{Z} \ell}(J(S \otimes_\mathbb{Z} \ell))$ (which correspond to $\beta_{2,6}^{S \otimes_\mathbb{Z} \ell}(I(S \otimes_\mathbb{Z} \ell))$ and $\beta_{3,6}^{S \otimes_\mathbb{Z} \ell}(I(S \otimes_\mathbb{Z} \ell))$, respectively) are nonzero precisely when $\charact \ell = 2$; otherwise they are zero. \end{examplebox} After this paper was posted on the {\tt arXiv}, Matteo Varbaro asked us whether there are $p$-Borel-fixed ideals minimally generated in a single degree that exhibit different Betti tables in different characteristics. There are: for instance, take $J_1$ to be the sub-ideal of the ideal $J$ of the above example generated by the monomials of degree $2725$ in $J$, i.e., $J_1 = J \cap (x_1, \ldots, x_6)^{2725}$. Being the intersection of two $p$-Borel-fixed ideals, $J_1$ is $p$-Borel-fixed. Moreover, for all $i$, for all $j > 2725$ and for all fields $\ell$, $\beta_{i,i+j}^{S \otimes_\mathbb{Z} \ell}(J(S \otimes_\mathbb{Z} \ell)) = \beta_{i,i+j}^{S \otimes_\mathbb{Z} \ell}(J_1(S \otimes_\mathbb{Z} \ell))$.
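The generator list of $J$ above can be checked mechanically: the fifteen listed monomials are exactly the images of the generators of $I$ under the substitution $x_j \mapsto x_j^{p^{e_{j-1}}}$ (with $e_0 = 0$), together with the pure powers $x_i^{p^{e_i}}$ for $1 \le i \le 5$. The following sketch verifies this; it is our own consistency check (in Python for portability; a computer-algebra system such as Macaulay2~\cite{M2} would be the natural tool for the Betti-number computations themselves).

```python
# Consistency check for the example above: encode monomials in
# Z[x_1,...,x_6] as exponent vectors in N^6 and rebuild the listed
# generators of J from those of I.
p = 2
e = [0, 3, 5, 7, 9, 11]  # e_0 := 0 by convention; e_1,...,e_5 as in the example

gens_I = [(1, 1, 1, 0, 0, 0), (1, 1, 0, 1, 0, 0), (1, 0, 1, 0, 1, 0),
          (0, 1, 0, 1, 1, 0), (0, 0, 1, 1, 1, 0), (0, 1, 1, 0, 0, 1),
          (1, 0, 0, 1, 0, 1), (0, 0, 1, 1, 0, 1), (1, 0, 0, 0, 1, 1),
          (0, 1, 0, 0, 1, 1)]

# the substitution x_j -> x_j^(p^(e_{j-1})) scales the j-th exponent
stretched = {tuple(a * p ** e[j] for j, a in enumerate(g)) for g in gens_I}
# the pure powers x_i^(p^(e_i)), i = 1,...,5 (there is none for x_6)
powers = {tuple(p ** e[i] if j == i - 1 else 0 for j in range(6))
          for i in range(1, 6)}
gens_J = stretched | powers

# the generators of J as listed in the example, as exponent vectors
listed_J = {(8, 0, 0, 0, 0, 0), (0, 32, 0, 0, 0, 0), (1, 8, 32, 0, 0, 0),
            (0, 0, 128, 0, 0, 0), (1, 8, 0, 128, 0, 0), (0, 0, 0, 512, 0, 0),
            (1, 0, 32, 0, 512, 0), (0, 8, 0, 128, 512, 0),
            (0, 0, 32, 128, 512, 0), (0, 0, 0, 0, 2048, 0),
            (0, 8, 32, 0, 0, 2048), (1, 0, 0, 128, 0, 2048),
            (0, 0, 32, 128, 0, 2048), (1, 0, 0, 0, 512, 2048),
            (0, 8, 0, 0, 512, 2048)}
assert gens_J == listed_J
```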
\def\cfudot#1{\ifmmode\setbox7\hbox{$\accent"5E#1$}\else \setbox7\hbox{\accent"5E#1}\penalty 10000\relax\fi\raise 1\ht7 \hbox{\raise.1ex\hbox to 1\wd7{\hss.\hss}}\penalty 10000 \hskip-1\wd7\penalty 10000\box7} \begin{bibdiv} \begin{biblist} \bib{ArHePrincipalpBorel97}{article}{ author={Aramova, Annetta}, author={Herzog, J{\"u}rgen}, title={{$p$}-{B}orel principal ideals}, date={1997}, ISSN={0019-2082}, journal={Illinois J. Math.}, volume={41}, number={1}, pages={103\ndash 121}, url={http://projecteuclid.org/getRecord?id=euclid.ijm/1255985847}, } \bib{BrHe:CM}{book}{ author={Bruns, Winfried}, author={Herzog, J{\"u}rgen}, title={Cohen-{M}acaulay rings}, series={Cambridge Studies in Advanced Mathematics}, publisher={Cambridge University Press}, address={Cambridge}, date={1993}, volume={39}, ISBN={0-521-41068-1}, } \bib{eiscommalg}{book}{ author={Eisenbud, David}, title={Commutative algebra}, series={Graduate Texts in Mathematics}, publisher={Springer-Verlag}, address={New York}, date={1995}, volume={150}, ISBN={0-387-94268-8; 0-387-94269-6}, note={With a view toward algebraic geometry}, } \bib{ElKeMinimalReslns90}{article}{ author={Eliahou, Shalom}, author={Kervaire, Michel}, title={Minimal resolutions of some monomial ideals}, date={1990}, ISSN={0021-8693}, journal={J. Algebra}, volume={129}, number={1}, pages={1\ndash 25}, url={http://dx.doi.org/10.1016/0021-8693(90)90237-I}, } \bib{EnPfPoBettipStable00}{article}{ author={Ene, Viviana}, author={Pfister, Gerhard}, author={Popescu, Dorin}, title={Betti numbers for {$p$}-stable ideals}, date={2000}, ISSN={0092-7872}, journal={Comm. Algebra}, volume={28}, number={3}, pages={1515\ndash 1531}, url={http://dx.doi.org/10.1080/00927870008826911}, } \bib{HePoRegpBorel01}{article}{ author={Herzog, J{\"u}rgen}, author={Popescu, Dorin}, title={On the regularity of {$p$}-{B}orel ideals}, date={2001}, ISSN={0002-9939}, journal={Proc. Amer. Math. 
Soc.}, volume={129}, number={9}, pages={2563\ndash 2570}, url={http://dx.doi.org/10.1090/S0002-9939-01-05840-3}, } \bib{JoWealgDMT09}{article}{ author={J{\"o}llenbeck, Michael}, author={Welker, Volkmar}, title={Minimal resolutions via algebraic discrete {M}orse theory}, date={2009}, ISSN={0065-9266}, journal={Mem. Amer. Math. Soc.}, volume={197}, number={923}, pages={vi+74}, } \bib{M2}{misc}{ label={M2}, author={Grayson, Daniel~R.}, author={Stillman, Michael~E.}, title={Macaulay 2, a software system for research in algebraic geometry}, date={2006}, note={Available at \href{http://www.math.uiuc.edu/Macaulay2/} {http://www.math.uiuc.edu/Macaulay2/}}, } \bib{MerminEKCellular10}{article}{ author={Mermin, Jeffrey}, title={The {E}liahou-{K}ervaire resolution is cellular}, date={2010}, ISSN={1939-0807}, journal={J. Commut. Algebra}, volume={2}, number={1}, pages={55\ndash 78}, url={http://dx.doi.org/10.1216/JCA-2010-2-1-55}, } \bib{MiStCCA05}{book}{ author={Miller, Ezra}, author={Sturmfels, Bernd}, title={Combinatorial commutative algebra}, series={Graduate Texts in Mathematics}, publisher={Springer-Verlag}, address={New York}, date={2005}, volume={227}, ISBN={0-387-22356-8}, } \bib{PardueThesis94}{book}{ author={Pardue, Keith}, title={Nonstandard borel-fixed ideals}, date={1994}, note={Thesis (Ph.D.)--Brandeis University}, } \bib{PopescuExtremal05}{article}{ author={Popescu, Dorin}, title={Extremal {B}etti numbers and regularity of {B}orel type ideals}, date={2005}, ISSN={1220-3874}, journal={Bull. Math. Soc. Sci. Math. Roumanie (N.S.)}, volume={48(96)}, number={1}, pages={65\ndash 72}, } \bib{PeStEKResln08}{article}{ author={Peeva, Irena}, author={Stillman, Mike}, title={The minimal free resolution of a {B}orel ideal}, date={2008}, ISSN={0723-0869}, journal={Expo. 
Math.}, volume={26}, number={3}, pages={237\ndash 247}, url={http://dx.doi.org/10.1016/j.exmath.2007.10.003}, } \bib{SinefBorel08}{article}{ author={Sinefakopoulos, Achilleas}, title={On {B}orel fixed ideals generated in one degree}, date={2008}, ISSN={0021-8693}, journal={J. Algebra}, volume={319}, number={7}, pages={2739\ndash 2760}, url={http://dx.doi.org/10.1016/j.jalgebra.2008.01.017}, } \bib{VelasNonCW08}{article}{ author={Velasco, Mauricio}, title={Minimal free resolutions that are not supported by a {CW}-complex}, date={2008}, ISSN={0021-8693}, journal={J. Algebra}, volume={319}, number={1}, pages={102\ndash 114}, url={http://dx.doi.org/10.1016/j.jalgebra.2007.10.011}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} \label{sec1} The Minority Game (MG) is a particular version of the El Farol Bar problem. The latter was introduced by Brian Arthur as a prototypical model for the complex emergent behavior in a system of many interacting agents having only incomplete information and bounded rationality \cite{elfarol}. This problem is about $N$ agents, who have to repeatedly make choices between two alternatives, and the winners are those who selected the alternative chosen by fewer agents. The MG has been studied extensively as a mathematical model of learning, adaptation, and co-evolution of agents \cite{mgpapers1, mgpapers2}. An overview and bibliography may be found in \cite{mgbook, mg1, mg2}. The interesting feature of the minority game is that the agents seem to be able to coordinate their actions without any direct communication with each other, and the system can self-organize to a state in which the fluctuations in the steady state are much less than what would be expected if each agent made a random choice. This is called the efficiency of the markets. In a system of $N$ interacting agents, with $N$ odd, the degree of efficiency of the system may be measured by how close the average number of happy agents in the steady state is to the maximum possible value $(N-1)/2$. Simulations of the MG have shown that typically the difference is of order $N^{1/2}$. The coefficient of $N^{1/2}$ depends on details of the model, such as how far back in the past the agents look to decide their action, but it can be much less than the value for agents making random choices. The minimum value of the coefficient attained in several variants of the MG is about $1/10$ \cite{mg1}. A variation of the minority game, focussing on the efficient utilization of resources, was studied by Chakrabarti et al.\ as the Kolkata Paise Restaurant problem \cite{kpr1, kpr2, kpr3}. In this variation, there are $N$ restaurants and $N$ agents, and there is a rank order amongst the restaurants.
Each restaurant can take only one agent per day, and agents prefer to go to a higher ranked restaurant. In spite of this complication, it was found that an egalitarian probabilistic strategy exists in which the agents visit restaurants in a cyclic order. Also, the agents can reach this cyclic state in a short time. In this paper, we describe a probabilistic strategy for the minority game, inspired by the strategy suggested in \cite{kpr3}, that is very simple, but more efficient than those previously studied in the literature. In this strategy, the average deviation of the number of people in the minority from the maximum $(N-1)/2$ can be reduced to be of order $N^{\epsilon}$, for any $\epsilon > 0$, and the time required to reach this level increases with $N$ only as $\log \log N$. In addition, we show that a game where all agents follow this strategy is stable against individual cheaters. Our strategy is an application of the general win-stay-lose-shift strategy \cite{nowak}, an adaptation of which to the MG was discussed earlier by Reents et al.\ \cite{reents}. In the latter, the deviation from the best possible value can be made of order $1$, but the time required grows as $N^{1/2}$. We are able to get a much faster approach to the optimum by using a shift probability that depends on the current distance from the optimum. Other probabilistic strategies for minority games have also been discussed in the literature \cite{thermal1, thermal2, xie}, and in some cases it has been noted that they can perform better than the deterministic ones \cite{sornette}. While our strategy seems more or less obvious, we did not find it discussed in the literature so far, and it seems worthwhile to study it quantitatively. The plan of the paper is as follows: in section \ref{sec2}, we define the rules of the game precisely and argue that the strategy defined leads to a very efficient use of resources.
In section \ref{sec3}, we show that individual agents have no incentive to cheat if everybody else follows the same strategy. Section \ref{sec4} contains the results of our simulations of the model, and section \ref{sec5} contains some concluding remarks. \section{Definition of the model} \label{sec2} The model we consider is a variation of the El Farol Bar problem. We consider a small city with exactly two restaurants. There are $N$ people in the city, called agents, each of whom goes for dinner every evening to one of the two restaurants. The prices and the quality of food are quite similar in both, and the only thing that governs the choice of agents about which restaurant they go to on a particular day is that the quality of service is worse if the restaurant is crowded. We assume that $N$ is odd, and write $N = 2 M +1$. A restaurant is said to be crowded on a particular day if the number of people turning up to eat there that day exceeds $M$. An agent is happy if he goes to a restaurant that is uncrowded, and will be said to have a payoff $1$. If he turns up at a crowded restaurant, his payoff is $0$. Once the choice of which restaurant to go to is made, an agent cannot change it for that day. The agents cannot communicate with each other in any way directly in deciding which restaurant to go to. However, each of them has available to him/her the entire earlier history of how many people chose to go to the first restaurant (call it A) on any earlier day. Let us denote the number of agents turning up at A on the $t$-th day by $M - \Delta(t)$. Then the number of agents turning up at Restaurant B is $M + \Delta(t) +1$. At the end of day $t$, the value of $\Delta(t)$ is made public, and is known to all the agents. Using the information $\{\Delta(t')\}$, for $t' = 1, 2, \ldots, t$, the agents try to guess the choice that other customers who share the same public knowledge will make, decide which restaurant to go to on day $(t+1)$, and try to optimize their payoff.
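The day-by-day bookkeeping of the model can be sketched as follows (an illustration in our own notation, not code from the paper):

```python
def day_stats(choices):
    """Given each agent's choice on one day (0 for restaurant A, 1 for B),
    return Delta(t) and the list of payoffs.  N = len(choices) must be odd.
    Attendance at A is written as M - Delta(t), so Delta may be negative
    if A happens to be the crowded restaurant."""
    N = len(choices)
    M = (N - 1) // 2
    n_A = choices.count(0)
    delta = M - n_A
    minority = 0 if n_A <= M else 1       # the uncrowded restaurant
    payoffs = [1 if c == minority else 0 for c in choices]
    return delta, payoffs

# five agents: two go to A, three to B; A is the minority and Delta = 0
delta, payoffs = day_stats([0, 0, 1, 1, 1])
assert delta == 0 and payoffs == [1, 1, 0, 0, 0]
```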
In the standard MG, the public information is not the value of $\Delta(t)$, but only whether it was negative or not \cite{mgpapers1, mgpapers2}. [In contrast, in our model the agents have better quality of information, and this difference is important.] Also, in the MG each agent has a finite set of strategies available to him/her, which uses only the history $\{\Delta(t)\}$ for $m$ previous days, where $m$ is a fixed non-negative integer. Each strategy is deterministic: for a given history, it tells which restaurant the agent should go to. While an agent may have more than one strategy available to him/her, he chooses the strategy that has the best `performance score' in the recent past. There is no probabilistic component in the choice of any agent. For a given history, the future choices of all agents for all subsequent days are fully determined. In the problem we study here, we allow agents to have probabilistic strategies. For a given history $\{\Delta(t)\}$, a strategy will specify a probability $p$ with which the agent should go to restaurant A. Another important difference from the MG is that we allow the strategy to depend explicitly on the payoffs received in the $m$ previous days. In the MG, the strategy does not explicitly involve previous payoffs. The payoffs affect the outcome only indirectly, through the performance scores that determine which strategy is used by the agent. The simplest case corresponds to $m=0$. In this case, an agent has no information. His probabilistic strategy is to make a random choice of which restaurant to go to, with equal probability. In this case, the probability that $r$ people show up at Restaurant A is clearly \begin{equation} {\rm Prob}(r) = {\begin{pmatrix} N\\ r \end{pmatrix}} 2^{-N} \end{equation} The expectation value of $r$ is $N/2$, and for large $N$, the distribution is nearly Gaussian, with a width proportional to $\sqrt{N}$.
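The quoted mean and width are just the binomial moments; a quick numerical check (with a small odd $N$ for illustration) confirms the exact variance $N/4$:

```python
from math import comb

N = 21                                              # small odd N for illustration
probs = [comb(N, r) / 2 ** N for r in range(N + 1)]  # Prob(r) = C(N, r) 2^{-N}
mean = sum(r * p for r, p in enumerate(probs))
var = sum((r - mean) ** 2 * p for r, p in enumerate(probs))

assert abs(mean - N / 2) < 1e-9   # <r> = N/2
assert abs(var - N / 4) < 1e-9    # width = sqrt(N)/2, i.e. proportional to sqrt(N)
```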
We can measure the inefficiency of the system by a parameter $\eta$ defined as \begin{equation} \eta = \lim_{N \rightarrow \infty} \frac{4}{N} \langle (r -N/2)^2 \rangle \end{equation} where $\langle ~\rangle$ denotes averaging over a long time evolution, and over different initial conditions. The normalization has been chosen so that the inefficiency parameter $\eta$ of a system of agents making their choices randomly is $1$. We now describe a simple $m=1$ probabilistic strategy that gives a highly efficient system, in which the inefficiency parameter can be made of order $1/N^{1 - \epsilon}$, for any $\epsilon >0$. The strategy is defined as follows: At $t=0$, each agent chooses one of the two restaurants with probability $1/2$ each, independently of others. At any subsequent time $t+1$, each agent follows the same simple strategy: If at time $t$ he found himself in the minority, he chooses the same restaurant as at time $t$. If he found himself in the majority, and the number of people visiting the same restaurant as him was $M + \Delta(t) +1$, with $\Delta(t) \geq 0$, he changes his choice with a small probability $p$, and sticks to his earlier choice with probability $1 -p$, independently of other agents. The value of $p$ depends only on $\Delta(t)$. It is approximately equal to $\Delta /M$ for $\Delta >0$. The precise dependence of $p$ on $\Delta$ is discussed later in the paper. For large $M$, the number of people changing their choice is distributed according to the Poisson distribution, with mean approximately equal to $\Delta(t)$, and width varying as $\sqrt{\Delta(t)}$. Thus we have the approximate recursion $\Delta (t+1) \approx \sqrt{\Delta(t)}$, for $\Delta(t) \gg 1$. This shows that within a time of order $\log \log N$, the magnitude of $\Delta$ will become of ${\cal O}(1)$, and then remain of order $1$.
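The doubly-logarithmic collapse implied by $\Delta(t+1) \approx \sqrt{\Delta(t)}$ can be seen in a minimal simulation. The sketch below is ours (not the production code behind the figures in section \ref{sec4}): it takes $p = \Delta/(M+\Delta+1)$ and omits the special treatment of the $\Delta = 0$ state discussed in section \ref{sec3}, so the walk simply freezes if it reaches $\Delta = 0$.

```python
import random

def run(N=2001, delta0=30, steps=25, seed=0):
    """Track the imbalance Delta(t) when every agent in the majority
    restaurant switches independently with p = Delta/(M + Delta + 1)."""
    random.seed(seed)
    M = (N - 1) // 2
    delta = delta0                      # majority restaurant has M + delta + 1
    traj = [delta]
    for _ in range(steps):
        if delta > 0:
            crowd = M + delta + 1
            p = delta / crowd           # ~ Delta/M for Delta << M
            r = sum(random.random() < p for _ in range(crowd))  # switchers
            # r <= delta: same minority, new gap delta - r;
            # r >= delta + 1: the two restaurants swap roles, gap r - delta - 1
            delta = delta - r if r <= delta else r - delta - 1
        traj.append(delta)
    return traj

traj = run()
# an O(sqrt(N)) initial imbalance collapses to O(1) in a handful of steps
assert min(traj) <= 5
```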
\section{Stability against individual cheaters} \label{sec3} In the previous section, we have shown that if all the agents follow the proposed common strategy, the social inefficiency of the system is considerably reduced. However, selfish agents may not do what is expected of them for the social good, and may act differently if it gives them profit. In this section, we show that if all the other people are following the common strategy outlined above, there is a specially selected value of $p$, for each $\Delta >0$, such that if the other agents follow the strategy with this value of $p$, a single individual gains no advantage by cheating. The emergence of effective cooperation amongst selfish agents in our problem may seem rather paradoxical at first. After all, the main point of the MG is that agents gain by differentiating, and not by following the same strategy. If rational agents know that they cannot improve their immediate individual gain by cheating, they would then try to maximize their individual long-term payoff. This they can do if they follow the same common strategy. {\it This cooperative strategy is beneficial for everybody in the long run, and deviating from it has no advantage}. This is the reason for the emergent cooperation between agents in our model. Let us consider any particular day $t$. Let the number of people who showed up in restaurant A be $M -\Delta(t)$. We may assume $\Delta(t) \geq 0$, without loss of generality. We consider first the case $\Delta >0$. We consider a particular agent Alice, who went to A on the $t$-th day, and found herself in the happy situation of being in the minority. Alice assumes that all other agents follow the strategy. Then, all other agents who went to A will go to it again on day $(t+1)$. There are $M + \Delta +1$ agents that went to B. Each of these agents will change his/her choice with probability $p$. Let $r$ be the number of agents that actually change their choice at time $(t+1)$.
Then, $r$ is a random variable, with a distribution given by \begin{equation} {\rm Prob}_{p} (r) = \dbinom{M + \Delta +1}{r} p^{r} (1 - p)^{M+ \Delta +1 -r} \end{equation} For $M \gg 1$, this distribution tends to the Poisson distribution with parameter $\lambda = p ( M + \Delta +1)$, given by \begin{equation} {\rm Prob}_{\lambda}(r) = \lambda^r e^{ -\lambda }/ r! \end{equation} If Alice chooses to go to A the next day, she will be in the winning position if $r \leq \Delta$. Hence her expected payoff $EP(Alice|stay)$, if she chooses to stay with her present choice, is \begin{equation} EP(Alice|stay) = \sum_{r=0}^{\Delta} {\rm Prob}_{p} (r) \end{equation} If, on the other hand, Alice were to switch her choice, she would win if $ r \geq \Delta+2$. Hence, we have her expected payoff $EP(Alice|switch)$, if she chooses to switch, given by \begin{equation} EP(Alice|switch) = \sum_{r=\Delta+2}^{\infty} {\rm Prob}_{p} (r) \end{equation} For Alice to have no incentive to cheat, we must have \begin{equation} EP(Alice|stay) \geq EP(Alice|switch). \label{eq:Alice} \end{equation} Now consider the agent Bob, who went to B on day $t$. He also assumes that all other people will follow the strategy: those who went to A will stick to their choice, and those who went to B switch their choice with probability $p$. There are $M + \Delta$ other people who went to B. If Bob chooses to cheat, and decides to stay put without using a random number generator, the number of agents switching would be a random number $\tilde{r}$, with a distribution given by \begin{equation} {\rm Prob}'_{p}(\tilde{r}) = \dbinom{M + \Delta}{\tilde{r}} p^{ \tilde{r}} (1 - p)^{M+ \Delta - \tilde{r}} \end{equation} He would be in the minority if $\tilde{r} \ge \Delta +1$.
Thus, if he chooses to stay, we have his expected payoff $EP(Bob|stay)$ given by \begin{equation} EP(Bob|stay) = \sum_{\tilde{r}=\Delta+1}^{\infty} {\rm Prob}'_{p} (\tilde{r}) \end{equation} On the other hand, if Bob decides to switch his choice, he would win if $ \tilde{r} \leq \Delta-1$. In that case, his expected payoff $EP(Bob|switch)$ is given by \begin{equation} EP(Bob|switch) = \sum_{\tilde{r}=0}^{\Delta -1} {\rm Prob}'_{p} (\tilde{r}) \end{equation} We choose the value of $p$ to make these equal. Thus the equation determining $p$, for a given $\Delta$ and $N$, is \begin{equation} EP(Bob|stay) = EP(Bob|switch) \label{eq:Bob} \end{equation} If the above condition is satisfied, Bob can choose to stay, or switch, and his expected payoff is the same. More generally, he can choose to switch with a probability $\alpha$, and his payoff is independent of $\alpha$. In that case, what is the optimum value of $\alpha$ for Bob? One has to bring in a different optimization rule to decide this, and it seems reasonable that Bob would choose a value that optimizes his long-time average payoff (which is the same as for any other agent), and hence choose the value $p$. In the limit of $M \gg \Delta$, Eq. (\ref{eq:Bob}) simplifies, as the dependence on $M$ drops out, and we get a simple equation determining the dependence of the Poisson parameter $\lambda$ on $\Delta$. Then, Eq. (\ref{eq:Bob}) becomes \begin{equation} \sum_{r=0}^{\Delta -1} \frac{ \lambda^r }{ r!} e^{-\lambda}= \sum_{r= \Delta +1}^{\infty} \frac{ \lambda^r} { r! }e^{-\lambda} \label{eq:12} \end{equation} This equation may be rewritten, avoiding the infinite summation, as \begin{equation} 2 \sum_{r=0}^{\Delta -1} \frac{ \lambda^r e^{-\lambda }}{ r!} = 1 - \frac{\lambda^\Delta e^{- \lambda } }{ \Delta!} \label{eq:lambda} \end{equation} It is easy to see that Eq. (\ref{eq:lambda}) implies that Eq. (\ref{eq:Alice}) is also satisfied.
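Eq. (\ref{eq:lambda}) is a one-dimensional root-finding problem: the difference of its two sides, which equals ${\rm Prob}(X \leq \Delta-1) + {\rm Prob}(X \leq \Delta) - 1$ for $X$ Poisson-distributed with mean $\lambda$, decreases monotonically from $+1$ to $-1$ as $\lambda$ grows, so simple bisection suffices. A sketch (our own implementation, for illustration):

```python
from math import exp

def poisson_pmf(r, lam):
    """Poisson weight lambda^r e^{-lambda} / r!, computed iteratively."""
    term = exp(-lam)
    for k in range(1, r + 1):
        term *= lam / k
    return term

def solve_lambda(delta, tol=1e-9):
    """Solve 2 * sum_{r < delta} pmf(r) = 1 - pmf(delta) for lambda.
    f(lam) = P(X <= delta-1) + P(X <= delta) - 1 is strictly decreasing,
    positive near lam = 0 and negative at lam = delta + 2."""
    def f(lam):
        cdf = sum(poisson_pmf(r, lam) for r in range(delta))
        return 2 * cdf - 1 + poisson_pmf(delta, lam)
    lo, hi = 1e-9, delta + 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

assert abs(solve_lambda(1) - 1.14619) < 1e-3
assert abs(solve_lambda(10) - 10.16448) < 1e-3
assert abs(solve_lambda(50) - 50.16623) < 1e-3
```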
For the sake of simplicity, we will only consider this limit of large $M$ in the following. The extension to finite $M$ presents no special difficulties. Thus, for any given value of $\Delta > 0$, the optimum value of $\lambda$ is determined by the solution of Eq. (\ref{eq:lambda}). This equation is easily solved. The resulting values of $\lambda$ for different $\Delta$ are shown in Table \ref{table1}. For large $\Delta$, we show in the Appendix that $(\lambda - \Delta)$ tends to $1/6$. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{fig_1.eps} \caption{Variation of the expected payoff for the next day of an agent in Restaurant A ($P_{Alice}$) and in Restaurant B ($P_{Bob}$) with $\Delta$.} \label{fig1} \end{center} \end{figure} We note that the values of $\lambda$ do not have to be broadcast to the agents by any central authority. Each individual rational agent will be able to deduce them as optimal, without any need to communicate with others. Fig. \ref{fig1} shows the variation of the expected payoff for the next day of Alice and Bob with $\Delta$. As expected, we can see that for large values of $\Delta$, the expected payoff of an agent in either restaurant tends to the value $1/2$. Alice's payoff is a bit bigger than $1/2$, but this advantage is short-lived. Also, Bob cannot utilize this predictability of the system, as an attempt by him to switch changes the outcome with finite probability. \begin{table}[ht] \caption{Values of $\lambda$ for different $\Delta$, obtained by solving Eq. (\ref{eq:lambda}).} \centering \begin{tabular}{| c | r | c | r |} \hline\hline $\Delta$ & \multicolumn{1}{c |}{$\lambda$} & $\Delta$ & \multicolumn{1}{c |}{$\lambda$}\\[0.5ex] \hline 1 & 1.14619 & 8 & 8.16393 \\ 2 & 2.15592 & 9 & 9.16423 \\ 3 & 3.15942 & 10 & 10.16448\\ 4 & 4.16121 & 20 & 20.16557\\ 5 & 5.16229 & 30 & 30.16594\\ 6 & 6.16302 & 40 & 40.16612\\ 7 & 7.16354 & 50 & 50.16623\\[1ex] \hline \end{tabular} \label{table1} \end{table} Now, we consider the case $\Delta =0$. In this case, restaurant A has exactly $M$ people, and B has $M +1$.
We now show that there is no optimum value of $\lambda$ in this case. A naive extension of the strategy for $\Delta >0$ to this case would be that Alice does not switch. But then, if there is a nonzero $\lambda$, and agents from B switch to A, Bob has an incentive to cheat, as if he goes to A, he would be sure to be in the majority. If he cheats and stays back, but at least one other person leaves from B to A (which occurs with nonzero probability for any nonzero $\lambda$), he has some chance to be on the winning side. Clearly, $\lambda =0$ is not a viable strategy either, as then nobody switches, the state on day $(t +1)$ is the same as on day $t$, and the same situation is met again. While this is a solution which minimizes wastage of resources, and is `socially efficient', it is clearly a very unfair state of affairs, where a subset of people are privileged, and have payoff $1$ every day, while another set has no chance of any payoff. Consider the possible strategy that, in this case, all people who went to A switch with probability $\lambda'/M$, and all who went to B switch with probability $\lambda''/(M+1)$, with both $\lambda'$ and $\lambda''$ non-zero. Let $r'$ and $r''$ be the random variables denoting the number of people switching sides from A to B, and from B to A, respectively. Then, $r'$ and $r''$ are Poisson-distributed independent random variables with means $\lambda'$ and $\lambda''$ respectively. Repeating the analysis above, we see that the requirement that Alice has no incentive to cheat gives the condition \begin{equation} {\rm Prob}( r' < r''-2 ) ={\rm Prob}( r' \geq r'') \label{eq:d0:1} \end{equation} Similarly, for the absence of an incentive to cheat for Bob, we should have \begin{equation} {\rm Prob}( r' < r''-1 ) ={\rm Prob}( r' \geq r''+1) \label{eq:d0:2} \end{equation} It is easy to see that Eq. (\ref{eq:d0:1}) and Eq.
(\ref{eq:d0:2}) are mutually inconsistent, as the LHS of the former is strictly less than the LHS of the latter, and for the RHS it is the opposite. Thus, we cannot find nonzero finite values $\lambda'$ and $\lambda''$, which will give a stable strategy against cheating by individuals. Thus, if we reach $\Delta =0$, it is not clear what any agent should do. We note that in this case, though Bob does not expect to gain anything on the next day by switching, he would still like to do that to upset the status quo, and improve his chance of winning the day after. Of course, as Alice realizes that some people from B are likely to switch, she would like to switch as well. A simple strategy is that in this case, all agents, irrespective of whether they were in the minority or not on day $t$, switch the next day with a probability $M^{\epsilon -1}$, where $\epsilon$ is a real number $0 \leq \epsilon \leq 1$. This corresponds to both $\lambda'$ and $\lambda''$ very large, of order $M^{\epsilon}$. We shall refer to this step as a major resetting event. For a given value of $\epsilon$, the value of $|\Delta|$ just after resetting is of order $M^{\epsilon/2}$. Then it takes time of order $\log \log{M}$ to reach the value $\Delta =0$. The dominant contribution to the mean inefficiency parameter then comes from the major resetting events, and it is easy to see that the mean inefficiency parameter would vary as $M^{\epsilon -1}/\log \log {M}$. Thus, for higher efficiency, we should keep $\epsilon$ small. \section{Monte Carlo simulations} \label{sec4} We have studied the time evolution of a set of $N$ agents following this strategy, using Monte Carlo simulations, with $N=2001$. If the restaurant with greater attendance has $M + 1 + \Delta$ agents on a given day, with $\Delta > 0$, the next day each of them switches his/her choice with probability $\lambda(\Delta)/( M + \Delta +1)$, and the agents in the minority restaurant stick to their choice.
If there are exactly $M +1$ agents in the majority restaurant, all agents switch their restaurant with a probability $ 1/(2 M^{1 -\epsilon})$. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{fig_2.eps} \caption{ A typical evolution of a system of $2001$ agents for two different choices of the parameter $\epsilon = 0.5$ and $0.7$. The large deviations correspond to major resetting events (see text). } \label{fig2} \end{center} \end{figure} The result of a typical evolution is shown in Fig. \ref{fig2}, for two choices of $\epsilon$: $0.5$ and $0.7$. We see that the majority restaurant changes quite frequently. The large peaks in $|\Delta|$ correspond to resettings of the system, and clearly, their magnitude decreases if $\epsilon$ is decreased. There is very little memory of the location of the majority restaurant in the system. To be specific, let $S(t)$ be $+1$ if the minority restaurant is A in the $t$-th step, and $-1$ if it is B. Then the autocorrelation function $\langle S(t) S(t +\tau)\rangle$ decays exponentially with $\tau$, approximately as $\exp( - K \tau)$. The value of $K$ depends on $\epsilon$, but is about $2$, and the correlation is negligible for $\tau> 3$. Fig. \ref{fig3} shows the probability distribution of $\Delta$ in the steady state for two different values of $\epsilon$. Fig. \ref{fig4} gives a plot of the inefficiency parameter $\eta$ as a function of $\epsilon$. In each case, the estimate of $\eta$ was obtained using a single evolution of the system for $10000$ time steps. The fractional error of the estimate is less than the symbol size. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{fig_3.eps} \caption{Probability distribution of $\Delta$ in the steady state for $\epsilon = 0.3, 0.7$ obtained by evolving $N = 2001$ agents for $10^{6}$ time steps.
The red bars have been shifted a bit to the right for visual clarity.} \label{fig3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=8.0cm]{fig_4.eps} \caption{Variation of inefficiency parameter $\eta$ with $\epsilon$, obtained by averaging the evolution of $N = 2001$ agents for 10000 time steps.} \label{fig4} \end{center} \end{figure} We define $A_i(t)$ to be $+1$ if the $i$-th agent was in restaurant A at time $t$, and $-1$ otherwise. We define the auto-correlation function of the $A$-variables in the steady state as \begin{equation} C(\tau) = \frac{1}{N} \sum_i \langle A_i(t) A_i(t +\tau) \rangle\ . \end{equation} In Fig. \ref{fig5}, we have shown the variation of $C(\tau)$ with $\tau$. We see that this function has a large amount of persistence. This is related to the fact that only a small fraction of agents actually switch their choice at any time step. Clearly, the persistence time is larger for smaller $\epsilon$. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{fig_6.eps} \caption{$C(\tau)$ as a function of $\tau$ for $\epsilon = 0.3, 0.5$ and $0.7$. Each data point is obtained by averaging over 10000 simulation steps. The total number of agents is $N = 2001$.} \label{fig5} \end{center} \end{figure} \section{Discussion} \label{sec5} In our analysis of the strategy discussed, we assumed that whenever the system reaches a state in which one restaurant has exactly $M$ agents, it is not possible to find a strategy for reaching a nearby state, with only a few agents switching, and the system undergoes a major resetting. However, consider a situation where, because of a shared common history, the agents agree to a convention that if such a state is reached, it continues for $T$ more days without change, as it is socially efficient, and on the $(T+1)$-th day, the major resetting occurs.
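The dynamics discussed above is fully specified by the switching rules of Sec.~\ref{sec4}, which are compact enough to sketch in full. The following is a minimal illustration (not the code used for the figures), with $\lambda(\Delta)$ approximated by its large-$\Delta$ limit $\Delta + 1/6$ and no bookkeeping of the inefficiency parameter:

```python
import random

def evolve_deltas(n_agents=2001, steps=2000, eps=0.7, seed=1):
    """Evolve N = 2M + 1 agents and record Delta at each time step.

    lambda(Delta) is approximated by Delta + 1/6 (cf. Table 1)."""
    rng = random.Random(seed)
    m = (n_agents - 1) // 2
    n_a = m                               # start from a balanced configuration
    deltas = []
    for _ in range(steps):
        maj = max(n_a, n_agents - n_a)
        delta = maj - (m + 1)
        deltas.append(delta)
        if delta > 0:
            # each majority agent switches with prob lambda/(M + Delta + 1)
            p = (delta + 1.0 / 6.0) / maj
            flips = sum(rng.random() < p for _ in range(maj))
            n_a = n_a - flips if n_a == maj else n_a + flips
        else:
            # majority has exactly M + 1 agents: major resetting event
            p = 0.5 / m ** (1.0 - eps)
            a_to_b = sum(rng.random() < p for _ in range(n_a))
            b_to_a = sum(rng.random() < p for _ in range(n_agents - n_a))
            n_a += b_to_a - a_to_b
    return deltas
```

A run of this sketch shows the qualitative features of Fig.~\ref{fig2}: small $|\Delta|$ punctuated by resetting spikes whose size grows with $\epsilon$.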
The rationale for such a choice would be that all agents recognize that this state has overall maximum social benefit, and in the long run, any agent would spend an equal amount of time in the privileged class. Clearly, for realistic modelling, $T$ should not be too large. It has to be significantly less than the expected lifetime of an agent. The number of consecutive days when $\Delta$ is nonzero is of order $\log \log N$, and then for $T$ consecutive days $\Delta $ remains zero. Then, the inefficiency parameter $\eta$ in such a strategy is given by \begin{equation} \eta \simeq \frac{ K_1 N^{\epsilon -1}}{ T + K_2 \log \log N} \end{equation} where $K_1$ and $K_2$ are some constants. This conclusion is not very surprising. A society that has a larger value of $T$ has more overall social benefit than one with a smaller value. However, agents have to look beyond the next day's payoff to realize this, and one needs to go beyond myopic strategies that maximize only the next day's payoff. An interesting question is what strategies would emerge if the agents try to maximize the sum of their expected payoffs in the next $n$ days for $n > 1$. Generalization of this strategy to the Kolkata Paise Restaurant problem is straightforward. The strategy is as follows: If an agent was fed at the restaurant of rank $k$ at time step $t$, he goes to the restaurant of rank $k-1$ at time $t+1$. If he found no food at time step $t$, he picks at random one restaurant, out of the restaurants that had no customer at step $t$. If the picked restaurant has rank $k'$, he goes to the restaurant with rank $k'-1$. Then, the average time required to reach a cyclic state is of order $\log N$. In the cyclic state, each agent gets to sample all the restaurants.
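A minimal simulation of this rank-based strategy is easy to write down. The sketch below is an illustration, not the authors' code; it assumes that ranks wrap cyclically (the agent served at the top-ranked restaurant moves to the bottom rank) and that each restaurant serves one randomly chosen customer per day:

```python
import random

def kpr_step(pos, n, rng):
    """One synchronous day of the rank-based strategy (ranks 0..n-1, cyclic)."""
    by_rest = {}
    for agent, r in enumerate(pos):
        by_rest.setdefault(r, []).append(agent)
    vacant = [r for r in range(n) if r not in by_rest]
    new_pos = list(pos)
    for r, agents in by_rest.items():
        fed = rng.choice(agents)          # one random customer is served
        new_pos[fed] = (r - 1) % n        # served: move up one rank
        for a in agents:
            if a != fed:                  # unserved: predecessor of a vacant one
                new_pos[a] = (rng.choice(vacant) - 1) % n
    return new_pos

def steps_to_cyclic_state(n, max_steps=500, seed=0):
    """Days until every restaurant has exactly one customer."""
    rng = random.Random(seed)
    pos = [rng.randrange(n) for _ in range(n)]
    for t in range(max_steps):
        if len(set(pos)) == n:            # everyone alone: cyclic state reached
            return t
        pos = kpr_step(pos, n, rng)
    return None
```

Since the number of vacant restaurants shrinks by a roughly constant factor each day, convergence to the cyclic state is indeed fast, consistent with the quoted $O(\log N)$ behavior.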
The strategy can be made robust against cheaters, if we make the additional rule that if more than one customer shows up at the restaurant of rank $k$, preference is given to the customer who was served at the rank-$(k+1)$ restaurant the previous day. An interesting question is the effect of heterogeneity in agents, as far as the value of $\epsilon$ is concerned. There may be impatient agents that do not want to wait, and switch with probability $1/2$ as soon as the value $\Delta =0$ is reached. If the number of such agents is $N^{a}$, with $ a <1$, it is easy to see that the final inefficiency parameter cannot be less than $ N^{a-1}$. In order to get a substantial decrease in inefficiency, the number of such agents should be small. The optimum value of $T$, or of the parameter $\epsilon$, is not decidable within the framework of our model, as one needs to bring in other criteria, like fairness or social equality, and decide their relative weights against social efficiency and the wish to win again quickly. Also, there have to be some general shared values amongst the agents to make this possible. Clearly, a discussion of these issues is beyond the scope of our work. Acknowledgements: We thank Dr. Bill Yeung for a very useful correspondence. The work of DD is supported in part by the Department of Science and Technology, Government of India, through the grant SR/S2/JCB-24/2006.
\section{Introduction} \label{sec:intro} Since first observed by the COHERENT collaboration in 2017 \cite{Akimov:2017ade} with a CsI detector, and subsequently in 2020 with a liquid argon (LAr) detector \cite{COHERENT:2020iec}, coherent elastic neutrino-nucleus scattering (CE$\nu$NS) has been recognized as a powerful tool for Standard Model (SM) measurements and beyond-the-SM (BSM) searches. Examples of the physics cases that can be studied range from the determination of the mean-square radii of neutron distributions and low-energy measurements of the weak mixing angle \cite{Cadeddu:2017etk,Cadeddu:2020lky,Papoulias:2017qdn,Papoulias:2019txv,Miranda:2020tif,AristizabalSierra:2019zmy}, up to searches for new interactions in the neutrino sector covering a whole spectrum of possible mediators (see e.g. \cite{Farzan:2018gtr,Shoemaker:2017lzs,Dutta:2019eml,Farzan:2015hkd,Liao:2017uzy,Coloma:2017ncl,Coloma:2019mbs,Flores:2020lji,AristizabalSierra:2019ufd,AristizabalSierra:2019ykk,AristizabalSierra:2018eqm,Brdar:2018qqj,Lindner:2016wff,AristizabalSierra:2020zod,AristizabalSierra:2021fuc,Hurtado:2020vlj,Denton:2018xmq,Giunti:2019xpr,Chang:2020jwl,Miranda:2021kre,Flores:2021kzl}). Interestingly, the same experimental infrastructures used for CE$\nu$NS measurements provide as well environments suitable for searches for new degrees of freedom involving light dark matter (LDM) \cite{Dutta:2019nbn,Dutta:2020vop,COHERENT:2019kwz,COHERENT:2021pvd} and axion-like particles (ALPs) \cite{Dent:2019ueq,AristizabalSierra:2020rom}. Motivated by this wide range of possibilities, plans for further CE$\nu$NS measurements are underway. They involve experiments using reactor neutrinos (e.g.
CONUS \cite{Hakenmuller:2019ecb,Bonet:2020awv,CONUS:2021dwh}, CONNIE \cite{Aguilar-Arevalo:2019jlr}, MINER \cite{Agnolet:2016zir}, RED-100 \cite{Akimov:2017hee}, $\nu$-cleus \cite{Strauss:2017cuu}, TEXONO \cite{Wong:2015kgl}, vIOLETA \cite{Fernandez-Moroni:2020yyl}, SBC \cite{SBC:2021yal} and the Dresden-II reactor experiment \cite{Colaresi:2021kus}), measurements at COHERENT with germanium and NaI detectors \cite{Barbeau:2021exu}, the Coherent CAPTAIN-Mills (CCM) experiment~\cite{CCM:2021leg} as well as at the European Spallation Source (ESS) \cite{Baxter:2019mcx}. Plans to extend measurements/searches with decay-in-flight neutrino beams such as NuMI \cite{MINERvA:2016iqn} or LBNF \cite{DUNE:2016evb} using gaseous targets with the directional $\nu$BDX-DRIFT are expected as well \cite{AristizabalSierra:2021uob,Abdullah:2020iiv}. Multi-ton dark matter (DM) detectors and RES-NOVA, using archaeological lead, are also among the facilities in which CE$\nu$NS will be searched for \cite{Aprile:2015uzo,Aalbers:2016jon,Malling:2011va,RES-NOVA:2021gqp,Strigari:2009bq,AristizabalSierra:2017joc,Gonzalez-Garcia:2018dep,Dutta:2017nht,Schwemberger:2022fjl,Suliga:2020jfa}. Overall, an international program covering the different energy windows where CE$\nu$NS can be observed is well established. These energy windows offer features that make them particularly suitable for certain types of new physics searches. Pulsed decay-at-rest (DAR) neutrino beams (such as those at the spallation neutron source and the ESS) provide energy and timing spectra, thus making them particularly useful in searches for flavor-dependent new physics. Decay-in-flight (DIF) neutrino beams, instead, are better suited for testing nuclear physics hypotheses, due to their higher energy. Finally, given the extremely low-energy thresholds of reactor experiments, sensitivity to physics producing spectral distortions at low momentum transfer becomes a main target.
Arguably, the prototypical scenario in that case corresponds to neutrino magnetic moments and transitions, for which the differential cross section exhibits a Coulomb divergence \cite{Vogel:1989iv}. Scenarios with light mediators, although not leading to such pronounced spectral features, can also be tested with reactor data. In this regard, the recent suggestive observation of CE$\nu$NS by the Dresden-II reactor experiment \cite{Colaresi:2022obx} offers an opportunity to systematically test the presence of such new light mediators. The Dresden-II reactor experiment consists of a 2.924 kg p-type point contact germanium detector (NCC-1701) operating with a 0.2 $\text{keV}_\text{ee}$ energy threshold and located at $\sim 10\,\text{m}$ from the 2.96 GW Dresden-II nuclear reactor. The data released follow from a 96.4-day exposure with 25 days of reactor operation outages in which no visible CE$\nu$NS signal was observed. Analyses relying on these data and investigating the implications of a modified Lindhard quenching factor (QF) as well as limits on light vector mediators have already been presented in Ref. \cite{Liao:2022hno}. These data have also been used to place limits on a variety of new physics scenarios including neutrino non-standard interactions (NSI), light vector and scalar mediators and neutrino magnetic moments in Ref. \cite{Coloma:2022avw}. In this paper we extend upon these analyses and consider the impact of the Dresden-II reactor data on: (i) Low-energy measurements of the weak mixing angle at a $\mu\simeq 10\,$MeV renormalization scale, (ii) neutrino generalized interactions (NGI) with light mediators, of which light vector and scalar mediators are a subset, (iii) neutrino magnetic transition couplings leading to up-scattering events (the so-called sterile neutrino dipole portal \cite{McKeen:2010rx,Brdar:2020quo}, $\bar\nu_e + N \to F_4 + N$ with $F_4$ a heavy sterile neutrino).\\ The remainder of this paper is organized as follows. In Sec.
\ref{sec:ngi-lm} we briefly present the physics scenarios treated in our statistical analysis, including a short discussion on how the weak mixing angle can affect the event rate. In Sec. \ref{sec:parameter_space_analysis} we discuss differential event rates, total event rates and the details of the statistical analysis we have adopted along with our results. Finally, in Sec. \ref{sec:conclusions} we present our summary and conclusions. \section{CE$\nu$NS differential cross section, weak mixing angle and new physics scenarios} \label{sec:ngi-lm} In the SM the CE$\nu$NS differential cross section follows from a $t$-channel neutral current process and reads \cite{Freedman:1973yd,Freedman:1977xn} \begin{equation} \label{eq:xsec_CEvNS} \left . \frac{d\sigma}{dE_r} \right|_\text{SM} = \frac{G_F\,m_N}{2\pi}Q_W^2 F^2(q^2) \left( 2 - \frac{m_N E_r}{E_\nu^2} \right)\ , \end{equation} where $G_F$ refers to the Fermi constant, $m_N$ to the nuclear target mass, $E_r$ to the recoil energy, $E_\nu$ to the incoming neutrino energy and $Q_W$ to the weak charge coupling, which accounts for the $Z^0$-nucleus interaction in the zero momentum transfer limit. Since the scatterer has an internal structure, this coupling is weighted by the nuclear weak form factor $F^2(q^2)$\footnote{For DAR and DIF neutrino beams the form factor plays an important role. For reactor neutrinos, instead, the energy regime is such that to a large degree $F^2(q^2)\to 1$.}. Hence, the ``effective'' coupling $Q_W\times F(q^2)$ encapsulates the expected behavior: As the momentum transfer $q$ increases, the weak charge diminishes and so does the strength of the interaction. Neglecting higher-order momentum transfer terms that arise from the nucleon form factors, one explicitly has \begin{equation} \label{eq:weak-cahrge} Q_W=Z\,g_{V,\text{SM}}^p + (A-Z)\,g_{V,\text{SM}}^n\ .
\end{equation} Here the proton and neutron vector couplings are dictated by the fundamental $Z^0-\mathrm{q}$ ($\mathrm{q}=u,d$) couplings, given by $g_{V,\text{SM}}^p=1/2 - 2\sin^2\theta_W$ and $g_{V,\text{SM}}^n=-1/2$. For the value of the weak mixing angle at $\mu=m_{Z^0}$, $\sin^2\theta_W|_{\overline{\text{MS}}}(m_{Z^0})=0.23122\pm 0.00003$ \cite{Tanabashi:2018oca}, one can easily check that the neutron coupling exceeds the proton coupling by about a factor 10, resulting in the $N^2=(A-Z)^2$ dependence predicted in the SM for the CE$\nu$NS cross section. However, a sufficiently large number of events allows for sensitivity to $\sin^2\theta_W$. The SM predicted value at $q=0$ (obtained by RGE extrapolation in the minimal subtraction ($\overline{\text{MS}}$) renormalization scheme) is \begin{equation} \label{eq:weak-mixing-angle-q-Eq0} \sin^2\theta_W(q=0) = \kappa(q=0)|_{\overline{\text{MS}}} \sin^2\theta_W|_{\overline{\text{MS}}}(m_{Z^0})\ , \end{equation} with $\kappa(q=0)|_{\overline{\text{MS}}}=1.03232\pm 0.00029$ \cite{Kumar:2013yoa}. Variations around this value lead to fluctuations of the predicted cross section and of the event rate (see Sec. \ref{sec:parameter_space_analysis}). Although statistical analyses of the weak mixing angle have been performed in the light of COHERENT data \cite{Papoulias:2017qdn,Miranda:2020tif} and are expected to follow also from the electron channel at e.g. DUNE \cite{deGouvea:2019wav}, the interesting aspect of an analysis using reactor data has to do with the different energy scale of such an indirect measurement (compared with COHERENT or DUNE) and potentially with the amount of data. \subsection{Renormalizable NGI} \label{sec:renormalizable-NGI} Effective NGI\footnote{In contrast to the standard effective interaction jargon, here the typical energy scale has to be just above the MeV scale. For reactor experiments this means $\Lambda>q\simeq 19\,$MeV.} were first considered by T. D. Lee and Chen-Ning Yang in Ref. \cite{Lee:1956qn}.
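Before moving on, the SM couplings quoted above are easy to check numerically. A short sketch for $^{72}$Ge, using the constants quoted in the text (an illustration, not the analysis code):

```python
# Constants quoted in the text (MSbar scheme)
SIN2_TW_MZ = 0.23122    # sin^2(theta_W) at mu = m_Z
KAPPA_Q0 = 1.03232      # RGE factor relating the m_Z and q = 0 values

def weak_charge(Z, N, sin2_tw):
    """Tree-level weak charge Q_W = Z g_V^p + N g_V^n."""
    g_v_p = 0.5 - 2.0 * sin2_tw
    g_v_n = -0.5
    return Z * g_v_p + N * g_v_n

sin2_tw_0 = KAPPA_Q0 * SIN2_TW_MZ          # sin^2(theta_W) at q = 0
q_w_ge = weak_charge(32, 40, sin2_tw_0)    # 72Ge: Z = 32, A - Z = 40
```

With these inputs $\sin^2\theta_W(q=0)\simeq 0.2387$ and $Q_W \simeq -19.3$ for $^{72}$Ge, making the dominance of the neutron term explicit.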
NGI have also been considered in the context of neutrino propagation in matter in Ref. \cite{Bergmann:1999rz}. More recently, they have been considered in the context of CE$\nu$NS analyses in Ref. \cite{Lindner:2016wff} and within COHERENT CsI measurements in Ref. \cite{AristizabalSierra:2018eqm}\footnote{In this Reference the acronym NGI, and thus the name ``neutrino generalized interactions'' rather than generalized neutrino interactions, was introduced so as to mimic the acronym NSI for ``neutrino nonstandard interactions''.}. Although the Dresden-II reactor data can be used to analyze effective NGI, given its rather low recoil-energy threshold one can expect beforehand that better sensitivities to NGI induced by light mediators are achievable. Note that an analysis of this scenario in the context of multi-ton DM detectors has been presented recently in Ref. \cite{Majumdar:2021vdw}. Focusing on this case, the most general Lagrangian can be written schematically as follows \begin{equation} \label{eq:NGI_Lag} \mathcal{L}_{\nu-\mathrm{q}} = \sum_{\substack{X=S,P \\V,A,T}} \left[ \overline{\nu}\,\,f_X\Gamma_X\,\nu\,X + \sum_{\mathrm{q}=u,d} \overline{\mathrm{q}}\,\Gamma_X\,\left(g_X^\mathrm{q} + i \gamma_5 h_X^\mathrm{q}\right)\,\mathrm{q}\,X \right]\ , \end{equation} where $\Gamma_X=\{\mathbb{I},i\gamma_5,\gamma_\mu,\gamma_\mu\gamma_5,\sigma_{\mu\nu}\}$ with $\sigma_{\mu\nu}=i[\gamma_\mu,\gamma_\nu]/2$, the parameters in the quark and neutrino currents ($f_X, g^\mathrm{q}_X$ and $h^\mathrm{q}_X$) are taken to be real and the interactions to be lepton flavor universal. Here, $X$ refers to the field responsible for the interaction. Integrating $X$ out leads to an effective Lagrangian that contains, among other terms, NSI as a subset. In the absence of a robust deviation from the SM CE$\nu$NS prediction, there is no a priori reason for any of these interactions to be preferred over the others.
However, those involving nuclear spin (spin-dependent interactions) are expected to produce lower event rates, in particular in heavy nuclei \cite{Freedman:1977xn}. Dropping those couplings and moving from quark to nuclear operators, the resulting Lagrangian reads \begin{widetext} \begin{equation} \label{eq:Lag_nuclear} \mathcal{L}_{\nu-N}=\sum_{X=\text{All}}\overline{\nu}f_X\Gamma_X\nu\,X + \sum_{X=S,V,T}\overline{N}\,\overline{C}_X\Gamma_X N\,X +\sum_{\substack{(X,Y)=(P,S)\\\;\;\qquad(A,V)}}\overline{N}\,i\overline{D}_X\Gamma_Y N\,X \ . \end{equation} \end{widetext} Expressions for the couplings of the nucleus to the corresponding mediator are given by \cite{AristizabalSierra:2018eqm} \begin{align} \label{eq:nuclear-couplings} \overline{C}_S&=Z\sum_\mathrm{q}\frac{m_p}{m_\mathrm{q}}f_{T_\mathrm{q}}^p g^\mathrm{q}_S + (A-Z)\sum_\mathrm{q}\frac{m_n}{m_\mathrm{q}}f_{T_\mathrm{q}}^n g^\mathrm{q}_S\ , \\ \overline{C}_V&=Z(2 g^u_V + g^d_V) + (A-Z)(g^u_V + 2 g^d_V)\ , \\ \overline{C}_T&=Z(\delta_u^p g^u_T + \delta_d^p g^d_T) + (A-Z)(\delta_u^n g^u_T + \delta_d^n g^d_T)\ , \end{align} where the different nucleon coefficients are obtained from chiral perturbation theory from measurements of the $\pi$-nucleon sigma term and from data of azimuthal asymmetries in semi-inclusive deep-inelastic-scattering and $e^+ e^-$ collisions \cite{Cheng:1988im,Anselmino:2008jk,Courtoy:2015haa,Goldstein:2014aja,Radici:2015mwa}. Expressions for $\overline{D}_P$ and $\overline{D}_A$ can be obtained by replacing $g^\mathrm{q}_S\to h_P^\mathrm{q}$ and $g_V^\mathrm{q}\to h_A^\mathrm{q}$ in $\overline{C}_S$ and $\overline{C}_V$, respectively. The differential cross section induced by the simultaneous presence of all the interactions in Eq.~(\ref{eq:Lag_nuclear}) can be adapted to the light mediator case from the result derived in the effective limit in Refs.
\cite{Lindner:2016wff,AristizabalSierra:2018eqm} \begin{widetext} \begin{equation} \left.\frac{d\sigma}{dE_r}\right|_\text{NGI}=\frac{G_F^2}{2\pi}m_NF^2(q^2) \left[ \xi_S^2\frac{2 E_r}{E_r^\text{max}} + \xi_V^2\left(2 - \frac{2 E_r}{E_r^\text{max}}\right) + \xi_T^2\left(2 - \frac{E_r}{E_r^\text{max}}\right) \right]\ . \end{equation} \end{widetext} Here $E_r^\text{max}\simeq 2E_\nu^2/m_N$ and, in contrast to the effective case, the $\xi_X$ parameters are $q^2=2m_NE_r$ dependent, though they follow the same definitions \begin{equation} \label{eq:xi_parameters} \xi_S^2=C_S^2 + D_P^2\ , ~~ \xi_V^2=C_V^2 + D_A^2\ , ~~ \xi_T^2= 4 C_T^2 \ . \end{equation} The parameters in the right-hand side are in turn defined as: \begin{equation} \label{eq:cross_sec_parameters} C_X=\frac{1}{\sqrt{2}G_F}\frac{f_X\overline{C}_X}{2m_NE_r + m_X^2}\ , \quad D_X=\frac{1}{\sqrt{2}G_F}\frac{f_X\overline{D}_X}{2m_NE_r + m_X^2}\ , \end{equation} with the exception of $C_V$ which is shifted by the SM contribution, $C_V\to Q_W + C_V$, with $Q_W$ given by Eq.~(\ref{eq:weak-cahrge}). Two relevant remarks follow from the expressions in Eqs.~(\ref{eq:xi_parameters}) and (\ref{eq:cross_sec_parameters}). First of all, one can notice that in the low momentum transfer limit and with $m_X\ll q$ the $\xi_X$ parameters are enhanced. This is at the origin of the spectral distortions that could be expected if any of these interactions sneaks into the signal. Secondly, unlike the effective case, where each $\xi_X$ can be treated as a free parameter (thus allowing one to encapsulate various interactions at the same time, e.g. in $\xi_S$ a scalar and a pseudoscalar interaction), in this case the $q^2$ dependence does not allow that. Thus, if one considers e.g. $\xi_S$, in full generality a four-parameter analysis is required. To assess the impact of the Dresden-II reactor experiment signal, we then proceed by assuming a single mediator at a time: $\xi_S$ determined only by $C_S$ and $\xi_V$ by $Q_W+C_V$.
Let us finally note that for the case of $\xi_T$, such an assumption is not necessary. \begin{figure*}[t] \centering \includegraphics[scale=0.35]{new_plots_Gaussian/datapoints_new_may22.pdf} \caption{Experimental data from the Dresden-II reactor obtained during 96.4 days exposure time using the NCC-1701 germanium detector. CE$\nu$NS data follow from residual counts after the subtraction of the best-fit background components \cite{Colaresi:2022obx}. The spectral rates of signal events are also shown, for the SM prediction obtained with the modified Lindhard QF [see Eq.~(\ref{eq:QF_Linhard_modified})] (gray curves, solid for $q=0$ and dashed for $q = -20\times 10^{-5}$, in both cases $k=0.157$) and for various new physics scenarios, with the same assumptions on the QF.} \label{fig:data-points} \end{figure*} \subsection{Sterile neutrino dipole portal} \label{sec:nu-F-magnetic-transition} In the Dirac case, neutrino magnetic and electric dipole moment couplings are dictated by the following Lagrangian \cite{Grimus:2000tq} \begin{equation} \label{eq:Lag_mag_moment} \mathcal{L}=\overline{\nu}\,\sigma_{\mu\nu}\,\lambda\,\nu_R\,F^{\mu\nu} + \text{H.c.}\ , \end{equation} where in general $\lambda$ is a $3\times N$ matrix in flavor space. These couplings are chirality flipping and so the scattering process induced by an ingoing active neutrino produces a sterile neutrino in the final state. Thus, Dirac neutrino magnetic moments always induce up-scattering processes ($\nu_L+N\to F_4+N$). The mass of the outgoing fermion, being a free parameter, is only constrained by kinematic criteria. Given an ingoing neutrino energy $E_\nu$, its mass obeys the following relation: \begin{equation} \label{eq:kinematic_constraint} m_4^2\lesssim 2m_N E_r\left(\sqrt{\frac{2}{m_N E_r}}E_\nu -1\right)\ .
\end{equation} For the nuclear recoil energies involved at the Dresden-II experiment and for neutrino energies near the kinematic threshold, $E_\nu \sim 9.5\,$MeV, the upper bound $m_4\lesssim 8\,$MeV applies. The interactions in Eq.~(\ref{eq:Lag_mag_moment}) contribute to the CE$\nu$NS cross section \cite{McKeen:2010rx} through \begin{widetext} \begin{equation} \label{eq:dipole_portal_xsec} \left. \frac{d\sigma}{dE_r}\right|_\text{DP}= \alpha_\text{EM}\,\mu_{\nu,\text{Eff}}^2\,F^2(q^2) Z^2\, \left[\frac{1}{E_r} - \frac{1}{E_\nu} - \frac{m_4^2}{2E_\nu E_rm_N} \left(1- \frac{E_r}{2E_\nu} + \frac{m_N}{2E_\nu}\right) + \frac{m_4^4(E_r-m_N)}{8E_\nu^2E_r^2m_N^2} \right]\ . \end{equation} \end{widetext} Here $\alpha_\text{EM}$ refers to the electromagnetic fine structure constant and $\mu_{\nu,\text{Eff}}$ to a dimensionless [normalized to the Bohr magneton, $\mu_B=e/(2m_e)$] parameter space function that involves combinations of the entries of the $\lambda$ matrix weighted by neutrino mixing angles and possible CP phases (for details see \cite{AristizabalSierra:2021fuc,Miranda:2021kre}). Note that in the limit $m_4\to 0$, Eq.~(\ref{eq:dipole_portal_xsec}) matches the ``standard'' neutrino magnetic moment cross section \cite{Vogel:1989iv}. \section{The data, the recoil spectrum and the statistical analysis} \label{sec:parameter_space_analysis} In this section we present a brief discussion of the data reported by the Dresden-II reactor experiment, provide the technical tools that allow the calculation of the CE$\nu$NS signal (within the SM and with new physics) and present our statistical analysis along with our results for the scenarios discussed in Sec. \ref{sec:ngi-lm}. 
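The kinematic content of Eqs.~(\ref{eq:kinematic_constraint}) and (\ref{eq:dipole_portal_xsec}) can be checked with a few lines of code (an illustration only; energies in keV, overall prefactor omitted):

```python
import math

def dipole_bracket(e_r, e_nu, m_n, m4):
    """Square bracket of the dipole-portal cross section; reduces to
    1/E_r - 1/E_nu (standard magnetic moment) when m4 = 0."""
    return (1.0 / e_r - 1.0 / e_nu
            - m4**2 / (2.0 * e_nu * e_r * m_n)
            * (1.0 - e_r / (2.0 * e_nu) + m_n / (2.0 * e_nu))
            + m4**4 * (e_r - m_n) / (8.0 * e_nu**2 * e_r**2 * m_n**2))

def m4_max(e_r, e_nu, m_n):
    """Largest sterile mass kinematically allowed for a given recoil."""
    return math.sqrt(2.0 * m_n * e_r * (math.sqrt(2.0 / (m_n * e_r)) * e_nu - 1.0))
```

A nonzero $m_4$ strictly lowers the bracket, and the kinematic bound is maximized at $E_r = E_\nu^2/(2m_N)$, where it equals $E_\nu$.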
\subsection{Data and recoil spectra} \label{sec:data_plus_spectra} The Dresden-II reactor experiment consists of a p-type point contact (PPC) 2.924 kg ultra-low noise and low energy threshold (0.2 keV$_{\rm ee}$) germanium detector located at $\sim 10\,$m from the 2.96 GW Dresden-II boiling water reactor (BWR): The NCC-1701 detector \cite{Colaresi:2021kus}. The proximity of the reactor to the detector, along with its high power, implies a high flux of electron anti-neutrinos. The data accumulated during 96.4 days of effective exposure with the reactor operating at nominal power (Rx-ON) hint at the first-ever observation of CE$\nu$NS using reactor neutrinos, as recently reported in Ref. \cite{Colaresi:2022obx}. The residual difference between the full spectrum and the best-fit background components (the suggested CE$\nu$NS signal) spans the measured energy range $E_M\in [0.2,0.4]\,$keV$_\text{ee}$ and involves 20 equally spaced data bins ($0.01\,$keV$_\text{ee}$ wide), as shown in Fig.~\ref{fig:data-points}. The CE$\nu$NS differential recoil energy spectrum follows from a convolution of the electron anti-neutrino flux and the CE$\nu$NS cross section, namely \begin{equation} \label{eq:recoil-spectrum} \frac{dR}{dE_r}=N_T\int_{E_\nu^\text{min}}^{E_\nu^\text{max}} \,\frac{d\Phi_{\overline{\nu}_e}} {dE_\nu}\frac{d\sigma_\text{CE$\nu$NS}}{dE_r}\,dE_\nu\ . \end{equation} The number of germanium nuclei in the detector is given by $N_T=m_\text{det}\,N_A/m_\text{Ge}$, with $N_A$ the Avogadro number, $m_\text{Ge}$ the germanium molar mass and $m_\text{det}=2.924\,$kg. The integration limits are given by $E_\nu^\text{min}=\sqrt{m_N E_r/2}$, with $E_r$ being the recoil energy, and $E_\nu^\text{max}$ the kinematic value determined by the electron anti-neutrino flux. We take the values of the atomic number and nuclear mass for $^{72}\text{Ge}$.
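The fixed inputs entering Eq.~(\ref{eq:recoil-spectrum}) can be tabulated in a few lines (a sketch; the $^{72}$Ge molar mass used here is an illustrative value):

```python
import math

N_AVOGADRO = 6.02214076e23   # mol^-1
M_GE_MOLAR = 71.922          # g/mol, 72Ge (illustrative value)
M_N_KEV = 71.922 * 931494.0  # 72Ge nuclear mass in keV

def n_targets(m_det_grams):
    """N_T = m_det N_A / m_Ge, the number of Ge nuclei in the detector."""
    return m_det_grams * N_AVOGADRO / M_GE_MOLAR

def e_nu_min(e_r_kev):
    """Minimal neutrino energy producing recoil E_r: E_nu^min = sqrt(m_N E_r / 2)."""
    return math.sqrt(M_N_KEV * e_r_kev / 2.0)
```

For the 2.924 kg detector this gives $N_T \simeq 2.45\times 10^{25}$ nuclei, and a 1 keV$_\text{nr}$ recoil requires $E_\nu \gtrsim 5.8$ MeV, illustrating why only the high-energy tail of the reactor flux contributes to the signal.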
For neutrino energies below 2 MeV we use the anti-neutrino spectral function from Kopeikin \cite{Kopeikin:2012zz}, while for energies above that value we consider Mueller et al.~\cite{Mueller:2011nm}. For flux normalization we use $\mathcal{N}=4.8\times 10^{13}\overline{\nu}_e/\text{cm}^2/\text{sec}$, as given in Ref.~\cite{Colaresi:2022obx}. The differential anti-neutrino flux in Eq.~(\ref{eq:recoil-spectrum}) therefore involves the spectral function and the normalization. The CE$\nu$NS differential cross section is dictated by Eq.~(\ref{eq:xsec_CEvNS}), but can also involve contributions from NGI couplings or the sterile neutrino dipole portal discussed in Secs. \ref{sec:renormalizable-NGI} and \ref{sec:nu-F-magnetic-transition}. \begin{figure*} \centering \includegraphics[scale=0.43]{new_plots_Gaussian/chi2_sw2_Lindhard_margin_qk_iron_Gaussian_may22.pdf} \caption{$\Delta\chi^2$ profiles for $\sin^2\theta_W$ for the two QFs considered [modified Lindhard QF, Eq.~(\ref{eq:QF_Linhard_modified}), and iron-filter QF as given in the ancillary files in Ref. \cite{Colaresi:2022obx}]. For the modified Lindhard QF the result follows after marginalization over $k$ and $q$ [see Eq.~(\ref{eq:QF_Linhard_modified})]. } \label{fig:weak-mixin-angle-chiSq-single-parameter} \end{figure*} For detectors relying on ionization (the same applies to scintillation), such as the NCC-1701, only a fraction of the nuclear recoil energy is available in a readable format. The characterization of that fraction is given by the QF, $Q$, defined as the ratio of the ionization energy produced by a nuclear recoil ($E_I$) to that produced by an electron recoil of the same kinetic energy ($E_r$). Quantitatively, this means that the ionization energy expected from a given recoil energy is given by $E_I=Q\,E_r$.
With the aid of the QF, the differential ionization spectrum can then be written according to \begin{equation} \label{eq:ionization-spectrum} \frac{dR}{dE_I}=\frac{dR}{dE_r}\frac{dE_r}{dE_I}= \frac{dR}{dE_r}\left(\frac{1}{Q}-\frac{E_I}{Q^2}\frac{dQ}{dE_I}\right)\ . \end{equation} For sufficiently high-recoil energy regimes (above 5 keV$_\text{nr}$ or so) the QF is well described by the Lindhard model \cite{osti_4701226}. However, its validity is questionable in any material for sub-keV energies, as pointed out in Ref. \cite{osti_4153115}. For germanium, recent measurements of its QF using recoils from gamma emission following thermal neutron capture, photo-neutron sources, and a monochromatic filtered neutron beam have shown substantial deviations from the Lindhard model expectations at recoil energies below $\sim 1.3\,\text{keV}_\text{nr}$ \cite{Collar:2021fcl}. In the context of DM direct detection searches, Ref. \cite{Sorensen:2014sla} has addressed this issue by providing a slight modification of the Lindhard QF \begin{equation} \label{eq:QF_Linhard_modified} Q(E_r)=\frac{k\,g(\epsilon)}{1 + k\,g(\epsilon)}-\frac{q}{\epsilon}\ , \end{equation} where the first term is the standard Lindhard QF with $g(\epsilon)=3\epsilon^{0.15}+0.7\epsilon^{0.6}+\epsilon$ and $\epsilon=11.5 Z^{-7/3}\,E_r$. The second term (the correction) is such that deviations from the standard behavior start to show up at about $0.1\,$keV. In our analyses we adopt this parametrization, and therefore we include $k$ and $q$ as free parameters. In addition to this QF, we also employ the ``iron-filter'' QF reported in the ancillary files provided by Ref. \cite{Colaresi:2022obx}. The CE$\nu$NS ionization differential spectrum in Eq.~(\ref{eq:ionization-spectrum}) has to be smeared by the intrinsic resolution of the detector.
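The modified Lindhard parametrization of Eq.~(\ref{eq:QF_Linhard_modified}) takes only a few lines to evaluate; a sketch for germanium ($Z=32$, $E_r$ in keV, parameter values as in Fig.~\ref{fig:data-points}; not the analysis code):

```python
def quenching_factor(e_r_kev, k=0.157, q=0.0, z=32):
    """Modified Lindhard QF; q = 0 recovers the standard Lindhard model."""
    eps = 11.5 * z ** (-7.0 / 3.0) * e_r_kev
    g = 3.0 * eps**0.15 + 0.7 * eps**0.6 + eps
    return k * g / (1.0 + k * g) - q / eps

def ionization_energy(e_r_kev, **kwargs):
    """E_I = Q(E_r) * E_r."""
    return quenching_factor(e_r_kev, **kwargs) * e_r_kev
```

At $E_r = 1\,$keV$_\text{nr}$ the standard Lindhard piece gives $Q \simeq 0.17$, while a negative $q$ raises the sub-keV QF, as in the figure.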
Following the information of the {\tt README} ancillary file \cite{Colaresi:2022obx}, we take the resolution to be a truncated, energy-dependent Gaussian distribution given by~\cite{Coloma:2022avw} \begin{equation} \label{eq:gaussian} G(E_M,E_I,\sigma)=\frac{2}{1+\text{erf}(E_I/\sqrt{2}/\sigma)} \frac{1}{\sqrt{2\pi}\sigma}e^{-\Delta E^2/2/\sigma^2}\ , \end{equation} with $\Delta E=E_M-E_I$. Here, the energy-dependent Gaussian width $\sigma^2=\sigma_n^2+E_I\,\eta\,F$ involves the intrinsic electronic noise of the detector $\sigma_n=68.5\,$eV (for the 96.4 days of Rx-ON data), the average energy of $e^-$-hole pair formation in germanium $\eta=2.96\,$eV, and the Fano factor, whose value we fix to the average of the range $[0.10\text{--}0.11]$, $F=0.105$. As stressed in the ancillary file, the second term in the Gaussian width measures the dispersion in the number of information carriers ($e^-$-hole pairs). The smearing of the ionization differential spectrum results in the measured differential spectrum \begin{equation} \label{eq:measured_diff_spectrum} \frac{dR}{dE_M}=\int_{\eta}^\infty\,G(E_M,E_I,\sigma)\,\frac{dR}{dE_I}\,dE_I\ , \end{equation} from which the number of events in the $i$th bin is obtained by integration over the measured energy $E_M$ in the interval $[E_M^i-\Delta E_M,E_M^i+\Delta E_M]$ ($\Delta E_M=5\,\text{eV}_\text{ee}$). The integration lower limit is set by the minimum average ionization energy $\eta \sim 3~\mathrm{eV_{ee}}$ required to produce an $e^-$-hole pair in germanium. \subsection{Statistical analysis} \label{sec:statistics} Our analysis is based on the $\chi^2$ function \begin{equation} \label{eq:chiSq_dist} \chi^2 (\vec{S},\alpha) = \sum_i \left[ \frac{N_\text{th}^{i}(\vec{S}, \alpha) - N_\text{meas}^{i}}{\sigma_i} \right]^2 + \left( \frac{\alpha}{\sigma_{\alpha}}\right)^2 \, , \end{equation} where $N_\text{th}^{i}$ and $N_\text{meas}^{i}$ are the theoretical and measured number of events, respectively, in the $i$th energy bin.
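As a consistency check of the resolution function in Eq.~(\ref{eq:gaussian}) above, the truncation prefactor guarantees unit normalization over non-negative measured energies. A minimal Python sketch (the test energy and trapezoidal integration grid are arbitrary choices):

```python
import math

SIGMA_N = 0.0685  # keV: intrinsic electronic noise (68.5 eV, Rx-ON)
ETA = 0.00296     # keV: average e-hole pair creation energy in Ge
FANO = 0.105      # Fano factor

def sigma_res(E_I):
    """Energy-dependent Gaussian width, sigma^2 = sigma_n^2 + E_I * eta * F."""
    return math.sqrt(SIGMA_N ** 2 + E_I * ETA * FANO)

def G(E_M, E_I):
    """Truncated Gaussian resolution function of Eq. (gaussian)."""
    s = sigma_res(E_I)
    norm = 2.0 / (1.0 + math.erf(E_I / (math.sqrt(2.0) * s)))
    return norm * math.exp(-(E_M - E_I) ** 2 / (2.0 * s ** 2)) / (math.sqrt(2.0 * math.pi) * s)

# trapezoidal check that G is normalized on E_M >= 0
E_I = 0.2  # keV, an arbitrary test point
step = 1e-4
n = int((E_I + 10.0 * sigma_res(E_I)) / step)
xs = [i * step for i in range(n + 1)]
integral = sum(0.5 * (G(a, E_I) + G(b, E_I)) * step for a, b in zip(xs, xs[1:]))
```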
Note that in the definition of the $\chi^2$ function we are assuming the data to follow a Gaussian distribution. Although a Poisson distribution would be a better choice given the size of the dataset, both statistical and systematic errors (which have a bigger impact on the results) can be readily included under the Gaussian assumption. Here, $\sigma_i$ is the uncertainty of the $i$th measurement, which includes both systematic and statistical contributions, $\vec{S}$ represents a set of new physics parameters, and $\alpha$ is a nuisance parameter which accounts for the flux normalization uncertainty, for which we consider $\sigma_{\alpha} = 5\%$. The theoretical number of events is \begin{equation} \label{eq:theoretical_number_events} N_\text{th}(\vec{S}, \alpha)= (1+\alpha) N_\text{CEvNS}(\vec{S})\ , \end{equation} which, of course, includes the SM piece in addition to the new physics contribution. Equipped with the tools discussed in Sec. \ref{sec:parameter_space_analysis} along with the $\chi^2$ function in Eq.~(\ref{eq:chiSq_dist}), we begin our discussion by focusing on the implications for the weak mixing angle. Figure~\ref{fig:weak-mixin-angle-chiSq-single-parameter} shows the $\Delta\chi^2$ distributions in terms of $\sin^2\theta_W$ for the two QFs considered in the analysis. In the case of the modified Lindhard QF, our result is obtained by marginalizing over the parameters $k$ and $q$ [see Eq.~(\ref{eq:QF_Linhard_modified})]. Notice that the $\Delta \chi^2$ profile for the Lindhard QF is rather flat at the bottom, making its best-fit value not very statistically meaningful. Specifically, the Lindhard parameters are allowed to float in the ranges\footnote{Note that $k=0.27$ corresponds to the limit set by CONUS~\cite{Bonet:2020awv}.} $0.14 \leq k \leq 0.27$ and $-40 \leq q/10^{-5} \leq 0$. As expected, a strong dependence on the QF is observed.
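The statistical machinery of Eqs.~(\ref{eq:chiSq_dist}) and~(\ref{eq:theoretical_number_events}) can be sketched in a few lines of Python; the bin counts below are hypothetical placeholders (not the Dresden-II data), and the grid scan over $\alpha$ is a simple stand-in for a proper minimization:

```python
# Hypothetical per-bin counts, purely for illustration (NOT the Dresden-II data)
N_meas = [120.0, 95.0, 80.0]
N_cevns = [110.0, 100.0, 78.0]   # predicted CEvNS events per bin
sigma = [12.0, 10.0, 9.0]        # per-bin uncertainties (stat + syst)
SIGMA_ALPHA = 0.05               # 5% flux-normalization uncertainty

def chi2(alpha):
    """Gaussian chi-square of Eq. (chiSq_dist) with N_th = (1 + alpha) N_CEvNS."""
    pulls = sum(((1.0 + alpha) * t - m) ** 2 / s ** 2
                for t, m, s in zip(N_cevns, N_meas, sigma))
    return pulls + (alpha / SIGMA_ALPHA) ** 2

# crude grid profiling of the nuisance parameter
chi2_profiled = min(chi2(i * 1e-4) for i in range(-1000, 1001))
```

Profiling $\alpha$ can only lower the $\chi^2$ relative to the fixed-normalization value, at the price of the pull term $(\alpha/\sigma_\alpha)^2$.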
The best-fit values differ by about $\sim 6.5\%$, with the iron-filter QF favoring a larger $\sin^2\theta_W$ value. The $1\sigma$ ranges read \begin{alignat}{2} \label{weak_mix_angle_1_sigma} \text{Modified Lindhard QF:}&\quad&\sin^2\theta_W&=0.178^{+0.280}_{-0.090}\, \nonumber\\ \text{Iron filter QF:}&\quad&\sin^2\theta_W&=0.190^{+0.039}_{-0.046}\ , \end{alignat} illustrating the disparity of the values obtained with different QF models. Note also that both values differ substantially from the SM RGE expectation. In particular, the best-fit result from the iron-filter QF analysis is compatible with the SM RGE prediction at $80.7\%$ C.L., whereas the result from the modified Lindhard QF is in agreement at $1\sigma$, given the spread of its $\Delta \chi^2$ distribution. From these results one can conclude that with the current data set and the lack of a better knowledge of the germanium QF, a robust determination of the weak mixing angle is not yet possible. Although featuring a moderate disparity, these results can be understood as a first determination of the weak mixing angle at low energies using CE$\nu$NS data from reactor anti-neutrinos. They can be compared with the values obtained from COHERENT CsI and LAr data \cite{Akimov:2017ade,COHERENT:2020iec} and other dedicated experiments that include atomic parity violation (APV) \cite{Wood:1997zq,Derevianko:2010kz}, the proton weak charge from Cs transitions ($Q_\text{weak}$) \cite{Androic:2018kni}, M{\o}ller scattering (E158) \cite{Anthony:2005pm}, parity violation in deep inelastic scattering (PVDIS) \cite{Wang:2014bba} and neutrino-nucleus scattering (NuTeV) \cite{Zeller:2001hh}. A summary of these results is displayed in Fig.~\ref{fig:RGE_evolution}, which also shows the RGE running calculated in the $\overline{\text{MS}}$ renormalization scheme \cite{Erler:2004in}.
The value of the weak mixing angle at the $1\sigma$ level extracted from the best fit in Fig.~\ref{fig:weak-mixin-angle-chiSq-single-parameter} is shown. For the renormalization scale at which the measurement applies, we have adopted a rather simple procedure. We have translated the ionization energy range into recoil energy with the aid of the QF. With the values obtained for $E_r^\text{min}$ and $E_r^\text{max}$ we have then calculated the momentum transfer by using the kinematic relation $q^2=2m_NE_r$. This result corresponds to the first CE$\nu$NS-based determination of $\sin^2\theta_W$ with reactor data at $\mu \sim 10\,$MeV. With further data, and more importantly a better understanding of the germanium QF, this result is expected to improve significantly in the future. \begin{figure*} \centering \includegraphics[scale=0.45]{new_plots_Gaussian/sw2_mu_Gaussian_may22.pdf} \caption{Weak mixing angle RGE running in the SM, calculated in the $\overline{\text{MS}}$ renormalization scheme as obtained in Ref. \cite{Erler:2004in}, along with measurements at different renormalization scales \cite{Akimov:2017ade,COHERENT:2020iec,Wood:1997zq,Derevianko:2010kz,Androic:2018kni,Anthony:2005pm,Wang:2014bba,Zeller:2001hh}. The $1\sigma$ result obtained using the Dresden-II reactor data is shown assuming the modified Lindhard and iron-filter QF (see text for further details).} \label{fig:RGE_evolution} \end{figure*} We now move on to the case of NGI. For this analysis we assume universal quark couplings and switch off the pseudovector couplings in the vector case (those controlled by $\xi_V$) as well as the pseudoscalar couplings in the scalar case (those controlled by $\xi_S$). These simplifications reduce the analysis to pure vector and pure scalar interactions, controlled by the couplings $g_V^2 = g_V^\mathrm{q} f_V$ and $g_S^2 = g_S^\mathrm{q} f_S$ (and the mediator masses), as investigated in Refs. \cite{Liao:2022hno,Coloma:2022avw}.
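The renormalization-scale estimate described above, based on $q^2=2m_NE_r$, can be checked with a back-of-the-envelope computation; the germanium nuclear mass below is approximated as $A\times 1\,\mathrm{amu}$ with $A=72.64$, the average atomic weight of natural germanium:

```python
import math

M_GE_MEV = 72.64 * 931.494  # average Ge nuclear mass in MeV (assumed: A_Ge * 1 amu)

def momentum_transfer_MeV(E_r_keV):
    """q = sqrt(2 m_N E_r), converting the recoil energy from keV to MeV."""
    return math.sqrt(2.0 * M_GE_MEV * E_r_keV * 1e-3)
```

Recoil energies of $0.2$--$2\,\text{keV}_\text{nr}$ indeed map onto $q\sim 5$--$17\,$MeV, consistent with $\mu\sim 10\,$MeV.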
For the tensor case no assumption on different contributions is required. The cross section is determined by $\xi_T$ and, under the assumption of universal quark couplings, it is eventually controlled by $g_T^2 = g_T^\mathrm{q} f_T$. Again, for the statistical analysis using the modified Lindhard QF we also vary $q$ and $k$. The analysis in this case is therefore a four-parameter problem, while for the iron-filter QF only two parameters matter, i.e. the new mediator mass and coupling. \begin{figure*} \centering \includegraphics[scale=0.4]{new_plots_Gaussian/vector_Lindhard_marginalized_qk_new_Gaussian_may22.pdf} \includegraphics[scale=0.4]{new_plots_Gaussian/vector_Fe-QF_new_Gaussian_may22.pdf} \includegraphics[scale=0.4]{new_plots_Gaussian/scalar_Lindhard_marginalized_qk_new_Gaussian_may22.pdf} \includegraphics[scale=0.4]{new_plots_Gaussian/scalar_iron_filter_new_Gaussian_may22.pdf} \includegraphics[scale=0.4]{new_plots_Gaussian/tensor_Lindhard_marginalized_qk_new_Gaussian_may22.pdf} \includegraphics[scale=0.4]{new_plots_Gaussian/tensor_iron_filter_new_Gaussian_may22.pdf} \caption{Constraints on vector NGI (upper row), scalar NGI (central row) and tensor NGI (lower row) in the coupling-mass plane, obtained using the modified Lindhard QF (left column) and the iron-filter QF (right column). In all panels, purple regions indicate exclusion limits. Where present, dark blue stars specify the best fit solutions. Moreover, constraints from CONUS~\cite{CONUS:2021dwh}, CONNIE~\cite{CONNIE:2019xid} and COHERENT CsI+LAr~\cite{Corona:2022wlb} are shown for comparison. Additionally, the black dashed vertical line marks the transition from the light to the effective regime.} \label{fig:NGI} \end{figure*} Our extracted sensitivities are illustrated in Fig.~\ref{fig:NGI} at $1,2,3~\sigma$ (assuming two d.o.f., i.e. $\Delta \chi^2 = 2.3, 6.18, 11.83$ respectively).
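The quoted $\Delta\chi^2$ thresholds follow from the $\chi^2$ distribution with two degrees of freedom, whose CDF has the closed form $1-e^{-x/2}$; a short Python check reproduces them:

```python
import math

def delta_chi2_2dof(n_sigma):
    """Delta chi^2 threshold for a 2-d.o.f. region at n-sigma significance.

    For 2 d.o.f. the chi^2 CDF is 1 - exp(-x/2), so inverting it gives
    -2 ln(1 - p), with p = erf(n/sqrt(2)) the two-sided Gaussian coverage.
    """
    p = math.erf(n_sigma / math.sqrt(2.0))
    return -2.0 * math.log(1.0 - p)

thresholds = [round(delta_chi2_2dof(n), 2) for n in (1, 2, 3)]
```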
The upper row stands for the vector case, the middle row for the scalar and the bottom row for the tensor, while left (right) panels are obtained using the modified Lindhard (iron-filter) QF. As can be seen, at the $1\sigma$ level and above, large portions of parameter space are ruled out, disfavoring couplings as low as $7.5\times 10^{-6}$ for $m_V \lesssim 100$ keV. At the $1$ and $2\sigma$ level, two ``islands'' in the region of noneffective interactions ($m_V\gtrsim 10\,$MeV) remain allowed as well. At the $3\sigma$ level these spots are gone and the constraint becomes a little less stringent. Turning to the analysis done assuming the iron-filter QF, we find that about the same regions in parameter space are excluded, though the most stringent limit is a little more pronounced in this case ($4\times 10^{-6}$ for $m_V\lesssim 100\,$keV). The parameter space ``islands'' found with the modified Lindhard QF are present in this case as well, but cover a somewhat wider area. At the $90\%$ C.L., constraints on the vector NGI scenario amount to $g_V\lesssim 8\times 10^{-6}$ (Lindhard QF) and $g_V\lesssim 4.5\times 10^{-6}$ (iron-filter QF) for vector masses up to 100 keV. This limit should be compared with results from COHERENT CsI and LAr, for which Refs. \cite{Liao:2017uzy,Miranda:2020tif} found $g_V\lesssim 6\times 10^{-5}$ at the $90\%\,$C.L. We can then conclude that the Dresden-II data largely improve limits for vector interactions in the low vector mass window. This result can be attributed to the sub-keV recoil energy threshold the experiment operates with. In the scalar case the situation is as follows. The modified Lindhard QF and the scalar hypothesis tend to produce smaller deviations from the data. This can be readily understood from the left graph in the bottom row of Fig.~\ref{fig:data-points}. At low scintillation energy the event rate tends to increase, but slightly less than in the vector case, a behavior somewhat expected, see e.g. Ref.
\cite{AristizabalSierra:2019ykk}. While the scalar coupling contributes to the CE$\nu$NS cross section quadratically, the vector does it linearly because of its interference with the SM contribution. As a consequence, at the $1\sigma$ level and above, limits are slightly less stringent than in the vector case. In contrast to the vector case, the parameter-space ``islands'' are gone. Their disappearance can be traced back to the fact that these interactions do not sizably interfere with the SM term. Limits for scalar masses below $\sim 1\,$MeV at the $90\%\,$C.L. amount to $g_S\lesssim 3 \times 10^{-6}$ in the Lindhard QF case. For COHERENT CsI and LAr, Refs. \cite{Papoulias:2017qdn,Miranda:2020tif} found $g_S\lesssim 3.0\times 10^{-5}$ at the $90\%\,$C.L., implying an order-of-magnitude improvement on the limit. For the iron-filter QF one finds about the same trend, with limits at different statistical significances spreading uniformly. The $90\%\,$C.L. limit at low scalar mass amounts to $g_S\lesssim 1.8 \times 10^{-6}$, for scalar masses up to $100\,$keV. \begin{figure*} \centering \includegraphics[scale=0.4]{new_plots_Gaussian/mag_mom_Lindhard_qk_new_Gaussian_may22.pdf} \includegraphics[scale=0.4]{new_plots_Gaussian/mag_mom_iron_filter_new_Gaussian_may22.pdf} \caption{Results of the analysis for the sterile neutrino dipole portal based on two QF hypotheses: Modified Lindhard QF (left graph) and iron-filter QF (right graph). For the former case results follow after marginalization over $q$ and $k$. Shaded areas indicate the excluded regions at different statistical significance levels: $1\sigma$, $2\sigma$ and $3\sigma$ as shown in the graphs. Constraints from CENNS10, TEXONO, COHERENT CsI and XENON1T (see Ref.~\cite{Miranda:2021kre}) are also shown for comparison. } \label{fig:dipole-portal} \end{figure*} Results for the light tensor case resemble those found in the NGI light scalar scenario, though limits are a little weaker.
At the $1\sigma$ level and above, we find $g_T\lesssim 1.0\times 10^{-5}$ (Lindhard QF) and $g_T\lesssim 6.0\times 10^{-6}$ (iron-filter QF) for tensor masses below $\sim 100\,$keV. Although with small differences, among the NGI we have considered, the tensor couplings are the least constrained by the Dresden-II data set. This result is in line with that found when analyzing tensor NGI using CsI COHERENT data \cite{AristizabalSierra:2018eqm}. To our knowledge, limits on light tensor interactions using COHERENT CsI and LAr data have been discussed only in \cite{Demirci:2021zci}. On the other hand, there are some forecasts for searches for this type of interactions at multi-ton DM detectors \cite{Majumdar:2021vdw}. Searches relying on the CE$\nu$NS nuclear recoil channel are expected to reach sensitivities of $g_T\sim 2.0\times 10^{-5}$ for tensor masses up to $\sim 1\,\text{MeV}$ at the $90\%\,$C.L. These numbers lead to the same conclusion as in the scalar case: In the light mediator regime, constraints obtained using Dresden-II data seem to improve upon available sensitivities. As we have already pointed out, given the kinematic threshold of the electron anti-neutrino flux and the small ionization energy of the Dresden-II data set, up-scattering via dipole portal interactions can produce sterile neutrinos with masses up to $\sim 8\,$MeV. In full generality, one can expect constraints on the effective magnetic dipole moment coupling to be less severe as the mass of the up-scattered fermion increases. The kinematic suppression increases with the sterile neutrino mass, and the rate vanishes when the mass hits the production threshold given by Eq.~(\ref{eq:kinematic_constraint}). The $1,2,3~\sigma$ (assuming two d.o.f., i.e. $\Delta \chi^2 = 2.3, 6.18, 11.83$ respectively) results of our analysis for this case are shown in Fig.~\ref{fig:dipole-portal}, with the left (right) graph obtained using the modified Lindhard (iron-filter) QF.
Limits from the different exclusion regions tend to be a little more uniform in terms of the up-scattered sterile state mass, in comparison to the NGI scenarios previously considered in terms of the mediator mass. For the modified Lindhard QF analysis, values of the order of $\mu_{\nu_e}\lesssim 4\times 10^{-10}\,\mu_B$ are excluded for sterile neutrino masses below 100~keV, at the $1\sigma$ level. Assuming instead the iron-filter QF, the constraints are slightly tighter, $\mu_{\nu_e} \lesssim 1 \times 10^{-10}\,\mu_B$ for sterile neutrino masses below 100~keV, at the $1\sigma$ level. A comparison of these values with those obtained using CsI and LAr COHERENT data sets (shown in the graphs), $\mu_{\nu_e} \lesssim (3-4)\times 10^{-9}\,\mu_B$ at the $90\%\, $C.L.~\cite{Miranda:2021kre}, demonstrates that the Dresden-II experimental data improve upon these results (the $90\%$ C.L. upper limits are $(2-8)\times 10^{-10}\,\mu_B$ for $m_4\lesssim 100\,$keV). They are competitive with the constraints implied by XENON1T data (indeed more constraining if one focuses only on the nuclear recoil channel) \cite{Brdar:2020quo}, are stronger than those derived from CENNS10~\cite{Miranda:2021kre} and comparable to (or even tighter than) those following from TEXONO, depending on the QF model used for the analysis, as can be read directly from the graphs. If compared with explanations of the XENON1T electron excess using electron neutrinos \cite{Miranda:2021kre}, one can see that our results are consistent with that possibility\footnote{Explanations of the excess using tau neutrinos are not affected by this result either \cite{Shoemaker:2020kji}.}, regardless of the QF choice. Note that the sterile neutrino dipole portal and NGI results, in contrast to those found for the weak mixing angle, are to a large extent rather insensitive to the QF model. Thus, from that point of view they are more robust.
\section{Conclusions} \label{sec:conclusions}% We have studied the implications of the recently released Dresden-II reactor data on the weak mixing angle and on new physics scenarios sensitive to the low-energy threshold of the experiment, namely NGI generated by light vector, scalar and tensor mediators and the sterile neutrino dipole portal. In order to check for the dependence on the QF, we have performed the analyses considering: (i) a modified Lindhard model, (ii) a QF provided by the collaboration (iron-filter QF). The low scintillation energy threshold provides a determination of the weak mixing angle at a renormalization scale of order $10\,$MeV, a scale at which no determination was available until now. Our result shows a rather pronounced dependence on the QF model, with differences between the best-fit values of about $6\%$. The precision of the determination of $\sin^2\theta_W$ also has a strong dependence on that choice, leading to best-fit values that are compatible with the SM RGE prediction at $80.7\%$ C.L. and $1\sigma$, respectively. A better understanding of the germanium QF is thus required to improve upon the determination of this parameter. However, regardless of these disparities, the Dresden-II data provide the first ever hint of the value of $\sin^2\theta_W$ at $\mu \sim 10$ MeV. Regarding our analysis of NGI with light mediators, also in this case our findings show that at the $1\sigma$ level results depend on the QF model. For vector interactions, results derived using the modified Lindhard QF tend to produce slightly less stringent bounds. In both cases, though, at large vector mediator masses (above $10\,$MeV or so) the $1\sigma$ and $2\sigma$ limits produce two nonoverlapping exclusion regions. At the $3\sigma$ level these regions are gone and constraints are restricted to a single area, where for vector boson masses of the order of 100 keV the coupling is constrained to be below $\sim 10^{-5}$.
The same trend is found for scalar and tensor interactions through light mediators. Regardless of the QF choice, results lead to constraints that amount to about $g_{S}\lesssim 1.0\times 10^{-6}$ and $g_{T}\lesssim 1.0\times 10^{-5}$, respectively, for mediator masses below $\sim 100\,$keV at the $1\sigma$ level. In all scenarios, the derived constraints turn out to improve upon other existing bounds from CE$\nu$NS experiments (COHERENT CsI$+$LAr, CONUS and CONNIE) and even upon predictions made for multi-ton DM detector measurements. Finally, concerning the sterile neutrino dipole portal, we find that the Dresden-II results rule out large regions of parameter space not excluded by COHERENT and CONUS, and are rather competitive with limits from XENON1T data. Actually, they are more stringent if one compares only with XENON1T nuclear recoil data. As for the regions where the sterile neutrino dipole portal can account for the XENON1T electron excess, the Dresden-II data are not yet able to test them. However, with more statistics and a better understanding of the germanium QF the situation might improve in the future. To conclude, the recent evidence for CE$\nu$NS from the Dresden-II reactor experiment provides unique opportunities to investigate physics scenarios sensitive to low-energy thresholds, complementary to other CE$\nu$NS measurements with spallation sources. However, current results show a dependence on the QF model at low recoil energies, thus calling for a deeper understanding of the germanium QF along with more data. \noindent \textbf{Note added in proof}\\ After completion of the manuscript, results from the first science run of the XENONnT collaboration \cite{XENON:2022mpc} have ruled out the electron excess previously reported by the XENON1T collaboration \cite{XENON:2020rca}. \section*{Acknowledgments} The authors are grateful to Pilar Coloma, Anirban Majumdar, Sergio Palomares, J. Collar and I. Katsioulas for useful correspondence.
VDR acknowledges financial support by the SEJI/2020/016 grant funded by Generalitat Valenciana, by the Universitat de Val\`encia through the sub-programme ``ATRACCI\'O DE TALENT 2019'' and by the Spanish grant PID2020-113775GB-I00 (AEI/10.13039/501100011033). The work of DAS is supported by ANID grant ``Fondecyt Regular'' N$^\text{o}$ 1221445.
\section{Introduction} \label{intro} Particle track reconstruction is a common problem in most particle physics experiments. At collider experiments, particles are accelerated to speeds very close to the speed of light and collide in bunches at rates of $\mathcal{O}(10\,\mathrm{MHz})$. In each collision, new particles are created and they scatter in all directions. These particles pass through particle tracking detectors and create signals which are called \textit{hits}. Particle tracking algorithms aim to distinguish these signals and identify the trajectories of the particles. CERN created the Kaggle TrackML challenge in 2018 to invite researchers from all backgrounds to solve the particle track reconstruction problem \cite{trackml}. Later, the simulated dataset created for the challenge became popular among researchers to test and benchmark new ideas in the field. In a few years, the Large Hadron Collider (LHC) at CERN will be upgraded to become the High Luminosity Large Hadron Collider (HL-LHC). This upgrade, which will ramp up the rate of collisions, comes with many challenges, one of which is the particle track reconstruction problem \cite{ref-hilumi}. Although current particle tracking algorithms can manage the present collision rates, they scale worse than polynomially and therefore suffer at higher collision rates. Therefore, the search for faster particle track reconstruction algorithms is very important. There are many initiatives to bring faster solutions to particle track reconstruction. Researchers in the field explore novel methods such as Machine Learning and Quantum Computing. The HepTrkX team proposed a Graph Neural Network (GNN) approach to solve the particle track reconstruction problem using the Kaggle TrackML challenge dataset \cite{heptrkx}. Other researchers also proposed new methods using Quantum Annealing to tackle the challenge \cite{qalg3}.
In this work, we present our updated results on the Quantum Graph Neural Network approach, which combines the novel GNN method of the HepTrkX project with the quantum circuit model \cite{heptrkx-quantum}. \section{The Dataset and Classical Approach} \label{dataset-classical} This work uses the publicly available TrackML dataset \cite{trackml}. The dataset contains the spatial coordinates of each hit created by particles produced in simulated collisions. Each of these collisions is called an event, and the dataset contains 10000 event files. Among these files, only 100 events are used due to restrictions in simulation time. Quantum Circuit simulations on both CPU and GPU use extensive resources and time as the number of qubits increases. In the case of our model, it takes around a week of CPU time to train the model over 100 events for a single epoch. \noindent The TrackML data is created using a simulated detector with a geometry similar to most LHC experiments. The detector has horizontal (barrel) layers near the center of collisions and vertical (endcap) layers outside. The particle beams propagate along the $z$-axis and collide around $z=0$. The TrackML detector layout can be seen in Figure~\ref{fig:trackml-detector}. The particles produced in these collisions scatter in all directions. This work only uses the barrel region hits to simplify the track reconstruction problem, as the ambiguity of the tracks is much higher for endcap hits. \begin{figure}[!htb] \centering \includegraphics[width=0.62\linewidth]{figures/Detector.pdf} \caption{TrackML Detector Layout \cite{trackml}. } \label{fig:trackml-detector} \end{figure} \noindent The HepTrkX GNN approach converts the dataset consisting of spatial coordinates of hits to a graph dataset. A set of loose selection criteria is applied to each hit in order to eliminate illogical connections in the graph.
This way, it is ensured that the preprocessing step takes a very short amount of time, and graphs are not fully connected in order to minimize run time. The selection criteria determined by the HepTrkX team are respected in this work and can be seen in Table~\ref{tab:table1} \cite{heptrkx}. \begin{table}[!htb] \begin{center} \caption{Selection applied to TrackML dataset for preprocessing.} \begin{tabular}{|c|c|} \hline $\left|p_T\right|$ & $>1\,$GeV \\\hline $\Delta\phi$ & $<0.0006$ \\\hline $z_0$ & $<100\,$mm \\\hline $\eta$ & $[-5,5]$ \\\hline \end{tabular} \label{tab:table1} \end{center} \end{table} \noindent The coordinate definitions used to describe the data are as follows. $\phi$ is the angle along the transverse plane ($xy$ plane) and $\left|p_T\right|$ is the magnitude of the momentum along the same plane. $\eta$ is the pseudorapidity, which measures the polar angle with respect to the beam axis (the $z$-axis). \noindent An event contains $\sim 8k$ hits; therefore the model requires a huge amount of memory to load a single event. As the detector geometry is cylindrically symmetric along the $\phi$ and the $z$ direction, the data set is divided into 8 in the $\phi$ and into 2 in the $z$ direction to reduce the size of each event. Therefore, the new graph dataset contains 1600 subgraphs originating from 100 events. \noindent An example subgraph can be seen in Figure~\ref{fig:graph} and the distribution of the subgraphs for each coordinate variable, in Figure~\ref{fig:data}. The distributions of $r$ and $z$ can easily be explained by referring to the geometry of the detector in Figure~\ref{fig:trackml-detector}. The distribution in $\phi$ is uniform as expected since the geometry is symmetric along the transverse plane. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{figures/Initial_graph_colored_combined.pdf} \caption{1 of 16 subgraphs created from a single event.
(a,b) are subgraphs in cylindrical coordinates and (c) is a subgraph in Cartesian coordinates. Red represents Ground Truth, while Blue shows Fake edges created using loose cuts.} \label{fig:graph} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{figures/data_preprocessed.pdf} \caption{Histogram of hits created using 1600 subgraphs in cylindrical coordinates.} \label{fig:data} \end{figure} \section{The Quantum Graph Neural Network Approach} \label{qgnn} Quantum circuits have previously been shown to be able to handle classification tasks \cite{mps,ttn}. Although Quantum Machine Learning has not yet been shown to outperform classical Machine Learning, scientists are exploring new methods to achieve speed-ups for certain tasks. High Energy Physics is no exception to this trend \cite{hep-qml}. In this work, we explore the use of Quantum Circuits to perform track segmentation. \noindent In order to integrate Quantum Computing and GNNs, the Neural Networks of the Edge and Node Networks have been replaced by two Quantum Circuits. The new model flow can be seen in Figure~\ref{fig:network}. There are many Quantum Circuits that perform binary classification tasks in the literature \cite{mps,ttn}. The Tree Tensor Network (TTN) model is chosen among these circuits due to its simplicity. The TTN circuits are implemented using Pennylane along with its Tensorflow interface. Pennylane provides the necessary gradients of the Quantum Circuits during the training step, while Tensorflow is used for optimization and for constructing the model pipeline \cite{tensorflow, pennylane}. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{figures/heptrkx-quantum.pdf} \\ \caption{The Quantum Graph Neural Network model used in this work.} \label{fig:network} \end{figure} \noindent The new hybrid model, which is called the Quantum Graph Neural Network (QGNN), takes a graph as an input.
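As a brief aside on the preprocessing described above, the cylindrical coordinates $(r,\phi,\eta)$ used to partition the events can be obtained from a hit's Cartesian position as in the following sketch (the function name and interface are illustrative, not taken from the HepTrkX code):

```python
import math

def cylindrical_coords(x, y, z):
    """Map a hit's Cartesian position to (r, phi, eta).

    phi is the azimuthal angle in the transverse (xy) plane and
    eta = -ln tan(theta/2) = asinh(z / r) is the pseudorapidity.
    """
    r = math.hypot(x, y)
    return r, math.atan2(y, x), math.asinh(z / r)
```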
The model begins with an Input Layer, a single-layer Neural Network that increases the dimension of the data from 3 (the number of spatial coordinates per hit) to the required size. Then a Quantum Edge Network (QEN) is applied to each edge of the input graph, which outputs an edge feature. The output edge feature is passed into each edge to update its value. This information is used by the Quantum Node Network (QNoN), which is applied to each node of the graph. The output of the QNoN is used to update nodes in the hidden layers. Recursive iterations of Edge and Node Networks allow the information to propagate from lower detector layers to upper layers. Finally, a QEN is applied to obtain the final segment classification. \noindent The use of the TTN inside the QEN and QNoN is almost identical except for the number of qubits used. The QEN is applied to each edge one by one; therefore the input size is 6 (the number of spatial coordinates of 2 nodes) in the case of no hidden dimensions. These coordinates are mapped to $[0,2\pi]$. Then, the $R_y(\theta)$ gate is used to encode the information into each qubit. The encoding can be represented with the simple equation below. \begin{equation} \label{eq1} R_y(\theta)\ket{0} = \cos(\theta/2)\ket{0} + \sin(\theta/2)\ket{1} \end{equation} \noindent The TTN circuit is applied afterwards and a measurement is taken. To perform a simple state tomography on the output qubit, the process is repeated many times and the expectation value is calculated. This value is used as the edge feature. By applying the QEN to each edge, the edge features for all edges are calculated. The same process is repeated for the QNoN case. The only difference between the QEN and the QNoN is the size of the circuit and the inputs being nodes rather than edges. At the final step of the QGNN, edge features for all initial edges are obtained.
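Equation~(\ref{eq1}) can be verified directly without any quantum-computing library: acting with $R_y(\theta)$ on $\ket{0}$ yields real amplitudes $\cos(\theta/2)$ and $\sin(\theta/2)$, so the expectation value of a $Z$ measurement is $\cos\theta$. A minimal Python sketch:

```python
import math

def ry_on_zero(theta):
    """Amplitudes of R_y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, Eq. (eq1)."""
    return math.cos(theta / 2.0), math.sin(theta / 2.0)

def z_expectation(theta):
    """<Z> of the encoded state: |a0|^2 - |a1|^2 = cos(theta)."""
    a0, a1 = ry_on_zero(theta)
    return a0 ** 2 - a1 ** 2
```

In the actual model, repeated measurements estimate exactly such expectation values, which then serve as the edge features.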
These values are used to calculate a weighted binary cross entropy loss, taking into account the ratio of the true edges to the false edges. Then the ADAM optimizer of Tensorflow is used to update the variables of the TTN circuits of both the QEN and the QNoN. In this work, the QGNN model is used with one hidden dimension, with the QEN and QNoN Quantum Circuits shown in Figure~\ref{fig:circuit}. \begin{figure}[!htb] \centering \subfloat{\includegraphics[align=c,width=0.47\linewidth]{figures/TTN8circuit.pdf}} \qquad \subfloat{\includegraphics[align=c,width=0.4\linewidth]{figures/TTN12circuit.pdf}} \caption{Quantum Circuit of the QEN is given on the left. Quantum Circuit of the QNoN is given on the right. The numerical values are from an example data point.} \label{fig:circuit} \end{figure} \noindent 1400 subgraphs are used to train the QGNN and 200 subgraphs are used in the validation set for a single epoch. Three independent experiments were conducted to test different iteration counts. The training results are shown in Figure~\ref{fig:results}. \noindent Figure~\ref{fig:results} shows that the model can achieve an Area under the ROC curve (AUC) of 0.80 and a binary cross entropy loss of 0.5. While 1.0 is the perfect score for AUC, the model seems to perform well considering its simplicity. It was expected that the model performance would improve as the number of iterations increased. However, higher iteration runs performed similarly to $N_{it}=1$ or even worse. There are two main reasons leading to this. First, simple TTN models cannot represent the data as well as the current best models, and therefore they define a less-than-perfect end point for the training. The second issue is related to the vanishing gradient problem. It is known that Recurrent Neural Networks suffer from this problem, and the QGNN is no exception. Therefore, as $N_{it}$ increases, the learning slows down. These two major issues will be investigated in depth in future work.
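A weighted binary cross entropy of the kind described above can be sketched as follows; the exact weighting convention in the HepTrkX code may differ, so this is an illustrative version in which true edges are up-weighted by the false-to-true edge ratio:

```python
import math

def weighted_bce(y_true, y_pred, eps=1e-7):
    """Weighted binary cross entropy: true edges are up-weighted by the
    false-to-true edge ratio to compensate for class imbalance."""
    n_true = sum(y_true)
    w_true = (len(y_true) - n_true) / n_true
    loss = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        loss -= w_true * t * math.log(p) + (1 - t) * math.log(1.0 - p)
    return loss / len(y_true)
```

Since fake edges vastly outnumber true ones after the loose graph construction, this reweighting keeps the optimizer from collapsing to the trivial "all fake" prediction.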
\begin{figure}[!htb] \centering \subfloat[Validation Loss]{\includegraphics[width=0.47\linewidth]{figures/validation_loss.pdf}} \qquad \subfloat[Validation AUC]{\includegraphics[width=0.47\linewidth]{figures/validation_auc.pdf}} \caption{Validation set results for a single epoch of training with different iteration values.} \label{fig:results} \end{figure} \section{Future Work} \label{fw} The work presented here is the first full demonstration of a Quantum Graph Neural Network that can classify track segments with good precision and accuracy. More detailed work is needed to further increase the performance of the model. Future work should include the following: \begin{itemize} \item Extending the size of the hidden dimension, which was limited to 1 in this work. \item A detailed analysis of the vanishing gradient problem. \item Trying different Quantum Circuit architectures to achieve better accuracy. \end{itemize} \section{Conclusions} This work presents a first implementation of a Quantum Graph Neural Network dedicated to solving the track reconstruction problem. Although the presented model uses the simplest form of Quantum Circuits in the literature and the minimum number of qubits that can be used, it performs well. Current results show that a simple QGNN model approaches the performance of sophisticated track reconstruction models, although more work is needed to match them fully. It is also important to note that this work currently only uses simulations and does not consider practical applicability or possible speed-ups. \bigskip \bigskip \begin{center} \begin{large} \textbf{Acknowledgements} \end{large} \end{center} Part of this work was conducted at "\textit{iBanks}", the AI GPU cluster at Caltech. We acknowledge NVIDIA, SuperMicro and the Kavli Foundation for their support of "\textit{iBanks}". This work was partially supported by the Turkish Atomic Energy Authority (TAEK) (Grant No: 2017TAEKCERN-A5.H6.F2.15).
\section{Introduction} Mapping the large scale distribution of matter and its corresponding three-dimensional (3D) velocity field is of great interest. The motivation is threefold. For one, mankind at large, and astronomers in particular, explore their Universe by charting it. Making maps is a first step towards understanding one's surroundings and one's place within a greater environment. Mapping the large scale distribution of matter in the Universe is thus an end unto itself. Furthermore, explaining the large scale velocity and density fields within the context of the $\Lambda$CDM paradigm of structure formation allows for the estimation of cosmological parameters that are inherent to that model XXX. A third motivation is to map these fields at high redshift, thereby allowing for the reconstruction of the initial conditions of the Local Universe (XXX). We focus here on mapping the large scale structure (LSS) of the universe from surveys of peculiar velocities, or rather from surveys of galaxies with measured distances from which the radial component of the peculiar velocities may be extracted. Measuring galaxy distances is a formidable challenge in observational cosmology. All distance measures rely on comparing an observed magnitude with an (inferred or assumed) absolute magnitude. There are many ways to estimate a galaxy's distance. For example, scaling relations tie the size of an elliptical galaxy, or the angular velocity of a disc galaxy, to its intrinsic luminosity. Other methods include resolving stars at the tip of the red giant branch, measuring Supernovae light curves, the pulsations of Cepheid variables or the scale of fluctuations in a galaxy's surface brightness. Each method has errors associated with it, a combination of instrumental, systematic and calibration errors.
As such, compilations of peculiar velocities are difficult to analyze \citep[for a comprehensive review see][]{1995PhR...261..271S} and are usually a patchwork of various surveys and methods observed with different telescopes in different locations on Earth (or in space). The POTENT method was the first attempt to produce continuous maps of the density and velocity fields based on peculiar velocity surveys \citep{1989ApJ...336L...5B}. The main underlying assumption of the POTENT method is that galaxy velocities are drawn from an irrotational, potential flow. No further assumptions were made on the statistical nature of the flow field. Therefore, its ability to handle the shortcomings of such peculiar velocity surveys was limited. Subsequent approaches to the reconstruction of the LSS from peculiar velocities have been formulated within Bayesian frameworks; these include the Wiener filter (WF) and constrained realizations (CRs) methodology \citep{1993ApJ...415L...5G,1999ApJ...520..413Z,Tully2019} as well as Monte Carlo Markov Chain algorithms \citep[MCMC;][]{Lavaux2016,2019MNRAS.488.5438G}. Often, these methods are tested on mock data in order to gauge how reliable they are (XXX). They have been remarkably successful in ``mapping the invisible'' and recovering the underlying cosmic fields. \begin{figure*} \includegraphics[width=.329\textwidth]{Figures/sgw_vs_sgv_cf35-mocks} \includegraphics[width=.329\textwidth]{Figures/sgw_vs_sgu_cf35-mocks} \includegraphics[width=.329\textwidth]{Figures/sgv_vs_sgu_cf35-mocks} \caption{The distance, in units of km/s, of the CF3 data points (magenta) and mock data points (green) projected on the three supergalactic principal planes. Note that the Zone of Avoidance (ZOA) is accurately reproduced in the mock catalogues.} \label{fig:res:obs_coordinates} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/mp_distributions_obs_mocks_b_l_z} \caption{From left to right, the distributions of SGB, SGL and $z$ of the CF3 and mock data points.
Note that in panels (a) and (b) the two curves are on top of one another.} \label{fig:res:obs_dist} \end{figure*} Beyond the issues of noisy, sparse data plagued with inhomogeneous errors, there is one additional conceptual problem common to all surveys of peculiar velocities: peculiar velocity itself is not observed but is a {\it derived} quantity. Given the redshift of, and a distance to, a galaxy, the radial component of its peculiar velocity can be derived. But only the redshift is observed; distances themselves are not directly observed. What is measured is the distance modulus of a galaxy \citep[cf.][]{Tully2016}. Because the error on the measured distance modulus is assumed to be normally distributed, the errors on the inferred distances are lognormally distributed (see equation XXX). This leads to a biased estimate of the distances and peculiar velocities with respect to the actual ones \citep[see][]{2021MNRAS.505.3380H}. Often this bias is treated as yet another manifestation of the Malmquist bias \citep[see][]{1995PhR...261..271S}. Here we refer to it as the lognormal bias. For the WF/CRs reconstruction algorithm the lognormal bias is treated outside of the Bayesian framework in a separate parallel process \citep{2015MNRAS.450.2644S,Tully2014,2016MNRAS.461.4176H,2021MNRAS.505.3380H}. For Monte Carlo methodologies the lognormal bias is treated within a comprehensive algorithm \citep{Lavaux2016,2019MNRAS.488.5438G}. \yh{The Constrained Local UniversE Simulations (CLUES) project focuses on the reconstruction of the LSS of our nearby cosmic neighbourhood from surveys of galactic distances, and thereby peculiar velocities, in particular the Cosmicflows database \citep[cf.][and references therein]{Tully2016}.
Two main methodologies have been employed by the CLUES for the reconstruction of the local LSS and the setting of initial conditions for constrained cosmological simulations: one based on the WF/CRs methodology (Hoffman and Ribak, 1991; Zaroubi et al., 1995) and the other on MCMC and Hamiltonian Monte Carlo (HMC) sampling. In particular, within the WF/CRs framework the issue of the lognormal bias has been handled by two independent algorithms, that of Sorce (2015) and that of Hoffman et al. (2021), and within the Monte Carlo sampling approach by Graziani et al. (2019) and Valade, Hoffman and Libeskind (2022). } Our aim here is to test the quality of two methods that aim to reconstruct the LSS from peculiar velocities: the WF/CRs method with a lognormal bias correction algorithm, known as the Bias Gaussian Correction \citep[BGc;][]{2021MNRAS.505.3380H}, and the HAmiltonian Monte carlo reconstruction of the Local EnvironmenT ({\scshape Hamlet}\ for short) method XXX. These two methods are applied to mock data catalogues drawn from a cosmological simulation designed to imitate the CosmicFlows-3 data \citep{Tully2016}. The quality of the two reconstructions is compared with the so-called ``target'' simulation to gauge their fidelity. The MultiDark Planck 2 dark matter (DM) only simulation \citep{Riebe2013} is used to construct the mock catalogue. This paper is structured as follows. In Section XXX the {\scshape Hamlet}\ and WF/CR reconstruction methods are briefly described. In Section XXX the nature of the input data, its biases and a bias correction scheme are presented. In Section XXX the algorithm for constructing halo catalogues that mock the Cosmicflows data is presented. The results of applying the two reconstruction methods to these mocks, as well as a comparison between them, are presented in Section XXX. A summary and conclusions are offered in Section XXX.
\section{Methods} \subsection{Mock Catalogue construction} We wish to identify a set of dark matter (DM) haloes from a cosmological simulation that is statistically as close to the grouped CF3 catalogue as possible, since it is this observational data to which the methods tested here will eventually be applied. To do so the MultiDark Planck 2 simulation\footnote{The MultiDark simulations are publicly available: www.cosmosim.org} \citep[MDPL2,][]{Riebe2013} is used, an $N$-body run of $N=3840^3$ particles in a periodic box of side length $L= 1 \,\,h^{-1}\,\Gpc$. The cosmological parameters of the simulation are drawn from the 2nd Planck data release XXX which suggests the ``concordance cosmology'', i.e. a flat $\Lambda$CDM Universe ($\Omega_{\rm m} = 0.307,~\Omega_{\rm b} = 0.048,~ \Omega_{\Lambda}= 0.693,~ \sigma_{8} = 0.8228,~ n_{s} = 0.96$ and a dimensionless Hubble parameter $h= 0.678$, where $H_{0}=100~h~$km/s/Mpc). A Friends-Of-Friends (FOF) algorithm with a linking length of 0.2 times the mean inter-particle separation is used to identify haloes, whose mass is roughly $M_{200}$. It is appropriate to use a FOF halo finder in this case since it is the {\it grouped} CF3 catalogue which is being mocked. Grouping the members of a virialized object together averages out nonlinear motions, implying that the (e.g.) cluster's peculiar velocity is a better tracer of the flow field. Note that the MDPL2 box size of $L= 1 \,\,h^{-1}\,\Gpc$ is large enough to embed the CF3 catalogue, whose effective depth is roughly $160\,\,h^{-1}\,\Mpc$. An ``observer'' is associated with a randomly selected halo of mass in the range of $[0.9\,\text{--}\,2.0]\times 10^{12}h^{-1}M_{\sun}$.
The simulation is then re-centered on this halo and the simulation's coordinate axes are arbitrarily labelled as Supergalactic (SGX, SGY, SGZ). Furthermore, a mock ``sky projection'' is made such that each halo is given a sky position (SGL, SGB). The (proper) distance $d$ of each halo from the center is used to compute a cosmological redshift $\bar{z}$ by numerically integrating: \begin{equation} \label{eq:intro:dl2zcos} d=c H_0^{-1} \int_0^{\bar{z}}{1\over\sqrt{\Omega_m(1+\mathcal{Z})^3+(1-\Omega_m)}}\d{\mathcal{Z}} \end{equation} where $\Omega_m$ is the cosmological matter density parameter. The (proper) distance $d$ is also turned into a luminosity distance $\subX{d}{L}$ by \begin{equation} \label{eq:intro:dl2d_vr} \subX{d}{L}=d \times (1 + \bar{z}) \end{equation} which is used to compute the halo's distance modulus: \begin{equation} \label{eq:intro:mu2dl} \mu = 5\log{\bigg(\frac{\subX{d}{L}}{\rm Mpc}\bigg)}+25 \end{equation} The radial peculiar velocity $v_{r}$ is combined with the cosmological redshift to obtain the full redshift \citep{TamaraDavis} \begin{equation} z+1=(\bar{z}+1)\bigg(\frac{v_{r}}{c}+1\bigg) \end{equation} At this point each halo's position relative to the observer has been transformed into two ``observable'' quantities: 1. a redshift $z$ (which includes a contribution from the radial peculiar velocity $v_{r}$) and 2. a distance modulus $\mu$. The mock catalogue aims to have the same distributions of $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$ as the $N_{CF3}$ data points in the CF3 data. This is accomplished with a Monte Carlo style algorithm in the following way: $N_{CF3}$ haloes are drawn at random from the simulation, within a sphere of radius around 300 Mpc$/h$. A merit is assigned to this initial set of haloes by computing the absolute difference between its distributions (histograms) of $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$ and those of CF3.
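A minimal sketch of such a merit function, assuming simple fixed-bin normalized histograms (the bin count and normalization are our assumptions; the paper does not specify them):

```python
import numpy as np

def merit(mock, cf3, nbins=20):
    """Sum of absolute differences between the normalized histograms of
    SGB, SGL and z of the current halo selection and of the CF3 data
    (lower is better).  `mock` and `cf3` are dicts of 1-D arrays."""
    total = 0.0
    for key in ("SGB", "SGL", "z"):
        lo = min(mock[key].min(), cf3[key].min())
        hi = max(mock[key].max(), cf3[key].max())
        h_mock, _ = np.histogram(mock[key], bins=nbins,
                                 range=(lo, hi), density=True)
        h_cf3, _ = np.histogram(cf3[key], bins=nbins,
                                range=(lo, hi), density=True)
        total += np.abs(h_mock - h_cf3).sum()
    return total
```

A lower merit means the selection's distributions are closer to CF3's; identical samples give a merit of exactly zero.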
Iterations proceed by adding and subtracting one halo at a time and evaluating the merit of the new $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$ compared to CF3's. If a new potential halo improves the merit of the distributions, it is kept; otherwise it is rejected. In this way, the process converges, halo by halo, towards reproducing CF3's distributions of $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$. Once the process has converged and a suitable mock catalogue has been constructed, the observational errors from the CF3 catalogue can be added to the mock. Namely, the redshift and distance modulus of each CF3 data point are given as $z\pm\sigma_{z}$ and $\mu\pm\sigma_{\mu}$, where $\sigma_{z}$ and $\sigma_{\mu}$ denote the errors associated with each measurement. The error on $z$ is assumed to be entirely due to the precision of the instrument and taken as a constant $c\sigma_{z}=50$ km/s for every data point, while $\sigma_{\mu}$ depends on which standard candle is used and may range from 5\% for Supernovae to 20\% for scaling relations. For each CF3 data point random numbers are drawn from Gaussians of width $c\sigma_{z} = 50$ km/s and $\sigma_{\mu}$, respectively. These are then added, on a halo by halo basis, to the values of $z$ and $\mu$ assigned to the haloes that constitute the mock. The final mock catalogue is a list of $N_{CF3}$ halos, each assigned a $z$ and a $\mu$ from the equations above that have been perturbed according to the distribution of errors given by CF3. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/mp_vr_vs_d_cf3_mocks.jpeg} \caption{The radial peculiar velocity of a galaxy as a function of its distance, shown for the grouped CF3 data (magenta) as well as the mock catalogue (green). The lognormal bias is evident here in the lack of symmetry about $v_{r}=0$; beyond around 70 Mpc$/h$ the universe appears to be systematically collapsing, in a so-called ``breathing mode''.
Bottom panel: after application of the BGc correction, symmetry is re-established.} \label{fig:vrd} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/mp_distributions_vr_d_z.pdf} \caption{From left to right: (a) the distributions of the radial peculiar velocity, (b) the distance and (c) the redshift for the target (black solid lines), the BGc/WF (red dotted) and {\scshape Hamlet}\ (blue dash-dotted) reconstruction methods. The Ex/WF method is shown in green dashed. } \label{fig:res:mockspost_dist} \end{figure*} The fidelity of the mock to the CF3 catalogue is shown in \cref{fig:res:obs_coordinates,fig:res:obs_dist}. \cref{fig:res:obs_coordinates} shows the three supergalactic projections with the mock data points in green and the CF3 constraints in magenta. The reader will note that the two sets of points are overplotted, occasionally obscuring the CF3 population. The Zone of Avoidance (ZOA) and the visual distribution of the catalogues are well recovered. Quantitatively this is shown by the distributions of $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$ themselves, presented in \cref{fig:res:obs_dist}a, b and c, respectively. The distributions of SGB, SGL and $cz$ for the mock galaxies and the CF3 constraints are largely indistinguishable from each other. \cref{fig:res:obs_dist}c shows that within $\sim20$ Mpc$/h$ the number of CF3 constraints is much greater than in the mock catalogues presented here. This is because there are too many CF3 constraints in this region with respect to the resolution of our simulation. In principle the original unperturbed $d$, $d_{L}$, $\bar{z}$ and $v_{r}$ for each halo in the mock can be ``forgotten'' and new values can be computed using the values of $z$ and $\mu$ that include the observational errors. These new values should exhibit similar biases to the observational data by construction.
This is seen in \cref{fig:vrd}, where the radial velocity as a function of distance is plotted. The diagonal cut in this plot is indicative of the lognormal bias discussed below in section XXX. Namely, at a given distance there is an unequal number of galaxies moving towards and away from the observer, making it appear that the universe is contracting. The scheme employed to correct this bias is presented below and in XXX. \subsection{The Lognormal bias and the Bias Gaussian correction (BGc)} A brief note on the so-called lognormal bias is in order. One of the main purposes of constructing such a detailed mock catalogue as described above is to ensure that the lognormal bias is reproduced, thereby allowing us to gauge the ability of the two reconstruction methods to handle this bias. Much hand-wringing and literature has been devoted to the handling of biases in peculiar velocity surveys, and we refer the reader to \yh{\citet{1995PhR...261..271S}} for a comprehensive explanation. Here we briefly explain what the lognormal bias is and how it is handled in the context of the BGc and WF/CR as proposed by \cite{2021MNRAS.505.3380H}. We refer the reader to that work for a comprehensive description of the \yh{lognormal bias and its correction by the BGc algorithm}. The reader will note that a Gaussian error on the distance modulus transforms into a lognormal error on the luminosity distance (e.g. the inverse of \yh{Eq.~}\ref{eq:intro:mu2dl}). In other words, if the same galaxy is observed many times, the mean of the inferred distances will not coincide with its actual distance. This bias changes the spatial distribution of the galaxies as well as their inferred peculiar velocities. The lognormal bias can be seen in \cref{fig:vrd}, where the CF3 and mock catalogue peculiar velocity $v_{r}$ is plotted as a function of distance.
Beyond around $\sim70\,{\rm Mpc}/h$, there is no longer symmetry in the distribution of $v_{r}$ about zero: more galaxies have negative $v_{r}$ and the universe naively appears to be collapsing, in a so-called ``breathing'' mode. In theory the bias can be corrected, since the standard $\Lambda$CDM \ model makes an explicit prediction that the expected scatter of the radial component of the velocity is roughly $\sigma_{vp}\sim 275\ {\rm km\,s^{-1}}$. The lognormal bias is treated by a correction scheme and the WF/CRs procedure is applied to the bias-corrected data. \yh{The essence of the scheme is to map the lognormal distribution of the inferred distances around their respective redshift distances into a normal distribution around the median of the lognormal one. The width of that normal distribution is treated as a free parameter, set to be $\sim 2 h^{-1}\,{\rm Mpc}$, in agreement with the $\Lambda$CDM \ prediction that the intrinsic scatter of the radial velocities is $\sigma_{vp}\sim 275\ {\rm km\,s^{-1}}$. The same procedure is applied to the observed radial velocities, retaining the median of the distribution of the radial velocities of data points in a given redshift bin. Yet for the velocities, unlike the inferred distances, the variance of the distribution is preserved as well.} Namely, the lognormal distribution of the observed distances is mapped to a Gaussian distribution, while preserving the median of the lognormal distribution. It is the invariance of the median under the lognormal to normal transformation which constitutes the backbone of the BGc scheme. After the application of the BGc scheme to the data, the breathing mode disappears and the radial peculiar velocities scatter normally about zero, as can be seen in \cref{fig:vrd}, bottom panel.
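A toy NumPy illustration of the lognormal bias and of the median invariance (this is a pedagogical sketch, not the BGc implementation; the 0.4 mag error and the zero-true-velocity assumption are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# One galaxy at a true luminosity distance of 100 Mpc, "observed" many
# times with a Gaussian distance-modulus error of 0.4 mag (~20%)
d_true = 100.0
mu_true = 5 * np.log10(d_true) + 25            # distance modulus
mu_obs = rng.normal(mu_true, 0.4, size=200_000)

# Inverting the modulus relation yields lognormally distributed distances
d_obs = 10 ** ((mu_obs - 25) / 5)

# The mean distance is biased high, while the median is unbiased --
# the invariance exploited by the BGc scheme
mean_ratio = np.mean(d_obs) / d_true      # > 1 (biased)
median_ratio = np.median(d_obs) / d_true  # ~ 1 (median preserved)

# With zero true peculiar velocity, the inferred v_r = H0*(d_true - d_obs)
# acquires a negative mean: the spurious "breathing mode"
H0 = 67.8
v_mean = np.mean(H0 * (d_true - d_obs))   # < 0 (km/s)
```

The negative mean velocity mimics the apparent collapse seen in the uncorrected data, while the unbiased median is what the BGc mapping preserves.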
\subsection{Wiener Filter and constrained realisations} \label{sec:WF_CR} The WF/CR method is a tried and tested algorithm for reconstructing the large scale density distribution of the universe based on a limited number of peculiar velocity measurements. It is roughly three decades old, and we refer the reader to the voluminous literature \yh{(Hoffman and Ribak, 1991; Zaroubi et al., 1995; Zaroubi, Hoffman and Dekel, 1999)} for a thorough description, reviewing just the bare essentials here. \yh{The WF is a Bayesian estimator that finds the most probable 3D velocity field (and associated density field) given a set of observed radial velocities and an assumed prior model of the distribution of the peculiar velocities. In cosmological applications of the WF the $\Lambda$CDM \ concordance cosmological model is taken to be the prior. Accordingly: a. the WF provides the most probable continuous density and 3D velocity fields given a finite number of observed `noisy' radial velocities and the assumed $\Lambda$CDM \ model; b. the CRs sample the constrained residual around the WF field; c. the WF and CRs act so as to interpolate between data points and then extrapolate beyond them.} The WF/CRs methodology recovers the linear density and 3D velocity fields. Linear theory is clearly violated on small scales, and the density field is more susceptible to non-linear dynamics than the velocity field. The linear WF/CRs constitute a reasonable proxy to the actual velocity field down to scales of roughly $5-10 h^{-1}\,{\rm Mpc}$ \citep{1999ApJ...520..413Z}. The WF estimator is the outcome of the `tug-of-war' between the data and the assumed prior, $\Lambda$CDM \ in the present case.
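As a reminder of the standard notation (e.g. Zaroubi et al. 1995; written here for orientation, not as new material of this paper), given a data vector $\underline{d}$ of observed radial velocities and an underlying field $\underline{s}$, the WF estimator reads

```latex
\begin{equation}
  \underline{s}^{\rm WF} =
  \langle \underline{s}\, \underline{d}^{\dagger} \rangle \,
  \langle \underline{d}\, \underline{d}^{\dagger} \rangle^{-1}\,
  \underline{d} ,
\end{equation}
```

where the cross- and auto-correlation matrices are evaluated from the assumed prior power spectrum and the errors' covariance; the `tug-of-war' is encoded in $\langle \underline{d}\,\underline{d}^{\dagger} \rangle$, the sum of the signal and noise covariances.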
Where the data is `good and strong' the estimated WF field is `close' to the input data. Otherwise, where the data is weak, the WF solution is dominated by the prior model, namely the solution tends to the null density and velocity fields. Consequently the constrained variance, i.e. the variance spanned by the CRs, is small in the good data regime and converges towards the cosmic variance as the data deteriorates. One last remark regarding the BGc/WF method. Because there exists no possibility of homogenizing the sampling (namely, the Zone of Avoidance will always inhibit full sky coverage), the only source of uncertainty that could, one day, be mitigated is the observational uncertainty in the distance measurement. In order to test the BGc/WF's inherent ability to reconstruct the underlying fields, an additional ``method'' is compared: the exact WF (hereafter labeled Ex/WF). This is the WF applied to a mock where the error on each data point has been artificially set to zero (and thus no BGc scheme is applied). This serves the purpose of testing the WF in the case where the only source of uncertainty is the sampling. In other words, in the results section XXX, the reconstructions based on the BGc/WF, the Ex/WF and the {\scshape Hamlet}\ methods are presented. \subsection{{\scshape Hamlet}\ } \label{sec:Hamlet} The second approach examined here utilizes a Hamiltonian Monte Carlo sampling of the posterior PDF via the HAmiltonian Monte carlo reconstruction of the Local EnvironmenT ({\scshape Hamlet}\ \!\!) code, which is described in full in \yh{Valade et al. (2022)}. Unlike the WF/CRs formalism, the {\scshape Hamlet}\ algorithm treats the real distances of the data as unknown dynamical variables that need to be estimated, much in the same way as the density and velocity fields. It is Bayesian in nature because an \emph{ab initio} PDF of the variables that are to be estimated can be specified.
A Monte Carlo technique is used to sample the various posterior PDFs that are under consideration: the cosmic density field, the velocity field as well as the distances of the constraining data. The technical challenge of the Monte Carlo approach is twofold. First, the (assumed) prior and posterior PDFs of the density and velocity fields need to be extended to also include the distribution of the data points' distances. Then an efficient sampling of the posterior PDF needs to be devised. Given the extremely high dimensionality of the problem, the Hamiltonian Monte Carlo algorithm is the tool of choice to perform that sampling, outperforming Markov Chain Monte Carlo algorithms by many orders of magnitude (Valade et al. 2022). Beyond its inhomogeneous distribution (e.g. the ZOA), the CF3 catalogue has a fairly sharp cut-off at around \yh{a redshift distance of 150$h^{-1}$Mpc} (see Fig. 2c). In practice this defines and limits the extent within which the {\scshape Hamlet}\ reconstruction method is valid. \cite{Hinton2017} investigated the problem that occurs when sampling from a distribution that has an abrupt cut-off. They show that the reconstructed fields (and inferred variables) close to the cut-off will be biased, effectively nullifying the applicability of the reconstruction beyond a radius slightly smaller than the cut-off (say $\sim90\%$ of it). It is therefore expected that any reconstruction based on these data will not be valid beyond around \yh{150$h^{-1}$Mpc}. \section{Results} The results are presented in three sub-sections where we 1. compare the reconstructed constraints with their true values (\cref{sec:data}); 2. examine the accuracy of the reconstruction of the cosmic fields (\cref{sec:maps}); and 3. compare the reconstructed monopole and dipole (i.e. bulk flow) moments with their target counterparts (\cref{sec:bulk}).
\subsection{Reconstructed data} \label{sec:data} After applying the BGc/WF and {\scshape Hamlet}\ methods to the mock catalogues (as well as the WF to the exact, error-free mocks), the first thing to check is how well the distributions of radial peculiar velocities, distances and redshifts \yh{of the data points} match \yh{their input values}. This is shown in Fig. \ref{fig:res:mockspost_dist}a, b and c, respectively, where the ``target'' curve (black) represents the true distributions of the mock catalogue; \yh{namely}, the closer the BGc/WF (red dotted) or the {\scshape Hamlet}\ (blue dot-dashed) curve is to the black, the more accurate the reconstruction. \yh{The values of the reconstructed radial peculiar velocities (RPVs) of the mock data points are obtained by interpolation over the grid points.} We remind the reader that the Exact WF (green dashed) represents the limits of the WF method. Fig.~\ref{fig:res:mockspost_dist}(a) shows that the {\scshape Hamlet}\ reconstruction method does an exceptional job at recovering the distribution of radial peculiar velocities. Note that the WF in its purest form also recovers the target distribution. The BGc/WF struggles slightly, narrowing the data's distribution with a slight over-emphasis on smaller values of the peculiar velocity at the expense of the larger values. We note, as an aside, that the fact that the target (and hence the reconstructions) is not centered at $v_{r}=0$ is due to the specific nature of the mock observer chosen (i.e. cosmic variance).
\yh{The BGc/WF suppression of the reconstructed radial velocities of the data points relative to the target is inherent in the WF algorithm, where the estimated signal is the weighted `compromise' between the data and the prior model. Where the data is not very strong the WF estimator is biased towards the null field predicted by the prior.} In Figs. \ref{fig:res:mockspost_dist}(b) and (c) the distance and redshift distributions are examined. For both of these quantities the two reconstructions do a remarkably good job at matching the target, rendering their curves practically indistinguishable from it. Note however that the BGc/WF method tends to ``exaggerate'' some of the peaks and valleys in the distance distribution (\cref{fig:res:mockspost_dist}b). All models reliably follow the input's form. \yh{In the absence of errors, i.e. the Ex/WF case, the reconstructed RPVs of the data points should be equal to the input constraints taken from the target simulation (Hoffman and Ribak, 1991). The slight mismatch between the Ex/WF reconstructed values and the input constraints is due to the interpolation across the coarse grid and the small but not negligible observational errors on the redshifts.} It is important to understand by how much each constraint shifts during the de-biasing and reconstruction procedures. In Fig. \ref{fig:res:mockspost_vr_d} the difference between the reconstructed $v_r$ and the input $v_r$ is compared on a constraint by constraint basis and as a function of distance.
From top to bottom this difference is shown for the BGc/WF (\cref{fig:res:mockspost_vr_d}a), {\scshape Hamlet}\ (\cref{fig:res:mockspost_vr_d}b), and Ex/WF (\cref{fig:res:mockspost_vr_d}c). The difference between the two main reconstruction methods (BGc/WF and {\scshape Hamlet}\ ) is shown in the final panel, \cref{fig:res:mockspost_vr_d}d. In these plots each constraint is a dot. The median value of the difference is shown as a black line and the standard deviation of the distribution is designated with error bars. An examination of \cref{fig:res:mockspost_vr_d} reveals that the methods based on the WF tend to underestimate $v_r$ in the innermost distance shells (below $60$ Mpc$/h$) while overestimating it in the outer shells. This is true even for the ideal case of the Ex/WF. The mean of the {\scshape Hamlet}\ method, however (\cref{fig:res:mockspost_vr_d}b), indicates that the constraints are not systematically shifted in the region $\sim 40 - 110 h^{-1}\,{\rm Mpc}$, but $v_r$ is underestimated outside this range. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/mp_vr_vs_d_diff_target.pdf} \caption{\yh{Scatter plots of the residuals of the BGc/WF (panel a), the {\scshape Hamlet}\ (panel b) and the Ex/WF (panel c) reconstructed RPVs evaluated at the data points. The residual of the BGc/WF relative to the {\scshape Hamlet}\ reconstructed RPVs at the data points is shown as well (panel d).
} } \label{fig:res:mockspost_vr_d} \end{figure} \subsection{Reconstructed Cosmic Fields} \label{sec:maps} In this section the reconstructed density and velocity fields are examined and compared with the target. \subsubsection{Reconstructed Density maps} \label{sec:recon_dens} \yh{Aurelien: I assume that by the density of the target simulation you mean $\delta_{\rm lin}=-\nabla\cdot\vec{v}/H_0f(\Omega)$. If so please correct in Figs. 6 and 7 to $\delta_{\rm lin}$ where the target density field is considered. }\\ In order to visually inspect the reconstructed density distribution, a 4Mpc$/h$ thick smoothed slab at the super galactic plane (SGZ=0) is chosen. This is not an arbitrary choice: given that the largest numbers of constraints are expected to lie in or close to SGZ=0, we expect this slab to be the most important (and possibly the most accurate) for reconstructions. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/planes_Z_divv_raw-mean-nstd_160_target-bgcwf-exwf-hmc-bgcwf-exwf-hmc.pdf} \caption{\yh{\st{Fields, uncertainties and pool maps $\delta$ for the ${\rm SGZ}=0$ plan. From top to bottom : BGc/WF, Ex/WF, HMC.} A comparison of the of the BGc/WF (left column), Ex/WF (central column) and the {\scshape Hamlet}\ (right column) reconstructed (over)density fields with the target simulation. The top panel presents the linearized density, $\delta_{\rm lin}$, of the target simulation. The middle panels present the reconstructed $\delta$ and the bottom ones show the constrained variance normalized by the cosmic variance, $\Sigma_\delta / \sigma_\delta$. All plots refer to the $SGZ=0$ plane of the target simulation and all fields are Gaussian smoothed with a $5h^{-1}\,{\rm Mpc}$ kernel.} } \label{fig:res:divv_sgz} \end{figure*} \cref{fig:res:divv_sgz} examines the density distribution in this slab. \yh{The different plots show the fractional overdensity, $\delta=(\rho/\bar{\rho}-1$, where $\bar{\rho}$ is the cosmological mean density. 
For the target non-linear density field the fractional over density is replaced by $\delta_{\rm lin}=-\nabla\cdot\vec{v}/H_0f(\Omega)$ which coincides with $\delta$ in the linear regime, where $f(\Omega)$ is the linear growth factor. The $\delta$ fields are Gaussian smoothed with a kernel of $5h^{-1}\,{\rm Mpc}$.} The top panel (a) is the target density distribution. The column below it (namely the middle column, \cref{fig:res:divv_sgz}c,f) shows the Ex/WF results, while the left column (\cref{fig:res:divv_sgz}b,e) shows the BGc/WF results and the right column (\cref{fig:res:divv_sgz},d,g) shows the \yh{\st{Hamlet} {\scshape Hamlet}\ } results. The middle row (panels b,c,d) shows the reconstructed density distribution. Some conclusions may be drawn from a visual examination of \cref{fig:res:divv_sgz}b,c,d): The Ex/WF generally recovers the features of the local cosmography at all distances. The reconstruction is not exact; given that there are no ``observational'' errors here, this implies that the mismatch between the Ex/WF and the target (i.e. between \cref{fig:res:divv_sgz}c and \cref{fig:res:divv_sgz}a), are entirely due to the \yh{finite, inhomogenous and anisotropic} sampling. Comparing the BGc/WF (\cref{fig:res:divv_sgz}b) with the target indicates a \yh{\st{steady deterioration of the accuracy of the cosmographic reconstructions with distance} decline of power o the reconstructed density field with the distance from the observer, yet the general structure of the cosmic web of over- and under-dense regions is recovered. The HAMLET reconstructed $\delta$ field does not exhibit the same loss of power as in the BGc/WF case but it suffers from a loss of spatial resolution with distance (\cref{fig:res:divv_sgz}d). The more distant structures become more fuzzy and diffuse with distance. 
\sout{The same deterioration is not as severe (but nevertheless present) in the HAMLET method (\cref{fig:res:divv_sgz}d), which fairly accurately reproduces the main features of the target at greater distances than the BGc/WF method.} } \yh{The bottom panels of \cref{fig:res:divv_sgz} present the constrained variance, $\delta$ fields, $\Sigma^2_{\delta}$, of the three reconstructed. It is defined as the local, cell by cell, variance calculated over an ensemble of CRs for the Ex/WF and BGc/WF case and over a set of independent states of the Monte Carlo chain in the HAMLET case. The panels show the square root of constrained variance normalized by the cosmic variance, $\Sigma_{\delta}/\sigma_{\delta}$. The cosmic variance is calculated by calculating the variance over all CIC cells in the target simulation. The value of $\Sigma_{\delta}/\sigma_{\delta}$ gauges the constraining power of the constraints and the assumed prior model. One expects it to be small at small distances and to approach unity asymptotically with distance. \sout{The bottom row of \cref{fig:res:divv_sgz}, (namely \cref{fig:res:divv_sgz}e,f,g) quantifies the {\it how well constrained} a region is by comparing the variance in the reconstructions ($\Sigma_{\delta}$) with cosmic variance ($\sigma_{\delta}$). What is plotted here is their ratio $\Sigma_{\delta}/\sigma_{\delta}$. The constrained variance is determined, on a cell by cell basis, by averaging over all the constrained realizations of the density field. Each cell thus has a mean value of the reconstructed density and a standard deviation about this value. The standard drviation is what is meant here by ``constrained variance'' (i.e.$\Sigma_{\delta}$) The cosmic variance is just the standard deviation of the density field of a $\Lambda$ CDM universe at $z$=0. This can be computed in many ways. In this case, it is found by simply compting the standard deviation of all the cells in the target simulation and is thus a fixed number. 
When $\Sigma_{\delta} = \sigma_{\delta}$, the cell is essentially unconstrained, since the variance in the density field across all constrained realizations is equal to the cosmic variance. Thus when $\Sigma_{\delta}/\sigma_{\delta} =1$ the region is fully unconstrained while when $\Sigma_{\delta}/\sigma_{\delta}=0$ the region is extremely well constrained.} } \cref{fig:res:divv_sgz}e,f,g) quantifies what is visually apparent from \cref{fig:res:divv_sgz}b,c,d) namely that the inner regions are well constrained but that this fades with increasing distance. The reconstruction methods that include errors (i.e. \cref{fig:res:divv_sgz}e,g) are never ``perfect'', while the Ex/WF method \cref{fig:res:divv_sgz}f), does obtain values of $\Sigma_{\delta}/\sigma_{\delta}$ close to 0. Interestingly, the impact of the Zone of Avoidance (ZoA) on the reconstruction method is glaringly obvious in \cref{fig:res:divv_sgz}(f). Here the simple effect of being unable to sample the ZoA, causes a very clear limitation of the expected ability to reconstruct the density field. The accuracy of the density field reconstructions - specifically their accuracy {\it as a function of distance} - is shown in \cref{fig:res:scatters_divv}. These are scatter plots which compare, on a cell by cell basis, the density of the target with the BGc/WF (top row), Ex/WF (middle row) and HAMLET (bottom row). The line, \yh{$y=a x + b$}, which best describes the correlation is shown in red; its slope and the Pearson correlation coefficient is given in each sub-panel. In the ideal case where a reconstruction method perfectly matches the target this would simply be the \yh{\st{$y=x$} case a slope of $a=1$ and an offset or bias of $b=0$} line with zero scatter (shown in black), namely a Pearson correlation coefficient of unity. The columns in this figure denote different 40 Mpc$/h$ thick radial shells under consideration. 
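The per-shell comparison just described (best-fit line $y=ax+b$ and Pearson $r$ in radial distance shells) can be sketched in a few lines (a toy stand-in; the mock fields and the suppression-with-distance model are purely illustrative assumptions):

```python
import numpy as np

def shell_statistics(target, recon, dist, edges):
    """Best-fit line y = a*x + b and Pearson r between reconstructed
    and target field values, computed in radial distance shells."""
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (dist >= lo) & (dist < hi)
        x, y = target[m], recon[m]
        a, b = np.polyfit(x, y, 1)            # slope and offset
        r = np.corrcoef(x, y)[0, 1]           # Pearson correlation
        stats.append((a, b, r))
    return stats

rng = np.random.default_rng(2)
dist = rng.uniform(0, 160, 20000)             # Mpc/h, toy sampling
target = rng.normal(0, 1, dist.size)
# Toy 'reconstruction': power suppressed and noise growing with distance
recon = (1 - dist / 200) * target + 0.01 * dist * rng.normal(0, 1, dist.size)

stats = shell_statistics(target, recon, dist, edges=[0, 40, 80, 120, 160])
slopes = [s[0] for s in stats]
corrs = [s[2] for s in stats]
print(slopes, corrs)
```

With this toy suppression model both the slope and the correlation coefficient fall with distance, mimicking the qualitative behaviour seen in the scatter plots.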
Note that a slope less than unity indicates that the reconstruction underestimates the overdense regions and overestimates the underdense regions; a slope greater than unity represents the opposite (exaggerated over- and under-dense regions). An offset of $b\neq0$ means a biased reconstruction. There are a number of important features in \cref{fig:res:scatters_divv}. First, considering the innermost bin (leftmost column), the Ex/WF reconstruction recovers the density of the target very well. A slope of unity, a practically null offset of $b=0.01$ and a correlation coefficient of 0.92 indicate that in this region the density is very well recovered by the Ex/WF. This implies that the nearby sampling of the CF3 catalog is almost optimal. Obviously, the {\scshape Hamlet}\ and the BGc/WF methods do worse in recovering the density field. Moving to the outer shells, all three density reconstructions systematically degrade, with both slopes and correlation coefficients decreasing. The reader will note that the slope in all cases is less than unity, indicating that the reconstructions suppress the power of the recovered density field. This suppression of power increases with the distance from the observer. The BGc/WF suffers a stronger loss of correlation with distance than the {\scshape Hamlet}. Yet, the latter reconstruction is quite biased at large distances, with $b=0.12$ for the distance range $120 \leq d \leq 160\ h^{-1}\,{\rm Mpc}$. The BGc/WF behaves, on the other hand, by the `Bayesian book': where the sampling is very sparse and the errors are much larger than the signal, the unbiased prior is recovered.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{Figures/smoothedscatter_shells_divv_bgcwf-exwf-hmc_vs_target_51} \\
\caption{Density scatter plots of the reconstructed $\delta$ versus the target $\delta_{\rm lin}$. Rows from top to bottom: HMC, Ex/WF, BGc/WF. Columns from left to right: within spheres of 50, 100, 150, 200 $\,h^{-1}\,\Mpc$. The red line represents the best-fit line, $y=a x + b$; the parameters of the line and the Pearson correlation coefficient are given in the inset. The black line, with a slope of unity and no offset, is shown for reference.}
\label{fig:res:scatters_divv}
\end{figure*}
The correlation between the reconstructed mean field and the target is shown as a function of distance in \cref{fig:res:corr}. Essentially these are just the correlation coefficients from the scatter plots (\cref{fig:res:scatters_divv}) plotted as a function of distance, in order to gauge the degradation of the reconstruction methods as the data becomes sparser and the volumes become larger.
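Both the $\Sigma_{\delta}/\sigma_{\delta}$ maps and the error corridors discussed here derive from the spread over an ensemble of constrained realizations. A minimal sketch of such a cell-by-cell constrained-over-cosmic variance ratio (the toy ensemble is an illustrative assumption, not pipeline output):

```python
import numpy as np

def constrained_over_cosmic(realizations, sigma_cosmic):
    """Cell-by-cell constrained variance over an ensemble of
    constrained realizations, normalized by the cosmic variance.
    realizations: array of shape (n_realizations, n_cells)."""
    Sigma = realizations.std(axis=0)      # constrained std per cell
    return Sigma / sigma_cosmic           # ~0: well constrained, ~1: unconstrained

rng = np.random.default_rng(1)
sigma_cosmic = 1.0
# Toy ensemble: 'nearby' cells are tightly constrained across the
# realizations, 'distant' cells scatter with the full cosmic variance.
near = rng.normal(0.5, 0.05, size=(50, 100))
far = rng.normal(0.0, sigma_cosmic, size=(50, 100))

ratio_near = constrained_over_cosmic(near, sigma_cosmic).mean()
ratio_far = constrained_over_cosmic(far, sigma_cosmic).mean()
print(ratio_near, ratio_far)
```

The nearby cells yield a ratio close to zero and the distant ones a ratio close to unity, the two limits of the constrained-variance maps.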
The binning is also finer (20 vs 40 Mpc$/h$ shell thickness), hence the exact values of the correlation coefficient are not identical. The solid lines in \cref{fig:res:corr} represent the mean correlation coefficient between the reconstruction and the target; the error corridor represents the $2\sigma$ variance about this mean. Two main conclusions can be drawn here: as expected, the Ex/WF is always superior to the BGc/WF and the {\scshape Hamlet}\ methods. With the exception of the innermost bin, the {\scshape Hamlet}\ method achieves higher correlation coefficients than the BGc/WF method. At the edge of the data, no method achieves a correlation coefficient greater than 0.5.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/corr_divv_vs_d_bgcwf-exwf-hmc}
\caption{Statistics per shell of distance: the absolute value of the correlation coefficient for $\delta$ between the different reconstructions and the target field. The error envelope represents the $2\sigma$ variation of the ensemble of realizations.}
\label{fig:res:corr}
\end{figure}
\subsubsection{Reconstructed radial velocity maps}
\label{sec:bulk}
The examination of the radial component of the velocity field follows here that of the density field (\S \ref{sec:recon_dens}). The same SGZ=0, smoothed, 4Mpc$/h$ thick slab is shown in \cref{fig:res:vr_sgz}. Again, the top panel is the target (mock) radial peculiar velocity field, while the left column shows the BGc/WF reconstruction, the middle column the Ex/WF reconstruction and the rightmost column the {\scshape Hamlet}\ reconstruction. By visual inspection the reader will note that the radial velocity field (\cref{fig:res:vr_sgz}b,c,d) is much more accurately reconstructed than the density field.
The same outflows and inflows are generally visible and the cosmographic landscape is recognisable in all three cases. Note, however, that the accuracy of the velocity reconstruction, like that of the density field, deteriorates at larger distances: features are recognisable but distorted and smoothed out. The constrained and cosmic variances of the radial velocity, $\Sigma_{v_{r}}$ and $\sigma_{v_{r}}$ respectively, are calculated much in the same way as for the density field (\S \ref{sec:recon_dens}). The imprint of the ZoA is clearly seen in the $\Sigma_{v_{r}}/\sigma_{v_{r}}$ map of the Ex/WF. Yet, in all cases considered here the constrained variance, normalized by the cosmic variance, is much smaller than in the density case. Namely, the velocity field is much more constrained by the CF3 data than the density field. Also, the {\scshape Hamlet}\ outperforms the BGc/WF reconstruction. A close inspection of \cref{fig:res:vr_sgz}g uncovers one troubling feature: at the edge of the reconstructed volume, at distances close to $150\ h^{-1}\,{\rm Mpc}$, the reconstruction is `bluer' than the corresponding target and Ex/WF maps. Namely, the {\scshape Hamlet}\ reconstructed velocity field has a spurious negative infall. Again, we turn to a scatter plot, on a cell-by-cell basis, to quantify the quality of the reconstruction as a function of distance in \cref{fig:res:scatters_vr}, which is structured identically to \cref{fig:res:scatters_divv} - namely, radial extent increasing column-wise from left to right, while the rows from top to bottom show the BGc/WF, Ex/WF and {\scshape Hamlet}\ reconstructions. This figure is also qualitatively identical to its density field counterpart (\cref{fig:res:scatters_divv}) in that the same behavioural trends between the different reconstruction methods, and as a function of distance, exist.
The correlation analysis of the Ex/WF and BGc/WF cases behaves much in the same way as for the density field - a degradation of the correlation with distance, a slope ($a$) that is close to unity nearby and diminishes with distance, and an essentially zero offset ($b\sim 0$). Yet, the quality of the reconstruction of the radial velocity is much better than that of the density. The {\scshape Hamlet}\ reconstruction shows a somewhat unexpected behaviour. The slope of the best-fit line for the distance range $40 \le d \le 80\ h^{-1}\,{\rm Mpc}$ exceeds unity, $a=1.22$, i.e. there is an excess of power compared with the target and the Ex/WF cases. This is unexpected for a Bayesian algorithm. The best linear fit for the range $120 \le d \le 160\ h^{-1}\,{\rm Mpc}$ yields a significant negative offset of $b=-151\ {\rm km\,s^{-1}}$, in agreement with the visual inspection of \cref{fig:res:vr_sgz}g.
\subsection{Multipole moments of the reconstructed velocity field}
The first two moments of the velocity field, the monopole and the dipole, are examined here. The effect of errors and sampling on the fidelity of these two physical quantities is of particular interest, since the monopole and dipole are often used as probes of the scale of homogeneity and can affect probes of the cosmological model in particular. \cref{fig:moments}(a) shows the target and reconstructed velocity monopole as a function of distance. The same colouring and line style convention used in \cref{fig:res:corr} and \cref{fig:res:vr} is adopted here too, with the moments of the target simulation plotted in black.
Note that the monopole - the mean infall or outflow of matter - is the zeroth-order moment of the velocity field. Namely, it is the mean of the divergence of the velocity field in spheres of radius $d$, and as such is called the ``breathing mode'' of the velocity field. In the linear theory of the cosmological gravitational instability the density and velocity fields are related by $\delta=-\nabla\cdot\vec{v}/H_0f(\Omega)$, hence we opted here to present the monopole term by means of $\delta=-\nabla\cdot\vec{v}/H_0f(\Omega)$. Thereby, \cref{fig:moments}a effectively presents the mean linear density within spheres of radius $d$. The Ex/WF is nearly indistinguishable from the target here: the error corridor (which corresponds to the variance across all the constrained realisations) is tiny and the black and green dashed lines are practically on top of each other. As for the more realistic cases of the {\scshape Hamlet}\ and BGc/WF curves, they succeed and fail in different ways. The BGc/WF curve overestimates the monopole in the inner parts (within $\sim50\ h^{-1}\,{\rm Mpc}$) while underestimating it outside that range. This increased monopole implies an overestimation of the density in the inner parts of the mock universe - this is confirmed by examining the equation of the best-fit line in the scatter plot of \cref{fig:res:scatters_divv} (upper row, left column, $d < 40 h^{-1}\,{\rm Mpc}$). The best-fit line has an offset of $b=0.11$, meaning that there is a systematic increase in the estimated densities, consistent with the higher monopole.
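A minimal sketch of the monopole profile, i.e. the mean of $-\nabla\cdot\vec{v}/H_0 f(\Omega)$ within spheres of radius $d$, evaluated here on a toy velocity field (grid size, cell spacing and the value of $f(\Omega)$ are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def monopole_profile(vx, vy, vz, cell, H0, f, radii):
    """Mean of delta_lin = -div(v)/(H0*f) in spheres of radius d
    around the grid centre ('breathing mode' of the velocity field)."""
    div = (np.gradient(vx, cell, axis=0)
           + np.gradient(vy, cell, axis=1)
           + np.gradient(vz, cell, axis=2))
    delta_lin = -div / (H0 * f)
    n = vx.shape[0]
    ax = (np.arange(n) - n / 2 + 0.5) * cell
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    return [delta_lin[r < d].mean() for d in radii]

# Toy field: pure radial outflow v = alpha * x  =>  div(v) = 3*alpha
n, cell = 32, 5.0                     # 32^3 grid, 5 Mpc/h cells (illustrative)
H0, f = 67.7, 0.5                     # f is an assumed growth-rate value
ax = (np.arange(n) - n / 2 + 0.5) * cell
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
alpha = 1.0
mono = monopole_profile(alpha * X, alpha * Y, alpha * Z, cell, H0, f, [40.0, 70.0])
# For this linear outflow delta_lin = -3*alpha/(H0*f) everywhere.
print(mono)
```

For a purely linear outflow the divergence is constant, so the profile is flat; on a realistic field the profile varies with $d$ as in \cref{fig:moments}a.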
Both the reconstructions and the target tend to zero infall at these large scales. The {\scshape Hamlet}\ method, on the other hand, behaves inversely to the BGc/WF method, underestimating the target monopole at small scales and overestimating it at large scales. The {\scshape Hamlet}'s monopole term at the edge of the data reveals an excess of density at $d \sim (120 - 150)\ h^{-1}\,{\rm Mpc}$, in agreement with \cref{fig:res:scatters_divv} (bottom right panel). Otherwise, the {\scshape Hamlet}\ method succeeds in tracking the target monopole over a large range, from $\sim20$ to $\sim100\ h^{-1}\,{\rm Mpc}$. \cref{fig:moments}(b,c) refer to the dipole of the velocity field, namely the bulk flow: \cref{fig:moments}(b) refers to the magnitude of the bulk flow, while \cref{fig:moments}(c) refers to its direction. All methods do a fine job of recovering the magnitude of the bulk flow beyond $\sim30$Mpc$/h$. The Ex/WF has, predictably, a smaller error corridor than the other two methods, which are roughly similar in size. With respect to direction, \cref{fig:moments}(c) shows the dot product between the target bulk flow direction and the reconstructed one (hence in this plot there is no black target line). The bulk flow directions of the {\scshape Hamlet}\ and BGc/WF methods are aligned to within $\sim 15$ deg of the target out to a distance of $\sim 50$Mpc$/h$, while the Ex/WF is well aligned to greater distances.
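The dipole comparison reduces to a mean velocity vector per sphere and, for the alignment, a normalized dot product; a minimal sketch with an illustrative mock catalogue (all names and values are assumptions, not the paper's data):

```python
import numpy as np

def bulk_flow(vel, r, d):
    """Mean velocity vector (bulk flow) of tracers within distance d.
    vel: (N, 3) peculiar velocities in km/s, r: (N,) distances."""
    return vel[r < d].mean(axis=0)

def cos_alignment(b1, b2):
    """Cosine of the angle between two bulk-flow vectors
    (1: aligned, 0: perpendicular, -1: anti-aligned)."""
    return np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))

rng = np.random.default_rng(3)
r = rng.uniform(0, 160, 5000)                       # Mpc/h, toy sampling
flow = np.array([300.0, 100.0, -50.0])              # km/s, toy target bulk flow
target_v = flow + rng.normal(0, 50, (5000, 3))
recon_v = target_v + rng.normal(0, 150, (5000, 3))  # noisier 'reconstruction'

bt = bulk_flow(target_v, r, 50.0)
br = bulk_flow(recon_v, r, 50.0)
print(np.linalg.norm(bt), np.linalg.norm(br), cos_alignment(bt, br))
```

Since the added noise averages down over the tracers inside the sphere, the recovered bulk flow stays closely aligned with the target, mirroring the behaviour seen in \cref{fig:moments}(b,c).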
Note that however, even the Ex/WF curve begins to deviate significantly at the reconstructed edge. This indicates that even in the best case scenario of zero errors, sampling at these great distances is a limiting factor in terms of recovering the cosmic dipole. The problem is (obviously) exacerbated when examining the BGc/WF and {\scshape Hamlet}\ curves at large distances. Taken together \cref{fig:moments} indicates that although the monopole and dipole are well recovered across a large range, the direction of the reconstructed dipole begins to deteriorate when the sampling drops. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/planes_Z_vr_raw-mean-nstd_160_target-bgcwf-exwf-hmc-bgcwf-exwf-hmc.pdf} \caption{Fields, uncertainties and pool maps $v_r$ for the ${\rm SGZ}=0$ plan. From top to bottom : BGc/WF, Ex/WF, HMC.} \label{fig:res:vr_sgz} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{Figures/smoothedscatter_shells_vr_bgcwf-exwf-hmc_vs_target_51} \\ \caption{Scatter plots of the target $v_r$ versus the reconstructed $v_r$. Rows from top to bottom : HMC, Ex/WF BGc/WF. Columns from left to right indicate 40 $\,h^{-1}\,\Mpc$ shells with cinreasing distance from the observer. \yh{The red line represents the best fitted line whose line equation is $y=a x + b$. The parameters of the line and the Pearson correlation coefficient are given in the insert. The black line of a slope of a slope of unity and no offset is shown for reference.} } \label{fig:res:scatters_vr} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/corr_vr_vs_d_bgcwf-exwf-hmc} \caption{Statistics per shell of distance. Absolute value of the coefficient of correlation for $v_r$ between the different reconstructions and the target field. 
The error bars represent the variation on realizations.} \label{fig:res:vr} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/mp_monopole_dipole.pdf} \caption{\yh{The monopole moment (upper panel), the amplitude of the dipole moment (i.e. the bulk velocity; middle panel), and the cosine of the angle of alignment between the reconstructed and target bulk velocities (lower panel) are shown. The profiles present the mean and `$1 \sigma$' scatter of the mean profile in spheres of radius $d$. The reconstructions correspond here to the Ex/WF (green dashed line), the BGc/WF (red dotted line) and the {\scshape Hamlet}\ (blue dot-dashed line) case. The scatter is the constrained variance of the different reconstruction and the target simulation is presented by the black solid line (middle and upper panels). \\{\it Aurelien - note that I ask you to redraw the plot ad use cosine angle.} }} \label{fig:moments} \end{figure} \section{\yh{\sout{Conclusion} Summary}} \yh{The reconstruction of the large scale density and velocity fields from Cosmicflows-like databases of galaxy distances, and hence peculiar radial velocities, is challenging. The data is sparse, extremely noisy with Noise/Signal ratio larger than a few for the majority of the data, non-uniformly and anisotropically distributed. Furthermore that data suffers from the lognormal bias, which leads to a non-linear bias in the estimated distances and velocities. The CLUES project focuses on the reconstruction of the local LSS and the setting of constrained initial conditions for cosmological simulations designed to reproduce our local patch of the Universe. Three main algorithms are used within the CLUES to `unbias' the data and perform the reconstruction: the bias minimization of Sorce (2015), the BGc/WF of Hoffman et al (2021) and the {\scshape Hamlet}\ of Valade, Hoffman and Libeskind (2022). 
The present paper compare the BGc/WF and the {\scshape Hamlet}\ algorithms by testing them against mock CF3-like surveys draw from one the MultiDark cosmological simulations. } \yh{ The quality of the reconstruction is gauged by studying the residual between the reconstructed and target density and velocity fields. The residual is mostly analyzed by quadratic measures and as such it is characterized by the mean and variance of the distribution. An optimal reconstruction should make the mean of the residual to be as close as possible to the null field and aim at minimizing its variance. A related measure is the linear correlation analysis which yields the best 'line', $y = a x + b$, that fits the linear dependence of reconstructed field on the target one, and the Pearson correlation coefficient. The values of the offset, $b$, for the case of the linear over-density and for the radial velocity are consistent with zero fro the BGc/WF, in agreement with the theoretical expectations. The distant data points are extremely noisy and very sparsely distributed, hence the WF reconstruction is domiated by the $\Lambda$CDM \ prior model. The {\scshape Hamlet}\ 's significant offset is inconsistent with the prior model. } \yh{We define here three different regions: the nearby ($d \lesssim 40 h^{-1}\,{\rm Mpc}$), the intermediate (($ 40 \lesssim d \lesssim 120 h^{-1}\,{\rm Mpc}$) and the distant one ($d \gtrsim 120 h^{-1}\,{\rm Mpc}$). Based on the above criteria we conclude that nearby the BGc/WF and the {\scshape Hamlet}\ methods are doing roughly equally well. The methods diverge at large distance - with the {\scshape Hamlet}\ outperforming the BGc/WF with a tighter correlation and smaller variance but underperforming in terms of the bias. This is most noticeable for distant region (the right columns of Figs. \ref{fig:res:scatters_divv} and \ref{fig:res:vr}). } \yh{The three panels of Fig. \ref{fig:moments} deserve a special attention here. 
The upper panel shows the radial profile of the monopole moment. The four profiles shown there - target, Ex?WF, BGc/WF and {\scshape Hamlet}\ - are all constructed under the assumption of $\Lambda$CDM \ value of $H_0 = 67.7\,\rm km\,s^{-1}Mpc$. Yet, the negative offset of the monopole moment at the edge of the data implies that the local value of $H_0$ is somewhat smaller than it is global value. A phenomenon expected for any finite volume realization in the $\Lambda$CDM \ cosmology (see Hoffman et al, 2021 for a quantitative assessment). A proper adjustment of the local value of $H_0$ would bring the target and Ex/WF profiles to converge to zero at the edge of the data, together with the BGc/WF asymptotic value. This would leave the {\scshape Hamlet}\ positive offset standing out with a systematic bias. The amplitude of the dipole moment, namely the bulk velocity, is recovered equally well by the three reconstruction in very good agreement with the target. The bottom panel shows the cosine of the angle between the reconstructed and the target bulk velocities. The BGc/WF behaves as expected - the mean misalignment is consistent with the full alignment to within one $\sigma$ of the constrained variance. This is not the case with the {\scshape Hamlet}\ reconstruction, where the misalignment is more than 1 $\sigma$ away from the expected alignment. \\ {\it Aurelien - please note. I'm suggesting to present the alignment of the vectors by means of the cosnine of the angle. For obvious reasons. } } \yh{Our overall assessment of the {\scshape Hamlet}\ and the BGc/WF reconstructions is that the former outperforms the latter one in terms of reduced scatter and tighter correlation between the reconstructed and the target density and velocity fields. Yet, the {\scshape Hamlet}\ suffers from biases in the reconstructed LSS at the distant regime - ones that do not appear in the BGc/WF reconstruction. 
It follows that the {\scshape Hamlet}\ should be the method of choice for the reconstruction of the LSS and the study of the cosmography of our local patch of the Universe. The BGc/WF reconstruction is the preferred tool for performing quantitative analysis and parameter estimation, and possibly also for setting initial conditions for constrained cosmological simulations. One last comment is due here. The WF/CRs is a very well tested approach that is based on solid theoretical foundations \citep{Hoffman1992, Zaroubi1995, Zaroubi1999}. As such it provides an attractive framework for performing Bayesian reconstruction of the nearby LSS. Yet, any bias in the observational data, and in particular the log-normal one, needs to be addressed and corrected outside that framework in some ad-hoc and approximate way. The HMC methodology, and in particular its {\scshape Hamlet}\ implementation, still suffers from some teething problems that need to be overcome. The ability of the MCMC methodology in general, and the HMC in particular, to address the reconstruction of the LSS, the handling of observational biases and the estimation of cosmological parameters within one self-consistent computational framework makes the {\scshape Hamlet}\ a very attractive tool in the CLUES toolbox. The considerable improvement in the CPU efficiency of the {\scshape Hamlet}\ compared with previous implementations of MCMC algorithms \citep{Valade2022} makes it even more promising for future applications within the CLUES project. } \\ \\ { Mapping the large scale distribution of matter in the Universe from scant, sparse observational data is an attractive problem for a multitude of reasons. For one, mapping one's environment is a way to understand it, and thus one's place in it.
More importantly, depending on the nature of the tracers, the estimation of a continuous distribution from a limited discrete sample is a well formulated mathematical problem. There are different ways to estimate the cosmic density field (XXXX). Many of these methods, justifiably, rely on measuring the density of light, typically by using large spectroscopic surveys. Although such methods have the advantage of hundreds of thousands, if not tens of millions (e.g. the EUCLID spectroscopic survey), of tracers, they suffer from some drawbacks: redshift space distortions and unobserved regions like the ZoA limit the ability to map the entire universe. Galaxy peculiar velocities, on the other hand, suffer from neither of these drawbacks, since a galaxy's peculiar velocity is a result of the entire mass distribution in the universe. Thus a relatively small number of tracers is needed. Not only that, but structures can be inferred where they cannot be seen by looking at the peculiar velocity field, and there is a long history in cosmology of such discoveries (from the Great Attractor XXX to the Vela supercluster Kraan-Korteweg). But how accurate are these methods? Measuring a galaxy's peculiar velocity is a challenge that requires both spectroscopy and photometry, thus cosmic velocity sample sizes are small (CF3 has $\sim 10^{4}$ data points, compared with two orders of magnitude more data in spectroscopic surveys); what's more, errors on the distances can be very large (i.e. up to 20\% with standard scaling relations). Given these limitations, how well do such methods do in reconstructing the underlying cosmic fields? In this work we focus on two methods that claim to reconstruct the cosmic distribution of matter, and thus the full velocity field, by using measurements of galaxy distances: the {\scshape Hamlet}\ method \citep{Valade2022} and the Bias Gaussian Corrected Wiener Filter method \citep{Hoffman2021}.
Details on how these methods work are presented first and foremost in those two papers, but also in sections 2.2, 2.3 and 2.4. The basis of this paper is to apply these two methods to a mock catalogue drawn from a numerical simulation, such that we know the underlying density and velocity fields and can compare the reconstructed ones to the true ones (which we call the ``target''). A mock catalogue is constructed by ``observing" haloes in a simulation in a way which mimics the real CosmicFlows data. The methods are then applied to the mock data and the resulting reconstructions are gauged and checked against the simulation's density and velocity fields. The existence of the underlying mock universe allows us to extrapolate how these methods perform on real data. Although many of our results employ the human eye as judge (i.e. Figs 6 and 9), the fidelity is quantified as well. In general the main conclusion is that these two methods do better in recovering the radial velocity field than the density field. This is perhaps not surprising, since the radial velocity field is what is measured; however, the measurements are error prone. In the unrealistic case of no observational errors, the Wiener Filter radial velocity field and the input data are nearly indistinguishable. When errors are considered, the two methods recover the input very well within $\sim70\,h^{-1}\,\Mpc$. It turns out that the density field too is well recovered, in a region that is slightly smaller - around $\sim50\,h^{-1}\,\Mpc$. Both methods do extraordinarily well in reconstructing the density and velocity fields {\it where there is good data} - good data here meaning a well sampled field, namely in the inner parts, where the main features are well recovered. In summary, we have tested in great detail two methods that reconstruct the cosmic density and velocity fields from measurements of peculiar velocities, similar to those of the Cosmicflows project.
} \section*{Acknowledgements} \yh{Useful discussions with Tamara Davis, concerning the \cite{Hinton2017} paper, are acknowledged. This work has been done within the framework of the Constrained Local UniversE Simulations (CLUES) project. AV and NIL acknowledge financial support from the Project IDEXLYON at the University of Lyon under the Investments for the Future Program (ANR-16-IDEX-0005). YH has been partially supported by the Israel Science Foundation grant ISF 1358/18. } \section*{Data availability} \yh{ The data underlying this article will be shared on reasonable request to the corresponding author. } \bibliographystyle{mnras} \section{Introduction} \begin{figure*} \includegraphics[width=.329\textwidth]{Figures/sgw_vs_sgv_cf35-mocks} \includegraphics[width=.329\textwidth]{Figures/sgw_vs_sgu_cf35-mocks} \includegraphics[width=.329\textwidth]{Figures/sgv_vs_sgu_cf35-mocks} \caption{The positions, in units of km/s, of the CF3 data points (magenta) and the mock data points (green) projected on the three supergalactic principal planes. Note that the ZOA is accurately reproduced in the mock catalogues.} \label{fig:mocks_coordinates} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/mp_distributions_obs_mocks_b_l_z} \caption{From left to right, the distributions of SGB, SGL and $z$. Note that in panels a and b the two curves lie on top of one another.} \label{fig:mocks_distributions} \end{figure*} Mapping the large scale distribution of matter and its corresponding three-dimensional (3D) velocity field is of great interest. The motivation is threefold. For one, mankind at large, and astronomers in particular, explore their Universe by charting it. Making maps is a first step towards understanding one's surroundings and one's place within a greater environment. Mapping the large scale distribution of matter in the Universe is thus an end unto itself.
Furthermore, explaining the large scale velocity and density fields within the context of the $\Lambda$CDM paradigm of structure formation allows for the estimation of cosmological parameters that are inherent to that model \citep{Jaffe1995, Pike2005, Feldman2010, Nusser2011, Carrick2015, Qin2019, Dupuy2019, Lilow2021, Boruah2021a}. A third motivation is to map these fields at high redshift, thereby allowing for the reconstruction of the initial conditions of the Local Universe \citep{Yepes2009, Sorce2016, Hoffman2018, Libeskind2020, Sawala2021}. We focus here on mapping the large scale structure (LSS) of the universe from surveys of peculiar velocities, or rather from surveys of galaxies with measured distances, from which the radial component of the peculiar velocities may be extracted. Measuring galaxy distances is a formidable challenge in observational cosmology. All distance measures rely on comparing an observed magnitude with an (inferred or assumed) absolute magnitude. There are many ways to estimate a galaxy's distance. For example, scaling relations tie the size of an elliptical galaxy, or the angular velocity of a disc galaxy, to its intrinsic luminosity. Other methods include resolving stars at the tip of the red giant branch, measuring Supernovae light curves, Cepheid variable pulsations or the scale of fluctuations in a galaxy's surface brightness. Each method has errors associated with it: a combination of instrumental errors, systematic errors and calibration errors. As such, compilations of peculiar velocities are difficult to analyze \citep[for a comprehensive review see][]{Straus1995} and are usually a patchwork of various surveys and methods, observed with different telescopes in different locations on Earth (or in space). The POTENT method was the first attempt to produce continuous maps of the density and velocity fields based on peculiar velocity surveys \citep{Bertschinger1989}.
The main underlying assumption of the POTENT method is that galaxy velocities are drawn from an irrotational, potential flow. No further assumptions were made on the statistical nature of the flow field beyond the existence of a galaxy bias \citep{Kaiser1987}. Therefore, its ability to handle the shortcomings of such peculiar velocity surveys was limited. Subsequent approaches to the reconstruction of the LSS from peculiar velocities have been formulated within Bayesian frameworks - these include the Wiener filter (WF) and constrained realizations (CRs) methodology \citep{Ganon1993,Zaroubi1999,Tully2019} as well as Monte Carlo Markov Chain algorithms \citep[MCMC;][]{Lavaux2016, Graziani2019, Boruah2021b, Prideaux2022}. They have been remarkably successful in ``mapping the invisible'' and recovering the underlying cosmic fields. Beyond the issues of noisy, sparse data plagued with inhomogeneous errors, there is one additional inherent conceptual problem common to all surveys of peculiar velocities: the peculiar velocity itself is not observed but is a \emph{derived} quantity. Given the redshift of, and the distance to, a galaxy, the radial component of its peculiar velocity can be computed. But only the redshift is observed; distances themselves are not directly observed. What is measured is the distance modulus of a galaxy \citep[cf.][]{Tully2016}. Because the error of the measured distance modulus is assumed to be normally distributed, the errors on the observed distances are log-normally distributed. This leads to a biased estimate of the distances and peculiar velocities with respect to the actual ones \citep[see][]{Hoffman2021}. Often this bias is treated as yet another manifestation of the Malmquist bias \citep[see][]{Straus1995}. Here we refer to it as the log-normal bias.
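The bias is easy to demonstrate numerically. The following toy sketch (illustrative values only: one galaxy at 100 Mpc, observed repeatedly with a $\sim20\%$ distance error, i.e. $\sigma_\mu \approx 0.43$ mag) shows that the mean inferred distance is biased high while the median remains unbiased:

```python
import numpy as np

rng = np.random.default_rng(42)

d_true = 100.0                            # true distance [Mpc] (illustrative)
mu_true = 5.0 * np.log10(d_true) + 25.0   # distance modulus of the galaxy
sigma_mu = 0.43                           # ~20% distance error (illustrative)

# many repeated "observations" of the same galaxy: Gaussian scatter in mu
mu_obs = mu_true + rng.normal(0.0, sigma_mu, 200_000)

# the inferred distances are then log-normally distributed
d_obs = 10.0 ** ((mu_obs - 25.0) / 5.0)

# the mean is biased high by exp((sigma_mu * ln10 / 5)**2 / 2), about 2% here,
# whereas the median remains an unbiased estimate of d_true
print(d_obs.mean() / d_true)      # ~1.02
print(np.median(d_obs) / d_true)  # ~1.00
```

The same multiplicative bias propagates into the inferred radial peculiar velocities, which is the origin of the spurious ``breathing mode'' discussed below.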
For the WF/CRs reconstruction algorithm, the log-normal bias is treated outside of the Bayesian framework, in a separate process \citep{Sorce2015,Tully2014,Hoffman2016,Hoffman2021}. For Monte Carlo methodologies, the log-normal bias is treated within one comprehensive algorithm \citep{Lavaux2016,Graziani2019}. The Constrained Local UniversE Simulations (CLUES) project focuses on the reconstruction of the LSS of our nearby cosmic neighbourhood from surveys of galactic distances, and thereby peculiar velocities, in particular the Cosmicflows database \citep[cf.][and references therein]{Tully2016}. Two main methodologies have been employed by the CLUES project for the reconstruction of the local LSS and the setting of initial conditions for constrained cosmological simulations - one based on the WF/CRs methodology \citep[][]{Hoffman1992, Zaroubi1995} and the other on MCMC and Hamiltonian Monte Carlo (HMC) sampling. In particular, within the WF/CRs framework the issue of the log-normal bias has been handled by two independent algorithms, that of \citet{Sorce2015} and that of \citet{Hoffman2021}, and within the Monte Carlo sampling approach by \citet{Graziani2019} and \citet{Valade2022}. Our aim in this work is to test the quality of two methods that reconstruct the LSS from peculiar velocities: the WF/CRs method with a log-normal bias correction algorithm, known as the Bias Gaussian Correction \citep[BGc;][]{Hoffman2021}, and the HAmiltonian Monte carlo reconstruction of the Local EnvironmenT ({\scshape Hamlet}\ \ for short) method \citep{Valade2022}. These two methods are applied to a mock data catalogue drawn from a cosmological simulation, designed to imitate the CosmicFlows-3 data \citep{Tully2016}. The original simulation is referred to as the target simulation. The two reconstructions are compared with the target simulation to gauge their fidelity. This paper is structured as follows.
In \cref{sec:mocks} the algorithm for constructing halo catalogues that mock the Cosmicflows data is presented. In \cref{sec:BGc} the nature of the input data, as well as its biases and a bias correction scheme, are presented. In \cref{sec:WF,sec:Hamlet} the WF/CR and {\scshape Hamlet}\ \ reconstruction methods are briefly described. The results of applying the two reconstruction methods to these mocks, as well as a comparison between them, are presented in \cref{sec:results}. A summary and conclusions are offered in \cref{sec:conclusion}. \section{Methods} \subsection{Mock Catalogue construction} \label{sec:mocks} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/mp_vr_vs_d_cf3_mocks.jpeg} \caption{The radial peculiar velocity of a galaxy as a function of its distance, shown for the grouped CF3 data (magenta) as well as the mock catalogue (green). We can only make use of CF3 and not CF3+, as distances (and thus radial velocities) were not communicated for the pre-release of CF4. The log-normal bias is evident here in the lack of symmetry about $v_{r}=0$; beyond around $70\,h^{-1}\,\Mpc$ the universe appears to be systematically collapsing, in a so-called ``breathing mode''. Bottom panel: after application of the BGc correction, symmetry is reestablished.} \label{fig:mocks_vr_d} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/mp_distributions_vr_d_z.pdf} \caption{From left to right: the distributions of (a) the radial peculiar velocity, (b) the distance and (c) the redshift for the target (black solid lines), the BGc/WF (red dotted) and {\scshape Hamlet}\ \ (blue dash-dotted) reconstruction methods. The Ex/WF method is shown in green dashed. } \label{fig:mockspost_dist} \end{figure*} We wish to create a mock version of the grouped Cosmicflows catalogue that reproduces its main characteristics, since it is on these observational data that the methods studied here will eventually be applied (Valade et al, in prep).
We start from the publicly available CF3 data release and add to it $\sim4\,000$ points given to us by the authors of CF4 as a pre-release (Tully, private communications)\footnote{R. B. Tully provided us with an advance set of redshifts and angular positions of CF4 for the purpose of this paper. Distance moduli and associated errors were not provided.}, resulting in an ensemble of $\sim15\,000$ entries, hereafter named CF3+. A mock catalogue is constructed from the MultiDark Planck 2 simulation\footnote{The MultiDark simulations are publicly available: \url{www.cosmosim.org}} \citep[MDPL2,][]{Riebe2013}, a dark matter only $N$-body run of $N=3840^3$ particles in a periodic box of side length $L= 1 \,h^{-1}\,\Gpc$. The cosmological parameters of the simulation are from the 2nd Planck data release \citep{Ade2016}, \emph{i.e.} \ a flat $\Lambda$CDM Universe with $\Omega_{\rm m} = 0.307,~\Omega_{\rm b}=0.048,~ \Omega_{\Lambda}= 0.693,~ \sigma_{8} = 0.8228,~ n_{s} = 0.96$ and a dimensionless Hubble parameter $h= 0.678$, where $H_{0}=100\,h\,{\rm km\,s^{-1}\,Mpc^{-1}}$. A Friends-Of-Friends (FOF) algorithm with a linking length of 0.2 times the mean inter-particle separation is used to identify haloes, whose mass is roughly $M_{200}$ \citep{Davis1985}. It is appropriate to use a FOF halo in this case since it is the \emph{grouped} CF3 catalogue which is being mocked. Grouping the members of a virialized object together averages out nonlinear motions, implying that the (e.g.) cluster's peculiar velocity is a better tracer of the flow field. Note that the MDPL2 box size of $L= 1\,h^{-1}\,\Gpc$ is large enough to embed the CF3 catalogue, whose effective depth is roughly $160\,h^{-1}\,\Mpc$. An ``observer'' is associated with a randomly selected halo of mass in the range of $[0.9\, \text{---}\,2.0]\times 10^{12}h^{-1}M_{\sun}$. The simulation is then re-centered on this halo and the simulation's coordinate axes are arbitrarily labelled as Supergalactic (SGX, SGY, SGZ).
Furthermore, a mock ``sky projection'' is made such that each halo is given a sky position (SGL, SGB). The (proper) distance $d$ of each halo from the center is used to compute a cosmological redshift $\bar{z}$ by numerically integrating: \begin{equation} \label{eq:intro:dl2zcos} d=c H_0^{-1} \int_0^{\bar{z}}\frac{1}{\sqrt{\Omega_m(1+\mathcal{Z})^3+(1-\Omega_m)}}\d{\mathcal{Z}} \end{equation} where $\Omega_m$ is the cosmological matter density parameter. The (proper) distance $d$ is also turned into a luminosity distance $\subX{d}{L}$ by \begin{equation} \label{eq:intro:dl2d_vr} \subX{d}{L}=d \times (1 + \bar{z}) \end{equation} which is used to compute the halo's distance modulus: \begin{equation} \label{eq:intro:mu2dl} \mu = 5\log{\bigg(\frac{\subX{d}{L}}{\rm Mpc}\bigg)}+25 \end{equation} The radial peculiar velocity $v_{r}$ is combined with the cosmological redshift to obtain the full redshift \citep{Davis2014} \begin{equation} z+1=(\bar{z}+1)\left(\frac{v_{r}}{c}+1\right) \end{equation} At this point each halo's position relative to the observer has been transformed into two ``observable'' quantities: 1. a redshift $z$ (which includes a contribution from the radial peculiar velocity $v_{r}$) and 2. a distance modulus $\mu$. The mock catalogue aims to have the same Probability Distribution Functions (PDFs) of $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$ as the CF3+ data. This is accomplished with a Monte Carlo style algorithm in the following way: the same number of haloes as data points in CF3+ are drawn at random from the simulation, within a sphere of around $300\,h^{-1}\,\Mpc$. A merit is assigned to this initial set of haloes by computing the absolute difference between its $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$ and those of CF3+. Iterations proceed by adding and subtracting one halo at a time and evaluating the merit of the new $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$, compared to CF3+'s.
If a new potential halo improves the merit of the distributions, it is kept; otherwise it is rejected. In this way, the process converges, halo by halo, towards reproducing CF3+'s $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$. Once the merit function has converged and a suitable mock catalogue has been constructed, the observational errors from the CF3 catalogue are added to the mock. Namely, the redshift and distance modulus of each CF3 data point is given as $z+\varepsilon_{z}$ and $\mu+\varepsilon_{\mu}$, where $\varepsilon_{z}$ and $\varepsilon_{\mu}$ denote the errors associated with each measurement. $\varepsilon_{z}$ is assumed to be entirely due to spectroscopic precision, while $\varepsilon_{\mu}$ depends on which standard candle is used and may range from 5\% for Supernovae to 20\% for scaling relations. Both $\varepsilon_{z}$ and $\varepsilon_{\mu}$ are assumed to be Gaussian with means of zero and standard deviations of $c\sigma_{z}=50\,{\rm km\,s^{-1}}$ and $\sigma_{\mu}$, respectively. The value of $\sigma_{\mu}$ associated with each halo is taken from the entry of CF3 whose redshift is the closest, so as to reproduce the dependence of $\sigma_{\mu}$ on distance. The fidelity of the mock to the CF3 catalogue is shown in \cref{fig:mocks_coordinates,fig:mocks_distributions}. \cref{fig:mocks_coordinates} shows the three supergalactic projections with the mock data points in green and the CF3 constraints in magenta. The Zone of Avoidance (ZOA) and the visual distribution of the catalogues are well recovered. Quantitatively this is shown by the distributions $P({\rm SGB})$, $P({\rm SGL})$ and $P(z)$ themselves, shown in \cref{fig:mocks_distributions}a, b, and c, respectively. The distributions of SGB, SGL and $cz$ for the mock galaxies and the CF3 constraints are largely indistinguishable from each other.
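The chain of transformations above (proper distance to cosmological redshift via \cref{eq:intro:dl2zcos}, then luminosity distance, distance modulus and full redshift) can be sketched as follows. This is a minimal illustration, not the actual mock-making code; the cosmological parameters are the MDPL2 values quoted above:

```python
import numpy as np

Om, h, c = 0.307, 0.678, 299_792.458   # MDPL2 cosmology; c in km/s
H0 = 100.0 * h                          # km/s/Mpc

# tabulate the comoving distance - redshift relation by trapezoidal integration
zg = np.linspace(0.0, 0.2, 4001)
Ez = np.sqrt(Om * (1.0 + zg) ** 3 + (1.0 - Om))
dz = np.diff(zg)
dg = np.concatenate(([0.0],
                     np.cumsum(0.5 * (1.0 / Ez[1:] + 1.0 / Ez[:-1]) * dz))) * c / H0

def mock_observables(d, v_r):
    """Turn a halo's proper distance d [Mpc] and radial peculiar velocity
    v_r [km/s] into the two 'observed' quantities of the text: a redshift z
    and a distance modulus mu (error-free version)."""
    zbar = np.interp(d, dg, zg)               # invert the d(z) relation
    d_L = d * (1.0 + zbar)                    # luminosity distance
    mu = 5.0 * np.log10(d_L) + 25.0           # distance modulus
    z = (1.0 + zbar) * (1.0 + v_r / c) - 1.0  # full observed redshift
    return z, mu
```

The Gaussian errors $\varepsilon_{z}$ and $\varepsilon_{\mu}$ described above are then added on top of the returned $z$ and $\mu$.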
\cref{fig:mocks_distributions}c shows that within $\sim20\,h^{-1}\,\Mpc$, the number of CF3 constraints is much greater than in the mock catalogues presented here. This is because there are too many CF3 constraints in this region with respect to the resolution of our simulation. In principle, the original unperturbed $d$, $d_{L}$, $\bar{z}$ and $v_{r}$ of each halo in the mock can be ``forgotten'' and new values can be computed using the values of $z$ and $\mu$ that include the observational errors. By construction, these new values should exhibit similar biases to the observational data. This is seen in \cref{fig:mocks_vr_d}, where the radial velocity is plotted as a function of distance. The diagonal cut in this plot is indicative of the log-normal bias discussed in \cref{sec:BGc}. At a given distance there is an unequal number of galaxies moving towards and away from the observer, making it appear that the universe is contracting in a ``breathing mode''. This log-normal bias and its correction are presented in \cref{sec:BGc}. \subsection{The log-normal bias and the Bias Gaussian correction (BGc)} \label{sec:BGc} One of the main purposes of constructing such a detailed mock catalogue as described above is to ensure that the log-normal bias is reproduced, thereby allowing us to gauge the ability of the two reconstruction methods to handle this bias. Much hand wringing and literature has been devoted to the handling of biases in peculiar velocity surveys, and we refer the reader to \citet{Straus1995} for a comprehensive explanation. Here we briefly explain what the log-normal bias is and how it is handled in the context of the BGc as proposed by \citet{Hoffman2021}. We refer the reader to that work for a comprehensive description of the log-normal bias and its correction by the BGc algorithm. As mentioned, a Gaussian error on the distance modulus transforms into a log-normal error on the luminosity distance (e.g. the inverse of \cref{eq:intro:mu2dl}).
In other words, if the same galaxy is observed many times, the mean of the different distance measures will not coincide with its actual value. This bias changes the spatial distribution of the galaxies as well as their inferred peculiar velocities. The log-normal bias can be seen in \cref{fig:mocks_vr_d}, where the CF3 and mock catalogue peculiar velocity $v_{r}$ is plotted as a function of distance. Beyond $\sim70\,h^{-1}\,\Mpc$, there is no longer symmetry in the distribution of $v_{r}$ about zero: more galaxies have negative $v_{r}$ and the universe naively appears to be collapsing, a so-called ``breathing'' mode. In theory, this can be corrected, since the standard $\Lambda$CDM \ model makes an explicit prediction that the expected scatter of the radial component of the velocity is roughly $\sigma_{v}\sim 275\,{\rm km\,s^{-1}}$. The essence of the BGc scheme is to map the log-normal distribution of the inferred distances around their respective redshift distances into a normal distribution around the median of the log-normal one. The width of that normal distribution is treated as a free parameter, set to $\sim 2 \,h^{-1}\,\Mpc$, in agreement with the $\Lambda$CDM \ prediction that the intrinsic scatter of the radial velocities is $\sigma_{v}\sim 275\,{\rm km\,s^{-1}}$. The same procedure is applied to the observed radial velocities, retaining the median of the distribution of the radial velocities of data points in a given redshift bin. Yet for the velocities, unlike the inferred distances, the variance of the distribution is preserved as well. Namely, the log-normal distribution of the observed distances is mapped to a Gaussian distribution, while preserving the median of the log-normal distribution. It is the invariance of the median under the normal - log-normal transformation which constitutes the backbone of the BGc scheme.
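The backbone of the scheme - a rank-preserving remap of the inferred distances in a given redshift bin onto a Gaussian centred on the (invariant) bin median - can be sketched as below. This is an illustrative reduction of the idea, not the actual BGc implementation; `sigma_d` stands for the free width parameter mentioned above:

```python
import numpy as np
from statistics import NormalDist

def bgc_like_remap(d_obs, sigma_d):
    """Map the (roughly log-normal) scatter of inferred distances in one
    redshift bin onto a normal distribution of width sigma_d, preserving
    each point's rank and hence the bin median - the invariant of the
    normal / log-normal transformation."""
    d_obs = np.asarray(d_obs, dtype=float)
    ranks = d_obs.argsort().argsort()          # rank of each data point
    q = (ranks + 0.5) / d_obs.size             # mid-rank quantiles in (0, 1)
    gauss = NormalDist(mu=float(np.median(d_obs)), sigma=sigma_d)
    return np.array([gauss.inv_cdf(p) for p in q])
```

By construction the remapped distances are symmetric about the bin median, which is what removes the spurious breathing mode.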
After the application of the BGc scheme to the data, the breathing mode disappears and the radial peculiar velocities scatter normally about 0, as can be seen in \cref{fig:mocks_vr_d}, bottom panel. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/mp_vr_vs_d_diff_target.pdf} \caption{Scatter plots of the residual of the BGc/WF (panel a), of the {\scshape Hamlet}\ \ (panel b) and of the Ex/WF (panel c) reconstructed $v_{r}$s evaluated at the data points. The residual of the BGc/WF from the {\scshape Hamlet}\ \ reconstructed $v_{r}$s at the data points is shown as well (panel d). } \label{fig:mockspost_vr_d} \end{figure} \subsection{Wiener Filter and Constrained Realizations (WF/CRs)} \label{sec:WF} The WF/CR method is a tried and tested algorithm for reconstructing the large scale density distribution of the universe based on a limited number of peculiar velocity measurements. We refer the reader to the voluminous literature \citep{Hoffman1992,Zaroubi1995,Zaroubi1999}, reviewing just the essentials here. The WF is a Bayesian estimator of the underlying velocity field (and associated over-density) given a set of observed radial velocities and an assumed prior model of the distribution of the peculiar velocities. In cosmological applications of the WF, the $\Lambda$CDM \ concordance cosmological model is taken to be the prior model. Accordingly: (a) the WF provides the most probable continuous density and 3D velocity fields given a finite number of observed ``noisy'' radial velocities and the assumed $\Lambda$CDM \ model; (b) the CRs sample the constrained residual around the WF field; and (c) the WF and CRs act so as to interpolate between data points and extrapolate beyond them. The WF/CRs methodology recovers the linear density and 3D velocity fields.
In this context, the over-density field and the 3D velocity are linked through the linearized coupled continuity and Poisson equations \begin{equation} \delta =-\frac{\bm{\nabla}\cdot\vv}{H_0f(\Omega_m)}, \label{eq:delta} \end{equation} where $f(\Omega_m)$ is the linear growth rate. At large scales, or at early cosmological times, the linear over-density is a good approximation for the fractional over-density, $\delta=\rho/\bar{\rho}-1$, where $\rho$ is the density and $\bar{\rho}$ is the cosmological mean. Unless stated otherwise, the terms density and velocity refer to the \emph{linear} density and velocity fields. However, linear theory is clearly violated on small scales. Additionally, the density field is more susceptible to non-linear dynamics than the velocity field. The linear WF/CRs constitute a reasonable estimate of the actual velocity field down to a scale of roughly $5-10 \,h^{-1}\,\Mpc$ \citep{Zaroubi1999}. The WF estimator is the outcome of the ``tug-of-war'' between the data and the assumed prior, $\Lambda$CDM \ in the present case. Where the data has small errors, the estimated WF field is ``close'' to the input data points. Otherwise, where the data is weak, the WF solution is dominated by the prior model, namely the solution tends to the mean density and null velocity fields. Consequently, the constrained variance, i.e. the variance spanned by the CRs, is small in the good data regime and converges towards the cosmic variance as the data deteriorate. \subsection{Wiener Filter reconstruction from Exact data (Ex/WF)} As there exists no possibility to homogenize the sampling (namely, the Zone of Avoidance will always inhibit full sky coverage), the only source of uncertainty that could, one day, be mitigated is the observational uncertainty in the distance measurement. In order to test the methods' inherent ability to reconstruct the underlying fields, an additional ``method'' is compared: the exact WF (hereafter labeled Ex/WF).
This is the WF applied to a mock where the error on each data point has been artificially set to zero (and thus no BGc scheme is applied). This serves the purpose of testing the WF in the case where the only source of uncertainty is the sampling. In other words, in \cref{sec:results} the reconstructions based on the BGc/WF, the Ex/WF and the {\scshape Hamlet}\ \ methods are presented. \subsection{{\scshape Hamlet}\ } \label{sec:Hamlet} The second approach examined here utilizes a Hamiltonian Monte Carlo sampling of the posterior PDF via the HAmiltonian Monte carlo reconstruction of the Local EnvironmenT ({\scshape Hamlet}\ \!\!) code, which is described in full in \citet{Valade2022}. Unlike the WF/CRs formalism, the {\scshape Hamlet}\ \ algorithm treats the real distances of the data as unknown dynamical variables that need to be estimated, much in the same way as the density and velocity fields. It is Bayesian in nature because an \emph{ab initio} PDF of the variables that are to be estimated can be specified. A Monte Carlo technique is used to sample the various posterior PDFs that are under consideration: the cosmic density field, the velocity field and the distances of the constraining data. The technical challenge of the Monte Carlo approach is twofold. First, the (assumed) prior and posterior PDFs of the density and velocity fields need to be extended to also include the distribution of distances. Second, an efficient sampling of the posterior PDF needs to be devised. Given the extremely high dimensionality of the problem, the Hamiltonian Monte Carlo algorithm is the tool of choice to perform that sampling, outperforming Metropolis or Gibbs sampling algorithms by many orders of magnitude \citep{Valade2022}. Beyond its inhomogeneous distribution (e.g. the ZOA), the CF3+ catalogue has a fairly sharp cut off at around a redshift distance of $150\,h^{-1}\,\Mpc$ (see \cref{fig:mocks_distributions}c).
In practice this defines and limits the extent within which the {\scshape Hamlet}\ \ reconstruction method is valid. \citet{Hinton2017} investigated the problem that occurs when sampling from a distribution that has an abrupt cut off. They show that the reconstructed fields (and inferred variables) close to the cut off will be biased, effectively limiting the validity of the reconstruction to radii somewhat inside the cut off (say $\sim90\%$ of it). It is therefore expected that any reconstruction based on these data will not be valid beyond around $150\,h^{-1}\,\Mpc$. \section{Results} \label{sec:results} \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/planes_Z_divv_raw-mean-nstd_160_target-bgcwf-exwf-hmc-bgcwf-exwf-hmc.pdf} \caption{A comparison of the BGc/WF (left column), Ex/WF (central column) and {\scshape Hamlet}\ \ (right column) reconstructed over-density fields with the target simulation. For consistency, the over-density plotted for the target field is the linear over-density, \emph{i.e.} the divergence of the velocity field. The middle panels present the reconstructed $\delta$ and the bottom ones show the constrained variance normalized by the cosmic variance, $\Sigma_\delta / \sigma_\delta$. All plots refer to the $SGZ=0$ plane of the target simulation and all fields are Gaussian smoothed with a $5\,h^{-1}\,\Mpc$ kernel.} \label{fig:divv_sgz} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{Figures/smoothedscatter_shells_divv_bgcwf-exwf-hmc_vs_target_51} \\ \caption{Density scatter plots of the reconstructed $\delta$ versus the target $\delta$. Rows from bottom to top: {\scshape Hamlet}\ , Ex/WF, BGc/WF. Columns from left to right: within spheres of $40$, $80$, $120$ and $160 \,h^{-1}\,\Mpc$. The red line represents the best-fit line, $y=a x + b$. The parameters of the line and the Pearson correlation coefficient are given in the legend.
The black line $ y = x $ is shown for reference.} \label{fig:scatters_divv} \end{figure*}

\begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/corr_divv_vs_d_bgcwf-exwf-hmc} \caption{Statistics per shell of distance: the absolute value of the correlation coefficient for $\delta$ between the different reconstructions and the target field. The error envelope represents the $2\sigma$ variation of the ensemble of realizations.} \label{fig:divv_corr} \end{figure}

The results are presented in three sub-sections where we (a) compare how the predicted constraints themselves differ from their real values (\cref{sec:data}); (b) examine the accuracy of the reconstruction of the cosmic fields (\cref{sec:maps}); and (c) compare the reconstructed monopole and dipole (\emph{i.e.} bulk flow) moments with their target counterparts (\cref{sec:bulk}).

\subsection{Reconstructed data}
\label{sec:data}

After applying the BGc/WF and {\scshape Hamlet}\ \ methods to the mock catalogues (as well as the WF to the exact, no-error mocks), the first thing to check is how well the distributions of radial peculiar velocities, distances and redshifts of the data points match their target values. This is shown in \cref{fig:mockspost_dist}a, b and c, respectively, where the target curve represents the true distributions of the mock catalogue; namely, the closer the BGc/WF or the {\scshape Hamlet}\ \ curve is to the target, the more accurate the reconstruction. The values of the reconstructed $v_{r}$'s of the mock data points are obtained by interpolation over the grid points. The distances are obtained differently for the different methods. The Ex/WF's distances are the true distances, thus they are identical to the target. For the BGc/WF method, the distances are the result of the application of the BGc to the data, before the WF is applied. Finally, for {\scshape Hamlet}\ , the distance of each constraint is the mean of all the distances sampled over the Monte Carlo steps.
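As a hedged sketch of that interpolation step (the precise scheme used by the pipelines is not specified here; the grid, box size and data positions below are our own illustrative choices), the reconstructed $v_r$ at a data point can be read off a gridded radial-velocity field with a standard trilinear interpolator:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative 64^3 grid of a toy radial-velocity field in a 320 Mpc/h box
n, cell = 64, 5.0
axis = np.arange(n) * cell                         # cell coordinates [Mpc/h]
rng = np.random.default_rng(0)
vr_grid = rng.normal(0.0, 300.0, size=(n, n, n))   # toy v_r values [km/s]

# Trilinear interpolation of the gridded field at the data positions
interp = RegularGridInterpolator((axis, axis, axis), vr_grid)
positions = np.array([[100.0, 150.0, 200.0],
                      [12.5, 40.0, 310.0]])        # data-point positions [Mpc/h]
vr_at_data = interp(positions)
```

At a grid node the interpolator returns the gridded value itself; between nodes it returns the trilinear blend of the eight surrounding cells.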
We remind the reader that the Exact WF (green dashed) represents the limits of the WF method. \cref{fig:mockspost_dist}a shows that the {\scshape Hamlet}\ \ reconstruction method does an exceptional job at recovering the distribution of radial peculiar velocities. Note that the WF in its purest form (the Ex/WF) also recovers the target distribution. The BGc/WF struggles slightly, narrowing the data's distribution with a slight over-emphasis on smaller values of the peculiar velocity at the expense of the large values. We note, as an aside, that the fact that the target (and hence the reconstructions) is not centered at $v_{r}=0$ is due to the specific nature of the mock observer chosen (\emph{i.e.} cosmic variance). The BGc/WF suppression of the reconstructed radial velocities of the data points relative to the target is inherent in the WF algorithm, where the estimated signal is the weighted ``compromise'' between the data and the prior model. Where the data is not very strong, the WF estimator is biased towards the null field predicted by the prior.

In \cref{fig:mockspost_dist}b and c the distance and redshift distributions are examined. For both of these quantities the two reconstructions do a remarkably good job at matching the target, rendering their curves practically indistinguishable from it. Note, however, that the BGc/WF method tends to ``exaggerate'' some of the peaks and valleys in the distance distributions (\cref{fig:mockspost_dist}b). All models reliably follow the input's form. In the absence of errors, \emph{i.e.} the Ex/WF case, the reconstructed $v_{r}$'s of the data points should be equal to the input constraints taken from the target simulation \citep{Hoffman1992}. The slight mismatch between the $v_{r}$ histograms of the Ex/WF and the target seen in \cref{fig:mockspost_dist} occurs because the Ex/WF histogram is an interpolation over the coarse grid of the WF.
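This shrinkage toward the null field can be made explicit with a one-dimensional toy sketch (our own construction, not the estimator used in the pipeline): for a single datum $d = s + n$ with prior signal variance $S$ and noise variance $N$, the WF estimate is $\hat{s} = S/(S+N)\,d$, which approaches the datum when the data is strong and the null field when the noise dominates.

```python
import numpy as np

def wiener_point_estimate(d, S, N):
    """WF estimate of a signal s from a single noisy datum d = s + n.

    S is the prior (signal) variance and N the noise variance; the
    estimator shrinks the datum toward the null field as N/S grows.
    """
    return S / (S + N) * d

d = 300.0  # an illustrative measured radial velocity [km/s]
strong = wiener_point_estimate(d, S=1.0, N=0.01)  # data-dominated regime
weak = wiener_point_estimate(d, S=1.0, N=9.0)     # noise-dominated regime
# strong stays close to d; weak is suppressed toward zero
```

The same compromise acts, cell by cell, in the full-field WF used here, which is why weakly constrained data points are pulled toward zero velocity.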
It is important to understand by how much each constraint shifts during the BGc and reconstruction procedures. In \cref{fig:mockspost_vr_d} the difference between the reconstructed $v_r$ and the input $v_r$ is compared on a constraint-by-constraint basis and as a function of distance. From top to bottom this difference is shown for the BGc/WF, {\scshape Hamlet}\ and the Ex/WF (respectively \cref{fig:mockspost_vr_d}a, b and c). The difference between the two main reconstruction methods (BGc/WF and {\scshape Hamlet}\ ) is shown in the final panel, \cref{fig:mockspost_vr_d}d. In these plots each constraint is a dot, the mean value of the difference is shown as a black line and the standard deviation of the distribution is designated with error bars.

An examination of \cref{fig:mockspost_vr_d} reveals that the methods based on the WF tend to underestimate $v_r$ in the innermost distance shells ($\leq 60\,h^{-1}\,\Mpc$) while overestimating it in the outer shells. This is true even for the ideal case of the Ex/WF. The mean of the {\scshape Hamlet}\ \ method, however (\cref{fig:mockspost_vr_d}b), indicates that the constraints are not systematically shifted in the region $\sim 40 - 110 \,h^{-1}\,\Mpc$, but underestimate $v_r$ outside this range.

\subsection{Reconstructed cosmic fields}
\label{sec:maps}

In this section the reconstructed density and velocity fields are examined and compared with the target.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/planes_Z_vr_raw-mean-nstd_160_target-bgcwf-exwf-hmc-bgcwf-exwf-hmc.pdf} \caption{Same as \cref{fig:divv_sgz} for the radial component of the velocity field.} \label{fig:vr_sgz} \end{figure*}

\begin{figure*} \centering \includegraphics[width=1\textwidth]{Figures/smoothedscatter_shells_vr_bgcwf-exwf-hmc_vs_target_51} \\ \caption{Same as \cref{fig:scatters_divv} for the radial component of the velocity field.} \label{fig:scatters_vr} \end{figure*}

\begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/corr_vr_vs_d_bgcwf-exwf-hmc} \caption{Same as \cref{fig:divv_corr} for the radial component of the velocity field.} \label{fig:vr_corr} \end{figure}

\begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/mp_monopole_dipole.pdf} \caption{The monopole moment (upper panel), the amplitude of the dipole moment (\emph{i.e.} the bulk velocity; middle panel), and the cosine of the angle of alignment between the reconstructed and target bulk velocities (lower panel). The profiles present the mean and ``$2 \sigma$'' scatter of the mean profile in spheres of radius $d$. The reconstructions correspond here to the Ex/WF (green dashed line), the BGc/WF (red dotted line) and the {\scshape Hamlet}\ \ (blue dot-dashed line) case. The scatter is the constrained variance of the different reconstructions, and the target simulation is represented by the black solid line (middle and upper panels).} \label{fig:moments} \end{figure}

\subsubsection{Reconstructed density maps}

The non-linear density field of the target simulation cannot be directly compared with the reconstructed linear density field. To enable a meaningful comparison we compare the divergence of the velocity field of the two (\emph{i.e.} \cref{eq:delta}), terming both of these $\delta$ for convenience.
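In a minimal sketch of this step (assuming the linear-theory relation $\delta = -\nabla\!\cdot\!\mathbf{v}/(H f)$, consistent in form with \cref{eq:delta}; the grid, growth rate and units below are illustrative assumptions), the linear $\delta$ can be obtained from a gridded velocity field by finite differences and smoothed with the same Gaussian kernel used throughout:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_delta_from_velocity(vx, vy, vz, cell, H=100.0, f=0.5):
    """Linear over-density delta = -div(v)/(H f) on a cubic grid.

    cell is the grid spacing [Mpc/h]; H is in km/s per Mpc/h and f is
    an (assumed) growth rate.
    """
    div = (np.gradient(vx, cell, axis=0)
           + np.gradient(vy, cell, axis=1)
           + np.gradient(vz, cell, axis=2))
    return -div / (H * f)

def smooth(field, cell, kernel=5.0):
    """Gaussian smoothing with a kernel width given in Mpc/h."""
    return gaussian_filter(field, sigma=kernel / cell)

# Consistency check: v_x = 2x has constant divergence, hence a uniform delta
n, cell = 32, 5.0
X = (np.arange(n) * cell)[:, None, None] * np.ones((n, n, n))
zeros = np.zeros((n, n, n))
delta = linear_delta_from_velocity(2.0 * X, zeros, zeros, cell)
delta_smooth = smooth(delta, cell)
```

Applying the same operator to both the target (simulated) and the reconstructed velocity fields puts the two $\delta$ fields on the same footing.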
\label{sec:recon_dens}

In order to visually inspect the reconstructed density distribution, a $3.9\,h^{-1}\,\Mpc$ thick slab at the supergalactic plane ($SGZ=0$) is chosen. This is not an arbitrary choice: given that the largest number of constraints is expected to lie in or close to $SGZ=0$, we expect this slab to be the most accurate. The fields are smoothed with a Gaussian kernel of $5\,h^{-1}\,\Mpc$.

\cref{fig:divv_sgz} examines the density distribution in this slab. \cref{fig:divv_sgz}a is the target density distribution. The middle column (\cref{fig:divv_sgz}c, f) shows the Ex/WF results, while the left column (\cref{fig:divv_sgz}b, e) shows the BGc/WF results and the right column (\cref{fig:divv_sgz}d, g) shows the {\scshape Hamlet}\ \ results. The middle row (panels b, c, d) shows the reconstructed density distribution.

Some conclusions may be drawn from a visual examination of \cref{fig:divv_sgz}b, c, d. The Ex/WF generally recovers the features of the local cosmography at all distances. The reconstruction is not exact; given that there are no ``observational'' errors here, this implies that the mismatch between the Ex/WF and the target (\emph{i.e.} between \cref{fig:divv_sgz}c and \cref{fig:divv_sgz}a) is entirely due to the finite, inhomogeneous and anisotropic sampling. Comparing the BGc/WF (\cref{fig:divv_sgz}b) with the target indicates a decline of the power of the reconstructed density field with distance from the observer, yet the general structure of the cosmic web of over- and under-dense regions is recovered. The {\scshape Hamlet}\ reconstructed $\delta$ field does not exhibit the same loss of power as the BGc/WF case, but it suffers from a loss of spatial resolution with distance (\cref{fig:divv_sgz}d): the more distant structures become increasingly fuzzy and diffuse.

The bottom panels of \cref{fig:divv_sgz} present the constrained variance $\Sigma^2_{\delta}$ of the three reconstructed $\delta$ fields.
It is defined as the local, cell-by-cell variance calculated over an ensemble of CRs in the Ex/WF and BGc/WF cases, and over a set of independent states of the Monte Carlo chain in the {\scshape Hamlet}\ case. The panels show the square root of the constrained variance normalized by the cosmic variance, $\Sigma_{\delta}/\sigma_{\delta}$. The cosmic variance is obtained by computing the variance over all CIC cells in each reconstructed $\delta$ field. The value of $\Sigma_{\delta}/\sigma_{\delta}$ gauges the constraining power of the constraints and the assumed prior model. When it equals 0 the region is highly constrained, and when it equals unity the reconstructions are as random as cosmic variance. Thus one expects it to be small close to the observer and to approach unity asymptotically with distance.

\cref{fig:divv_sgz}e, f, g quantify what is visually apparent from \cref{fig:divv_sgz}b, c, d, namely that the inner regions are well constrained but that this fades with increasing distance. The reconstruction methods that include errors (\emph{i.e.} \cref{fig:divv_sgz}e, g) are never ``perfect'', while the Ex/WF method (\cref{fig:divv_sgz}f) does obtain values of $\Sigma_{\delta}/\sigma_{\delta}$ close to 0. Interestingly, the impact of the ZOA on the reconstruction method is apparent in \cref{fig:divv_sgz}f. Here it causes a very clear limitation of the expected ability to reconstruct the density field.

The accuracy of the density field reconstructions - specifically their accuracy \emph{as a function of distance} - is shown in \cref{fig:scatters_divv}. These are scatter plots which compare, on a cell-by-cell basis, the density of the target with the BGc/WF (top row), Ex/WF (middle row) and {\scshape Hamlet}\ (bottom row).
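This per-shell comparison can be sketched with a plain least-squares fit (the grid, random seed and the suppressed-power toy "reconstruction" below are our own illustrative assumptions):

```python
import numpy as np
from scipy import stats

def shell_fit(delta_target, delta_recon, radii, d_min, d_max):
    """Best-fit line and Pearson r between the reconstructed and target
    fields, over grid cells in the radial shell d_min <= d < d_max."""
    sel = (radii >= d_min) & (radii < d_max)
    res = stats.linregress(delta_target[sel], delta_recon[sel])
    return res.slope, res.intercept, res.rvalue

# Toy fields on a 32^3 grid of 10 Mpc/h cells centred on the observer
n, cell = 32, 10.0
ax = (np.arange(n) - n / 2) * cell
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
radii = np.sqrt(X**2 + Y**2 + Z**2)
rng = np.random.default_rng(2)
target = rng.standard_normal((n, n, n))
recon = 0.8 * target + 0.05 * rng.standard_normal((n, n, n))  # suppressed power
a, b, r = shell_fit(target, recon, radii, 0.0, 40.0)
# a recovers the input suppression factor of 0.8, with b close to zero
```

The slope, intercept and Pearson coefficient returned by this fit are exactly the quantities quoted per shell in the scatter-plot figures.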
The line, $y=a x + b$ (or $\delta_{\rm method} = a \delta_{\rm target} +b$, to be more precise), which best describes the scatter is shown in red; its slope, $y$-intercept and the Pearson correlation coefficient are given in each sub-panel. In the ideal case where a reconstruction method perfectly matches the target, this would simply be a line with slope $a=1$ and offset (or bias) $b=0$ and zero scatter (shown in black), with a Pearson correlation coefficient of unity. The columns in this figure denote the different $40\,h^{-1}\,\Mpc$ thick radial shells under consideration. Note that a slope less than unity indicates that the reconstruction under-estimates the over-dense regions and over-estimates the under-dense regions. A slope greater than unity represents the opposite (exaggerated over- and under-dense regions). An offset of $b\neq0$ means a biased reconstruction.

There are a number of important features in \cref{fig:scatters_divv}. First, considering the innermost bin (leftmost column), the Ex/WF reconstruction recovers the density of the target very well. A slope of unity, a practically null offset of $b=0.01$ and a correlation coefficient of 0.92 indicate that in general the density in this region is very well recovered. This implies that the nearby sampling of the CF3 catalog is almost optimal. Obviously, the {\scshape Hamlet}\ and the BGc/WF methods do worse in recovering the density field. Moving to the outer shells, all three density reconstructions systematically degrade, with slopes and correlation coefficients decreasing. The slope in all cases is less than unity, indicating that the reconstructions suppress the power of the recovered density field. This diminishing of the power increases with the distance from the observer. The BGc/WF suffers more from the loss of correlation with distance than the {\scshape Hamlet}\ .
Yet, the latter reconstruction is biased at large distances, with $b=0.12$ for the distance range of $120 \leq d \leq 160\,h^{-1}\,\Mpc$. The BGc/WF behaves, on the other hand, by the ``Bayesian book'' - where the sampling is very sparse and the errors are much larger than the signal, the unbiased $\Lambda$CDM prior is recovered.

The correlation between the reconstructed mean field and the target is shown as a function of distance in \cref{fig:divv_corr}. These are the correlation coefficients from the scatter plots (\cref{fig:scatters_divv}) plotted as a function of distance in order to gauge the degradation of the reconstruction methods as the data becomes sparser and the volumes become larger. Note that the binning is different, hence the non-identical values of the correlation coefficients between the two plots. The solid lines in \cref{fig:divv_corr} represent the mean correlation coefficient between the reconstruction and the target; the error corridor represents the $2\sigma$ variance about this mean. As expected, the Ex/WF is always superior to the BGc/WF and the {\scshape Hamlet}\ method. With the exception of the innermost bin, the {\scshape Hamlet}\ \ method achieves higher correlation coefficients than the BGc/WF method. At the edge of the data, no method achieves a correlation coefficient greater than 0.5.

\subsection{Reconstructed radial velocity maps}
\label{sec:bulk}

The examination of the radial component of the velocity field follows here that of the density field (\S \ref{sec:recon_dens}). The same $SGZ=0$, $4\,h^{-1}\,\Mpc$ thick slab is shown in \cref{fig:vr_sgz}. Again, the top panel is the target radial peculiar velocity field, while the left column shows the BGc/WF reconstruction, the middle column the Ex/WF reconstruction and the rightmost column the {\scshape Hamlet}\ \ reconstruction. The fields are smoothed with a Gaussian kernel of $5\,h^{-1}\,\Mpc$.
The radial velocity field (\cref{fig:vr_sgz}b, c, d) appears much more accurately reconstructed than the density field. The same outflows and inflows are generally visible and the cosmographic landscape is recognisable in all three cases, although the reader will note that the accuracy of the velocity reconstruction, like that of the density field, deteriorates at larger distances. Features are recognisable but distorted and smoothed out.

The constrained and cosmic variances of the radial velocity, $\Sigma_{v_{r}}$ and $\sigma_{v_{r}}$, are calculated much in the same way as for the density field (\S \ref{sec:recon_dens}). The imprint of the ZOA is clearly seen in the $\Sigma_{v_{r}}/\sigma_{v_{r}}$ map of the Ex/WF. Yet, in all cases considered here the constrained variance, normalized by the cosmic variance, is much smaller than in the density case. Namely, the velocity field is much more strongly constrained by the CF3 data than the density field. In general the reconstructed {\scshape Hamlet}\ \ velocity field bears a closer resemblance to the target than the BGc/WF reconstruction. A close inspection of \cref{fig:vr_sgz}d uncovers one troubling feature. At the edge of the reconstructed volume, at distances close to $150 \,h^{-1}\,\Mpc$, the reconstructed field is ``bluer'' than the corresponding target and Ex/WF maps. Namely, the {\scshape Hamlet}\ \ reconstructed velocity field has a spurious negative infall. This is a manifestation of the limitation of the method as described by \citet{Hinton2017} in \cref{sec:Hamlet}.

Again, we turn to a cell-by-cell scatter plot to quantify the quality of the reconstruction as a function of distance in \cref{fig:scatters_vr}, which is structured identically to \cref{fig:scatters_divv} - namely radial extent increasing column-wise from left to right, with the rows from top to bottom being BGc/WF, Ex/WF and {\scshape Hamlet}\ .
This figure is qualitatively identical to its density field counterpart (\cref{fig:scatters_divv}) in that the same behavioural trends between the different reconstruction methods, and as a function of distance, exist. The correlation analysis of the Ex/WF and BGc/WF cases behaves much in the same way as for the density field - a degradation of the correlation with distance, a slope ($a$) that is close to unity nearby and diminishes with distance, and an essentially zero offset ($b\sim 0$). Yet, the quality of the reconstruction of the radial velocity is much better than that of the density. The {\scshape Hamlet}\ \ reconstruction shows a somewhat unexpected behaviour. The slope of the best-fit line for the distance range of $40 \le d \le 80 \,h^{-1}\,\Mpc$ exceeds unity, $a=1.22$, \emph{i.e.} there is an excess of power compared with the target and the Ex/WF cases. This is unexpected for a Bayesian algorithm. The best linear fit for the range of $120 \le d \le 160 \,h^{-1}\,\Mpc$ yields a significant negative offset of $b=-151\,{\rm km\,s^{-1}}$, in agreement with the visual inspection of \cref{fig:vr_sgz}g.

The correlation of the radial component of the velocity field between the reconstructions and the target is shown as a function of distance in \cref{fig:vr_corr}. Similar to \cref{fig:divv_corr}, these are the correlation coefficients computed from the scatter plots (\cref{fig:scatters_vr}) plotted as a function of distance in order to gauge the degradation of the reconstruction methods as the data becomes sparser and the volumes become larger. The solid lines in \cref{fig:vr_corr} represent the mean correlation coefficient between the reconstruction and the target; the error corridor represents the $2\sigma$ variance about this mean. As expected, the Ex/WF is always superior to both the BGc/WF and the {\scshape Hamlet}\ method.
The Ex/WF reconstruction is very well correlated with the target out to $\sim 80\,h^{-1}\,\Mpc$, beyond which the correlation begins to drop, although it is worth noting that it remains correlated over the full sample. This drop is a manifestation of the sampling and the decreasing number of data points (per volume) at these distances. The {\scshape Hamlet}\ \ and BGc/WF methods are roughly equal in the inner regions out to $\sim 70\,h^{-1}\,\Mpc$; beyond it the {\scshape Hamlet}\ \ method provides a better correlation. At the edge of the data, no method achieves a correlation coefficient greater than 0.5.

\subsection{Multipole moments of the reconstructed velocity field}

The first two moments of the velocity field, the monopole and dipole, are examined here. The effect of errors and sampling on the fidelity of these two physical quantities is of particular interest, since the monopole and dipole are often used as probes of the scale of homogeneity and can affect probes of the cosmological model in particular.

\cref{fig:moments}a shows the target and reconstructed velocity monopole as a function of distance. The same colouring and line-style convention used in \cref{fig:divv_corr} and \cref{fig:vr_corr} is adopted here too, with the moments of the target simulation plotted in black. Note that the monopole - the mean infall or outflow of matter - is the zeroth-order moment of the velocity field. It is the mean of the divergence of the velocity field in spheres of radius $d$ and as such is called the ``breathing mode'' of the velocity field. In the linear theory of the cosmological gravitational instability the density and velocity fields are related by \cref{eq:delta}, hence we opted here to present the monopole term by means of this equation. Thereby, \cref{fig:moments}a effectively presents the mean linear density within spheres of radius $d$.
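That profile can be sketched as the mean of the (linear) over-density within growing spheres (the grid and the toy over-density below are illustrative assumptions):

```python
import numpy as np

def monopole_profile(delta, radii, d_values):
    """Mean over-density within spheres of radius d: the 'breathing mode'."""
    return np.array([delta[radii <= d].mean() for d in d_values])

# Toy field: an over-dense region (delta = 0.5) within 30 Mpc/h of the
# observer, embedded in an otherwise unperturbed background
n, cell = 32, 10.0
ax = (np.arange(n) - n / 2) * cell
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
radii = np.sqrt(X**2 + Y**2 + Z**2)
delta = np.where(radii <= 30.0, 0.5, 0.0)
profile = monopole_profile(delta, radii, d_values=[30.0, 60.0, 150.0])
# The monopole decays toward zero as the spheres grow past the over-density
```

Applied to the reconstructed and target $\delta$ fields, the same cumulative mean yields the monopole curves compared in the figure.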
The Ex/WF is nearly indistinguishable from the target here: the error corridor (which corresponds to the variance across all the constrained realisations) is tiny, and the black and green dashed lines are practically on top of each other. The BGc/WF curve overestimates the monopole in the inner parts (within $\sim50 \,h^{-1}\,\Mpc$) while underestimating it outside that range. This increased monopole implies an overestimation of the density in the inner parts of the mock universe, which is confirmed by examining the equation of the best-fit line in the scatter plot of \cref{fig:scatters_divv} (upper row, left column, $d < 40 \,h^{-1}\,\Mpc$). The best-fit line has an offset of $b=0.11$, meaning that there is a systematic increase in the estimated densities, consistent with the higher monopole. Both the reconstructions and the target tend to zero infall at these large scales. The {\scshape Hamlet}\ \ method, on the other hand, behaves inversely to the BGc/WF method, underestimating the target monopole at small scales and overestimating it at large scales. The {\scshape Hamlet}\ 's monopole term at the edge of the data reveals an excess of density at $d \sim (120 - 150) \,h^{-1}\,\Mpc$, in agreement with \cref{fig:scatters_divv} (lower/right panel). Otherwise, the {\scshape Hamlet}\ \ method succeeds in tracking the target monopole over a large range, from $\sim 20$ to $\sim 100 \,h^{-1}\,\Mpc$.

\cref{fig:moments}b and c show the first moment of the velocity field, namely the dipole or bulk flow. \cref{fig:moments}b refers to the magnitude of the bulk flow, while \cref{fig:moments}c refers to its direction. All methods do a fine job of recovering the magnitude of the bulk flow beyond around $\sim30\,h^{-1}\,\Mpc$. The Ex/WF has, predictably, a smaller error corridor than the other two methods, which are roughly similar in size.
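The dipole comparison amounts to the following sketch (volume-weighted means with no selection effects; the grid and the injected $50\,{\rm km\,s^{-1}}$ transverse error are our own illustrative assumptions): the bulk flow is the mean velocity vector within a sphere, and the alignment is the cosine of the angle between the reconstructed and target bulk vectors.

```python
import numpy as np

def bulk_flow(vel, radii, d):
    """Bulk velocity: mean 3-vector of the velocity field within radius d.

    vel has shape (nx, ny, nz, 3); radii holds each cell's distance."""
    return vel[radii <= d].mean(axis=0)

def alignment(v1, v2):
    """Cosine of the angle between two bulk-flow vectors."""
    return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

# Toy example: a target bulk flow of (300, 0, 0) km/s and a reconstruction
# carrying a 50 km/s transverse error
n, cell = 16, 10.0
ax = (np.arange(n) - n / 2) * cell
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
radii = np.sqrt(X**2 + Y**2 + Z**2)
v_target = np.zeros((n, n, n, 3)); v_target[..., 0] = 300.0
v_recon = v_target.copy(); v_recon[..., 1] = 50.0
cos_mu = alignment(bulk_flow(v_recon, radii, 60.0),
                   bulk_flow(v_target, radii, 60.0))
# cos_mu is slightly below unity: a misalignment of roughly 9.5 degrees
```

The magnitude panel corresponds to the norm of `bulk_flow` per radius, and the direction panel to `cos_mu` as a function of radius.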
With respect to the direction, \cref{fig:moments}c shows the dot product between the target bulk flow direction and the reconstructed one (hence in this plot there is no black target line). The bulk flow directions of the {\scshape Hamlet}\ \ and BGc/WF methods are aligned to within $\sim 15$ deg of the target out to a distance of $\sim 50\,h^{-1}\,\Mpc$, while the Ex/WF is well aligned out to greater distances. Note, however, that even the Ex/WF curve begins to deviate significantly at the edge of the reconstructed volume. This indicates that even in the best-case scenario of zero errors, sampling at these great distances is a limiting factor in terms of recovering the direction of the cosmic dipole. Note that the accuracy recovered here is also restricted in its ability to recover the underlying dipole direction by the limited depth of the survey \citep{Nusser2014}. The problem is exacerbated when examining the BGc/WF and {\scshape Hamlet}\ \ curves at large distances. \cref{fig:moments} indicates that although the monopole and dipole are well recovered across a large range, the direction of the reconstructed dipole begins to deteriorate when the sampling drops.

\section{Summary}
\label{sec:conclusion}

The reconstruction of the large-scale density and velocity fields from Cosmicflows-like databases of galaxy distances, and hence peculiar radial velocities, is challenging. The data is sparse, extremely noisy (with a noise-to-signal ratio larger than a few for the majority of the data points), and non-uniformly and anisotropically distributed. Furthermore, the data suffers from the log-normal bias, which leads to a non-linear bias in the estimated distances and velocities. A number of independent methods have been developed to reconstruct the local LSS and to produce constrained initial conditions for cosmological simulations designed to reproduce our local patch of the Universe \citep[e.g.][]{Sorce2015}.
What is generally missing from the literature in this field is an understanding of the accuracy of these methods. Often the reconstructions are applied directly to observational data, and only very limited conclusions can be drawn on the viability of a given cosmography. The present paper compares the BGc/WF \citep{Hoffman2021} and the {\scshape Hamlet}\ \ algorithms \citep{Valade2022} by testing them against a carefully crafted mock of an observational catalogue (an improved CF3-like survey) drawn from one of the MultiDark cosmological simulations.

The quality of the reconstruction is gauged by studying the residual between the reconstructed and target density and velocity fields. The residual is mostly analyzed by quadratic measures and as such it is characterized by the mean and variance of its distribution. An optimal reconstruction should make the mean of the residual as close as possible to the null field and aim at minimizing its variance. A related measure is the linear correlation analysis, which yields the best ``line'', $y = a x + b$, that fits the linear dependence of the reconstructed field on the target one, together with the Pearson correlation coefficient. The values of the offset, $b$, for the case of the linear over-density and for the radial velocity are consistent with zero for the BGc/WF, in agreement with the theoretical expectations. The distant data points are extremely noisy and very sparsely distributed, hence the WF reconstruction is dominated by the $\Lambda$CDM \ prior model. The {\scshape Hamlet}\ 's significant offset is, however, inconsistent with the prior model.

We define here three different regions: the nearby ($d \lesssim 40 \,h^{-1}\,\Mpc$), the intermediate ($ 40 \lesssim d \lesssim 120 \,h^{-1}\,\Mpc$) and the distant one ($d \gtrsim 120 \,h^{-1}\,\Mpc$). Based on the above criteria we conclude that nearby, the BGc/WF and the {\scshape Hamlet}\ \ methods do roughly equally well.
The methods diverge at large distances - with the {\scshape Hamlet}\ \ outperforming the BGc/WF with a tighter correlation and a smaller variance, but underperforming in terms of the bias. This is most noticeable in the distant region (the right columns of \cref{fig:scatters_divv} and \cref{fig:divv_corr}).

The three panels of \cref{fig:moments} deserve special attention here. The upper panel shows the radial profile of the monopole moment. The four profiles shown there - target, Ex/WF, BGc/WF and {\scshape Hamlet}\ \ - are all constructed under the assumption of the $\Lambda$CDM \ value of $H_0 = 67.7\,{\rm km/s/Mpc}$. Yet, the negative offset of the monopole moment at the edge of the data implies that the local value of $H_0$ is somewhat smaller than its global value, a phenomenon expected for any finite-volume realization in the $\Lambda$CDM \ cosmology (see \citet{Hoffman2021} for a quantitative assessment). A proper adjustment of the local value of $H_0$ would bring the target and Ex/WF profiles to converge to zero at the edge of the data, together with the BGc/WF asymptotic value. This would leave the {\scshape Hamlet}\ \ positive offset standing out with a systematic bias. The amplitude of the dipole moment, namely the bulk velocity, is recovered equally well by the three reconstructions and is in very good agreement with the target. The bottom panel shows the cosine of the angle between the reconstructed and the target bulk velocities. The BGc/WF behaves as expected - the mean misalignment is consistent with full alignment to within one $\sigma$ of the constrained variance. This is not the case with the {\scshape Hamlet}\ \ reconstruction, where the misalignment is more than $2 \sigma$ away from the expected alignment.
Our overall assessment of the {\scshape Hamlet}\ \ and the BGc/WF reconstructions is that the former outperforms the latter in terms of a reduced scatter and a tighter correlation between the reconstructed and the target density and velocity fields. Yet, the {\scshape Hamlet}\ \ suffers from biases in the reconstructed LSS in the distant regime - ones that do not appear in the BGc/WF reconstruction. It follows that the {\scshape Hamlet}\ \ should be the method of choice for the reconstruction of the LSS and the study of the cosmography of our local patch of the Universe. The BGc/WF reconstruction is the preferred tool for performing quantitative analysis and parameter estimation, and possibly also for setting initial conditions for constrained cosmological simulations.

One last comment is due here. The WF/CRs is a very well tested approach that is based on solid theoretical foundations \citep{Hoffman1992, Zaroubi1995, Zaroubi1999}. As such it provides an attractive framework for performing Bayesian reconstruction of the nearby LSS. Yet, any bias in the observational data, and in particular the log-normal one, needs to be addressed and corrected outside that framework in some ad-hoc and approximate way. The HMC methodology, and in particular its {\scshape Hamlet}\ \ implementation, still suffers from some teething problems that need to be overcome. The ability of the MCMC methodology in general, and the HMC in particular, to address the reconstruction of the LSS, the handling of observational biases and the estimation of cosmological parameters within one self-consistent computational framework makes {\scshape Hamlet}\ \ a very attractive tool in the CLUES toolbox. The considerable improvement in the computational efficiency of {\scshape Hamlet}\ \ compared with previous implementations of MCMC algorithms makes it even more promising for future implementations within the CLUES project.
\section*{Acknowledgements}

Useful discussions with Tamara Davis concerning the \cite{Hinton2017} paper are acknowledged. This work has been done within the framework of the Constrained Local UniversE Simulations (CLUES) project. AV and NIL acknowledge financial support from the Project IDEXLYON at the University of Lyon under the Investments for the Future Program (ANR-16-IDEX-0005). YH has been partially supported by the Israel Science Foundation grant ISF 1358/18.

\section*{Data availability}

The data underlying this article will be shared on reasonable request to the corresponding author.

\bibliographystyle{mnras}
\section{Introduction}

\iffalse Persistent memory (PM) technologies offer high performance and data persistence. This relatively new memory technology has the potential to unify main memory and storage devices. PM devices can be connected to a system either through the memory bus as DIMMs or as PCIe-based devices. Unlike block storage devices (e.g., HDDs and SSDs), PM devices are byte-addressable, enabling access to the device at cache-line granularity. In modern systems, only memory-channel-based PM devices are capable of exploiting this byte-addressability advantage, because devices mounted over PCIe are accessed as block devices. However, PM DIMMs mounted on the memory bus are limited in capacity by the available number of memory slots. With PCIe 5.0 and new protocols such as Compute Express Link (CXL) and Gen-Z, byte addressability and caching are now supported for PM devices connected over PCIe. With this, there is a new trend towards memory disaggregation over the PCIe channel.

One primary use of PM is to store data persistently. Utilizing this capability, a new programming paradigm has been developed in which programs can recover after a system failure. These programs are known as crash-consistent programs. There has been a myriad of solutions to provide crash consistency. For example, the undo-logging approach makes a backup of the existing data to PM prior to updates \cite{chakrabarti14_oopsla, chatzistergiou15_pvldb, coburn13_sosp, coburn11_asplos,dulloor14_eurosys,pmdk, kolli16_asplos, pmem-memcached, gogte18_pldi}; the checkpointing method periodically makes a snapshot of PM data structures to keep a consistent, recoverable copy \cite{giles17_ismm, kannan13_ipdps, bailey2013exploring, ongaro11_sosp, volos11_asplos, ren15_micro,fernando2016_hipc}; the shadow-paging mechanism redirects writes to a shadow memory and swaps them back at commit \cite{hsu17_eurosys, wu2020_pldi,ni18_hotstorage,ni2019_micro}.
These methods have been adopted by a variety of applications \cite{pmem_mongodb,pmem_redis,pmem_rocksdb,pmem-memcached,pmemkv,xu16_fast} to protect their persistent data from unexpected failures. All these mechanisms introduce additional computations, data movements, and ordering into the program execution. There has been work that propose data movement in the memory hierarchy \cite{nguyen18_micro}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/MICRO adr vs PCIe.pdf} \caption{Memory channel based vs PCIe based PM.} \label{fig:adr} \vspace{-0.15in} \end{figure} The most widely used mechanism of guaranteeing this requirement in hardware is by ensuring the write pending queue (WPQ) of the processor is written to the persistent memory at a system failure (figure \ref{fig:adr} a) using Asynchronous DRAM Refresh (ADR). Therefore, in memory channel based PM systems, the WPQ and the PM DIMMs are in the persistent domain. When it comes to PM connected over PCIe, there is no guarantee in the PCIe controller to persist writes. In fact, because protocols like CXL can extend over to multiple devices in separate nodes, techniques similar to ADR would have a hard time guaranteeing persistent writes from a buffer in the PCIe controller. Another technique used in modern systems is Enhanced - Asynchronous DRAM Refresh (eADR), where the components in the persistent domain is supplied with external power until it guarantee persistence of data within the domain. This makes eADR capable of extending the persistent domain from memory controller to processor caches. Even though this is a proven technique on PM DIMMs, when it comes to PCIe based PMs this method becomes harder to realize. For example battery backing a large CXL network and each component to be correctly functioning at a time of system failure is unrealistic. 
With this observations we see the requirement for having a novel partitioned mechanism, NearPM\xspace{}, for supporting persistent programs for PM devices connected over the PCIe bus. A trend in computer architecture design, near-data processing (NDP), can be a good remedy for this problems. NDP is a class of architectural designs that move the processing closer to the data to mitigate the memory movement between the memory and the host processor~\cite{fernandez2020_ICCD,gao2015_PACT,gao2016_HPCA,hsieh2016_sigarch,kim2016_sigarch,kim2018_BMC,lockerman20_livia}. Leveraging this advantage, applying NDP to PCIe based PM systems would help in handling the distributed persistence problem. In order to ensure such a distributed NDP-based approach is applicable to real PCIe PM-based programs, there are two practical challenges to solve. The first challenge arises from the additional ordering constraints necessary to ensure the recoverability of persistent data. In partitioned persistence this can be divided into two: 1. Ordering between CPU and a single PM device executing. 2. Ordering in-between different PM devices executing persistent memory operations. Because each device has its own execution of the crash consistency operations, there must be a mechanism to guarantee that memory operations in multiple devices do not violate crash consistency guarantees. Secondly, given that multiple programs with different crash consistency techniques can co-execute on a single PM-equipped server, how can NearPM\xspace{} hardware support different crash consistency techniques? Next, we describe our key insights that overcome these challenges. \paragraph*{\textbf{Ensuring correct ordering in each PM module}} Rather than managing ordering in processor, NearPM\xspace{} manages ordering near memory. For a distributed PM system this is best suited because synchronising memory writes at processor level will make write persisting time significantly high. 
Furthermore, taking the ordering management near memory enables the processor to focus on executing processor centric operations. This in-fact, introduces a relaxed persistence ordering where the processor can keep on executing without waiting for write persistence completion. For memory accesses from the host CPU and NearPM\xspace{} operations, NearPM\xspace{} enforces the original program order if they share any common addresses. These guarantees are enforced by monitoring in-flight NearPM\xspace{} operations and host processor's memory accesses in NearPM\xspace{}. For example, in undo-log, the in-place update on the host processor can be parallelized with the log creation operation in NearPM\xspace, NearPM\xspace{} guarantees that the log creation completes in memory before the in-place update from the CPU is written to persistent memory. With this key idea, the processing of crash consistency operations can overlap with the program execution on the host processor while ordering is guaranteed. \paragraph*{\textbf{Handling distributed persistence among multiple devices}} Ensuring persistence guarantees in different devices can be tricky because of how persistent objects are allocated amongst them. As long as an object is allocated in a single device previously mentioned ordering guarantees can ensure correctness. However, when an object is allocated in multiple devices which independently handle persistent memory operations, correctness can get affected. For example if two PM devices share an object and an undolog is created, a consistent state of each partition in the two devices are guaranteed to exist. However, in addition to this before committing the undolog there should be an mechanism to ensure that all partitioned undologs are complete. Therefore a need for a mechanism to ensure correct completion of distributed persistent operations is required. 
NearPM\xspace{} overcomes this challenge by delaying the log deletion until all devices that share the object complete log commit stage. \paragraph*{\textbf{Supporting multiple crash consistency mechanisms}} Different crash consistency techniques appear to perform completely different operations for their own crash consistency guarantees. However, we observe that if we break down these seemingly unrelated crash consistency techniques, they share similarities. We refer to the fine-grained steps within each crash consistency mechanism as NearPM\xspace{} operations. For example, undo-logging creates a copy of the original data in the log and updates in-place; in redo-logging, a persistent update first goes to the log and is then copied back to the original location. However, operations within these mechanisms (i.e., log creation in undo-logging and update writeback in redo-logging) perform data movement within the memory. Likewise, shadow paging and checkpointing also share this common data movement operation. Thus, it is possible to process such common operations that benefit many different crash consistency mechanisms within distribute PM modules. We prototype NearPM\xspace{} in an FPGA platform and developed a framework that supports three commonly-used crash consistency techniques: undo-/redo-logging, checkpointing, and shadow paging. We evaluate the \emph{end-to-end performance} of NearPM\xspace{} in a real system, using ten PM workloads that are developed with the three crash consistency mechanisms. In summary, this work has the following contributions: \begin{itemize} \item We show that NDP is a suitable candidate for PM programming on PM connected over PCIe. \item We show that seemingly different crash consistency techniques can be decomposed into several common crash consistency support operations. 
\item We demonstrate that NearPM\xspace{} is able to offload crash consistency support operations to the PM device and execute them in parallel with the application's logic on the host processor by enforcing memory ordering in the NearPM\xspace{} controller. \item We propose mechanisms to overcome the challenge of handing persistent objects distributed among multiple PM devices connected over PCIe. \item We prototype NearPM\xspace{} on an FPGA and evaluate its performance in a \emph{real system}. We adapt ten PM workloads to our NearPM\xspace{} framework. For each workload, we evaluation three commonly-used crash consistency mechanism, logging, checkpointing, and shadow paging. The NearPM\xspace{} framework, including both the FPGA and software implementations are publicly available.\footnote{ NearPM\xspace{} is publicly available at \cite{nearpm_source} as an anonymous link.} \item Our experiments show that NearPM\xspace{} reduces the \emph{crash consistency overhead} by $2.45\times$, $2.13\times$, and $2.33\times$ in logging, checkpointing, and shadow-paging-based programs, compared to a CPU-based baseline system. In term of the \emph{end-to-end performance}, it achieves $1.34\times$, $1.38\times$, and $1.63\times$ speedup over the baseline, respectively, in programs based on the three crash consistency mechanisms. \end{itemize} \fi Persistent memory (PM) technologies offer both high performance and data persistence. For example, Intel has released Optane PM \cite{optane} that shares the DDR interface with DRAMs and the industry is developing other PM devices \cite{intel_pmem_cxl, lenovo_pemm_cxl} for the upcoming Compute Express Link (CXL) standard \cite{cxl}. These PM systems enable applications to perform direct accesses to PM, without going through the file system indirection, unlike conventional storage devices (e.g., HDD and SSD). Thus, PM-optimized applications can benefit from a faster data path. 
These opportunities have inspired research on developing and deploying PM \cite{shanbhag2020IDM, gill2019arx, dulloor14_eurosys,lee19_recipe,xu16_fast, izraelevitz19_dcpmm,chatzistergiou15_pvldb, google_nvm, amazon_nvm, chen15_vldb, arulraj2018_vldb, pmem-memcached}. However, a new challenge occurs---without the file system maintaining the data persistence, applications need to manage the recovery of persistent data. In case of failures, such as system crashes and power outages, applications that directly access PM need to ensure that the persistent data is maintained under a consistent, recoverable state. This property is usually referred to as the crash consistency guarantee. There have been a myriad of solutions to provide the crash consistency guarantee for PM-based applications. For example, the undo-logging approach makes a backup of the existing data to PM prior to updates \cite{chakrabarti14_oopsla, chatzistergiou15_pvldb, coburn13_sosp, coburn11_asplos,dulloor14_eurosys,pmdk, kolli16_asplos, pmem-memcached, gogte18_pldi}; the checkpointing method periodically makes a snapshot of PM data structures to keep a consistent, recoverable copy \cite{giles17_ismm, kannan13_ipdps, bailey2013exploring, ongaro11_sosp, volos11_asplos, ren15_micro,fernando2016_hipc}; the shadow paging mechanism redirects writes to a shadow memory and swaps backward at commit \cite{hsu17_eurosys, wu2020_pldi,ni18_hotstorage,ni2019_micro}. However, the crash consistency mechanisms come with a performance cost. First, the crash consistency guarantee requires writes to become persistent in a specific order. For example, the undo-logging mechanism backs up the to-be-updated persistent data \emph{before} performing the update, introducing additional ordering constraints. 
Therefore, crash consistency mechanisms introduce additional stalls to the program execution, as they need to maintain a correct persist ordering \cite{izraelevitz16_asplos,kolli16_asplos,volos11_asplos,xu16_fast,pelley14_isca, nalli17_asplos}. Second, these crash consistency mechanisms tend to create and keep extra copies of the original data \cite{pmdk,condit09_sosp,ni18_hotstorage, chatzistergiou15_pvldb,coburn13_sosp,hsu17_eurosys,izraelevitz16_asplos} in order to recover in case of a failure. These data movements introduce additional memory bandwidth utilization. Combining the two performance bottlenecks, crash consistency mechanisms can place extra data-intensive operations on the program's critical path. In our study of ten common PM-based workloads (methodology in Section~\ref{subsec:methodology}), we observe that crash consistency mechanisms take up to 29.6\% of the execution time, on average. A trend in computer architecture design, near-data processing (NDP), has the potential to mitigate the overhead of data movement in crash consistency mechanisms. By bringing computation closer to data, NDP can mitigate the data movement between memory and the processor (e.g.,\xspace the CPU)~\cite{fernandez2020_ICCD,gao2015_PACT,gao2016_HPCA,hsieh2016_sigarch,kim2016_sigarch,kim2018_BMC,lockerman20_livia}. In particular, as the new CXL standard \cite{cxl} is around the corner, more opportunities for processing closer to PM devices are being opened up. Our FPGA-based NDP prototype achieves $6.97\times$, $4.26\times$, and $9.76\times$ speedup in logging, checkpointing, and shadow paging over the software baselines. However, when adapting NDP to existing crash consistency mechanisms, it is not trivial to ensure that NDP follows a correct persist ordering without breaking the crash consistency guarantees. We identify two major challenges in maintaining persist ordering.
The first challenge arises from running crash consistency operations near memory in parallel with CPU execution. When program execution is \emph{partitioned} between the CPU and the near-PM NDP units, the PM program still needs to guarantee a correct ordering of persists. For example, when the undo-log operation is offloaded to NDP, the memory operations on both CPU and NDP must be coordinated, such that the undo log becomes persistent on the NDP device before the CPU performs an in-place update. Naively, one can introduce additional CPU-NDP synchronizations to keep this order, but synchronization may offset the performance gains from NDP. Second, the program execution is not only partitioned between CPU and NDP, but may also lie across multiple NDP devices. For example, two interleaved PM devices may both contain NDP units, each holding a fraction of the persistent data. Thus, an offloaded operation may get partitioned across both NDP-enabled PM devices. However, the execution flows on both devices can be out of sync. Returning to the undo-log example, if one device has the update committed but the other is still backing up data to the log, a failure may cause the recovery procedure to keep the updates in one device but roll back updates on another, leading to inconsistencies after recovery. Likewise, it is not hard to design a naive solution that frequently synchronizes multiple NDP-enabled PM devices at the cost of performance degradation---a design that runs against the key motivation for integrating NDP. In this work, we aim to design a new persist ordering model for such partitioned execution among the CPU and multiple NDP devices. To overcome the challenges in maintaining persist ordering for partitioned execution among CPU and NDP devices, we propose Partitioned Persist Ordering (PPO\xspace{}) that defines the persistence in NDP-enabled PM systems. Next, we describe the key insights of PPO\xspace{}.
\noindent\textbf{Persist ordering between CPU and NDP device.} Synchronizing after each offloaded operation preserves the original persist ordering, but we observe that maintaining such a strict ordering is often unnecessary. Execution in NDP units can in fact manage their local memory, without being visible to the CPU. For example, NDP-executed undo-log operations can be persisted to NDP-managed memory, as logged data is not used unless during failure-recovery. Therefore, the persist ordering within an NDP unit is not affected by the CPU. Only when data dependency between NDP and CPU exists, NDP needs to be ordered with the CPU. Back to the undo-logging example, in-place updates from the CPU may not persist before the offloaded logging operation that backs up the original data has persisted to the NDP-managed location. Otherwise, the CPU and NDP device do not need to follow a strict ordering. As long as the NDP hardware system can resolve such data dependency, the software execution on the CPU no longer needs to be blocked by the completion of data persistence on the NDP; instead, it can continue the program execution. For example, several logging operations on independent PM addresses can be processed in parallel by the NDP unit. Therefore, in PPO\xspace{}, partitioned execution on NDP that does not persist to CPU-visible memory is no longer constrained by the original persistence on the CPU but only needs to be ordered when there exist data dependencies with the CPU (more details in \cref{subsec:cpu_ndp_order}). \noindent\textbf{Persist ordering among multiple NDP devices.} When a PM system consists of multiple PM devices, the key challenge lies in keeping the partitioned execution on NDP devices at the same pace, to present a consistent persistent state to the recovery procedure. Thus, one can naively synchronize multiple devices at recovery-critical points, such as committing a transaction, to make sure writes have been persisted by this synchronization event. 
However, we find that the synchronization can be delayed without affecting failure recovery. If an NDP operation only persists to its own separate PM region, the CPU cannot change that region's persistence state. Even though multiple NDP devices may not stay in sync, as long as they preserve the persistence state that enables recovery, synchronization does not need to happen immediately. In the undo-logging example, if the NDP devices delay the synchronization until after committing the transaction, the persistent state can still be recovered: if a failure happens during commit but before synchronization, then, from each NDP's execution status, the devices can together restore to a consistent state using the logged data or keep the in-place updates, given that CPU-NDP ordering ensures a consistent copy has been persisted prior to in-place updates; if a failure happens after the delayed synchronization, then the NDP devices can simply keep the committed in-place updates as they have reached a consistent state. In summary, these two key insights preserve a persist ordering and enable the software to provide the crash consistency guarantees (more details in \cref{subsec:multi_ndp_order}).
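The CPU-NDP half of this ordering model can be made concrete with a small simulation: a CPU persist stalls only while an in-flight NDP operation touches the same address, and persists to independent addresses proceed unconstrained. This is a hedged sketch for exposition only; the names below (`PPOModel`, `issue_ndp`, `cpu_may_persist`) are illustrative stand-ins, not the NearPM hardware interface.

```python
# Toy model of Partitioned Persist Ordering between CPU and NDP:
# a CPU write may persist only after every in-flight NDP operation
# that shares an address with it has persisted to NDP-managed memory.
class PPOModel:
    def __init__(self):
        self.pending = []  # (op_id, address set) of in-flight NDP operations

    def issue_ndp(self, op_id, addrs):
        # Record an offloaded NDP operation and the addresses it reads.
        self.pending.append((op_id, set(addrs)))

    def ndp_persisted(self, op_id):
        # The NDP operation has persisted to NDP-managed memory.
        self.pending = [(i, a) for (i, a) in self.pending if i != op_id]

    def cpu_may_persist(self, addr):
        # Strict ordering only on a data dependency; writes to
        # independent addresses are never stalled (relaxed ordering).
        return all(addr not in addrs for (_, addrs) in self.pending)
```

For instance, while an offloaded "Log A" is in flight, a CPU persist to A must wait, but a persist to an unrelated address B may proceed immediately.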
\begin{figure*} \centering \begin{subfigure}[t]{0.23\linewidth} \centering \includegraphics[height=.87in]{charts/fulloh2.pdf} \caption{Crash consistency overhead} \label{fig:crashconsistency_overhead} \end{subfigure} \hspace{-0.1in} \begin{subfigure}[t]{0.27\linewidth} \includegraphics[height=.87in]{charts/logoh.pdf} \caption{Logging} \label{fig:logbreakdown} \end{subfigure} \begin{subfigure}[t]{0.24\linewidth} \centering \includegraphics[height=.87in]{charts/cpoh.pdf} \caption{Checkpointing} \label{fig:checkpointbreakdown} \end{subfigure} \hspace{-0.1in} \begin{subfigure}[t]{0.24\linewidth} \centering \includegraphics[height=.87in]{charts/shadowoh.pdf} \caption{Shadow paging} \label{fig:shadowbreakdown} \end{subfigure} \vspace{-0.15in} \caption{Crash consistency overheads (a) and the breakdown in logging, checkpointing, and shadow paging (b--d).} \end{figure*} On top of PPO\xspace, we further prototype an NDP-enabled system on an FPGA platform. We place the FPGA as a PCIe device and ensure data coherence using a software-based approach by modifying the Linux kernel. On top of the FPGA platform, we implement two PM devices, each containing its own NDP units, and evaluate ten PM-optimized workloads, where each workload has implementations for logging, checkpointing, and shadow paging. The main contributions of this work are the following: \begin{itemize} \item Near-data processing (NDP) can accelerate data-intensive operations in crash consistency mechanisms for PM systems, but introduces new challenges in maintaining a correct and efficient persist ordering. We propose Partitioned Persist Ordering (PPO\xspace{}) that ensures correctness and performance when part of the execution is partitioned among NDP devices. \item On top of PPO\xspace{}, we prototype an NDP-enabled PM system, NearPM\xspace{}, using an FPGA platform.
We adapt ten PM workloads to our NearPM\xspace{} system; each workload is implemented with three commonly used crash consistency mechanisms: logging, checkpointing, and shadow paging. \item Our experiments on ten PM workloads show that NearPM\xspace{} reduces the crash consistency overhead by $6.97\times$, $4.26\times$, and $9.76\times$ in logging, checkpointing, and shadow-paging-based programs, compared to a CPU-based baseline system. In terms of \emph{end-to-end performance}, it achieves $1.35\times$, $1.22\times$, and $1.33\times$ speedup over the baseline, respectively, in programs based on the three crash consistency mechanisms. \end{itemize} \section{Background and Motivation} \label{sec:background} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/ISCA_Crash_consistency_ulogging.pdf} \caption{Procedures in crash consistency mechanisms.} \label{fig:crashcons} \end{figure} \subsection{Crash Consistency and PM Programming}\label{sec:background_crashconsistency} Persistent memory (PM) technologies feature high performance and byte-addressability while providing the additional benefits of data persistence and load-store direct access to persistent data, bypassing the file system. The currently commercialized Optane PM~\cite{optane} from Intel is a memory module that shares the memory bus with DRAM modules; other upcoming PM variations will also serve as PCIe devices \cite{bhardwaj2022cache, intel_pmem_cxl, lenovo_pemm_cxl} by leveraging new technologies such as the Compute Express Link (CXL)~\cite{cxl}. Compared to conventional storage, PM systems enable programs to perform direct access to persistent data, bypassing the file system indirections. The direct access to persistent data reduces the overhead on the data path but at the same time shifts the burden of managing data recovery to the programs. We refer to the ability to restore persistent data in case of a failure (e.g.,\xspace power outage or system crash) as the crash consistency guarantee.
Past research on crash-consistent programming has proposed various mechanisms for the crash consistency guarantee, such as undo- and redo-logging \cite{chakrabarti14_oopsla,chatzistergiou15_pvldb,coburn13_sosp,hsu17_eurosys,pmdk,izraelevitz16_asplos,kolli16_asplos,volos11_asplos,xu16_fast}, checkpointing \cite{fernando2016_hipc,kannan13_ipdps}, and shadow paging \cite{condit09_sosp,ni18_hotstorage}. Next, we describe several commonly-used crash consistency mechanisms. The logging mechanism maintains the persistent data in a separate location (e.g., an undo or a redo log) before updating the persistent state. As shown in \cref{fig:crashcons}a, undo-logging makes a fine-grained snapshot of the original data in a log before persisting the in-place update; it deletes the log only after completing the updates. Similarly, redo-logging redirects each update to a separate location and applies the updates in place only after they have become persistent in the separate copy. This way, the redo-logged data can still be used to recover the original location in case of a failure. Different from logging, checkpointing (\cref{fig:crashcons}b) maintains a coarse-grained snapshot of the location prior to updates. Another commonly-used coarse-grained method is shadow paging, which redirects the update to a new copy of the PM page and then switches the version of a whole page (\cref{fig:crashcons}c). These mechanisms are necessary for data recoverability but introduce performance overheads. \cref{fig:crashconsistency_overhead} shows the overhead of these mechanisms, where logging (undo and/or redo, depending on the implementation), checkpointing, and shadow paging mechanisms take up 25.7\%, 23.1\%, and 29.6\% of the execution time, respectively (detailed methodology in \cref{subsec:methodology}). The main overhead in those mechanisms is due to intensive data movement when maintaining a copy of the data.
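As an illustration of the undo-logging procedure described above (log the old value, update in place, then delete the log), the following short Python simulation models the steps in volatile memory. It is a sketch for exposition only: the `pm` and `log` dictionaries stand in for PM regions, and `tx_update`/`recover` are hypothetical helper names, not PMDK or NearPM APIs.

```python
# Volatile-memory simulation of undo-logging: the ordering between
# step 1 (log) and step 2 (update) is what crash consistency relies on.
def tx_update(pm, log, addr, new_val):
    log[addr] = pm[addr]   # 1. back up the old value in the undo log
    # (ordering point: the log must persist before the in-place update)
    pm[addr] = new_val     # 2. in-place update
    del log[addr]          # 3. commit: delete the log entry

def recover(pm, log):
    # After a crash, any surviving log entry marks an uncommitted update:
    # roll the location back to its logged (pre-update) value.
    for addr, old_val in log.items():
        pm[addr] = old_val
    log.clear()
    return pm
```

If a failure interrupts step 2, `recover` restores the logged value; if it happens after step 3, no log entry survives and the committed update is kept.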
\cref{fig:logbreakdown}, \ref{fig:checkpointbreakdown}, and \ref{fig:shadowbreakdown} further break down the crash consistency overheads in each mechanism, where 68.9\%, 60.4\%, and 70.5\% of the overhead come from data movement. Therefore, if we can accelerate this data movement between CPU and PM, there is a significant opportunity for performance improvement. \subsection{Opportunities for Near-data Processing (NDP)} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/NDP.pdf} \caption{(a) CPU-centric and (b) near-data processing.} \label{fig:neardata} \vspace{-0.1in} \end{figure} In traditional systems, the CPU is the only processing unit that manipulates data. Even for simple operations, such as creating a copy of data in the memory (as shown in \cref{fig:neardata}a), the CPU needs to fetch data through the cache hierarchy and write it to another CPU-managed memory location, leading to a high data movement overhead. To mitigate this data movement overhead, researchers have introduced the paradigm of near-data processing (NDP) that places computation closer to the data \cite{ahn2015_ISCA,ahn2015_ISCA2,fernandez2020_ICCD,gao2015_PACT,gao2016_HPCA,hsieh2016_sigarch,kim2016_sigarch,kim2018_BMC,mutlu2020modern,singh2020_FPL,singh2019_DAC,zhan16_micro,lockerman20_livia}. As the underlying operations that maintain crash consistency guarantees exhibit intensive data movement between CPU and PM, NDP has the potential to mitigate the crash consistency overhead. The existing and upcoming PM devices also have the capability to host processing elements. For example, PCIe-based PM devices can easily integrate processing logic \cite{bhardwaj2022cache,ahn2022enabling}; even the more compact PM DIMMs, such as Optane DIMMs, already integrate controllers for data-intensive tasks \cite{wang20_micro}, which have the potential to be extended for NDP. \cref{fig:neardata}b demonstrates that an NDP unit inside the PM device can execute data-intensive operations.
In undo-logging, an NDP unit can create a copy of the old data in an \emph{NDP-managed log}, without going through the CPU, to reduce the data movement overhead. \subsection{Challenges of Persist Ordering in NDP Systems}\label{sec:challenges} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/ordering_challenge.pdf} \caption{Challenges of ordering in partitioned execution.} \label{fig:orderchallenge} \vspace{-0.1in} \end{figure} Even though NDP has the potential to reduce the crash consistency overhead, maintaining the original crash consistency guarantee becomes a new challenge. The crash consistency guarantee relies on the persist ordering provisioned by the PM system. For example, the undo-logging mechanism in \cref{fig:crashcons}a requires the log to be created prior to the update, as indicated by the numbered steps. Likewise, a checkpoint of the old data needs to be persisted \emph{before} the upcoming updates (\cref{fig:crashcons}b); updates to the shadow copy must be persisted \emph{before} switching the version of the page (\cref{fig:crashcons}c). When offloading computation to the NDP-enabled PM device, program execution becomes \emph{partitioned} between the CPU and the PM device. \cref{fig:orderchallenge}a shows undo-logging in a conventional CPU-centric system where each step is strictly ordered for crash consistency. \cref{fig:orderchallenge}b offloads the undo-logging operation to an NDP unit while the other steps remain on the CPU. Even though the data-intensive operations in the logging step become faster, such partitioned execution breaks the ordering guarantees---the CPU can make the update persistent in PM while the NDP unit is still creating an undo log. In case of a failure, as indicated by the red line in \cref{fig:orderchallenge}b, the program then needs to spawn a recovery procedure.
As the update was not committed before failure, the program attempts to \emph{read from} (as indicated by the rf edge) the undo log for recovery. Due to an incorrect ordering, the undo log may contain the already-updated data, leading to an inconsistent recovery. In addition to issues with CPU-NDP ordering, such partitioned execution can also introduce ordering issues among different NDP devices. \cref{fig:orderchallenge}c shows a scenario where two PM devices interleave. As such, a PM object can span both devices. Likewise, the NDP units (NDP0 and NDP1) on both devices will operate on this partitioned object. However, without ordering the execution on both NDP devices, the offloaded execution may not follow the same pace on both devices. In case of a failure, as indicated by the red line, one PM device (NDP0) has committed the update, but another (NDP1) has not. As a result, during failure-recovery, NDP0 maintains the in-place updates as they were committed prior to failure, while NDP1 attempts to read from (rf) the old copy in the log. Therefore, after recovery, the PM object can end up with inconsistent data---half from the original version and half from the updated version. We summarize that the central challenge is to extend the persist ordering from a single execution device, i.e.,\xspace the CPU, to multiple partitioned devices. Therefore, in this work, we aim to define persist ordering in such partitioned systems and enable the software system to implement efficient crash consistency mechanisms. \section{Partitioned Persist Ordering} In this work, we present \emph{Partitioned Persist Ordering} (PPO\xspace{}) that ensures persist ordering across the CPU and near-data processing units in PM. In this section, we define PPO\xspace{} in two practical scenarios---ordering between the CPU and the NDP device in PM, and ordering among NDP devices.
\subsection{Persistent Ordering between CPU and NDP} \label{subsec:cpu_ndp_order} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/CPU_NDP_ordering.pdf} \caption{Partitioned execution between CPU and NDP.} \label{fig:undo_decoupled} \vspace{-0.1in} \end{figure} The major challenge demonstrated in \cref{fig:orderchallenge}b lies in maintaining the persist ordering between the CPU and the NDP unit. To enable a correct persist ordering, a naive solution is to actively synchronize the CPU and the NDP unit. As \cref{fig:undo_decoupled}b illustrates, in such a naive solution, the CPU execution needs to wait until the partitioned execution on NDP has completed. Though faster than the CPU-only baseline in \cref{fig:undo_decoupled}a, the frequent synchronization offsets the performance benefits. However, we observe that maintaining such a strict persist ordering is not always necessary. The partitioned execution on NDP does not always share the same memory with the CPU. In the example of \cref{fig:neardata}b, the NDP operation reads from the CPU's memory but copies and persists it to NDP-managed memory, i.e.,\xspace an undo log. Therefore, the persist ordering of the offloaded execution boils down to the data dependencies between CPU and NDP---a CPU-side update (e.g.,\xspace ``Update A'' and ``Update B'' in \cref{fig:undo_decoupled}c) needs to persist \emph{after} the associated NDP logging operations (e.g.,\xspace ``Log A'' and ``Log B'' in \cref{fig:undo_decoupled}c). Meanwhile, independent NDP operations, such as logging on different addresses, can happen in parallel without being blocked by the CPU.
In other crash consistency mechanisms, we observe similar opportunities. For example, a page-grained checkpointing operation on NDP only needs to \emph{persist before} any update from the CPU toward the same page, while independent checkpointing operations can persist in parallel as they write to a separate NDP-managed memory. Based on this observation, we see the opportunity to relax the strict ordering between the CPU and NDP units and exploit parallelism. Therefore, we define a more relaxed persist ordering in PPO\xspace{}. First, we define two types of memory: \\ $\sbullet[0.75]$~CPU-visible memory is the memory space that is accessible by the CPU. \\ $\sbullet[0.75]$~NDP-managed memory is the memory space that is only accessible by the NDP unit. \\ Then, we define basic memory and NDP operations: \\ $\sbullet[0.75]$~$\mathit{OP}$: an operation, which can either be a memory access or an NDP operation. \\ $\sbullet[0.75]$~$M_{x}$: a memory access to memory address $x$, where $W_{x}$ and $R_{x}$ are write and read accesses, respectively. \\ $\sbullet[0.75]$~$N$: an NDP request, where $M_{N, x}$ is a memory access within $N$ that accesses address $x$. \\ Lastly, we define the following edges: \\ $\sbullet[0.75]$~$\mathit{OP}_1 \xrightarrow{po} \mathit{OP}_2$: memory operation $\mathit{OP}_1$ is program ordered before $\mathit{OP}_2$. \\ $\sbullet[0.75]$~$\mathit{OP}_1 \leq_{p} \mathit{OP}_2$: memory operation $\mathit{OP}_2$ may not persist before $\mathit{OP}_1$. \\ \textbf{Persist to CPU memory.} An NDP-issued operation that persists to the CPU-visible memory follows a strict persist ordering, i.e.,\xspace $M_{x} \xrightarrow{po} N \rightarrow M_{x} \le_{p} N$ and $N \xrightarrow{po} M_{x} \rightarrow N \le_{p} M_{x} $. This strict persist order on CPU-visible memory guarantees that, even under partitioned execution at an NDP unit, the original persist ordering still holds.
\textbf{Persist to NDP-managed memory.} An NDP operation that persists to NDP-managed memory only needs to be ordered according to data dependencies with memory operations from the CPU. If $N$ only persists within its NDP-managed memory, for a read $M_{N,x} \in N$, $M_{N,x} \xrightarrow{po} M_x \rightarrow M_{N, x} \le_{p} M_x$ and $M_x \xrightarrow{po} M_{N,x} \rightarrow M_x \le_{p} M_{N, x}$. However, other memory accesses, say $M_{N,y} \in N$, where $x \cap y = \varnothing$, do not need to be strictly ordered with $M_{N,x}$. This relaxed persist ordering on NDP-managed memory guarantees parallelism within NDP units without affecting the CPU's persistency, as the memory persisted in these NDP units is not visible to the CPU. \iffalse We define this ordering guarantee more formally. We first define basic memory and NDP operations. \\ $\sbullet[0.75]$~$m_{i,x}$: the $i$-th memory access (read/write) from the host processor to address $x$. \\ $\sbullet[0.75]$~$N_{j,\{x, y\}}$: the $j$-th NDP request that accesses address $x$ and $y$. \\ $\sbullet[0.75]$~$n_{j-k,x}$: the $k$-th operation within $N_{j,\{x, y\}}$ that accesses address $x$. \\ We use $\mathit{op}$ to collectively refer to the aforementioned operations, i.e., $\mathit{op} \in \{m, n\}$. Then, we define the orderings between operation $\mathit{op}_{i,x}$ and $\mathit{op}_{j,x}$ that access address $x$. \\ $\sbullet[0.75]$~$\mathit{op}_{i,x} \prec_{po} \mathit{op}_{j,x}$: operation $\mathit{op}_{i,x}$ happens before $\mathit{op}_{j,x}$ in program order. \\ $\sbullet[0.75]$~$\mathit{op}_{i,x} \prec_{mo} \mathit{op}_{j,x}$: operation $\mathit{op}_{i,x}$ happens before $\mathit{op}_{j,x}$ in memory order. For writes to PM, this memory order also indicates that $\mathit{op}_i$ becomes persistent in PM before $\mathit{op}_{j,x}$.
PPO\xspace{} ensures that regular memory accesses $m$ and operations $n$ in the NDP requests preserve the original program order, if there exist common addresses, i.e., $\mathit{op}_{i,x} \prec_{po} \mathit{op}_{j,x} \rightarrow \mathit{op}_{i,x} \prec_{mo} \mathit{op}_{j,x}$. Thus, for a NDP request $N_{j,\{x, y\}}$ that contains operations $n_{j-p,x} \prec_{po} n_{j-q,y}$, $ m_{i,x} \prec_{po} N_{j,\{x, y\}} \prec_{po} m_{k,x} \rightarrow m_{i,x} \prec_{mo} n_{j-p,x} \prec_{mo} m_{k,x}$. The ordering of $n_{j-q,y}$ is not enforced as it does not share address $x$ and can be performed in parallel. \fi \subsection{Persistent Ordering among Multiple NDP Devices} \label{subsec:multi_ndp_order} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/multidim_undolog_solution.pdf} \caption{An undo-logging example in multi-device partitioned execution. }\ \label{fig:delayedlogdelete} \vspace{-0.1in} \end{figure} In addition to CPU--NDP ordering, partitioned execution presents an ordering challenge among multiple devices: persistent objects can be interleaved across devices, and maintaining the persist ordering among them is difficult because execution is asynchronous across devices and their programs do not necessarily proceed at the same pace. A naive way of maintaining the persist ordering among multiple NDP devices is to actively synchronize them to make sure all devices are complete before committing the updates. As demonstrated in \cref{fig:delayedlogdelete}b, before updating the data in-place (A or B), the CPU stalls before sending a commit operation to the NDP devices, until the logging operations on both NDP devices have completed. Thus, this naive solution avoids the unrecoverable scenario in \cref{sec:challenges}---the recovery program either recovers both objects from the logged data or keeps both in-place updates.
Compared to the CPU-centric baseline in \cref{fig:delayedlogdelete}a, even though this naive solution already provides better performance, both the CPU and the NDP devices still stall for synchronization. Although synchronization seems necessary to guarantee a correct persist ordering, we observe that the NDP-managed memory is not visible to the CPU unless failure-recovery happens. In the example of \cref{fig:delayedlogdelete}c, if we relax this persist ordering and delay the synchronization among devices, the recovery program can still read consistent data as long as the NDP-managed data is not deleted before the delayed synchronization has completed (i.e.,\xspace ``Delete logs'' for A and B). In \cref{fig:resolvechallenge2}, if a failure happens when NDP0 has committed the update but NDP1 has not, the recovery procedure can still read the consistent copy in the log on both NDP devices, as ``Delete logs'' on both devices only persists after a synchronization. At the same time, because the synchronization is delayed, it does not lead to additional stalling on the CPU or the NDP devices. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/resolved_challenge_2.pdf} \caption{Recovery in multi-device partitioned execution under relaxed persist ordering.}\ \label{fig:resolvechallenge2} \vspace{-0.15in} \end{figure} Following the definitions in \cref{subsec:cpu_ndp_order}, we additionally define the following: \\ $\sbullet[0.75]$~$N_{a,i}$: the $i$-th NDP operation on NDP device $a$. \\ $\sbullet[0.75]$~$S$: a synchronization event among all NDP devices. \\ $\sbullet[0.75]$~$\mathit{OP}_1 \xrightarrow{rf} \mathit{OP}_2$: memory operation $\mathit{OP}_1$ reads from data persisted by $\mathit{OP}_2$. \\ $\sbullet[0.75]$~$F$: a system failure event; an operation $\mathit{OP}$ that happens before a failure $F$ is denoted as $\mathit{OP} \xrightarrow{hb} F$.
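The delayed synchronization of \cref{fig:delayedlogdelete}c can be sketched with these definitions. The Python sketch below is an illustrative model of our own (the names are not NearPM\xspace{}'s): commits on different devices carry no ordering constraint between each other, while every log deletion is ordered after the synchronization event $S$.

```python
# Illustrative model of delayed synchronization (not NearPM's hardware).
# Each device logs, commits its in-place update, and deletes its log;
# only the deletions are ordered after the delayed synchronization S.

def persist_constraints(devices):
    """Return the <=p constraints kept for the undo-logging example."""
    cons = []
    for d in devices:
        cons.append(((d, "log"), (d, "commit")))    # log <=p commit
        cons.append(((d, "commit"), "S"))           # commit <=p S
        cons.append(("S", (d, "delete_log")))       # S <=p delete_log
    return cons

cons = persist_constraints(["NDP0", "NDP1"])
# Commits on different devices are not ordered against each other, so a
# failure may find NDP0 committed while NDP1 is not. Both logs still
# exist at that point, because no deletion may persist before S.
assert (("NDP0", "commit"), ("NDP1", "commit")) not in cons
assert ("S", ("NDP0", "delete_log")) in cons
assert ("S", ("NDP1", "delete_log")) in cons
```

Because no constraint crosses devices except through $S$, neither CPU nor NDP devices stall on each other before the (delayed) synchronization.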
\noindent\textbf{Synchronization among NDP devices.} If an NDP device $a$ synchronizes with $b$ through a synchronization event $S$, then any operations in NDP $a$ and $b$ that follow the synchronization, say $N_{a,i}$ and $N_{b,j}$, may not persist before the synchronization has completed, i.e.,\xspace $S \xrightarrow{po} N_{a,i} \wedge S \xrightarrow{po} N_{b,j} \rightarrow S \le_{p} N_{a,i} \wedge S \le_{p} N_{b,j}$. On the other hand, any operations in NDP $a$ and $b$ that precede the synchronization, say $N_{a,i'}$ and $N_{b,j'}$, must have persisted before it, i.e.,\xspace $N_{a,i'} \xrightarrow{po} S \wedge N_{b,j'} \xrightarrow{po} S \rightarrow N_{a,i'} \le_{p} S \wedge N_{b,j'} \le_{p} S$. Based on this guarantee, we next discuss the correctness of failure-recovery. \noindent\textbf{Failure-recovery correctness guarantee.} When the failure happens before synchronization, i.e.,\xspace $F \xrightarrow{hb} S$, the recovery procedure on each NDP device reads from NDP-managed data for recovery. Say, on NDP device $a$, $M_{a,\mathit{recovery}} \xrightarrow{rf} N_{a}$, where $N_{a}$ has logged the original copy. As PPO\xspace{} enforces persist ordering between updates to NDP-managed data and the in-place update from the CPU (\cref{subsec:cpu_ndp_order}), $M_{a,\mathit{recovery}}$ is guaranteed to read consistent data. When the failure happens after synchronization, i.e.,\xspace $S \xrightarrow{hb} F$, because all prior memory operations have become persistent, the recovery procedure is also guaranteed to read consistent data. In conclusion, the delayed synchronization in PPO\xspace{} ensures a consistent failure-recovery and thus guarantees crash consistency. \subsection{Implementation of PPO\xspace{} in Near-PM Processing} So far, we have discussed the challenges of maintaining the crash consistency guarantee when execution is partitioned among the CPU and multiple interleaved PM devices, and how PPO\xspace{} overcomes these challenges.
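The failure-recovery argument above can be sketched as a small decision procedure. This Python sketch is an illustrative model of our own (not NearPM\xspace{}'s recovery firmware): a device whose log still exists was interrupted before the delayed synchronization, so recovery rolls back to the logged copy; a device without a log has already passed $S$, so its in-place data is complete.

```python
# Illustrative sketch of the recovery decision (not NearPM's firmware).
# Each device exposes its NDP-managed log (None once deleted) and its
# in-place data.

def recover(device):
    """Roll back to the logged copy when the log survives the failure
    (F -hb-> S); otherwise (S -hb-> F) keep the in-place update."""
    if device["log"] is not None:
        return device["log"]
    return device["data"]

# Failure before S: NDP0 committed its in-place update, NDP1 did not,
# and neither device has deleted its log yet.
ndp0 = {"log": "A_old", "data": "A_new"}
ndp1 = {"log": "B_old", "data": "B_old"}
assert (recover(ndp0), recover(ndp1)) == ("A_old", "B_old")  # consistent

# Failure after S: logs are gone and all prior updates are persistent.
assert recover({"log": None, "data": "A_new"}) == "A_new"
```

In both branches the devices recover to a mutually consistent state, which is the crash-consistency guarantee the delayed synchronization preserves.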
To evaluate PPO\xspace{}, in this work we prototyped a PPO\xspace{}-enabled system using an FPGA platform. As intensive operations on persistent data are processed near the PM device, we name our prototype NearPM\xspace{}. Next, we look at the design and implementation of our PPO\xspace{} system, NearPM\xspace{}. \section{NearPM\xspace{} Design} \label{mech} In this section, we describe the design of NearPM\xspace{} that provides ordering guarantees for NDP in PM programs. \iffalse \subsection{Software interface} \label{mech:api} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/Code examples key idea.pdf} \caption{Examples that use NearPM\xspace{} software interface. } \label{fig:codeexample} \vspace{-0.2in} \end{figure} In a conventional CPU-centric system, the software library executes crash consistency operations on the host processor, as demonstrated in Figure \ref{fig:codeexample}a, c, e, and g. With NearPM\xspace{}, the software interface in the NearPM\xspace{} framework offloads crash consistency operations to our PM device. Next, we describe how PM program issues request to a NearPM\xspace{} system. \paragraph*{\textbf{Undo-logging}} With NearPM\xspace{}, the program calls the \texttt{undolog\_create\_NearPM()} function to perform logging in the PM module (Figure \ref{fig:codeexample}b). NearPM\xspace{} achieves better performance compared to the processor by performing memory movement during logging in the PM module. Further, NearPM\xspace{} gains performance by executing independent undo-logging operations in parallel such as in a PMDK transaction~\cite{pmdk}. \paragraph*{\textbf{Redo-logging}} With NearPM\xspace{}, the host processor generates the redo-log. However, the redo-log commit at the end of each transaction is offloaded to NearPM\xspace{} using function \texttt{apply\_redolog\_NearPM()} (Figure \ref{fig:codeexample}d). This function requires redo-log address, data address, and its size as parameters. 
\paragraph*{\textbf{Checkpointing}} In NearPM\xspace{}, checkpoints are created using the \texttt{ckpoint\_create\_NearPM()} function (Figure~\ref{fig:codeexample}f). The main program must guarantee that the source data is in a consistent state by persisting them before offloading the checkpointing operation to NearPM\xspace{}. This function takes in the source address, the destination address, and the size of the data that needs to be made a checkpoint. \paragraph*{\textbf{Shadow paging}} Upon the triggering of shadow paging, the PM program offloads the data copy to NearPM\xspace{} accelerator using the function \texttt{shadowcpy\_NearPM()}(Figure \ref{fig:codeexample}h). Then the host processor redirects updates to a shadow page. After the update is complete, the host processor persists the update and commits the update by switching all subsequence accesses to a newly created page. As the transactional library, \texttt{libpmemobj}, from PMDK~\cite{pmdk} is a widely-used PM library that uses undo+redo-logging, we integrated our undo- and redo-logging methods into the library. Next, we will discuss the correctness of the software interface we use in NearPM\xspace{}. \subsection{Correctness} Correctness of crash consistency programs is critical to recoverability. One main concern when executing operations in both near memory as well as in the processor is that there can be stale data in caches. NearPM\xspace{} achieves this by utilizing cache flushes and cache invalidation to ensure that all the data in the caches reach PM before executing NearPM\xspace{} operation. For example, in Figure~{\ref{fig:codeexample}}d, cached data reach PM using persist before applying the redo log. Similarly, shadow paging also takes similar steps (Figure~{\ref{fig:codeexample}h}). 
\fi \begin{figure} \centering \includegraphics[width=\linewidth]{figures/Detailed_architecture_2.pdf} \caption{High-level architecture of NearPM\xspace{}.}\ \label{fig:detailedarch} \vspace{-0.15in} \end{figure} \subsection{Architecture of NearPM\xspace{}} \label{subsec:arch} NearPM\xspace{} is placed inside the PM controller of the PM device, with direct access to the PM storage medium. This enables NearPM\xspace{} to access PM with higher bandwidth and lower latency than the host processor. NearPM\xspace{} consists of the following major components (Figure \ref{fig:detailedarch}): \noindent \textbf{Host read/write queue} takes regular reads and writes from the host processor and accesses the PM media. \noindent \textbf{Request FIFO} takes requests issued by the host processor and keeps them until they are executed. \noindent \textbf{Dispatcher} decodes and issues requests to \textit{NearPM\xspace{} units} (i.e.,\xspace execution engines). The \emph{Dispatcher} keeps track of all \textit{NearPM\xspace{} units} and issues a request as soon as a \textit{NearPM\xspace{} unit} is available. \noindent \textbf{Address mapping table} converts virtual addresses in the requests to physical addresses, as the parameters in program-issued NearPM\xspace{} requests are virtual addresses (see \cref{subsubsec:address_translation} for details). \noindent \textbf{In-flight memory access table} keeps track of memory addresses being accessed by both NearPM\xspace{} units and the CPU, in order to handle accesses with conflicting addresses. In case an operation attempts to access an address that is being written to, the \emph{Dispatcher} stalls this operation and keeps it in the \emph{Host read/write queue}. \noindent \textbf{NearPM\xspace{} units} are processing engines that manipulate data in PM and are controlled by the \emph{Dispatcher}.
Each \emph{NearPM\xspace{} unit} has a request register that stores the request from the \emph{Dispatcher}, a controller that converts requests into control signals, a metadata generator (e.g.,\xspace for metadata generation and log deletion), a load/store unit for fine-grained data movement, and a DMA engine for large data movement (e.g.,\xspace data copy), as shown in \cref{fig:pmlogunit}. \noindent \textbf{Multi-device handler} stores the status of other NearPM\xspace{} devices and coordinates among NearPM\xspace{} devices. When NearPM\xspace{} starts the execution of an operation, it clears the completion status of all NearPM\xspace{} devices. Upon receiving completion status from other NearPM\xspace{} devices, NearPM\xspace{} updates the completion status accordingly. When all devices have completed execution, NearPM\xspace{} can progress to the next synchronization point. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/PMlogUnit.pdf} \caption{Components in each NearPM\xspace{} unit.}\ \label{fig:pmlogunit} \vspace{-0.1in} \end{figure} \subsection{NearPM\xspace{} Execution Flow} In this section, we further introduce the execution flow of NearPM\xspace{}, which handles program-offloaded crash consistency operations (i.e.,\xspace NearPM\xspace{} requests) and services regular memory accesses from the host processor. \noindent\textbf{NearPM\xspace{} request execution.} \cref{fig:detailedarch} shows the workflow (steps in blue). A NearPM\xspace{} request first enters the \emph{Request FIFO} (step \circledtext{1a}) and then gets decoded by the \emph{Dispatcher} (step \circledtext{2a}). During decoding, the \emph{Dispatcher} translates request operands from virtual to physical addresses through the \emph{Address Mapping Table} (step \circledtext{3a}).
After translation, the \emph{Dispatcher} checks the request's physical address (step \circledtext{4a})---requests without address conflicts are issued immediately, while conflicting requests stall until the completion of the other conflicting request/access (details in \cref{dephandle}). Next, NearPM\xspace resets the status bit in the \emph{Multi-device handler} (step \circledtext{5a}). Then, a NearPM\xspace{} unit receives the request and starts the execution immediately (step \circledtext{6a}). Upon completion, NearPM\xspace{} notifies the \emph{Multi-device handler} to update the status bit both locally and in other NearPM\xspace{} devices (step \circledtext{7a}). When all the NearPM\xspace{} devices have completed execution, the \emph{Multi-device handler} notifies the \emph{Dispatcher} (step \circledtext{8a}) such that new commands can be assigned to the NearPM\xspace{} unit. \noindent\textbf{Host memory access.} Figure~\ref{fig:detailedarch} (steps in red) describes the execution of the CPU's memory accesses. The CPU's memory accesses enter the host read/write queue (step \circledtext{1b}). Like before, the \emph{Dispatcher} checks the CPU's accesses for address conflicts before dispatching (step \circledtext{2b}). It issues a memory access immediately if there is no conflict in the \emph{In-flight Access Table} (step \circledtext{3b}); otherwise, it stalls the CPU's access until the other access has completed. \subsection{Ordering Guarantee of a Single NearPM\xspace{} Device} \label{dephandle} The implementation of NearPM\xspace{} follows PPO\xspace{}. In a single-device setup, it enforces a correct persist ordering between the CPU and NearPM\xspace{}. When the PM program on the CPU accesses PM, NearPM\xspace{} ensures that there is no address dependency (i.e.,\xspace conflicting addresses) between the incoming read/write from the host processor and any of the pending and/or in-flight offloaded NearPM\xspace{} requests. The dependency detection is handled by the \emph{Dispatcher} (\cref{subsec:arch}).
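The \emph{Dispatcher}'s conflict check can be sketched as follows. The Python sketch is an illustrative model of the in-flight access table, not the hardware: an access is issued only if its address is absent from the table, and a completed access unblocks later conflicting ones.

```python
# Illustrative model of the Dispatcher's dependency detection (not the
# hardware implementation): the in-flight table records addresses
# currently accessed by NearPM units or the CPU.
in_flight = set()

def try_issue(addr):
    """Issue the access if there is no address conflict; otherwise the
    access stalls (e.g., in the host read/write queue)."""
    if addr in in_flight:
        return False          # conflicting address: stall
    in_flight.add(addr)
    return True               # no conflict: issue immediately

def retire(addr):
    in_flight.discard(addr)   # completion unblocks stalled accesses

assert try_issue("0x100") is True    # issued
assert try_issue("0x100") is False   # conflicts with an in-flight access
retire("0x100")
assert try_issue("0x100") is True    # reissued after completion
```

Only conflicting addresses stall; accesses to other addresses are dispatched without waiting.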
Besides ordering with the CPU, multiple NearPM\xspace{} units within a device can execute offloaded requests in parallel. As required by PPO\xspace{}, parallel execution only applies to requests without read-write dependencies. Like the CPU--NearPM\xspace{} dependency handling, the \emph{Dispatcher} also ensures that NearPM\xspace{} units do not concurrently persist to a conflicting address, but instead persist in the order the requests were issued by the program. \subsection{Ordering Guarantee of Multiple NearPM\xspace{} Devices}\label{sec:imp_multidev} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/multi_DIMM_sync.pdf} \caption{Hardware support for synchronizing partitioned commands.}\ \label{fig:command_sync_hw} \end{figure} \noindent\textbf{NearPM\xspace{} request issuing.} When the program issues a NearPM\xspace{} request, each request carries a per-CPU-thread sequence number that is later used for ordering. Similar to issuing memory requests in interleaved memory devices, the memory controller sends the NearPM\xspace{} request to all interleaved NearPM\xspace{} devices according to their address ranges. Once a request reaches a NearPM\xspace{} device, the device sets the \emph{Multi-device handler} status and executes the request independently. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figures/MultiDIMM_state_machine.pdf} \caption{Synchronization in partitioned execution. The state machine is for NearPM\xspace{} device 0, and each bit represents one device.}\ \label{fig:command_sync_state} \vspace{-0.1in} \end{figure} \noindent\textbf{Correctness guarantees.} PPO\xspace{} enables delayed synchronization, as the recovery procedure can still read consistent data to restore the persistent state. Thus, synchronization is not on the critical path. NearPM\xspace{} takes the approach in \cref{fig:command_sync_hw} to coordinate the completion of requests in NearPM\xspace{} upon synchronization.
Each NearPM\xspace{} device has a \emph{Multi-device handler} that keeps track of the status of each command in the local NearPM\xspace{} execution logic as well as in other NearPM\xspace{} devices. When NearPM\xspace{} starts execution, it waits for both the completion status from other NearPM\xspace{} devices (step \circledtext{1}) and the completion status of local execution (step \circledtext{2}) before progressing to the next synchronization point. \cref{fig:command_sync_state} shows the state machine each \emph{Multi-device handler} uses to track the completion of remote commands when two devices are mounted on the system. The state machine starts at the \emph{All Complete} state and changes its state based on the input signals \emph{Receive command}, \emph{Receive local complete}, or \emph{Receive remote complete}. When NearPM\xspace{} receives the command and starts the execution, the state machine changes to the \emph{Executing} state. After receiving command-completion signals from all devices, it returns to the \emph{All Complete} state. Each NearPM\xspace{} device has a dedicated state machine to track the completion of its command. \subsection{Discussion on Corner-cases} Hardware implementations have physical limitations because resources are finite (e.g., limited buffer capacity). In this section, we discuss how we overcome several key limitations in the NearPM\xspace{} implementation. \noindent\textbf{Execution between CPU and NearPM\xspace{} devices.} As PPO\xspace{} buffers and stalls a CPU memory access that has a data dependency on in-flight NearPM\xspace{} requests, when the buffer is full, any incoming requests to the same addresses are stalled. However, accesses to the remaining addresses are not blocked, so other CPU threads are not affected.
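The completion tracking of \cref{fig:command_sync_state} can be sketched with one status bit per device. The Python sketch below is an illustrative model of our own, not the RTL: the handler leaves \emph{All Complete} when a command arrives and returns only after all local and remote completions have been received, which also shows that recording local completion never waits on a remote device.

```python
# Illustrative model of the Multi-device handler's state machine (not
# the RTL implementation): one completion bit per NearPM device.
class MultiDeviceHandler:
    def __init__(self, n_devices):
        self.done = [True] * n_devices          # start in All Complete

    def receive_command(self):
        self.done = [False] * len(self.done)    # -> Executing

    def receive_complete(self, dev):
        self.done[dev] = True                   # local or remote signal

    @property
    def state(self):
        return "All Complete" if all(self.done) else "Executing"

h = MultiDeviceHandler(2)
h.receive_command()
h.receive_complete(0)               # local completion; remote pending
assert h.state == "Executing"       # cannot pass the sync point yet
h.receive_complete(1)               # remote completion arrives
assert h.state == "All Complete"    # may progress past the sync point
```

Setting a local bit is unconditional, so every device eventually reports completion regardless of the others, consistent with the deadlock-freedom argument below.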
\noindent\textbf{Execution among NearPM\xspace{} devices.} As explained in \cref{sec:imp_multidev}, a NearPM\xspace{} device needs to wait for the completion of the others before moving past a synchronization point. However, the completion of the local NearPM\xspace{} device does not depend on the completion status of other devices. Thus, all NearPM\xspace{} devices eventually complete their local execution without running into a deadlock. \subsection{Address Translation}\label{subsubsec:address_translation} Address translation has always been a challenge in NDP systems \cite{gao2015_PACT,xi2015beyond,boroumand2018asplos,fernandez2020_ICCD,gao2016_HPCA,hsieh2016_sigarch,hsieh2016_ICCD,kim2016_sigarch,kim2018_BMC,mutlu2020modern,singh2020_FPL,singh2019_DAC,zhan16_micro}, as structures such as the TLB are placed in the host processor. Fortunately, PM libraries (e.g., \cite{pmdk}) usually allocate PM as pools, and a memory access to a pool manifests as a base address plus an offset within the pool. Prior works have shown that as long as a pool's base address is translated, it is straightforward to also translate other memory addresses in the same pool using the offset value~\cite{wang17_micro, wang18_isca, ye2021supporting}. Therefore, NearPM\xspace{} keeps the translation of the base address for each pool and performs address translation without going through the CPU. Figure {\ref{fig:addrtranslate}} shows the address translation procedure in NearPM\xspace{}. When the program creates a PM pool, NearPM\xspace{} stores both the virtual base address (step {\circledtext{1}}) and the physical base address (step {\circledtext{2}}) of the pool in the \emph{Address Mapping Table}, indexed by the pool ID.
During execution, NearPM\xspace{} looks up the pool ID of the incoming request (step {\circledtext{3}}), translates its virtual base address to the physical base address, and finally adds the offset to complete the translation (step {\circledtext{4}}). \begin{figure} \vspace{7pt} \centering \includegraphics[width=\linewidth]{figures/Address_translation_1.pdf} \caption{Address translation in NearPM\xspace{}.}\ \label{fig:addrtranslate} \vspace{-0.1in} \end{figure} \noindent\textbf{Context switch handling.} NearPM\xspace{} keeps the base address mapping for each PM pool. As each pool ID is unique in the system, the pool-ID-indexed translation mapping remains valid even across a context switch. \noindent\textbf{Multi-device support.} A PM pool can span multiple interleaved NearPM\xspace{} devices, where certain bits in the virtual address identify on which NearPM\xspace{} device the data is located. Based on these bits, each device contains a virtual-to-physical mapping for the base address that is mapped to its local device. Thus, the translation mechanism that relies on the base address of the pool still applies in multi-device scenarios. \subsection{Recovery} \label{subsec:recovery} \subsubsection{\textbf{Persistence domain.}} PM hardware systems employ extended persistence domains (e.g., ADR \cite{ADR} and eADR \cite{rudolf_persist_cache}) that include not only the PM devices but also buffers/caches in the processor. As NearPM\xspace{} executes crash consistency operations in the PM module and services regular memory accesses from the host processor, these operations and memory access requests should also be placed in the persistence domain, in case they have not completed before a failure.
Figure \ref{fig:detailedarch} marks the hardware components of NearPM\xspace{} within the persistence domain in green: the Request FIFO (2 kB), the Address Look-Up Table in the Address translator (432 B), the in-flight request registers (256 B) in the Dispatcher, and the Host Read/Write Queue (4 kB). These structures have a total capacity of about 7 kB, much less than the buffers (tens of kB) in existing Optane PM modules \cite{wang20_micro}. Thus, it is practical to use residual capacitors, similar to existing Optane PM, to write back the structures in the persistence domain to a reserved PM location upon failure. \subsubsection{\textbf{Recovery procedure.}} After the system restarts, the hardware of a NearPM\xspace{} device ensures that the results of in-flight NearPM\xspace{} requests and pending host memory accesses in the persistence domain are visible to the recovery program. When there are multiple NearPM\xspace{} devices in the system, the recovery program needs to determine the progress made by each device prior to the failure---the latest synchronization point that all NearPM\xspace{} devices reached before the failure. The recovery procedure of the NearPM\xspace{} hardware includes two steps: (1) NearPM\xspace{} loads the data from the reserved PM region back to the structures in the persistence domain. (2) NearPM\xspace{} replays the in-flight NearPM\xspace{} requests and host memory accesses until it reaches the latest synchronization point. Thus, the results of all in-flight operations prior to the synchronization point are visible in memory. \section{NearPM\xspace{} Implementation} \label{sec:implementation} \input{tables/common_tasks} \iffalse We implement NearPM\xspace{} on a real system using an FPGA platform connected to a host system through PCI-E. The system configurations are as shown in Table \ref{tab:system_config}.
The host system has an AMD Zen 2 processor with 8 cores running at a frequency of 2.1 GHz to match common server processors \cite{cascadelake_sku}. The data-path in the FPGA runs at 300 MHz. \fi \iffalse \begin{figure} \centering \includegraphics[width=\linewidth]{figures/Evaluation architecture.pdf} \caption{(a) Proposed system vs (b) emulated system of NearPM\xspace{}.}\ \label{fig:implementation} \vspace{-0.2in} \end{figure} \fi \subsection{Hardware Implementation} We use the Xilinx Virtex UltraScale+ VCU118 evaluation platform \cite{vcu118} to implement NearPM\xspace{}. The development board is attached to a PCIe $3.0\times8$ slot, with a bandwidth of 8 GB/s. We use 2 GB of the on-board DRAM to emulate PM on a NearPM\xspace{} device. In our evaluation, the access latency to the emulated PM is 436 ns, similar to measured latencies of Intel's Optane DCPMM \cite{izraelevitz19_dcpmm}. NearPM\xspace{} connects to the CPU's memory controller via the PCIe bus. Constrained by the FPGA platform, we implement two NearPM\xspace{} devices on the same FPGA, and the emulated PM on the two devices is interleaved at a 512 B granularity. Each NearPM\xspace{} device contains four NearPM\xspace{} units running at 300 MHz, connected through an internal AXI bus with 4 GB/s bandwidth. \iffalse Figure~{\ref{fig:implementation}} shows the comparison between the proposed system and the emulated system implementation. \korakit{Remove Fig 13 and change the text to explain it.} Figure~{\ref{fig:implementation}}a shows the real delays we measured on an Intel Optane based system and Figure~{\ref{fig:implementation}}b shows the memory access delays observed in the emulated setup. Because the PM memory access latency in the FPGA-emulated system is $3.4\times$ slower compared to a real Optane based system.
We make the memory access latency ratio of each system to be as close as possible by capping the emulated system's CPU frequency to 2.0 GHz and reduce the DRAM frequency to 1600 MHz. This way, the emulated system proportionally slows down each components to make sure the emulated PM access latency comparable to the rest of the system. With all these factors taken into account, we justify that the implementation is as closest it can be to the real system. \fi \iffalse The size of the added hardware structures used for NearPM\xspace{} ordering are fixed and are decided at design time. \emph{NearPM\xspace{} access LUT} (\cref{fig:depend}) needs to keep track of the in-flight operations in NearPM\xspace{} units therefore only need to have entries equal to the number of NearPM\xspace{} units (4 in our implementation). \emph{Host access LUT} needs to keep track of the pending requests to the NearPM\xspace{} unit therefore it should be able to keep entries equal to the request FIFO, which is 32. Therefore, the latency to access these structures are fixed and is reflected in the evaluation. \fi The DRAM on the FPGA board is mapped to the CPU's physical memory space in the write-back cacheable mode and is directly accessible through load-store instructions. However, the Linux kernel maps the FPGA's memory as non-cacheable by default. Thus, we manipulate the memory type range register (MTRR) at boot time to enable write-back caching. As a result, our implementation provides a software-enabled coherent memory. In upcoming CXL \cite{cxl} systems, we expect even better performance. \iffalse \paragraph*{\textbf{Requirements}} There are two main requirements to have the FPGA prototype functioning. First, NearPM\xspace{} needs to provide load-store access to programs and an interface to offload crash consistency operations. Second, the NearPM\xspace{} implementation also needs to provide performance similar to PM systems, i.e., similar memory access speed.
\paragraph*{\textbf{Challenges in prototyping}} The PCI-E based FPGA implementation faces two challenges. First, the Linux kernel enumerates NearPM\xspace{} as an IO memory device which is not accessible via PM program. And second, the kernel maps PCI-E memory-mapped IO (MMIO) region as uncacheable by default. As caching is a major mitigation to the long memory access latency (also true for real PM systems), lack of caching support slows down host processor's access to the emulated PM. \paragraph*{\textbf{Kernel modifications}} To overcome these two challenges, we implement the following kernel modifications. First, to allow the PM program to access NearPM\xspace{} as a byte-addressable PM, we apply kernel parameters that memory map (mmap) NearPM\xspace{}’s PCI-E memory region as PM at boot time. Second, to enable caching, we modify the Linux kernel by implementing a NearPM\xspace{}-supporting module that initializes the memory type range register (MTTR) and sets the desired the caching parameter (as write-back cachable) at the boot time. Our experiment shows that the cached setup is $7\times$ faster than no-caching. \fi \subsection{Software Interface}\label{subsec:interface} The NearPM\xspace{} software interface offloads crash consistency operations to the PM device. Our current implementation provides the primitive operations needed by the crash consistency mechanisms in \cref{tab:commonops}. To offload an operation, the program calls the corresponding function, which generates a NearPM\xspace{} request by writing its operands to a dedicated memory-mapped address range on the NearPM\xspace{} FPGA device. \iffalse \paragraph*{\textbf{Logging}} In fine-grained logging, the NearPM\xspace{} library allocates a log and generates metadata in a separate location. For \emph{undo-logging}, the library copies the original data to the log. Then, the PM program updates the data in-place. For \emph{redo-logging}, the library redirects the update to a log.
After the update is complete, the PM program copies updated data from the log to the original location and commits the update. In both cases, the metadata generation and data copy operations are offloaded to the NearPM\xspace{} hardware. \ \paragraph*{\textbf{Checkpointing}} Different from logging, checkpointing is coarse-grained (e.g., page granularity). Therefore, the NearPM\xspace{} library tracks the modifications transparently using a page fault handler. Initially, all pages are set to be read-only. Upon a write to the page, the handler steps in. The handler records the address range of updated data at page granularity. After performing an update request, the program makes a checkpoint for the updated address ranges by copying updated data to the newly allocated page. In NearPM\xspace{}'s implementation, the data copy operation is offloaded to NearPM\xspace{} hardware. \paragraph*{\textbf{Shadow paging}} Like checkpointing, shadow paging is also coarse-grained. Its page fault handler performs copy-on-write upon any update to the page--creating a shadow copy of the original page and redirect the updates to the shadow copy. At commit, the program remaps the shadow page to the original location. In NearPM\xspace{}'s implementation, the creation of shadow page is offloaded to NearPM\xspace{} hardware. \fi \section{Evaluation} \label{sec:eval} \subsection{Methodology} \label{subsec:methodology} \begin{figure} \centering \includegraphics[width=.8\linewidth]{charts/datacopy.pdf} \vspace{-0.05in} \caption{Data movement speedup of NearPM\xspace{} over CPU.}\ \label{fig:datacopy} \vspace{-0.2in} \end{figure} \noindent\textbf{System configuration.} We evaluate PPO\xspace{} on our NearPM\xspace{} prototype (implementation in Section~\ref{sec:implementation}), using the system configuration in Table \ref{tab:system_config}.
In the rest of the evaluation, we focus on a system with ADR support~\cite{snia_adr}, where a combination of \texttt{CLWB} (or other flush instructions) and \texttt{SFENCE} is needed to ensure data persistence. Recently, an extension of ADR, eADR, uses batteries to back up the CPU caches and thereby eliminates the need for cache flushes/write-backs. In Section \ref{subsubsec:eadr}, we also evaluate NearPM\xspace{} in an emulated eADR~\cite{eadr} system, where flushes/write-backs are unnecessary. \noindent\textbf{Workloads.} Table~\ref{tab:workloads} lists the workloads and their inputs. TPCC and TATP are PM transactions from prior work \cite{gogte18_pldi}; btree, rbtree, skiplist, and hashmap are key-value stores from the PMDK \cite{pmdk} library; Redis and Memcached are real-world workloads. PmemKV \cite{pmemkv} is a key-value store that uses a B+ tree as the backend. For each workload, we evaluate three crash consistency implementations: \begin{itemize}[leftmargin=10pt] \item {\textbf{Logging:}} The performance of each program's original crash consistency support based on undo/redo logging. \item {\textbf{Checkpointing:}} The performance of a modified crash consistency support based on checkpointing. \item {\textbf{Shadow paging:}} The performance of a modified crash consistency support based on shadow paging. \end{itemize} Note that both checkpointing and shadow paging operate at 4 kB page granularity. \noindent{\textbf{Comparison points.}} \noindent We evaluate four configurations: \begin{itemize}[leftmargin=10pt] \item {\textbf{Baseline}} executes only on the CPU. \item {\textbf{NearPM\xspace{} SD}} offloads crash consistency operations to a single NearPM\xspace{} device. \item {\textbf{NearPM\xspace{} MD SW-sync}} offloads crash consistency operations to two NearPM\xspace{} devices and synchronizes them with a software mechanism based on CPU polling.
\item {\textbf{NearPM\xspace{} MD}} offloads crash consistency operations to two interleaved NearPM\xspace{} devices with delayed synchronization. \end{itemize} \subsection{Evaluation Results} \label{subsec:app} In this section, we evaluate the applications (listed in \cref{tab:workloads}) in the configurations described in \cref{subsec:methodology}. \subsubsection{Micro-benchmark.} We first evaluate a micro-benchmark that copies persistent data of variable sizes. \cref{fig:datacopy} shows the speedup from NearPM\xspace{}. As the size of the data increases, the speedup also increases: from $1.13\times$ when the size is 64 B to $5.57\times$ when copying 16 kB of data. This speedup is comparable to prior FPGA-based NDP prototypes \cite{lee2021hardware,PiDRAM2021,upmem2021}. \subsubsection{Speedup in crash consistency operations.} \cref{fig:crashp_kernel} shows the speedup from PPO\xspace{} in the crash consistency code regions of each workload. On average, PPO\xspace{} achieves $6.9\times$, $4.3\times$, and $9.8\times$ speedup for logging, checkpointing, and shadow paging, respectively. We notice that TATP has a low speedup of $1.23\times$ in undo-logging. The main reason is that TATP has only one NearPM\xspace{} operation, which performs logging and commits immediately afterward. Thus, it does not benefit from parallelism in NearPM\xspace{} execution. \subsubsection{End-to-end speedup.} \label{subsubsec:overall_perf} We evaluate the end-to-end performance of all four configurations in \cref{subsec:methodology}. NearPM\xspace{} SD achieves $1.29\times$, $1.15\times$, and $1.28\times$ average speedup for logging, checkpointing, and shadow paging, as presented by the first bar in the graphs in \cref{fig:crashp}. This result shows the performance gain that PPO\xspace{} achieves solely through effective handling of ordering between the CPU and NDP.
NearPM\xspace{} MD SW-sync achieves $1.21\times$, $1.14\times$, and $1.23\times$ average speedup for logging, checkpointing, and shadow paging as presented by the second bar in the graphs in \cref{fig:crashp}. Its speedup is lower compared to NearPM\xspace{} SD, due to the synchronization overhead. By reducing the synchronization overhead, NearPM\xspace{} MD achieves $1.35\times$, $1.22\times$, and $1.33\times$ speedup on average in the three crash consistency mechanisms, as presented by the third bar in \cref{fig:crashp}. \begin{figure} \centering \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[height=1.2in]{charts/parallelpercent1.pdf} \caption{Percentage of parallel execution between CPU and NearPM\xspace{}.} \label{fig:parallel} \end{minipage} \hspace{0.05in} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[height=1.2in]{charts/numunits.pdf} \caption{End-to-end performance with variable \# NearPM\xspace{} Units.} \label{fig:numndp} \end{minipage} \end{figure} \subsubsection{CPU-NearPM\xspace{} parallelism.} \label{subsubsec:parallelism} One benefit of NearPM\xspace{} is the parallel execution between the NDP units and the CPU, i.e.,\xspace the CPU and NearPM\xspace{} can execute at the same time for a certain fraction of the program. \cref{fig:parallel} presents the average percentage of execution that is parallelizable (the lower stack). On average, logging, checkpointing, and shadow paging have 20.01\%, 17.25\%, and 24.68\% of the execution parallelizable, respectively. \subsubsection{Performance of variable numbers of NearPM\xspace{} units.} This experiment evaluates the performance with 1, 2, and 4 NearPM\xspace{} units.
\cref{fig:numndp} shows that the speedup over the CPU-based baseline increases with more NearPM\xspace{} units, as the offloaded program contains parallelizable operations; for example, copying the multiple cache lines of a page can happen in parallel. \begin{figure} \begin{minipage}[t]{0.54\linewidth} \centering \includegraphics[height=1.2in]{charts/multi.pdf} \caption{Throughput with multiple threads (mc: memcached).} \label{fig:multi} \end{minipage} \hspace{0.05in} \begin{minipage}[t]{0.42\linewidth} \centering \includegraphics[height=1.2in]{charts/eadrperf.pdf} \caption{Average end-to-end performance with eADR.} \label{fig:eadr_perf} \end{minipage} \end{figure} \subsubsection{Multithreaded performance.} \label{subsubsec:multi} This experiment evaluates the performance when the application on the host CPU is multithreaded. We take the two real-world workloads, Redis and Memcached, which can scale the number of clients and backend handlers from 1 to 16 threads. \cref{fig:multi} presents the speedup over the CPU-based baseline with the same number of threads. As the number of threads increases, the speedup from NearPM\xspace{} decreases, but NearPM\xspace{} still outperforms the baseline. The main reason for this reduction is that the number of execution units in NearPM\xspace{} is limited to four by our FPGA platform. We expect commercialized systems to integrate more units for intensive workloads. \iffalse \subsubsection{\textbf{Sensitivity study on checkpointing frequency}} In Section~\ref{subsubsec:overall_perf}, we evaluate the performance of a per-query checkpointing scheme. In this section, we vary the checkpointing frequency, from checkpointing after each update query to once every 20 update queries. Figure~\ref{fig:checkfreq} presents the speedup over the baseline. We observe that the performance increases when the frequency of checkpointing decreases, as moving larger chunks of memory at a time amortizes the offloading overhead.
\fi \iffalse \subsubsection{\textbf{Performance comparison with HOOP}} We emulate the design of HOOP~\cite{cai2020hoop} in our evaluation platform and compare its performance with NearPM\xspace{}, as shown in Figure~{\ref{fig:hoop}}. We observe that the performance of HOOP depends on the PM program's contention on the OOP region. When the contention increases, NearPM\xspace{} performs faster than HOOP. Our evaluation shows that NearPM\xspace{} is $1.07\times$, $1.16\times$, $1.18\times$, and $1.24\times$ faster than HOOP when running 4, 8, 16, and 32 threads. This is because HOOP's garbage collection is done periodically, and when there is high contention on the OOP region, the program needs to wait for the garbage collection to complete. In the case of NearPM\xspace{}, the request FIFO is always processed by NearPM\xspace{} units, allowing NearPM\xspace{} to handle resource contention better than HOOP. \fi \iffalse \subsubsection{Sensitivity study on read/write ratio.} Next, we evaluate the speedup from NearPM\xspace{} under different read/write ratios. Figure \ref{fig:rwratio} presents the average speedup over the CPU baseline in both the checkpointing-based and logging-based versions. Note that the three pmemkv-based benchmarks, fill, update, and delete, are not included as they only perform update requests. Overall, more update requests yield better speedup over the CPU-based baseline, as NearPM\xspace{} targets the crash consistency operations involved in update requests. \fi \subsubsection{Performance in an eADR system.} \label{subsubsec:eadr} We also evaluate NearPM\xspace{} under eADR \cite{eadr} by removing cache flush/write-back primitives from the software, following Intel's approach~\cite{pmdk_changelog}. \cref{fig:eadr_perf} compares the average speedup of NearPM\xspace{} under ADR and eADR over the CPU baseline (which always runs ADR).
Overall, with an eADR system, the speedup from NearPM\xspace{} decreases by only $5.65$\%, $3.86$\%, and $3.93$\% for the three crash consistency mechanisms. Even though eADR reduces the write-back cost, the programs still perform intensive data movements. \section{Related Work} \noindent\textbf{Near-data processing.} NDP aims to reduce data movement in conventional CPU-centric systems \cite{ahn2015_ISCA,ahn2015_ISCA2,fernandez2020_ICCD,gao2015_PACT,gao2016_HPCA,hsieh2016_sigarch,hsieh2016_ICCD,kim2016_sigarch,kim2018_BMC,mutlu2020modern,singh2020_FPL,singh2019_DAC,zhan16_micro}. For example, RowClone~\cite{seshadri2013isca} accelerates bulk data movement in DRAM, Boroumand et al.~\cite{boroumand2018asplos} target Google workloads with near-data accelerators, and TETRIS~\cite{gao2017asplos} accelerates neural networks. Different from our work, they target computation rather than persist ordering. Our proposal, PPO\xspace{}, on the other hand, can cooperate with those computation-focused methods to provide persistence guarantees. There have also been works that bring processing to SSDs. For example, CompoundFS \cite{ren20_compoundfs} accelerates file system IO operations in the SSD and Almanac \cite{wang19_almanac} retains SSD history logs using in-SSD logic. Nonetheless, they target conventional storage systems, different from PM systems that directly manage persistent data. \noindent\textbf{Hardware support for memory persistency.} A memory persistency model defines the order in which writes become persistent. Pelley et al. first propose memory persistency \cite{pelley14_isca}, and follow-up research continues to optimize the performance of persistency models. For example, DPO \cite{kolli16_micro}, HOPS \cite{nalli17_asplos}, PMEM-Spec \cite{jeong21_pmemspec}, and Themis~\cite{Shahri2020Fenceless} provide efficient persistency models by reducing the cost of blocking due to data persistence. However, those works target CPU-centric systems.
In comparison, our work, PPO\xspace{}, extends persistence to NDP-enabled PM systems. \noindent\textbf{Crash consistency mechanisms.} A number of previous works provide solutions for crash consistency. Intel's PMDK library provides transactions using a combination of undo and redo logs~\cite{libpmemobj}. There are also databases and key-value stores based on PMDK that maintain crash consistency, such as Redis \cite{pmem_redis}, MongoDB \cite{pmem_mongodb}, RocksDB \cite{pmem_rocksdb}, and Memcached \cite{pmem-memcached}. Atlas \cite{chakrabarti14_oopsla} and SFR~\cite{gogte18_pldi} convert code regions marked by synchronization primitives into undo-log-based transactions. Checkpointing creates a copy of the updated persistent memory to enable recovery \cite{dong2009SC, joshi15_micro}. DudeTM \cite{liu17_asplos} and SoftWrAP \cite{giles15_msst} use shadow memory to maintain redo logs before applying them to PM. These mechanisms tend to maintain additional copies of data for recovery. Therefore, our NearPM\xspace{} can be applied to mitigate their crash consistency overhead. \iffalse \noindent\textbf{Hardware crash consistency mechanisms.} There have been proposals that introduce hardware features for crash consistency. For example, ATOM \cite{joshi17_hpca} and PiCL \cite{nguyen18_micro} use specialized hardware in the L1 cache to enable hardware-based undo-logging transactions. ThyNVM \cite{ren15_micro} proposes a mechanism that decouples checkpointing from the critical path of execution by overlapping the execution with checkpoint generation. However, these approaches only optimize a particular crash consistency mechanism, while NearPM\xspace{} fundamentally optimizes for data movement and can handle various mechanisms. \fi \section{Discussion} \noindent\textbf{PPO\xspace{} for different crash consistency mechanisms.} We have demonstrated that multiple crash consistency mechanisms can benefit from PPO\xspace{}.
These mechanisms all use versioning of persistent data for recovery, which leads to data-intensive operations that are a good fit for near-PM processing. Other mechanisms may benefit from NDP as well. For example, checksum-based crash consistency may offload the checksum computation to the PM device and use PPO\xspace{} to order the persistence of its checksum updates. \noindent\textbf{Scalability.} Though in \cref{sec:eval} we evaluated a prototype with two NearPM\xspace{} devices, due to limitations of our FPGA platform, PPO\xspace{} is scalable because synchronization among devices is off the critical path. We expect future PM systems to have more PM devices, such as memory modules or CXL devices, where such scalable persist ordering is critical to performance. \noindent\textbf{Expected performance in commercial NDP systems.} Our NearPM\xspace{} prototype shows performance comparable to prior work that also prototypes NDP systems \cite{lee2021hardware,upmem2021,PiDRAM2021}: we achieve $7-9\times$ speedup when evaluating the offloaded crash consistency operations (i.e.,\xspace the accelerable computation kernels). In commercialized implementations, we expect better performance, as the NearPM\xspace{} device can allow for more processing units that run at a higher clock speed. \section{Conclusions} In this work, we propose Partitioned Persist Ordering (PPO\xspace{}) for PM systems, which partitions data-intensive crash consistency operations to NDP-enabled PM devices during execution. On top of PPO\xspace{}, we further prototype an NDP system, NearPM\xspace{}, using an FPGA platform, which processes crash consistency operations inside the PM device. We evaluate ten PM workloads, where each workload has three versions that use logging, checkpointing, and shadow paging for crash consistency. Overall, NearPM\xspace{} achieves $4.3-9.8\times$ speedup in the NDP-offloaded operations and $1.22-1.35\times$ speedup in end-to-end execution.
\section{Introduction} \IEEEPARstart{D}{isasters} are catastrophic events that may overwhelm the emergency response capabilities of a community and threaten public safety and the environment \cite{1}. Disasters always have a significant impact on societies, economies, and humankind. According to the Annual Disaster Statistical Review 2017, there were $335$ natural disasters that affected more than 95.6 million people, causing 9697 deaths and total losses of \$$335$ billion \cite{2}. Besides natural disasters, there are also many man-made disasters, such as fires, terrorist attacks, transport accidents, and technological disasters, which cause considerable numbers of victims and economic losses \cite{3}. These disasters inflict great physical and mental pain on people and even claim many lives. Therefore, it is vital to train relevant participants in disaster relief capabilities. Generally, disaster relief capability training adopts traditional approaches such as videos, posters, disaster exercises, etc. However, the major limitation of these approaches is that they cannot fully reproduce the elements of a disaster, and thus cannot ensure that participants receive effective training. A serious game, i.e., a game with non-entertainment purposes, is an innovative approach that can create an immersive disaster environment by using game elements, e.g., symbolic tokens, models, and sound, to simulate the impact of a disaster, e.g., destroyed buildings and injured people \cite{4}. Connolly \cite{5} has shown that participants in disaster relief can obtain more effective training through serious games than through traditional approaches. Therefore, serious games have taken their place in disaster relief training. Hence, serious games for disaster relief training have drawn widespread attention, and much related research has been done. However, most studies focus on a certain stage of a disaster and do not cover the entire disaster process.
However, a successful disaster relief process should involve all possible activities and situations at all stages of a disaster \cite{6}. Disaster relief training needs to train different capabilities in each stage of a disaster. Therefore, it is necessary to classify the relief work in the different stages of a disaster. Disaster management \cite{6} is defined as the organization and management of resources and responsibilities for all stages of a disaster, and it can effectively guide disaster relief actions in each stage. Because there has not yet been a systematic model covering all stages of disaster relief, we integrate the idea of disaster management into the entire disaster relief process. According to the International Federation of Red Cross and Red Crescent Societies (IFRC), disaster relief is divided into three stages: Preparedness, Response, and Recovery \cite{7}. By contrast, most literature and organizations hold that disaster relief consists of four phases based on disaster management: Mitigation, Preparedness, Response, and Recovery \cite{8}. We analyze these two frameworks and derive a new framework that describes the entire disaster relief process. Then, we review a number of Serious Games for Disaster Relief (SGDRs) within our framework and analyze their characteristics, target groups, techniques, and the disaster relief capabilities they may train, so as to offer reliable guidance for relevant participants, e.g., disaster communities, rescuers, policymakers, and incident commanders. This paper discusses the limitations of SGDRs from multiple aspects and further proposes some suggestions to help developers design more effective serious games. The rest of the paper is organized as follows: Section 2 introduces traditional training methods as well as serious-game training methods, and then presents our stage-classification framework for disaster relief according to the different tasks in each stage.
Section 3 surveys the SGDRs in each disaster relief stage. Section 4 analyzes the deficiencies of SGDRs and proposes corresponding countermeasures. Finally, Section 5 draws conclusions and outlines future directions. \section{Background} \subsection{Disaster Relief Training Methods} Disaster relief is a fundamental process in fighting disasters. Effective disaster relief can greatly reduce the losses caused by disasters. In disaster relief, relief workers have to work under great pressure to make decisions and implement appropriate actions. In contrast, training aims to provide individuals with the knowledge, skills, and attitudes to cope with potential stressors \cite{9}. Consequently, effective disaster exercises can greatly improve participants' ability to deal with disasters. In general, people often use operation-based methods, such as disaster drills, to practice and maintain rescue capabilities, and discussion-based methods, such as tabletop exercises, to develop and assess plans, policies, and procedures.
\begin{table*} \centering \caption{The Main Approaches for Disaster Relief Training} \setlength{\tabcolsep}{3pt} \renewcommand\arraystretch{1.5} \begin{tabular}{|m{3.0cm}<{\centering}|m{3.0cm}<{\centering}|m{6.0cm}<{\centering}|m{6.0cm}<{\centering}|} \hline Approaches&Target abilities&Advantages&Limitations \\ \hline Disaster drill & Rescue skills &\tabincell{c}{benefits team collaboration training,\\improves and evaluates\\local disaster response capacity, etc.} & \tabincell{c}{expensive,\\unfavorable to newcomers,\\different from real disaster rescue,\\rescuers may be injured in training, etc.}\\ \hline Tabletop exercise&Disaster planning&\tabincell{c}{low-stress environment,\\low cost, gathering ideas and wisdom,\\facilitates group discussion of problems, etc.}&\tabincell{c}{difficult to organize,\\lack of realism,\\unable to replicate every aspect\\of a hypothetical situation,\\provides only a superficial review, etc.}\\ \hline Serious game&\tabincell{c}{All disaster relief\\capabilities}&\tabincell{c}{safe, low cost,\\customizable scenes,\\repeatable training,\\simulates scenes that are difficult\\to reproduce in reality, etc.} &\tabincell{c}{discomfort of game\\technology to the human body,\\people may not take it seriously,\\mistakes in the game may be ignored,\\inability to fully express the\\complexity of the disaster, etc.}\\ \hline \end{tabular} \label{tab1} \end{table*} It is generally known that disaster drills are coordinated, supervised activities, usually used to test specific operations or functions \cite{10}. Moreover, the disaster drill training method is widely used around the world and is regarded as a fundamental tool for evaluating and improving local disaster response capacity \cite{11}. In a disaster drill, rescuers operate in various training scenarios affected by a simulated disaster, where victims are replaced with dummies, and they need to use various rescue skills to keep the ``victims'' safe.
Therefore, the disaster drill is effective in improving personal disaster rescue capabilities, but it is not optimal for all situations. On the one hand, it is relatively expensive, because it consumes materials and requires a dedicated training area in which various environments (such as buildings, ships, trains, etc.) need to be built. On the other hand, a drill may be overwhelming to newcomers, especially when it involves large-scale simulations \cite{12}. In fact, rescue behavior in real life differs from that in experiments such as drills \cite{13}. Additionally, rescuers are likely to be injured during a drill \cite{14}, which means that current drill training still has limitations. A tabletop exercise is a discussion-based learning experience in which participants play a role and use their strategies to solve problems \cite{15}. During the discussion, participants can not only strengthen communication but also evaluate the effectiveness of emergency response strategies. This method is often used to train disaster planning capabilities, such as developing strategies to reduce disaster risk and assessing disaster-related plans and policies. For example, Taylor et al. use tabletop exercises to train officials' strategies against pandemic influenza \cite{16}. Khankeh et al. use tabletop exercises to train hospital managers in disaster planning \cite{17}. Wendelboe et al. use tabletop exercises to review and test the measures taken by university leaders to deal with COVID-19 \cite{18}. These studies show that tabletop exercises are necessary for disaster relief planning and future disaster prevention. However, tabletop exercises also have their limitations. It is difficult to satisfy the requirement of multi-party participation, where officials, policymakers, managers, and experts must gather in one place.
Apart from that, like the disaster drill, it lacks realism, differs from the real situation, struggles to consider all aspects, and provides only a superficial review of the overall plan \cite{19}. Consequently, it is necessary to investigate innovative and more effective approaches to overcome these limitations. Considering the apparent limitations of traditional approaches in disaster relief training, serious games have been used as an alternative to traditional exercises. Serious games use game technologies and game elements in applications that aim at learning or training, not only at entertainment \cite{20}. Unlike traditional disaster drills and tabletop exercises, serious games can easily simulate the elements of a disaster by using game elements, such as symbolic tokens, models, sound effects, virtual reality, and so on. In this context, each participant may experience certain key characteristics of a real disaster, which enables them to better understand the disaster. Besides, the serious-game training method has many other benefits that derive from the characteristics of games. It allows more frequent training, makes it possible to train in situations that are not easy to reproduce in the real world due to cost, safety, and time concerns, and allows better evaluation \cite{21}. At the same time, it provides a safe, low-cost alternative that can be practiced in certain situations and gives trainees the opportunity to train various work-related procedures. The rising popularity of various types of games, such as electronic games, VR games, and somatosensory games, continues to raise the upper limit of serious-game training. While enjoying the high immersion brought by these technologies, we also need to overcome their shortcomings, such as eyesight problems caused by electronic devices, dizziness and sickness caused by VR technology \cite{22}, the discomfort caused by somatosensory clothing, and so on.
This requires us to plan the training time with these technologies reasonably. In addition, serious games, as a type of game, also have some limitations. Firstly, serious games may build up a false sense of security \cite{21}, because in serious games players are free to make mistakes without any actual punishment, especially in games without a good feedback mechanism, which may lead players to ignore these mistakes even in the real world. Secondly, certain groups are not familiar with what serious games are or with the difference between serious games and ordinary games; regarding them merely as games, users may not take serious games seriously. Finally, serious games inevitably simplify disasters and thus may fail to adequately portray the complexity of a disaster and of the disaster relief process. For example, the terrain's control of lava flow and tsunami distribution and the relationship between earthquake intensity and source distance \cite{23} are hard to model, and excessive use of water in a fire may create a steam cloud that poses a major threat to firefighters and any potential victim being rescued. These details are often difficult to consider in a game \cite{24}. Therefore, serious games cannot completely replace traditional disaster relief training, but taking serious-game training methods as a supplement to traditional training methods can reduce training costs and provide a specific environment for training specific tasks. At the same time, serious games can also be used as a prelude to traditional training: after trainees use serious games to reach a certain level, they move on to expensive and resource-intensive traditional training, which ensures the efficiency of training. \subsection{Disaster Relief Cycle} Disaster relief is a complex process that usually requires a large number of measures to deal with a series of uncertain emergencies. It normally requires various actors (e.g.
civilians, governments, and non-governmental organizations) to perform their duties and coordinate with each other at each stage of the disaster. At the same time, serious games have been widely used in relief training for different groups of people at different stages of disasters. Therefore, it is very necessary to distinguish the relief work in different stages of a disaster and select appropriate SGDRs to train different abilities. There are currently two main frameworks that describe the stages of disaster relief based on disaster management. The first framework, proposed by the Federal Emergency Management Agency (FEMA) \cite{25}, divides this process into four phases, while the second framework, proposed by the IFRC \cite{7}, divides the process into three phases. Both frameworks can clearly express the relief work at each stage of a disaster. However, their descriptions of the disaster relief process have their own characteristics. Based on this, we further analyze the characteristics of these two frameworks and derive a comprehensive disaster relief classification framework. The framework is not only a guideline that directs participants in disaster relief to take appropriate relief actions at different stages of a disaster but also serves as a classification standard for SGDRs. \begin{figure}[h] \centering \includegraphics[width=3.7in]{FEMA} \caption{The Framework of FEMA} \end{figure} FEMA \cite{25} proposed a basic four-stage framework, which is widely used by most articles and organizations. As shown in Fig. 1, it includes Mitigation, Preparedness, Response, and Recovery. The process cycles through these stages, with different relief work being carried out in each phase. For example, disaster mitigation is the stage that aims to eliminate or reduce the probability of disaster occurrence or to reduce the negative effects of inevitable disasters. People take actions such as disaster analysis, disaster forecasting, and disaster defense projects.
Disaster preparedness refers to activities that increase the likelihood of a successful disaster response; drawing up a disaster response plan and raising public awareness, for example, are preparedness activities. The purpose of disaster response is to respond to a disaster as rapidly as possible, mobilizing resources to rescue survivors and meet their basic needs. Disaster recovery aims to assist those who have suffered the impact of a disaster in returning to normal life. Both the mitigation stage and the preparedness stage occur before the disaster, and the two are complementary, so the relief work in these phases is always closely linked. For example, people usually carry out disaster prediction and disaster planning at the same time to ensure a timely response to the disaster. Similarly, most related serious games pair preparedness skills with mitigation knowledge. Combining these two stages therefore shows the pre-disaster relief work more concisely and clearly.
\begin{figure}[h] \centering \includegraphics[width=3.65in]{IFRC} \caption{The Framework of IFRC} \end{figure}
To address this issue, we analyzed the IFRC's \cite{7} division of the disaster relief phases. The IFRC divides the disaster relief process into three stages based on disaster management—Preparedness, Response, and Recovery—as shown in Fig. 2. Unlike FEMA, the IFRC combines the relief work before the disaster into a single stage to express the disaster relief process more simply and clearly. In the preparedness stage, people take relief measures that reduce the vulnerability of potential disaster areas and strengthen their capacity to respond to disasters. Another difference is that this framework depicts the phases of disaster relief as a linear process, whereas disaster relief is best represented as a cycle, which is very important to the process.
A disaster does not simply end one day: a post-disaster review is usually carried out during the recovery period. Such reviews often reveal shortcomings in the previous disaster plan and then provide valuable experience and strategies for subsequent disaster preparedness \cite{6}.
\begin{figure}[h] \centering \includegraphics[width= 3.5in]{DisasterReliefCycle} \caption{Disaster Relief Cycle} \end{figure}
Therefore, we combined the two frameworks above to design a more comprehensive framework as a standard for dividing the stages of the disaster relief cycle, as shown in Fig. 3. On the one hand, we divide the disaster relief process into three stages—Preparedness, Response, and Recovery—each of which places particular demands on participants in disaster relief. In the preparedness stage, policies are adopted and disaster response plans are drawn up to limit the impact of a disaster. In the response stage, rescue actions are taken to keep people safe, and emergency supplies are provided to the disaster area to meet the basic needs of refugees. In the recovery stage, the disaster area is helped to rebuild, and the community's anti-disaster capacity is strengthened. On the other hand, these phases follow one another in a continuous cycle, emphasizing the positive impact of previous experience on current planning. SGDRs for each stage should also exhibit these properties. In the following sections, SGDRs at each stage of a disaster are reviewed on the basis of this framework.
\begin{table*} \centering \caption{Relief Works at Each Stage of Disaster Relief} \label{table} \setlength{\tabcolsep}{3pt} \renewcommand\arraystretch{1.5}
\begin{tabular}{|m{3.0cm}<{\centering}|m{3.0cm}<{\centering}|m{6.0cm}<{\centering}|m{6.0cm}<{\centering}|}
\hline
Stage&Time&Characteristics&Relief Works \\
\hline
Preparedness & Pre-disaster &\tabincell{c}{Prepare for and prevent the\\ effects of disaster} & \tabincell{c}{deploy disaster prevention projects,\\ make disaster policies and plans,\\ raise public awareness,\\ improve the environment, etc.\\ }\\
\hline
Response&\tabincell{c}{During the disaster\\ and one to six months\\ after}&\tabincell{c}{Keep people safe and meet\\ their basic needs}&\tabincell{c}{rescue victims in disaster,\\ evacuate people,\\ provide emergency supplies, etc.\\ }\\
\hline
Recovery&\tabincell{c}{From six months after\\ the disaster, lasting for\\ a long time}&\tabincell{c}{Recover infrastructure and\\ strengthen future anti-disaster capacity} &\tabincell{c}{rebuild homes after the disaster,\\ ensure medical and health care in\\ the disaster area, etc. }\\
\hline
\end{tabular}
\label{tab1}
\end{table*}
%
\section{Serious Games for Different Stages in Disaster Relief}
In this part, SGDRs are classified on the basis of the framework described above. As shown in TABLE II, SGDRs and related games are presented according to the relief work of each disaster stage.
\subsection{SGDRs for Preparedness Stage}
The preparedness phase of disaster relief takes place before a disaster or after recovery from the previous one. It involves relief actions taken before an emergency to ensure a more effective response and to minimize the damage caused by a disaster, such as drawing up disaster response plans, constructing disaster prevention projects, improving the environment, and increasing the skills and knowledge of staff and the community.
Without sufficient disaster preparedness, people are caught off guard when a disaster occurs, leading to heavy casualties and property losses. Therefore, a number of serious games have been designed for disaster preparedness, helping people understand it; some, in particular, can train managers to predict, plan for, and manage disasters before they occur. Many games focus on preparedness knowledge, enhancing civilian disaster awareness, or evacuation skills. The knowledge conveyed by these games includes what to do and where to go before a potential disaster strikes. Mannsverk \cite{26} and his team developed a serious game to raise civilian awareness of the importance of preparation for floods. In Disaster Master \cite{27}, people learn how to identify the first signs of disasters and how to shelter during a disaster. The primary audience of Earth Girls \cite{28} is preteens; it aims to help players better understand natural disasters through imaginative and engaging gameplay. Jacob et al. \cite{29} developed a game called Smart Fire Safety, in which players learn about fire safety hazards and precautions in the kitchen or at a gas station. In addition, many games can improve evacuation skills (e.g. system perception, pre-movement behavior, route finding, exit choice, and navigation interactions). Rahouti et al. \cite{30} developed a game for firefighting and evacuation training in medical institutions, which simulates a specific fire emergency and aims to train medical staff to provide evacuation instructions to patients. This game differs from other systems in that it generates 3D virtual patients with limited mobility, whom the medical staff are expected to assist appropriately.
In order to simulate the impact of crowd behavior on individual evacuation, some games use artificial intelligence techniques to control the behavior of crowds or Non-Player Characters (NPCs). Ribeiro et al. \cite{31} applied crowd representation models (e.g. cellular automata models, force-based models, and artificial intelligence-based models) to represent the movement and behavior of crowds, and incorporated a human behavior model into the game to simulate behavioral factors in emergencies, such as panic, disorganization, and irrationality. In this game, virtual crowds with different reactions and pedestrians with different behaviors are generated. Players must overcome their instinct to follow other individuals and choose the most appropriate escape route based on their own judgment. Preliminary evaluation results show that this game is quite promising for training players' judgment in disasters. In another study, Ruffino et al. \cite{32} bring Building Information Modeling (BIM) and serious games together. This combination has great potential because using BIM to build models makes them closely related to the real built world \cite{33}. The game is based on a real building and is divided into four levels of increasing difficulty. Players must find the shortest path to the safety exit and run to it as fast as possible when a fire occurs. Nowadays, VR and augmented reality (AR) technologies have proved to be a valuable alternative for disaster evacuation training. Liang et al. \cite{34} developed a VR game to improve earthquake evacuation skills. In this game, the immersion of VR technology is used to simulate building shaking; players perceive the earthquake by observing the movement of buildings, ceiling lamps, and furniture in the virtual world. This provides players with knowledge of how to perceive earthquakes and respond safely.
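The cellular-automata crowd models mentioned above can be sketched, in heavily simplified form, as a grid of agents that each step toward the nearest exit every tick. The floor plan, neighborhood, and update rule below are illustrative assumptions for exposition only, not the specific models used in \cite{31}:

```python
from collections import deque

def distance_field(grid, exits):
    """BFS distances from every free cell to the nearest exit.
    grid: list of strings, '#' = wall, '.' = free cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {e: 0 for e in exits}
    queue = deque(exits)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def evacuate(grid, agents, exits, max_ticks=100):
    """Each tick, every agent moves to a neighbouring cell closer to an
    exit; cells already claimed this tick are blocked, which lets
    congestion emerge. Returns the tick at which the last agent exits."""
    dist = distance_field(grid, exits)
    agents = set(agents)
    for tick in range(1, max_ticks + 1):
        moved = set()
        # Agents nearest an exit move first (a simple priority rule).
        for r, c in sorted(agents, key=lambda a: dist.get(a, 1e9)):
            here = dist.get((r, c), float("inf"))
            options = [(nr, nc)
                       for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                       if (nr, nc) in dist and dist[(nr, nc)] < here
                       and (nr, nc) not in moved]
            target = min(options, key=dist.get) if options else (r, c)
            if target not in exits:        # agents reaching an exit leave
                moved.add(target)
        agents = moved
        if not agents:
            return tick                    # everyone has evacuated
    return None
```

On a small floor plan, `evacuate` returns the number of ticks until the last agent reaches an exit; because two agents may not enter the same cell in one tick, queueing at the exit arises without any extra modeling.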
Unlike VR, which immerses users in a computer-generated environment, the more novel AR technology combines the real world with virtual digital content, bringing training closer to reality \cite{35}. Catal et al. \cite{36} developed an AR-based evacuation system suitable for fires, earthquakes, and chemical attacks. They implemented a GPS system to determine each player's location and provided virtual instructions to players according to that location; players had to follow the instructions to leave the building. Experiments and questionnaires showed that the system was effective: most participants learned disaster evacuation knowledge and skills. After development was completed, some authors \cite{31}\cite{32} had their games evaluated by different groups of people. They tested multiple players, half of whom were familiar with the routes of the building in the game and half of whom were not. The results showed that players familiar with the building routes passed the game faster and mastered evacuation skills more efficiently. Therefore, for effective training, such serious games are less suitable for visitors or other people unfamiliar with the building routes; newcomers should first familiarize themselves with the building's routes and environment before training with the game. The aforementioned games can raise public awareness and knowledge of disaster evacuation, but they do little to train managers' disaster preparedness capabilities. There are also many SGDRs aimed at pre-disaster preparedness measures. These are mainly PC- or web-based strategy and role-playing games, developed with simple game engines or web technologies, that let the player control emergency planning, disaster policies, or city management.
Players must choose from multiple options and may observe the negative or positive consequences of their decisions when the disaster finally occurs. Floodsim \cite{37}, a simulation-based serious game, was developed by Mendez et al. In this game, the player acts as a policymaker in control of UK flood policy, aiming to minimize the impact of flooding on people. The player makes policies according to population density, economic output, and flood risk, and the game then presents the results of those decisions in the form of a newspaper. In this way, the game makes it easy to check whether a flood policy is reasonable. Stop Disaster \cite{38} is another disaster simulation strategy game, developed in Flash, which simulates five kinds of disasters and lets players deploy resources or construct man-made protections, within a budget, to protect a town. The player's final score is closely related to the effectiveness of resource allocation. Disaster relief is best seen as a cyclical process: disaster preparedness and prevention plans should be based on the experience of previous disasters. For example, the strategies and plans of the fight against SARS in 2004 can provide valuable experience and guidance for the prevention of the new coronavirus in 2020. However, most SGDRs for the preparedness stage place almost no emphasis on the importance of experience from previous disasters. Serious game designers should take this into consideration when developing games in the future.
\subsection{SGDRs for Response Stage}
The response phase of disaster relief takes place during the disaster and for one to six months afterwards. In this stage, the disaster has occurred and has had a great impact on people, who undertake a large and complex set of activities to minimize the damage caused by the disaster and protect life and property.
The primary aim of the response stage is to rescue victims from immediate danger. This requires rescuers to carry out a series of search and rescue actions, which are closely tied to the incident commander's command and planning. SGDRs for the response stage therefore divide mainly into two types: in one, the player is a rescuer involved in a specific rescue mission; in the other, the player takes the role of the incident commander directing the disaster relief team. At the same time, it is also necessary to provide immediate relief (e.g. medical care, food and water, and temporary shelter) for refugees to meet their basic needs. A few non-electronic games deal with providing funds or medical care to disaster areas. For example, Buzz about Dengue \cite{39} is a team-based strategy game that teaches players how to prevent dengue fever through medical care, and Dissolving Disasters \cite{40}, designed by the Red Cross, helps people understand the importance of donors to refugees in disaster areas.
\subsubsection{SGDRs for Rescuers}
When a disaster occurs, rescue workers (e.g. firefighters, soldiers, medical staff, and volunteers) are always on the front line of disaster response, and while saving people their own lives are also threatened. According to the International Association of Fire Fighters (IAFF), fire departments have four times the rate of work-related injuries of private industry, and one in every three firefighters is injured while on duty \cite{41}. Thus, to reduce rescuer casualty rates and improve rescue efficiency, a large number of serious games have been developed for rescuers.
Hazmat: Hotzone \cite{42} is an instructor-based simulation built on video game technology, developed by the Entertainment Technology Center at Carnegie Mellon University and the Fire Department of New York to train first responders in handling chemical and hazardous-material emergencies. In this game, the instructor has full control over every aspect of the simulation, such as specifying the wind, temperature, and precipitation of outdoor environments, the type and location of the hazardous material, and where the victims are located. The instructor can also freely add new elements to the game content while rescuers are training. This greatly increases the randomness of training and can be used to drill unexpected emergencies, but it also increases the instructor's burden of scene creation. Flame-Sim \cite{43}, developed by Flame Sim LLC, is a commercial training game that drives every firefighter to make decisions on the fire ground. It uses scenario-generation technology, allowing the user to change or create a scene within minutes, which maximizes training time without lengthy setup. Moreover, like Hazmat: Hotzone, Flame-Sim can help firefighters train many rescue operations, such as selecting and using rescue tools, searching rooms, opening vents, and rescuing people. Both games also allow multiple participants to cooperate on tasks in a networked 3D game environment. To make the game environment more realistic, a large number of technologies and peripherals are used to simulate real disaster situations as closely as possible, including VR/AR/mixed reality (MR) technology, sensor technology, CAVE technology, and somatosensory technology. Virtual reality is the main way to enhance immersion and interactivity: the company Nano Games sp. z o.o. \cite{44}, for example, has designed a game that simulates different types of traffic accidents.
With the help of VR technology, rescuers wearing protective clothing and carrying real equipment can practice the relevant rescue operations, providing appropriate assistance at the virtual accident site. Xu et al. \cite{45} have developed a VR-based game that creates an immersive environment in which crane operators can learn how to deal with railway accidents. The FLAIM Trainer \cite{46} is a firefighting simulator that provides an immersive virtual environment in which firefighters can train for realistic fire emergencies more safely than with traditional methods. It is not the first VR-based firefighting training simulator, but it is the first to combine virtual reality with haptic technologies: it uses heated personal protective clothing and breathing apparatus to simulate the sensory experiences people might encounter in a real fire, including extreme heat and difficulty breathing. FLAIM Trainer also offers multiple scenarios that train multiple basic capabilities. The immersion and other advantages brought by virtual reality technology therefore offer broad prospects for disaster rescue training. In addition to VR, Skovde University and the Swedish Rescue Services Agency used CAVE (Cave Automatic Virtual Environment) technology to develop the game Sidh \cite{47}, which trains firefighters to carry out search and rescue while wearing respirators. During the game, participants walk or run in a small room surrounded by screens and move through a virtual environment searching for virtual victims; the movement speed in the virtual environment is controlled by acceleration sensors installed on the player's boots. Although the simulation of real-life situations by VR and CAVE technology has improved, mapping the real world to the virtual world remains challenging due to the limitations of current technology.
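The boot-sensor control used in Sidh suggests a simple mapping from accelerometer activity to avatar speed. The sketch below is a hypothetical illustration of one such mapping; the dead zone, gain, and sampling assumptions are ours, not Sidh's actual implementation:

```python
import math

def virtual_speed(samples, max_speed=3.0):
    """Map raw boot-accelerometer samples to a virtual walking speed.

    samples: list of (ax, ay, az) readings over a short window, in g.
    Returns a speed in m/s, clamped to max_speed.
    """
    # Magnitude of each reading with gravity (1 g) removed: a standing
    # player yields values near zero, a stepping player large swings.
    mags = [abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)
            for ax, ay, az in samples]
    # Mean activity over the window: vigorous stepping -> large value.
    activity = sum(mags) / len(mags)
    # Linear mapping with a dead zone so sensor noise does not move
    # the avatar; both constants are illustrative tuning values.
    dead_zone, gain = 0.05, 6.0
    return min(max_speed, max(0.0, (activity - dead_zone) * gain))
```

A standing player (all readings near 1 g) produces speed 0.0, while alternating heavy and light readings, as during running in place, drives the avatar toward `max_speed`; in practice the constants would be calibrated per player and sensor.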
Mixed reality simulation is an alternative way to balance immersion and realism in training, and research on mixed reality games has shown the benefits of this approach in disaster response training. Scenario-based mixed reality simulation supports team coordination and decision-making training, letting responders coordinate face-to-face in real time as they would in a real disaster \cite{48}. The Icehouse game \cite{49}, developed by the Lincoln Laboratory of the Massachusetts Institute of Technology, provides a simulated game environment in which a group of disaster responders use a wearable computer and an interface designed specifically for the game. In this live-action game, players move through a physically simulated disaster space, which requires physical exertion and team coordination to reduce virtual dangers and rescue virtual victims. Moreover, the game uses wearable technologies to give players information about their teammates and their own status (e.g. the approximate distance between teammates, the distance from the leader, and heart rate). Most of the previously mentioned rescue-skill games also support a multiplayer mode (e.g. Hazmat: Hotzone, Flame-Sim, and Icehouse), which to a certain extent trains collaboration between rescuers. In addition, some low-fidelity games and simulations abstract the cooperative elements of disasters specifically to train team cooperation. For example, the C3Fire \cite{50} simulation is a training environment for team coordination awareness and team decision-making: it generates a task environment in which participants cooperate to complete a specific mission, such as extinguishing a forest fire while saving valuable houses. ZO Toups et al. \cite{51} developed the Team Coordination Game (TeC), which uses team coordination as the main component of the game's core mechanics.
In this game, information is distributed among the team members, so participants must gather information, cooperate, communicate, and rely on each other. In addition, improving the moral judgment of rescuers is also very important, because rescuers often face moral choices during actual rescues. If these are not handled properly, the rescuer may not only miss the best rescue opportunities but also develop mental illnesses such as post-traumatic stress disorder (PTSD) \cite{52}\cite{53}. To address this problem, some games are designed to train the ethical decision-making of inexperienced rescuers. Wahyudin et al. \cite{54} designed MAGNITUDE, a mobile first-person role-playing game (RPG) for training ethical decision-making in disaster situations. During the game, the player confronts NPCs in situations of ethical conflict. For example, a boy's leg is crushed by debris and he is losing a great deal of blood; the heavy equipment needed to clear the debris cannot reach the scene immediately, and the player must choose between amputating and waiting for the equipment, either of which endangers the boy's life. The player must take responsibility for that life.
\begin{figure}[h] \centering \includegraphics[width=3.65in]{Fidelity} \caption{Game Fidelity Adaptation} \end{figure}
With the development of science and technology, a growing range of techniques continues to bring the training environments of disaster rescuers closer to the real world, and most designers and developers of serious games strive for high-fidelity environments, especially in visual scenes. However, studies have shown that higher fidelity does not necessarily lead to higher learning efficiency; the relationship between fidelity and learning efficiency depends on the prior knowledge and skills a trainee has mastered \cite{55}.
In general, novices and intermediate trainees may be distracted by the abundant irrelevant information and activities in a high-fidelity game, which increases cognitive load and results in inefficient learning or training. Proficient knowledge and skills let professionals focus on the main purpose of the game, and high fidelity brings them more immersion, so high fidelity benefits professionals more than other groups. At the same time, high fidelity demands more computing resources, can cause a range of physical discomforts, and increases development costs. One way to overcome these challenges is to use low-fidelity simulation, which reduces the fidelity of the simulation and focuses only on concept learning. Low-fidelity simulation is economical: it reduces the cost of the simulation while increasing focus on the desired educational element \cite{51}. To demonstrate the effectiveness of low-fidelity simulation, ZO Toups et al. \cite{51} tested 28 firefighters; the results showed that low fidelity is an effective training method when the demand for fidelity is not high. In summary, when developing and designing such serious games, developers should tailor different games to the professional level of the target group. For example, novices can be served by a cartoon game style, whereas for professionals the game should be as realistic as possible; when the demand for fidelity is low, or the goal is to train some specific piece of knowledge, that knowledge can be abstracted into a low-fidelity game. However, these solutions require designing a separate game at a different fidelity for each professional level, which greatly increases development costs. With the continuous development of artificial intelligence and adaptive techniques, more and more games have begun to adjust their content based on the player's in-game behavior.
For example, in some quiz games the difficulty of the questions is adjusted according to the player's rate of correct answers \cite{56}. Given the relationship between game fidelity and learning efficiency, game fidelity adaptation may therefore be a feasible method (Fig. 4). On the one hand, player data can be collected through interaction with the game or with sensors (e.g. eye trackers, motion capture, joysticks) \cite{57}\cite{58}, and algorithms such as decision trees and fuzzy logic can then estimate the player's professional level from these data \cite{59}. On the other hand, game fidelity is expressed mainly through materials, textures, and models, so it may be possible to change the game's materials and models according to the player's professional level to achieve fidelity adaptation. This would keep the relationship between the game's environmental fidelity and the player's professional level at the point of highest learning efficiency. We merely offer this as a feasible idea for researchers; the concrete implementation of fidelity adaptation requires further research and exploration.
\subsubsection{SGDRs for Commanders}
Once an emergency occurs, the incident commanders of a rescue service must rapidly assess the situation before deploying their resources. This assessment and command are extremely important to the success of the rescue at this stage: improper or poorly informed decision-making can have catastrophic consequences, leading to loss of life or avoidable property damage. To manage the situation, commanders must make correct decisions even under the extreme pressure of the incident.
In an attempt to provide a realistic training environment, some games combine videos, photos, and role-playing to recreate the scene \cite{60}. However, videos and photos limit the flexibility of the scene and cannot be reused. Therefore, many games based on random events have been developed; in these games, emergencies are generated at random, and players must resolve them reasonably according to the situation at hand. Most of these games involve micromanaging an emergency, with the player directly controlling a selection of rescue units (e.g. firefighters, medical staff, and police) to save as many lives as possible. Such games are often used by incident commanders to train their ability to collect information and allocate disaster rescue resources. For example, Emergency \cite{61} is a disaster strategy game developed by Sixteen Tons Entertainment. In Emergency, the player has a city map that shows the locations of the various rescue resources and the place where the emergency occurs, and must mobilize limited resources to resolve it. This is a complicated process in which the player observes what is happening at the emergency location and then dispatches the needed rescue resources and vehicles: the player must mobilize an ambulance or helicopter upon seeing blood, handle large-scale fires with heavy fire trucks carrying water cannons, and have firefighters cut down trees to create fire breaks when dealing with forest fires. Another game is KriseIM \cite{62}, in which players receive information from a notification window and respond to the crisis by dispatching police, ambulances, and fire departments; at the end of the event, the player and instructor are given a debriefing showing the results of the training. Rescue: Everyday Heroes \cite{63} focuses only on fires.
In this game, the player keeps track of several fire stations, devising a strategy for each mission and choosing the most suitable extinguishing agent for the circumstances. In these games, artificial intelligence is used to control the behavior of a large number of NPCs, especially those affected by the disaster, while the player controls virtual teammates as response team members to rescue the affected NPCs. However, those virtual teammates blindly follow the player's commands and act on them regardless of whether the commands are reasonable. To make the commander consider the safety and emotions of teammates during training, Djordjevich et al. \cite{64} built an emotional model into the virtual teammates of the training game Ground Truth: Toxic City. This game lets players take the role of incident commander, directing virtual teammates to deal with a toxic chemical leak, with the aim of training the commander's strategy. Artificial neural network computations model the intensities of fear and anger, which are then converted into fuzzy sets so that they are easier for the human cognitive model to interpret. If the player orders a virtual teammate to take inappropriate actions (e.g. going deep into the hazardous fog), the teammate's emotional state may change dynamically, causing the teammate to disobey the command or lose some ability to perform other requested actions. Integrating emotions into virtual teammates prompts the player to consider both the rationality of instructions and the emotional state of team members while commanding the rescue, which more accurately reflects the player's decisions in such situations. After a disaster occurs, the incident commander often needs to make a Rapid Damage Assessment (R.D.A.). Sooraj K Babu et al.
\cite{65} came up with a multiplayer game set in an earthquake. Each player takes the role of the head of a department dealing with the rescue operation. Players must first perform an R.D.A. based on the destroyed buildings and injured people; each player then collaborates to provide resources and assistance according to the R.D.A. level and the responsibilities of the department they play. In addition, many studies have shown that a main way for incident commanders to learn about a disaster is through social media \cite{66}\cite{67}. The incident commander should therefore be trained not only to obtain information from the disaster site for damage assessment but also to extract useful information from the media. Abbasi et al. \cite{68} designed a live-action game in which players obtain information from various media channels (e.g. the Internet, TV, newspapers, e-mail, and telephone). Multiple players must exchange information to determine the level and situation of the disaster, training commanders' ability to collect information from the media and to communicate. In the actual rescue process, the commander often needs to cooperate with rescuers at the disaster scene to complete the task, so real-time communication and cooperation between commanders and rescuers are very important. However, in such serious games the training of cooperation is usually concentrated on cooperation between commanders of different departments; few games involve cooperation between commanders and rescuers, which offers developers another design direction.
\subsection{SGDRs for Recovery Stage}
When a disaster is over, rebuilding homes is essential. This stage usually begins about six months after the disaster and lasts for a long time.
In the recovery phase, the relief work is no longer just to provide emergency supplies and search for or rescue victims; it is rather a series of longer-term assistance measures, including returning people affected by the disaster to normal life and strengthening future anti-disaster capacity. However, post-disaster reconstruction is limited by many factors such as regional culture, funding, and the extent of the disaster, so serious games are rarely suited to training in this aspect. Only a few games incorporate the element of improving the environment and living conditions of communities. For example, Hazagora \cite{23} is a board game in which players represent the inhabitants of a volcanic island who must develop and maintain communities. After the disaster, the players need to take measures to maintain the communities, such as removing destroyed buildings, burying the dead, and cleaning up contaminated resources (e.g., food, natural water). In this way, players can experience the impact of the disaster and learn post-disaster strategies to mitigate its continuing effects.
\section{Discussion and Prospects}
Disasters occur every day around the world and affect thousands of people. In order to reduce the losses caused by disasters, participants in disaster relief must perform their duties and train themselves to prepare for disasters. Many studies have shown that serious games are a good way to train the relevant participants. However, SGDRs are not exhaustive and have a series of limitations. Considering these deficiencies, the following aspects of SGDRs deserve further attention. The content of SGDRs is not comprehensive. On the one hand, SGDRs rarely consider regional cultural diversity, custom-imposed taboos, or local needs, which limits their range of use. On the other hand, most SGDRs focus only on the most common disasters such as fire, earthquake, floods, and tsunami.
A few games involve other disasters such as droughts, extreme weather, and disease epidemics. Therefore, developers should take this aspect into consideration when developing SGDRs in the future and continuously enrich their content. Game feedback, including in-game feedback and post-game feedback, is particularly important for SGDRs \cite{69}, but it is often ignored. In-game feedback can judge the player's in-game operations, affirming correct operations and punishing wrong ones; without proper feedback, the player can make mistakes at will in the game, and those mistakes may then be repeated in real-world operations with disastrous consequences. Post-game feedback mainly takes the form of debriefing, which offers the player an opportunity to process and consolidate their in-game operations. Therefore, when the player completes an operation in an SGDR, feedback should be given in some form (e.g., sound, animation, special effects, etc.) based on the operation, and the player should be given a debriefing when the game is over. Some articles indicate that game control complexity and the game environment can affect SGDR training \cite{31}. This requires the operation of SGDRs to be simple, and players should be familiar with the game environment before training. At the same time, developers should design different SGDRs according to the player's professional level: novices can be trained by simple quiz games, while professionals can be trained through procedures or problem-solving in a complex game environment. For the evaluation of the effectiveness of SGDRs, there are few detailed descriptions of comprehensive evaluation models, which leaves room for future research. Only a few games have been tested using player feedback. Their results show that serious games can be effectively used in simulations, training many disaster relief related activities and increasing disaster awareness.
However, in the face of a lack of professional evaluation, we recommend using a combination of player feedback and professional opinions to evaluate serious games in the realm of disaster relief.
\section{Conclusion}
SGDRs are an effective method for disaster relief training and are being intensively studied. In order to show the effectiveness of serious games for disaster relief training, this paper investigated both traditional methods and serious game methods for disaster relief training and determined that serious game training can make up for the limitations of the traditional methods. Apart from that, due to the absence of a systematic description of disaster relief work in the different stages of a disaster, we introduced disaster management and divided disaster relief into three stages: Preparedness, Response, and Recovery. Then, based on the different stages of disaster relief, the technologies and functions of SGDRs were summarized and analyzed. Finally, we discussed the current deficiencies of SGDRs. To sum up, our work can provide guidance for participants in disaster relief work and training. Meanwhile, we provide suggestions for researchers to design more effective serious games for disaster relief. \ifCLASSOPTIONcaptionsoff \newpage \fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{D}{isasters} are catastrophic events that may overwhelm the emergency response capabilities of a community and threaten public safety and the environment \cite{1}. Disasters always have a significant impact on societies, economies, and humankind. According to the Annual Disaster Statistical Review 2017, there were $335$ natural disasters that affected more than 95.6 million people, causing 9697 deaths and total losses of \$$335$ billion \cite{2}.
Besides natural disasters, there are also many man-made disasters, such as fire, terrorist attacks, transport accidents, and technological disasters, which cause considerable casualties and economic losses \cite{3}. These disasters bring people great physical and mental pain and even claim lives. Therefore, it is vital to train relevant participants in disaster relief capabilities. Generally, disaster relief training adopts traditional approaches such as videos, posters, disaster exercises, etc. However, the major limitation of these approaches is that they cannot fully reproduce the elements of a disaster, and thus cannot ensure that participants receive effective training. A serious game, with non-entertainment purposes, is an innovative approach that can create an immersive disaster environment by using game elements, e.g., symbolic tokens, models, and sound, to simulate the impact of a disaster, e.g., destroyed buildings and injured people \cite{4}. Connolly \cite{5} has shown that participants in disaster relief can obtain more effective training with serious games than with traditional approaches. Therefore, serious games have taken their place in disaster relief training. Hence, serious games for disaster relief training have drawn widespread attention, and much related research has been done. However, most studies focused on a certain stage of a disaster and did not cover the entire disaster process. A successful disaster relief process should involve all possible activities and situations at all stages of a disaster \cite{6}, and disaster relief training needs to train different capabilities in each stage. Therefore, it is necessary to classify the relief work in the different stages of a disaster. Disaster management \cite{6} is defined as the organization and management of resources and responsibilities for all stages of a disaster, which can effectively guide disaster relief actions at each stage.
Because there has not yet been a systematic model covering all stages of disaster relief, we integrate the idea of disaster management into the entire disaster relief process. According to the International Federation of Red Cross and Red Crescent Societies (IFRC), disaster relief is divided into three stages: Preparedness, Response, and Recovery \cite{7}. By contrast, most literature and organizations hold that disaster relief consists of four phases based on disaster management: Mitigation, Preparedness, Response, and Recovery \cite{8}. We analyze these two frameworks and derive a new framework to describe the entire disaster relief process. We then review a number of Serious Games for Disaster Relief (SGDRs) within our framework and analyze their characteristics, target groups, techniques, and the disaster relief capabilities they train, so as to offer reliable guidance for relevant participants, e.g., disaster communities, rescuers, policymakers, and incident commanders. This paper also discusses the limitations of SGDRs from multiple aspects and further proposes suggestions for developers to design more effective serious games. The rest of the paper is organized as follows: Section 2 introduces traditional training methods as well as serious game training methods, and then presents the staged disaster relief framework according to the different tasks in each stage. Section 3 surveys the SGDRs in each disaster relief stage. Section 4 analyzes the deficiencies of SGDRs and proposes corresponding countermeasures. Finally, Section 5 draws the conclusions and outlines future directions.
\section{Background}
\subsection{Disaster Relief Training Methods}
Disaster relief is a fundamental process in fighting disasters. Effective disaster relief can greatly reduce the losses caused by disasters. In disaster relief, relief workers have to work under great pressure to make decisions and implement appropriate actions.
In contrast, training aims to provide individuals with the knowledge, skills, and attitudes to cope with potential stressors \cite{9}. Consequently, effective disaster exercises can greatly improve participants' ability to deal with disasters. In general, people often choose operation-based methods such as disaster drills to practice and maintain rescue capabilities, and discussion-based methods such as tabletop exercises to develop and assess plans, policies, and procedures.
\begin{table*} \centering \caption{The Main Approaches for Disaster Relief Training} \setlength{\tabcolsep}{3pt} \renewcommand\arraystretch{1.5} \begin{tabular}{|m{3.0cm}<{\centering}|m{3.0cm}<{\centering}|m{6.0cm}<{\centering}|m{6.0cm}<{\centering}|} \hline Approaches&Target abilities&Advantages&Limitations \\ \hline Disaster drill & Rescue skills &\tabincell{c}{beneficial for training team collaboration,\\improving and evaluating \\ local disaster response capacity, etc.} & \tabincell{c}{expensive,\\ unfavorable to newcomers, \\different from real disaster rescue,\\rescuers may be injured in training, etc.}\\ \hline Tabletop exercise&Disaster planning&\tabincell{c}{low-stress environment,\\low cost, gathering ideas and wisdom,\\facilitated group discussion of problems, etc.}&\tabincell{c}{difficult to organize,\\lack of realism,\\unable to replicate every aspect\\ of a hypothetical situation,\\ providing only a superficial review, etc.}\\ \hline Serious game&\tabincell{c}{All disaster relief\\ capabilities}&\tabincell{c}{safe, low cost,\\ customizable scenes,\\ repeatable training,\\ simulating scenes that are difficult\\ to reproduce in reality, etc.} &\tabincell{c}{discomfort of game\\ technology to the human body,\\ people do not take it seriously,\\ ignoring the mistakes in the game,\\ inability to fully express the\\ complexity of the disaster, etc.\\ }\\ \hline \end{tabular} \label{tab1} \end{table*}
It is generally known that disaster drills are coordinated, supervised
activities, usually used to test specific operations or functions \cite{10}. Moreover, the disaster drill training method is widely used around the world and is regarded as a fundamental tool for evaluating and improving local disaster response capacity \cite{11}. In a disaster drill, rescuers work in different types of training scenarios affected by the disaster, where victims are replaced with dummies, and a rescuer needs to use various rescue skills to keep the “victims” safe. Therefore, the disaster drill is effective in improving personal disaster rescue capabilities, but it is not optimal for all situations. On the one hand, it is relatively expensive because it consumes materials and requires a dedicated training area where various environments (such as buildings, ships, trains, etc.) need to be built. On the other hand, the drill may be overwhelming for newcomers, especially when it involves large-scale simulations \cite{12}. In fact, rescue behavior in real life differs from experiments such as drills \cite{13}. Additionally, rescuers are likely to be injured during the drill \cite{14}, which means that current drill training still has limitations. A tabletop exercise is a discussion-based learning experience in which participants play a role and use their strategies to solve problems \cite{15}. During the discussion, participants can not only strengthen communication but also evaluate the effectiveness of emergency response strategies. This method is often used to train disaster planning capabilities, such as strategies to reduce disaster risk and to assess plans and policies about disasters. For example, Taylor et al. used a tabletop exercise to train officials' strategies against pandemic influenza \cite{16}. H. Khankeh et al. used a tabletop exercise to train hospital managers to plan for disaster \cite{17}. A. M. Wendelboe et al. used a tabletop exercise to review and test the measures taken by university leaders to deal with COVID-19 \cite{18}.
These experiments show that tabletop exercises are necessary for disaster relief planning and future disaster prevention. However, tabletop exercises also have their limitations. It is difficult to satisfy the requirement of multiplayer participation, since officials, policymakers, managers, and experts must gather in one place. Apart from that, like the disaster drill, it lacks realism, differs from the real situation, and struggles to consider all aspects, providing only a superficial review of the overall plan \cite{19}. Consequently, it is necessary to investigate innovative and more effective approaches to overcome these limitations. Considering the apparent limitations of traditional approaches in disaster relief training, serious games have been used as an alternative to traditional exercises. Serious games use game technologies and game elements for applications that aim to teach or train, not only to entertain \cite{20}. Unlike traditional disaster drills and tabletop exercises, serious games can easily simulate the elements of a disaster using game elements such as symbolic tokens, models, sound effects, virtual reality, and so on. In this context, each participant may experience certain key characteristics of a real disaster, enabling them to better understand it. Besides, the serious game training method has many other benefits based on the characteristics of games. It allows more frequent training, makes it possible to train scenarios that are not easy to reproduce in the real world due to cost, safety, and time concerns, and allows better evaluation \cite{21}. At the same time, it provides a safe, low-cost alternative that can be practiced in specific situations and gives trainees the opportunity to train various work-related procedures.
The rising popularity of various types of games, such as electronic games, VR games, and somatosensory games, continues to raise the upper limit of serious game training. While enjoying the high immersion brought by these technologies, we also need to overcome their shortcomings, such as eyesight problems caused by electronic devices, dizziness and sickness caused by VR technology \cite{22}, the discomfort caused by somatosensory clothing, and so on. This requires us to plan the training time with these techniques reasonably. In addition, serious games, as a type of game, also have some limitations. Firstly, serious games may build up a false sense of security \cite{21}, because in serious games players are free to make mistakes without any actual punishment, especially in games without a good feedback mechanism, which may lead players to ignore these mistakes even in the real world. Secondly, certain groups are not familiar with what serious games are or with the difference between serious games and ordinary games; because they look like games, users may not take serious games seriously. Finally, serious games inevitably simplify disasters and thus may fail to adequately portray the complexity of the disaster and the disaster relief process. For example, the terrain control of lava flow and tsunami distribution, the relationship between earthquake intensity and source distance \cite{23}, and the fact that excessive use of water in a fire may create a steam cloud posing a major threat to firefighters and any potential victim being rescued are details that are often difficult to consider in a game \cite{24}. Therefore, serious games cannot completely replace traditional disaster relief training, but using serious game training methods as a supplement to traditional training can reduce training costs and provide a specific environment for training specific tasks. At the same time, serious games can also be used as a prelude to traditional training.
After the trainees use serious games to reach a certain level, they can move on to the expensive and resource-intensive traditional training, ensuring the efficiency of training.
\subsection{Disaster Relief Cycle}
Disaster relief is a complex process that usually requires a large number of measures to deal with a series of uncertain emergencies. It normally requires various actors (e.g., civilians, government, and non-government organizations) to perform their duties and coordinate with each other at each stage of the disaster. At the same time, serious games have been widely used in relief training for different groups of people at different stages of disasters. Therefore, it is necessary to distinguish the relief work in different stages of the disaster and select the appropriate SGDRs to train different abilities. There are currently two main frameworks that describe the stages of disaster relief based on disaster management. The first framework is proposed by the Federal Emergency Management Agency (FEMA) \cite{25} and divides this process into four phases, while the second framework is proposed by the IFRC \cite{7} and divides the process into three phases. Both frameworks can clearly express the relief work at each stage of the disaster. However, their descriptions of the disaster relief process have their own characteristics. Based on this, we further analyzed the characteristics of these two frameworks and derived a comprehensive disaster relief classification framework. The framework serves not only as a guideline for participants in disaster relief to take appropriate relief actions at different stages of the disaster but also as a classification standard for SGDRs. \begin{figure}[h] \centering \includegraphics[width=3.7in]{FEMA} \caption{The Framework of FEMA} \end{figure} FEMA \cite{25} proposed a basic four-stage framework, which is widely used by most articles and organizations, as shown in Fig. 1.
It includes Mitigation, Preparedness, Response, and Recovery. It cycles through each stage, with different relief work taken in different phases. For example, disaster mitigation is the stage to eliminate or reduce the probability of disaster occurrence or reduce the negative effects of inevitable disasters; people take activities such as disaster analyses, disaster forecasting, and disaster defense projects. Disaster preparedness refers to increasing the likelihood of a successful disaster response; activities such as making a disaster response plan and raising public awareness are considered preparedness activities. The purpose of disaster response is to respond to a disaster as rapidly as possible by mobilizing resources to rescue survivors and meet their basic needs. Disaster recovery aims to assist those who have suffered the impact of a disaster to return to normal life. Both the mitigation stage and the preparedness stage occur before the disaster, and the preparedness stage is complementary to the mitigation stage, so the relief work in these two phases is always closely linked. For example, people always carry out disaster prediction and disaster planning at the same time to ensure a timely response to the disaster. Similarly, most related serious games pair preparedness skills with mitigation knowledge. Therefore, combining these two stages can show the pre-disaster relief work more concisely and clearly. \begin{figure}[h] \centering \includegraphics[width=3.65in]{IFRC} \caption{The Framework of IFRC} \end{figure} To address this issue, we analyzed the IFRC's \cite{7} disaster relief phase division. The IFRC divides the disaster relief process into three stages based on disaster management: Preparedness, Response, and Recovery, as shown in Fig. 2. Unlike FEMA, the IFRC combines the relief work before the disaster into one stage to express the disaster relief process more simply and clearly.
In the preparedness stage, people take actions that reduce potential disaster areas' vulnerability to disasters and strengthen their capacity to respond. Another difference is that this framework depicts the phases of disaster relief as a linear process, whereas the disaster relief phases are best represented as a cycle, which is very important to the disaster relief process. Disasters do not appear out of nowhere: a post-disaster review is always carried out in the recovery period, and such a review often reveals the shortcomings of the previous disaster plan and then provides valuable experience and strategies for subsequent disaster preparedness \cite{6}. \begin{figure}[h] \centering \includegraphics[width= 3.5in]{DisasterReliefCycle} \caption{Disaster Relief Cycle} \end{figure} Therefore, we combined the above two frameworks to design a more comprehensive framework as a standard for dividing the stages of the disaster relief cycle, as shown in Fig. 3. On one hand, we divided the disaster relief process into three stages, namely Preparedness, Response, and Recovery. Each phase places particular demands on participants in disaster relief. In the preparedness stage, people adopt policies, make disaster response plans, etc., to mitigate the impact of the disaster. In the response stage, rescue actions are taken to keep people safe, and emergency supplies are provided to the disaster area to ensure the basic needs of the refugees. In the recovery stage, there is a need to help reconstruct the disaster area and strengthen the community's anti-disaster capacity. On the other hand, these phases follow one another in a continuous cycle, emphasizing the positive impact of previous experience on current planning. In summary, SGDRs at each stage should also exhibit these properties. Below, based on this disaster relief framework, SGDRs at each stage of the disaster are reviewed.
\begin{table*} \centering \caption{Relief Work at Each Stage of Disaster Relief} \setlength{\tabcolsep}{3pt} \renewcommand\arraystretch{1.5} \begin{tabular}{|m{3.0cm}<{\centering}|m{3.0cm}<{\centering}|m{6.0cm}<{\centering}|m{6.0cm}<{\centering}|} \hline Stage&Time&Characteristics&Relief Work \\ \hline Preparedness & Pre-disaster &\tabincell{c}{Prepare for and prevent the effects of \\disaster} & \tabincell{c}{deploy disaster prevention projects,\\ make disaster policies and plans,\\ raise public awareness,\\ improve the environment, etc.\\ }\\ \hline Response&\tabincell{c}{During the disaster\\ and one to six months \\after}&\tabincell{c}{Keeping people safe and meeting the\\ basic needs of people}&\tabincell{c}{rescue victims of the disaster,\\ evacuate people,\\ provide emergency supplies, etc.\\ }\\ \hline Recovery&\tabincell{c}{Six months after the \\disaster, lasting for\\ a long time}&\tabincell{c}{Recovery of infrastructure and\\ strengthening of future anti-disaster capacity} &\tabincell{c}{rebuilding homes after the disaster,\\ ensuring medical and health care in\\ the disaster area, etc.}\\ \hline \end{tabular} \label{tab2} \end{table*}
\section{Serious Games for Different Stages in Disaster Relief}
In this part, SGDRs are classified based on the framework mentioned above. As shown in Table II, according to the different relief work at different stages of the disaster, the classification of SGDRs and related games is presented.
\subsection{SGDRs for Preparedness Stage}
The preparedness phase of disaster relief happens before the disaster or after disaster recovery. This phase involves relief actions taken before an emergency to ensure a more effective response and steps to minimize the damage caused by a disaster, such as arranging disaster response plans, constructing disaster prevention projects, improving the environment, and increasing the skills and knowledge of staff and the community.
Without sufficient disaster preparedness, people will be caught off guard when a disaster occurs, which causes huge casualties and property losses. Therefore, a number of serious games have been designed for disaster preparedness, which allow people to understand disaster preparedness; some games even train managers to predict, plan, and manage disasters before they occur. Many games focus on preparedness knowledge, enhancing civilians' disaster awareness, or evacuation skills. The knowledge conveyed by these games includes what to do and where to go before a potential disaster happens. Mannsverk \cite{26} and his team developed serious games to raise civilian awareness about the importance of preparation in facing floods. In Disaster Master \cite{27}, people learn how to identify the first signs of disasters and how to shelter during a disaster. The primary audience of Earth Girls \cite{28} is preteens; it aims to help players better understand natural disasters through imaginative and interesting gameplay. Jacob et al. \cite{29} developed a game called Smart Fire Safety, in which players learn about fire safety hazards and precautions in the kitchen or at a gas station. In addition, many games can improve evacuation skills (e.g., system perception, pre-movement behavior, finding the route, exit choice, and navigation interactions). Rahouti et al. \cite{30} developed a game about firefighting and evacuation training in medical institutions, which simulates a specific fire emergency and aims to train medical staff so that they will be able to give evacuation instructions to patients. This game differs from other systems because it can generate 3D virtual patients with limited mobility, and the medical staff are expected to provide appropriate assistance to these virtual patients.
In order to simulate the impact of crowd behavior on individual evacuation, some games use artificial intelligence technology to control the crowd or the behavior of Non-Player Characters (NPCs). Ribeiro et al. \cite{31} applied crowd representation models (e.g., cellular automata models, force-based models, and artificial intelligence-based models) to represent the movements and behavior of crowds, and used a human behavior model in the game to simulate human behavioral factors in emergencies, such as panic, disorganization, and irrationality. In this game, virtual crowds with different reactions and pedestrians with different behaviors are generated. Players must overcome their instinct to follow other individuals and choose the most appropriate escape route based on their own judgment. Preliminary evaluation results show that this game is quite promising for training players' judgment in disasters. In another study, Ruffino et al. \cite{32} brought Building Information Modeling (BIM) and serious games together. This combination has great potential because using BIM to build models makes them closely related to the real built world \cite{33}. The game is created based on a real building and is divided into four levels; the higher the level, the greater the difficulty. Players must find the shortest path to the safety exit and run to it as fast as possible when a fire occurs. Nowadays, VR and augmented reality (AR) technologies have proved to be a valuable alternative for disaster evacuation training. Liang et al. \cite{34} developed a VR game to improve earthquake evacuation skills. In this game, the immersion of VR technology is used to simulate building shaking. Players can perceive the earthquake by observing the movement of buildings, ceiling lamps, and furniture in the virtual world, which provides players with knowledge of how to perceive earthquakes and respond safely.
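As an illustration of the crowd representation models mentioned above, the following sketch is our own greatly simplified cellular-automata example, not Ribeiro et al.'s actual model: agents on a grid repeatedly step to the neighboring cell with the smallest precomputed distance to the exit, and congestion emerges naturally because occupied cells block movement.

```python
from collections import deque

def static_field(grid, exit_pos):
    """BFS distances from the exit over walkable cells; '#' marks a wall."""
    rows, cols = len(grid), len(grid[0])
    dist = {exit_pos: 0}
    queue = deque([exit_pos])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def step(agents, dist, exit_pos):
    """Each agent greedily moves to an unoccupied neighbor closer to the exit."""
    occupied = set(agents)
    new_agents = []
    for r, c in agents:
        if (r, c) == exit_pos:
            continue  # agent has evacuated
        best = (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if n in dist and n not in occupied and dist[n] < dist[best]:
                best = n
        occupied.discard((r, c))
        occupied.add(best)  # crowding: a cell holds at most one agent
        new_agents.append(best)
    return new_agents

def evacuate(grid, agents, exit_pos, max_steps=100):
    """Run the cellular automaton; return the number of steps until empty."""
    dist = static_field(grid, exit_pos)
    for t in range(max_steps):
        if not agents:
            return t
        agents = step(agents, dist, exit_pos)
    return max_steps
```

Even this toy version reproduces the qualitative effect the games exploit: with more agents contending for the same cells, evacuation takes longer than the raw distance to the exit would suggest.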
Unlike VR, which immerses users in a computer-generated environment, the more novel AR technology can combine the real world with virtual digital content, making training closer to reality \cite{35}. Catal et al. \cite{36} developed an AR-based evacuation system suitable for fire, earthquake, and chemical attack scenarios. They implemented a GPS system to determine the player's location and provided virtual instructions to players according to that location; players had to follow the instructions to leave the building. Through experiments and questionnaires, they showed that the system was evaluated as effective, and most people could learn disaster evacuation knowledge and skills with it. After game development was completed, some authors \cite{31} \cite{32} had their games evaluated by different groups of people. They tested multiple players, half of whom were familiar with the route of the building in the game while the other half were not. The results showed that players who were familiar with the building route passed the game in a shorter time and mastered disaster evacuation skills more efficiently. Therefore, for more effective training, such serious games are not suitable for visitors or people who are unfamiliar with the building routes; newcomers should familiarize themselves with the building's routes and environment before training with the serious game. The aforementioned games can raise public awareness and knowledge of disaster evacuation, but they do little to train managers' disaster preparedness capabilities. There are also many SGDRs that address disaster preparedness measures before the disaster. These games are mainly PC- or web-based strategy role-playing games, developed with simple game engines or web technologies. They often allow players to control emergency planning, disaster policies, or city management.
Players must choose from multiple options and may observe the negative or positive consequences of their decisions when the disaster finally occurs. FloodSim \cite{37}, a simulation-based serious game, was developed by Mendez et al. In this game, the player is a policymaker in control of flood policy in the UK, aiming to minimize the impact of flooding on people. The player needs to make policies according to population density, economic output, and flood risk. Afterwards, the game presents the results of the decisions taken in the form of a newspaper. This game can thus easily verify the reasonableness of a disaster policy. Apart from that, Stop Disasters \cite{38} is another disaster simulation strategy game, developed in Flash, which simulates five kinds of disasters and lets players deploy resources or construct man-made protections to protect a town from disasters within a budget. The player's final score is closely related to the effectiveness of the resource allocation. The process of disaster relief is best seen as a cyclical process: the disaster preparedness and prevention plan before a disaster should be based on the experience of previous disasters. For example, the strategies and plans from the fight against SARS in 2004 provided valuable experience and guidance for the prevention of the novel coronavirus in 2020. However, most SGDRs for the preparedness stage place almost no emphasis on the importance of experience from previous disasters. Serious game designers need to take this into consideration when developing games in the future.
\subsection{SGDRs for Response Stage}
The response phase of disaster relief happens during the disaster and up to one to six months after. In this stage, the disaster has occurred and has had a great impact on people. People undertake a large and complex set of activities that work to minimize the damage caused by the disaster and protect life and property.
The primary aim of the response stage is to rescue victims from immediate danger. This requires rescuers to take a series of actions to search for and rescue the victims. These actions are closely related to the command and planning of the incident commander. Therefore, SGDRs for the response stage are mainly divided into two types: in one, the player is a rescuer involved in a specific rescue mission; in the other, the player takes the role of the incident commander directing the disaster relief team. At the same time, it is also necessary to provide immediate relief (e.g. medical care, food and water, and temporary shelter) for refugees to meet their basic needs. There are a few non-electronic games about providing funds or medical care to disaster areas. For example, Buzz about Dengue \cite{39} is a team-based strategy game that teaches players how to prevent Dengue Fever by providing medical care. Dissolving Disasters \cite{40}, designed by the Red Cross, helps people understand the importance of donors to refugees in disaster areas. \subsubsection{SGDRs for Rescuers} When a disaster occurs, rescue workers (e.g. firefighters, soldiers, medical staff, and volunteers) are always on the first line of disaster response. While rescuers are saving people, their own lives are also threatened. According to the International Association of Fire Fighters (IAFF), fire departments have four times the rate of work-related injuries of private industry, and one in every three firefighters is injured while on duty \cite{41}. Thus, in order to reduce the casualty rate of rescuers and improve their rescue efficiency, a large number of serious games have been developed for rescuers.
Hazmat: Hotzone \cite{42} is an instructor-based simulation built on video game technology, developed by the Entertainment Technology Center at Carnegie Mellon University and the Fire Department of New York to train first responders to handle chemical and hazardous materials emergencies. In this game, the instructor has full control over every aspect of the simulation, such as specifying the wind, temperature, and precipitation of outdoor environments, the type and location of hazardous material, and where the victims are located. The instructor can also freely add new elements to the original game content while the rescuer is training. This greatly increases the randomness of training and can be used to train for unexpected emergencies, but it also increases the burden on instructors of creating the scene. Flame-Sim \cite{43}, developed by Flame Sim LLC, is a commercial training game that requires every firefighter to make decisions on the fire ground. It uses scenario generation technology, which allows the user to change or create a scene within a few minutes, maximizing training time without lengthy set-up. Moreover, similar to Hazmat: Hotzone, Flame-Sim can assist firefighters in training many rescue operations such as rescue tool selection and use, room search, opening vents, and rescuing people. Besides, both games allow multiple participants to cooperate on tasks in a networked 3D game environment. In order to make the game environment more realistic, a large number of technologies and peripherals are used to simulate real disaster situations as faithfully as possible, such as VR/AR/mixed reality (MR) technology, sensor technology, CAVE technology, and somatosensory technology. Virtual reality technology is the main way to enhance immersion and interactivity. For example, Nano Games sp. z o.o. \cite{44} has designed a game that simulates different types of traffic accidents.
With the help of VR technology, rescuers wearing protective clothing and equipped with real equipment can practice the relevant rescue operations for providing appropriate assistance at a virtual accident site. Xu et al. \cite{45} developed a VR-based game that creates an immersive environment giving crane operators an opportunity to learn how to deal with railway accidents. The FLAIM Trainer \cite{46} is a firefighting simulator that provides an immersive virtual environment, letting firefighters train in realistic emergency fire situations in a safer way than traditional training methods. It is not the first VR-based firefighting training simulator, but it is the first that combines virtual reality and haptic technologies. It uses heated personal protective clothing and breathing apparatus to simulate the sensory experiences people might encounter in a real fire, including extreme heat and difficulty breathing. At the same time, FLAIM Trainer offers multiple scenarios that train multiple basic capabilities. The immersion and other advantages brought by virtual reality technology thus provide broad prospects for disaster rescue training. In addition to VR technology, Skovde University and the Swedish Rescue Services Agency further used CAVE (Cave Automatic Virtual Environment) technology to develop the game Sidh \cite{47}, which trains firefighters wearing respirators for search and rescue. During the game, participants can walk or run in a small room surrounded by screens, moving through a virtual environment to search for virtual victims. The movement speed in the virtual environment is controlled by an acceleration sensor installed on the player's boots. Although the realism of situations simulated by virtual reality or CAVE technology has improved, it is still challenging to map the real world to the virtual world due to the limitations of current technology.
Mixed reality simulation is an alternative way to balance immersion and reality in training. Research on mixed reality games has shown the benefits of this approach in disaster response training. Scenario-based mixed reality simulation supports team coordination and decision-making training, so that responders can coordinate face-to-face with each other in real time in a simulated disaster \cite{48}. The Icehouse game \cite{49}, developed by the Lincoln Laboratory of the Massachusetts Institute of Technology, provides a simulated game environment in which a group of disaster responders use a wearable computer and an interface specifically designed for the game. In this live action game, players have to move through a physically simulated disaster space, which requires physical exertion and team coordination to reduce virtual dangers and rescue virtual victims. Moreover, the game uses wearable technologies to monitor information about each player's teammates and own status (e.g. approximate distance between teammates, distance from the leader, and heart rate). Most of the previously mentioned games used to train rescue skills also support a multiplayer mode (e.g. Hazmat: Hotzone, Flame-Sim, and Icehouse), which to a certain extent trains collaboration between rescuers. Moreover, some low-fidelity games/simulations abstract the elements of cooperation in disasters specifically to train team cooperation. For example, the C3Fire \cite{50} simulation is a training environment for team coordination awareness and team decision-making. It can generate a task environment in which participants cooperate to complete a specific mission, such as extinguishing a forest fire while saving valuable houses. ZO Toups et al. \cite{51} developed the Team Coordination Game (TeC), which uses team coordination as the main component of the game's core mechanics.
In this game, information is distributed among team members, so participants need to gather information, cooperate, communicate, and rely on each other. In addition, it is also very important to improve the moral judgment of rescuers, because rescuers often face moral choices during actual rescues. If these are not handled properly, a rescuer may not only miss the best rescue opportunities, but may also develop mental illnesses such as post-traumatic stress disorder (PTSD) \cite{52} \cite{53}. To address this problem, some games are designed to train the ethical decision-making of inexperienced rescuers. Wahyudin et al. \cite{54} designed MAGNITUDE, a mobile first-person role-playing game (RPG) used to train ethical decision-making in disaster situations. During the game, the player has to confront NPCs involved in ethical conflicts. For example, a boy's leg is crushed by debris and he is losing a lot of blood; the heavy equipment needed to clear the debris cannot be brought to the scene immediately. The player must choose between amputation and waiting for the equipment, both of which are very dangerous to the boy's life, and must take responsibility for that life. \begin{figure}[h] \centering \includegraphics[width=3.65in]{Fidelity} \caption{Game fidelity adaptation} \end{figure} With the development of science and technology, a growing number of technologies continue to bring the training environments of disaster rescuers closer to the real world. Meanwhile, most designers and developers of serious games strive for high-fidelity environments, especially in visual scenes. However, studies have shown that higher fidelity does not necessarily lead to higher learning efficiency: the relationship between fidelity and learning efficiency depends on the prior knowledge and skills a trainee has mastered \cite{55}.
In general, novice or intermediate trainees may be distracted by abundant irrelevant information and activities in the game, which increases cognitive load and results in inefficient learning or training. Proficient knowledge and skills let professionals focus on the main purpose of the game, and high fidelity brings them more immersion, so high fidelity benefits professionals more than other trainees. At the same time, high fidelity demands more computing resources, can cause a range of physical discomforts, and increases development costs. One way to overcome these challenges is to use low-fidelity simulation, which reduces the fidelity of the simulation and focuses only on concept learning. Low-fidelity simulation is economical in that it reduces the cost of the simulation while increasing the focus on the desired element of education \cite{51}. To demonstrate the effectiveness of low-fidelity simulation, ZO Toups et al. \cite{51} tested 28 firefighters; the results showed that low fidelity is an effective training method when the demand for fidelity is not high. In summary, when developing and designing such serious games, developers should design different games according to the professional level of the target group. For example, novices can be served by a cartoon game style, whereas for professionals the style of the game should be as realistic as possible. When the demand for fidelity is not high, or when the goal is to train some specific knowledge, that knowledge can be abstracted to develop a low-fidelity game. However, these solutions require designing a new game with a different fidelity for people of each professional level, which greatly increases development costs. With the continuous development of artificial intelligence and adaptive technology, more and more games have begun to adjust game content based on the player's in-game behavior.
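Such behaviour-based adaptation could be sketched roughly as follows. This is a hypothetical illustration, not an implementation from any cited game: the feature names (accuracy, reaction time, hours played), the decision-rule thresholds, and the fidelity tiers are all invented for the example, standing in for the classifiers (e.g. decision trees or fuzzy logic) and asset pipelines a real system would use.

```python
# Hypothetical sketch of game-fidelity adaptation: estimate the player's
# professional level from interaction data, then pick an asset tier so that
# fidelity tracks skill instead of staying fixed. All names and thresholds
# below are illustrative assumptions.

def estimate_level(accuracy: float, avg_reaction_s: float, hours_played: float) -> str:
    """Toy decision-rule stand-in for the level classifier (could be a
    decision tree or fuzzy-logic model in a real system)."""
    if hours_played < 10 or accuracy < 0.5:
        return "novice"
    if accuracy < 0.8 or avg_reaction_s > 2.0:
        return "intermediate"
    return "professional"

# Map the estimated level to a fidelity tier: novices get a stylised,
# low-detail scene to reduce cognitive load; professionals get high fidelity.
FIDELITY_TIER = {
    "novice": {"textures": "cartoon", "model_lod": 0},
    "intermediate": {"textures": "simplified", "model_lod": 1},
    "professional": {"textures": "photorealistic", "model_lod": 2},
}

def select_fidelity(accuracy, avg_reaction_s, hours_played):
    return FIDELITY_TIER[estimate_level(accuracy, avg_reaction_s, hours_played)]

print(select_fidelity(0.95, 1.2, 300))  # an experienced firefighter
print(select_fidelity(0.40, 3.5, 2))    # a first-time trainee
```

In a real game the tier would drive which materials, textures, and model levels of detail the engine loads, and the level estimate would be updated continuously as more interaction data arrives.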
For example, in some quiz games, the difficulty of questions is adjusted according to the player's rate of correct answers \cite{56}. Therefore, facing the trade-off between game fidelity and learning efficiency, game fidelity adaptation may be a feasible method (Fig. 4). On the one hand, players' data can be collected through their interaction with the game or by using sensors (e.g. eye trackers, motion capture, joysticks, etc.) \cite{57} \cite{58}. Based on this data, algorithms such as decision trees, fuzzy logic, and others can be used to estimate the player's professional level \cite{59}. On the other hand, game fidelity is mainly expressed through materials, textures, and models. Consequently, it may be possible to change the game's materials and models based on the player's professional level to achieve fidelity adaptation. In this way, the relationship between the game's environmental fidelity and the player's professional level can be kept at the point of highest learning efficiency. We offer this only as a feasible idea for researchers; the concrete implementation of such a fidelity-adaptive method requires further research and exploration. \subsubsection{SGDRs for Commanders} Once an emergency occurs, incident commanders of the rescue service rapidly assess the situation before deploying their resources. This evaluation and command are extremely important to the success of the rescue at this stage. Improper decision-making or poor information can cause catastrophic consequences, leading to loss of life or avoidable property loss. In order to manage such situations, commanders must make correct decisions even under the extreme pressures of the incident.
In an attempt to provide a realistic training environment, some games combine videos, photos, and role-playing to recreate the scene \cite{60}. However, videos and photos limit the flexibility of the scene and cannot be reused. Therefore, many games based on random events have been developed: emergencies are randomly generated, and players need to resolve them reasonably according to the situation. Most of these games are about micromanaging an emergency, in which the player directly controls a selection of rescue units (e.g. firefighters, medical staff, and police) to save as many lives as possible. These games are often used by incident commanders to train their ability to collect information and allocate disaster rescue resources. For example, Emergency \cite{61} is a disaster strategy game developed by Sixteen Tons Entertainment. In Emergency, the player has a city map marking the locations of various rescue resources and the place where the emergency occurs. The player needs to mobilize limited resources to resolve the emergency. This is a complicated process in which the player observes what is happening at the emergency location and then dispatches the needed rescue resources and vehicles. For example, the player must mobilize an ambulance or helicopter upon seeing blood, handle large-scale fires with heavy fire trucks equipped with water guns, and direct firefighters to cut down trees to create fire barriers when dealing with forest fires. Another game is KriseIM \cite{62}, in which players receive information from a notification window and then respond to the crisis by dispatching police, ambulances, and fire departments. At the end of the event, the player and instructor are provided with a debriefing showing the results of the training. Rescue: Everyday Heroes \cite{63} focuses only on fire.
In this game, the player keeps track of several fire stations, has to come up with a strategy for each mission, and chooses the most suitable fire extinguishing agent for the circumstances. In these games, artificial intelligence technology is used to control the behavior of a large number of NPCs, especially those affected by the disaster. At the same time, the player controls virtual teammates as response team members to rescue the affected NPCs. However, those virtual teammates blindly follow the player's commands and take action regardless of whether the commands are reasonable. To make the commander consider the safety and emotions of teammates during training, Djordjevich et al. \cite{64} built an emotional model into the virtual teammates of the training game Ground Truth: Toxic City. This game lets players play the role of the incident commander, commanding virtual teammates to deal with a leak of toxic chemicals, with the aim of training the commander's strategy. In this game, artificial neural network computations are used to model the intensities of fear and anger, which are then converted into fuzzy sets so they are easier to interpret within the human cognitive model. If the player orders a virtual teammate to take inappropriate action (e.g. going deep inside the hazardous fog), the teammate's emotional state may change dynamically. This can cause the virtual teammate to disobey the command or reduce its ability to perform other requested actions. Integrating emotions into virtual teammates prompts the player to consider the rationality of instructions and the emotional state of team members while commanding the rescue, which more accurately reflects the player's decisions in such situations. After a disaster occurs, the incident commander often needs to make a Rapid Damage Assessment (R.D.A.). Sooraj K Babu et al.
\cite{65} came up with a multiplayer game set during an earthquake. Each player takes the role of the head of a department dealing with the rescue operation. Players must first perform an R.D.A. according to the destroyed buildings and injured people; then each player collaborates to provide resources and assistance according to the R.D.A. level and the responsibilities of the department they play. In addition, many studies have shown that a main way for incident commanders to learn about a disaster is through social media \cite{66} \cite{67}. The incident commander should not only be trained to obtain information from the disaster site for damage assessment, but also needs to learn to extract useful information from the media. Abbasi et al. \cite{68} designed a live action game in which players get information from media channels (e.g. Internet, TV, newspapers, e-mail, and telephone). Multiple players must exchange information to determine the level and situation of the disaster, training commanders in collecting information from the media and in communication. In the actual rescue process, the commander often needs to cooperate with rescuers at the disaster scene to complete the task, so real-time communication and cooperation between commanders and rescuers are very important. However, in such serious games the training of cooperation has often concentrated on cooperation between commanders of different departments; few games involve cooperation between commanders and rescuers, which would provide developers with another design direction. \subsection{SGDRs for Recovery Stage} When a disaster is over, rebuilding homes is necessary. This stage usually begins six months after the disaster and lasts for a long time.
In the recovery phase, the relief work is no longer just providing emergency supplies and searching for or rescuing victims; it is rather a series of longer-term assistance efforts, including returning people affected by the disaster to normal life and strengthening future anti-disaster capacity. However, post-disaster reconstruction is limited by many factors such as regional culture, funding, and the extent of the disaster, so serious games are less suitable for training in this aspect. Only a few games incorporate elements of improving the environment and living conditions of communities. For example, Hazagora \cite{23} is a board game in which players represent the inhabitants of a volcanic island who must develop and maintain communities. After the disaster, the player needs to take measures to maintain the communities, such as removing destroyed buildings, burying the dead, and cleaning up contaminated resources (e.g. food and natural water). In this way, the player can experience the impact of the disaster and learn post-disaster strategies to mitigate its continuing effects. \section{Discussion and Prospects} Disasters occur every day around the world, affecting thousands of people. In order to reduce the losses they cause, participants in disaster relief must perform their duties and train themselves to prepare for disaster. Many studies have shown that serious games are a good way to train the relevant participants. However, SGDRs are not exhaustive and have a series of limitations. Considering these deficiencies, the following aspects of SGDRs deserve further attention. First, the content of SGDRs is not comprehensive. On the one hand, SGDRs rarely consider regional cultural diversity, custom-imposed taboos, or local needs, which limits their range of use. On the other hand, most SGDRs focus only on the most common disasters such as fire, earthquake, flood, and tsunami.
Few games involve other disasters such as droughts, extreme weather, and disease epidemics. Therefore, developers should take this into consideration when developing SGDRs in the future and continuously enrich their content. Second, game feedback, including in-game feedback and post-game feedback, is particularly important for SGDRs \cite{69}, but it is often ignored. In-game feedback judges the player's operations, affirming correct ones and penalizing wrong ones; without proper feedback, the player can make mistakes at will in the game, and those mistakes may carry over to real-world operations with disastrous consequences. Post-game feedback mainly takes the form of a debriefing, which offers the player an opportunity to process and consolidate their in-game operations. Therefore, when the player completes an operation in an SGDR, feedback should be given in some form (e.g. sound, animation, or special effects) based on that operation, and the player should be given a debriefing when the game is over. Third, some articles indicate that control complexity and the game environment can affect SGDR training \cite{31}. This requires that SGDRs be simple to operate, and that players become familiar with the game environment before training. At the same time, developers should design different SGDRs according to the player's professional level: novices can be trained with simple quiz games, while professionals can be trained through procedures or problem-solving in a complex game environment. Finally, for evaluating the effectiveness of SGDRs, there are few detailed descriptions of comprehensive evaluation models, leaving room for future research. Only a few games have been tested using player feedback; those results show that serious games can be used effectively in simulations, in training many disaster relief activities, and in increasing disaster awareness.
However, in the face of a lack of professional evaluation, we recommend combining player feedback with professional opinions to evaluate serious games in the realm of disaster relief. \section{Conclusion} SGDRs are an effective method for disaster relief training and are being intensively studied. To show the effectiveness of serious games for disaster relief training, this paper surveyed both traditional methods and serious game methods for disaster relief training, and found that serious game training can make up for the limitations of traditional methods. Apart from that, given the absence of a systematic description of disaster relief work across its stages, we introduced disaster management and divided disaster relief into three stages: Preparedness, Response, and Recovery. Then, based on these stages, we summarized and analyzed the technologies and functions of SGDRs. Finally, we discussed the current deficiencies of SGDRs. To sum up, our work provides guidance for participants in disaster relief work and training. Meanwhile, we offer suggestions for researchers designing more effective serious games for disaster relief. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Climate change is one of the biggest challenges threatening the world, the effects of which are already being felt through events such as increasingly frequent extreme weather events, severe droughts, and devastating fires in countries such as the US and Australia \cite{nhess+:2020}. During such events, it is not uncommon to see claims of questionable scientific merit with headlines such as \ex{Climate Change has caused more rain, helping fight Australian wildfires}.\footnote{\url{https://bit.ly/2H9xivN}} This kind of narrative seeds scepticism \cite{oreskes+:2010}, discredits climate science and scientists \cite{anderegg+:2010}, spreads misinformation \cite{farrell+:2019}, and neutralises debate on key issues \cite{mckie+:2018}, thereby turning it into a partisan issue \cite{benegal+:2018, van+:2017} and leading to inaction. To check the veracity (truthfulness) of such claims and at the same time provide the public with scientifically sound information, experts have started publishing feedback on websites like \href{https://climatefeedback.org/}{climatefeedback.org} and \href{https://skepticalscience.com/}{skepticalscience.com}. \figref{cf-ex} gives one such example, where the claim from The Sun is that \ex{Earth [is] about to enter 30-year `Mini Ice Age'}, which has been labelled as Incorrect, with the Key Take Away being that \ex{Scientists cannot predict whether solar grand minimum ... is coming} and \ex{even if one occurred, the consequences for average global temperatures would be minimal}. It is this process of fact verification with a textual explanation/justification that we aim to automate, as a tool to assist climate science experts to respond to such claims more efficiently.
Our approach draws on recent work on explainable fact checking \cite{atanasova+:2020} and retrieval-augmented generation \cite{lewis+:2020}, in using the claim to: (1) retrieve documents from a knowledge source such as Wikipedia or Intergovernmental Panel on Climate Change (IPCC) reports; and (2) generate an explanation for the claim based on the top-$k$ retrieved documents and the T5 decoder \cite{raffel+:2019}, with a multi-task objective including a veracity prediction for the claim. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cf-example} \caption{An example of a claim review from \url{climatefeedback.org}} \label{fig:cf-ex} \end{figure} Our contributions are as follows: (1) we introduce the task of generating explanations justifying the predicted veracity label for a climate change claim; (2) we deploy open-domain question answering with an external knowledge source to the knowledge-rich and high-impact domain of climate change fact checking; (3) we study the effect of different knowledge sources, retrievers, and retrieval depth across two datasets (both within- and cross-dataset); and (4) we demonstrate that even small numbers of manually-written high-quality claim explanations yield high-quality generated explanations.
Fact checking and fake news detection are critical tasks for climate science discussions in the media and social media. Early work on fact checking and misinformation was based on the creation of datasets for fake news detection \cite{vlachos+:2014} and claim/stance verification \cite{ferreira+:2016}. While early datasets were limited in size, larger datasets have since been developed, such as LIAR \cite{wang+:2017} --- collected from PolitiFact and labelled with 6 levels of veracity --- and \method{FEVER} \cite{thorne+:2018}, a dataset generated from Wikipedia with \class{supported}, \class{refuted} and \class{not enough info} labels. More recently, extending the methodology of \method{FEVER}, \newcite{diggelmann+:2020} released \method{CLIMATE-FEVER}, a dataset specific to the domain of climate change. Several studies have analysed the discourse around climate change. \newcite{luo+:2020} proposed an opinion framing task to detect stance in the media. To understand the narratives and frames around the lack of action on climate change, \newcite{bhatia+:2021} studied automatic classification of neutralization techniques. \newcite{diggelmann+:2020} formalised this line of work by introducing \method{CLIMATE-FEVER} as a veracity prediction task in a fact checking setting. Our work differs from these in that we are focussed on jointly generating the correct explanation to counter or support the claim, in addition to the veracity label. The closest work to our own is that on explainable fact checking by \newcite{atanasova+:2020}, which uses DistilBERT \cite{sanh+:2019} in a multitask setting and performs the joint task of summarisation and classification of the veracity of the claim. \newcite{stammbach+:2020} experimented with GPT3's \cite{brown+:2020} few-shot learning capabilities to generate fact-checking explanations.
There has been recent work on the ability of pretrained models like BERT \cite{devlin+:2019}, GPT2 \cite{radford+:2019}, and \method{T5} \cite{raffel+:2019} to capture factual information \cite{petroni+:2019}. However, ``knowledge'' in these pretrained models is stored in the parameters and not directly accessible, making it hard to interpret, extend, or even query these models \cite{roberts+:2020}. One way of augmenting pretrained models is to combine them with external knowledge sources by retrieving passages that are similar to a given query (a claim, in our case). The text retrieval module can be either a traditional method such as \method{BM25} \cite{robertson+:1995}, or a neural retriever such as the dense passage retriever (\method{DPR}) \cite{karpukhin+:2020} or the memory-efficient binary passage retriever \cite{yamada+:2021}. Retrieval-based methods have been successfully applied to open domain question answering (QA) by combining the retriever with a ``reader'' that extracts the relevant answer from the retrieved passages. \newcite{chen+:2017} introduced DRQA, a span-based extractive framework trained with gold spans in a SQuAD setting \cite{rajpurkar+:2016}: TF-IDF-weighted sparse representations are used to retrieve relevant passages from Wikipedia, and answers are extracted from them. \newcite{lee+:2019} argued against using separate information retrieval systems to retrieve context passages, and proposed ``open retrieval question answering'', which jointly learns the reader and retriever using only QA pairs (without explicit supervision over context passages). The retriever is pretrained in an unsupervised setting using the \textit{inverse cloze task}, i.e.\ by predicting the document context given a passage from that document. Similarly, \newcite{guu+:2020} jointly trained a reader and retriever by pretraining the retriever using ``salient span masking'', a specialisation of masked language modelling.
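The sparse-retrieval step used by systems in the DRQA family can be illustrated with a toy TF-IDF retriever. This is a pure-Python sketch for intuition only: real systems use hashed bigram features, BM25, or dense encoders over millions of passages, and the toy corpus below is invented for the example.

```python
# Toy TF-IDF retriever: score knowledge-source passages against a claim by
# cosine similarity of TF-IDF vectors, and return the top-k passages.
import math
from collections import Counter

def tfidf_vectors(texts):
    docs = [Counter(t.lower().split()) for t in texts]
    n = len(docs)
    df = Counter(w for d in docs for w in d)           # document frequency
    idf = {w: math.log(n / df[w]) + 1.0 for w in df}   # smoothed IDF
    return [{w: c * idf[w] for w, c in d.items()} for d in docs], idf

def top_k(claim, passages, k=2):
    vecs, idf = tfidf_vectors(passages)
    # Out-of-vocabulary query terms get weight 0 and cannot affect ranking.
    q = {w: c * idf.get(w, 0.0) for w, c in Counter(claim.lower().split()).items()}

    def cos(a, b):
        num = sum(a[w] * b.get(w, 0.0) for w in a)
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    order = sorted(range(len(passages)), key=lambda i: cos(q, vecs[i]), reverse=True)
    return [passages[i] for i in order[:k]]

passages = [
    "solar activity has a minimal effect on average global temperatures",
    "ice cores record past atmospheric carbon dioxide concentrations",
    "a grand solar minimum would not trigger a new ice age",
]
print(top_k("is a solar grand minimum coming and will it cause an ice age", passages))
```

In a retrieval-augmented generator, the top-$k$ passages returned by such a retriever (sparse or dense) are what the decoder conditions on when generating the explanation.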
\newcite{lewis+:2020} proposed retrieval-augmented generation, which combines a pretrained language model with an external knowledge source accessed via a neural retriever such as \method{DPR} \cite{karpukhin+:2020}, jointly fine-tuned in a seq2seq manner (over questions and answers). Building on a similar idea, \newcite{izacard+:2020, izacard+:2020b} proposed the simple but highly effective ``fusion-in-decoder'' model, which encodes multiple retrieved passages independently and attends to the combined representations in the decoder to generate the answer. \newcite{samarinas+:2021} extended the idea of passage retrieval to automatic fact checking, and demonstrated that neural retrieval models can improve evidence recall. In our work, we combine these ideas to jointly perform claim veracity classification and generate explanations to justify the prediction. \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{wflow} \caption{Overview of our proposed method for generating an explanation and veracity label for a given claim, based on text passages from a knowledge source.} \label{fig:wf} \end{figure*} \section{Datasets} \label{sec:datasets} The two key data components of our method are: (1) an external knowledge source (``KS''); and (2) paired claim--explanation data with veracity labels (where the explanation justifies the binary veracity class).
\subsection{Knowledge Sources} \label{sec:knowledge-source} We experiment with two knowledge sources: \paragraph{Wikipedia (``\dataset{Wiki}'')} Following \newcite{chen+:2017, karpukhin+:2020}, we use the processed Wikipedia dump from Dec 2018 as the knowledge source.\footnote{Available via the \method{DPR} repository: \url{https://github.com/facebookresearch/DPR}} \paragraph{Peer-reviewed PubMed abstracts and IPCC reports (``\dataset{Pubs}'')} A combination of climate change-related abstracts from PubMed,\footnote{\url{https://pubmed.ncbi.nlm.nih.gov/}} and reports from the Intergovernmental Panel on Climate Change (IPCC).\footnote{\url{https://www.ipcc.ch/}} PubMed is a database of peer-reviewed publications primarily in the biomedical domain, but also including other high-profile scientific journals. We use MeSH categories to sample publications relating to climate science, and extract the title and abstract of each publication. IPCC reports are written by a mix of scientists, experts, and policy makers. They are based on peer-reviewed publications, and intended to provide a comprehensive summary of topics such as the physical science of climate change, climate change impacts, or the mitigation of climate change. While smaller in size than \dataset{Wiki}, this knowledge source is specific to climate change and thus more domain relevant. We apply the same preprocessing steps as \newcite{chen+:2017, karpukhin+:2020} to both knowledge sources, and segment each document into (non-overlapping) 100-word passages. In total after preprocessing, we generate 21M and 123K passages for \dataset{Wiki} and \dataset{Pubs}, respectively.
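The segmentation step just described (non-overlapping 100-word passages) can be sketched as follows; this is an illustrative simplification using whitespace tokenisation, not the exact preprocessing code of \newcite{chen+:2017, karpukhin+:2020}:

```python
def segment_into_passages(document, passage_len=100):
    """Split a document into consecutive, non-overlapping passages
    of at most `passage_len` whitespace-delimited words."""
    words = document.split()
    return [" ".join(words[i:i + passage_len])
            for i in range(0, len(words), passage_len)]
```

Applied to every article in \dataset{Wiki} and \dataset{Pubs}, this yields the 21M and 123K passages reported above; the final passage of a document may be shorter than 100 words.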
\subsection{Paired Data} \label{sec:pd} \paragraph{\method{CLIMATE-FEVER} (``\dataset{C-fever}'')} \newcite{diggelmann+:2020} released this dataset for climate change claim verification, consisting of 1535 claims with 5 corresponding evidence sentences\footnote{``Evidence'' is used instead of ``explanation'' in this section for consistency with \method{CLIMATE-FEVER}, but both mean the same thing.} each (yielding a total of 7675 claim--evidence pairs). Following \method{FEVER} \cite{thorne+:2018}, it uses Wikipedia as the knowledge source for the evidence sentences, and labels the veracity of each evidence sentence according to 3 classes: \class{supports}, \class{refutes}, and \class{not enough info}. An overall label is assigned to a claim based on a majority vote over the evidence labels for the 5 evidence sentences. Inspired by \newcite{thorne+:2020,lewis+:2020}, we explore 2 configurations of \dataset{C-fever} in our experiments: (1) 3-way classification (``\method{FEV3}''); and (2) 2-way classification (``\method{FEV2}''), where we consider only \class{supports} vs.\ \class{refutes} and discard any claims which are not majority-labelled according to one of these two labels. For a given claim, we filter out evidence sentences which differ in label to the overall claim label. We split the two variants of \dataset{C-fever} into training, validation, and test sets using stratified partitioning. This resulted in: 963 training, 83 validation and 332 test instances for \method{FEV3}; and 680 training, 50 validation, and 177 test instances for \method{FEV2}. As each claim has multiple evidence sentences, this translates into a total of 3196 claim--evidence pairs for \method{FEV3}, and 1671 claim--evidence pairs for \method{FEV2}. When evaluating the quality of the generated explanation for a claim, we consider the multiple evidence sentences as ground-truth references. 
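The stratified partitioning used above can be sketched as follows; the split fractions roughly match the reported \method{FEV3} sizes, and both the fractions and the seed are illustrative assumptions:

```python
import random
from collections import defaultdict

def stratified_split(items, labels, fracs=(0.7, 0.06, 0.24), seed=13):
    """Partition `items` into train/validation/test splits that each
    preserve the overall label distribution of `labels`."""
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    rng = random.Random(seed)
    train, val, test = [], [], []
    for group in by_label.values():
        rng.shuffle(group)
        n = len(group)
        cut1 = round(fracs[0] * n)
        cut2 = cut1 + round(fracs[1] * n)
        train.extend(group[:cut1])
        val.extend(group[cut1:cut2])
        test.extend(group[cut2:])
    return train, val, test
```

Because the cut points are computed per label, each split inherits (up to rounding) the label proportions of the full dataset.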
\paragraph{Climate Feedback (``\dataset{Feedback}'')} \href{https://climatefeedback.org/}{climatefeedback.org} is a website which invites scientists and experts to provide reviews and highlight factual inaccuracies of claims in news articles or made by prominent public figures. An example of a claim review is given in \figref{cf-ex}, where the ``key take-away'' acts as our explanation. Following \dataset{C-fever}, we construct a dataset of claim--explanation pairs. One important difference to \dataset{C-fever} is that the explanations here are not passages from a document collection, but rather are written by experts, and so are more descriptive, specific to the claim, and overall higher in quality. We crawl 130 paired claim--explanation instances from the website. As the website is almost exclusively used to refute incorrect claims, the vast majority of claims have the veracity label ``incorrect'' or ``partially incorrect''. Given the resultant extreme label imbalance, we do not use this resource for veracity prediction, but only for explanation generation. Unlike \dataset{C-fever}, there is always a unique explanation for each claim, because of the structure of the website. In our experiments, we use five random training/validation/test splits of the data (90/15/25), and average the results. \section{Method} \label{sec:method} We now describe the joint model for explanation generation and veracity prediction. Our model is based on the \textit{Fusion in Decoder} (``\method{FiD}''; \newcite{izacard+:2020}) --- a sequence-to-sequence model that takes as input a question and support passages (from a retriever), and generates an answer --- adapted to process a claim and support passages, to predict veracity labels and generate explanation texts. There are two components in our model: (1) a retriever (\secref{retriever}); and (2) a generator (\secref{generator}).
Given a claim $c_i$, the role of the retriever is to search a knowledge source (e.g.\ \dataset{Wiki}) for the most relevant (top-$k$) support passages $z_1, \ldots, z_k$. The generator is fashioned as an encoder--decoder: given a claim with $k$ support passages, each support passage $z_j$ is concatenated with the claim $c_i$ to produce $k$ claim--passage contexts $e_{ij} = [c_{i};z_{j}]$, where $1\leq j \leq k$; each $e_{ij}$ is encoded independently, but the encodings are fed together to the decoder to generate the veracity label \textit{and} explanation. This joint processing of multiple claim--passage contexts in the decoder allows the model to summarise the evidence from multiple passages; an illustration of the overall architecture is presented in \figref{wf}. \begin{table}[t] \centering \begin{adjustbox}{max width=\linewidth} \begin{tabular}{lc@{\;}cc@{\;}c} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{\metric{B-SCORE$_{\text{\sf rs}}$}} & \multicolumn{2}{c}{\metric{ACC}} \\ & \method{FEV2} & \method{FEV3} & \method{FEV2} & \method{FEV3} \\ \midrule \method{T5-only} & 0.26 & 0.24 & 0.75 & 0.49 \\ \method{Top1} (\dataset{Wiki}) & 0.03 & 0.02 & NA & NA \\ \method{Top1} (\dataset{Pubs}) & 0.02 & 0.03 & NA & NA \\ \method{Bert-veracity} & NA & NA & 0.79 & 0.55 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Performance of the baseline models (explanation generation = \metric{B-SCORE$_{\text{\sf rs}}$}; veracity prediction = \metric{ACC}).} \label{tab:baseline-res} \end{table} \begin{table*}[t] \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{clc@{\;}cc@{\;}cc@{\;}cc@{\;}c} \toprule \multirow{2}{*}{{Knowledge Source}} & \multirow{2}{*}{{Retriever}} & \multicolumn{2}{c}{{\metric{B-SCORE$_{\text{\sf rs}}$}}} & \multicolumn{2}{c}{{\metric{ROUGE-1}}} & \multicolumn{2}{c}{{\metric{ROUGE-L}}} & \multicolumn{2}{c}{{\metric{ACC}}} \\ & & {\method{FEV2}} & {\method{FEV3}} & {\method{FEV2}} & {\method{FEV3}} & {\method{FEV2}} &
{\method{FEV3}} & {\method{FEV2}} & {\method{FEV3}} \\ \midrule \multicolumn{2}{c}{Baseline: \method{T5-only}} & 0.26 & 0.24 & 0.25 & 0.25 & 0.22 & 0.21 & 0.75 & 0.49 \\ \multicolumn{2}{c}{Baseline: \method{Bert-veracity}} & --- & --- & --- & --- & --- & --- & 0.79 & 0.55 \\ \midrule \multirow{2}{*}{{\dataset{Wiki}}} & \method{BPR} & 0.28 & 0.27 & 0.25 & 0.27 & 0.22 & 0.24 & 0.80 & 0.59 \\ & \method{BM25} & 0.26 & 0.26 & 0.26 & 0.26 & 0.23 & 0.22 & 0.78 & 0.57 \\ \midrule \multirow{2}{*}{{\dataset{Pubs}}} & \method{BPR} & 0.28 & 0.26 & 0.26 & 0.25 & 0.23 & 0.22 & 0.77 & 0.55 \\ & \method{BM25} & 0.29 & 0.26 & 0.25 & 0.25 & 0.23 & 0.21 & 0.77 & 0.53 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{\clfev{}$\rightarrow$\clfev results (explanation generation = \metric{B-SCORE$_{\text{\sf rs}}$}, \metric{ROUGE-1}, and \metric{ROUGE-L}; veracity prediction = \metric{ACC}).} \label{tab:results1} \end{table*} \subsection{Retriever: \method{BM25} and \method{BPR}} \label{sec:retriever} We experiment with two retrievers: (1) \method{BM25} \cite{robertson+:1995}; and (2) \method{BPR} (binary passage retrieval; \newcite{yamada+:2021}). For \method{BM25}, the knowledge source is stored in the form of an inverted index. Claim texts are tokenised and entities are linked to produce a sparse bag of words/concepts representation. We use Pylucene\footnote{\url{https://lucene.apache.org/pylucene/}} with default parameters as the retrieval engine, and DBpedia Spotlight\footnote{\url{https://www.dbpedia-spotlight.org/api}} for entity recognition and linking. \method{DPR} \cite{karpukhin+:2020} is a dual encoder that consists of two separate BERT models to encode the query and passage, and computes the relevance score based on the inner product of their BERT encodings. \method{BPR} extends this by integrating a hashing layer (which converts the BERT encodings to binary codes) to make the encodings more memory-efficient without substantial loss in accuracy.
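A minimal sketch of this two-stage \method{BPR}-style scoring (hashing continuous encodings to binary codes, generating candidates by Hamming distance, then re-ranking by inner product) is given below; the toy vectors stand in for BERT encodings:

```python
def to_binary(vec):
    """Hashing layer: map a continuous encoding to a binary code."""
    return [1 if x >= 0 else 0 for x in vec]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def bpr_retrieve(query_vec, passage_vecs, n_candidates=2, top_k=1):
    """Stage 1: generate candidates cheaply via Hamming distance over
    binary codes.  Stage 2: re-rank the candidates by the inner
    product of the continuous vectors."""
    q_code = to_binary(query_vec)
    codes = [to_binary(p) for p in passage_vecs]
    candidates = sorted(range(len(passage_vecs)),
                        key=lambda i: hamming(q_code, codes[i]))[:n_candidates]
    reranked = sorted(candidates,
                      key=lambda i: -dot(query_vec, passage_vecs[i]))
    return reranked[:top_k]
```

The binary codes make the first stage cheap to compute and compact to store, while the inner-product re-ranking recovers most of the accuracy of full \method{DPR} scoring.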
\method{BPR} is trained with a multi-task objective over 2 tasks: (1) candidate generation using the binary codes; and (2) candidate re-ranking based on the inner product of the continuous vectors. We use the official implementation of \method{BPR}, which was pretrained on the Natural Questions dataset \cite{kwiatkowski+:2019} with Wikipedia as the knowledge source.\footnote{\url{https://github.com/studio-ousia/bpr}} Note that we use the pretrained \method{BPR} without fine-tuning. \subsection{Generator: \method{T5}} \label{sec:generator} \newcite{raffel+:2019} introduced \method{T5}, a pretrained encoder--decoder model, where different NLP tasks can be reframed as text-to-text problems to allow the training of a single model to perform multiple tasks. \method{T5} allows us to define a new task by prepending a task-specific prefix token during fine-tuning. In our case, an input is prefixed with \class{lab-exp:} (to denote our task) and uses special tokens \class{claim:} and \class{context:} to denote the start of a claim text and support passage, respectively (e.g.\ input $=$ \textit{lab-exp: claim: Our harmless emissions of ... context: Marine life ...}). The output takes the form of the veracity label followed by an explanation (delimited by a semicolon, see \figref{wf}), such that the decoder predicts the veracity label and generates the explanation together.\footnote{We also experimented with defining separate objectives for label generation and explanation generation and found similar results.} \subsection{Experimental setup} For \method{FEV2} when training on \dataset{C-fever}, we fine-tune \method{T5}-base as the generator as follows: batch size = 1 with gradient accumulation = 4, text maxlength (claim + passage length) = 200, and generated answer maxlength = 150. We use the Adam optimiser, learning rate = 1e-5 with a linear scheduler, weight decay = 0.01, and total steps = 10k with warmup steps = 800.
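The input/output format described in \secref{generator} can be sketched as follows (illustrative helper functions, not the actual training code):

```python
def build_inputs(claim, passages):
    """One encoder input per retrieved passage, using the task prefix
    and special tokens described above."""
    return [f"lab-exp: claim: {claim} context: {p}" for p in passages]

def parse_output(decoded):
    """The decoder emits '<veracity label>; <explanation>'."""
    label, _, explanation = decoded.partition(";")
    return label.strip(), explanation.strip()
```

Each of the $k$ inputs is encoded separately, and the decoder produces a single output string from which the label and explanation are recovered by splitting on the first semicolon.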
We evaluate the performance of our model on the validation set every 2500 steps. In the case of \method{FEV3}, due to the larger size of the dataset, we change the total steps to 18k and warmup steps to 1000, but keep other hyper-parameters the same. In the case of training on \dataset{Feedback}, we decrease our total steps to 7500. Details of \method{BPR} and \method{BM25} are given in \secref{retriever}. We experiment with $k \in \{1, 5, 10, 15, 20\}$ as the number of retrieved documents for both retrievers. \section{Experiments} In this section we present and compare the results of our experiments under different conditions. We evaluate the performance of explanation generation using rescaled BERT-score (\metric{B-SCORE$_{\text{\sf rs}}$}; \newcite{zhang+:2019}), \metric{ROUGE-1}, and \metric{ROUGE-L}. The original BERT-score uses contextual embeddings to compute the similarity between a generated explanation and a reference, but the raw similarity values tend to cluster in a narrow band near the top of the range (close to 1); \metric{B-SCORE$_{\text{\sf rs}}$} therefore rescales them to span a wider, more interpretable range. To assess label veracity prediction, we use classification accuracy (\metric{ACC}). We use several baselines: (1) \method{T5-only}, where we remove the retriever and treat the task as a sequence-to-sequence problem (i.e.\ \method{T5-only} is trained using only the claim--explanation pairs without any knowledge sources); (2) \method{Top1}, where we remove the generator and use the top-1 retrieved passage (using \method{BPR}) as the explanation (this baseline therefore does not predict the veracity labels); and (3) \method{Bert-veracity}, where we fine-tune \method{BERT} using the claim$+$explanation as input to predict the veracity label (this baseline therefore does not have a retriever or generate any explanation).
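The rescaling behind \metric{B-SCORE$_{\text{\sf rs}}$} can be sketched as a linear map that sends a baseline score (the expected score for unrelated text pairs) to 0 while keeping a perfect score at 1; the baseline value below is illustrative:

```python
def rescale_bertscore(score, baseline):
    """Map a raw BERT-score, which clusters near the top of its range,
    onto a wider scale: the baseline maps to 0, a perfect match to 1."""
    return (score - baseline) / (1 - baseline)
```

With an illustrative baseline of 0.83, raw scores of 0.95 and 0.85, which look deceptively close, are spread out to roughly 0.71 and 0.12.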
\begin{table}[t] \centering \begin{adjustbox}{max width=0.8\linewidth} \begin{tabular}{lccc} \toprule KS & \metric{B-SCORE$_{\text{\sf rs}}$} & \metric{ROUGE-1} & \metric{ROUGE-L} \\ \midrule \dataset{Wiki} & 0.17 & 0.16 & 0.14 \\ \dataset{Pubs} & 0.17 & 0.16 & 0.14 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Generation performance for \clfev{}$\rightarrow$\clfeed, with \method{BPR} as the retriever.} \label{tab:results2} \end{table} \begin{table}[t] \centering \begin{adjustbox}{max width=0.8\linewidth} \begin{tabular}{lccc} \toprule KS & \metric{B-SCORE$_{\text{\sf rs}}$} & \metric{ROUGE-1} & \metric{ROUGE-L} \\ \midrule \dataset{Wiki} & 0.21 & 0.22 & 0.18 \\ \dataset{Pubs} & 0.22 & 0.21 & 0.18 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Generation performance for \clfeed{}$\rightarrow$\clfev, with \method{BPR} as the retriever.} \label{tab:results3} \end{table} \begin{figure*}[t] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\textwidth]{fev2ns.png} \caption{\method{FEV2}} \label{fig:res-fev2} \end{subfigure} ~ \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\textwidth]{fev3ns.png} \caption{\method{FEV3}} \label{fig:res-fev3} \end{subfigure} \caption{\metric{ACC} performance over different numbers of retrieved documents for \method{BPR} and \method{BM25}, with \dataset{Wiki}.} \label{fig:res-ret} \end{figure*} \subsection{RQ1: Which knowledge source performs best?} \label{sec:rq1} We first present the baseline system results in \tabref{baseline-res}. Note that all results are presented in the two configurations of \dataset{C-fever}, where the veracity has either 2 classes (\method{FEV2}) or 3 classes (\method{FEV3}); see \secref{pd} for details. In terms of explanation generation, we can see that \method{T5-only} substantially outperforms \method{Top1} (using either \dataset{Wiki} or \dataset{Pubs}).
This suggests that it is important to have a generator summarise over multiple evidence passages to generate a good explanation. In terms of veracity prediction, we see that a pure veracity prediction model (\method{Bert-veracity}) performs best, although it is important to note that this model uses explanation text as part of its input (where \method{T5-only} has to generate the explanation). \tabref{results1} presents the full results for veracity prediction and explanation generation over \dataset{C-fever} (for training and testing), using either \dataset{Wiki} or \dataset{Pubs} (\secref{knowledge-source}) as the knowledge source. In terms of veracity prediction (\metric{ACC}), our model (using either \dataset{Wiki} or \dataset{Pubs}) outperforms \method{T5-only}; more encouragingly, our best model (using \dataset{Wiki} and \method{BPR}) outperforms \method{Bert-veracity}, which has access to the ground-truth explanation as input, suggesting that the explanation generator component helps veracity prediction. In terms of explanation generation quality, our model consistently outperforms \method{T5-only} (using either \dataset{Wiki} or \dataset{Pubs}) over all metrics (\metric{B-SCORE$_{\text{\sf rs}}$}, \metric{ROUGE-1}, and \metric{ROUGE-L}), implying that the incorporation of an external knowledge source aids explanation generation. Comparing the results between \dataset{Wiki} and \dataset{Pubs}, we see that \dataset{Wiki} generally performs slightly better. We theorise this may be due to 2 reasons: (1) \method{CLIMATE-FEVER} is based on Wikipedia (as the source of evidence sentences), and so there is an element of data bias when using \dataset{Wiki} as our knowledge source;\footnote{It is theoretically possible for the retriever to retrieve the same supporting passage as the explanation text (output), given that the explanation is extracted from Wikipedia in \dataset{C-fever}.
In practice this is rare since the retriever uses the claim text as the query.} and (2) \dataset{Pubs} is orders of magnitude smaller in terms of the number of passages, and its domain alignment advantage appears to be outweighed by the resulting data sparseness. \subsection{RQ2: Which retriever performs best?} Looking at \tabref{results1}, in terms of both veracity prediction and explanation quality, \method{BPR} generally outperforms \method{BM25}, although the performance gap is smaller when we use \dataset{Pubs} as the knowledge source. As such, we base the remainder of our experiments exclusively on \method{BPR}. \subsection{RQ3: How well does the method work in the cross-dataset setting?} \label{sec:rq3} \tabref{results1} was based on the \clfev{}$\rightarrow$\clfev dataset setting. Here, we explore cross-dataset performance to test the robustness of the proposed model, based on the two settings of: (1) train on \dataset{C-fever} and test on \dataset{Feedback} (\clfev{}$\rightarrow$\clfeed); and (2) train on \dataset{Feedback} and test on \dataset{C-fever} (\clfeed{}$\rightarrow$\clfev). We first present \tabref{results2}, evaluating only the generation quality, recalling our comments about label imbalance in \dataset{Feedback} from \secref{pd}. We see a considerable drop in values in comparison to the in-dataset setting of \tabref{results1}. This drop can be attributed to the fact that \dataset{Feedback} explanations are manually written, and generally longer and follow a different style to \dataset{C-fever}. Next we look at \tabref{results3} for \clfeed{}$\rightarrow$\clfev and see that the raw results are higher than in \tabref{results2}, though still below the in-dataset setting.
\begin{table*}[t] \small \centering \begin{tabular*}{\linewidth}{l}\toprule \textbf{Text} \\ \midrule \textbf{Claim}: About 60\% of the warming observed from 1970 to 2000 was very likely caused by the above natural 60-year climatic \\ cycle during its warming phase.\\ \textbf{\dataset{Wiki}}: The observed warming over the past 60 years was ``very likely'' (greater than 90\% probability, based on expert \\ judgement) due to human-induced emissions of greenhouse gases. \\ \textbf{Ref}: It is extremely likely (95-100\% probability) that human influence was the dominant cause of global warming between \\ 1951-2010. \\ \hdashline \textbf{Claim}: Our harmless emissions of trifling quantities of carbon dioxide cannot possibly acidify the oceans\\ \textbf{\dataset{Pubs}}: The oceans are affected by the release of carbon dioxide, is responsible for the increase in the pH of the \\ Earth's oceans \\ \textbf{Ref}: Carbon dioxide also causes ocean acidification because it dissolves in water to form carbonic acid \\ \midrule \textbf{Claim}: Believers think the warming is man-made, while the skeptics believe the warming is natural and contributions \\ from man are minimal and certainly not potentially catastrophic \\ \textbf{\dataset{Wiki}}: The scientific consensus on climate change is that the trend is very likely caused mainly by human-induced emissions \\ of greenhouse gases\\ \textbf{\dataset{Pubs}}: Scientists have concluded that the warming observed over the past 50 years is primarily human-induced, and that \\ the effects are ``very likely'' to be catastrophic (although some argue that the effects are likely to be more severe than others) \\ \textbf{Ref}: Attribution sceptics or deniers (who accept the global warming trend but see natural causes for this), \\ and doubt that human activities are responsible for the observed trends.\\ \bottomrule \end{tabular*} \caption{Example generated explanations in the \clfev{}$\rightarrow$\clfev setting based on \dataset{Wiki} and
\dataset{Pubs} (with reference explanations).} \label{tab:gen-ex1} \end{table*} \subsection{RQ4: What is the optimal number of retrieved documents?} To check the effect of the number of retrieved documents for both \method{BM25} and \method{BPR}, we present \metric{ACC} at different retrieval depths $k$ (between 1 and 20) in \figref{res-ret}. We see that for both \method{FEV2} and \method{FEV3}, in the case of \method{BPR} we achieve the best performance with 5 documents, before dropping slightly and flattening out. In the case of \method{BM25}, it takes more retrieved documents (10) to reach the best performance, before either flattening out or dropping back in performance, suggesting that the retrieval quality of \method{BPR} is higher than \method{BM25} for small values of $k$. \begin{table*}[t] \small \centering \begin{tabular*}{\linewidth}{l}\toprule \textbf{Text} \\ \midrule \textbf{Claim}: Sea level rise is decelerating. \\ \textbf{KS} \dataset{Wiki}: Sea levels are rising due to anthropogenic global warming, which is the cause of many global sea level rises.\\ The current rate of sea level rise is accelerating, and the acceleration of ice loss is due to land-based ice melting.\\ \textbf {Ref}: This acceleration is due mostly to human-caused global warming, which is driving thermal expansion of seawater \\ and the melting of land-based ice sheets and glaciers.\\ \hdashline \textbf{Claim}: The Great Barrier Reef is in fine fettle \\ \textbf{KS} \dataset{Pubs}: Increased sea surface temperatures boosted by heatwaves have triggered three global mass bleaching \\events in the Great Barrier Reef since 1998.\\ \textbf{Ref}: The percentage of baby corals being born on the Great Barrier Reef dropped drastically in 2018 and scientists are \\ describing it as the early stage of a ``huge natural selection event unfolding''\\ \bottomrule \end{tabular*} \caption{Example generated explanations in the \clfeed{}$\rightarrow$\clfev setting based on \dataset{Wiki} and 
\dataset{Pubs} (with reference explanations).} \label{tab:gen-ex2} \end{table*} \begin{table*}[t] \small \centering \begin{tabular*}{\linewidth}{l}\toprule \textbf{Text} \\ \midrule \textbf{Claim}: Marine life has nothing whatsoever to fear from ocean acidification. \textbf{Label}: Refutes \\ \textbf{Trained on \dataset{C-fever}}: Acidification of the oceans has a negative impact on marine ecosystems \textbf{\metric{B-SCORE$_{\text{\sf rs}}$}} 0.46 \\ \textbf{Trained on \dataset{Feedback}}: Decreasing ocean pH is documented to pose significant risks to marine ecosystems, though the \\ magnitude of the impacts depends on specific species. \textbf{\metric{B-SCORE$_{\text{\sf rs}}$}} 0.33 \\ \textbf {Ref1}: Human activities affect marine life and marine habitats through overfishing, pollution, acidification and the \\ introduction of invasive species.\\ \textbf{Ref2}: Rising levels of acids in seas may endanger marine life, says study \\ \midrule \textbf{Claim}: Tuvalu sea level isn't rising. 
\textbf{Label}: Refutes\\ \textbf{Trained on \dataset{C-fever}}: Tuvalu is affected by the effects of the Perigean spring tide events, which raise the \\ sea level \textbf{\metric{B-SCORE$_{\text{\sf rs}}$}} 0.75\\ \textbf{Trained on \dataset{Feedback}}: Global average sea level is rising due to greenhouse gas emissions, with the highest rates in the \\ tropical Pacific, which are vulnerable to coastal erosion \textbf{\metric{B-SCORE$_{\text{\sf rs}}$}} 0.22 \\ \textbf{Ref}: Tuvalu is also affected by perigean spring tide events which raise the sea level higher than a normal high tide.\\ \bottomrule \end{tabular*} \caption{Comparative examples in the \clfev{}$\rightarrow$\clfev and \clfeed{}$\rightarrow$\clfev settings, with their \metric{B-SCORE$_{\text{\sf rs}}$}.} \label{tab:gen-ex3} \end{table*} \subsection{Generation analysis} Noting that the generation evaluation metrics (\metric{B-SCORE$_{\text{\sf rs}}$}, \metric{ROUGE-1}, and \metric{ROUGE-L}) may not tell the whole story, we present some example generations in \tabref{gen-ex1} and \tabref{gen-ex2}. Looking at the first part of \tabref{gen-ex1}, we see that the generated explanations are similar to the reference, whereas in the second part of the table, the generated outputs from both KS (for the same claim) are different to the reference generation, showing the impact of the two knowledge sources. Next we look at the examples in \tabref{gen-ex2}. We see that even though the model was trained on \dataset{Feedback} with only 90 training instances, as a result of the knowledge sources and retrievers, the model can still generate both coherent and semantically relevant explanations for the claim, pointing to the fact that high-quality paired data can pay rich dividends even in small quantities. \section{Discussion} We present two examples of explanation generation in \tabref{gen-ex3} for the \clfev{}$\rightarrow$\clfev and \clfeed{}$\rightarrow$\clfev settings, with their \metric{B-SCORE$_{\text{\sf rs}}$}.
Looking at the first example, we see that the \metric{B-SCORE$_{\text{\sf rs}}$} when training on \dataset{C-fever} is higher than when training on \dataset{Feedback}. The explanation for \dataset{C-fever} is broadly correct but contains little detail. On the other hand, the explanation trained on \dataset{Feedback} has more substance, as it talks about \textit{pH} and \textit{species}, but ends up with a lower \metric{B-SCORE$_{\text{\sf rs}}$}. In the second example, similarly we see a high \metric{B-SCORE$_{\text{\sf rs}}$} for the explanation trained on \dataset{C-fever}, as it is able to extract almost the same evidence as the reference. This is mainly due to the training and testing set being from the same dataset,\footnote{In this case, the KS is also \dataset{Wiki}, which was used to construct \method{CLIMATE-FEVER}.} allowing the model to potentially extract exact chunks from context passages. Looking at the explanation trained on \dataset{Feedback}, however, we see that it has at least the same level of correctness and diversity, but ends up with a lower score. Given concerns about whether \metric{B-SCORE$_{\text{\sf rs}}$} is able to account for topical nuances in the domain of climate change, we additionally performed manual annotation of the quality of the explanations. Taking inspiration from \newcite{atanasova+:2020}, we conduct this evaluation in two forms: (1) given a claim, a generated explanation, and the true veracity label, annotate whether the explanation is in agreement with the true label, as a binary classification task (\method{Anno-T1}); and (2) rank the two explanations (for the same claim) according to their overall quality (\method{Anno-T2}).
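The aggregation of the annotations for the two tasks can be sketched as follows (majority vote over annotators, then raw agreement with the true label for the first task and a mean rank for the second):

```python
from collections import Counter
from statistics import mean

def majority_vote(votes):
    """Collapse per-annotator judgements into a single label."""
    return Counter(votes).most_common(1)[0][0]

def raw_agreement(annotations, gold_labels):
    """Fraction of examples whose majority-voted judgement matches
    the true veracity label (first annotation task)."""
    return mean(majority_vote(v) == g
                for v, g in zip(annotations, gold_labels))

def mean_rank(rank_annotations):
    """Average majority-voted rank of a system's explanations over
    all examples (second annotation task; lower is better)."""
    return mean(majority_vote(v) for v in rank_annotations)
```

Each inner list holds the judgements of the individual annotators for one example; inter-annotator agreement itself (Krippendorff's $\alpha$) is computed separately.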
\begin{table}[t] \centering \begin{adjustbox}{max width=0.8\linewidth} \begin{tabular}{lcc} \toprule Dataset & \metric{MAR} & \metric{V-AGREE} \\ \midrule \dataset{C-fever} & 1.64 & 0.72 \\ \dataset{Feedback} & 1.36 & 0.80 \\ \bottomrule \end{tabular} \end{adjustbox} \caption{Manual evaluation of explanation quality.} \label{tab:results-ds} \end{table} We conduct this manual evaluation on a small sample of 25 examples from the test set, and collect the explanations from the 2 data sources (resulting in a total of 50 instances: one from each data source for the same 25 claims), shuffling the explanations randomly. We use 3 annotators and calculate Krippendorff's inter-annotator agreement \cite{hayes+:2007},\footnote{We use the nominal metric for \method{Anno-T1} and the ordinal metric for \method{Anno-T2}.} resulting in $\alpha = 0.58$ for \method{Anno-T1}, and $\alpha = 0.61$ for \method{Anno-T2}. For our evaluation, we then take the majority vote across annotators. For \method{Anno-T1}, we calculate raw agreement with the true label (\metric{V-AGREE}; higher is better), and for \method{Anno-T2} we calculate mean average rank (\metric{MAR}) across all the 25 examples for each dataset (lower is better). Looking at the results in \tabref{results-ds}, we can see that \dataset{Feedback} performs better than \dataset{C-fever} over both metrics, in contradiction to what we found using \metric{B-SCORE$_{\text{\sf rs}}$}, \metric{ROUGE-1}, and \metric{ROUGE-L} in \secref{rq3}. This suggests: (1) that while automatic metrics provide a general sense of the quality of the generated explanations, they are not able to capture the subtle nuances of the data; and (2) training the model using high-quality manually-written explanations, even in small quantities, is beneficial. \section{Conclusion} \label{sec:conclusion} We explored the task of joint veracity prediction and explanation generation for climate change claims, and prepared a data pipeline consisting of a knowledge source and paired claim--explanation data.
We transposed the idea of using an external knowledge source with a retriever and a reader from open-domain question answering to our task, and experimented with two different knowledge sources, two retrievers, and varying numbers of retrieved documents. Through manual evaluation, we analysed the shortcomings of automatic metrics such as \metric{B-SCORE$_{\text{\sf rs}}$}, and showed that training the model on a small set of high-quality, manually-written explanations, augmented with a knowledge source, can be highly beneficial. \bibliographystyle{acl_natbib}
\section{Introduction} \label{Intro} In \cite{Mil}, the first named author gave an account of a connection between simple $(4a+1)$-knots of genus $1$ with Alexander polynomial \begin{equation} \label{Delta_m} \Delta_m(t) = mt^2 - (2m-1)t + m, \qquad m \in {\mathbb{Z}},\end{equation} and binary quadratic forms of discriminant $1-4m$ (note that this is also the discriminant of the quadratic $\Delta_m$). In particular, there is a surjective map from such binary quadratic forms to such knots, which is injective exactly when $m = \pm p$ for a prime $p$, and otherwise is expected to have large kernel. As a consequence, the first named author gave interesting heuristics suggesting that knots with Alexander polynomial $\Delta_p(t)$ with $p$ prime should dominate the total count of such knots up to isomorphism. Determining the size of the heuristically dominant term leads to the following question, which is purely about definite binary quadratic forms: \\ \\ \textbf{Question}: How many $\operatorname{SL}_2({\mathbb{Z}})$-classes of integral, positive definite binary quadratic forms are there whose discriminant is of the form $1-4p$ with $p$ prime, and bounded by $X$ in absolute value? \\ Note that here the binary quadratic forms are allowed to be imprimitive; there is no obvious topological interpretation for primitivity of the quadratic form associated to a knot. If we write $H(D)$ for the total number of $\operatorname{SL}_2$-equivalence classes of positive definite binary quadratic forms of discriminant $D$ (this is almost the same as the Hurwitz class number, except that we are not weighting the forms that have automorphisms; it has the same asymptotics), our question asks: what is $\sum_{p \leq X} H(1 - 4p)$?\\ The previous paper \cite{Mil} conjectured that the answer should be asymptotic to $X^{3/2}/\log X$, based on the heuristic assumption that the class numbers $H(1-4p)$, $p$ prime, have the same statistical behaviour as the class numbers $H(1-4m)$ for all $m$.
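As a computational aside (not part of the proofs below), $H(D)$ can be evaluated by brute force over Gauss-reduced representatives, counting imprimitive forms and not weighting automorphisms; a sketch:

```python
def class_count(D):
    """Count SL_2(Z)-classes of positive definite binary quadratic forms
    a*x^2 + b*x*y + c*y^2 of discriminant D = b^2 - 4ac < 0, via reduced
    representatives |b| <= a <= c (with b >= 0 whenever |b| == a or
    a == c); imprimitive forms are included and automorphisms are not
    weighted, matching H(D) as defined above."""
    assert D < 0 and D % 4 in (0, 1)
    count, a = 0, 1
    while 3 * a * a <= -D:           # reduction forces 3a^2 <= |D|
        for b in range(-a, a + 1):
            if (b - D) % 2:          # b^2 = D (mod 4) forces matching parity
                continue
            if (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a:
                continue
            if b < 0 and (-b == a or a == c):
                continue             # equivalent to the form with b > 0
            count += 1
        a += 1
    return count
```

For example, $H(1 - 4\cdot 7) = H(-27) = 2$, counting $(a,b,c) = (1,1,7)$ together with the imprimitive form $(3,3,3)$.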
That paper applied a sieve to prove an upper bound of the form $O(X^{3/2}/\log X)$, without determining the constant explicitly. In this paper we give an exact asymptotic for the counting problem. \begin{theorem} \label{MT} We have the asymptotic formula \[\sum_{p \leq X} H(1 - 4p) = C_{\Art} \cdot \frac{2 \pi}{9} \cdot \frac{X^{3/2}}{\log X} + O \left(\frac{X^{3/2}} {(\log X)^{2}} \right). \] \end{theorem} Here the constant $C_{\Art}$ is \emph{Artin's constant}, given by the Euler product over primes \[C_{\Art} = \prod_{\ell \text{ prime}} \left(1 - \frac{1}{\ell(\ell-1)} \right). \] This shows that the heuristic assumption in \cite{Mil} is not quite right. Comparing with the asymptotic $\sum_{m \le X} H(1-4m) \sim \frac{2\pi}{9} X^{3/2}$, we see that the class numbers $H(1-4p)$ are smaller on average than the class numbers $H(1-4m)$ by a factor of Artin's constant $C_{\Art}$.\\ Artin's constant originally arose in the context of Artin's conjecture on primitive roots \cite{Hoo1}, which predicts the proportion of primes $\ell$ for which a given integer $a \ne \pm 1$ is a primitive root mod $\ell$ (for generic $a$ this proportion is expected to equal $C_{\Art}$). We do not know of a direct connection between Theorem \ref{MT} and Artin's conjecture. \\ Similar problems were considered by Friedlander and Iwaniec in \cite{FI hyperbolic}, for the case of discriminants of the form $-4p$, and by Nagoshi in \cite{Nagoshi} for discriminants of the form $-p \equiv 1 \pmod{4}$ (Nagoshi notably computed all moments). Our second proof, using the analytic class number formula, is inspired by these works. However, in implementing this strategy we encounter additional complications: where Friedlander-Iwaniec and Nagoshi obtain a single main term, we end up with infinitely many main terms, which we must sum to obtain the final answer.
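As a numerical aside, the Euler product defining $C_{\Art}$ converges quickly: truncating at $\ell \le 10^5$ (an arbitrary cutoff of ours) already gives $C_{\Art} \approx 0.37396$. A minimal sketch of the computation:

```python
def artin_constant(limit=10**5):
    """Approximate Artin's constant by truncating its Euler product over
    primes l <= limit; the omitted tail is of size O(1/(limit*log(limit)))."""
    # simple sieve of Eratosthenes up to `limit`
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    C = 1.0
    for l in range(2, limit + 1):
        if sieve[l]:
            C *= 1 - 1 / (l * (l - 1))
    return C
```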
These extra main terms come from two sources: we are including imprimitive forms in our count, and numbers of the form $1-4p$, unlike the primes themselves, are slightly more likely to be quadratic non-residues modulo a prime $\ell$ than residues. \\ We expand on this point. While in previous works restricting to primes did not affect the average size of the class number, in our case it does, producing a factor of $C_{\Art}$ that does not appear in the total count of all binary quadratic forms. \subsection{Outline of the proofs} We will in fact give two proofs of the main theorem: one where we count integral points in the fundamental domain for $\operatorname{SL}_2({\mathbb{Z}})$-classes of positive definite binary quadratic forms having absolute discriminant bounded by $X$, and one which computes using the class number formula. The reason for this redundancy is that the two approaches have distinct advantages: the former requires an \emph{explicit} equidistribution result for roots of quadratic polynomials and gives us an opportunity to record such a result in the literature. In fact the result is easily deduced from the work of Hooley \cite{Hoo1}. The latter approach has the advantage that it is computationally less intensive. \\ The first proof proceeds by counting integral points in the classical Gauss fundamental domain directly. The trick is to write the discriminant equation \[4ac - b^2 = 4p - 1\] as \[ac = k^2 + k + p,\] where $b = 2k+1$. Thus one reduces the problem to estimating a divisor sum of the shape \[\sum_{p \leq X} \sum_{k} d(k^2 + k + p). \] Such a sum is well within the purview of modern analytic number theory; see \cite{Lap} for a recent example. However, the issue is that the region we are counting in is somewhat irregular in shape, and so some thought is required to write down sums that can be evaluated precisely. \\ The key idea is to note that \[\#\{k \pmod{a} : a | k^2 + k + p\} = \sum_{m|a} \left(\frac{1-4p}{m} \right).
\] This reduces our counting problem to evaluating sums of the shape \[\sum_p \sum_a \sum_{m|a} \left(\frac{1-4p}{m} \right). \] Unfortunately, sometimes we do not sum over a complete set of residues mod $a$, so we will require an equidistribution result. Such equidistribution results are well-known; see for example the seminal work of Hooley \cite{Hoo1}. We require an explicit error term with good dependence on $p$, which does not appear to be in the literature. Fortunately such an estimate follows easily from Hooley's argument in \cite{Hoo1}, and we have recorded it here as Proposition \ref{quad equi}. \\ The rest of the argument relies on applying oscillation lemmas of sums involving Jacobi symbols and the Siegel-Walfisz theorem. \\ Our second proof counts binary quadratic forms with the analytic class number formula \[h(D) = \frac{1}{\pi} L(1, \chi_D) |D|^{1/2}\] for negative discriminant $D$. This formula applies to non-maximal orders, but here $h(D)$ only counts the primitive binary quadratic forms of discriminant $D$. Since we want to count all binary quadratic forms, our first step is to write our sum $Q(X) = \sum_{d \ge 1} Q_d(X)$, where $Q_d(X)$ counts only the forms $ax^2 + bxy + cy^2$ with content $\gcd(a, b, c)= d$. To estimate $Q_d(X)$, we first estimate the related quantity $T_d(X) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} L(1, \chi((1-4p)/d^2))$, and then apply Abel summation to the sum over $p$.\\ This reduces our problem to bounding \[ T_d(X) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} L(1, \chi((1-4p)/d^2)) = \sum_{{\substack{p \le X\\ d^2 \mid 1-4p}}} \sum_{n \ge 1} \frac{1}{n} \Legendre{(1-4p)/d^2}{n}. \] As in the first method, we have a double sum of quadratic residue symbols, though now it is weighted, and the strategies we apply are similar to the first approach. We are able to cut off the sums in the $L$-functions at $n \sim X^{1/2+\epsilon}$. 
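Returning to the key identity of the first proof: the divisor-sum formula for the number of roots of $k^2 + k + p \equiv 0 \pmod{a}$ can be checked numerically. The sketch below (the helper names are ours, not the paper's) verifies it for odd squarefree moduli $a$, computing the right-hand side with the Jacobi symbol.

```python
def jacobi(n, m):
    """Jacobi symbol (n/m) for odd m >= 1; returns 0 when gcd(n, m) > 1."""
    assert m > 0 and m % 2 == 1
    n %= m
    t = 1
    while n:
        while n % 2 == 0:          # pull out factors of 2
            n //= 2
            if m % 8 in (3, 5):
                t = -t
        n, m = m, n                # quadratic reciprocity swap
        if n % 4 == 3 and m % 4 == 3:
            t = -t
        n %= m
    return t if m == 1 else 0

def root_count(a, p):
    """Left-hand side: #{k mod a : k^2 + k + p == 0 (mod a)}."""
    return sum(1 for k in range(a) if (k * k + k + p) % a == 0)

def divisor_sum(a, p):
    """Right-hand side: sum over m | a of the Jacobi symbol (1-4p / m)."""
    return sum(jacobi(1 - 4 * p, m) for m in range(1, a + 1) if a % m == 0)
```

For example, `root_count(15, 2) == divisor_sum(15, 2) == 0`: the congruence $k^2 + k + 2 \equiv 0 \pmod{15}$ has no solutions, and the symbols over the divisors $1, 3, 5, 15$ cancel.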
We then show that the range $(\log X)^B < n < X^{1/2 + \epsilon}$ yields an error term by dividing it into dyadic intervals and using the double oscillation lemma on each. Finally, for $n \le (\log X)^B$ we use Siegel-Walfisz to estimate the sum over $p$: this sum is generally nonzero, so we then have to combine our Siegel-Walfisz main terms into a single main term. \subsection{Motivation from and application to knot theory} This line of research was originally motivated by a question related to knot theory. We sketch the appropriate background (see Section 2 of \cite{Mil} for more details) and state the implications of our results in terms of knots and knot invariants.\\ Roughly speaking, an $n$-knot is a ``nicely'' embedded copy of $S^n$ in $S^{n+2}$, up to topological equivalence. (Here it matters that we keep track of the orientations on both $S^n$ and $S^{n+2}$.) The most well-known case is that of $n=1$, where the classification of knots and the study of their invariants is an extremely rich and active field. In higher dimensions, the study of all knots only gets more complicated, but certain special families of knots are well understood, in particular the simple $n$-knots, which are those for which the first $\lfloor\frac{n}{2}\rfloor$ homotopy groups of the knot complement are ``as trivial as possible'' (explicitly, $\pi_1$ is isomorphic to ${\mathbb{Z}}$ and $\pi_i$ is trivial for $2 \le i \le \frac{n}{2}$).\\ For $n = 5$ and $n \ge 7$, simple $n$-knots have been completely classified in terms of algebraic data: notably, this classification only depends on the dimension $n$ modulo $4$.
Simple knots have a fundamental invariant, the Alexander polynomial, and Bayer and Michel proved that for a squarefree polynomial $\Delta \in {\mathbb{Z}}[t]$, there are only finitely many simple $n$-knots with Alexander polynomial $\Delta$.\\ The paper \cite{Mil} studied simple $(4a+1)$-knots of genus $1$, for fixed $a \ge 1$ (as noted above, the classification does not depend on $a$). These are exactly the simple $(4a+1)$-knots with Alexander polynomial $\Delta_m = m t^2 + (1-2m) t + m$ for some nonzero integer $m$. Using the algebraic classification of simple $(4a+1)$-knots, in \cite{Mil} the first named author showed that: \begin{theorem}[\cite{Mil} Theorem 2.5 (vi), Corollary 2.13] Simple $(4a+1)$-knots with Alexander polynomial $\Delta_m$ are in bijection with $\operatorname{SL}_2({\mathbb{Z}}[1/m])$-equivalence classes of binary quadratic forms over ${\mathbb{Z}}[1/m]$ of discriminant $1-4m$. Furthermore, when $m=p$ is prime, $\operatorname{SL}_2({\mathbb{Z}}[1/p])$-equivalence classes of binary quadratic forms of discriminant $1-4p$ over ${\mathbb{Z}}[1/p]$ are naturally identified with $\operatorname{SL}_2({\mathbb{Z}})$-equivalence classes of binary quadratic forms over ${\mathbb{Z}}$ of discriminant $1-4p$. \end{theorem} Combining this with our Theorem~\ref{MT}, and including a factor of $2$ as we are allowing both positive and negative definite forms, we obtain: \begin{corollary} The total number of simple $(4a+1)$-knots of genus $1$ with Alexander polynomial $p t^2 + (1-2p) t + p$ for $p \in [1, X]$ prime is $\sim C_{\Art} \frac{4\pi}{9} X^{3/2}/(\log X)$. \end{corollary} The heuristics of \cite{Mil} indicate that most simple knots of genus $1$ (ordered by the height of the Alexander polynomial) should have Alexander polynomial of this form. That is, the total number of $(4a+1)$-knots of genus $1$ with Alexander polynomial $m t^2 + (1-2m) t + m$ for $m \in [-X, X]$ should also be asymptotic to $C_{\Art} \frac{4\pi}{9} X^{3/2}/\log X$.
It is an interesting question for future research to see whether this can be proven: doing so will require getting some control on the sizes of the oriented class groups of the rings ${\mathbb{Z}} \left[ \frac{1}{m}, \frac{1 + \sqrt{1-4m}}{2} \right]$, which is more complicated when $m$ is not prime. \\ Finally, we note that the algebraic invariants (the Alexander module and Blanchfield pairing, or equivalently the Seifert matrix) used to classify $(4a+1)$-knots for $a \ge 1$ are also useful in the low-dimensional case $a=0$, though they are no longer complete invariants of the knot, and miss a lot of knot-theoretic information. Indeed, $(4a+1)$-knots for fixed $a \ge 1$ are in bijection with $S$-equivalence classes of Seifert forms \cite[Theorem 2.4]{Mil}.\\ This allows us to restate the previous corollary in terms of objects of interest to low-dimensional topologists: \begin{corollary} The number of $S$-equivalence classes of $2 \times 2$ Seifert matrices $P$ with Alexander polynomial $\det (t P - P^t) = p t^2 + (1-2p) t + p$ for $p \in [1, X]$ prime is $\sim C_{\Art} \frac{4\pi}{9} X^{3/2}/(\log X)$. \end{corollary} \section{Preliminary lemmas} In this section we state several technical lemmas and propositions which will be needed for our proofs of Theorem \ref{MT}. Some of the statements that follow are classical and well-known, while others are new; in particular, we believe that Proposition \ref{quad equi} and Lemma \ref{unrestricted double oscillation} are new, and we give proofs for these results in this section. \subsection{An unrestricted version of the double oscillation lemma} We begin with the statement of the following well-known lemma: \begin{lemma}[Double Oscillation Lemma] \label{double} Let $\{\alpha_m\}, \{\beta_n\}$ be two sequences of complex numbers with each term having absolute value bounded by $1$. Let $M,N$ be positive real numbers.
Then we have \[\sum_{m \leq M} \sum_{n \leq N} \alpha_m \beta_n \mu^2(2m) \mu^2(2n) \Legendre{m}{n} \] \[ \ll MN \min \left\{ \left(M^{-1/2} + (N/M)^{-1/2} \right), \left(N^{-1/2} + (M/N)^{-1/2} \right) \right\} \] and for every $\varepsilon > 0$, \[\sum_{m \leq M} \sum_{n \leq N} \alpha_m \beta_n \mu^2(2m) \mu^2(2n) \Legendre{m}{n} \ll_\varepsilon MN \left(M^{-1/2} + N^{-1/2} \right) (MN)^\varepsilon \] \end{lemma} Lemma \ref{double} essentially follows from the Polya-Vinogradov inequality. There are several instances in this paper where Lemma \ref{double} is needed. However, the sharpest form of Lemma \ref{double} requires that the sum is supported on \emph{odd squarefree numbers}. We observe here that applying a squarefree sieve allows us to remove this restriction at the cost of a logarithmic factor, which is often an acceptable loss. We thus obtain the following version of Lemma \ref{double}: \begin{lemma}[Unrestricted Double Oscillation Lemma] \label{unrestricted double oscillation} Let $\{\alpha_m\}, \{\beta_n\}$ be two sequences of complex numbers with each term having absolute value bounded by $1$. Let $M,N$ be positive real numbers. Then we have \[\sum_{m \leq M} \sum_{n \leq N} \alpha_m \beta_n \Legendre{m}{n}\] \[ \ll MN (\log M + \log N) \min \left\{ \left(M^{-1/2} + (N/M)^{-1/2} \right), \left(N^{-1/2} + (M/N)^{-1/2} \right) \right\} \] \end{lemma} \begin{proof}[Proof of Lemma \ref{unrestricted double oscillation}] We first note that, with minor modifications to the constant, we can easily remove the restriction that $m$ and $n$ be odd, since every even squarefree number is twice an odd squarefree number. Removing the squarefree restriction takes more work. \\ By symmetry, it is enough to show that our left hand side $\Xi(M, N) = \sum_{m \leq M} \sum_{n \leq N} \alpha_m \beta_n \Legendre{m}{n}$ satisfies $\Xi(M, N) \ll ( M^{1/2} N + M^{3/2} N^{1/2} ) (\log M + \log N)$.
\\ We then decompose $\Xi(M, N)$ as \[\Xi(M, N) = \sum_{k \le \sqrt{M}} \sum_{\ell \le \sqrt{N}} \Xi_{k, \ell}(M, N)\] where \[ \Xi_{k, \ell}(M, N) = \sum_{m' \le k^{-2}M \text{ squarefree}} \sum_{n' \le \ell^{-2}N \text{ squarefree}} \alpha_{k^2 m'} \beta_{\ell^2 n'} \Legendre{k^2 m'}{\ell^2 n'} \] is the total contribution from all pairs $(m, n)$ of the form $(k^2 m', \ell^2 n')$ with $m'$ and $n'$ squarefree. Since $m'$ and $n'$ are now restricted to be squarefree, we can apply Lemma \ref{double} to obtain \[ \Xi_{k, \ell}(M, N) \ll k^{-1} \ell^{-2} M^{1/2} N + k^{-3} \ell^{-1} M^{3/2} N^{1/2}. \] Summing over $k, \ell$ gives the bound \[ \Xi(M, N) \ll \sum_{k \le \sqrt{M}} \sum_{\ell \le \sqrt{N}} (k^{-1} \ell^{-2} M^{1/2} N + k^{-3} \ell^{-1} M^{3/2} N^{1/2}) \asymp \log(M)M^{1/2} N + \log(N) M^{3/2} N^{1/2}, \] which is of the required size. \end{proof} \subsection{A Siegel-Walfisz-type lemma} As is well-known, in problems involving summations of Jacobi symbols where both arguments are variable, it is often necessary to consider the sum over just one variable. In such cases Lemma \ref{double} does not apply, and so we will require the following result, which is derived from the Siegel-Walfisz theorem: \begin{lemma}[Siegel-Walfisz] \label{S-W} For every $q \geq 2$, every primitive character $\chi \pmod{q}$, and any $A > 0$, we have the estimate \[\sum_{Y \leq p \leq X} \chi(p) \ll_A \sqrt{q} X(\log X)^{-A} \] uniformly for $X \geq Y \geq 2$. \end{lemma} Although Lemma \ref{S-W} applies to character sums over primes, we can easily adapt it to the sum $\sum_{Y \leq p \leq X} \chi(1-4p)$ when $\chi$ has odd conductor, $m$ say. Indeed, in that case $4$ is invertible mod $m$, so the map $x \mapsto 1 - 4x$ is invertible mod $m$. \\ \\ However, in the sum $\sum_{Y < p < X} \chi_m(p)$ we note that for $Y > m$ the prime $p$ is never divisible by $m$, whence the $0$-class never appears.
Correspondingly, the $1 \pmod{m}$ class never appears for $1 - 4p$, since that would imply $m | p$, as $m$ is odd. \\ If $m = \ell$ is an odd prime, then the map $L: x \mapsto 1-4x$ sends the set of primitive residues $\{1, \cdots, \ell-1\}$ to $\{0, 2, \cdots, \ell-1\}$. Indeed, there are $(\ell-1)/2$ residues $k$ each such that $\left(\frac{k}{\ell} \right) = \pm 1$, and the image of the map contains $(\ell-1)/2$ residues giving $-1$ and $(\ell-3)/2$ residues giving $+1$, since $1 \pmod{\ell}$ is omitted. Thus there is one more $-1$ than $+1$. \\ However if $m = \ell_1 \ell_2 $, then we see that there are \[(\ell_1 - 1)(\ell_2 - 1)/4 + (\ell_1 - 3)(\ell_2 -3)/4 \] residues giving $+1$ in the image of $L$, and \[(\ell_1 - 1)(\ell_2 - 3)/4 + (\ell_1 - 3)(\ell_2 - 1)/4 \] giving $-1$. Expanding, we see that \begin{align*} & (\ell_1 - 1)(\ell_2 - 1) + (\ell_1 - 3)(\ell_2 - 3) - (\ell_1 - 1)(\ell_2 - 3) - (\ell_1 -3)(\ell_2 - 1) \\ & = \ell_1 \ell_2 - \ell_1 - \ell_2 + 1 + \ell_1 \ell_2 - 3 \ell_1 - 3 \ell_2 + 9 - \ell_1 \ell_2 + 3 \ell_1 + \ell_2 - 3 - \ell_1 \ell_2 + 3 \ell_2 + \ell_1 - 3 \\ & = 4, \end{align*} so there will be one more $+1$ than $-1$. \\ The pattern continues, or to put it more formally: the function $m \mapsto \sum_{a \in ({\mathbb{Z}}/m)^*} \chi_m(1-4a)$ is multiplicative on odd squarefree $m$ by the Chinese remainder theorem. We deduce the following lemma: \begin{lemma} \label{S-W mod} Let $m > 1$ be an odd, square-free integer and let $\chi$ be a primitive character modulo $m$. Then for any $A > 0$ we have the estimate \[\sum_{Y \leq p \leq X} \chi(1 - 4p) = \frac{\mu(m)}{\phi(m)} \left(\operatorname{Li}(X) - \operatorname{Li}(Y)\right) + O_A \left(\sqrt{m} X (\log X)^{-A} \right)\] uniformly for $X \geq Y \geq 2$.
\end{lemma} We will require the following result on the equidistribution of roots of a quadratic congruence: \begin{proposition} \label{quad equi} Let $f$ be an irreducible quadratic polynomial, and let $0 \leq \alpha \leq \beta \leq 1$ be real numbers. Put \begin{equation} \label{Sfab} S_f(\alpha, \beta; n) = \# \{v \in \{0, \ldots, n-1\} : f(v) \equiv 0 \pmod{n}, \alpha n \leq v \leq \beta n\}. \end{equation} Then \[\sum_{n \leq X} S_f(\alpha, \beta; n) = (\beta - \alpha) \sum_{n \leq X} S_f(0, 1; n) + O \left(X^{8/9} (\log X)^3 \right). \] \end{proposition} The novelty of Proposition \ref{quad equi} lies in the explicit error term, the main asymptotic being well-known. The asymptotic is derived by examining explicit estimates in \cite{Hoo1} and then applying a method of Erd\H{o}s-Tur\'an, as recorded in Montgomery's book \cite{Mon}. \subsection{Proof of Proposition \ref{quad equi}} The proof begins with the following set-up. Let $f(x) = a x^2 + b x + c \in {\mathbb{Z}}[x]$ be a positive definite, primitive quadratic polynomial. For a given positive integer $n$ let $S_f(\alpha, \beta; n)$ be as in (\ref{Sfab}), and put \[\rho_h(n) = \sum_{f(v) \equiv 0 \pmod{n}} e \left(\frac{hv}{n}\right)\] for $h \in {\mathbb{Z}}$, where as usual $e (s) = \exp (2 \pi i s)$. Put \[D_f(\alpha, \beta; X) = \sum_{n \leq X} S_f(\alpha, \beta; n) - (\beta - \alpha) \sum_{n \leq X} S_f(0, 1; n)\] and define the \emph{discrepancy} to be \begin{equation} D_f(X) = \sup_{0 \leq \alpha \leq \beta \leq 1} |D_f(\alpha, \beta; X)|. \end{equation} We then have the following result due to Erd\H{o}s and Tur\'an (see Chapter 1 of \cite{Mon} for a reference): \begin{lemma} \label{ET} Let notation be as above.
For all positive integers $N$ the discrepancy $D_f(X)$ satisfies the bound \begin{equation} \label{ET bd} D_f(X) \leq \frac{X}{N+1} + 3 \sum_{h = 1}^N \frac{1}{h} \left \lvert \sum_{n \leq X} \rho_h(n) \right \rvert \end{equation} \end{lemma} Lemma \ref{ET} thus reduces the problem of bounding $D_f(\alpha, \beta; X)$ to that of bounding the term \begin{equation} \label{rh sum} \left \lvert \sum_{n \leq X} \rho_h(n) \right \rvert. \end{equation} It is known that (\ref{rh sum}) satisfies the bound $O_{f,h} \left(X^{2/3} (\log X)^2 \right)$, but for our purposes we need to make the dependence on $f$ and $h$ explicit. To do so we must settle for an older approach, due to Hooley \cite{Hoo1}, where this can be done, at the cost of a worse exponent of $4/5$ for $X$. \\ We give a brief description of Hooley's approach in \cite{Hoo1}, being careful to extract the dependence on $h$ and on $f$. One begins by observing that the solutions to the congruence \begin{equation} \label{gen quad} f(x) \equiv 0 \pmod{n} \end{equation} are in bijection with representations of $n$ by $\operatorname{SL}_2({\mathbb{Z}})$-inequivalent binary quadratic forms $g$ having the same discriminant as $f$. Indeed, for \[g(x,y) = a_2 x^2 + a_1 xy + a_0 y^2, \qquad \Delta(g) = \Delta(f),\] it is standard (see \cite{BBDT} for a modern reference) that the fractions $v/n$ corresponding to solutions to the congruence (\ref{gen quad}) are given by \begin{equation} \label{rep1}\frac{v}{n} = \frac{r}{\alpha t} - \frac{a_1 t + 2 a_0 u + b t}{2 a t g(t,u)} \text{ if } |t| > |u|,\end{equation} where $ru -st = 1$ and $g$ runs over a set of pairwise inequivalent forms of discriminant $\Delta(f)$. Similarly, one obtains the expression \begin{equation} \label{rep2} \frac{v}{n} = \frac{s}{\alpha u} - \frac{a_2 t + 2 a_1 u + b t}{2 a u g(t,u)} \text{ if } |t| < |u|.\end{equation} Let $\theta(t,u)$ denote the representation of $v/n$ given by (\ref{rep1}) or (\ref{rep2}).
Let $-D = \Delta(f)$, and let ${\mathcal{F}}(-D)$ denote a fundamental set of forms of discriminant $-D$. We then have the following key formula: \begin{equation} \sum_{n \leq X} \rho_h(n) = \sum_{\substack{g(t,u) \leq X \\ g \in {\mathcal{F}}(-D)}} e(h \theta(t,u)). \end{equation} From here, examining Hooley's argument in \cite{Hoo1} reveals that the only further dependence on $f$ comes from estimating the size of \[F_2(t) - F_1(t),\] where \[F_2(t) = \min \left\{ |t|, - \frac{a_1 t}{a_0} + \frac{1}{a_0} (a_0 X + D t^2)^{1/2} \right\}\] and \[F_1(t) = \max \left\{-|t|, -\frac{a_1 t}{a_0} - \frac{1}{a_0} (a_0 X + Dt^2)^{1/2} \right\}.\] We then see that \[F_2(t) - F_1(t) \ll |t| + \sqrt{\frac{X}{a_0}}+ \frac{\sqrt{D} \cdot |t|}{a_0}\] with the implied constant being absolute. We note that since $g$ is positive definite and reduced, we have $a_0 \geq \sqrt{D}$, from which we obtain the simplified bound \[F_2(t) - F_1(t) \ll |t| + X^{1/2} D^{-1/4}.\] Examining the $\max, \min$ functions in the definitions of $F_1, F_2$ we see that $F_2(t) - F_1(t) \ll |t|$ with an absolute implied constant. Hooley's argument in \cite{Hoo1} then proceeds with an absolute implied constant, giving the bound \begin{equation} \label{Hoo bd} \left \lvert \sum_{n \leq X} \rho_h(n) \right \rvert = O \left(|h|^{4/5} \sigma_{-1/2}^2 (h) X^{4/5} (\log X)^2\right), \end{equation} where $\sigma_z(n) = \sum_{m | n} m^z$. Feeding this into (\ref{ET bd}) we obtain the bound \[O \left(X^{4/5} (\log X)^2 \sum_{1 \leq h \leq N} \frac{\sigma_{-1/2}^2(h)}{h^{1/5}} \right),\] which is the same estimate as in equation (41) in \cite{Hoo1}. Optimizing for $N$ in (\ref{ET bd}) we obtain \begin{equation} \label{Di bd} D_f(X) = O \left(X^{8/9} (\log X)^3 \right), \end{equation} with absolute implied constant. \section{Counting in a fundamental domain} We now begin the first of two proofs of our main theorem. 
By the fundamental work of Gauss, we may parametrize $\operatorname{SL}_2({\mathbb{Z}})$-equivalence classes of positive definite binary quadratic forms having discriminant bounded by $X$ by integral points in a bounded domain: \begin{equation} \label{fun domain} {\mathcal{F}}(X) = \{(a,b,c) \in {\mathbb{Z}}^3 : |b| \leq a \leq c, a,c > 0, 0 < 4ac - b^2 \leq X\}. \end{equation} We remark immediately that \[4p - 1 = 4ac - b^2 \geq 4ac - a^2 \geq 3a^2,\] whence \begin{equation} \label{a fun bd} a \leq \sqrt{\frac{4p}{3}}. \end{equation} We are interested in integral points in ${\mathcal{F}}(X)$ for which $4ac - b^2 = 4p - 1$, where $p$ is a prime. Clearly, this equation implies that $b$ is odd, say $b = 2k+1$. We then have the expression \[ac = k^2 + k + p.\] We make a trivial but important observation: the right hand side is odd for all $p \geq 3$. Indeed, the term $k^2 + k = k(k+1)$ is always even. This implies that $a,c$ must both be odd. \\ The condition that the triple $(a,b,c) = (a, 2k+1, c)$ lies in ${\mathcal{F}}(X)$ implies \begin{equation} \label{k bd} |2k+1| \leq a \leq \sqrt{4p/3}, \end{equation} from which we obtain \begin{equation} \label{k bd 2} |k| \leq \frac{a-1}{2}. \end{equation} Next note that \[4p - 1 = 4ac - b^2 \geq 4a^2 - b^2, \] hence \[b^2 \geq 4a^2 - 4p + 1. \] This lower bound is non-trivial if and only if $a \geq \sqrt{p}$. In this case we obtain \[k^2 + k \geq a^2 - p,\] which gives, up to a bounded adjustment in $|k|$, \begin{equation} \label{k bd 3} |k| \geq \sqrt{a^2 -p}. \end{equation} We are then led to the following sum \begin{align} Q(X) & = \sum_{p \leq X} \left(\sum_{1 \leq a \leq \sqrt{p}} \sum_{\substack{a | k^2 + k + p \\ |k| \leq a/2}} 1 + \sum_{\sqrt{p} \leq a \leq \sqrt{4p/3}} \sum_{\substack{a | k^2 + k + p \\ \sqrt{a^2 - p} \leq |k| \leq a/2}} 1 \right) \\ & = \sum_{p \leq X} \left({\mathcal{T}}_1(p) + {\mathcal{T}}_2(p) \right) \notag \\ & = Q_1(X) + Q_2(X), \notag \end{align} say.
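As a quick consistency check on this parametrization, the hypothetical helpers below compare the $(a,k)$-enumeration (using the exact condition $a \le c$ in place of the approximate bound (\ref{k bd 3})) against a direct enumeration of the triples $(a,b,c)$ in the domain with $4ac - b^2 = 4p - 1$; the two counts agree exactly.

```python
def count_via_ak(p):
    """Enumerate pairs (a, k) with a | k^2 + k + p, b = 2k + 1 in (-a, a],
    and c = (k^2 + k + p) / a >= a; these encode the triples (a, b, c)."""
    total = 0
    a = 1
    while 3 * a * a <= 4 * p - 1:        # from 4p - 1 = 4ac - b^2 >= 3a^2
        for k in range(-(a // 2), (a - 1) // 2 + 1):
            q = k * k + k + p
            if q % a == 0 and q // a >= a:
                total += 1
        a += 1
    return total

def count_triples(p):
    """Directly count (a, b, c) with -a < b <= a <= c and 4ac - b^2 = 4p - 1."""
    total = 0
    D = 4 * p - 1                        # minus the discriminant
    a = 1
    while 3 * a * a <= D:
        for b in range(-a + 1, a + 1):   # only odd b can satisfy 4a | b^2 + D
            if (b * b + D) % (4 * a) == 0 and (b * b + D) // (4 * a) >= a:
                total += 1
        a += 1
    return total
```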
\\ The term $Q_1(X)$ is already in the shape that we want, since the inner sum runs over a complete set of residues. Substantially more work is needed to put $Q_2(X)$, and in particular ${\mathcal{T}}_2(p)$, into an acceptable form. \\ To treat ${\mathcal{T}}_2(p)$, we partition the interval $[\sqrt{p}, \sqrt{4p/3})$ into subintervals of the shape \[[\kappa \sqrt{p}, (\kappa + N^{-1}) \sqrt{p})\] with $1 \leq \kappa \leq 2/\sqrt{3}$ and $N$ a large positive integer (which may depend on $X$). We want to estimate the sum \[{\mathcal{T}}_2(\kappa; p) = \sum_{\kappa \sqrt{p} \leq a \leq (\kappa + N^{-1})\sqrt{p}} \sum_{\substack{a | k^2 + k + p \\ \sqrt{a^2 - p} \leq |k| \leq a/2}} 1. \] Writing $a = \sqrt{p}(\kappa + \delta)$ with $\delta = O \left( N^{-1}\right)$ we find that \[\sqrt{p} = \frac{a}{\kappa + \delta} = \frac{a}{\kappa} \left(1 + O(N^{-1}) \right), \] whence \[a\sqrt{1 - \kappa^{-2} + O(N^{-1})} \leq |k| \leq a/2. \] Note that \begin{align*} \sqrt{1 - \kappa^{-2} + O(N^{-1})} - \sqrt{1 - \kappa^{-2}} & = \frac{1 - \kappa^{-2} + O(N^{-1}) - 1 + \kappa^{-2}}{\sqrt{1 - \kappa^{-2} + O(N^{-1})} + \sqrt{1 - \kappa^{-2}} } \\ & = O(N^{-1}). \end{align*} We write \begin{equation} {\mathcal{T}}_2^\ast(\kappa; p) = \sum_{\kappa \sqrt{p} \leq a \leq (\kappa + N^{-1})\sqrt{p}} \sum_{\substack{a | k^2 + k + p \\ a \sqrt{1 - \kappa^{-2}} \leq |k| \leq a/2}} 1. \end{equation} We see that \[ \sum_{p \leq X} \sum_{j = N}^{\lfloor 2N/\sqrt{3} \rfloor} \left({\mathcal{T}}_2(j/N; p) - {\mathcal{T}}_2^\ast(j/N; p) \right) \ll \int_1^{X^{1/2}} \int_a^{X/a} \int_{I_a} db \, dc \, da, \] where $I_a$ is an interval of length $O(a/N)$. The triple integral on the right satisfies \[\int_1^{X^{1/2}} \int_a^{X/a} \int_{I_a} db \, dc \, da = O \left(\frac{X^{3/2}}{N} \right),\] which will be small enough with an appropriate choice of $N$, say $N = \lfloor (\log X)^A \rfloor$.
We can then apply Proposition \ref{quad equi} to ${\mathcal{T}}_2^\ast(\kappa, p)$ to obtain \begin{equation} {\mathcal{T}}_2^\ast(\kappa, N; p) = \left(1 - 2 \sqrt{1 - \kappa^{-2}}\right) \sum_{\kappa \sqrt{p} \leq a \leq (\kappa + N^{-1})\sqrt{p}} \sum_{\substack{a | k^2 + k + p \\ |k| \leq a/2}} 1 + O \left(p^{4/9} (\log p)^{3} \right). \end{equation} Summing the error term over $p \leq X$ gives \begin{equation} \sum_{p \leq X} p^{4/9} (\log p)^3 \ll X^{13/9} (\log X)^2. \end{equation} We may thus focus on sums of the shape \begin{equation} Q(\kappa, N; X) = \sum_{p \leq X} \sum_{\kappa \sqrt{p} \leq a \leq (\kappa + N^{-1}) \sqrt{p}} \sum_{\substack{a | k^2 + k + p \\ |k| \leq a/2}} 1. \end{equation} The inner most sum exactly counts the number of solutions to the congruence \[k^2 + k + p \equiv 0 \pmod{a}. \] We know that this is given by the expression \[\sum_{m | a} \left(\frac{1- 4p}{m} \right).\] This gives \[Q(\kappa, N; X) = \sum_{p \leq X} \sum_{\kappa \sqrt{p} \leq a \leq (\kappa + N^{-1})\sqrt{p}} \sum_{m | a} \left(\frac{1 - 4p}{m}\right).\] Note that \[\kappa \sqrt{p} \leq a \leq (\kappa + N^{-1}) \sqrt{p} \] if and only if \[\frac{a^2}{\kappa^2}\left(1 - \frac{1}{M+1}\right) \leq p \leq \frac{a^2}{\kappa^2}, \] where $M = N/\kappa$. Hence we may switch order of summation and then abuse notation to obtain \[Q(\kappa, M; X) = \sum_{a \leq \kappa X^{1/2} (1 + (M-1)^{-1})} \sum_{a^2(1 - M^{-1})/\kappa^2 \leq p \leq a^2/\kappa^2} \sum_{m | a} \left(\frac{1-4p}{m}\right). \] Writing $Y = \kappa X^{1/2}(1 + (M-1)^{-1})$ for convenience and then using symmetry, we find that \begin{equation} \label{Q1X} Q(\kappa, M; X) = \sum_{\substack{mn \leq Y \\ m \leq n}} \sum_{(mn)^2(1 - M^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2} \left(\left(\frac{1 - 4p}{m} \right) + \left(\frac{1 - 4p}{n}\right) \right).\end{equation} Our goal is to eventually take advantage of the oscillation of the Jacobi symbol; see Lemma \ref{double}. 
Before we can do so we must massage the sums $Q(\kappa, M; X)$ so that we are summing over square-free numbers, and this requires applying a square-free sieve over the terms $m, n, 4p-1$. For $m,n$ this is very easy to do, but the variable running over primes introduces some minor difficulties. \\ We write \begin{equation} \label{Q1dag} Q^\dagger(\kappa, M; X) = \sum_{\substack{mn \leq Y \\ m \leq n }} \text{ } \sum_{\substack{(mn)^2(1 - M^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2 \\ k^2 | 4p - 1 \Rightarrow k \leq (\log X)^{2A} }} \left(\left(\frac{1 - 4p}{m}\right) + \left(\frac{1 - 4p}{n} \right) \right) \end{equation} and \[Q^\ddagger(\kappa, M; X) = Q(\kappa, M; X) - Q^\dagger(\kappa, M; X).\] We proceed to show that $Q^\ddagger(\kappa, M; X)$ is an error term. To do so we note that, with $\xi = \xi(X) = (\log X)^{2A}$, \begin{align*} Q^\ddagger(\kappa, M; X) & = \sum_{\substack{mn \leq Y \\ m \leq n }} \sum_{\substack{(mn)^2(1 - M^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2 \\ \exists k^2 | 4p - 1 \text{ s.t. } k > \xi }} \left(\left(\frac{1 - 4p}{m}\right) + \left(\frac{1 - 4p}{n} \right) \right) \\ & \leq 2 \sum_{\substack{mn \leq Y \\ m \leq n }} \sum_{\substack{(mn)^2(1 - M^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2 \\ \exists k^2 | 4p - 1 \text{ s.t. } k > \xi }} 1 \\ & \leq 2 \sum_{\substack{mn \leq Y \\ m \leq n }} \sum_{\substack{(mn)^2(1 - M^{-1})/\kappa^2 \leq s \leq (mn)^2/\kappa^2 \\ \exists k^2 | 4s - 1 \text{ s.t. } k > \xi }} 1, \end{align*} where in the last line $s$ runs over the integers instead of the primes. We denote this last inner sum by $S(X)$. We shall prove the following: \begin{lemma} \label{sf simp} We have \[S(X) = O \left(\frac{X}{\xi} + X^{1/2} \right).\] \end{lemma} \begin{proof} For a fixed integer $k$, we note that \begin{align*} S(k; X) & = \#\{s \leq X : k^2 | 4s-1\} \\ & = \frac{X}{k^2} + O(1). \end{align*} Moreover, if $k^2 | 4s-1$ and $s \leq X$, then $k \ll X^{1/2}$.
Thus \begin{align*} S(X) & \ll \sum_{\xi < k \ll X^{1/2}} \left( \frac{X}{k^2} + O(1) \right) \\ & \ll \frac{X}{\xi} + X^{1/2}, \end{align*} as desired. \end{proof} We have thus obtained the following conclusion: \begin{proposition} \label{sieve prop} Let $Q(\kappa, M; X)$ be as in (\ref{Q1X}) and $Q^\dagger(\kappa, M; X)$ be as in (\ref{Q1dag}). Then \[Q(\kappa, M; X) = Q^\dagger(\kappa, M;X) + O_A \left( \frac{X}{(\log X)^A} \right).\] \end{proposition} We write \[R_1(\kappa, M; X) = \sum_{\substack{mn \leq Y \\ m \leq n}} \sum_{(mn)^2(1 - M^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2} \left(\frac{1- 4p}{m} \right)\] and \[R_{2}(\kappa, M; X) = \sum_{\substack{mn \leq Y \\ m \leq n}} \sum_{(mn)^2(1 - M^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2} \left(\frac{1 - 4p}{n} \right).\] We shall now show that $R_{1}(\kappa, M; X)$ gives the expected main term (for either $Q_1(X)$ or $Q_2(X)$), while $R_{2}(\kappa, M; X)$ contributes negligibly. By Proposition \ref{sieve prop}, it suffices to consider the corresponding sums $Q_{1}^\dagger(\kappa, M; X), Q_{2}^\dagger(\kappa, M; X)$, with identical restrictions on $p$. We shall focus on the second goal for now. To show that $Q_{2}^\dagger(\kappa, M; X)$ is small, we shall need Lemma \ref{double}. \\ We may now deal with $Q_{2}^\dagger(\kappa, M; X)$. Since the Jacobi symbol depends only on $p$ and $n$, we do not need to worry about the $m$-variable. We write \[n = n_1 n_2^2,\] with $n_1$ square-free (though not necessarily co-prime with $n_2$).
We then have \begin{equation} \label{Q12 range} Q_{2}^\dagger(\kappa, M; X) = \end{equation} \[\sum_{m \leq Y^{1/2}} \sum_{n_2 \leq Y^{1/2}/m^{1/2}} \sum_{m/n_2^2 \leq n_1 \leq X^{1/2}/(mn_2^2) } \text{ } \sum_{\substack{(mn_1 n_2^2)^2(1 - M^{-1})/\kappa^2 \leq p \leq (mn_1 n_2^2)^2/\kappa^2 \\ 4p -1 = k^2 s, s\text{ square-free} \\ \ell | k \Rightarrow \ell < \xi(X) \\ k < (\log X)^{2A} }} \left(\frac{1 - 4p}{n_1}\right)\] \[=\sum_{m \leq Y^{1/2}} \sum_{n_2 \leq Y^{1/2}/m^{1/2}} \sum_{m/n_2^2 \leq n_1 \leq X^{1/2}/(mn_2^2) } \text{ } \sideset{}{^{(1)}} \sum \left(\frac{1 - 4p}{n_1}\right) \] say. \\ The inner double sum in (\ref{Q12 range}) is equal to \[\sum_{m/n_2^2 \leq n_1 \leq X^{1/2}/(mn_2^2) } \sideset{}{^{(1)}} \sum \left(\frac{-s}{n_1}\right) \] which we can rewrite as \begin{equation} \label{massaged Q12} \sum_{k \leq (\log X)^{2A}}\sum_{m/n_2^2 \leq n_1 \leq X^{1/2}/(mn_2^2) } \text{ } \sum_{\substack{(mn_1 n_2^2)^2(1 - M^{-1})/(2k \kappa)^2 \leq s \leq (mn_1 n_2^2)^2/(2 \kappa k)^2 \\ s\text{ square-free} }} \alpha_s \left(\frac{-s}{n_1}\right), \end{equation} where \[\alpha_s = \begin{cases} 1 & \text{if } (k^2 s + 1)/4 \text{ is prime} \\ 0 & \text{otherwise}.\end{cases} \] The sum (\ref{massaged Q12}) is now amenable to Lemma \ref{double}, since both $n_1$ and $s$ are square-free. Restricting $n_1$ to a dyadic interval $[N_1, 2N_1)$, say, we obtain the bound \[ O_{\varepsilon} \left(N_1 \cdot \frac{(mn_2^2 N_1)^2}{k^2} \left(\frac{k}{mn_2^2 N_1} + \frac{1}{N_1^{1/2}} \right) X^\varepsilon \right) = O_\varepsilon \left( X^\varepsilon \left(\frac{mn_2^2 N_1^2}{k} + \frac{(mn_2^2)^2 N_1^{5/2}}{k^2} \right) \right). \] Summing over $k$ and $N_1 \ll X^{1/2}/mn_2^2$ and noting that $k = O_{A,\varepsilon} \left(X^\varepsilon \right)$, we get a total contribution of at most \[O_{A,\varepsilon} \left(\frac{X^{5/4 + \varepsilon}}{m^{1/2} n_2} \right).
\] Feeding this back into (\ref{Q12 range}) we see that \[Q_{2}^\dagger(\kappa, N-1; X) = O_{\varepsilon} \left(X^{11/8 + \varepsilon} \right).\] We move on to estimating $Q_{1}^\dagger(\kappa, N-1; X)$. Again we need to isolate the square-free part of $m$. As before, write $m = m_1 m_2^2$ with $m_1$ square-free. We note that \begin{align} \label{Q11dag} & Q_{1}^\dagger(\kappa, N-1; X) = \sum_{m \leq Y^{1/2}} \sum_{m \leq n \leq X^{1/2}/m} \text{ } \sum_{\substack{(mn)^2(1 - N^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2 \\ 4p - 1 = k^2 s, s\text{ square-free}\\ k < (\log X)^{2A}}} \left(\frac{1 - 4p}{m} \right) \\ & = \sum_{n \leq Y} \sum_{\substack{m_1 m_2^2 \leq \min\{n, Y/n\} }} \text{ } \sum_{\substack{(mn)^2(1 - N^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2 \\ 4p - 1 = k^2 s, s\text{ square-free}\\ k < (\log X)^{2A}}} \left(\frac{1 - 4p}{m_1}\right) \notag \\ & = \sum_{n \leq Y} \sum_{1 < m_2 \leq \min\{n^{1/2}, Y^{1/2}/n^{1/2} \}} \text{ } \sum_{m_1 \leq Y/nm_2^2} \text{ } \sum_{\substack{(mn)^2(1 - N^{-1})/\kappa^2 \leq p \leq (mn)^2/\kappa^2 \\ 4p - 1 = k^2 s, s\text{ square-free}\\ k < (\log X)^{2A}}} \left(\frac{1 - 4p}{m_1}\right) \notag \\ & = \sum_{n \leq Y} \sum_{1 < m_2 \leq \min\{n^{1/2}, Y^{1/2}/n^{1/2} \}} \sum_{k \leq (\log X)^{2A}} \sum_{m_1 \leq Y/nm_2^2} \text{ } \sum_{\substack{(nm_1 m_2^2)^2(1 - N^{-1})/(2\kappa k)^2 \leq s \leq (nm_1 m_2^2)^2/(2 \kappa k)^2 \\ s\text{ square-free} }} \alpha_s \left(\frac{-s}{m_1}\right) \notag \end{align} We must be more careful in our application of Lemma \ref{double}, since the sum over $n$ is very long. Indeed, we must use the more precise variant of the lemma rather than the coarser but more convenient version with the $\varepsilon$, since the sum over $m_1$ is much shorter than the sum over $p$. We shall consider the separate cases when $n < X^{1/2} (\log X)^{-A}$ and when $n > X^{1/2} (\log X)^{-A}$. \\ In the former case we restrict $m_1$ to dyadic intervals $[M_1, 2M_1)$ and then apply Lemma \ref{double}.
Summing over $M_1 \ll X^{1/2}/nm_2^2$ we obtain the estimate \[O \left(\frac{X^{5/4}}{k^2 n^{1/2} m_2} + \frac{k X^{1/4} }{n^{3/2} m_2^3 } \right) \] for the inner double sum in the last line of (\ref{Q11dag}). We put this back into (\ref{Q11dag}) to obtain the sum \[\sum_{n \leq X^{1/2} (\log X)^{-A}} \sum_{m_2 \leq \min\{n, X^{1/2}/n\}} \frac{X^{5/4}}{n^{1/2} m_2}.\] We then have \begin{align*} \sum_{n \leq X^{1/2} (\log X)^{-A}} \sum_{m_2 \leq \min\{n, X^{1/2}/n\}} \frac{1}{n^{1/2} m_2} & \ll \sum_{n \leq X^{1/2}(\log X)^{-A}} \frac{\log X}{n^{1/2}} \\ & \ll X^{1/4} (\log X)^{-A/2 + 1} \end{align*} which suffices for our purposes by choosing $A > 4$. \\ When $n > X^{1/2} (\log X)^{-A}$ the sum over $m$ is very short: indeed, we see that $m < (\log X)^A$. Thus we may use the Siegel-Walfisz theorem to handle the inner sum over $p$. Applying Lemma \ref{S-W mod} to (\ref{Q11dag}) gives \[Q_1^\dagger(\kappa, N-1; X) \] \[= \sum_{X^{1/2}(\log X)^{-A} < n \leq Y} \sum_{m \leq Y/n} \left( \frac{\mu(m)}{ \phi(m)} \int_{(mn)^2(1 - N^{-1})/\kappa^2}^{(mn)^2/\kappa^2} \frac{dt}{\log t} + O_A \left( X (\log X)^{-A} \right) \right). \] The sum over the error term is \[O \left(X^{3/2} (\log X)^{-A+1} \right), \] which is sufficiently small if we take $A > 2$. 
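The Siegel-Walfisz input here is equidistribution of primes among reduced residue classes to small moduli; a quick empirical check (modulus and range chosen arbitrarily for illustration, not part of the proof):

```python
from math import gcd

def primes_up_to(x):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, x + 1) if sieve[i]]

X, m = 10 ** 5, 12
# Count primes in each reduced residue class mod m.
counts = {a: 0 for a in range(1, m) if gcd(a, m) == 1}
for p in primes_up_to(X):
    if p % m in counts:
        counts[p % m] += 1
avg = sum(counts.values()) / len(counts)
```

Each of the four reduced classes mod $12$ receives close to $\pi(X)/\phi(12)$ primes, which is the phenomenon the lemma quantifies.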
For the main term we rearrange as follows: \begin{align*} & \sum_{X^{1/2} (\log X)^{-A} < n \leq Y} \sum_{m \leq X^{1/2}/n} \frac{\mu(m)}{\phi(m)} \int_{(mn)^2(1 - N^{-1})/\kappa^2}^{(mn)^2/\kappa^2 } \frac{dt}{\log t} \\ & = \int_{X (\log X)^{-2A}}^{X } \left(\sum_{X^{1/2}(\log X)^{-A} < n \leq t^{1/2}} \sum_{t^{1/2} \kappa/n \leq m \leq t^{1/2} \kappa N^{1/2} /n(N-1)^{1/2}} \frac{\mu(m)}{\phi(m)} \right) \frac{dt}{\log t} \\ & = \int_{X (\log X)^{-2A}}^{X } \left(\sum_{m \leq (\log X)^A} \frac{\mu(m)}{\phi(m)} \sum_{t^{1/2} \kappa/m < n \leq (Nt)^{1/2} \kappa/m(N-1)^{1/2}} 1 \right) \frac{dt}{\log t} \\ & = \int_{X (\log X)^{-2A}}^{X } \left(\sum_{m \leq (\log X)^A} \frac{\mu(m)}{\phi(m)} \left(\frac{t^{1/2} \kappa}{m} \cdot \frac{1}{N - 1 + \sqrt{N(N-1)}} + O(1) \right) \right) \frac{dt}{\log t} \\ & = \int_{X (\log X)^{-2A}}^{X } \left(\sum_{m \leq (\log X)^A} \frac{\mu(m)}{\phi(m)} \left(\frac{\kappa t^{1/2}}{2mN} + O \left(\frac{t^{1/2}}{mN^2} \right) \right) \right) \frac{dt}{\log t} \\ & = \frac{\kappa}{2N} \sum_{m \leq (\log X)^A} \frac{\mu(m)}{m \phi(m)} \int_{X(\log X)^{-A}}^{X } \frac{t^{1/2} dt}{\log t} + O_A \left(X^{3/2} (\log X)^{-A + 1} \right). \end{align*} We first note that \[\sum_{\substack{m \leq Z \\ m \text{ odd}}} \frac{\mu(m)}{m\phi(m)} = \sum_{m \text{ odd}} \frac{\mu(m)}{m \phi(m)} + O \left(Z^{-1} \right), \] and that the completed sum is equal to the Euler product \[\prod_{\ell \geq 3} \left(1 - \frac{1}{\ell(\ell-1)}\right) = 2C_{\text{Art}}.\] It follows that \[Q_1^\dagger(\kappa, N-1; X) = \frac{\kappa}{N} 2C_{\text{Art}} \int_{X(\log X)^{-A}}^{X} \frac{t^{1/2} \, dt}{2 \log t}+ O_A \left(X^{3/2} (\log X)^{-A+1} \right). \] We now renormalize and replace $N$ with $N/\kappa$, and then consider $\kappa = j/N$ with $N \leq j \leq \lfloor 2N/\sqrt{3} \rfloor$.
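Before continuing, the Euler-product identity $\sum_{m \text{ odd}} \mu(m)/(m\phi(m)) = \prod_{\ell \ge 3}(1 - 1/(\ell(\ell-1))) = 2C_{\text{Art}}$ invoked above can be sanity-checked numerically; the truncation points below are arbitrary, and the helper names are hypothetical:

```python
def primes_up_to(x):
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, x + 1) if sieve[i]]

# Partial Euler product over odd primes: should approach 2*C_Art ~ 0.747911...
prod = 1.0
for ell in primes_up_to(10 ** 5):
    if ell > 2:
        prod *= 1.0 - 1.0 / (ell * (ell - 1))

def mu_phi(m):
    """Return (mu(m), phi(m)); mu(m) = 0 when m is not square-free."""
    mu, phi, x = 1, 1, m
    d = 2
    while d * d <= x:
        if x % d == 0:
            x //= d
            if x % d == 0:
                return 0, 0
            mu, phi = -mu, phi * (d - 1)
        d += 1
    if x > 1:
        mu, phi = -mu, phi * (x - 1)
    return mu, phi

# Direct truncation of the completed sum over odd m.
s = 0.0
for m in range(1, 10 ** 4, 2):
    mu, phi = mu_phi(m)
    if mu:
        s += mu / (m * phi)
```

Both truncations agree to several decimal places, consistent with the stated value $2C_{\text{Art}}$ (Artin's constant $C_{\text{Art}} \approx 0.37395581$).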
We then consider \begin{equation} \label{Q1k sum} \sum_{N \leq j \leq 2N/\sqrt{3}} \left(1 - 2 \sqrt{1 - \left(\frac{N}{j} \right)^2} \right) \left(\frac{C_{\Art}}{N} \int_{X(\log X)^{-A}}^{X} \frac{t^{1/2} dt}{\log t} + O_A \left(X^{3/2} (\log X)^{-A + 1} \right) \right). \end{equation} The sum over the error term is small by first choosing $A$ large, and then supposing $N < (\log X)^{A^\prime}$ with $A^\prime < A - 3$, say. \\ The sum \[\sum_{N \leq j \leq 2N/\sqrt{3}} N^{-1} \left(1 - 2 \sqrt{1 - (N/j)^2} \right) \] is a Riemann sum for the integral \[\int_1^{2/\sqrt{3}} (1 - 2 \sqrt{1 - x^{-2}})dx = \frac{\pi}{3} - 1, \] and the integrand $1 - 2\sqrt{1 - x^{-2}}$ is decreasing on $[1, \infty)$. Hence the Riemann sum approximates the integral with error \[\left \lvert \sum_{N \leq j \leq 2N/\sqrt{3}} N^{-1} \left(1 - 2 \sqrt{1 - (N/j)^2} \right) -\int_1^{2/\sqrt{3}} (1 - 2 \sqrt{1 - x^{-2}})dx \right \rvert \ll N^{-1}. \] It follows that (\ref{Q1k sum}) is equal to \[ \left(\frac{\pi}{3} - 1\right) C_{\Art} \int_{2}^X \frac{t^{1/2}dt}{\log t} + O_A \left(\frac{X^{3/2}}{ (\log X)^{A}}\right). \] Evaluating the analogous sum for (\ref{Q1k sum}) but over the range $1 \leq j < N$ and summing gives \begin{equation} Q(X) = C_{\Art} \cdot \frac{\pi}{3} \int_2^X \frac{t^{1/2}dt}{\log t} + O_A \left(\frac{X^{3/2}}{(\log X)^A} \right). \end{equation} Applying integration by parts we see \begin{align*} \int_2^X \frac{t^{1/2}dt}{\log t} & = \left[\frac{2t^{3/2}}{3 \log t} \right]_2^X - \frac{2}{3} \int_2^X t^{3/2} \left(\frac{1}{\log t} \right)^\prime dt \\ & = \frac{2X^{3/2}}{3 \log X} + \frac{2}{3} \int_2^X \frac{t^{1/2}dt}{(\log t)^2} + O(1), \end{align*} hence \[Q(X) = C_{\Art} \cdot \frac{2\pi}{9} \cdot \frac{ X^{3/2}}{ \log X} + O \left(\frac{X^{3/2}}{(\log X)^2} \right) \] as required for the proof of Theorem \ref{MT}.
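The closed form $\int_1^{2/\sqrt{3}}(1 - 2\sqrt{1-x^{-2}})\,dx = \pi/3 - 1$ used above can be confirmed by elementary quadrature; the midpoint rule and step count below are arbitrary choices for illustration:

```python
from math import pi, sqrt

def midpoint_integral(f, a, b, n):
    """Midpoint-rule quadrature with n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Integrand 1 - 2*sqrt(1 - x^{-2}) on [1, 2/sqrt(3)]; midpoints avoid x = 1.
val = midpoint_integral(lambda x: 1 - 2 * sqrt(1 - x ** -2), 1.0, 2 / sqrt(3), 200000)
```

The midpoint rule is used because it never evaluates the integrand at the endpoint $x=1$, where the derivative of the square root blows up.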
\section{Alternate approach via the analytic class number formula} We now give our second proof, which is similar to the approaches of Nagoshi \cite{Nagoshi} for discriminant $1-4p$ and Friedlander-Iwaniec \cite{FI hyperbolic} for discriminant $-4p$, where we estimate the sum of class numbers via the analytic class number formula. \\ The class number formula gives \[ h(D) = \frac{1}{\pi} L(1, \chi_D) |D|^{1/2} \] for any discriminant $D < 0$: it holds even for non-fundamental $D$, when $h(D)$ is defined as the number of primitive equivalence classes of binary quadratic forms of discriminant $D$. The number of all binary quadratic forms of discriminant $D$, including the non-primitive ones, is \[ H(D) = \sum_{d} h(D/d^2) \] where the sum is over all $d$ such that $D/d^2$ is a discriminant. \\ Summing over all $D = 1-4p$ with $p \le X$, we can write our sum $Q(X) = \sum_{p \le X} H(1-4p)$ as \[ Q(X) = \sum_{d \ge 1} Q_d(X) \] where we define $Q_d(X) = \sum_{\substack{p \le X \\ d^2 \mid 1-4p}} h(\frac{1-4p}{d^2})$ for $d$ odd (and $0$ for $d$ even). Applying the class number formula, we also have \begin{equation} Q_d(X) = \sum_{\substack{p \le X \\ d^2 \mid 1-4p}}\frac{1}{\pi} L(1, \chi_{(1-4p)/d^2}) \left|\frac{4p-1}{d^2}\right|^{1/2} = \frac{1}{\pi d}\sum_{\substack{p \le X \\ d^2 \mid 1-4p}} \sum_{n \ge 1} \frac{(4p-1)^{1/2}}{n} \Legendre{(1-4p)/d^2}{n}. \end{equation} It will be convenient to first bound the related quantity \[ T_d(X) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} L(1, \chi_{(1-4p)/d^2}) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \sum_{n \ge 1} \Legendre{(1-4p)/d^2}{n} \frac{1}{n}. \] We define a multiplicative function $c(d)$, which will feature in the leading terms. \begin{definition} For $d$ odd we set \begin{equation}\label{define c} c(d) := \frac{1}{d^3} \prod_{\ell \mid d} \frac{\ell^3 -1 }{\ell^3 - \ell^2 - \ell - 1} \end{equation} and for $d$ even we have $c(d) = 0$.
\end{definition} \begin{lemma} \label{estimate $T_d$} If $d \ll (\log X)^{\alpha}$ for some fixed $\alpha$, then \[T_d(X) = \frac{\pi^2}{12} \cdot d c(d) \prod_{\ell \text{ odd}}\frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} X/\log X + o(X/\log X).\] \end{lemma} \begin{lemma} \label{estimate $Q_d$} \[ Q_d(X) = \frac{\pi}{9} c(d) \prod_{\ell \text{ odd}}\frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} X^{3/2}/ \log X + o(X^{3/2}/\log X)\] \end{lemma} \begin{theorem}\label{main result CNF} $Q(X) = \frac{2\pi}{9} C_{\text{Art}} X^{3/2}/ \log X + o(X^{3/2}/\log X)$. \end{theorem} The heart of the argument is in the proof of Lemma~\ref{estimate $T_d$}, which we defer to Section~\ref{$T_d$ details}. We now assume it and prove the other results. \begin{proof}[Proof of Lemma~\ref{estimate $Q_d$}] We apply partial summation to \begin{equation} \begin{split} Q_d(X) &= \sum_{\substack{p \le X \\ d^2 \mid 1-4p}}\frac{1}{\pi} L(1, \chi_{(1-4p)/d^2}) \left|\frac{4p-1}{d^2}\right|^{1/2} \\ &= \frac{1}{\pi d} \left((4X-1)^{1/2} T_d(X) + \sum_{t = 1}^{X-1} \left( (4t-1)^{1/2} - (4(t+1)-1)^{1/2} \right) T_d(t) \right)\\ & = \frac{1}{\pi d} \left(X^{1/2} (2 + O(X^{-1})) T_d(X) - \sum_{t = 1}^{X-1} (t^{-1/2} + O(t^{-3/2})) T_d(t) \right)\\ &= \frac{\pi}{12} \cdot c(d) \prod_{\ell \text{ odd}}\frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} \left( 2 (X^{3/2}/\log X) -\sum_{t = 2}^{X} t^{1/2} / \log t + o(X^{3/2} /\log X) \right)\\ &= (\pi/9) \cdot c(d) \prod_{\ell \text{ odd}}\frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} (X^{3/2} /\log X) + o(X^{3/2} /\log X). \end{split} \end{equation} where we apply Lemma~\ref{estimate $T_d$} at the next-to-last step and use $\sum_{t = 2}^{X} t^{1/2} / \log t = (\frac{2}{3} + o(1)) X^{3/2} / (\log X)$ in the last.
\end{proof} We now sum over $d$ to obtain the asymptotic for $Q$: \begin{proof}[Proof of Theorem~\ref{main result CNF}] Using $L(1, \chi_D) = O(\log |D|)$, we have the trivial bound \[ Q_d(X) \ll \sum_{\substack{m \le X\\ d^2 \mid 1-4m}} \log X \frac{\sqrt{X}}{d} \ll d^{-3} X^{3/2} \log X. \] Hence we can cut off the sum at $(\log X)^{\alpha}$ for any $\alpha > 1$. \[Q(X) = \sum_{d \in [1, X] \text{ odd}} Q_d(X) = \sum_{d \in [1, \log(X)^{\alpha}] \text{ odd}} Q_d(X) + o(X^{3/2}/\log(X))\] We now apply Lemma~\ref{estimate $Q_d$} to obtain \begin{equation}\label{apply estimate $Q_d$} \begin{split} Q(X) &= \sum_{d \in [1, \log(X)^{\alpha}] \text{ odd}} Q_d(X) + o(X^{3/2}/\log(X))\\ &= \frac{\pi}{9} \prod_{\ell \text{ odd}}\frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} \left( \sum_{d \in [1, \log(X)^{\alpha}]} c(d) \right) X^{3/2}/\log(X) + o(X^{3/2}/\log(X)) \end{split} \end{equation} (where we make the convention $c(d) = 0$ for $d$ even).\\ We now need to evaluate the sum $\sum_{d = 1}^{\infty} c(d)$, which we can expand as an Euler product: \begin{equation*} \begin{split} \sum_{d = 1}^{\infty} c(d) & = \sum_{d \ge 1 \text{ odd}} d^{-3} \prod_{\ell \mid d} \frac{\ell^3 -1 }{\ell^3 - \ell^2 -\ell -1} \\ & = \prod_{\ell \text{ odd}} \left( 1+ \ell^{-3}(1 - \ell^{-3})^{-1} \frac{\ell^3 -1 }{\ell^3 - \ell^2 -\ell -1} \right) \\ &= \prod_{\ell \text{ odd}} \frac{\ell^3 - \ell^2 - \ell}{\ell^3 - \ell^2 - \ell -1} \end{split} \end{equation*} Hence $\sum_{d \in [1, \log(X)^{\alpha}]} c(d) = \prod_{\ell \text{ odd}} \frac{\ell^3 - \ell^2 - \ell}{\ell^3 - \ell^2 - \ell -1} + o(1)$.
Plugging this into \eqref{apply estimate $Q_d$}, we obtain \begin{equation*} \begin{split} Q(X) &= \frac{\pi}{9} \prod_{\ell \text{ odd}} \left( \frac{\ell^3 - \ell^2 - \ell}{\ell^3 - \ell^2 - \ell -1} \cdot \frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} \right) X^{3/2} / \log X + o(X^{3/2} / \log (X))\\ & = \frac{\pi}{9} \prod_{\ell \text{ odd}} \left(1 - \frac{1}{\ell^2 - \ell} \right) X^{3/2} / \log X + o(X^{3/2} / \log X)\\ & = \frac{2\pi}{9} C_{\text{Art}} X^{3/2} / \log X + o(X^{3/2} / \log X), \end{split} \end{equation*} as desired. \end{proof} \subsection{Strategy for proof of Lemma~\ref{estimate $T_d$}}\label{$T_d$ details} We now need to estimate \[ T_d(X) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} L(1, \chi_{(1-4p)/d^2}) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \sum_{n \ge 1} \Legendre{(1-4p)/d^2}{n} \frac{1}{n} \] under the assumption that $d \ll (\log X)^{\alpha}$. First we want to cut off the tails of the inner sums, so that we have something finite. We'll choose $B$ such that for any $p \le X$, $d \ll (\log X)^{\alpha}$ \[ \sum_{n > B} \Legendre{(1-4p)/d^2}{n} \frac{1}{n} = o(1). \] By the partial summation argument given in \cite{Siegel}, $B = X^{1/2+\delta}$ works for any positive $\delta$. Then \[ T_d(X) = T_d(X, [1, B]) + o(X / \log X) \] where \[ T_d(X, [1, B]) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \sum_{1 \le n \le B} \Legendre{(1-4p)/d^2}{n} \frac{1}{n}. \] We now choose a second cutoff $b < B$, small enough that we can estimate $T_d(X, [1, b])$ by Siegel-Walfisz, and then estimate the whole sum by breaking up \[ T_d(X, [1, B]) = T_d(X, [1, b]) + T_d(X, (b, B]) \] where $T_d(X, [1, b])$ is the partial double sum with $n$ restricted to the range $[1, b]$ and likewise for $T_d(X, (b, B])$.\\ We will estimate $T_d(X, [1, b])$ by Siegel-Walfisz and $T_d(X, (b, B])$ by double oscillation.
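The identity $H(D) = \sum_d h(D/d^2)$ behind this section can be verified for small discriminants by brute-force enumeration of reduced binary quadratic forms (a standard enumeration, included only as a sanity check and not used in the proof):

```python
from math import gcd

def reduced_forms(D, primitive=True):
    """Reduced positive-definite forms (a, b, c) with b^2 - 4ac = D < 0:
    |b| <= a <= c, and b >= 0 whenever |b| == a or a == c."""
    forms = []
    a = 1
    while 3 * a * a <= -D:          # a <= sqrt(|D|/3) for reduced forms
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and (b >= 0 or (abs(b) != a and a != c)):
                    if not primitive or gcd(gcd(a, abs(b)), c) == 1:
                        forms.append((a, b, c))
        a += 1
    return forms

def h(D):   # class number: primitive classes only
    return len(reduced_forms(D))

def H(D):   # all classes, including imprimitive ones
    return len(reduced_forms(D, primitive=False))
```

For $p = 19$ one has $D = 1 - 4p = -75 = -3 \cdot 5^2$, and indeed $H(-75) = h(-75) + h(-3) = 3$.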
\subsection{Estimating $T_d(X, [1, b])$} We exchange order of summation, to get \begin{equation}\label{change order} T_d(X, [1, b]) = \sum_{1 \le n \le b} \frac{1}{n} \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \Legendre{(1-4p)/d^2}{n} . \end{equation} We now estimate the inner sum. Note that $\Legendre{(1-4p)/d^2}{n}$ depends only on $p$ modulo $d^2 n$, which for $n \le b$ is bounded by a power of $\log X$. We take $b$ to be $(\log X)^{\beta}$ for some fixed $\beta > 4$, so we get \begin{equation}\label{use siegel-walfisz} \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \Legendre{(1-4p)/d^2}{n} = \phi(d^2)^{-1} a_{n, d} \operatorname{Li}(X) + O(X (\log X)^{-A}) \end{equation} for any $A > 0$.\\ Here, for fixed odd $d$, $a_{n, d}$ is a multiplicative function of $n$. Hence it is determined by its values at prime powers, which can be computed. We can then express the Dirichlet series for $a_{n, d}$ as the following Euler product \begin{equation} \begin{split} f_d(s)= \sum_{n} a_{n, d} n^{-s} = (1+2^{-s})^{-1} \prod_{\ell \nmid d \text{ odd}} & \left( 1 - \left(\frac{1}{\ell-1}\right) ( \ell^{-s} + \ell^{-2s}) \right) \left(1 - \ell^{-2s} \right)^{-1} \\ \times &\prod_{\ell \mid d} (1 - \ell^{-2s})^{-1} (1- \ell^{-2s-1}) \\ = (1-2^{-s}) \zeta(2s) \prod_{\ell \nmid d \text{ odd}}& \left( 1 - \left(\frac{1}{\ell-1}\right) ( \ell^{-s} + \ell^{-2s}) \right) \prod_{\ell \mid d} (1- \ell^{-2s-1}).
\end{split} \end{equation} Now plugging (\ref{use siegel-walfisz}) into (\ref{change order}), we obtain \begin{equation} T_d(X,[1, b]) = \phi(d^2)^{-1} \left(\sum_{1 \le n \le b} \frac{a_{n, d}}{n}\right) \operatorname{Li}(X) + O(X (\log X)^{-A} \log b) \end{equation} Now, this sum $\sum_{n \le b} \frac{a_{n, d}}{n}$ converges absolutely as $b \to \infty$, with sum \begin{equation} \begin{split} \sum_{n} a_{n, d} n^{-1} = f_d(1) &= \frac{1}{2} \zeta(2) \prod_{\ell \nmid d \text{ odd}} \left( 1 - \left(\frac{1}{\ell-1}\right) ( \ell^{-1} + \ell^{-2}) \right) \prod_{\ell \mid d} (1- \ell^{-3})\\ &= \frac{1}{2} \zeta(2) \prod_{\ell \mid d} \left(\frac{\ell-1}{\ell}\right) \left(\frac{\ell^3-1}{\ell^3 - \ell^2 - \ell - 1} \right) \prod_{\ell \text{ odd}} \left( \frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} \right) \\ &= \frac{1}{2} \zeta(2) d^3 c(d) \prod_{\ell \mid d} \left(\frac{\ell-1}{\ell}\right) \prod_{\ell \text{ odd}} \left( \frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} \right). \end{split} \end{equation} using the definition of $c(d)$ in \eqref{define c}.\\ Since $b \to \infty$ as $X \to \infty$, we get asymptotically \begin{equation} \begin{split} T_d(X, [1, b]) &= \phi(d^2)^{-1} \cdot \frac{1}{2} \zeta(2) d^3 c(d) \prod_{\ell \mid d} \left(\frac{\ell-1}{\ell}\right) \prod_{\ell \text{ odd}} \frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} \operatorname{Li}(X) + o(X/\log X) \\ &= \frac{\pi^2}{12} d c(d) \prod_{\ell \text{ odd}} \frac{\ell^3 - \ell^2 - \ell - 1}{\ell^3 - \ell^2} \operatorname{Li}(X) + o(X/\log X), \end{split} \end{equation} since $\phi(d^2)^{-1} d^3 \prod_{\ell \mid d} \frac{\ell-1}{\ell} = d$; this is the main term claimed in Lemma~\ref{estimate $T_d$}. \subsection{Estimating $T_{[b, B]}(X)$} We will now apply the double oscillation Lemma (Lemma~\ref{unrestricted double oscillation}) to bound \[ T_{[b, B]}(X) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \sum_{b < n \le B} \Legendre{(1-4p)/d^2}{n} \frac{1}{n}. \] As suggested by Friedlander-Iwaniec in \cite{FI hyperbolic}, we divide the interval $[b, B]$ into dyadic intervals $[a, 2a]$ and apply Lemma~\ref{unrestricted double oscillation} on each.
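The local-factor bookkeeping in the evaluation of $f_d(1)$ above rests on exact rational identities at each odd prime $\ell$; these can be confirmed with exact arithmetic (small primes chosen arbitrarily):

```python
from fractions import Fraction as Fr

for ell in (3, 5, 7, 11, 13):
    # Generic factor at an odd prime ell not dividing d, at s = 1:
    A = Fr(ell ** 3 - ell ** 2 - ell - 1, ell ** 3 - ell ** 2)
    assert A == 1 - Fr(1, ell - 1) * (Fr(1, ell) + Fr(1, ell ** 2))
    # Correction factor at primes ell | d, as isolated in the second line:
    B = Fr(ell - 1, ell) * Fr(ell ** 3 - 1, ell ** 3 - ell ** 2 - ell - 1)
    # Together they reproduce the original factor (1 - ell^{-3}):
    assert A * B == 1 - Fr(1, ell ** 3)
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in checking these rational identities.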
\begin{lemma}\label{double cancellation interval} For $a \ll X^{1/2}$, \[ T_{[a, 2a]}(X) = \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \sum_{a \le n \le 2a} \Legendre{(1-4p)/d^2}{n} \frac{1}{n} \ll X (\log X) \cdot a^{-1/2}. \] \end{lemma} \begin{proof} We rewrite \[ T_{[a, 2a]}(X) = \frac{1}{a} \sum_{\substack{p \le X\\ d^2 \mid 1-4p}} \sum_{a \le n \le 2a} \frac{a}{n} \Legendre{(1-4p)/d^2}{n} , \] so now all the coefficients in the inner sum are of size $\le 1$, and Lemma~\ref{unrestricted double oscillation} applies to give \[ T_{[a, 2a]}(X) \ll \frac{1}{a} \cdot a X (\log a + \log X) (a^{-1/2} + (X/a)^{-1/2}) \ll X \log X \cdot a^{-1/2}, \] since $a \ll X^{1/2}$. \end{proof} We now apply this lemma to bound $T_{[b, B]}(X)$ as follows: \[ |T_{[b, B]}(X)| \le \sum_{i = 0}^{\lfloor \log_2(B/b) \rfloor} |T_{[2^i b, 2^{i+1}b]}(X)| \ll X \log X \cdot b^{-1/2} \sum_{i \ge 0} 2^{-i/2} \ll X \log X \cdot b^{-1/2}. \] Since we have chosen $b = (\log X)^{\beta}$ with $\beta > 4$, this is $o(X/\log X)$, an acceptable error term. This completes the proof of Lemma~\ref{estimate $T_d$}.
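The dyadic covering of $[b, B]$ and the geometric decay of the left endpoints to the power $-1/2$ can be sketched concretely (the parameters below are illustrative only):

```python
from math import sqrt

def dyadic_cover(b, B):
    """Intervals [2^i b, 2^{i+1} b] (last one clipped at B) covering [b, B]."""
    intervals = []
    lo = b
    while lo < B:
        intervals.append((lo, min(2 * lo, B)))
        lo *= 2
    return intervals

b, B = 100, 10 ** 6
cover = dyadic_cover(b, B)
# Left endpoints give a geometric series:
# sum_i (2^i b)^{-1/2} <= (2 + sqrt(2)) * b^{-1/2}, since 1/(1 - 2^{-1/2}) = 2 + sqrt(2).
total = sum(1 / sqrt(lo) for lo, hi in cover)
```

This mirrors the estimate above: each piece contributes $\ll X\log X \cdot (2^i b)^{-1/2}$, and the geometric series collapses to $\ll X \log X \cdot b^{-1/2}$.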
\section{Introduction} Classically sampling output probability distributions of sub-universal quantum computing models is known to be impossible under certain classical complexity conjectures. The depth-four model~\cite{TD}, the Boson Sampling model~\cite{BS}, the IQP model~\cite{BJS,BMS}, the one-clean qubit model~\cite{KL,MFF,M,Kobayashi,KobayashiICALP}, the HC1Q model~\cite{HC1Q}, and the random circuit model~\cite{random,random2,random3} are known examples. These results prohibit only polynomial-time classical sampling, but recently, impossibilities of some exponential-time classical simulations have been shown based on classical fine-grained complexity conjectures~\cite{Dalzell,DalzellPhD,Huang,Huang2,MorimaeTamaki,Hayakawa}. These ``fine-grained" quantum supremacy results are, however, only for exact computations (i.e., strong simulations) or multiplicative-error sampling of output probability distributions. Here, we say that a quantum probability distribution $\{p_z\}_z$ is classically sampled in time $T$ within a multiplicative error $\epsilon$ if there exists a classical $T$-time probabilistic algorithm that outputs $z$ with probability $q_z$ such that $|p_z-q_z|\le \epsilon p_z$ for all $z$. It was open whether fine-grained quantum supremacy can be shown for additive-error sampling. Here, we say that a quantum probability distribution $\{p_z\}_z$ is classically sampled in time $T$ within an additive error $\epsilon$ if there exists a classical $T$-time probabilistic algorithm that outputs $z$ with probability $q_z$ such that $\sum_z|p_z-q_z|\le \epsilon$. Additive-error sampling is more realistic for medium-size noisy quantum computers, and therefore theoretically showing quantum supremacy for additive-error sampling is important for the near-term experimental demonstrations of quantum supremacy. In this paper, we show additive-error fine-grained quantum supremacy based on certain classical fine-grained complexity conjectures. 
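The two error notions just defined can be contrasted concretely; the toy distributions below are hypothetical and serve only to illustrate that additive closeness does not imply multiplicative closeness:

```python
def mult_close(p, q, eps):
    """q approximates p within multiplicative error eps: |p_z - q_z| <= eps * p_z for all z."""
    return all(abs(p[z] - q[z]) <= eps * p[z] for z in p)

def add_close(p, q, eps):
    """q approximates p within additive error eps: sum_z |p_z - q_z| <= eps."""
    return sum(abs(p[z] - q[z]) for z in p) <= eps

# A sampler that simply drops an exponentially small outcome:
p = {"00": 0.5, "01": 0.25, "10": 0.25 - 1e-6, "11": 1e-6}
q = {"00": 0.5, "01": 0.25, "10": 0.25, "11": 0.0}
assert add_close(p, q, 0.01)      # additively very close
assert not mult_close(p, q, 0.5)  # but the relative error at z = "11" is 100%
```

This is why additive-error sampling is the weaker, and hence experimentally more realistic, requirement.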
As examples, we consider the IQP model (Sec.~\ref{sec:IQP}), a mixture of the IQP model and log-depth Boolean circuits (Sec.~\ref{sec:log}), and Clifford+$T$ circuits (Sec.~\ref{sec:T}). Similar results should hold for other sub-universal models. The second result (IQP plus log-depth Boolean circuit) needs a more complicated quantum circuit than the first one, but the conjecture seems to be more reliable. The first and second results are about the scaling for the number of qubits, while the third result is about the scaling for the number of $T$ gates. The standard proof technique of additive-error quantum supremacy~\cite{BS,BMS}, namely, the combination of Markov's inequality, Stockmeyer's theorem, and the anti-concentration lemma, cannot be directly used for fine-grained quantum supremacy, because Stockmeyer's theorem is a result for polynomial-time probabilistic computing. (Markov's inequality and the anti-concentration lemma can be used because they are independent of the time complexity.) In order to show fine-grained additive-error quantum supremacy, we derive a ``fine-grained version" of Stockmeyer's theorem. Our results, for the first time, demonstrate that the standard proof technique of additive-error quantum supremacy can be extended to exponential-time hardness. Because there is a gap between the upper bound and the lower bound, the results have the potential to be improved or sharpened. The upper bound can be improved by faster simulations, and the lower bound can be improved by improving the reduction or by introducing other conjectures. Other possible approaches are extensions of the MA algorithm used to refute SETH~\cite{WilliamsCCC}, and the fine-grained reductions from approximate counting to decision~\cite{Dell}. Note: After uploading this paper on arXiv, the authors of Ref.~\cite{Dalzell} told us that they also independently show additive-error fine-grained quantum supremacy results. (Their additive-error results are added in their latest version.)
They consider an exponential-time version of the ${\rm SBP}\neq{\rm SBQP}$ conjecture. (If ${\rm SBQP}={\rm SBP}$, the polynomial-time hierarchy collapses to the second level.) At this moment, we do not know how their conjecture and ours are related. \section{IQP} \label{sec:IQP} In this section, we show additive-error fine-grained quantum supremacy of the IQP model. The IQP model is defined as follows. \begin{definition} An $N$-qubit IQP model is the following quantum computing model: \begin{itemize} \item[1.] The initial state is $|0^N\rangle$. (Here, $|0^N\rangle=|0\rangle^{\otimes N}$.) \item[2.] $H^{\otimes N}$ is applied, where $H$ is the Hadamard gate. \item[3.] $Z$-diagonal gates (such as $e^{i\theta Z}$, $Z$, $CZ$, and $CCZ$) are applied. (In this paper, we consider only $Z$, $CZ$, and $CCZ$.) \item[4.] $H^{\otimes N}$ is applied. \item[5.] All qubits are measured in the computational basis. \end{itemize} \end{definition} Let us consider an $n$-variable degree-3 polynomial, $f:\{0,1\}^n\to\{0,1\}$, over ${\mathbb F}_2$ defined by \begin{eqnarray*} f(x_1,...,x_n)\equiv \sum_{i=1}^n \alpha_i x_i +\sum_{i>j}\beta_{i,j} x_ix_j +\sum_{i>j>k}\gamma_{i,j,k} x_ix_jx_k \end{eqnarray*} for any $x\equiv(x_1,x_2,...,x_n)\in\{0,1\}^n$, where $\alpha_i,\beta_{i,j},\gamma_{i,j,k}\in\{0,1\}$. If we say that we randomly choose $f$, it means that we randomly choose each $\alpha_i,\beta_{i,j},\gamma_{i,j,k}$ uniformly and independently. The conjecture on which additive-error fine-grained quantum supremacy of the IQP model is based is stated as follows. \begin{conjecture} \label{conjecture:IQP} Let $f$ be an $n$-variable degree-3 polynomial over ${\mathbb F}_2$. Let us define \begin{eqnarray*} gap(f)\equiv\sum_{x\in\{0,1\}^n}(-1)^{f(x)}. \end{eqnarray*} There exist positive constants $a$ and $n_0$ such that for every $n>n_0$ the following holds. 
Computing $[gap(f)]^2$ within a multiplicative error $u$ for at least $v$ fraction of $f$ cannot be done with a classical probabilistic $O^*(2^{a n})$-time algorithm that makes queries of length $O(2^{an})$ to an ${\rm NTIME}[n^2]$ oracle with a success probability at least $w$. Here, $u,v,w$ are certain constants. ($O^*$ means that the polynomial factor is ignored.) \end{conjecture} Here, the oracle query is the standard one: there is a separate oracle tape and answers can be returned from the oracle instantaneously. Note that the parameters $u,v,w$ can be adjusted to some extent. (See the proof.) We do not know whether this conjecture is true or false, but at least at this moment we do not know how to refute it. (For more discussion, see Sec.~\ref{sec:conjectures}.) Based on Conjecture~\ref{conjecture:IQP}, we show the following result. \begin{theorem} \label{theorem:IQP} If Conjecture~\ref{conjecture:IQP} is true, then there exists an $N$-qubit IQP circuit whose output probability distribution cannot be classically sampled in $O(2^{aN})$-time within a certain constant additive error $\epsilon$. \end{theorem} For simplicity, we consider degree-3 polynomials, but it is clear from the following proof that a similar result holds for degree-$k$ polynomials for any constant $k\ge3$. (The anti-concentration lemma holds for any degree-$k$ polynomial with $k\ge2$, but the degree-2 case is classically simulatable because it is a Clifford circuit, so $k\ge3$ is necessary.) If we consider Conjecture~\ref{conjecture:IQP} for degree-$k$ polynomials, it becomes more stable for larger $k$~\cite{Williams}. {\it Proof of Theorem~\ref{theorem:IQP}}. Given an $n$-variable degree-3 polynomial $f$, we can construct an $n$-qubit IQP circuit such that the probability $p_z(f)$ of outputting $z\in\{0,1\}^n$ satisfies \begin{eqnarray*} p_z(f)=\frac{(gap(f_z))^2}{2^{2n}}, \end{eqnarray*} where \begin{eqnarray*} f_z(x_1,...,x_n)\equiv f(x_1,...,x_n)+\sum_{i=1}^n z_ix_i.
\end{eqnarray*} Assume that there exists a $T$-time classical probabilistic algorithm that outputs $z\in\{0,1\}^n$ with probability $q_z(f)$ such that \begin{eqnarray*} \sum_{z\in\{0,1\}^n}|p_z(f)-q_z(f)|\le\epsilon \end{eqnarray*} for a certain $\epsilon$ and any $f$. From Markov's inequality, \begin{eqnarray*} {\rm Pr}_z \Big[|p_z(f)-q_z(f)|\ge\frac{\epsilon}{2^n\delta}\Big] \le \delta \end{eqnarray*} for any $f$ and $\delta>0$. According to the fine-grained Stockmeyer's theorem (see Appendix), a classical $O^*(T)$-time probabilistic algorithm that makes queries of $O(T)$ length to the ${\rm NTIME}[n^2]$ oracle can compute $\tilde{q}_z(f)$ such that \begin{eqnarray*} |q_z(f)-\tilde{q}_z(f)|\le \xi q_z(f), \end{eqnarray*} where \begin{eqnarray*} \xi\equiv \frac{2^{\frac{1}{\alpha}}-2^{-\frac{1}{\alpha}}}{2}, \end{eqnarray*} for any $f$, integer $\alpha\ge1$, and $z\in\{0,1\}^n$, with a success probability at least $w$. Due to the anti-concentration lemma~\cite{BMS}, \begin{eqnarray*} {\rm Pr}_{z,f}\Big[p_z(f)\ge\frac{\tau}{2^n}\Big] \ge \frac{(1-\tau)^2}{3} \end{eqnarray*} for any $0<\tau<1$. Therefore we have \begin{eqnarray*} |p_z(f)-\tilde{q}_z(f)|&\le&|p_z(f)-q_z(f)| +|q_z(f)-\tilde{q}_z(f)|\\ &\le&|p_z(f)-q_z(f)|+\xi q_z(f) ~~\mbox{(with a success probability at least $w$ for each $f$ and $z$)}\\ &\le&|p_z(f)-q_z(f)|+\xi(p_z(f)+|p_z(f)-q_z(f)|)\\ &=&\xi p_z(f)+|p_z(f)-q_z(f)|(1+\xi)\\ &\le&\xi p_z(f) +\frac{\epsilon}{2^n\delta}(1+\xi) ~~\mbox{(for at least $1-\delta$ fraction of $z$)}\\ &\le&\xi p_z(f)+\sigma p_z(f)(1+\xi) ~~\mbox{(for at least $\frac{(1-\frac{\epsilon}{\sigma\delta})^2}{3}$ fraction of $(z,f)$)}\\ &=&p_z(f)\Big(\sigma+(1+\sigma)\xi\Big)\\ &=&p_z(f)u~~ (\mbox{We take $u\equiv \sigma+(1+\sigma)\xi$}). \end{eqnarray*} If we take $\epsilon$ and $\delta$ such that $-\delta+\frac{1}{3}(1-\frac{\epsilon}{\sigma\delta})^2=v$, the above inequality is correct for at least $v$ fraction of $(z,f)$. 
Hence, we obtain \begin{eqnarray*} |(gap(f_z))^2-2^{2n}\tilde{q}_z(f)|\le u(gap(f_z))^2 \end{eqnarray*} for at least $v$ fraction of $(z,f)$. This means that $(gap(f))^2$ is computable within the multiplicative error $u$ for at least $v$ fraction of $f$, which contradicts Conjecture~\ref{conjecture:IQP}. $\blacksquare$ Note that $w$ can be arbitrarily close to 1, but $u$ is lower-bounded as $u\ge\frac{\epsilon}{1+\sqrt{3}}$, and $v$ is upper-bounded as $v \le1-\frac{\epsilon}{u(1+\sqrt{3})}$. \section{IQP plus log-depth Boolean circuit} \label{sec:log} In this section, we show additive-error fine-grained quantum supremacy for the IQP plus log-depth Boolean circuit model. Let us consider the following conjecture. \begin{conjecture} \label{conjecture:log} Let $f:\{0,1\}^n\to\{0,1\}$ be an $n$-variable degree-2 polynomial over ${\mathbb F}_2$, and $g:\{0,1\}^n\to\{0,1\}$ be an $n$-variable log-depth Boolean circuit. Define \begin{eqnarray*} gap(f+g)\equiv\sum_{x\in\{0,1\}^n}(-1)^{f(x)+g(x)}. \end{eqnarray*} There exist $g$, and positive constants $a$ and $n_0$ such that for every $n>n_0$ the following holds. Computing $[gap(f+g)]^2$ within a multiplicative error $u$ for at least $v$ fraction of $f$ cannot be done with a classical probabilistic $O^*(2^{an})$-time algorithm that makes queries of length $O(2^{an})$ to an ${\rm NTIME}[n^2]$ oracle with a success probability at least $w$. \end{conjecture} Conjecture~\ref{conjecture:log} is ``more stable" than Conjecture~\ref{conjecture:IQP}, because log-depth Boolean circuits are more general than constant-degree polynomials. For constant-degree polynomials, there is a non-trivial exponential-time algorithm to count the number of solutions~\cite{Tamaki}, but we do not know how to apply it to log-depth Boolean circuits. Furthermore, note that in Conjecture~\ref{conjecture:log}, the average case is considered only for $f$, and $g$ can be taken as the worst case one. Based on Conjecture~\ref{conjecture:log}, we show the following result.
\begin{theorem} \label{theorem:log} If Conjecture~\ref{conjecture:log} is true, then there exists an $N$-qubit $poly(N)$-size quantum circuit (consisting of an IQP circuit and a log-depth Boolean circuit) whose output probability distribution cannot be classically sampled in $O(2^{aN})$-time within a certain constant additive error $\epsilon$. \end{theorem} {\it Proof of Theorem~\ref{theorem:log}.} Given a log-depth Boolean circuit $g:\{0,1\}^n\to\{0,1\}$, we can construct an $(n+1)$-qubit $poly(n)$-size quantum circuit $U$ such that \begin{eqnarray*} U(|x\rangle\otimes|0\rangle) =e^{ih(x)}|x\rangle\otimes|g(x)\rangle \end{eqnarray*} for any $x\in\{0,1\}^n$, where $h$ is a certain function whose detail is irrelevant here~\cite{Cosentino}. Let us consider the following circuit. \begin{itemize} \item[1.] The initial state is $|0^n\rangle\otimes|0\rangle$. \item[2.] Apply $H^{\otimes n}\otimes I$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle\otimes|0\rangle. \end{eqnarray*} \item[3.] Apply $U$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}e^{ih(x)} |x\rangle\otimes|g(x)\rangle. \end{eqnarray*} \item[4.] Apply $Z$ on the last qubit to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}e^{ih(x)}(-1)^{g(x)} |x\rangle\otimes|g(x)\rangle. \end{eqnarray*} \item[5.] Apply $U^\dagger$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n} (-1)^{g(x)} |x\rangle\otimes|0\rangle. \end{eqnarray*} \item[6.] Apply $Z$ and $CZ$ that correspond to $f$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n} (-1)^{g(x)+f(x)} |x\rangle\otimes|0\rangle. \end{eqnarray*} \item[7.] Apply $H^{\otimes n}\otimes I$ and measure the first $n$ qubits in the computational basis. \end{itemize} The probability of obtaining $z\in\{0,1\}^n$ is \begin{eqnarray*} p_z(f+g) &=&\Big| \frac{1}{2^n}\sum_{x\in\{0,1\}^n} (-1)^{g(x)+f(x)+\sum_{j=1}^nx_jz_j} \Big|^2\\ &=& \frac{(gap(g+f_z))^2}{2^{2n}}. 
\end{eqnarray*} Assume that there exists a $T$-time classical probabilistic algorithm that outputs $z\in\{0,1\}^n$ with probability $q_z(f+g)$ such that \begin{eqnarray*} \sum_{z\in\{0,1\}^n}|p_z(f+g)-q_z(f+g)|\le\epsilon. \end{eqnarray*} From Markov's inequality, \begin{eqnarray*} {\rm Pr}_z \Big[|p_z(f+g)-q_z(f+g)|\ge\frac{\epsilon}{2^n\delta}\Big] \le \delta \end{eqnarray*} for any $f$, $g$, and $\delta>0$. According to the fine-grained Stockmeyer's theorem, a classical $O^*(T)$-time probabilistic algorithm that makes queries of length $O(T)$ to the ${\rm NTIME}[n^2]$ oracle can compute $\tilde{q}_z(f+g)$ such that \begin{eqnarray*} |q_z(f+g)-\tilde{q}_z(f+g)|\le \xi q_z(f+g), \end{eqnarray*} where \begin{eqnarray*} \xi\equiv\frac{2^{\frac{1}{\alpha}}-2^{-\frac{1}{\alpha}}}{2}, \end{eqnarray*} for any $f$, $g$, integer $\alpha\ge1$, and $z\in\{0,1\}^n$, with a success probability at least $w$. Due to the anti-concentration lemma~\cite{BMS} \begin{eqnarray*} {\rm Pr}_{z,f}\Big[p_z(f+g)\ge\frac{\tau}{2^n}\Big] \ge \frac{(1-\tau)^2}{3} \end{eqnarray*} for any $0<\tau<1$. Then we have \begin{eqnarray*} |p_z(f+g)-\tilde{q}_z(f+g)|&\le&|p_z(f+g)-q_z(f+g)| +|q_z(f+g)-\tilde{q}_z(f+g)|\\ &\le&|p_z(f+g)-q_z(f+g)|+\xi q_z(f+g)\\ &\le&|p_z(f+g)-q_z(f+g)|+\xi(p_z(f+g)+|p_z(f+g)-q_z(f+g)|)\\ &=&\xi p_z(f+g)+|p_z(f+g)-q_z(f+g)|(1+\xi)\\ &\le&\xi p_z(f+g) +\frac{\epsilon}{2^n\delta}(1+\xi) ~~\mbox{(for at least $1-\delta$ fraction of $z$)}\\ &\le&\xi p_z(f+g)+\sigma p_z(f+g)(1+\xi) ~~\mbox{(for at least $\frac{(1-\frac{\epsilon}{\sigma\delta})^2}{3}$ fraction of $(z,f)$)}\\ &=&p_z(f+g)\Big(\sigma+(1+\sigma)\xi\Big)\\ &=&p_z(f+g)u ~~\mbox{(We take $u\equiv\sigma+(1+\sigma)\xi$)}. \end{eqnarray*} If we take $\epsilon$ and $\delta$ such that $-\delta+\frac{1}{3}(1-\frac{\epsilon}{\sigma\delta})^2=v$, the above inequality holds for at least $v$ fraction of $(z,f)$, which contradicts Conjecture~\ref{conjecture:log}.
$\blacksquare$ \section{Clifford plus $T$} \label{sec:T} In this section, we finally show additive-error fine-grained quantum supremacy for Clifford+$T$ circuits. Let us consider the following conjecture. \begin{conjecture} \label{conjecture:T} Let $g:\{0,1\}^n\to\{0,1\}$ be a 3-CNF with $m$ clauses, and $f:\{0,1\}^n\to\{0,1\}$ be an $n$-variable degree-2 polynomial over ${\mathbb F}_2$. Define \begin{eqnarray*} gap(f+g)\equiv\sum_{x\in\{0,1\}^n}(-1)^{f(x)+g(x)}. \end{eqnarray*} There exist $g$ and positive constants $a$ and $n_0$ such that for every $n>n_0$ the following holds. Computing $[gap(f+g)]^2$ within a multiplicative error $u$ for at least $v$ fraction of $f$ cannot be done with a classical probabilistic $O^*(2^{am})$-time algorithm that makes queries of length $O(2^{am})$ to an ${\rm NTIME}[n^2]$ oracle with a success probability at least $w$. \end{conjecture} Based on Conjecture~\ref{conjecture:T}, we show the following result. \begin{theorem} \label{theorem:T} If Conjecture~\ref{conjecture:T} is true, then there exists a quantum circuit over Clifford gates and $t$ $T$ gates whose output probability distribution cannot be classically sampled in $O(2^{\frac{a(t+14)}{42}})$-time within a certain constant additive error $\epsilon$. \end{theorem} {\it Proof of Theorem~\ref{theorem:T}.} Given a 3-CNF $g:\{0,1\}^n\to\{0,1\}$, we can construct a quantum circuit $U$ such that \begin{eqnarray*} U(|x\rangle\otimes|0^\xi\rangle) =|g(x)\rangle\otimes|junk(x)\rangle \end{eqnarray*} for any $x\in\{0,1\}^n$, where $\xi\equiv 3m-1$, and $junk(x)\in\{0,1\}^{n+\xi-1}$ is a certain bit string whose detail is irrelevant here. Note that $U$ consists of Clifford gates and $7(3m-1)$ $T$ gates. (The 3-CNF $g$ contains $2m$ OR gates and $m-1$ AND gates. Each AND and OR gate can be simulated with a single Toffoli gate by using a single ancilla qubit. A single Toffoli gate can be simulated with Clifford gates and 7 $T$ gates.) Let us consider the following circuit.
\begin{itemize} \item[1.] The initial state is $|0^n\rangle\otimes|0^\xi\rangle$. \item[2.] Apply $H^{\otimes n}\otimes I^{\otimes \xi}$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n}|x\rangle\otimes|0^\xi\rangle. \end{eqnarray*} \item[3.] Apply $U$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n} |g(x)\rangle\otimes|junk(x)\rangle. \end{eqnarray*} \item[4.] Apply $Z\otimes I^{\otimes n+\xi-1}$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n} (-1)^{g(x)}|g(x)\rangle\otimes|junk(x)\rangle. \end{eqnarray*} \item[5.] Apply $U^\dagger$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n} (-1)^{g(x)}|x\rangle\otimes|0^\xi\rangle. \end{eqnarray*} \item[6.] Apply $Z$ and $CZ$ that correspond to $f$ to obtain \begin{eqnarray*} \frac{1}{\sqrt{2^n}}\sum_{x\in\{0,1\}^n} (-1)^{g(x)+f(x)}|x\rangle\otimes|0^\xi\rangle. \end{eqnarray*} \item[7.] Apply $H^{\otimes n}\otimes I^{\otimes \xi}$ and measure all qubits in the first register in the computational basis. \end{itemize} This quantum circuit uses $t\equiv14(3m-1)$ $T$ gates. The probability of obtaining $z\in\{0,1\}^n$ is \begin{eqnarray*} p_z(f+g) &=&\Big| \frac{1}{2^n}\sum_{x\in\{0,1\}^n} (-1)^{f(x)+\sum_{j=1}^nx_jz_j+g(x)} \Big|^2. \end{eqnarray*} Assume that there exists a $T$-time classical probabilistic algorithm that outputs $z\in\{0,1\}^n$ with probability $q_z(f+g)$ such that \begin{eqnarray*} \sum_{z\in\{0,1\}^n}|p_z(f+g)-q_z(f+g)|\le\epsilon. \end{eqnarray*} From Markov's inequality, \begin{eqnarray*} {\rm Pr}_z \Big[|p_z(f+g)-q_z(f+g)|\ge\frac{\epsilon}{2^n\delta}\Big] \le \delta \end{eqnarray*} for any $f$, $g$, and $\delta>0$.
According to the fine-grained Stockmeyer's theorem, a classical $O^*(T)$-time probabilistic algorithm that makes queries of length $O(T)$ to the ${\rm NTIME}[n^2]$ oracle can compute $\tilde{q}_z(f+g)$ such that \begin{eqnarray*} |q_z(f+g)-\tilde{q}_z(f+g)|\le \xi q_z(f+g), \end{eqnarray*} where \begin{eqnarray*} \xi\equiv\frac{2^{\frac{1}{\alpha}}-2^{-\frac{1}{\alpha}}}{2}, \end{eqnarray*} for any $f$, $g$, integer $\alpha\ge1$, and $z\in\{0,1\}^n$, with a success probability at least $w$. Due to the anti-concentration lemma~\cite{BMS} \begin{eqnarray*} {\rm Pr}_{z,f}\Big[p_z(f+g)\ge\frac{\tau}{2^n}\Big] \ge \frac{(1-\tau)^2}{3} \end{eqnarray*} for any $0<\tau<1$. Then we have \begin{eqnarray*} |p_z(f+g)-\tilde{q}_z(f+g)|&\le&|p_z(f+g)-q_z(f+g)| +|q_z(f+g)-\tilde{q}_z(f+g)|\\ &\le&|p_z(f+g)-q_z(f+g)|+\xi q_z(f+g)\\ &\le&|p_z(f+g)-q_z(f+g)|+\xi(p_z(f+g)+|p_z(f+g)-q_z(f+g)|)\\ &=&\xi p_z(f+g)+|p_z(f+g)-q_z(f+g)|(1+\xi)\\ &\le&\xi p_z(f+g) +\frac{\epsilon}{2^n\delta}(1+\xi) ~~\mbox{(for at least $1-\delta$ fraction of $z$)}\\ &\le&\xi p_z(f+g)+\sigma p_z(f+g)(1+\xi) ~~\mbox{(for at least $\frac{(1-\frac{\epsilon}{\sigma\delta})^2}{3}$ fraction of $(z,f)$)}\\ &=&p_z(f+g)\Big(\sigma+(1+\sigma)\xi\Big)\\ &=&p_z(f+g)u ~~\mbox{(We take $u\equiv\sigma+(1+\sigma)\xi$)}. \end{eqnarray*} If we take $\epsilon$ and $\delta$ such that $-\delta+\frac{1}{3}(1-\frac{\epsilon}{\sigma\delta})^2=v$, the above inequality holds for at least $v$ fraction of $(z,f)$, which contradicts Conjecture~\ref{conjecture:T}. $\blacksquare$ \section{Discussion} \subsection{Conjectures} \label{sec:conjectures} In this paper, we have shown additive-error fine-grained quantum supremacy based on several conjectures. In this subsection, we provide some ``evidence" that supports these conjectures. Our conjectures are related to the exponential-time hypothesis (ETH) and the strong exponential-time hypothesis (SETH), which are standard conjectures in fine-grained complexity theory~\cite{Impagliazzo1,Impagliazzo2}.
ETH and SETH are stronger (or more pessimistic) versions of the famous ${\rm NP}\neq{\rm P}$ conjecture that says that an NP-complete problem cannot be solved in polynomial time. More precisely, ETH is stated as follows: \begin{conjecture}(ETH) Any (classical) deterministic algorithm that decides whether $\#f>0$ or $\#f=0$ given (a description of) a 3-CNF with $n$ variables, $f:\{0,1\}^n\to\{0,1\}$, needs $2^{\Omega(n)}$ time. Here, $\#f\equiv\sum_{x\in\{0,1\}^n}f(x)$. \end{conjecture} SETH is a stronger version of ETH, stated as follows: \begin{conjecture}(SETH) Let $A$ be any (classical) deterministic $T(n)$-time algorithm such that the following holds: given (a description of) a CNF, $f:\{0,1\}^n\to\{0,1\}$, with at most $cn$ clauses, $A$ accepts if $\#f>0$ and rejects if $\#f=0$, where $\#f\equiv\sum_{x\in\{0,1\}^n}f(x)$. Then, for any constant $a>0$, there exists a constant $c>0$ such that $T(n)>2^{(1-a)n}$ holds for infinitely many $n$. \end{conjecture} All conjectures used in this paper concern the average-case hardness of computing GapP functions within a multiplicative error in classical probabilistic time with an NTIME oracle, and are therefore different from ETH and SETH. There are, however, three reasons that support these conjectures. First, our conjectures consider GapP functions, while ETH and SETH consider $\#$P functions. A GapP function is a difference of two $\#$P functions. Furthermore, to our knowledge, the only known way of computing a GapP function is to compute the number of accepting and rejecting paths (i.e., $\#$P functions). Therefore, computing GapP functions should not be easier than computing $\#$P functions. Second, our conjectures study average cases. One might think that solving an average case could be easier than the worst case, but, at least, SETH has not been refuted even in average cases. (The best known upper bound is that of Ref.~\cite{Lincoln}.) Third, our conjectures allow the algorithm to use an NTIME oracle.
We point out that at least $\#$ETH, which is the counting version of ETH, has not been refuted for MA (which is in ${\rm ZPP}^{\rm NP}$) and AM (which is in ${\rm coNP}^{\rm NP}$). It is an important open problem for the research of (not only fine-grained but also non-fine-grained) quantum supremacy to show additive-error quantum supremacy based on standard conjectures. \subsection{Other models} For simplicity, we have considered the three models, but similar results should hold for other sub-universal models such as the one-clean qubit model and the random circuit model. For all sub-universal models, Markov's inequality and the anti-concentration lemma hold. (For the Boson sampling model, the anti-concentration is a conjecture.) Therefore, if we assume average-case hardness conjectures similar to those introduced in this paper, we should be able to show additive-error fine-grained quantum supremacy for other sub-universal models. \acknowledgements TM thanks Yoshifumi Nakata for discussion. We thank the authors of Ref.~\cite{Dalzell} for comments on our manuscript, and sharing their draft. We also thank anonymous reviewers, especially one reviewer who pointed out some errors and an improvement of the fine-grained Stockmeyer's theorem. TM is supported by MEXT Q-LEAP, JST PRESTO No.JPMJPR176A, and the Grant-in-Aid for Young Scientists (B) No.JP17K12637 of JSPS. ST is supported by JSPS KAKENHI Grant Numbers 16H02782, 18H04090, and 18K11164.
\section{\label{sec:intro} Introduction} As a part of the {C{\smaller[2]ESR}TA}~ program at Cornell~\cite{1748-0221-10-07-P07012}, the Cornell Electron Storage Ring (CESR) was instrumented with several retarding field analyzers (RFAs)~\cite{NIMA453:507to513}, to study the buildup of low energy electrons in an accelerator vacuum chamber. This effect, known as electron cloud~\cite{ECLOUD12:Miguel,doi:10.1142/S0217751X14300233}, has been observed in a number of machines~\cite{PRSTAB7:024402,PRSTAB14:071001,NIMA556:399to409,PRSTAB6:034402,PAC09:WE4GRC02,PhysRevSTAB.6.014203,PRSTAB16:011003}, and is known to cause emittance growth and beam instabilities~\cite{PhysRevSTAB.7.124801}. It is especially dangerous for low emittance, positively charged beams, and is expected to be a limiting factor in next generation positron and proton storage rings, such as the International Linear Collider damping ring~\cite{ILCREP2007:001,PRSTAB17:031002}. In lepton machines, electron cloud is usually seeded by photoelectrons generated by synchrotron radiation. The collision of these electrons with the beam pipe can then produce one or more secondary electrons, depending on the secondary electron yield (SEY) of the material. The SEY depends on the energy and angle of the incident electron~\cite{PRSTAB5:124404}, with peak secondary production occurring at $E_{max} \approx 300$~eV. If the average SEY is greater than unity, the cloud density will grow exponentially, until saturation is reached. Most secondary electrons are generated with low energy ($<$ 10~eV), but can be given additional energy by the beam. As we will show in this paper, an unfortunate choice of beam parameters (particularly bunch spacing and charge) can drive the average electron energy up into a regime of high secondary production (near $E_{max}$), resulting in a higher cloud density. Retarding field analyzers provide information on the local electron cloud density, energy, and transverse distributions.
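The growth-and-saturation behavior described above can be illustrated with a deliberately crude bunch-by-bunch model: each bunch passage multiplies the electron flux by an effective SEY and adds a constant photoelectron seed, with a hard cap standing in for space-charge saturation. All numbers below are illustrative, not CESR parameters or a substitute for a real cloud simulation.

```python
# Toy bunch-by-bunch buildup model (illustrative numbers only).
def cloud_signal(sey_eff, n_bunches, seed=1.0, saturation=1e6):
    density = 0.0
    for _ in range(n_bunches):
        # One bunch passage: secondaries scale the flux, photoelectrons seed it,
        # and the cap crudely models space-charge saturation.
        density = min(density * sey_eff + seed, saturation)
    return density

for sey in (0.8, 1.0, 1.5):
    print(sey, cloud_signal(sey, n_bunches=45))
# sey < 1: buildup levels off near seed/(1-sey);
# sey > 1: exponential growth until the saturation cap is reached
```

This captures the qualitative dichotomy in the text: an average SEY below unity gives a self-limiting cloud, while an average SEY above unity gives exponential growth until saturation.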
Previous papers have described the use of RFAs at {C{\smaller[2]ESR}TA}~ to directly compare different electron cloud mitigation techniques~\cite{NIMA760:86to97,NIMA770:141to154}. In addition, computer simulations have been compared to RFA measurements, to quantify the electron emission properties of different cloud mitigating coatings in field free regions~\cite{PRSTAB17:061001}. Simulations of cloud dynamics in dipole and wiggler fields have been presented in conference proceedings~\cite{PAC09:FR5RFP043,IPAC10:TUPD022,IPAC11:MOPS083,IPAC12:WEPPR088}. This paper will summarize and expand on these results. In particular, multipacting and cyclotron resonances will be examined in detail. These effects, in which resonant interactions between the beam and electrons lead to accelerated cloud development, should be avoided to ensure optimal machine performance. \subsection{Retarding Field Analyzers} A retarding field analyzer consists of three main components~\cite{NIMA453:507to513}: holes drilled in the beam pipe to allow electrons to enter the device; a retarding grid, to which a voltage can be applied, rejecting electrons with less than a certain energy; and a positively biased collector, to capture any electrons which make it past the grid. If space permits, additional (grounded) grids can be added to produce a more ideal retarding field. In addition, the collectors of most RFAs used in {C{\smaller[2]ESR}TA}~are segmented to allow characterization of the spatial structure of the cloud build-up. Thus a single RFA measurement provides information on the local cloud density, energy, and transverse distribution. Some of the data presented here are voltage scans, in which the retarding voltage is varied (typically from +100 to $-250$~V or $-400$~V) while beam conditions are held constant. In other measurements, where we want to study the detector response as a function of some external parameter (e.g. 
bunch spacing), the retarding grid was biased at +50~V, to capture all incoming electrons. The collector was set to +100~V for all of our measurements. An example voltage scan is given in Fig.~\ref{fig:chic_dipole_meas}. The RFA response is plotted as a function of collector number and retarding voltage. Roughly speaking, this is a description of the transverse and energy distribution of the cloud. Collector 1 is closest to the outside of the chamber (where direct synchrotron radiation hits). The signal is strongly peaked in the central collector (no. 9), which is aligned with the horizontal position of the beam. The sign convention for retarding voltage is chosen so that a positive value on this axis corresponds to a negative physical voltage on the grid (and thus a rejection of lower energy electrons). The beam conditions are given as ``1x45x1.25~mA e$^+$, 14~ns, 5.3~GeV." This notation indicates one train of 45 positron bunches, with a per-bunch current of 1.25~mA (1~mA = $1.6\times10^{10}$ particles), with 14~ns bunch spacing, and a beam energy of 5.3~GeV. \begin{figure} \centering \includegraphics[width=.6\textwidth]{run2983_slac4_pub_notitle3.pdf} \\ \caption[CESR dipole RFA voltage scans]{\label{fig:chic_dipole_meas} Dipole RFA voltage scan: 1x45x1.25~mA $e^+$, 14~ns, 5.3~GeV, 810~gauss field. The central collector is no. 9.} \end{figure} \subsection{Electron Cloud in Dipoles} In the presence of a dipole magnetic field, an electron will undergo helical motion, spiralling around the field lines. For a standard dipole magnet in an accelerator (with strength $\sim$ 1~kilogauss), a typical cloud electron (with energy $\sim$ 10 - 100~eV) will have a cyclotron radius on the order of a few hundred $\mu$m. In other words, the motion of the electron will be approximately one dimensional, along the direction of the dipole field. 
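The quoted order of magnitude follows from the nonrelativistic cyclotron radius $r = m_e v_\perp/(q_e B)$ and period $2\pi m_e/(q_e B)$; a quick numeric check (rounded constants, treating the full kinetic energy as transverse, so these are upper estimates):

```python
import math

M_E = 9.109e-31  # electron mass [kg]
Q_E = 1.602e-19  # elementary charge [C]

def cyclotron_radius(energy_ev, b_tesla):
    # Nonrelativistic r = m*v/(q*B), taking all kinetic energy as transverse.
    v = math.sqrt(2.0 * Q_E * energy_ev / M_E)
    return M_E * v / (Q_E * b_tesla)

def cyclotron_period(b_tesla):
    return 2.0 * math.pi * M_E / (Q_E * b_tesla)

B = 0.1  # 1 kilogauss in tesla
for e_ev in (10.0, 100.0):
    print(e_ev, cyclotron_radius(e_ev, B))  # ~1e-4 to ~3e-4 m
print(cyclotron_period(B))                  # ~0.4 ns
```

For 10-100~eV electrons in a 1~kilogauss field this gives radii of roughly 100-340~$\mu$m, consistent with the "few hundred $\mu$m" estimate above; the sub-nanosecond cyclotron period is also the timescale relevant to the cyclotron resonances discussed below.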
This pinning of the motion to the field lines often results in a strong concentration of the cloud in the center of the chamber, where beam kicks are strongest. Stronger beam kicks drive the average electron energy up, which typically results in a higher average SEY (since most secondary electrons are emitted with $E_{sec} \ll E_{max}$). This effect is seen clearly in Fig.~\ref{fig:chic_dipole_meas}. In addition, multipacting and cyclotron resonances, described below, can appear in dipole fields. \subsubsection{\label{ssec:mult} Multipacting Resonances} A multipacting resonance occurs when a characteristic time for the cloud development is equal to the bunch spacing. As originally proposed by Gr\"{o}bner~\cite{HEACC77:GROBNER}, this happens when the kick from the beam gives secondary electrons near the vacuum chamber wall just enough energy to reach the opposite wall in time for the next bunch. These electrons generate more secondaries, which are again given energy by the beam. This process continues, resulting in a resonant buildup of the cloud. The resonant condition is given by Eq.~(\ref{eq:grob}). \begin{equation} \label{eq:grob} t_b = \frac{b^2}{c r_e N_b} \end{equation} Here $t_b$ is the bunch spacing, $b$ is the chamber half-height, $c$ is the speed of light, $r_e$ is the classical electron radius, and $N_b$ is the bunch population. A more general condition was derived by Harkay et al.~\cite{PAC03:RPPG002,PRSTAB6:034402}, which includes nonzero secondary emission velocity. In Section~\ref{ssec:mult_res}, we develop an even more general model of multipacting resonances, which includes the possibility of multiple beam kicks. \subsubsection{\label{ssec:cyc} Cyclotron Resonances} A cyclotron resonance occurs when the bunch spacing is an integral multiple of the cyclotron period of an electron in a dipole field~\cite{PRSTAB11:091002}. 
Under these conditions, the transverse beam kick to a given electron will always be in the same direction, resulting in a steady increase in the particle's energy, and (usually) a higher secondary electron yield when it hits the vacuum chamber wall. The resonant condition is given in Eq.~(\ref{eq:cyc}), where $m_e$ is the electron mass, $q_e$ is the electron charge, $n$ is an integer, and $B$ is the magnetic field strength. \begin{equation} \label{eq:cyc} t_b = \frac{2 \pi m_e n}{q_e B} \end{equation} Cyclotron resonances were observed at SLAC using a chicane of four dipole magnets instrumented with RFAs~\cite{NIMA621:33to38}. Unexpectedly, the resonances sometimes appeared as peaks in the signal, and other times as dips. This chicane was moved to CESR early in the {C{\smaller[2]ESR}TA}~program. In Section~\ref{ssec:cyc_res}, we confirm the existence of cyclotron resonances, and in Section~\ref{ssec:cyc_res_sim}, we provide an explanation for the peak/dip phenomenon. \section{\label{sec:instrumentation} Instrumentation} Detailed descriptions of the {C{\smaller[2]ESR}TA}~electron cloud experimental program, design of the field region RFAs, and data acquisition system can be found elsewhere~\cite{NIMA770:141to154,CLNS:12:2084}; here we provide only a brief summary. RFAs in each field region had to be specially designed to fit inside the narrow magnet apertures. The key parameters of each RFA type are listed in Table~\ref{tab:dipole_rfa_styles}. \begin{table} \centering \footnotesize \setlength{\tabcolsep}{10pt} \caption{\label{tab:dipole_rfa_styles} List of dipole/wiggler RFA locations. The elliptical and rectangular chambers are 9~cm in width by 5~cm in height. The circular chamber is 4.5~cm in radius. ``Grid trans." refers to the optical transparency of the grids. Note that the wiggler RFAs used two generations of grids with different transparencies.} \begin{tabular}{cccccc} \hline \hline RFA & Chamber type & Field Strength & Grids & Collectors & Grid trans.
\\ \hline CESR dipole & Elliptical Al & 0.079 - 0.2010~T & 1 & 9 & ~38\% \\ Chicane dipole & Circular Al & 0 - 0.12~T & 3 & 17 & ~92\% \\ Wiggler & Rectangular Cu & 1.9~T & 1 & 12 & ~38/92\% \\ \hline \hline \end{tabular} \end{table} \paragraph{CESR Dipole RFA} To study cloud buildup in a realistic dipole field environment, a thin RFA was installed inside a CESR dipole magnet. The magnetic field in this magnet depends on the beam energy: 790~gauss at 2.1~GeV, 1520~gauss at 4~GeV, and 2010~gauss at 5.3~GeV. The chamber is made of uncoated (6063) aluminum. \paragraph{Chicane RFAs} A chicane of four dipole magnets designed at SLAC~\cite{NIMA621:33to38} was installed in the L3 straight. The field of these magnets can be varied over the range of 0 to 1.46~kilogauss, which allowed for the study of the effect of dipole field strength on cloud dynamics, without affecting the trajectory of stored beams in the rest of the ring. Three of the chicane dipole chambers tested different electron cloud mitigation techniques: two of the chambers were TiN coated~\cite{NIMA551:187to199}, and one was both grooved~\cite{NIMA571:588to598,PAC07:THPMN118} and TiN coated (the fourth was bare aluminum). \paragraph{Wiggler RFAs} During the {C{\smaller[2]ESR}TA}~reconfiguration in 2008, six superconducting wigglers were installed in the L0 straight section of CESR. They were typically operated with a peak transverse field of 1.9~T. Three of these wigglers were instrumented with RFAs, at three different locations in the wiggler field: in the center of the wiggler pole (effectively a 1.9~T dipole field), half way between two poles (where the field is longitudinal), and in an intermediate region~\cite{NIMA770:141to154}. This paper will focus on the pole center RFAs. The first generation wiggler RFAs were equipped with low-transparency stainless steel grids. 
However, as described in Section~\ref{ssec:tramp}, secondary emission from these grids led to a significant interaction between the electron cloud and the RFA, complicating the interpretation of the measurements. Consequently, in the second generation of wiggler chambers, the grids were changed to high-transparency copper meshes. The use of high transparency grids effectively solved the grid emission problem. \section{Measurements and Analytical Models} Many measurements have been taken in CESR with RFAs in dipole fields, under a wide variety of different beam conditions. This has allowed for detailed studies of electron cloud dynamics, in particular of multipacting and cyclotron resonances. \subsection{\label{ssec:mult_res} Multipacting Resonances} To study the time evolution of the electron cloud, we collected RFA data with bunch spacings varying from 4~ns to 112~ns. All of the data presented in this section were taken with a single train of 20 bunches, at beam energy 5.3~GeV. Fig.~\ref{fig:dipole_spacing_chic} shows the signal in the central collector of the chicane RFA as a function of bunch spacing, for different bunch currents, and for both electron and positron beams. A few interesting features are readily apparent in the data. Except at the lowest current value, both the electron and positron beam data show a peak at 56~ns. The positron data has another peak, which moves to lower bunch spacings at higher currents. These data are not consistent with a simple multipacting resonance (Eq.~(\ref{eq:grob})), which would account for only one resonance in the positron measurement, and none in the electron measurement. Additionally, the beam kicks at the wall are very small for this case (amounting to 13~eV for a 3.5~mA beam), and so are unlikely to drive electrons at the wall into a regime of high secondary production. A similar set of data for the CESR dipole RFA is shown in Fig.~\ref{fig:dipole_spacing_cesr}.
In this case, both the electron and positron beam data contain a single peak that moves to lower spacings as the current increases. The positron data peaks occur at much lower spacings than the electron peaks. \begin{figure} \centering \begin{tabular}{c} \includegraphics*[width=0.6\textwidth]{multipacting_3curs_slac4_pos_pub4.pdf} \\ \includegraphics*[width=0.6\textwidth]{multipacting_3curs_slac4_elec_pub4.pdf} \end{tabular} \caption{\label{fig:dipole_spacing_chic} Central collector signal in the chicane dipole RFA (set to 810~gauss) as a function of bunch spacing, at different bunch currents. Top: positron beam; bottom: electron beam. Note that the signals have been normalized to be on the same scale. In absolute terms, the peak positron signal was about five times the peak electron signal.} \end{figure} \begin{figure} \centering \begin{tabular}{c} \includegraphics*[width=0.6\textwidth]{multipacting_3curs_cesr_pos_pub4.pdf} \\ \includegraphics*[width=0.6\textwidth]{multipacting_4curs_cesr_elec_pub5.pdf} \end{tabular} \caption{\label{fig:dipole_spacing_cesr} Central collector signal in the CESR dipole RFA as a function of bunch spacing, at different bunch currents. Top: positron beam; bottom: electron beam. Note that the signals have been normalized to be on the same scale. In absolute terms, the peak positron signal was about four times the peak electron signal.} \end{figure} \subsubsection{Analytical Model} These resonances can be explained if we allow the secondary electrons to be generated with some (small) energy. If the time for a typical secondary electron to travel to the center of the beam pipe is equal to the bunch spacing, this electron will be kicked strongly by the beam, and is likely to produce more secondary electrons~\cite{PRSTAB6:034402}. If we ignore the time for the kicked electron to travel to the beam pipe wall, the resonance condition is given by Eq.~(\ref{eq:tb1_res}), where $t_b$ is the bunch spacing, $b$ is the chamber half-height (i.e.
the distance from the wall to the beam), and $v_{sec}$ is a characteristic secondary electron velocity. \begin{equation} \label{eq:tb1_res} t_b = b / v_{sec} \end{equation} For a (plausible~\cite{PRSTAB5:124404}) secondary emission energy of 1.5~eV, this peak will occur at 61~ns for the chicane dipole case ($b = 4.5$~cm). Because aluminum has a high SEY for a broad range of incident energies, we expect the resonance to be somewhat broad. The fact that there is a finite width to the secondary energy distribution will further smear out the peak. Because this model does not distinguish between electron and positron beams, we expect this peak to be in the same location for both species. This is indeed what we observe in the measured data. For the CESR dipole RFA ($b = 2.5$~cm), the resonance should occur at 34~ns, which does not agree with either the electron or positron data. In order to derive a more accurate prediction, we need to take into account the time it takes for a kicked electron to reach the chamber wall. We define the resonant condition as the bunch spacing that results in an electron energy $E_2 = E_{max}$, where $E_{max}$ is the energy corresponding to peak secondary production (in eV). This process is diagrammed in Fig.~\ref{fig:tb_diagram}. The resonant condition now becomes: \begin{align} \begin{split} t_b & = \frac{b - r}{v_{sec}} + \frac{b \pm r}{v_{max}} \\ v_{max} & \equiv \sqrt{\frac{2 q_e E_{max}}{m_e}} = \frac{2 c N_b r_e}{r} \pm v_{sec} \label{eq:tb1_res2} \end{split} \end{align} Here $r$ is the distance from the electron to the beam during the bunch passage, $N_b$ is the bunch population and $r_e$ is the classical electron radius. Where there is a $\pm$ symbol, the plus sign applies for positron beams, and the minus for electron beams. Eliminating $r$ from Eq.~(\ref{eq:tb1_res2}) and defining $k \equiv 2 c N_b r_e$ gives us a resonant bunch spacing (Eq.~(\ref{eq:tb1_res3})). 
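For completeness, the elimination is short: Eq.~(\ref{eq:tb1_res2}) gives $r = k/(v_{max} \mp v_{sec})$ (upper sign for positrons), and substituting into the time condition,
\begin{align*}
t_b = \frac{b-r}{v_{sec}} + \frac{b\pm r}{v_{max}}
= \frac{b(v_{max}+v_{sec})}{v_{sec}v_{max}} - \frac{r(v_{max}\mp v_{sec})}{v_{sec}v_{max}}
= \frac{b(v_{max}+v_{sec})-k}{v_{sec}v_{max}},
\end{align*}
so the sign-dependent factor cancels against $r$ in the same way for both species.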
Interestingly, the condition is still the same for electron and positron beams. \begin{equation} t_{b,1} = \frac{b (v_{max} + v_{sec}) - k}{v_{max} v_{sec}} \label{eq:tb1_res3} \end{equation} In this analysis we have used the impulse approximation for determining the beam kick~\cite{CERN:LHC97,doi:10.1142/S0217751X14300233}, which assumes that $r$ is much greater than the beam size. This approximation is valid as long as the distance from the electron to the beam is greater than a critical radius $r_c \approx 2 \sqrt{N_b r_e \sigma_z \sqrt{2/\pi}}$, where $\sigma_z$ is the bunch length. For the conditions presented here, $\sigma_z \approx$ 17~mm, so the critical radius is 1.6~mm at 1~mA, and 2.9~mm at 3.4~mA. For the resonant condition in Eq.~(\ref{eq:tb1_res2}), $r \approx$ 2.8~mm at 1~mA, and 9.6~mm at 3.4~mA. So the impulse approximation is always valid, although it is only marginally satisfied at low current. \begin{figure} \centering \includegraphics*[width=0.9\textwidth]{multipacting_diagram_tb1_new.pdf} \caption{\label{fig:tb_diagram} Diagram of single bunch multipacting resonances: positron beams (top) and electron beams (bottom). A secondary electron is released from the bottom wall (left), travels upward at speed $v_{sec}$, receives a kick from a passing bunch (middle), and hits the wall, releasing another secondary electron at time $t = t_b$ (right).} \end{figure} The 14~ns peak in the positron data is due to a higher-order multipacting resonance, where it takes two bunches to set up the resonance condition. Here we consider the case where the first bunch gives some additional energy to the electron, so that it arrives near the center of the chamber in time for the second bunch, when it receives a large enough kick to give it energy $E_{max}$. This process is shown in Fig.~\ref{fig:tb2_diagram}. \begin{figure} \centering \includegraphics*[width=0.9\textwidth]{multipacting_diagram_tb2_new.pdf} \caption{\label{fig:tb2_diagram} Diagram of a two-bunch multipacting resonance.
From left to right: a secondary electron is released from the bottom wall with speed $v_{sec}$. It receives a kick from a passing bunch, and continues with higher velocity ($v_2$). It is kicked again by a second bunch, bringing its speed up to $v_{max}$. Finally, it hits the wall, releasing another secondary electron at time $t = 2 t_b$.} \end{figure} From this picture we can derive a system of equations for $t_{b,2}$ (where the subscript 2 is used to signify a 2-bunch resonance): \begin{align} \begin{split} 2 t_{b,2} &= \frac{b - r_1}{v_{sec}} + \frac{r_1 - r_2}{v_2} + \frac{r_2 + b}{v_{max}} \\ t_{b,2} &= \frac{r_1 - r_2}{v_2} \\ v_2 &= v_{sec} + \frac{k}{r_1} \\ v_{max} &= v_2 + \frac{k}{r_2} \label{eq:tb2_res} \end{split} \end{align} Here $r_1$ is the distance between the beam and the electron during the first bunch passage, $r_2$ is this distance during the second bunch passage, and $v_2$ is the electron velocity after the first beam kick. Note that this condition only applies to positron beams, since the kicks must be towards the beam. These equations are too unwieldy to be solved analytically, but they can be solved numerically to give predictions for the resonant bunch spacings. \subsubsection{Comparison with Measured Data} Fig.~\ref{fig:mp_all} compares the measured and predicted resonances for both the chicane and CESR dipole chambers. Effectively, we have varied the two most important parameters of the model: bunch current, and chamber size (since the two dipole RFAs have different chamber heights). Overall there is good agreement between the data and model for all measured resonances. In particular, the model captures the major features of the data: \begin{itemize} \item For the chicane RFA, the 1-bunch resonance appears in both the electron and positron data, at the same bunch spacing.
\item All resonances move toward lower bunch spacing at higher current. \end{itemize} The 1-bunch resonance is not seen as clearly in the CESR dipole RFA positron data, though a ``shelf" can be seen at 1.4~mA, which does correspond to the electron data peak. Simulations (Section~\ref{ssec:mult_sim}) also predict a peak. The lack of a clear resonance in the data may be a result of the depletion phenomenon described in a previous paper (\cite{NIMA770:141to154}, Sec. 3.1.2). Essentially, in a strong field (such as the 2~kilogauss field of the CESR dipole RFA), the RFA can actually become less sensitive to multipacting, since it depletes the cloud under the RFA holes, exactly where it is measuring. In general the 1-bunch resonances are less pronounced than the 2-bunch resonances; this may be why we still see the 2-bunch resonance. The model and data are also in quantitative agreement, with two exceptions: the 1-bunch resonance for the chicane dipole at low current, and the 1-bunch resonance for the CESR dipole at high current. The former discrepancy may be due to the impulse approximation not being valid (as explained above). The latter discrepancy may be due to the fact that we are ignoring the beam's image charge, and the cloud's space charge. The chicane RFA is installed in a circular chamber, so there will be no image charge (assuming a centrally located beam). It is also located in a long straight section that receives relatively little synchrotron radiation. This means the overall cloud density is lower, and space charge is less important. The CESR dipole chamber, however, is (approximately) elliptical, so image charge can be important. It is also located in a high radiation environment. An improved model, which takes image charge and space charge into account, would probably fit this data better.
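The system in Eq.~(\ref{eq:tb2_res}) reduces to a one-dimensional root find: the second equation eliminates $t_{b,2}$, while the last two give $v_2$ and $r_2$ in terms of $r_1$. The following sketch solves it by bisection; all parameter values (chamber half-height, bunch population, kick strength $k = 2 N_b r_e c$, and the $\sim$300~eV peak-yield energy behind $v_{max}$) are illustrative assumptions, not values quoted in the text.

```python
import math

# Illustrative parameters (assumptions, not values taken from the text)
b = 0.045                              # chamber half-height [m]
m_e, q_e = 9.109e-31, 1.602e-19        # electron mass [kg], charge [C]
r_e, c = 2.818e-15, 2.998e8            # classical electron radius [m], c [m/s]
N_b = 1.6e10                           # assumed bunch population
k = 2 * N_b * r_e * c                  # impulse-approximation kick strength [m^2/s]
v_sec = math.sqrt(2 * 1.5 * q_e / m_e)    # 1.5 eV secondary emission energy
v_max = math.sqrt(2 * 300.0 * q_e / m_e)  # assumed ~300 eV peak-yield energy

def residual(r1):
    """Residual of Eq. (tb2_res) after eliminating t_b2, v_2, and r_2."""
    v2 = v_sec + k / r1                # velocity after the first kick
    r2 = k / (v_max - v2)              # second kick must bring v2 up to v_max
    # 2*t_b = (b-r1)/v_sec + t_b + (r2+b)/v_max  with  t_b = (r1-r2)/v2
    return (b - r1) / v_sec + (r2 + b) / v_max - (r1 - r2) / v2

# Bisection; the residual changes sign between these brackets for these numbers
lo, hi = 0.01, 0.04
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
r1 = 0.5 * (lo + hi)
v2 = v_sec + k / r1
r2 = k / (v_max - v2)
t_b2 = (r1 - r2) / v2
print(f"two-bunch resonant spacing ~ {t_b2 * 1e9:.1f} ns")
```

With these placeholder numbers the solver lands in the tens-of-nanoseconds range expected for a 2-bunch resonance; the actual predictions in Fig.~\ref{fig:mp_all} use the measured machine parameters.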
\begin{figure} \centering \begin{tabular}{c} \includegraphics*[width=0.6\textwidth]{multipacting_all4_slac2.pdf} \\ \includegraphics*[width=0.6\textwidth]{multipacting_all4_cesr2.pdf} \end{tabular} \caption{\label{fig:mp_all} Comparison of measured and predicted multipacting resonances for the chicane (top) and CESR dipole (bottom) RFAs. The solid lines represent 1-bunch resonances (Eq.~(\ref{eq:tb1_res3})), the dashed lines 2-bunch resonances (Eq.~(\ref{eq:tb2_res})), and the points are measured data. In both cases the n=1 points are taken from the electron beam data, and the n=2 from the positron data. The error bars are defined as half the difference in bunch spacing between successive measurements.} \end{figure} Measurements of multipacting resonances with a positron beam at the Advanced Photon Source~\cite{PRSTAB6:034402} found a peak at 20~ns for bunch populations in the range of $3.45\times10^{10}$ to $5.75\times10^{10}$. Plugging these numbers and the chamber half-height (21~mm) into Eq.~(\ref{eq:tb1_res3}) gives a resonant spacing of 18-23~ns, consistent with their result. However, they measured a different resonance (30~ns) for an electron beam, which is not predicted by our theory. Their measurements were made in a field-free region, with an RFA located at an angle with respect to the top of the chamber, so our one-dimensional model may not be completely valid. Nonetheless it is suggestive that the location of the positron peak agrees with our prediction. Table~\ref{tab:res_other} lists the predicted locations of multipacting resonances for some proposed accelerators with positively charged beams. Also included for comparison are the two most common operating modes of the APS (which now uses electron beams, so there is no 2-bunch resonance). The LHC is not included, because the beam is so intense that $E_{2} > E_{max}$ at the beam pipe wall, so the machine will generate high energy secondaries regardless of bunch spacing. 
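The APS comparison above is straightforward to reproduce from Eq.~(\ref{eq:tb1_res3}), taking $k = 2 N_b r_e c$ for the impulse-approximation kick strength, $v_{sec}$ from a 1.5~eV emission energy, and $v_{max}$ from a $\sim$300~eV peak-yield energy. The last value is our assumption (it is not stated in this excerpt):

```python
import math

m_e, q_e = 9.109e-31, 1.602e-19   # electron mass [kg], charge [C]
r_e, c = 2.818e-15, 2.998e8       # classical electron radius [m], c [m/s]

def t_b1(N_b, b, E_sec=1.5, E_max=300.0):
    """One-bunch resonant spacing of Eq. (tb1_res3), in seconds."""
    v_sec = math.sqrt(2 * E_sec * q_e / m_e)
    v_max = math.sqrt(2 * E_max * q_e / m_e)  # assumed peak-yield energy
    k = 2 * N_b * r_e * c                     # beam kick strength
    return (b * (v_max + v_sec) - k) / (v_max * v_sec)

# APS chamber half-height 21 mm, bunch populations from the cited measurement
for N_b in (3.45e10, 5.75e10):
    print(f"N_b = {N_b:.3g}: t_b1 = {t_b1(N_b, 0.021) * 1e9:.1f} ns")
```

This recovers the quoted 18--23~ns range: roughly 23~ns at the low end of the population range and 18~ns at the high end.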
\begin{table} \centering \setlength\tabcolsep{5mm} \caption{Resonant bunch spacings ($t_{b,1}$, $t_{b,2}$) compared to operational spacing ($t_b$) for different accelerators.}\label{tab:res_other} \begin{tabular}{c c c c c c} \hline \hline Machine & $N_b$ & b (cm) & $t_b$ (ns) & $t_{b,1}$ (ns) & $t_{b,2}$ (ns) \\ \hline ILC DR & $2\times10^{10}$ & 2.5 & 6 & 32 & 7.6 \\ CLIC DR & $4.1\times10^{9}$ & 3 & 0.5 & 43 & 17.4 \\ SuperKEKB & $9\times10^{10}$ & 4.5 & 4 & 46 & 5.4 \\ APS (324b) & $7.1\times10^{9}$ & 2.1 & 11 & 29 & X \\ APS (24b) & $9.5\times10^{10}$ & 2.1 & 153 & 9 & X \\ \hline \hline \end{tabular} \end{table} It is worth noting that running with very short bunch spacing (as many cutting edge accelerators do) can actually be advantageous from an electron cloud point of view, since it avoids both multipacting resonances. Running with high current and very large bunch spacing (as some light sources do) also works. However, it is important to keep in mind that this model does not include the cloud's space charge, which could be an important effect in these high intensity machines. Particle tracking simulations (see Section~\ref{ssec:mult_sim}) can be used to more accurately predict the resonances. \subsection{\label{ssec:cyc_res} Cyclotron Resonances} By varying the strength of the chicane magnets, we can also study the behavior of the cloud at different dipole magnetic field values. Fig.~\ref{fig:chicane_scan} shows RFA data taken as a function of magnetic field strength, at two different bunch spacings. The most prominent feature of the data is regularly occurring spikes or dips, which are seen in all cases. These correspond to ``cyclotron resonances," which occur whenever the bunch spacing is an integral multiple of the cyclotron period of cloud electrons (see Section~\ref{ssec:cyc}). For 4~ns bunch spacing we expect them every 89~gauss; and for 12~ns spacing, every 30~gauss. This is exactly what is seen in the data.
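The quoted field spacings follow directly from the resonance condition: with $t_b = n T_c$ and the nonrelativistic cyclotron period $T_c = 2\pi m_e/(q_e B)$, successive resonances are separated by $\Delta B = 2\pi m_e/(q_e t_b)$. A quick check:

```python
import math

m_e, q_e = 9.109e-31, 1.602e-19   # electron mass [kg], charge [C]

def delta_B_gauss(t_b):
    """Field spacing between successive cyclotron resonances, in gauss."""
    return 2 * math.pi * m_e / (q_e * t_b) * 1e4   # tesla -> gauss

print(delta_B_gauss(4e-9))    # ~89 gauss for 4 ns spacing
print(delta_B_gauss(12e-9))   # ~30 gauss for 12 ns spacing
```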
Another interesting feature of this measurement is that these resonances appear as peaks in the RFA signal in the aluminum chamber, but as dips in the coated chambers. This difference in the behavior of the two chamber materials is explained in Section~\ref{ssec:cyc_res_sim}. \begin{figure} \centering \begin{tabular}{c} \includegraphics*[width=0.6\textwidth]{chicane_scan_1418-eps-converted-to.pdf} \\ \includegraphics*[width=0.6\textwidth]{chicscan_1445_pub.pdf} \\ \end{tabular} \caption{\label{fig:chicane_scan} RFA signal as a function of chicane magnetic field: 1x45x1~mA $e^+$, 5~GeV. Top: 4~ns spacing. Bottom: 12~ns spacing. Cyclotron resonances are observed every 89~gauss with 4~ns spacing, and every 30~gauss with 12~ns spacing, as predicted by Equation~(\ref{eq:cyc}). Note that the aluminum chamber signal is divided by 20.} \end{figure} \subsection{\label{ssec:tramp} Anomalous Enhancement} Detailed analysis of the wiggler RFA data is complicated by an interaction between the cloud and the RFA itself. Fig.~\ref{fig:tramp_example} shows a voltage scan done with an RFA in the center pole of a wiggler (approximated by a 1.9 T dipole field). Here one can see a clear enhancement in the signal at low (but nonzero) retarding voltage. Since the RFA should simply be collecting all electrons with an energy more than the magnitude of the retarding voltage, the signal should be a monotonically decreasing function of the voltage. So the RFA is not behaving simply as a passive monitor. A similar effect has been observed in a strong dipole field at KEKB~\cite{NIMA598:372to378}. The spike in collector current is accompanied by a corresponding dip in the grid current, suggesting that the grid is the source of the extra collector current. 
\begin{figure} \centering \includegraphics*[width=0.6\textwidth]{wig1w_tramp_2585_pub4.pdf} \caption[Resonant enhancement in wiggler data]{\label{fig:tramp_example} Resonant enhancement in wiggler data, 45 bunches, 1.25 mA/bunch, $e^+$, 2.1~GeV, 14~ns. Note that there are 12 collectors, so collector 6 is one of the central ones.} \end{figure} This spurious signal comes from a resonance between the bunch spacing and retarding voltage. To understand this, consider an electron which collides with the retarding grid and generates a secondary. Because electrons are so strongly pinned to the magnetic field lines in a 1.9~T field, this electron is likely to escape through the same beam pipe hole through which it entered. An electron ejected from the grid will gain energy from the retarding field before it re-enters the vacuum chamber. If it is given the right amount of energy, it will be near the center of the vacuum chamber during the next bunch passage, and get a large beam kick, putting it in a position to generate even more secondaries. This process, which we have dubbed the ``trampoline effect", is essentially an artificial multipacting resonance. If we take Eq.~(\ref{eq:tb1_res}) from Section~\ref{ssec:mult_res}, and use the retarding voltage in place of the secondary electron energy, the resonance condition becomes: \begin{equation} \label{eq:tramp} V_{ret} = \frac{m_e b^2}{2 q_e t_b^2} \end{equation} Here $V_{ret}$ is the retarding voltage, $b$ is the chamber half-height, $t_b$ is the bunch spacing, $m_e$ is the electron mass, and $q_e$ is the electron charge. Fig.~\ref{fig:tramp_spacing} plots a series of retarding voltage scans done with a wiggler RFA, for 4, 8, 12, and 20~ns bunch spacing. The trampoline effect is seen in all cases, with the spike occurring at $\sim$110, 30, 15, and 10~V, respectively. Meanwhile, the simple model given in Eq.~(\ref{eq:tramp}) predicts 111, 28, 12, and 4~V, respectively.
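All four predicted voltages follow from Eq.~(\ref{eq:tramp}) with a single chamber half-height. The value $b \approx 25$~mm used below is our assumption (the wiggler chamber half-height is not given in this excerpt); it reproduces the 111~V figure and then fixes the other three via the $1/t_b^2$ scaling:

```python
m_e, q_e = 9.109e-31, 1.602e-19   # electron mass [kg], charge [C]
b = 0.025                         # assumed wiggler chamber half-height [m]

def v_ret(t_b):
    """Resonant retarding voltage of Eq. (tramp), in volts."""
    return m_e * b**2 / (2 * q_e * t_b**2)

print([round(v_ret(t * 1e-9)) for t in (4, 8, 12, 20)])   # -> [111, 28, 12, 4]
```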
The predictions are quite close to the measurements, especially for short bunch spacing. The second spike at low voltage in the 4~ns data corresponds to a two-bunch resonance, also described in Section~\ref{ssec:mult_res}. \begin{figure} \centering \includegraphics*[width=0.6\textwidth]{ipac_tramp_combined6.pdf} \caption[Resonant spike location at different bunch spacings]{\label{fig:tramp_spacing} Resonant spike location at different bunch spacings, 1x45x1.25 mA $e^+$, 5~GeV. Only the signal in the central collector is plotted.} \end{figure} \section{Simulations} While the analytical models described above are generally successful at explaining our data, additional insight can be gained by using more detailed computer simulations. The results presented here were obtained with the particle tracking code POSINST~\cite{MBI97:170,LHC:ProjRep:180,PRSTAB5:124404}. In POSINST, a simulated photoelectron is generated on the chamber surface and tracked under the action of the beam. Secondary electrons are generated via a probabilistic process. Space charge and image charge are also included in the simulation. \subsection{\label{sec:modeling} RFA Modeling} In order to accurately predict the RFA signal, a sophisticated model of the detector must be incorporated into the code. Our model has been described in detail for the RFAs installed in field free regions~\cite{PRSTAB17:061001}; the dipole RFA models are essentially the same. In short, when a macroparticle in the simulation collides with the vacuum chamber wall in the region covered by the RFA, a special function is called which calculates a simulated RFA signal based on the particle's incident energy and angle. The signal is binned by energy and transverse position, reproducing the energy and position resolution of the RFA. Fig.~\ref{fig:dipole_rfa_eff} shows the efficiency (fraction of the macroparticle's charge that contributes to the RFA signal) as a function of incident angle in the chicane RFA. 
This represents the probability that an incoming electron will make it through the beam pipe hole and grids, and to the collector. Note that low energy particles have a very high efficiency, due to their small cyclotron radius. \begin{figure} \centering \includegraphics[width=.6\linewidth]{trans_ang_custom_slac_a.pdf} \\ \caption[Simulated RFA efficiency vs. incident angle, dipole RFA]{\label{fig:dipole_rfa_eff} Simulated RFA efficiency vs. incident angle for the chicane dipole RFA, with an 810~gauss magnetic field.} \end{figure} Using the model described above, we ran simulations for the dipole RFAs, for various beam conditions. Fig.~\ref{fig:sim_example} shows a typical example, for the aluminum chicane RFA. Overall, the agreement with data (Fig.~\ref{fig:chic_dipole_meas}) is reasonable, without any additional tuning of the simulation parameters. \begin{figure} \centering \includegraphics[width=.6\linewidth]{slac4_paramtest_c23_pub2.pdf} \caption{\label{fig:sim_example} Example aluminum chicane RFA simulation: 1x45x1.25~mA $e^+$, 14~ns, 5.3~GeV. Compare to Fig.~\ref{fig:chic_dipole_meas}.} \end{figure} \subsection{\label{ssec:mult_sim} Simulation of Multipacting Resonances} Because the simulation contains all the relevant features of our multipacting model (i.e. secondary emission, beam kicks, chamber geometry), it should be able to reproduce the resonances predicted by the model. In addition, we are able to vary the secondary emission energy, to study the effect this has on the resonant spacings. According to Eq.~(\ref{eq:tb1_res}), the 1-bunch resonance should have an approximately inverse dependence on the emission velocity, i.e. $t_{b,1} \sim 1/\sqrt{E_{sec}}$. The 2-bunch resonance should have a much weaker dependence on emission energy. Fig.~\ref{fig:dip_spacing_sim} plots the simulated central collector signal as a function of bunch spacing, for four different combinations of chamber, bunch current, and beam species.
Both the 1-bunch and 2-bunch multipacting peaks are observed. As predicted by the model, the locations of these peaks (especially for the 1-bunch resonance) are sensitive to the energy spectrum of emitted secondary electrons. A secondary emission energy distribution peaked at 1.5~eV is generally consistent with the data, in particular with the locations of the multipacting peaks. Lowering the emission energy to 0.75~eV moves the peaks to higher bunch spacings, and broadens the peaks. Increasing the energy to 3~eV moves the peaks to lower spacings, and also results in narrower peaks. Neither of these cases is consistent with the measured data. Thus this comparison provides a fairly sensitive indirect measurement of the secondary emission energy. In general, the data, analytical model, and simulation are in good agreement, assuming 1.5~eV secondary electrons. It is notable that the simulation agrees well with the high current electron beam data in the CESR chamber (which the analytical model did not match well). This is most likely because the simulation includes space and image charge, which are important in the high current regime. For the sake of simplicity, the angular distribution of emitted secondaries was set to be strongly peaked at normal to the vacuum chamber wall (POSINST parameter \texttt{pangsec}~\cite{PRSTAB5:124404} was set to 10). This was done to make it easy to compare the location of resonances to those predicted by the model. In reality the electrons should be emitted at various angles, which would complicate the analysis, but may give a qualitatively better fit to the data. Studying the effect of \texttt{pangsec} and other simulation parameters on these results would be an interesting subject for future study.
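The stated sensitivity to emission energy can be seen directly from Eq.~(\ref{eq:tb1_res3}): since $v_{sec} \propto \sqrt{E_{sec}}$ and the $b/v_{sec}$ term dominates, quadrupling $E_{sec}$ roughly halves $t_{b,1}$. A sketch with illustrative parameters (the chamber half-height, bunch population, and peak-yield energy below are assumptions, not fitted values):

```python
import math

m_e, q_e = 9.109e-31, 1.602e-19   # electron mass [kg], charge [C]
r_e, c = 2.818e-15, 2.998e8       # classical electron radius [m], c [m/s]
b = 0.045                         # assumed chamber half-height [m]
N_b = 2.2e10                      # assumed bunch population
v_max = math.sqrt(2 * 300.0 * q_e / m_e)  # assumed ~300 eV peak-yield energy
k = 2 * N_b * r_e * c                     # beam kick strength

def t_b1(E_sec):
    """One-bunch resonant spacing of Eq. (tb1_res3) vs. emission energy [eV]."""
    v_sec = math.sqrt(2 * E_sec * q_e / m_e)
    return (b * (v_max + v_sec) - k) / (v_max * v_sec)

for E in (0.75, 1.5, 3.0):
    print(f"E_sec = {E:4.2f} eV  ->  t_b1 ~ {t_b1(E) * 1e9:.0f} ns")
# higher emission energy -> resonance at lower bunch spacing
```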
\begin{figure*} \centering \begin{tabular}{c c} \includegraphics[width=.5\linewidth]{esec_compare_pangsec10_slac_1mA2.pdf} \includegraphics[width=.5\linewidth]{esec_compare_pangsec10_slac_3mA2.pdf} \\ \includegraphics[width=.5\linewidth]{esec_compare_pangsec10_cesr_1mA2.pdf} \includegraphics[width=.5\linewidth]{esec_compare_pangsec10_cesr_3mA2.pdf} \\ \end{tabular} \caption{\label{fig:dip_spacing_sim} Simulation of the multipacting resonances, compared to measurements, for different secondary emission energies. Top left: chicane RFA, 1.4~mA, e$^+$; top right: chicane RFA, 3.4~mA, e$^+$; bottom left: CESR dipole RFA, 1~mA e$^-$; bottom right: CESR dipole RFA, 3.4~mA, e$^-$. All cases were done with 20 bunches, at beam energy 5.3~GeV.} \end{figure*} \subsection{\label{ssec:cyc_res_sim} Simulation of Cyclotron Resonances} Under the conditions of a cyclotron resonance, we expect to see an increase in the RFA signal, due to the increased energy of the cloud electrons. As discussed in Section~\ref{ssec:cyc_res}, we do indeed observe peaks in the RFA current in the aluminum chicane chamber, but in the TiN-coated chambers we observe dips. Fig.~\ref{fig:cyc_sim} shows a simulated magnetic field scan over a cyclotron resonance, in both an aluminum and TiN-coated chamber. Consistent with the data, we observe an increase in the aluminum chamber signal, but a decrease in the TiN chamber signal. Fig.~\ref{fig:cyc_eff} provides an explanation: since the additional energy in the resonant electrons comes from transverse beam kicks, these electrons will have a larger cyclotron radius, and thus a lower RFA efficiency (see Fig.~\ref{fig:dipole_rfa_eff}). Thus there are two competing effects: an increased cloud density due to a higher average SEY, and lower overall detector sensitivity. In the aluminum chamber (where the peak SEY is high) the former effect dominates, while in the coated chamber (where the peak SEY is low) the latter one does.
The net result is resonant peaks in the uncoated chamber, and dips in the coated one. \begin{figure*} \centering \begin{tabular}{c c} \includegraphics[width=.45\linewidth]{chicscan_slac4_pub.pdf} \includegraphics[width=.45\linewidth]{chicscan_slac3_highstat_pub.pdf} \\ \end{tabular} \caption[Simulation of cyclotron resonances]{\label{fig:cyc_sim} Simulation of cyclotron resonances observed by an RFA in aluminum (left) and TiN (right) chambers, 1x45x1~mA $e^+$, 4~ns, 5~GeV. Note that, as in Fig.~\ref{fig:chicane_scan}, the resonance appears as an increase in the aluminum chamber signal, but a decrease in the TiN chamber signal.} \end{figure*} \begin{figure} \centering \includegraphics[width=.6\linewidth]{cyclotron_efficiency_pub3.pdf} \\ \caption[Effect of cyclotron resonance on RFA efficiency]{\label{fig:cyc_eff} Effect of cyclotron resonance on RFA efficiency, 1x45x1~mA $e^+$, 4~ns, 5~GeV. Under the resonant field, the average electron cyclotron radius increases, resulting in a decrease in the average RFA efficiency.} \end{figure} \subsection{Simulation of Anomalous Enhancement in the Wiggler RFA} The main disadvantage of treating the RFA analytically (as described in Section~\ref{sec:modeling}) is that we cannot self-consistently model any interaction between the detector and the cloud, such as the trampoline effect described in Section~\ref{ssec:tramp}. Motivated by these measurements, we have incorporated into POSINST a model of the RFA geared toward reproducing the geometry of the RFAs installed in the wiggler vacuum chambers. The motion of the electrons within the RFA, including the electrostatic force from the retarding field, is tracked using a special add-on routine. The grid is modeled realistically, and secondary electrons can be produced there, with the same secondary yield model used for normal vacuum chamber collisions. The peak secondary electron yield and peak yield energy can be specified separately for the grid. 
Because the actual retarding field is included in the wiggler RFA model, the retarding voltage must be specified in the input file, and a separate simulation must be run for each voltage. Fig.~\ref{fig:int_model} shows the result of running this full particle tracking simulation, for the set of beam conditions corresponding to Fig.~\ref{fig:tramp_example}. Notably, the simulation reproduces the resonant enhancement seen in the data, at approximately the same voltage ($\sim$10~V for 14~ns spacing), and shows that the extra signal comes from the grid. \begin{figure} \centering \includegraphics*[width=0.6\textwidth]{tramp_examp_newpub.pdf} \caption[POSINST simulation showing resonant enhancement]{\label{fig:int_model} POSINST simulation showing resonant enhancement in a wiggler RFA, 1x45x1.2~mA $e^+$, 2.1~GeV, 14~ns, central collector. Compare to Fig.~\ref{fig:tramp_example}.} \end{figure} \section{Conclusions} Electron cloud buildup has been investigated in dipole field regions throughout CESR. Measurements of multipacting and cyclotron resonances have been made at different bunch spacings, bunch currents, and with electron and positron beams. A sophisticated analytical model for multipacting resonances has been developed, which takes into account secondary emission energy, as well as the time for kicked electrons to reach the chamber wall. This model is generally consistent with data, and has been further validated by computer simulations. An anomalous enhancement in the center-pole wiggler RFA signal has also been identified as an artificial multipacting resonance. Cyclotron resonances have been observed in the chicane RFAs, at field values that correspond well to basic theory. The question of these resonances sometimes appearing as dips, rather than peaks in the signal, has been explained as a detector efficiency effect. The electron cloud density is very sensitive to multipacting effects.
On resonance, we observe as much as a factor of 3 increase in electron cloud signal for positron beams, and several orders of magnitude for electron beams (though the measured signal for electron beams was always lower than for positrons). Because electron cloud is a potential limiting factor for high current, low emittance beams, avoiding these resonances is crucial for achieving emittance and stability goals in present and future accelerators. \begin{acknowledgments} This research was supported by NSF and DOE Contracts No. PHY-0734867, No. PHY-1002467, No. PHYS-1068662, No. DE-FC02-08ER41538, No. DE-SC0006505, and the Japan/U.S. Cooperation Program. The authors would like to thank D. Rubin, G. Dugan, J.A. Crittenden, J. Sikora, J. Livesey, M. Palmer, and K. Harkay for their helpful advice and suggestions; R. Schwartz, S. Santos, and S. Roy for assisting with the RFA measurements; and M. Furman at LBNL for his support with the POSINST simulation code. \end{acknowledgments} \bibliographystyle{medium}
\section{Introduction} Categorical approaches have been very successful in bringing topological ideas into other areas of mathematics. A major example is the category of sheaves over a topological space, from which the open sets of the space can be reconstructed as subobjects of the terminal object. More generally, in any topos such subobjects form a frame. Frames are lattices with properties capturing the behaviour of the open sets of a space, and form the basis of the constructive theory of pointfree topology~\cite{johnstone:stonespaces}. In this article we study this inherent notion of space in categories more general than those with cartesian products. Specifically, we argue that a semblance of this topological intuition remains in categories with mere tensor products. For the special case of toposes this exposes topological aspects in a different direction than considered previously. The aim of this article is to lay foundations for this (ambitiously titled) `tensor topology'. Boyarchenko and Drinfeld~\cite{boyarchenkodrinfeld:idempotent,boyarchenkodrinfeld:duality} have already shown how to equate the open sets of a space with certain morphisms in its monoidal category of sheaves of vector spaces. This forms the basis for our approach. We focus on certain subobjects of the tensor unit in a (braided) monoidal category that we call \emph{subunits}, fitting other treatments of tensor units~\cite{hines:coherenceselfsimilarity,kock:saavedra,fioreleinster:thompsonsgroupf}. For subunits to behave well one requires only that monomorphisms and tensor products interact well; we call a category \emph{firm} when it does so for subunits and \emph{stiff} when it does so globally, after~\cite{quillen:nonunitalrings}. In a firm category subunits always form a (meet-)semilattice. They may have further features, such as having joins that interact with the category through universal properties, and in the strongest case form a frame. 
We axiomatise such {\emph{locale-based}} categories. Aside from toposes, major noncartesian examples are categories of Hilbert modules, with subunits indeed given by open subsets of the base space. More generally, we show how to complete any stiff category {to a locale-based one}. There are at least two further perspectives on this study. First, it is analogous to tensor triangular geometry~\cite{balmer:tensortriangulargeometry}, a programme with many applications including algebraic geometry, stable homotopy theory, modular representation theory, and symplectic geometry~\cite{balmer:spectrum,balmerfavi:telescope,balmerkrausestevenson:smashing}. Disregarding triangulations and direct sums, we show that the natural home for many arguments is mere monoidal categories~\cite{hogancamp:idempotent}. We will also not require our categories to be cocomplete~\cite{brandenburg}. Second, just as Grothendieck toposes may be regarded as a categorification of frames~\cite{street:topos}, our results may be regarded as categorifying the study of central idempotents in a ring. Our algebraic examples include categories of firm nondegenerate modules over a firm nonunital commutative ring, or more generally, over a nonunital bialgebra in a braided monoidal category. \subsubsection*{Structure of article} We set out the basics of subunits in Section~\ref{sec:subunits}, showing that they form a semilattice in any firm category. Section~\ref{sec:examples} introduces our main examples: sheaves, Hilbert modules, modules over a ring, and order-theoretic examples including commutative quantales, generalising frames~\cite{resende:groupoidquantales}. In Section~\ref{sec:restriction} we introduce the notion of a morphism `restricting to' a subunit, and show how to turn any subunit into a unit of its restricted category. These \emph{restriction} functors together are seen to form a graded monad. 
We also show that subunits correspond to certain ideal subcategories and to certain comonads; although these give equivalent descriptions, we stick with subunits for the rest of the article to stay as close as possible to the theory of sheaves. Section~\ref{sec:simplicity} then proves that restriction forms a localisation of our category, and more broadly that one may localise to a category with only trivial subunits. Section~\ref{sec:support} introduces the notion of \emph{support} of a morphism, derived from the collection of subunits to which it restricts. This notion seems unrelated to earlier definitions requiring more structure~\cite{joyal:chevalleytarski,kockpitsch:pointfree}. In Sections~\ref{sec:spatial} and~\ref{sec:universaljoins} we characterise categories, such as toposes and categories of Hilbert modules, whose subunits come with suprema satisfying universal properties and so form a lattice, preframe, or frame; the latter being {locale-based} categories. Finally, Sections~\ref{sec:broad} and~\ref{sec:completion} show how to complete a given monoidal category to one with each kind of universal joins, including a {locale-based} category, in a universal way. This involves passing to certain presheaves, that we will call \emph{broad}, under Day convolution, as detailed in \ref{sec:day}; but this completion is not a sheafification for any Grothendieck topology. \subsubsection*{Further directions} This foundation opens various directions for further investigation. The {locale-based} completion of a stiff category is a stepping stone to future structure theorems for monoidal categories in the spirit of Giraud's theorem~\cite[C2.2.8]{johnstone:elephant}. We therefore also leave to future work more precise connections between tensor topology and topos theory, although the reader might find useful the analogy between objects of a monoidal category and (pre)sheaves. 
Applications to linear logic and computer science, as proposed in~\cite{enriquemolinerheunentull:space}, remain to be explored, including amending the graphical calculus for monoidal categories~\cite{selinger:graphicallanguages} with spatial information. It would be interesting to examine what happens to subunits under constructions such as Kleisli categories, Chu spaces, or the Int-construction~\cite{joyalstreetverity:traced}. One could ask how much of the theory carries over to skew monoidal categories~\cite{szlachanyi:skew}, as topology encompasses more than locale theory and one may be interested in `noncommutative' topologies. Similarly, one could investigate how these notions relate to partiality and restriction categories~\cite{grandis:cohesive}. Finally, it would be desirable to find global conditions on a category providing its subunits with further properties, such as being a compact frame or Boolean algebra, or with further structure, such as being a metric space. \acknowledgements{We thank Robin Cockett, Richard Garner, Marino Gran, Joachim Kock, Tim van der Linden, Bob Par{\'e}, John Power, Pedro Resende, Manny Reyes, Phil Scott, and Isar Stubbe for helpful discussions. } \section{Subunits}\label{sec:subunits} We work with braided monoidal categories~\cite{maclane:categorieswork}, and will sometimes suppress the coherence isomorphisms $\lambda_A \colon I \otimes A \to A$, $\rho_A \colon A \otimes I \to A$, $\alpha_{A,B,C} \colon A \otimes (B \otimes C) \to (A \otimes B) \otimes C$, and $\sigma_{A,B} \colon A \otimes B \to B \otimes A$, and often abbreviate identity morphisms $\id[A] \colon A \to A$ simply by $A$. Recall that a subobject of an object $A$ is an equivalence class of monomorphisms $s \colon S \rightarrowtail A$, where $s$ and $s'$ are identified if they factor through each other. Whenever we talk about a subobject, we will use a small letter $s$ for a representing monomorphism, and the corresponding capital $S$ for its domain. 
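As a concrete warm-up for the definition below, the decategorified picture mentioned in the introduction (subunits as central idempotents of a commutative ring) can be checked by hand; the following sketch is our own illustration, not taken from the text. The idempotents of $\mathbb{Z}/30$ form a semilattice under multiplication with unit $1$, one idempotent for each subset of the prime factors $\{2,3,5\}$, just as subunits will form a semilattice under $\otimes$:

```python
n = 30
# Central idempotents of Z/n: elements with e^2 = e (mod n)
idem = [e for e in range(n) if (e * e) % n == e]
print(idem)   # eight idempotents, matching the 2^3 subsets of {2, 3, 5}
```

Multiplication of idempotents is idempotent, commutative, and associative, with $1$ as unit, mirroring the meet of subunits established in Proposition~\ref{prop:semilattice} below.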
\begin{definition}\label{def:subunit} A \emph{subunit} in a braided monoidal category $\cat{C}$ is a subobject $s \colon S \rightarrowtail I$ of the tensor unit such that $s \otimes S \colon S \otimes S \to I \otimes S$ is an isomorphism\footnote{ Boyarchenko and Drinfeld call morphisms $s \colon S \to I$ for which $s \otimes S$ and $S \otimes s$ are isomorphisms \emph{open idempotents}~\cite{boyarchenkodrinfeld:idempotent}, with (the dual of) this notion going back implicitly at least to~\cite[Exercise~4.2]{kashiwara2005categories}. In~\cite{enriquemolinerheunentull:space} subunits were called \emph{idempotent subunits}.}. Write $\ISub(\cat{C})$ for the collection of subunits in $\cat{C}$. \end{definition} Note that, because $s$ is monic, if $s \otimes S$ is invertible then so is $S \otimes s$. \begin{remark}\label{rem:centrality} We could have generalised the previous definition to arbitrary monoidal categories by additionally requiring subunits to be central in the sense that there is a natural isomorphism $(-) \otimes S \Rightarrow S \otimes (-)$. Most results below still hold, but the bureaucracy is not worth the added generality here. Many results also remain valid when we require $s \otimes S$ not to be invertible but merely split epic, but for simplicity we stick with invertibility. \end{remark} We begin with some useful observations, mostly adapted from Boyarchenko and Drinfeld~\cite{boyarchenkodrinfeld:idempotent}. \begin{lemma}\label{lem:subunitsretract} Let $m \colon A \to B$ and $e \colon B \to A$ satisfy $e \circ m = A$, and $s \colon S \rightarrowtail I$ be a subunit. If $s \otimes B$ is an isomorphism, then so is $s \otimes A$. \end{lemma} \begin{proof} The diagram below commutes by bifunctoriality of $\otimes$. 
\[\begin{tikzcd}[column sep=3cm] S \otimes A \rar{S \otimes m} \dar{s \otimes A} & S \otimes B \rar{S \otimes e} \arrow{d}{s \otimes B}[swap]{\simeq} & S \otimes A \dar{s \otimes A} \\ I \otimes A \rar{I \otimes m} & I \otimes B \rar{I \otimes e} & I \otimes A \end{tikzcd}\] Both rows compose to the identity, and the middle vertical arrow is an isomorphism. Hence $s \otimes A$ is an isomorphism with inverse $(S \otimes e) \circ (s \otimes B)^{-1} \circ (I \otimes m)$. \end{proof} Recall that subobjects of a fixed object always form a partially ordered set, where $s \leq t$ if and only if $s$ factors through $t$. The following observation characterises this order in another way for subunits. \begin{lemma}\label{lem:subunitsorder} A subunit $s$ factors through another $t$ if and only if $S \otimes t$ is invertible, or equivalently, $t \otimes S$ is invertible. \end{lemma} \begin{proof} Suppose $s=t \circ f$. Set $g=(S \otimes f) \circ (S \otimes s)^{-1} \circ \rho_S^{-1} \colon S \to S \otimes T$. Then \[ \rho_S \circ (S \otimes t) \circ g = \rho_S \circ (S \otimes s) \circ (S \otimes s)^{-1} \circ {\rho_S}^{-1} = S\text. \] Idempotence of $t$ makes $S \otimes T \otimes t \colon S \otimes T \otimes T \to S \otimes T \otimes I$ an isomorphism. Hence, by the right-handed version of Lemma~\ref{lem:subunitsretract}, so is $S \otimes t$. A symmetric argument makes $t \otimes S$ invertible. Conversely, suppose $S \otimes t$ is an isomorphism. Because the diagram \[\begin{tikzcd}[column sep=3cm] S \otimes T \dar{S \otimes t} \rar{s \otimes T} & I \otimes T \dar{I \otimes t} \rar{\rho_T} & T \dar{t} \\ S \otimes I \rar{s \otimes I} & I \otimes I \rar{\rho_I} & I \end{tikzcd}\] commutes, the bottom row $s \circ \rho_S$ factors through the right vertical arrow $t$, whence so does $s$.
\end{proof} It follows from Lemma~\ref{lem:subunitsorder} that subunits are determined by their domain: if $s, s' \colon S \rightarrowtail I$ are subunits, then $s' = s \circ f$ for a unique $f$, which is an isomorphism. This justifies our convention to use the same letter for a subunit and its domain. For the theory to work smoothly, we impose a condition on the category. \begin{definition}\label{def:firm} A category is called \emph{firm} when it is braided monoidal and $s \otimes T \colon S \otimes T \to I \otimes T$ is a monomorphism whenever $s$ and $t$ are subunits. \end{definition} \begin{remark} The name \emph{firm} is chosen after Quillen~\cite{quillen:nonunitalrings}, who employs it as a natural condition for nonunital rings to make up for a missing unit. The previous definition extends the term to the category of nonunital rings; see {Proposition}~\ref{prop:modules} below. Note, however, that a firm category has genuine identity morphisms and a genuine tensor unit. Firmness is a very mild condition: {Proposition}~\ref{prop:PshFirmCounter} below gives a category that is not firm, but we know of no other `naturally occurring' categories that are not firm. \end{remark} \begin{lemma} Any co-closed braided monoidal category is firm. \end{lemma} \begin{proof} Each functor $(-) \otimes T$ is a right adjoint and so preserves limits and hence monomorphisms. Hence whenever $s$ is monic so is $s \otimes T$. \end{proof} In particular, a $*$-autonomous category is firm, as is a compact category. \begin{remark} In the following, we will completely disregard size issues, and pretend $\ISub(\cat{C})$ is a set, as in our main examples.
\end{remark} \begin{proposition}\label{prop:semilattice} The subunits in a firm category form a semilattice, with largest element $I$, meets given by \[ \big(s \colon S \rightarrowtail I\big) \wedge \big(t \colon T \rightarrowtail I\big) = \big(\lambda_I \circ (s \otimes t) \colon S \otimes T \rightarrowtail I\big)\text, \] and the usual order of subobjects. \end{proposition} \begin{proof} First observe that $s \otimes t = (I \otimes t) \circ (s \otimes T)$ is monic, because $I \otimes t = \lambda_I^{-1} \circ t \circ \lambda_T$ is monic, and $s \otimes T$ is monic by firmness. It is easily seen to be idempotent using the braiding, and hence it is a well-defined subunit. Next, we show that $\ISub(\cat{C})$ is an idempotent commutative monoid under $\wedge$ and $I$. The subunit $I$ is a unit as $\lambda_I \circ (I \otimes s) = s \circ \lambda_S$ represents the same subobject as $s$, and similarly $s \otimes I$ represents the same subobject as $s$ because $\rho_I=\lambda_I$. An analogous argument using coherence establishes associativity. For commutativity, use the braiding to observe that $s \otimes t$ and $t \otimes s$ represent the same subobject. For idempotence note that $s \otimes s$ and $s$ represent the same subobject because $\lambda_I \circ (s \otimes s) = s \circ \rho_S \circ (S \otimes s)$. Hence $\ISub(\cat{C})$ is a semilattice where $s$ is below $t$ if and only if $s = s \wedge t$. Finally, we show that this order is the same as the usual order of subobjects. On the one hand, if $s$ and $s \otimes t$ represent the same subobject, then $S \simeq S \otimes T$, making $S \otimes t$ an isomorphism and so $s \leq t$ by Lemma~\ref{lem:subunitsorder}.
\[ \begin{pic}[xscale=2,yscale=1] \node (s) at (0,0) {$S$}; \node (i) at (1,.5) {$I$}; \node (t) at (0,1) {$T$}; \draw[>->] (s) to node[below]{$s$} (i); \draw[>->] (t) to node[above]{$t$} (i); \draw[>->,dashed] (s) to (t); \end{pic} \qquad \iff \qquad \begin{pic}[xscale=2,yscale=1] \node (s) at (0,0) {$S$}; \node (i) at (1,0) {$I$}; \node (st) at (0,1) {$S \otimes T$}; \node (ii) at (1,1) {$I \otimes I$}; \draw[>->] (s) to node[below]{$s$} (i); \draw[>->] (st) to node[above]{$s \otimes t$} (ii); \draw[->] (ii) to node[right]{$\lambda_I$} node[left]{$\simeq$} (i); \draw[->,dashed] (s) to node[left]{$\simeq$} (st); \end{pic} \] On the other hand, if $s \leq t$ then by the same lemma $S \otimes t$ is an isomorphism with $s = \lambda_I \circ (s \otimes t) \circ (S \otimes t)^{-1} \circ \rho_S^{-1}$, and so both subobjects are equal. \end{proof} \section{Examples}\label{sec:examples} This section determines the subunits of four families of examples: cartesian categories, like sheaves over a topological space; commutative unital quantales; firm modules over a nonunital ring; and Hilbert modules over a nonunital commutative C*-algebra. \subsection*{Cartesian categories} We start with examples in which the tensor product is in fact a product. { \begin{proposition}\label{prop:cartesian} Any cartesian category $\cat{C}$ is firm, and $\ISub(\cat{C})$ consists of the subobjects of the terminal object. In particular, if $X$ is a topological space, then subunits in its category of sheaves $\mathrm{Sh}(X)$ correspond to open subsets of $X$~\cite[Corollary~2.2.16]{borceux:3}. \end{proposition} } \begin{proof} Let $s \colon S \rightarrowtail 1$ be a subterminal object. Let $\Delta = \langle S, S \rangle \colon S \to S \times S$ be the diagonal and write $\pi_i \colon A_1 \times A_2 \to A_i$ for the projections. Then $(s \times S) \circ \Delta = \langle s, S \rangle = \pi_2^{-1}$, and so $(s \times S) \circ \Delta \circ \pi_2 = 1 \times S$.
Now, the unique map $s$ of type $S \to 1$ is monic precisely when any two parallel morphisms into $S$ are equal. Hence $\pi_i \circ \Delta \circ \pi_2 \circ (s \times S) = \pi_i$, and so $\Delta \circ \pi_2 \circ (s \times S) = \langle \pi_1,\pi_2 \rangle = S \times S$. Thus $s \times S$ is automatically invertible. Finally, suppose $s_i \colon S_i \rightarrowtail 1$ for $i=1,2$ are monic, and that $f,g \colon A \to S_1 \times S_2$ satisfy $(s_1 \times s_2) \circ f = (s_1 \times s_2) \circ g$. Postcomposing with $\pi_i$ shows that $s_i \circ \pi_i \circ f = s_i \circ \pi_i \circ g$, whence $\pi_i \circ f = \pi_i \circ g$ and so $f=g$. This establishes firmness. \end{proof} \subsection*{Semilattices} Next we consider examples that are degenerate in another sense: firm categories in which there is at most one morphism between two given objects. { \begin{example}\label{ex:semilattice} Any semilattice $(L, \wedge, 1)$ forms a strict symmetric monoidal category: objects are $x \in L$, there is a unique morphism $x \to y$ if $x \leq y$, tensor product is given by meet, and tensor unit is $I = 1$. Every morphism is monic so this monoidal category is firm, and its (idempotent) subunits are $(L, \wedge, 1)$. \end{example} This gives the free firm category on a semilattice. More precisely, this construction is left adjoint to the functor from the category $\cat{Firm}$ of firm categories with (strong) monoidal subunit-preserving functors to the category $\cat{SLat}$ of semilattices and their homomorphisms, which takes subunits. \[\begin{pic} \node (l) at (0,0) {$\cat{SLat}$}; \node (r) at (4,0) {$\cat{Firm}$}; \draw[->] ([yshift=2mm]l.east) to ([yshift=2mm]r.west); \draw[draw=none] (l.east) to node{$\perp$} (r.west); \draw[<-] ([yshift=-2mm]l.east) to node[below]{$\ISub$} ([yshift=-2mm]r.west); \end{pic}\] } \subsection*{Quantales} We move on to more interesting examples, namely special kinds of semilattices like frames and quantales. 
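Before doing so, the semilattice example above is small enough to check mechanically. The following sketch is our own illustration (the semilattice of divisors of $36$ under $\gcd$, with top element $36$, is an arbitrary choice of finite instance); it confirms that in a finite meet-semilattice, regarded as a thin monoidal category, every element is a subunit.

```python
from math import gcd

# Toy instance of the semilattice example (our own choice): divisors of 36
# ordered by divisibility form a finite meet-semilattice with meet = gcd
# and top element 36, regarded as a thin strict monoidal category.
L = [d for d in range(1, 37) if 36 % d == 0]
top = 36          # the tensor unit I
meet = gcd        # the tensor product

def leq(x, y):
    # x <= y in the divisibility order iff x ∧ y = x
    return meet(x, y) == x

def is_subunit(s):
    # a subunit is a subobject s ↣ I such that s ⊗ S : S ⊗ S → I ⊗ S is an
    # isomorphism; in a thin category this reads: s <= top and s ∧ s = top ∧ s
    return leq(s, top) and meet(s, s) == meet(top, s)

print(all(is_subunit(s) for s in L))  # every element of L is a subunit
```

Since $x \wedge x = x = 1 \wedge x$ holds in any meet-semilattice with top element $1$, the check succeeds for every element, in line with the example.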
\begin{definition}\label{def:frame} A \emph{frame} is a complete lattice in which finite meets distribute over suprema. A morphism of frames is a function that preserves $\bigvee$, $\wedge$, and $1$. Frames and their morphisms form a category $\cat{Frame}$. \end{definition} The prototypical example of a frame is the collection of open sets of a topological space~\cite{johnstone:stonespaces}. Frames may be generalised as follows~\cite{rosenthal:quantales}. \begin{definition}\label{def:quantale} A \emph{quantale} is a monoid in the category of complete lattices. More precisely, it is a partially ordered set $Q$ that has all suprema, that has a multiplication $Q \times Q \to Q$, and that has an element $\qunit$, such that: \[ a \big(\bigvee b_i\big) = \bigvee a b_i, \qquad \big(\bigvee a_i\big) b = \bigvee a_i b, \qquad a \qunit = a = \qunit a. \] A morphism of quantales is a function that preserves $\bigvee$, $\cdot$, and $\qunit$. A quantale is \emph{commutative} when $ab = ba$ for all $a, b \in Q$. Commutative quantales and their morphisms {form a monoidal category} $\ensuremath{\cat{cQuant}}$. \end{definition} Equivalently, a frame is a commutative quantale in which the multiplication is idempotent {and whose unit is the largest element}. Any quantale may be regarded as a monoidal category, whose objects are elements of the quantale, where the (composition of) morphisms is induced by the partial order, and the tensor product is induced by the multiplication. This monoidal category is firm, but only braided if the quantale is commutative.
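In a quantale regarded as a thin category in this way, Definition~\ref{def:subunit} reduces to elementary algebra: a subunit is an element $q \leq \qunit$ with $qq = q$. This is easy to explore computationally; the sketch below is our own illustration, using the quantale of ideals of the multiplicative monoid $\mathbb{Z}/4$ (with $IJ = \{xy\}$, unions as suprema, and the whole monoid as unit) as an arbitrary finite instance.

```python
from itertools import combinations

# Toy commutative quantale (our own choice of instance): ideals of the
# commutative monoid (Z/4, ·), with multiplication IJ = {xy}, suprema
# given by unions, and unit the whole monoid.
M = range(4)
mul = lambda x, y: (x * y) % 4

def is_ideal(I):
    # an ideal is a subset closed under multiplication by arbitrary elements
    return all(mul(i, m) in I for i in I for m in M)

ideals = [frozenset(S)
          for r in range(5)
          for S in combinations(range(4), r)
          if is_ideal(frozenset(S))]

def prod(I, J):
    return frozenset(mul(x, y) for x in I for y in J)

# every ideal lies below the unit of the quantale, so the subunits are
# exactly the idempotent ideals
subunits = sorted(sorted(I) for I in ideals if prod(I, I) == I)
print(subunits)  # the ideal {0, 2} is excluded: {0,2}·{0,2} = {0}
```

Here not every element of the quantale is a subunit: the ideal $\{0,2\}$ squares to $\{0\}$, so it fails idempotence, unlike every element of the semilattice example above.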
{ \begin{proposition}\label{ex:quantale} Taking subunits is right adjoint to the inclusion: \[\begin{pic} \node (l) at (0,0) {$\cat{Frame}$}; \node (r) at (4,0) {$\ensuremath{\cat{cQuant}}$}; \draw[->] ([yshift=2mm]l.east) to ([yshift=2mm]r.west); \draw[draw=none] (l.east) to node{$\perp$} (r.west); \draw[<-] ([yshift=-2mm]l.east) to node[below]{$\ISub$} ([yshift=-2mm]r.west); \node (L) at (-1,-1) {$\{ q \in Q \mid q^2=q \leq \qunit \}$}; \node (R) at (3.4,-1) {$Q$}; \draw[|->] (R.west) to (L.east); \end{pic}\] \end{proposition} } \begin{proof} We first prove that $\ISub(Q)$ is a well-defined frame. If $q_i \in \ISub(Q)$, \[ (\bigvee q_i)^2 = \bigvee_{i,j} q_i q_j \leq \bigvee_{i,j} q_i \qunit = \bigvee_i q_i = \bigvee_i q_i q_i \leq \bigvee_{i,j} q_i q_j =(\bigvee q_i)^2 \] and $\bigvee q_i \leq \bigvee_i \qunit = \qunit$, so $\bigvee q_i \in \ISub(Q)$. Moreover, if $p,q \in \ISub(Q)$, then $pq$ is again below $\qunit$ and is idempotent by commutativity of $Q$. {It follows that $pq=p \wedge q$ in $\ISub(Q)$.} Since quantale multiplication distributes over suprema, so do finite meets. For the adjunction, observe that if $F$ is a frame and $Q$ is a commutative quantale, then $F= \ISub(F)$ and any morphism $F \to Q$ of quantales restricts to a unique morphism of frames $F \to \ISub(Q)$. \end{proof} \begin{remark} {Propositions}~\ref{prop:cartesian} and~\ref{ex:quantale} show that subunits do not capture all possible topological {content} in the traditional sense. For a Grothendieck topos they form the poset of internal truth values, which does not suffice to reconstruct the category, which may itself be said to embody a notion of topological space. For the commutative quantale, $[0,\infty]$ under multiplication and the usual order, the subunits form the two-element Boolean algebra, which is clearly far poorer than the quantale itself. 
\end{remark} \begin{example} If $M$ is a monoid, then its (right) ideals form a unital quantale $Q$ with multiplication $IJ=\{xy \mid x \in I,y \in J\}$ and unit $M$ itself. When $M$ is commutative, so is $Q$, and $\ISub(Q)$ consists of all ideals satisfying $I=II$. \end{example} { \begin{example} If $R$ is a commutative ring, then its additive subgroups form a unital commutative quantale $Q$ with multiplication $GH=\{x_1y_1+\cdots+x_ny_n\mid x_i \in G, y_i \in H\}$, supremum $\bigvee G_i = \{\sum_{j \in J} x_j \mid x_j \in G_j \text{ for } J \subseteq I \text{ finite}\}$, and unit $\mathbb{Z}1 = \{ 0, 1, -1, 1+1, -1-1, 1+1+1, -1-1-1, \ldots\}$. Then $G \leq H$ iff $G \subseteq H$ and $\ISub(Q)$ consists of those subgroups $G$ such that $G \subseteq G \cdot G$ and $G \subseteq \mathbb{Z}1$. The latter means that $G$ must be of the form $n\mathbb{Z}1$ for some $n \in \mathbb{N}$. The former then means that $n1=n^2y1$ for some $y \in \mathbb{Z}$. Thus $\ISub(Q) = \{n\mathbb{Z}1 \mid n \in \mathbb{N}, \exists y \in \mathbb{Z} \colon n1=n^2y1\}$. \end{example} } \subsection*{Modules} Another example of a monoidal category is that of modules over a ring. We have to take some pains to treat nonunital rings. { \begin{definition}\label{def:firmring} A commutative ring $R$ is \emph{idempotent} when $R$ equals $R^2=\{\sum_{i=1}^n r_i' r_i'' \mid r_i',r_i'' \in R\}$, \emph{firm} when its multiplication is a bijection $R \otimes_R R \to R$, and \emph{nondegenerate} when $r \in R$ vanishes as soon as $rs=0$ for all $s \in R$. \end{definition} Any unital ring is firm and nondegenerate, but examples also include infinite direct sums $\bigoplus_{n \in \mathbb{N}} R_n$ of unital rings $R_n$. Firm rings $R$ are idempotent. \begin{definition}\label{def:firmmodule} Let $R$ be a nondegenerate firm commutative ring. 
An $R$-module $E$ is \emph{firm} when the scalar multiplication is a bijection $E \otimes_R R \to E$~\cite{quillen:nonunitalrings}, and \emph{nondegenerate} when $x \in E$ vanishes as soon as $xr=0$ for all $r \in R$. Nondegenerate firm $R$-modules and linear maps form a monoidal category $\cat{FMod}_R$. \end{definition} If $R$ is unital, then every unital $R$-module is firm and nondegenerate.} { \begin{proposition}\label{prop:modules} The subunits in $\cat{FMod}_R$ correspond to \emph{nondegenerate firm idempotent ideals}: ideals $S \subseteq R$ that are idempotent as rings, and nondegenerate and firm as $R$-modules. Any ideal that is unital as a ring is a nondegenerate firm idempotent ideal. The category $\cat{FMod}_R$ is firm. \end{proposition} } \begin{proof} Monomorphisms are injective by nondegeneracy, so every subunit is a nondegenerate firm $R$-submodule of $R$, that is, a nondegenerate firm ideal. Because the inclusion $S \otimes S \to R \otimes S$ is surjective and $S$ is firm, the map $S \otimes S \to S$ given by $s' \otimes s''\mapsto s's''$ is surjective. Thus $S$ is idempotent. Conversely, let $S$ be a nondegenerate firm idempotent ideal of $R$. The inclusion $S \otimes S \to R \otimes S$ is surjective, as $r \otimes s \in R \otimes S$ can be written as $r \otimes s's'' = r s' \otimes s'' \in S \otimes S$. Hence $S$ is a subunit. Next suppose the ideal $S$ is unital, where possibly $1_S \neq 1_R$ even when $R$ is unital. Then $S \otimes R \to S$ given by $s \otimes r \mapsto sr$ is bijective: surjective as $1_S \otimes s \mapsto 1_S s = s$; and injective as $s \otimes r = 1_S \otimes sr = 1_S \otimes 0 = 0$ if $sr=0$. Hence $S$ is firm and nondegenerate. Any $s \in S$ can be written as $s=s 1_S \in S^2$, so $S$ is idempotent. Finally, to see that the category is firm, let $S,T \subseteq R$ be nondegenerate firm idempotent ideals. We need to show that the map $S \otimes T \to R \otimes T$ given by $s \otimes t \mapsto s \otimes t$ is injective.
Because $T$ is firm, it suffices that multiplication $S \otimes T \to S$ given by $s \otimes t \mapsto st$ is injective, which holds because $S$ is firm. \end{proof} The previous proposition generalises to commutative nonunital bialgebras in any symmetric monoidal category. \begin{example}\label{ex:modulesoverbialgebra} Let $\cat{C}$ be a symmetric monoidal category. A \emph{commutative nonunital bialgebra} in $\cat{C}$ is an object $M$ together with an associative multiplication $\mu \colon M \otimes M \to M$ and a comonoid $\delta \colon M \to M \otimes M$, $\varepsilon \colon M \to I$, for which $\mu$ and $\delta$ are commutative and satisfy both $ \varepsilon \circ \mu = \varepsilon \otimes \varepsilon$ and the bialgebra law: \[ (\mu \otimes \mu) \circ (M \otimes \sigma \otimes M) \circ (\delta \otimes \delta) = \delta \circ \mu \] We define a braided monoidal category $\cat{Mod}_M$ where objects are $\alpha \colon M \otimes A \to A$ satisfying $\alpha \circ (\mu \otimes A) = \alpha \circ (M \otimes \alpha)$, with morphisms and $\otimes$ all defined as for modules over a (unital) commutative bialgebra (see e.g.~\cite[2.2,2.3]{hasegawa2010bialgebras}). The category $\cat{Mod}_M$ is firm when $\cat{C}$ is, and its subunits correspond to \emph{firm ideals}: monomorphisms $s \colon S \rightarrowtail M$ admitting a (necessarily unique, dashed) morphism making the diagram \[\begin{tikzpicture}[xscale=3] \node (tl) at (0,1) {$M \otimes S$}; \node (tr) at (1,1) {$M \otimes M$}; \node (bl) at (0,0) {$S$}; \node (br) at (1,0) {$M$}; \draw[->] (tl) to node[above]{$M \otimes s$} (tr); \draw[->] (tr) to node[right]{$\mu$} (br); \draw[>->] (bl) to node[below]{$s$} (br); \draw[->,dashed] (tl) to (bl); \end{tikzpicture}\] commute, and for which $\varepsilon \otimes S$ and $s \otimes S$ are isomorphisms. \end{example} We next instantiate the previous example in two special cases: in the monoidal categories of semilattices and of quantales. \begin{example} Any semilattice $M$ is a commutative nonunital bialgebra in $\cat{SLat}$.
In $\cat{Mod}_M$ objects are semilattices $A$ with functions $\alpha \colon M \times A \to A$ which respect $\wedge$ in each argument and satisfy $\alpha(x \wedge y,a)=\alpha(x,\alpha(y,a))$. Subobjects of the tensor unit correspond to subsets $S \subseteq M$ which are ideals under $\wedge$, or equivalently downward-closed. Because $x \otimes y = (x \wedge x) \otimes y = x \otimes (x \wedge y) \in S \otimes S$, we have $S \otimes S = S \otimes M$, and every subobject of the tensor unit is a subunit. \end{example} \begin{example} Any commutative unital quantale $M$ is a commutative nonunital bialgebra in the category of complete lattices. $\cat{Mod}_M$ then consists of complete lattices $A$ with functions $\alpha \colon M \times A \to A$ preserving arbitrary suprema in each argument and with $\alpha(x, \alpha(y,a)) = \alpha(xy, a)$. Subobjects of the tensor unit are {submodules} $S \subseteq M$. Subunits are furthermore such that for every $r \in S$ and $x \in M$ there exist $s_i,t_i \in S$ with $r \otimes x = \bigvee s_i \otimes t_i$. For example, if $M=[0,\infty]$ under addition with the opposite ordering, subunits include $\emptyset,\{\infty\}$, $\{0,\infty\}$, $(0,\infty]$, and $[0,\infty]$. \end{example} \subsection*{Hilbert modules} The above examples of module categories were all algebraic in nature. Our next suite of examples is more analytic. \begin{definition}\label{def:hilbertmodule} Fix a locally compact Hausdorff space $X$. It induces a commutative C*-algebra \[ C_0(X)=\{f \colon X \to \mathbb{C} \text{ continuous} \mid \forall \varepsilon>0\; \exists K \subseteq X \text{ compact} \colon |f(X \setminus K)|<\varepsilon \}\text.
\] A \emph{Hilbert module} is a $C_0(X)$-module $A$ with a map $\inprod{-}{-} \colon A \times A \to C_0(X)$ that is $C_0(X)$-linear in the second variable, satisfies $\inprod{a}{b}=\inprod{b}{a}^*$, and $\inprod{a}{a}\geq 0$ with equality only if $a=0$, and makes $A$ complete in the norm $\|a\|^2_A = \sup_{x \in X} \inprod{a}{a}(x)$. A function $f \colon A \to B$ between Hilbert $C_0(X)$-modules is \emph{bounded} when $\|f(a)\|_{B} \leq {C}\|a\|_A$ for some {constant $C \in \mathbb{R}$; the infimum of such constants is written $\|f\|$}. Here we will focus on {\emph{nonexpansive maps}}, i.e.\ those bounded functions with $\|f\| \leq 1$. \end{definition} Hilbert modules were first introduced by Kaplansky~\cite{kaplansky:modules} and studied by many others, including Rieffel~\cite{rieffel:representations} {and} Kasparov~\cite{kasparov:hilbertmodules}. For more information we refer to~\cite{lance:hilbertmodules}. The category $\HModc$ of Hilbert $C_0(X)$-modules and {nonexpansive} $C_0(X)$-linear maps is not abelian, not complete, and not cocomplete~\cite{heunen:embedding}. Nevertheless, $\HModc$ is symmetric monoidal~\cite[Proposition~2.2]{heunenreyes:frobenius}. Here $A \otimes B$ is constructed as follows: consider the algebraic tensor product of $C_0(X)$-modules, and complete it to a Hilbert module with inner product $\inprod{a \otimes b}{a' \otimes b'}$ given by $\inprod{a}{a'} \inprod{b}{b'}$. The tensor unit is $C_0(X)$ itself, which forms a Hilbert $C_0(X)$-module under the inner product $\inprod{f}{g}(x) = f(x)^* g(x)$. { \begin{proposition}\label{prop:hilbertmodules} $\HModc$ is firm, and its subunits are \begin{equation}\label{eq:subunithilbertmodules} \{ f \in C_0(X) \mid f(X \setminus U)=0 \} \simeq C_0(U) \end{equation} for open subsets $U \subseteq X$.
\end{proposition} } \begin{proof} If $U$ is an open subset of $X$, we may indeed identify $C_0(U)$ with the closed ideal of $C_0(X)$ in~\eqref{eq:subunithilbertmodules}: if $f \in C_0(U)$, then its extension by zero on $X \setminus U$ is in $C_0(X)$, and conversely, if $f \in C_0(X)$ is zero outside $U$, then its restriction to $U$ is in $C_0(U)$. Moreover, note that the canonical map $C_0(X) \otimes C_0(X) \to C_0(X)$ is always an isomorphism as $C_0(X)$ is the tensor unit, and hence the same holds for $C_0(U)$. Thus $C_0(U)$ is a subunit in $\HModc$. For the converse, let $s \colon S \rightarrowtail C_0(X)$ be a subunit in $\HModc$. We will show that $s(S)$ is a closed ideal in $C_0(X)$, and therefore of the form $C_0(U)$ for some open subset $U \subseteq X$. It is an ideal because $s$ is $C_0(X)$-linear. To see that it is closed, let $g \in s(S)$. Then \begin{align*} \|g\|_S^4 & = \| \inprod{g}{g}_S^2 \|_{C_0(X)} = \| \inprod{g}{g}_S \inprod{g}{g}_S\|_{C_0(X)} \\ & = \| \inprod{g \otimes g}{g \otimes g}_{C_0(X)} \|_{C_0(X)} = \|g \otimes g\|_S^2 \\ & \leq \|\rho_S^{-1}\|^2 \|g^2\|_S = \|\rho_S^{-1}\|^2 \| \inprod{g}{g}_S g^* g \|_{C_0(X)} \\ & \leq \|\rho_S^{-1}\|^2 \|g\|_S^2 \|g\|_{C_0(X)}^2 \end{align*} and therefore $\|g\|_S \leq \| \rho_S^{-1}\| \, \|g\|_{C_0(X)}$. Because $s$ is bounded, it is thus an equivalence of normed spaces between $(S,\|-\|_S)$ and $(s(S), \|-\|_{C_0(X)})$. Since the former is complete, so is the latter. Firmness follows from {Proposition}~\ref{prop:hilbertmodules:localadjunction} later. \end{proof} The category $\HModc$ can be adapted to form a dagger category by considering (not necessarily {nonexpansive}) bounded maps between Hilbert modules that are \emph{adjointable}. In that case only clopen subsets of $X$ correspond to subunits~\cite[Lemma~3.3]{heunenreyes:frobenius}. Another way to view a Hilbert $C_0(X)$-module is as a \emph{field of Hilbert spaces} over $X$.
Intuitively, this assigns to each $x \in X$ a Hilbert space that `varies continuously' with $x$. In particular, for each $x \in X$ there is a monoidal functor $\HModc \to \cat{Hilb}_{\mathbb{C}}$. For details, see~\cite{heunenreyes:frobenius}. This perspective may be useful in reading Section~\ref{sec:restriction} later. Not every subobject of the tensor unit in $\HModc$ is induced by an open subset $U \subseteq X$, and so the condition of Definition~\ref{def:subunit} is not redundant. { \begin{proposition} Let $X=[0,1]$. If $f \in C_0(X)$, write $\hat{f} \in C_0(X)$ for the map $x \mapsto xf(x)$. Then $S=\{ \hat{f} \mid f \in A\}$ is a subobject of $A=C_0(X)$ in $\HModc$ under $\inprod{\hat{f}}{\hat{g}}_S = \inprod{f}{g}_A$, which is not closed in the norm $\|-\|_A$. \end{proposition} } \begin{proof} Clearly $S$ is a $C_0(X)$-module, and $\inprod{-}{-}_S$ is sesquilinear. Moreover $S$ is complete: $\hat{f_n}$ is a Cauchy sequence in $S$ if and only if $f_n$ is a Cauchy sequence in $A$, in which case it converges in $A$ to some $f$, and so $\hat{f_n}$ converges to $\hat{f}$ in $S$. Thus $S$ is a well-defined Hilbert module. The inclusion $S \hookrightarrow A$ is bounded and injective, and hence a well-defined monomorphism. In fact, $A$ is a C*-algebra, and $S$ is an ideal. The closure of $S$ in $A$ is the closed ideal $\{f \in C_0(X) \mid f(0)=0\}$, corresponding to the open subset $(0,1] \subseteq X$. It contains the function $x \mapsto \sqrt{x}$ while $S$ does not, and so $S$ is not closed. \end{proof} \section{Restriction}\label{sec:restriction} Regarding subunits as open subsets of an (imagined) base space, the idea of restriction to such an open subset makes sense. For example, if $U$ is an open subset of a locally compact Hausdorff space $X$, then any $C_0(X)$-module induces a $C_0(U)$-module, and any sheaf over $X$ induces a sheaf over $U$. More generally, any subunit in a topos induces an open subtopos.
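For intuition, restriction is easy to simulate in a toy discrete model. The sketch below is our own illustration, not a construction from the text: $X$ is a finite discrete space, a `module' assigns a scalar fibre to each point, and the subunit determined by a subset $U \subseteq X$ acts by annihilating fibres outside $U$ (the name \texttt{restrict} is ours).

```python
# Toy discrete model of restriction (our own illustration): X is a finite
# discrete space, a "module" A is a dict assigning a scalar fibre to each
# point, and restricting to a subset U kills the fibres outside U.
X = frozenset({0, 1, 2, 3})

def restrict(A, U):
    """Restrict the module A (a dict point -> fibre) to the subset U of X."""
    return {x: (v if x in U else 0.0) for x, v in A.items()}

A = {0: 1.5, 1: -2.0, 2: 0.25, 3: 4.0}
U, V = {0, 1, 2}, {1, 2, 3}

# restricting along the largest subunit (all of X) does nothing ...
assert restrict(A, X) == A
# ... and successive restrictions compose to restriction to the
# intersection, matching the meet of subunits of Proposition [semilattice]
assert restrict(restrict(A, U), V) == restrict(A, U & V)
print(restrict(A, U & V))  # only the fibres over U ∩ V = {1, 2} survive
```

The two assertions mirror, in this degenerate setting, the unit and multiplication of the graded structure that restriction carries in general.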
This section shows that this restriction behaves well in any monoidal category. \begin{definition}\label{def:supportin} A morphism $f \colon A \to B$ \emph{restricts to} a subunit $s \colon S \to I$ when it factors through $\lambda_B \circ (s \otimes B)$. \[\begin{tikzpicture}[xscale=3,yscale=1.25] \node (tl) at (0,1) {$A$}; \node (tr) at (1,1) {$B$}; \node (bl) at (0,0) {$S \otimes B$}; \node (br) at (1,0) {$I \otimes B$}; \draw[->] (tl) to node[above]{$f$} (tr); \draw[->,dashed] (tl) to (bl); \draw[->] (bl) to node[below]{$s \otimes B$} (br); \draw[->] (br) to node[right]{$\lambda_B$} (tr); \end{tikzpicture}\] \end{definition} As a special case, we can consider to which subunits identity morphisms restrict~\cite[Lemma~1.3]{boyarchenkodrinfeld:idempotent}. \begin{proposition}\label{prop:supportofobject} The following are equivalent for an object $A$ and subunit $s$: \begin{enumerate} \item[(a)] $s \otimes A \colon S \otimes A \to I \otimes A$ is an isomorphism; \item[(b)] there is an isomorphism $S \otimes A \simeq A$; \item[(c)] there is an isomorphism $S \otimes B \simeq A$ for some object $B$; \item[(d)] the identity $A \to A$ restricts to $s$. \end{enumerate} \end{proposition} \begin{proof} Trivially (a) $\implies$ (b) $\implies$ (c). For (c) $\implies$ (d): because $s$ is a subunit, $s \otimes S \otimes B$ is an isomorphism, so if $S \otimes B \simeq A$ then also $s \otimes A$ is an isomorphism by Lemma~\ref{lem:subunitsretract}. For (d) $\implies$ (a): if $A$ factors through $s \otimes A$, then because $s$ is a subunit $s \otimes S \otimes A$ is an isomorphism, and hence so is $s \otimes A$ by Lemma~\ref{lem:subunitsretract}. \end{proof} The following observation is simple, but effective in applications~\cite{enriquemolinerheunentull:space}. \begin{lemma}\label{lem:supportin} Let $s \colon S \to I$ and $t \colon T \to I$ be subunits in a firm category.
If $f$ restricts to $s$, and $g$ restricts to $t$, then $f \circ g$ and $f \otimes g$ restrict to $s \wedge t$. \end{lemma} \begin{proof} Straightforward. \end{proof} In particular, if $A$ or $B$ restricts to a subunit $s$, then so does any map $A \to B$. It also follows that restriction respects retractions: if $e \circ m = \id$, then $m$ restricts to $s$ if and only if $e$ does. \begin{definition}\label{def:restriction} Let $s$ be a subunit in a monoidal category $\cat{C}$. Define the \emph{restriction} of $\cat{C}$ to $s$, denoted by $\cat{C}\restrict{s}$, to be the full subcategory of $\cat{C}$ of objects $A$ for which $s \otimes A$ is an isomorphism. \end{definition} \begin{proposition}\label{prop:restrictioncoreflective} If $s$ is a subunit in a monoidal category $\cat{C}$, then $\cat{C}\restrict{s}$ is a coreflective monoidal subcategory of $\cat{C}$. \[\begin{pic} \node (l) at (0,0) {$\cat{C}$}; \node (r) at (3,0) {$\cat{C}\restrict{s}$}; \node at (1.5,0) {$\top$}; \draw[->] ([yshift=2mm]l.east) to ([yshift=2mm]r.west); \draw[left hook->] ([yshift=-2mm]r.west) to ([yshift=-2mm]l.east); \end{pic}\] The right adjoint $\cat{C} \to \cat{C}\restrict{s}$, given by $A \mapsto S \otimes A$ and $f \mapsto S \otimes f$, is also called \emph{restriction} to $s$. \end{proposition} \begin{proof} First, if $A \in \cat{C}$, note that $S \otimes A$ is indeed in $\cat{C}\restrict{s}$ because $s \otimes S \otimes A$ is an isomorphism as $s$ is a subunit. Similarly, $\cat{C}\restrict{s}$ is a monoidal subcategory of $\cat{C}$. Finally, there is a natural bijection \begin{align*} \cat{C}(A,B) &\simeq \cat{C}\restrict{s}(A,S\otimes B)\\ f &\mapsto (S\otimes f) \circ (s \otimes A)^{-1} \circ \lambda_A^{-1}\\ \lambda_B \circ (s \otimes B) \circ g & \mapsfrom g \end{align*} for $A \in \cat{C}\restrict{s}$ and $B \in \cat{C}$. So restriction is right adjoint to inclusion.
For monoidality, see~\cite[Theorem~5]{jacobsmandemaker:coreflections}; both functors are (strong) monoidal when $\cat{C}\restrict{s}$ has tensor unit $S$ and tensor product inherited from $\cat{C}$. \end{proof} \begin{remark} The previous result motivates our terminology; a subunit $s$ in $\cat{C}$ is precisely a subobject of $I$ with the property that it may form the tensor unit of a monoidal subcategory of $\cat{C}$, namely $\cat{C}\restrict{s}$. \end{remark} { \begin{example} Let $L$ be a semilattice, regarded as a firm category as in {Example}~\ref{ex:semilattice}. For a subset $U \subseteq L$ we define $\ensuremath{\mathop{\downarrow}} U = \{x \in L \mid x \leq u \text{ for some }u \in U\}$. Then for $s \in L$, the restriction $\cat{C} \restrict{s}$ is the subsemilattice $\ensuremath{\mathop{\downarrow}} s = \ensuremath{\mathop{\downarrow}} \{s \}$. \end{example} } { \begin{example} Let $L$ be a frame. A subunit in $\mathrm{Sh}(L)$ is just an element $s \in L$, and a morphism $f \colon A \Rightarrow B$ restricts to it precisely when $A(x)=\emptyset$ for $x \not\leq s$. \end{example} } { \begin{proposition} Let $S$ be a nondegenerate firm idempotent ideal of a nondegenerate firm commutative ring $R$. Then $\cat{FMod}_R\restrict{S}$ is monoidally equivalent to $\cat{FMod}_S$. \end{proposition} } \begin{proof} Send $A$ in $\cat{FMod}_R\restrict{S}$ to $A$ with $S$-module structure $a \cdot s := as$, and send an $R$-linear map $f$ to $f$. This defines a functor $\cat{FMod}_R\restrict{S} \to \cat{FMod}_S$. In the other direction, a firm $S$-module $B \simeq B \otimes_S S$ has firm $R$-module structure $(b \otimes s)\cdot r:=b \otimes (sr)$ because $S$ is idempotent, and if $g$ is an $S$-linear map then $g \otimes_S S$ is $R$-linear. This defines a functor $\cat{FMod}_S \to \cat{FMod}_R\restrict{S}$. Composing both functors sends a firm $R$-module $A$ to $A \otimes_S S \simeq A \otimes_R R \simeq A$, and a firm $S$-module $B$ to $B \otimes_S S \simeq B$.
\end{proof} { \begin{proposition}\label{prop:hilbertmodules:localadjunction} For any Hilbert $C_0(X)$-module $A$ and subunit $C_0(U)$ induced by an open subset $U \subseteq X$, the module $A \otimes C_0(U)$ is isomorphic to its submodule \[ A|_U = \{a \in A \mid \inprod{a}{a} \in C_0(U) \}\text, \] viewing $C_0(U)$ as a closed ideal of $C_0(X)$ via~\eqref{eq:subunithilbertmodules}. Hence in $\HModc$ a morphism $f \colon A \to B$ restricts to this subunit when $\inprod{f(a)}{f(a)} \in C_0(U)$ for all $a \in A$. \end{proposition} } \begin{proof} Write $S=C_0(U)$. We first prove that $A \in \HModc\restrict{S}$ if and only if $|a| \in C_0(U)$ for all $a \in A$, where $|a|^2 = \langle a, a \rangle$. On the one hand, if $a \in A$ and $f \in S$ then $|a \otimes f|(X \setminus U) = |a| |f| (X \setminus U)=0$. Therefore $|a| \in C_0(U)$ for all $a \in A \otimes S$. Because the canonical map $A \otimes S \to A$ is invertible, $|a| \in C_0(U)$ for all $a \in A$. On the other hand, suppose that $|a| \in C_0(U)$ for all $a \in A$. We are to show that the morphism $A \otimes S \to A$ given by $a \otimes f \mapsto af$ is bijective. To see injectivity, let $f \in S$ and $a \in A$, and suppose that $af=0$. Then $|a| \cdot |f|=|af|=0$, so for all $x \in U$ either $|a|(x)=0$ or $f(x)=0$. So $|a \otimes f|$ vanishes on $U$, and also on $X \setminus U$ because $f \in C_0(U)$; hence $a \otimes f=0$. To see surjectivity, let $a \in A$. Then $|a|(x)=0$ for all $x \in X \setminus U$. So $a=\lim af_n$ for an approximate unit $f_n$ of $S$. But that means $a$ is the image of $\lim a \otimes f_n$. \end{proof} \begin{remark} Restricting $\HModc$ to the subunit $C_0(U)$ for an open subset $U \subseteq X$ gives the full subcategory of modules $A$ with $A = A|_U$. This is nearly, but not quite, $\cat{Hilb}_{C_0(U)}$: any such module also forms a $C_0(U)$-module, but conversely there is no obvious way to extend the action of scalars on a general $C_0(U)$-module to make it a $C_0(X)$-module.
There is a so-called \emph{local} adjunction between $\HModc\restrict{C_0(U)}$ and $\cat{Hilb}_{C_0(U)}$, which is only an adjunction when $U$ is clopen~\cite[Proposition~4.3]{clarecrisphigson:adjoint}. \end{remark} Above we restricted along one individual subunit $s$. Next we investigate the structure of the family of these functors when $s$ varies. \begin{definition}\label{def:gradedmonad} \cite{fujiikatsumatamellies:gradedmonads} Let $\cat{C}$ be a category and $(\cat{E}, \otimes, 1)$ a monoidal category. Denote by $[\cat{C}, \cat{C}]$ the monoidal category of endofunctors of $\cat{C}$ with $F\otimes G = G\circ F$. An \emph{$\cat{E}$-graded monad} on $\cat{C}$ is a lax monoidal functor $T\colon \cat{E} \rightarrow [\cat{C}, \cat{C}]$. More concretely, an $\cat{E}$-graded monad consists of: \begin{itemize} \item a functor $T\colon \cat{E} \rightarrow [\cat{C}, \cat{C}]$; \item a natural transformation $\eta \colon \id[\cat{C}] \Rightarrow T(1)$; \item a natural transformation $\mu_{s,t}\colon T(t)\circ T(s) \rightarrow T(s\otimes t)$ for all $s,t$ in $\cat{E}$; \end{itemize} making the following diagrams commute for all $r,s,t$ in $\cat{E}$. 
\begin{align*} \begin{pic}[xscale=2.5, yscale=1.25] \node (TsTtTu) at (1,2) {$T(t)\circ T(s)\circ T(r)$}; \node (TstTu) at (0,1) {$T(t)\circ T(r\otimes s)$}; \node (Tstu1) at (0,0) {$T((r\otimes s)\otimes t)$}; \node (Tstu2) at (2,0) {$T(r\otimes (s\otimes t))$}; \node (TsTtu) at (2,1) {$T(s\otimes t)\circ T(r)$}; \draw[->] (TsTtTu) to node[left, xshift=-2mm]{$\mu_{r,s}\otimes \id[T(t)]$} (TstTu); \draw[->] (TstTu) to node[left]{$\mu_{r\otimes s,t}$} (Tstu1); \draw[->] (Tstu1) to node[above]{$T(\alpha_{r,s,t})$} (Tstu2); \draw[->] (TsTtu) to node[right]{$\mu_{r,s\otimes t}$} (Tstu2); \draw[->] (TsTtTu) to node[right, xshift=1mm]{$\id[T(r)]\otimes\mu_{s,t}$} (TsTtu); \end{pic} \end{align*} \begin{align*} \begin{pic}[xscale=3.5,yscale=.8] \node (idc Ts) at (0,1.5) {$T(s)\circ \id[\cat{C}]$}; \node (Ts) at (0,0) {$T(s)$}; \node (T1s) at (1.5,0) {$T(1\otimes s)$}; \node (T1Ts) at (1.5,1.5) {$T(s)\circ T(1)$}; \draw[-, double] (idc Ts) to node[left]{} (Ts); \draw[->] (idc Ts) to node[below]{$\eta\otimes \id[T(s)]$} (T1Ts); \draw[->] (T1Ts) to node[right]{$\mu_{1,s}$} (T1s); \draw[->] (T1s) to node[below]{$T(\lambda_s)$} (Ts); \end{pic} \\ \begin{pic}[xscale=3.5,yscale=.8] \node (Ts idc) at (0,1.5) {$\id[\cat{C}]\circ T(s)$}; \node (Ts) at (0,0) {$T(s)$}; \node (Ts1) at (1.5,0) {$T(s\otimes 1)$}; \node (TsT1) at (1.5,1.5) {$T(1)\circ T(s)$}; \draw[-, double] (Ts idc) to node[left]{} (Ts); \draw[->] (Ts idc) to node[below]{$\id[T(s)]\otimes \eta$} (TsT1); \draw[->] (TsT1) to node[right]{$\mu_{s,1}$} (Ts1); \draw[->] (Ts1) to node[below]{$T(\rho_s)$} (Ts); \end{pic} \end{align*} \end{definition} \begin{theorem}\label{thm:gradedmonad} Let $\cat{C}$ be a monoidal category. Restriction is a monad graded over the subunits, when we do not identify monomorphisms representing the same subunit.
More precisely, it is an $\cat{E}$-graded monad, where $\cat{E}$ has as objects monomorphisms $s \colon S \rightarrowtail I$ in $\cat{C}$ with $s \otimes S$ an isomorphism, and as morphisms $f \colon s \to t$ those $f$ in $\cat{C}$ with $s = t \circ f$. \end{theorem} \begin{proof} The functor $\cat{E} \to [\cat{C},\cat{C}]$ sends $s \colon S \rightarrowtail I$ to $(-) \otimes S$, and $f$ to the natural transformation $\id[(-)] \otimes f$. The natural transformation $\eta_E \colon E \to E \otimes I$ is given by $\rho_E^{-1}$. The family of natural transformations $\mu_{s,t} \colon ((-)\otimes S)\otimes T \rightarrow (-)\otimes(S\otimes T)$ is given by $\alpha_{(-),S,T}$. The associativity and unitality diagrams then commute by coherence. \end{proof} We end this section by giving two characterisations of subunits in perhaps more familiar terms. The first characterisation is in terms of idempotent comonads. \begin{definition} A \emph{restriction comonad} on a monoidal category $\cat{C}$ is a monoidal comonad $F \colon \cat{C} \to \cat{C}$ such that: \begin{itemize} \item its comultiplication $\delta \colon F \Rightarrow F^2$ is invertible; \item its counit $\varepsilon \colon F \Rightarrow \id[\cat{C}]$ has a monic component $\varepsilon_I \colon F(I) \rightarrowtail I$ at the tensor unit. \end{itemize} \end{definition} \begin{proposition}\label{prop:restrictioncomonads} Let $\cat{C}$ be a braided monoidal category. There is a bijection between subunits in $\cat{C}$ and restriction comonads on $\cat{C}$. \end{proposition} \begin{proof} If $s \colon S \rightarrowtail I$ is a subunit, then $F(A)=S \otimes A$ defines a comonad by Proposition~\ref{prop:restrictioncoreflective}. Its comultiplication is given by $\delta_A = (\lambda_{S \otimes A} \circ (s \otimes S \otimes A))^{-1}$, which is an isomorphism by definition of subunit. Its counit is given by $\varepsilon_A = \lambda_A \circ (s \otimes A)$.
Because $\rho_I=\lambda_I$, its component $\varepsilon_I = \lambda_I \circ (s \otimes I) = \rho_I \circ (s \otimes I) = s \circ \rho_S$ is monic. Conversely, if $F$ is a restriction comonad, then $\varepsilon_I \colon F(I) \rightarrowtail I$ is a subobject of the tensor unit. Writing $\varphi_{A,B} \colon A \otimes F(B) \to F(A \otimes B)$ for the coherence maps, and $\psi_{A,B} = F(\sigma) \circ \varphi_{B,A} \circ \sigma \colon F(A) \otimes B \to F(A \otimes B)$ for its induced symmetric version, the inner cells of the following diagram commute: \[\begin{pic}[xscale=4,yscale=1.2] \node (bl) at (0,0) {$F^2(I \otimes I)$}; \node (b) at (1,0) {$F(I \otimes I)$}; \node (br) at (2,0) {$F(I \otimes I)$}; \node (m) at (1,1) {$F^2(I \otimes I)$}; \node (l) at (0,2) {$F(F(I) \otimes I)$}; \node (t) at (1,2) {$F(F(I) \otimes I)$}; \node (tl) at (0,3) {$F(I) \otimes F(I)$}; \node (tr) at (2,3) {$F(I) \otimes I$}; \draw[-, double distance=.75mm] (b) to (br); \draw[-, double distance=.75mm] (bl) to (m); \draw[-, double distance=.75mm] (l) to (t); \draw[->] (tl) to node[above]{$F(I) \otimes \varepsilon_I$} (tr); \draw[->] (tl) to node[left]{$\varphi_{F(I),I}$} (l); \draw[->] (l) to node[left]{$F(\psi_{I,I})$} (bl); \draw[->] (bl) to node[below]{$\delta^{-1}_{I \otimes I}$} (b); \draw[->] (b) to node[right]{$\delta_{I \otimes I}$} (m); \draw[->] (m) to node[right]{$F(\psi_{I,I}^{-1})$} (t); \draw[->] (m) to node[above]{$\varepsilon_{F(I \otimes I)}$} (br); \draw[->] (t) to node[below=1mm]{$\varepsilon_{F(I) \otimes I}$} (tr); \draw[->] (br) to node[right]{$\psi^{-1}_{I,I}$} (tr); \end{pic}\] But the long outside path is composed entirely of isomorphisms. Hence $F(I) \otimes \varepsilon_I$ is invertible, and $\varepsilon_I$ is a subunit. These two constructions are clearly inverse to each other. \end{proof} \begin{remark}\label{rem:subunitcomonads} Monoidal comonads on $\cat{C}$ form a category with morphisms of monoidal comonads~\cite{street:formal}.
This category is monoidal as a subcategory of $[\cat{C},\cat{C}]$. The monoidal unit is the identity comonad $A \mapsto A$. A subunit is a comonad $F$ with a comonad morphism $\lambda \colon F \Rightarrow \id[\cat{C}]$ whose comultiplication is invertible, and such that $\lambda_A \colon F(A) \to A$ is monic. But by coherence, the latter means that $\varepsilon_I = \lambda_I \colon F(I) \rightarrowtail I$ is monic. It follows that subunits in $\cat{C}$ also correspond bijectively to subunits in $[\cat{C},\cat{C}]$ in the same sense as Definition~\ref{def:subunit}, though we have not strictly defined these since the latter category is not braided. See also~\cite[Remark~2.3]{boyarchenkodrinfeld:idempotent}. \end{remark} It also follows that restriction comonads automatically satisfy the Frobenius law $\delta^{-1} F \circ F \delta = F \delta^{-1} \circ \delta F$~\cite{heunenkarvonen:monads}, matching the viewpoint in \cite{hines:classicalstructures}. The second characterisation of subunits $s$ we will give is in terms of the subcategory $\cat{C}\restrict{s}$. \begin{definition}\label{def:tensorideal} Let $\cat{C}$ be a monoidal category. A \emph{monocoreflective tensor ideal} is a full replete subcategory $\cat{D}$ such that: \begin{itemize} \item if $A \in \cat{C}$ and $B \in \cat{D}$, then $A \otimes B \in \cat{D}$; \item the inclusion $F \colon \cat{D} \hookrightarrow \cat{C}$ has a right adjoint $G \colon \cat{C} \to \cat{D}$; \item the component of the counit at the tensor unit $\varepsilon_I \colon F(G(I))\to I$ is monic; \item $F(B) \otimes \varepsilon_I$ is invertible for all $B \in \cat{D}$. \end{itemize} \end{definition} \begin{proposition}\label{prop:tensorideals} Let $\cat{C}$ be a firm category. There is a bijection between $\ISub(\cat{C})$ and the set of monocoreflective tensor ideals of $\cat{C}$.
\end{proposition} \begin{proof} A subunit $s$ corresponds to $\cat{C}\restrict{s}$, and a monocoreflective tensor ideal $\cat{D}$ corresponds to $\varepsilon_I$. First notice that $\cat{C}\restrict{s}$ is indeed a monocoreflective tensor ideal by Proposition~\ref{prop:restrictioncoreflective}. Starting with $s \in \ISub(\cat{C})$ ends up with $s \circ \lambda \colon I \otimes S \rightarrowtail I$, which equals $s$ qua subobject. Starting with a monocoreflective tensor ideal $\cat{D}$ ends up with $\{ A \in \cat{C} \mid A \otimes \varepsilon_I \text{ is invertible}\}$. We need to show that this equals $\cat{D}$. One inclusion is obvious. For the other, let $A \in \cat{C}$. If $A \otimes \varepsilon_I \colon A \otimes FG(I) \to A \otimes I$ is invertible, then $A \simeq A \otimes F(G(I))$, and so $A \in \cat{D}$ because $\cat{D}$ is a tensor ideal. \end{proof} We leave open the question of what sort of factorization systems are induced by monocoreflective tensor ideals~\cite{cassidyhebertkelly:factorization,day:monoidallocalisation}. \section{Simplicity}\label{sec:simplicity} Localisation in algebra generally refers to a process that adds formal inverses to an algebraic structure~\cite[Chapter 7]{kashiwara2005categories}. This section discusses how to localise all subunits in a monoidal category at once, by showing that restriction is an example of localisation in this sense. \begin{definition} Let $\cat{C}$ be a category and $\Sigma$ a collection of morphisms in $\cat{C}$. 
A \emph{localisation of $\cat{C}$ at $\Sigma$} is a category $\cat{C}[\Sigma^{-1}]$ and a functor $Q \colon \cat{C} \to \cat{C}[\Sigma^{-1}]$ such that: \begin{itemize} \item $Q(f)$ is an isomorphism for every $f\in \Sigma$; \item for any functor $R\colon\cat{C} \rightarrow \cat{D}$ such that $R(f)$ is an isomorphism for all $f\in\Sigma$, there exists a functor $\overline{R}\colon\cat{C}[\Sigma^{-1}] \rightarrow \cat{D}$ and a natural isomorphism $\overline{R}\circ Q \simeq R$; \[\begin{tikzpicture} \node (l) at (0,1.3) {$\cat{C}$}; \node (tr) at (3,1.3) {$\cat{C}[\Sigma^{-1}]$}; \node (br) at (3,0) {$\cat{D}$}; \draw[->] (l) to node[above]{$Q$} (tr); \draw[->] (l) to node[below]{$R$} (br); \draw[->,dashed] (tr) to (br); \node at (2,.9) {$\simeq$}; \end{tikzpicture}\] \item precomposition $ (-)\circ Q \colon \big[\cat{C}[\Sigma^{-1}], \cat{D}\big] \to [\cat{C}, \cat{D}] $ is full and faithful for every category $\cat{D}$. \end{itemize} \end{definition} \begin{proposition} Restriction $\cat{C} \to \cat{C}\restrict{s}$ at a subunit $s$ is a localisation of $\cat{C}$ at $\{ s \otimes A \mid A\in\cat{C} \}$. \end{proposition} \begin{proof} Observe that $S\otimes (-)$ sends elements of $\Sigma$ to isomorphisms because $s$ is idempotent. Let $R\colon\cat{C}\rightarrow\cat{D}$ be any functor making $R(s \otimes A)$ an isomorphism for all $A\in\cat{C}$. Define $\overline{R}\colon \cat{C}\restrict{s} \to \cat{D}$ by $A \mapsto R(A)$ and $f \mapsto R(f)$. Then \begin{equation*} \eta_A = R(\lambda_A)\circ R(s\otimes A) \colon R(S\otimes A) \rightarrow R(A) \end{equation*} is a natural isomorphism. It is easy to check that precomposition with restriction is full and faithful. \end{proof} The above universal property concerns a single subunit. We now move to localising all subunits simultaneously. \begin{definition}\label{def:simple} A monoidal category is \emph{simple} when it has no subunits but $I$.
\end{definition} In the words of Proposition~\ref{prop:tensorideals}, a category is simple when it has no proper monocoreflective tensor ideals. Let us now show how to make a category simple. \begin{proposition}\label{prop:localisation} If $\cat{C}$ is a firm category, then there is a universal simple category ${\Simple}(\cat{C})$ with a monoidal functor $\cat{C} \to {\Simple}(\cat{C})$: any monoidal functor $F \colon \cat{C} \to \cat{D}$ into a simple category $\cat{D}$ factors through it via a unique monoidal functor ${\Simple}(\cat{C})\to \cat{D}$. \[\begin{pic}[xscale=10,yscale=1.25] \node (tl) at (0,1) {$\cat{C}$}; \node (tr) at (0.5,1) {${\Simple}(\cat{C})$}; \node (bl) at (.5,0) {$\cat{D}$}; \draw[->] (tl) to (tr); \draw[->,dashed] (tr) to (bl); \draw[->] (tl) to node[below]{$F$} (bl); \end{pic}\] \end{proposition} \begin{proof} We proceed by formally inverting the collection of morphisms \[ \Sigma = \{ \lambda_A \circ (s \otimes A) \mid A \in \cat{C}, s \in \ISub(\cat{C}) \} \cup \{ \id[A] \mid A \in \cat{C} \}\text. \] To show that the localisation $\cat{C}[\Sigma^{-1}]$ of $\cat{C}$ at $\Sigma$ exists we will show that $\Sigma$ admits \emph{a calculus of right fractions}~\cite{gabrielzisman:calculusoffractions}. Firstly, $\Sigma$ contains all identities and is closed under composition, since the composite of $\lambda_{T \otimes A} \circ (s \otimes T \otimes A)$ and $\lambda_A \circ (t \otimes A)$ is simply $\lambda_A \circ ((s \wedge t) \otimes A)$.
It remains to show that: \begin{itemize} \item for morphisms $s \colon A \to C$ in $\Sigma$ and $f \colon B \to C$ in $\cat{C}$, there exist morphisms $t \colon P \to B$ in $\Sigma$ and $g \colon P \to A$ in $\cat{C}$ such that $s \circ g = f \circ t$; \[\begin{tikzpicture}[scale=1.5] \node (tl) at (0,1) {$\bullet$}; \node (tr) at (1,1) {$\bullet$}; \node (bl) at (0,0) {$\bullet$}; \node (br) at (1,0) {$\bullet$}; \draw[->] (bl) to node[below]{$f$} (br); \draw[->] (tr) to node[right]{$s \in \Sigma$} (br); \draw[->,dashed] (tl) to node[left]{$\Sigma \ni t$} (bl); \draw[->,dashed] (tl) to node[above]{$g$} (tr); \end{tikzpicture}\] \item if a morphism $t \colon C \to D$ in $\Sigma$ and $f,g \colon B \to C$ in $\cat{C}$ satisfy $t \circ f = t \circ g$, then $f \circ s = g \circ s$ for some $s \colon A \to B$ in $\Sigma$. \end{itemize} It suffices to merely consider $\{\lambda_A \circ (s \otimes A) \mid A \in \cat{C}, s \in \ISub(\cat{C}) \}$ by~\cite[Remark~3.1]{fritz:categoriesoffractions}. The first, also called the \emph{right Ore condition}, is satisfied by bifunctoriality of the tensor: \[\begin{tikzpicture}[xscale=4,yscale=1.2] \node (tl) at (0,1) {$S \otimes A$}; \node (tr) at (1,1) {$S \otimes B$}; \node (l) at (0,0) {$I \otimes A$}; \node (r) at (1,0) {$I \otimes B$}; \node (bl) at (0,-1) {$A$}; \node (br) at (1,-1) {$B$}; \draw[->] (bl) to node[below]{$f$} (br); \draw[->] (l) to node[below]{$I \otimes f$} (r); \draw[->,dashed] (tl) to node[above]{$S \otimes f$} (tr); \draw[->] (tr) to node[right]{$s \otimes B$} (r); \draw[->] (r) to node[right]{$\lambda_B$} (br); \draw[->,dashed] (tl) to node[left]{$s \otimes A$} (l); \draw[->,dashed] (l) to node[left]{$\lambda_A$} (bl); \end{tikzpicture}\] For the second, suppose that $(s \otimes B) \circ f = (s \otimes B) \circ g$. Then applying $S \otimes (-)$ and using that $S \otimes s$ is invertible, it follows that $S \otimes f=S \otimes g$.
But then \begin{align*} f \circ \lambda_A \circ (s \otimes A) & = \lambda_{SB} \circ (s \otimes S \otimes B) \circ (S \otimes f) \\ & = \lambda_{SB} \circ (s \otimes S \otimes B) \circ (S \otimes g) = g \circ \lambda_A \circ (s \otimes A)\text, \end{align*} so the second requirement is satisfied. As a result, $\cat{C}[\Sigma^{-1}]$ exists; an easy construction may be found in~\cite{fritz:categoriesoffractions}. It satisfies the universal property of localisation on the nose. {We define $\Simple(\cat{C}) = \cat{C}[\Sigma^{-1}]$.} Moreover, the functor $\cat{C} \to {\Simple}(\cat{C})$ is monoidal because the class $\Sigma$ is closed under tensoring with objects of $\cat{C}$ by construction~\cite[Corollary~1.4]{day:monoidallocalisation}. Finally, notice that ${\Simple}(\cat{C})$ is simple by construction. \end{proof} \section{Support}\label{sec:support} When a morphism $f$ restricts to a given subunit $s$, we might also say that $f$ `has support in' $s$. Indeed it is natural to assume that each morphism in our category comes with a canonical least subunit to which it restricts, which we may call its support. This is the case in a topos, for example, but in general requires extra structure. Write $\mors{\cat{C}}$ for the braided monoidal category whose objects are morphisms $f \in \cat{C}$, with $f \otimes g$ defined as in $\cat{C}$, tensor unit $\id[I]$, and a unique morphism $f \to g$ whenever, for every subunit $s$, ($g$ restricts to $s$) $\implies$ ($f$ restricts to $s$). \begin{definition}\label{def:supportdatum} A \emph{support datum} on a firm category $\cat{C}$ is a functor $F \colon \mors{\cat{C}} \to L$ into a complete lattice $L$ satisfying \begin{equation}\label{eq:supportdatum} F(f) = \bigwedge \big\{ F(s) \mid s \in \ISub(\cat{C}),\, f \text{ restricts to } s \big\} \end{equation} for all morphisms $f$ of $\cat{C}$. A \emph{morphism of support data} $F \to F'$ is one of complete lattices $G \colon L \to L'$ with $G \circ F = F'$.
\end{definition} \begin{lemma} If $F \colon \mors{\cat{C}} \to L$ is a support datum, and $f, g$ morphisms in $\cat{C}$: \begin{itemize} \item $F(f) = \bigwedge \{ F(A) \mid A \in \cat{C},\, f \text{ factors through $A$} \}$; \item $F(f \otimes g) \leq F(f) \wedge F(g)$ for all $f, g$; so $F$ is colax monoidal. \end{itemize} \end{lemma} This notion of support via objects is similar to that of~\cite{balmer:spectrum,kockpitsch:pointfree,joyal:chevalleytarski}. \begin{proof} For the first statement, it suffices to show that $f$ restricts to a subunit $s$ iff it factors through some object $A$ which does. But if $f$ factors through $A$ then $f=g \circ \id[A] \circ h$ for some $g$ and $h$, and so if $A$ restricts to $s$ then so does $f$. Conversely if $f \colon B \to C$ restricts to $s$ it factors over $S \otimes C$, which always restricts to $s$. For the second statement, note that $F(I) \leq 1$ always, so colax monoidality reduces to the rule above. But if $f$ restricts to $s$ then so does $f \otimes g$. Hence $F (f \otimes g) \leq F(f)$, and $F(f \otimes g) \leq F(g)$ similarly. \end{proof} Most features of support data follow from the associated map $\ISub(\cat{C}) \to L$. \begin{proposition} \label{prop:suppasfunctor} Let $\cat{C}$ be a firm category and $L$ a complete lattice. Specifying a support datum $F \colon \mors{\cat{C}} \to L$ is equivalent to specifying a monotone map $\ISub(\cat{C}) \to L$. \end{proposition} \begin{proof} In $\mors{\cat{C}}$ there is a morphism $s \to t$ between subunits $s$ and $t$ precisely when $s \leq t$. Hence any support datum restricts to a monotone map $\ISub(\cat{C}) \to L$. Conversely, let $F$ be such a map and extend it to arbitrary morphisms by~\eqref{eq:supportdatum}. Both definitions of $F$ agree on subunits $s$ since a subunit restricts to another one $t$ precisely when $s \leq t$, so that $F(s)=\bigwedge \{F(t) \mid s \leq t\}$. Finally, for functoriality suppose there exists a morphism $f \to g$ in $\mors{\cat{C}}$.
Then whenever $g$ restricts to $s$, so does $f$, so that $F(f) \leq F(g)$. \end{proof} This observation provides examples of support data. Recall that the free complete lattice on a semilattice $L$ is given by its collection $D(L)$ of downsets $U = \ensuremath{\mathop{\downarrow}} U \subseteq L$ under inclusion, via the embedding $x \mapsto \ensuremath{\mathop{\downarrow}} x$~\cite[II.1.2]{johnstone:stonespaces}. \begin{proposition}\label{prop:initialsupport} Any firm category $\cat{C}$ has a canonical support datum, valued in $D(\ISub(\cat{C}))$, given by \begin{equation} \label{eq:suppfcanonical} \suppinit(f) = \{s \in \ISub(\cat{C}) \mid \text{$f$ restricts to $t$} \implies s \leq t \}\text. \end{equation} Moreover, $\suppinit$ is initial: any support datum factors through it uniquely. \[\begin{tikzpicture}[xscale=4,yscale=1.5] \node (tl) at (0,1) {$\mors{\cat{C}}$}; \node (tr) at (1,1) {$D(\ISub(\cat{C}))$}; \node (br) at (1,0) {$L$}; \node (t) at (1.5,1) {$\{s_i\}$}; \node (b) at (1.5,0) {$\bigvee F(s_i)$}; \draw[->] (tl) to node[above]{$\suppinit$} (tr); \draw[->] (tl) to node[below]{$F$} (br); \draw[->,dashed] (tr) to (br); \draw[|->] (t) to (b); \end{tikzpicture}\] \end{proposition} This generalises~\cite{balmer:spectrum,balmerfavi:telescope,balmerkrausestevenson:smashing} from triangulated categories to firm ones. \begin{proof} Extend the embedding $L \to D(L)$ to a support datum via Proposition~\ref{prop:suppasfunctor}. Initiality is immediate by freeness of $D(L)$, with~\eqref{eq:suppfcanonical} coming from the description of meets in terms of joins in a complete lattice. \end{proof} Rather than require extra data, it would be desirable to define support internally to the category. If $\cat{C}$ has the property that $\ISub(\cat{C})$ is already a complete lattice (or frame), then it indeed comes with a support datum given by the identity on $\ISub(\cat{C})$.
We may then define \emph{the support} of a morphism as \[ \supp(f) = \bigwedge \big\{ s \in \ISub(\cat{C}) \mid f \text{ restricts to } s \big\}\text. \] Note that $\supp(f) = \bigvee \suppinit(f)$. It therefore follows from Proposition~\ref{prop:initialsupport} that $\supp$ also has a universal property: if $\ISub(\cat{C})$ is already a complete lattice, any support datum $F$ factors through $\supp$ via a semilattice morphism. In the case of a topos, for instance, $\supp(A)$ is the image of the unique morphism $A \to 1$, obtained by factorising it into a strong epimorphism followed by a monomorphism. \begin{example} Let $L$ be a frame and consider $\mathrm{Sh}(L)$. A morphism $f \colon A \Rightarrow B$ has $\suppinit(f) = \ensuremath{\mathop{\downarrow}} \{ t \mid A(t)\neq\emptyset\}$, and $\supp(f) = \bigvee \{ s \mid A(s) \neq\emptyset \}$. \end{example} \begin{example} In $\HModc$ the collection of subunits forms a frame, and each morphism $f \colon A \to B$ has $\supp(f) = C_0(U_f)$, where \[ U_f = \{x \in X \mid \inprod{f(a)}{f(a)}(x) \neq 0 \text{ for some } a \in A\}\text. \] Letting $L$ be the totally ordered set of cardinals below $|X|$, we may define another support datum by $F(f) = |U_f| \in L$. \end{example} In the remaining sections we turn to categories coming with such an intrinsic spatial structure. First, the following example shows that, even when $\ISub(\cat{C})$ is a frame, our notion of support differs from that of~\cite[Definition~3.1(SD5)]{balmer:spectrum} and~\cite[Definition~3.2.1(5)]{kockpitsch:pointfree}: without further assumptions, a support datum is only colax monoidal. { \begin{proposition} There is a firm category $\cat{C}$ for which $\ISub(\cat{C})$ is a frame but \[ \supp(f) \otimes \supp(g) \neq \supp(f \otimes g)\text. \] \end{proposition} } \begin{proof} Let $Q$ be the commutative unital quantale with elements $0 \leq \varepsilon \leq 1$, with unit $1$ and satisfying $0 = 0 \cdot 0 = 0 \cdot \varepsilon = \varepsilon \cdot \varepsilon$.
Then the frame of subunits is $\ISub(Q) = \{0,1\}$, and $\varepsilon$ satisfies $\supp(\varepsilon) = 1$ whereas $\supp(\varepsilon \cdot \varepsilon) = 0$. \end{proof} \section{{Locale-based categories}}\label{sec:spatial} In our main examples, the subunits satisfy extra properties over being a mere semilattice, and they interact universally with the rest of the category. First, they often satisfy the following property. \begin{definition}\label{def:stiff} A category is \emph{stiff} when it is braided monoidal and \begin{equation} \label{eq:stiff-pullback} \begin{pic}[xscale=4,yscale=1.5] \node (tl) at (0,1) {$S \otimes T \otimes X$}; \node (tr) at (1,1) {$T \otimes X$}; \node (bl) at (0,0) {$S \otimes X$}; \node (br) at (1,0) {$X$}; \draw[>->] (tl) to node[above]{$s \otimes T \otimes X$} (tr); \draw[>->] (tl) to node[left]{$S \otimes t \otimes X$} (bl); \draw[>->] (tr) to node[right]{$t \otimes X$} (br); \draw[>->] (bl) to node[below]{$s \otimes X$} (br); \draw (.1,.7) to (.15,.7) to (.15,.825); \end{pic} \end{equation} is a pullback of monomorphisms for all objects $X$ and subunits $s,t$. \end{definition} Any stiff category is firm: take $X=I$ and recall that pullbacks of monomorphisms are monomorphisms. More strongly, subunits often come with joins satisfying the following. \begin{definition}\label{def:universalfinitejoins} Let $\cat{C}$ be a braided monoidal category. 
We say that $\cat{C}$ has \emph{universal finite joins} of subunits when it has an initial object $0$ whose morphism $0 \to I$ is monic, with $X \otimes 0 \simeq 0$ for all objects $X$, and $\ISub(\cat{C})$ has finite joins such that each diagram \begin{equation} \label{eq:pullback-pushout} \begin{pic}[xscale=4,yscale=1.5] \node (tl) at (0,1) {$S \otimes T \otimes X$}; \node (tr) at (1,1) {$T \otimes X$}; \node (bl) at (0,0) {$S \otimes X$}; \node (br) at (1,0) {$(S \vee T) \otimes X$}; \draw[>->] (tl) to node[above]{} (tr); \draw[>->] (tl) to node[left]{} (bl); \draw[>->] (tr) to node[right]{} (br); \draw[>->] (bl) to node[below]{} (br); \draw (.1,.7) to (.15,.7) to (.15,.825); \draw (.95,.3) to (.90,.3) to (.90,.175); \end{pic} \end{equation} is both a pullback and pushout of monomorphisms, where each morphism is the obvious inclusion tensored with $X$ as in~\eqref{eq:stiff-pullback}. \end{definition} \begin{lemma} \label{lem:lat} Let $\cat{C}$ be braided monoidal with universal finite joins of subunits. Then $\cat{C}$ is stiff and $\ISub(\cat{C})$ is a distributive lattice with least element $0$. \end{lemma} \begin{proof} For stiffness, take $t = \id[I]$ to see that each morphism $s \otimes X$ is monic. Then since $(s \vee t) \otimes X$ is monic it follows easily that each diagram~\eqref{eq:stiff-pullback} is a pullback. By assumption $0 \to I$ is indeed a subunit. Finally it follows from~\eqref{eq:pullback-pushout} with $X=R$ that subunits $R, S, T$ satisfy $(S \vee T) \wedge R = (S \wedge R) \vee (T \wedge R)$. \end{proof} { \begin{proposition} \label{prop:coherentUnivUnion} Any coherent category $\cat{C}$ forms a cartesian monoidal category with universal finite joins of subunits. 
\end{proposition} } \begin{proof} Each partial order $\Sub(A)$ is a distributive lattice, and for subobjects $S, T \rightarrowtail A$ each diagram~\eqref{eq:pullback-pushout} with $\wedge$ replacing $\otimes$ and $X=1$ is indeed both a pushout and pullback~\cite[A1.4.2, A1.4.3]{johnstone:elephant}. Moreover in such a category each functor $X \times (-)$ preserves these pullbacks, since limits commute with limits, and preserves finite joins and hence these pushouts since each functor $(\pi_2)^* \colon \Sub(A) \to \Sub(X \times A)$ does so by coherence of $\cat{C}$. \end{proof} To obtain arbitrary joins of subunits from finite ones, it will suffice to also have the following. Recall that a subset $U$ of a partially ordered set is (upward) \emph{directed} when any $a,b \in U$ allow $c \in U$ with $a \leq c \geq b$. A \emph{preframe} is a semilattice in which every directed subset has a supremum, and finite meets distribute over directed suprema. By a \emph{directed colimit of subunits} we mean a colimit of a diagram $D \colon \cat{J} \to \cat{C}$, for which $\cat{J}$ is a directed poset, all of whose arrows are inclusions $S_i \rightarrowtail S_j$ between a collection of subunits $s_i \colon S_i \to I$. In particular $D$ has a cocone given by these subunits, inducing a morphism $\colim D \to I$ if a colimit exists. \begin{definition}\label{def:universaldirectedjoins} A stiff category $\cat{C}$ has \emph{universal directed joins} of subunits when it has directed colimits of subunits, each of whose induced arrows $\colim D \to I$ is again a subunit, and these colimits are preserved by each functor $X \otimes (-)$. \end{definition} \begin{lemma} \label{lem:preframe} If a stiff category $\cat{C}$ has universal directed joins of subunits, then $\ISub(\cat{C})$ is a preframe. \end{lemma} \begin{proof} Any directed subset $U \subseteq \ISub(\cat{C})$ induces a diagram $U \to \cat{C}$, and its colimit is by assumption a subunit which is easily seen to form a supremum of $U$.
Taking $X$ to be a subunit shows that $\wedge$ distributes over directed suprema. \end{proof} \begin{example} Any preframe $L$, regarded as a monoidal category under $(\wedge,1)$, has universal directed joins. \end{example} The rest of this section shows that the subunits of a category have a spatial nature when it has both types of universal joins above. We unify Definitions~\ref{def:universalfinitejoins} and~\ref{def:universaldirectedjoins} as follows. Let $\cat{C}$ be a braided monoidal category and $U \subseteq \ISub(\cat{C})$ a family of subunits. For any object $X$, write $D(U,X)$ for the diagram of objects $S \otimes X$ for $s \in U$ and all morphisms $f \colon S \otimes X \to T \otimes X$ satisfying $(t \otimes X) \circ f = s \otimes X$. If $\cat{C}$ is stiff, there is at most one such $f$ for each pair $s$ and $t$, and one exists whenever $s \leq t$. \[ \begin{tikzpicture}[xscale=6,yscale=1] \node (tl) at (-0.4,1) {$S \otimes X$}; \node (tr) at (0.4,1) {$T \otimes X$}; \node (b) at (0,0) {$X$}; \draw[>->,dashed] (tl) to node[above]{} (tr); \draw[>->] (tl) to node[left=3mm]{$s \otimes X$} (b); \draw[>->] (tr) to node[right=2mm]{$t \otimes X$} (b); \end{tikzpicture} \] Call such a set $U$ of subunits \emph{idempotent} when $U = U \otimes U := \{s \wedge t \mid s, t \in U\}$. \begin{definition}\label{def:spatial} A category $\cat{C}$ is {\emph{locale-based}} when it is stiff, $\ISub(\cat{C})$ is a frame, and the canonical maps $S \otimes X \to (\bigvee U) \otimes X$ form a colimit of $D(U,X)$ for each idempotent $U \subseteq \ISub(\cat{C})$ and $X \in \cat{C}$. \end{definition} Let us now see how this combines our earlier notions. In any poset $P$, an \emph{ideal} is a downward closed, upward directed subset. Let us call a subset $U \subseteq P$ \emph{finitely bounded} when every element of $U$ lies below one of finitely many maximal elements of $U$. If $U$ is downward closed then equivalently it is finitely generated: $U=\ensuremath{\mathop{\downarrow}}\{x_1, \dots, x_n\}$.
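These order-theoretic notions can be checked in a small worked example; the particular poset below is chosen purely for illustration and plays no role in what follows. \begin{example} Order the powerset of $\{1,2,3\}$ by inclusion. The downset $U = \ensuremath{\mathop{\downarrow}}\{\{1,2\},\{2,3\}\}$ is finitely bounded, with maximal elements $\{1,2\}$ and $\{2,3\}$, but it is not an ideal: $\{1,2\}$ and $\{2,3\}$ admit no upper bound within $U$. By contrast, the principal downset $\ensuremath{\mathop{\downarrow}}\{1,2\}$ is both finitely bounded and an ideal. Note also that any downward closed subset $U$ of a meet-semilattice is idempotent in the sense above: $s \wedge t \leq s$ gives $U \otimes U \subseteq U$, while $s = s \wedge s$ gives $U \subseteq U \otimes U$. \end{example}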
\begin{proposition} \label{prop:unify} A category $\cat{C}$ has universal finite (directed) joins if and only if $\ISub(\cat{C})$ has finite (directed) joins, and $D(U,X)$ has colimit $S \otimes X \to (\bigvee U) \otimes X$ for each idempotent $U \subseteq \ISub(\cat{C})$ that is finitely bounded (directed). \end{proposition} \begin{proof} First consider finite joins. A colimit of $D(\emptyset, X)$ is precisely an initial object and the conditions on $0$ in both cases are equivalent to $0 \to I$ being a subunit with $0 \otimes X \simeq 0$ for all $X$. Moreover in any stiff category it is easy to see that cocones over the top left corner of \eqref{eq:pullback-pushout} correspond to those over $D(\ensuremath{\mathop{\downarrow}}\{s, t\},X)$. (See also Lemma~\ref{lem:idempotentsetsofidempotentsubunits} below.) Hence the properties above provide each diagram with a colimit $(S \vee T) \otimes X$, and so $\cat{C}$ with universal finite joins. Conversely, suppose that $\cat{C}$ has universal finite joins. For any idempotent $U$ we claim that any cocone $c_s$ over $D(U,X)$ extends to one over $D(V,X)$, where $V = \{s_1 \vee \dots \vee s_n \mid s_i \in U\}$. Indeed for any $s, t \in U$ the following diagram commutes, giving $c_{s \vee t}$ as the unique mediating morphism. 
\[\begin{tikzpicture}[xscale=5,yscale=1.5] \node (tl) at (0,1) {$S \otimes T \otimes X$}; \node (tr) at (0.7,1) {$T \otimes X$}; \node (bl) at (0,0) {$S \otimes X$}; \node (br) at (0.7,0) {$(S \vee T) \otimes X$}; \node (x) at (1.2,-.8) {$C$}; \draw[>->] (tl) to node[above]{} (tr); \draw[>->] (tl) to node[left]{} (bl); \draw[->] (tr) to node[right]{} (br); \draw[->] (bl) to node[below]{} (br); \draw[>->] (tr) to[out=-30,in=90,looseness=.7] node[right]{$c_t$} (x); \draw[>->] (bl) to[out=-60,in=180] node[below]{$c_s$} (x); \draw[->,dashed] (br) to node[above]{$c_{s \vee t}$} (x); \draw (.65,.3) to (.60,.3) to (.60,.175); \end{tikzpicture}\] Similarly define morphisms $c_{s_1 \vee \dots \vee s_n}$ for arbitrary elements of $V$; these form a cocone. Hence $\colim D(U,X) = \colim D(V,X)$. But if $U$ is bounded by some $s_1, \dots, s_n$ then clearly $\colim D(V,X) = (s_1 \vee \dots \vee s_n) \otimes X$ and we are done. Next, consider directed joins. Let $D$ be a directed diagram of inclusions between elements of $U \subseteq \ISub(\cat{C})$. Then $U$ must be directed and therefore $V = \{s_1 \wedge \dots \wedge s_n \mid s_i \in U\}$ is idempotent and directed. Moreover, for each object $X$, any cocone $c_s$ over $D \otimes X$ extends to one over $D(V,X)$: for any $s \in V$, let $s \leq t \in U$ and set $c_s = c_t \circ (x \otimes \id[X])$ where $x \colon S \to T$ is the inclusion. Since $R = \bigvee V$ has $R \otimes X= \colim D(V,X)$ then $R \otimes X=\colim (D \otimes X)$ as required. Conversely, suppose $\cat{C}$ has universal directed joins. Then $\ISub(\cat{C})$ is a preframe by Lemma~\ref{lem:preframe}. If $U \subseteq \ISub(\cat{C})$ is directed and idempotent then for each $X$ we have $R \otimes X = \colim \big(D(U,I) \otimes X\big)$, where $R = \bigvee U$. But any cocone over $D(U,X)$ certainly also forms one over $D(U,I) \otimes X$, and so $R \otimes X = \colim D(U,X)$ also. 
\end{proof} \begin{corollary} \label{cor:spatialiffboth} A category is {locale-based} if and only if it has universal finite and directed joins of subunits. \end{corollary} \begin{proof} Proposition~\ref{prop:unify} proves one direction. In the other direction, suppose $\cat{C}$ has universal finite and directed joins of subunits. Then $\ISub(\cat{C})$ is a frame by Lemmas~\ref{lem:lat} and~\ref{lem:preframe}, since a poset is a frame precisely when it is a preframe and a distributive lattice. Let $U \subseteq \ISub(\cat{C})$ be idempotent. Then $V = \{s_1 \vee \dots \vee s_n \mid s_i \in U\}$ is idempotent by distributivity, as well as directed, so that $\colim D(V,X) = (\bigvee V) \otimes X$ exists for any $X$. But $\colim D(U,X) = \colim D(V,X)$ as in the proof of Proposition~\ref{prop:unify}. \end{proof} The previous corollary justifies saying that a category simply \emph{has universal joins} of subunits when it is {locale-based}. The rest of this section shows that our main examples are {locale-based}. \begin{example} \label{ex:frameisspatial} Any commutative unital quantale $Q$ is {locale-based} when regarded as a category as in {Proposition}~\ref{ex:quantale}; in particular so is any frame under tensor $\wedge$. Indeed that {proposition} showed that $\ISub(Q)$ is a frame, and for any $U \subseteq \ISub(Q)$ and $x \in Q$ we have $\colim D(U,x) = \bigvee_{s \in U} sx = (\bigvee_{s \in U} s)x$. \end{example} { \begin{proposition} \label{prop:topos-spatial} Any cocomplete Heyting category $\cat{C}$ is {locale-based} under cartesian products. This includes all cocomplete toposes, such as Grothendieck toposes. \end{proposition} } \begin{proof} Since a Heyting category is coherent, it has universal finite joins by {Proposition}~\ref{prop:coherentUnivUnion}, with each change of base functor having a right adjoint and so preserving arbitrary joins of subobjects. 
In any cocomplete regular category with this property, for any directed diagram $D$ and any cocone $C$ over $D$ all of whose legs are monic, the induced map $\colim D \to C$ is again monic~\cite[Corollary II.2.4]{grillet1971regular}. Hence whenever $U$ is directed, each map $\colim D(U,X) \to X$ is again monic, ensuring that $\colim D(U,X) = \bigvee_{s \in U} s \times X$ is in $\Sub(X)$. Since each functor $X \times (-)$ now preserves arbitrary joins of subobjects, furthermore $\bigvee_{s \in U} s \times X = \colim D(U,I) \times X$, establishing universal directed joins. \end{proof} Next we consider Hilbert modules. In general $\HModc$ is finitely cocomplete but not cocomplete, and so lacks directed colimits by~\cite[IX.1.1]{maclane:categorieswork}; this follows from~\cite[Example 2.3~(9)]{adamek1994locally} by taking $X$ to be trivial and so reducing to the category of Hilbert spaces and {nonexpansive} linear maps. Nonetheless, we have the following. { \begin{proposition} $\HModc$ is {locale-based}. \end{proposition} } \begin{proof} Throughout this proof we again identify $C_0(U)$ with the submodule~\eqref{eq:subunithilbertmodules} of $C_0(X)$, and identify the module $A \otimes C_0(U)$ with $A|_U$, for open $U \subseteq X$. First let us show that $\HModc$ has universal finite joins of subunits. For open subsets $U, V \subseteq X$, and any Hilbert $C_0(X)$-module $A$, consider the diagram of inclusions between $A|_{U \cap V}$, $A|_U$, $A|_V$ and $A|_{U \cup V}$. It is easily seen to be a pullback, since $A|_{U \cap V} = A|_U \cap A|_V$ as subsets of $A$. We verify that it is also a pushout. Since any morphism $A|_{U \cup V} \to B$ restricts to $C_0(U \cup V)$, it suffices to assume that $X = U \cup V$. We claim that \[ C_0(U) + C_0(V) = \{g_U + g_V \in C_0(X) \mid g_U \in C_0(U), g_V \in C_0(V)\} \] is a dense submodule of $C_0(X)$. To see this, let $g \in C_0(X)$ and $\varepsilon > 0$, and let $K$ be compact with $|g(x)| \geq \varepsilon \implies x \in K$. 
Urysohn's lemma for locally compact Hausdorff spaces~\cite[2.12]{rudin2006real} produces $h \in C_0(U)$ such that $|h(x)| \leq |g(x)|$ for $x \in U$ and $h(x)=g(x)$ for $x \in K \setminus V$. Then $|(g-h)(x)| \geq 2 \varepsilon \implies x \in L$ for some compact $L \subseteq K \cap V$. Again there is $k \in C_0(V)$ with $|k(x)| \leq |g(x)|$ for all $x \in V$ and $k(x)=(g-h)(x)$ for $x \in L$. By construction $\|g-h-k\| \leq 4 \varepsilon$, establishing the claim. It follows also that \[ A|_U + A|_V = \{a_U + a_V \mid a_U \in A|_U, a_V \in A|_V\} \] is dense in $A$, since $A \cdot C_0(X) = \{a \cdot g \mid g \in C_0(X) \}$ is so too~\cite[p5]{lance:hilbertmodules}. Now suppose $f_U \colon A|_U \to B$ and $f_V \colon A|_V \to B$ agree on $A|_{U \cap V}$. Then for $a=a_U + a_V$ with $a_U \in A|_U$ and $a_V \in A|_V$, the assignment \[ f(a) = f_U(a_U) + f_V(a_V) \] is a well-defined $A$-linear map. Hence it extends to a unique map $f \colon A \to B$ which is by definition the unique factorisation of $f_U$ and $f_V$ through the diagram. Now we must check that $f$ is {nonexpansive} when $f_U$ and $f_V$ are. Let $x \in X$, and without loss of generality say $x \in U$. Urysohn's lemma again produces $g \in C_0(U)$ with $g(x) = 1 = \|g\|$. Now $a \cdot g \in A|_U$ for any $a \in A$. So, writing $|a|^2(x)$ for $|\inprod{a}{a}(x)|$, we find \[ |f(a)|(x) = |f(a)\cdot g| (x) \leq \|f(a) \cdot g\| = \|f_U(a \cdot g)\| \leq \|a \cdot g\| \leq \|a\| \|g\| \leq \|a\| \] using $\|f_U\|\leq 1$. Since $x$ was arbitrary, also $\|f\|\leq 1$. Next, let us consider universal directed joins of subunits. For this, let $W$ be a directed family of open sets in $X$; again it suffices to assume $X = \bigcup W$. We claim that \[ \bigcup_{U \in W} C_0(U) = \{g \in C_0(X) \mid g \in C_0(U) \text{ for some }U \in W\} \] is a dense submodule of $C_0(X)$. Again let $g \in C_0(X)$ and $\varepsilon > 0$, and let $K$ be compact with $|g(x)| \geq \varepsilon \implies x \in K$. 
Since $K$ is compact and $W$ is directed, $K \subseteq U$ for some $U \in W$. Urysohn again provides $h \in C_0(U)$ with $|h(x)| \leq |g(x)|$ for all $x \in U$ and $h(x)=g(x)$ for $x \in K$. Then $|g - h|(x) \leq |g(x)| + |h(x)| \leq 2 \varepsilon$ for $x \in X \setminus K$ and so, since $g$ and $h$ agree on $K$, we have $\|g-h\| \leq 2 \varepsilon$, establishing the claim. Similarly, for any Hilbert module $A$, since $A \cdot C_0(X)$ is dense in $A$, so is $\bigcup_{U \in W} A|_U$. Finally, let $f_U \colon A|_U \to B$ be a cocone over $D(W,A)$. It suffices to show that there is a unique $f \colon A \to B$ with $f(a) = f_U(a)$ for all $a \in A|_U$. But any $a \in A$ has $a = \lim (a_n)^{\infty}_{n=1}$ with each $a_n \in A|_{U_n}$ for some $U_n$. By directedness we may assume $U_n \subseteq U_{n+1}$ for all $n$. Then $f \colon A \to B$ must satisfy $f(a) = \lim f_{U_n}(a_n)$, making $f$ unique. Additionally, this limit is always well-defined since $a_n$ is a Cauchy sequence and so for $n \leq m$: \[ \|f_{U_n}(a_n) - f_{U_m}(a_m)\| = \|f_{U_m}(a_n - a_m)\| \leq \|a_n - a_m \| \] and $f_{U_n}(a_n)$ is also a Cauchy sequence. Clearly $f$ is $A$-linear and $\|f\|\leq 1$. \end{proof} \section{Universal joins from colimits}\label{sec:universaljoins} This section characterises each of the notions of universal joins purely categorically, without order-theoretic assumptions on $\ISub(\cat{C})$. Instead, they will be cast solely in terms of the diagrams $D(U,X)$. When we turn to completions in the next sections, we can therefore use the diagrams $D(U,X)$ themselves as formal joins to add. \begin{lemma}\label{lem:idempotentsetsofidempotentsubunits} Let $\cat{C}$ be a stiff category. If $U \subseteq \ISub(\cat{C})$ is idempotent, then any cocone over $D(U,X)$ extends uniquely to one over $D(\ensuremath{\mathop{\downarrow}} U, X)$. 
Therefore, $\cat{C}$ has colimits of $D(U,X)$ for all downward-closed $U \subseteq \ISub(\cat{C})$ if and only if it has them for idempotent $U$. \end{lemma} \begin{proof} Let $U$ be idempotent and consider a cocone $c_s \colon S \otimes X \to C$ over $D(U,X)$. Let $r \in \ensuremath{\mathop{\downarrow}} U$, say $r = s \circ f$ for $s \in U$ and $f \colon R \to S$. Define $c_r = c_s \circ (f \otimes X) \colon R \otimes X \to C$. This is clearly the only possible extension of $c_s$ to $D(\ensuremath{\mathop{\downarrow}} U, X)$. We will prove that it is a well-defined cocone. Suppose $r' \in \ISub(\cat{C})$ satisfies $r' \leq s'$ for $s' \in U$, and $r \otimes X = (r' \otimes X) \circ g$. Then the marked morphism in the following diagram is an isomorphism: \[\begin{pic}[xscale=5] \node (tl) at (-1,3) {$R \otimes X$}; \node (tr) at (1,3) {$R' \otimes X$}; \node (um) at (0,2.25) {$R \otimes R' \otimes X$}; \node (lml) at (-1,1) {$S \otimes X$}; \node (lmc) at (0,1) {$S \otimes S' \otimes X$}; \node (lmr) at (1,1) {$S' \otimes X$}; \node (b) at (0,0.25) {$C$}; \draw[>->] (tl) to node[above]{$g$} (tr); \draw[>->] (tl) to (lml); \draw[>->] (tr) to (lmr); \draw[>->] (um) to (lmc); \draw[>->] (um) to node[below]{$r\otimes R'\otimes X$} (tr); \draw[>->] (um) to node[right=8mm]{$\simeq$} node[below=1mm]{$R \otimes r' \otimes X$} (tl); \draw[>->] (lmc) to node[above]{$S \otimes s' \otimes X$} (lml); \draw[>->] (lmc) to node[above]{$s \otimes S' \otimes X$} (lmr); \draw[->] (lml) to node[below]{$c_s$} (b); \draw[->] (lmr) to node[below]{$c_{s'}$} (b); \end{pic}\] The upper triangle and central squares commute trivially. The lower quadrilateral commutes, both composites equalling $c_{s \otimes s'}$, because $s \otimes s' \in U$ and $c$ is a cocone. Hence the outer diagram commutes, showing $c_r = c_{r'} \circ g$ as required. In particular, taking $R' = R$ shows that $c_r$ is independent of the choice of $s$. 
\end{proof} \begin{lemma}\label{lem:cocone} Let $\cat{C}$ and $\cat{D}$ be stiff categories, $U \subseteq \ISub(\cat{C})$ be idempotent, and $c_s \colon S \otimes X \to C$ be a cocone over $D(U,X)$. If a functor $F \colon \cat{C} \to \cat{D}$ preserves monomorphisms of the form $s \otimes X \rightarrowtail X$, for subunits $s$, and the pullbacks~\eqref{eq:stiff-pullback}, then $F(c_s)$ is a cocone over $D\big( F(U), F(X) \big)$, where $F(U) = \{F(s) \mid s \in U\}$. \end{lemma} \begin{proof} Clearly, if $s \otimes X \leq t \otimes X$ then $F(s \otimes X) \leq F(t \otimes X)$, and $F(c_s)$ respects the inclusion. Conversely, suppose that $F(s \otimes X) \leq F(t \otimes X)$ via some morphism $f$, and consider the following diagram. \[\begin{tikzpicture}[xscale=5,yscale=1.5] \node (tl) at (0,1) {$F(S \otimes T \otimes X)$}; \node (tr) at (1,1) {$F(T \otimes X)$}; \node (bl) at (0,0) {$F(S \otimes X)$}; \node (br) at (1,0) {$F(C)$}; \node (x) at (1.4,-.7) {$F(X)$}; \draw[>->] (tl) to node[above]{$F(s \otimes T \otimes X)$} (tr); \draw[>->] (tl) to node[left]{$F(S \otimes t \otimes X)$} (bl); \draw[->] (tr) to node[right]{$F(c_t)$} (br); \draw[->] (bl) to node[below]{$F(c_s)$} (br); \draw[>->] (tr) to[out=-30,in=90,looseness=.7] node[right]{$F(t \otimes X)$} (x); \draw[>->] (bl) to[out=-60,in=180] node[below]{$F(s \otimes X)$} (x); \draw[->](bl) to node[above]{$f$} (tr); \end{tikzpicture}\] The outer rectangle commutes by bifunctoriality, and $F(t \otimes X) \circ f = F(s \otimes X)$ by assumption. Hence the upper left triangle commutes because $F(t \otimes X)$ is monic by stiffness and the assumption on $F$. The inner square commutes and is equal to $F(c_{s \otimes t})$ by definition of $D(U,X)$. Since the outer rectangle is a pullback, the leftmost vertical morphism is invertible and hence $F(c_t) \circ f = F(c_s)$. 
\end{proof} Now suppose a diagram $D(U,X)$ has a colimit $c_s^X \colon S \otimes X \to \colim D(U,X)$ for each idempotent $U \subseteq \ISub(\cat{C})$ and object $X$. Then there are two canonical morphisms. First, a mediating map $\colim D(U,I) \to I$ to the cocone $s \colon S \to I$. \begin{equation}\label{eq:sU}\begin{pic}[xscale=2,yscale=1.5] \node (c) at (2,0) {$\colim D(U,I)$}; \node (i) at (2,-1) {$I$}; \node (s) at (0,0) {$S$}; \draw[->,dashed] (c) to (i); \draw[->] (s) to node[below]{$s$} (i); \draw[->] (s) to node[above]{$c_s^I$} (c); \end{pic}\end{equation} Second, in a stiff category it follows from applying Lemma~\ref{lem:cocone} to $(-) \otimes X$ that there is a unique map making the following triangle commute for all $s \in U$: \begin{equation}\label{eq:deltaUX}\begin{pic}[xscale=2,yscale=1.5] \node (t) at (0,0) {$S \otimes X$}; \node (br) at (2,-1) {$(\colim D(U,I)) \otimes X$}; \node (bl) at (2,0) {$\colim D(U,X)$}; \draw[->,dashed] (bl) to (br); \draw[->] (t) to node[above]{$c_s^X$} (bl); \draw[->] (t) to node[below]{$c_s^I \otimes X$} (br); \end{pic}\end{equation} If $\cat{C}$ has universal joins of $U$ then $\bigvee U = \colim D(U,I)$ and~\eqref{eq:sU} is monic, and~\eqref{eq:deltaUX} is invertible by definition. We now set out to prove the converse. \begin{lemma}\label{lem:spatialcharacterisation:monic} Let $\cat{C}$ be a stiff category, and let $U \subseteq \ISub(\cat{C})$ be idempotent. Suppose that $D(U,X)$ has a colimit for each object $X$ and that each morphism \eqref{eq:deltaUX} is an isomorphism. If the morphism $\colim D(U,I) \to I$ of~\eqref{eq:sU} is monic, then it is a subunit. \end{lemma} \begin{proof} Write $s_U$ for this morphism, which is monic by assumption. For each $s \in U$, we claim $S \otimes s_U \colon S \otimes \colim D(U,I) \to S$ is an isomorphism. 
It is monic because \[ s_U \circ c_s \circ (S \otimes s_U) = s \otimes s_U = s_U \circ \big(s \otimes \colim D(U,I)\big) \] where $s_U$ and $s \otimes \colim D(U,I)$ are monic by stiffness. But it is also split epic since $(S \otimes s_U) \circ (S \otimes c_s) = S \otimes s$ is an isomorphism. Now since $s \circ (S \otimes s_U) = s_U \circ (s \otimes \colim D(U,I))$, bifunctoriality of $\otimes$ shows that for all $s, t \in U$: \[ s \otimes \colim D(U,I) \leq t \otimes \colim D(U,I) \quad\iff\quad s \leq t \] This gives an isomorphism of diagrams $S \otimes s_U \colon S \otimes \colim D(U,I) \to S$ from $D\big(U, \colim D(U,I)\big)$ to $D(U,I)$. Writing $c_s \colon S \to \colim D(U,I)$ for the latter colimit, $c_s \otimes \colim D(U,I)$ is a colimit for the former by assumption. Hence the unique map making the following square commute \[\begin{pic}[xscale=4,yscale=1.5] \node (tl) at (0,1) {$S \otimes \colim D(U,I)$}; \node (tr) at (1,1) {$S$}; \node (bl) at (0,0) {$\colim D(U,I) \otimes \colim D(U,I)$}; \node (br) at (1,0) {$\colim D(U,I)$}; \draw[->,dashed] (bl) to (br); \draw[->] (tl) to node[above]{$S \otimes s_U$} (tr); \draw[->] (tl) to node[left]{$c_s \otimes \colim D(U,I)$} (bl); \draw[->] (tr) to node[right]{$c_s$} (br); \end{pic}\] is invertible. But this map is just $\colim D(U,I) \otimes s_U$, so $s_U$ is a subunit. \end{proof} We can now characterise {locale-based} categories purely categorically. \begin{theorem} \label{thm:spatialcharacterisation} A stiff category $\cat{C}$ has universal (finite, directed) joins if and only if for each idempotent (and finitely bounded, directed) $U \subseteq \ISub(\cat{C})$: \begin{itemize} \item the diagram $D(U,X)$ has a colimit; \item the canonical morphism~\eqref{eq:sU} is monic; \item the canonical morphism~\eqref{eq:deltaUX} is invertible. \end{itemize} \end{theorem} \begin{proof} The conditions are clearly necessary, as already discussed. 
Conversely, suppose that they hold and let $U \subseteq \ISub(\cat{C})$ be as above. Lemma~\ref{lem:idempotentsetsofidempotentsubunits} lets us assume $U = \ensuremath{\mathop{\downarrow}} U$. Then $s_U \colon \colim D(U,I) \to I$ is a subunit by Lemma~\ref{lem:spatialcharacterisation:monic}, and by definition $s \leq s_U$ for all $s \in U$. Now suppose that $t$ is also an upper bound in $\ISub(\cat{C})$ of all $s \in U$. Then the inclusions $i_{s,t} \colon S \to T$ form a cocone over $D(U,I)$. Hence there is a unique mediating map $f \colon \colim D(U,I) \to T$ with $i_{s,t} = f \circ c_s^I$ for all $s \in U$. But then \[ t \circ f \circ c_s^I = t \circ i_{s,t} = s = s_U \circ c_s^I \] for all $s \in U$. Because the $c_s^I$ are jointly epic, $t \circ f = s_U$, so that $s_U \leq t$. Therefore indeed $\colim D(U,I) = \bigvee U$. Thus universal finite or directed joins follow by Proposition~\ref{prop:unify}, and so arbitrary ones by Corollary~\ref{cor:spatialiffboth}. \end{proof} \section{Completions} \label{sec:broad} Our goal for this section is to embed a stiff category $\cat{C}$ into one with any given kind of universal joins of subunits, including a {locale-based} category. One might think to work with the free cocompletion of $\cat{C}$, the category of presheaves $\Psh{\cat{C}}=[\cat{C}\ensuremath{^{\mathrm{op}}}, \cat{Set}]$. Here, $\Psh{\cat{C}}$ is endowed with the Day convolution $\ensuremath{\mathop{\widehat{\otimes}}}$ as tensor; for details see \ref{sec:day}. Although $\Psh{\cat{C}}$ has a complete lattice of subunits, we will see that it has two problems: it is in general not firm, and it has too many subunits to be the {locale-based} completion. We will remedy both problems by passing to a full subcategory of so-called broad presheaves. 
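Explicitly, the Day tensor of presheaves $F$ and $G$ is given by the standard coend formula \[ (F \ensuremath{\mathop{\widehat{\otimes}}} G)(A) = \int^{B,C} F(B) \times G(C) \times \cat{C}(A, B \otimes C), \] with monoidal unit the representable presheaf $\cat{C}(-,I)$; in particular $\cat{C}(-,X) \ensuremath{\mathop{\widehat{\otimes}}} \cat{C}(-,Y) \simeq \cat{C}(-,X \otimes Y)$, so the Yoneda embedding is strong monoidal.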
First, note that any subunit $s$ in a firm category $\cat{C}$ induces a subunit ${s \circ (-)} \colon \cat{C}(-,S) \to \cat{C}(-,I)$ in $\Psh{\cat{C}}$ since the Yoneda embedding is monoidal, full, and faithful, and preserves all limits and hence monomorphisms. \begin{proposition}\label{prop:colimitspresheaves} If $\cat{C}$ is a cocomplete regular category, and for all objects $A$ the functors $A \otimes (-)$ preserve colimits, then $\ISub(\cat{C})$ is a complete lattice. Thus, if $\cat{C}$ is any braided monoidal category, then $\ISub(\Psh{\cat{C}})$ is a complete lattice. \end{proposition} \begin{proof} In cocomplete regular categories, the subobjects of a fixed object form a complete lattice~\cite[Proposition~4.2.6]{borceux:1}. Explicitly, let $s_i \colon S_i \rightarrowtail I$ be a family of subunits. Choose a coproduct $c_i \colon S_i \to C$. The unique mediating map $C \to I$ factors through a monomorphism $\bigvee s_i \colon S \rightarrowtail I$, which is the supremum. \[\begin{tikzpicture}[xscale=2,yscale=1] \node (Si) at (0,2.5) {$S_i$}; \node (Sj) at (2,2.5) {$S_j$}; \node (C) at (1,2) {$C$}; \node (S) at (1,1) {$S$}; \node (I) at (1,0) {$I$}; \draw[->] (Si) to node[below]{$c_i$} (C); \draw[->] (Sj) to node[below]{$c_j$} (C); \draw[>->] (Si) to[out=-90,in=150] node[left]{$s_i$} (I); \draw[>->] (Sj) to[out=-90,in=30] node[right]{$s_j$} (I); \draw[->>,dashed] (C) to node[right]{$e$} (S); \draw[>->,dashed] (S) to node[right]{$s$} (I); \end{tikzpicture}\] Next we show that $\bigvee s_i$ is a subunit. Let $c = s \circ e \colon C \to I$. We claim that \[\begin{pic}[xscale=5] \node (tl) at (0,1) {$C \otimes C$}; \node (tr) at (1,1) {$C$}; \node (bl) at (.5,0) {$\coprod_i S_i \otimes C$}; \draw[->] (tl) to node[above]{$C \otimes c$} (tr); \draw[->] (tl) to node[left=3mm]{$\simeq$} (bl); \draw[->] (bl) to node[right, pos=0.1]{$\ \ \ \ \ \coprod_i (S_i \otimes c)$} (tr); \end{pic}\] is a regular epimorphism. 
Since colimits commute with colimits, it suffices to check that each $S_i \otimes c$ is a regular epimorphism. But this is so: if $S_i \otimes c = m \circ f$ for some regular epimorphism $f$ and monomorphism $m$, then $m \circ f \circ (S_i \otimes c_i) = (S_i \otimes c) \circ (S_i \otimes c_i) = S_i \otimes s_i$ is an isomorphism by idempotence of $s_i$, so that $m$ is split epic as well as monic and hence an isomorphism. Now the topmost two rectangles in the following diagram commute. \[\begin{tikzpicture}[xscale=4,yscale=1.25] \node (Si) at (.5,2.5) {$S_i$}; \node (C) at (1,2) {$C$}; \node (S) at (1,1) {$S$}; \node (I) at (1,0) {$I$}; \node (II) at (2,0) {$I \otimes I$}; \node (SS) at (2,1) {$S \otimes S$}; \node (CC) at (2,2) {$C \otimes C$}; \node (SiSi) at (2.5,2.5) {$S_i \otimes S_i$}; \draw[->] (SiSi) to node[above]{$S_i \otimes s_i$} (Si); \draw[->>] (CC) to node[above]{$C \otimes c$} (C); \draw[->] (SS) to node[above]{$\lambda_S \circ (S \otimes s)$} (S); \draw[->] (II) to node[below]{$\lambda_I$} (I); \draw[>->] (S) to node[left]{$s$} (I); \draw[->>] (C) to node[left]{$e$} (S); \draw[>->] (SS) to node[right]{$s \otimes s$} (II); \draw[->>] (CC) to node[right]{$e \otimes e$} (SS); \draw[->] (Si) to node[below]{$c_i$} (C); \draw[->] (SiSi) to node[right=2mm,pos=.9]{$c_i \otimes c_i$} (CC); \draw[->] (SiSi) to[out=-90,in=0,looseness=.4] node[right]{$s_i \otimes s_i$} (II); \draw[->] (Si) to[out=-90,in=180,looseness=.5] node[left]{$s_i$} (I); \end{tikzpicture}\] The left and right triangles commute by construction, and the bottom rectangle commutes by bifunctoriality of the tensor and naturality of $\lambda$. Because $e$ is a coequaliser, so are $C \otimes e$ and $e \otimes S$, and hence so is $e \otimes e$. Therefore both vertical morphisms factor as regular epimorphisms followed by monomorphisms, and the mediating morphism, which must be $\lambda_S \circ (S \otimes s)$ by uniqueness, is an isomorphism. Thus $S \otimes s$ is an isomorphism, as required. 
The second statement now follows, because $\Psh{\cat{C}}$ is regular and cocomplete, and the functors $F \ensuremath{\mathop{\widehat{\otimes}}} (-)$ are cocontinuous~\cite{imkelly:day}. \end{proof} However, the subunits in $\Psh{\cat{C}}$ are in general not well behaved. { \begin{proposition} \label{prop:PshFirmCounter} Consider the commutative monoid $M=[0,1) \times [0,\infty)$ under \[ (a,b) + (c,d) = \begin{cases} (a+c, b+d) & \text{ if } a + c < 1 \\ (a + c - 1, b + d + 1) & \text{ if } a + c \geq 1 \end{cases} \] with unit $(0,0)$. Then $M$ is a firm one-object category, but $\Psh{M}$ is not firm. \end{proposition} } \begin{proof} The identity $(0,0)$ represents the only subunit of the one-object category $M$, which is therefore firm. \ref{sec:day} proves that $\Psh{M}$ is not firm. \end{proof} Moreover, $\Psh{\cat{C}}$ may have subunits that are not suprema of subunits of $\cat{C}$. { \begin{proposition}\label{prop:daycounterexample} In general $\ISub(\Psh{\cat{C}})$ is not the free frame on $\ISub(\cat{C})$. \end{proposition} } \begin{proof} Consider a commutative unital quantale $Q$ as a firm category. By their description in \ref{sec:day}, any subunit in $\widehat{Q}$ is given by a suitable downward closed subset $S \subseteq \ensuremath{\mathop{\downarrow}} \qunit \subseteq Q$ such that $\forall x \in S\, \exists y,z \in S \colon x \leq yz$, and to be a subunit it suffices for $S$ to be directed. In particular, take $Q=[0,\infty]$ under the opposite order and addition. Then $\ISub(Q) = \{ 0, \infty \}$, whose free completion to a frame is its collection of downsets $ \big\{ \emptyset, \{\infty\}, \{0,\infty\} \big\} $. However, by the above description of subunits in $\widehat{Q}$ it is easy to see that $ \ISub(\widehat{Q}) \supseteq \big\{ \emptyset, \{\infty\}, [0,\infty], (0,\infty] \big\} $. \end{proof} Instead, to complete $\ISub(\cat{C})$ to a distributive lattice, preframe, or frame, we will consider certain full subcategories of $\Psh{\cat{C}}$. 
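As a small illustrative sanity check of the quantale computation above (a sketch of ours, not part of the formal development): truncating $Q = [0,\infty]$ to a finite chain $\{0,\dots,n\}$ with bounded addition as multiplication and the opposite order, and taking the subunits to be the idempotent elements below the unit as in {Proposition}~\ref{ex:quantale}, one finds exactly two subunits, namely $0$ and the top element $n$ standing in for $\infty$.

```python
# Finite stand-in for the quantale Q = [0, oo] with the opposite order
# and addition as quantale multiplication (names and setup are ours).
n = 5                # top element, playing the role of infinity
Q = range(n + 1)
unit = 0             # unit of addition

def tensor(a, b):
    """Quantale multiplication: truncated addition."""
    return min(a + b, n)

def leq(a, b):
    """The opposite of the usual order on {0, ..., n}."""
    return a >= b

# Subunits: elements s below the unit with s * s = s.
subunits = sorted(s for s in Q if leq(s, unit) and tensor(s, s) == s)
print(subunits)  # [0, 5]: only 0 and "infinity", matching ISub(Q) = {0, oo}
```

Of course this finite model cannot exhibit the extra subunits of $\widehat{Q}$ found in the proof above; it only illustrates why $\ISub(Q)$ itself has just two elements.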
\begin{definition}\label{def:broad} A presheaf on a braided monoidal category $\cat{C}$ is \emph{(finitely, directedly) broad} when it is naturally isomorphic to one of the form \[ \broad{U}{X} \colon A \mapsto \{ f \colon A \to X \mid \text{$f$ restricts to some $s \in U$} \} \] for a (finitely bounded, directed) family $U$ of subunits and an object $X$. Write $\Spsh{\cat{C}}$ ($\FUsh{\cat{C}}$, $\DUsh{\cat{C}}$) for the full subcategory of (finitely, directedly) broad presheaves. We will also write $\widehat{U}$ for $\broad{U}{I}$, and $\widehat{X}$ for $\broad{\{1\}}{X}$. \end{definition} We will see below that the broad presheaves are precisely the colimits of the diagrams $D(\{\hat{s} \mid s \in U\}, \hat{X})$, and leave open the possibility of characterising when a given presheaf is broad in terms not referring to $U$ or $X$. The following lemma shows that broad presheaves are closed under (Day) tensor products and so form a monoidal category. \begin{lemma}\label{lem:broadtensor} For any objects $X$, $Y$ and families of subunits $U$, $V$ in a stiff category $\cat{C}$, there is a (unique) natural isomorphism making \begin{equation} \label{eq:day-useful} \begin{pic}[xscale=4,yscale=1.5] \node (tl) at (0,1) {$\broad{U}{X} \ensuremath{\mathop{\widehat{\otimes}}} \broad{V}{Y}$}; \node (tr) at (0,0) {$\widehat{X} \ensuremath{\mathop{\widehat{\otimes}}} \widehat{Y}$}; \node (bl) at (1,1) {$\broad{U \otimes V}{X \otimes Y}$}; \node (br) at (1,0) {$\widehat{X \otimes Y}$}; \draw[->] (tl) to node[left]{$u \ensuremath{\mathop{\widehat{\otimes}}} v$} (tr); \draw[->,dashed] (tl) to node[above]{$\simeq$} (bl); \draw[->] (tr) to node[above]{$\simeq$} (br); \draw[>->] (bl) to (br); \end{pic} \end{equation} commute, where $U \otimes V = \{ s \wedge t \mid s \in U, t \in V \}$, and $u, v$ are the inclusions. \end{lemma} \begin{proof} See \ref{sec:day}. \end{proof} We now describe the subunits in each completion. 
\begin{proposition}\label{prop:broadidempotent} If $\cat{C}$ is stiff, the subunits in $\Spsh{\cat{C}}$ ($\FUsh{\cat{C}}$, $\DUsh{\cat{C}}$) are the presheaves of the form $\widehat{U}$ for (finitely bounded, directed) $U \subseteq \ISub(\cat{C})$. \end{proposition} \begin{proof} Clearly $\widehat{U}$ is a subunit. Conversely, if $\eta \colon \broad{U}{X} \to \widehat{I}$ is a subunit, then we will prove that \[ s_X = \eta_{S \otimes X}(s \otimes X) \colon S \otimes X \to I \] is a subunit in $\cat{C}$ for each $s \in U$. Given this, let $U' = \{ s_X \mid s \in U \}$, noting that $\widehat{U'}$ again belongs to each respective category, and consider the function $\broad{U}{X}(A) \to \broad{U'}{I}(A)$ given by $((s \otimes X) \circ f) \mapsto s_X \circ f$. It is surjective by definition of $U'$, clearly natural, and is well-defined and injective since \begin{align*} s_X \circ f = s'_X \circ f' & \iff \eta(s \otimes X) \circ f = \eta(s' \otimes X) \circ f' \\ & \iff \eta((s \otimes X) \circ f) = \eta((s' \otimes X) \circ f') \\ & \iff (s \otimes X) \circ f = (s' \otimes X) \circ f' \end{align*} by naturality and injectivity of $\eta$. Let us show that $s_X$ is indeed a subunit. By stiffness of $\cat{C}$ each morphism $(s \otimes X)$ is monic, and so by the above argument $s_X$ is, too. Next we show $s_X \otimes S \otimes X$ is invertible. Notice that $\broad{U}{X} = \broad{\ensuremath{\mathop{\downarrow}} U}{X}$, so we may assume that $U$ is idempotent. The fact that $\eta$ is a subunit means precisely that each map \begin{align}\label{eq:idempotentpresheafbijection}\tag{$*$} \broad{U}{X \otimes X}(A) & \to \broad{U}{X}(A) \notag\\ (s \otimes (X \otimes X)) \circ f & \mapsto (s_X \otimes X) \circ f \end{align} is a well-defined bijection, where $f \colon A \to S \otimes X \otimes X$ and $s \in U$. 
Now note that $S \otimes s_X \otimes X$ is monic, since by injectivity of~\eqref{eq:idempotentpresheafbijection}, $s_X \otimes X$ is monic, and it is easy to see from stiffness that for any subunit $s$ and monomorphism $m$ the morphism $S \otimes m$ is again monic. Moreover it is split epic and hence an isomorphism, since by surjectivity of~\eqref{eq:idempotentpresheafbijection} there is some $f$ with $(s_X \otimes X) \circ f = s \otimes X$, and $S \otimes (s \otimes X)$ is always split epic by idempotence of $s$. \end{proof} Recall that the downsets of any semilattice form its free completion to a frame; likewise its free completion to a preframe is given by its collection of directed downsets~\cite[Theorem~9.1.5]{vickers:topology}, and its free completion to a distributive lattice by its finitely bounded downsets~\cite[I.4.8]{johnstone:stonespaces}, with (directed, finite) joins given by unions. \begin{corollary} The subunits in $\FUsh{\cat{C}}$, $\DUsh{\cat{C}}$, and $\Spsh{\cat{C}}$, are the free completion of $\ISub(\cat{C})$ to a distributive lattice, preframe, and frame, respectively. \end{corollary} \begin{proof} For any $U, V \subseteq \ISub(\cat{C})$ it is easy to see that $\widehat{U} \leq \widehat{V} \iff U \subseteq \ensuremath{\mathop{\downarrow}} V$. In particular $\widehat{U} = \widehat{\ensuremath{\mathop{\downarrow}}\! U}$ as we have already noted. Hence by Proposition~\ref{prop:broadidempotent}, subunits in each category correspond to the respective kinds of downset $U \subseteq \ISub(\cat{C})$. \end{proof} Next let us note that each of our constructions is again stiff. \begin{lemma}\label{lem:broadpresheavesstiff} If a monoidal category $\cat{C}$ is stiff, then so are $\DUsh{\cat{C}}$, $\FUsh{\cat{C}}$ and $\Spsh{\cat{C}}$. \end{lemma} \begin{proof} For any object $\broad{U}{X}$ and subunit $V \colon \widehat{V} \to \widehat{I}$ in $\Spsh{\cat{C}}$ we need to show that the morphism $\broad{U}{X} \otimes V$ is monic. 
This holds since the obvious morphism $\broad{U}{X} \otimes \widehat{V} \to \widehat{X}$ factors over it, and is itself monic by equation~\eqref{eq:day-useful} of Lemma~\ref{lem:broadtensor}. By the same result, for the pullback property we must show each diagram \[ \begin{pic}[xscale=4,yscale=1.5] \node (tl) at (0,1) {$\broad{U \otimes V \otimes W}{X}$}; \node (tr) at (1,1) {$\broad{U \otimes W}{X}$}; \node (bl) at (0,0) {$\broad{V \otimes W}{X}$}; \node (br) at (1,0) {$\broad{W}{X}$}; \draw[>->] (tl) to node[above]{} (tr); \draw[>->] (tl) to node[left]{} (bl); \draw[>->] (tr) to node[right]{} (br); \draw[>->] (bl) to node[below]{} (br); \draw (.1,.7) to (.15,.7) to (.15,.825); \end{pic} \] to be a pullback in $\Spsh{{\cat{C}}}$. For this it suffices to check that applying the diagram to each object $A$ yields a pullback in $\cat{Set}$, or equivalently that any morphism $f \colon A \to X$ factoring over $u \otimes w \otimes X$ and $v \otimes w' \otimes X$ for some $u \in U, v \in V$ and $w, w' \in W$ factors over $u' \otimes v' \otimes w'' \otimes X$ for some $u' \in U, v' \in V, w'' \in W$. But this follows easily from the pullbacks~\eqref{eq:stiff-pullback} taking $u'=u$, $v'=v$ and $w'' = w \wedge w'$, again for convenience assuming $W$ to be idempotent. \end{proof} The next lemma shows that $\Spsh{\cat{C}}$ formally adds to $\cat{C}$ the colimits of the diagrams $D(U,X)$ for all suitable $U \subseteq \ISub(\cat{C})$ and objects $X$. \begin{lemma}\label{lem:morphismsofbroadpresheaves} Let $\cat{C}$ be firm, and let $U, V \subseteq \ISub(\cat{C})$ be idempotent. Morphisms $\alpha \colon \broad{U}{X} \to \broad{V}{Y}$ of broad presheaves correspond to cocones $c_s \colon S \otimes X \to Y$ over $D(U, X)$ for which each $c_s$ restricts to some $t \in V$. \end{lemma} \begin{proof} Given $\alpha$ and $s \in U$, by naturality we may define such a cocone by $c_s = \alpha_{S \otimes X}(s \otimes X)$. 
Conversely, given a cocone as above define \[ \alpha_A\big((s \otimes X) \circ g\big) = c_s \circ g \] for each $g \colon A \to S \otimes X$. This is clearly natural and is well-defined; indeed if $(s \otimes X) \circ g = (t \otimes X) \circ h$ then, since~\eqref{eq:stiff-pullback} is a pullback, this morphism factors as $(s \otimes t \otimes X) \circ k$ for some $k$, and then $c_s \circ g = c_{s \wedge t} \circ k = c_t \circ h$ since the $c_s$ form a cocone. Clearly these two assignments are inverses. \end{proof} Finally we can prove that our free constructions have the desired properties. \begin{theorem}\label{thm:spatialcompletion} If $\cat{C}$ is a stiff category, then: \begin{itemize} \item $\FUsh{\cat{C}}$ has universal finite joins of subunits; \item $\DUsh{\cat{C}}$ has universal directed joins of subunits; \item $\Spsh{\cat{C}}$ is {locale-based}. \end{itemize} \end{theorem} \begin{proof} Consider the final statement first. Lemma~\ref{lem:broadpresheavesstiff} makes $\Spsh{\cat{C}}$ stiff. Let $\mathcal{U}$ be an idempotent family of subunits in $\Spsh{\cat{C}}$. By Proposition~\ref{prop:broadidempotent}, its elements are of the form $\widehat{U}$ for some $U \subseteq \ISub(\cat{C})$. Also, its supremum in $\ISub(\Spsh{\cat{C}})$ is given by $\broad{\bigcup \mathcal{U}}{I}$ where we write $\bigcup \mathcal{U} = \bigcup \{ U \mid \widehat{U} \in \mathcal{U} \}$. Let $V \subseteq \ISub(\cat{C})$, and let $Y$ be an object in $\cat{C}$. We have to prove that the inclusions $\widehat{U} \ensuremath{\mathop{\widehat{\otimes}}} \broad{V}{Y} \to \widehat{\bigcup \mathcal{U}} \ensuremath{\mathop{\widehat{\otimes}}} \broad{V}{Y}$ are a colimit of the diagram $D(\mathcal{U},\broad{V}{Y})$ in $\Spsh{\cat{C}}$. By Lemma~\ref{lem:broadtensor}, we may equivalently consider the inclusions \[ \broad{U \otimes V}{Y} \hookrightarrow \broad{ (\bigcup \mathcal{U}) \otimes V }{Y}\text. \] These certainly form a cocone. The question is whether it is universal. 
Suppose that $\alpha_U \colon \broad{U \otimes V}{Y} \to \broad{W}{Z}$ is another cocone. Define a natural transformation $\beta \colon \broad{ (\bigcup \mathcal{U}) \otimes V}{Y} \to \broad{W}{Z}$ by $\beta_A(f) = (\alpha_U)_A(f)$ for any $f \colon A \to Y$ that restricts to $U \in \mathcal{U}$. Now $\beta$ is indeed well-defined, since if $f$ also restricts to $U' \in \mathcal{U}$ then by the pullback~\eqref{eq:stiff-pullback}, it also restricts to $U \cap U' \in \mathcal{U}$, so that $(\alpha_U)_A(f) = (\alpha_{U \cap U'})_A(f) = (\alpha_{U'})_A(f)$. By definition $\beta$ is the unique natural transformation making the following triangle commute: \[\begin{pic}[xscale=6,yscale=1.5] \node (tl) at (0,0.7) {$\broad{U\otimes V}{Y}$}; \node (tr) at (0.7,0.7) {$\broad{(\bigcup \mathcal{U}) \otimes V}{Y}$}; \node (br) at (0.7,0) {$\broad{W}{Z}$}; \draw[>->] (tl) to (tr); \draw[->] (tr) to node[right]{$\beta$} (br); \draw[->] (tl) to node[below]{$\alpha_U$} (br); \end{pic}\] Hence the inclusions indeed form a colimit, and $\Spsh{\cat{C}}$ is {locale-based}. The proofs of the first two statements are identical, observing that if $U, V \subseteq \ISub(\cat{C})$ and $\mathcal{U} \subseteq \ISub(\FUsh{\cat{C}})$ or $\ISub(\DUsh{\cat{C}})$ are finitely bounded or directed, then so are $U \otimes V$ and $\bigcup \mathcal{U}$. \end{proof} We end this section by showing that the {locale-based} completion cannot be read in the traditional topological sense, in that broad presheaves are not sheaves for any Grothendieck topology. \begin{proposition} There is a firm category $\cat{C}$ for which there is no Grothendieck topology $J$ with $\Spsh{\cat{C}} \simeq \mathrm{Sh}(\cat{C},J)$. \end{proposition} \begin{proof} Suppose that $\Spsh{\cat{C}}$ is a Grothendieck topos. Then it is a reflective subcategory of $\Psh{\cat{C}}$~\cite[Proposition~3.5.4]{borceux:3}.
Hence $\Spsh{\cat{C}}$ has a terminal object $\broad{U}{X}$ that, because right adjoints preserve limits, must equal the terminal object of $\Psh{\cat{C}}$. Therefore, for all objects $A$ of $\cat{C}$, the set $\broad{U}{X}(A)$ must be a singleton. This means that for all objects $A$, there is a unique morphism $A \to X$ that restricts to some $s \in U$. Suppose $\ISub(\cat{C})=\{I\}$. Since every morphism restricts to $I$, now $X$ must be a terminal object. But there exists a braided monoidal category $\cat{C}$ with only one subunit but no terminal object: any nontrivial abelian group. \end{proof} \begin{remark} In future it would be natural to consider the above completions with presheaves valued in a category other than $\cat{Set}$~\cite{borceuxquinteiro:sheaves}. After all, {Proposition}~\ref{ex:quantale} is enriched over complete lattices, {Proposition}~\ref{prop:modules} is enriched over abelian groups, and {Proposition}~\ref{prop:hilbertmodules} is enriched over normed vector spaces. Proposition~\ref{prop:colimitspresheaves} holds for enriching categories $\cat{V}$ that are complete, cocomplete, locally small, and symmetric monoidal closed~\cite{imkelly:day}, covering all these examples. But an enriched version of Definition~\ref{def:broad} would require taking the subobject of $[A,X]$ in $\cat{V}$ that restricts to some $s \in U$. \end{remark} \section{Universality of the completions} \label{sec:completion} Finally, let us prove that the {locale-based} completion $\Spsh{\cat{C}}$ and our other constructions $\FUsh{\cat{C}}$ and $\DUsh{\cat{C}}$ indeed have universal properties. \begin{definition} A \emph{morphism} of categories with universal (finite, directed) joins of subunits is a braided monoidal functor $F \colon \cat{C} \to \cat{D}$ that preserves subunits and their (finite, directed) suprema. For short we call morphisms of categories with universal joins of subunits simply morphisms of {locale-based} categories. 
\end{definition} Here, a functor $F$ is monoidal when it comes equipped with coherent isomorphisms $\varphi_{A,B} \colon F(A) \otimes F(B) \to F(A \otimes B)$ and $\varphi \colon I \to F(I)$; these need to be invertible to make sense of preservation of subunits: if $s \in \ISub(\cat{C})$, then $\varphi^{-1} \circ F(s) \in \ISub(\cat{D})$. By Lemma~\ref{lem:cocone} and Theorem~\ref{thm:spatialcharacterisation}, a morphism is equivalently a braided monoidal functor $F \colon \cat{C} \to \cat{D}$ with $F\big(\colim D(U,X)\big) = \colim D\big(F(U),F(X)\big)$ for (finitely bounded, directed) idempotent $U \subseteq \ISub(\cat{C})$ and objects $X$ of $\cat{C}$. \begin{definition} The \emph{{locale-based} completion} of a braided monoidal category $\cat{C}$ is a monoidal functor $y \colon \cat{C} \to \cat{D}$ that preserves subunits such that $\cat{D}$ is {locale-based}, and any monoidal functor $\cat{C} \to \cat{E}$ into a {locale-based} category that preserves subunits factors as $y$ followed by a morphism of {locale-based} categories $G$ that is unique up to a unique monoidal natural isomorphism $\gamma$ with $\gamma_y = \id[G]$. \[\begin{pic}[xscale=15,yscale=1.5] \node (tl) at (0,1) {$\cat{C}$}; \node (tr) at (0.5,1) {$\cat{D}$}; \node (bl) at (.5,0) {$\cat{E}$}; \draw[->] (tl) to node[above]{\begin{tabular}{c}monoidal,\\preserves subunits\end{tabular}} (tr); \draw[->,dashed] (tr) to node[right]{{locale-based}} (bl); \draw[->] (tl) to node[below]{\begin{tabular}{c}monoidal,\\preserves subunits\end{tabular}} (bl); \end{pic}\] A \emph{completion under universal finite} or \emph{directed joins of subunits} of $\cat{C}$ is defined similarly. 
\end{definition} \begin{theorem}\label{thm:spatialcompletionuniversal} If $\cat{C}$ is a stiff category, then via the Yoneda embedding its \begin{itemize} \item completion under universal finite joins of subunits is $\FUsh{\cat{C}}$; \item completion under universal directed joins of subunits is $\DUsh{\cat{C}}$; \item {locale-based} completion is $\Spsh{\cat{C}}$. \end{itemize} \end{theorem} \begin{proof} We prove the {locale-based} case, the others being identical. For any monoidal functor $F \colon \cat{C} \to \cat{D}$ into a {locale-based} category, we need to show that there is a morphism $\overline{F} \colon \Spsh{\cat{C}} \to \cat{D}$ with $\overline{F} \circ y = F$, where $y$ is the Yoneda embedding. Because $\broad{U}{X} = \broad{\ensuremath{\mathop{\downarrow}} U}{X}$ for any $U \subseteq \ISub(\cat{C})$, we may assume that $U$ is idempotent. Because $F$ is monoidal, $F(U)$ is idempotent too. On objects, the requirement $\overline{F} \circ y = F$ forces us to define \begin{align*} \overline{F}\broad{U}{X} &=\overline{F}\big(\colim D(y(U),\widehat{X})\big) \\ &=\colim D\big(\overline{F} \circ y(U), \overline{F} \circ y(X)\big) \\ &=\colim D\big(F(U), F(X)\big) \\ &\simeq \big(\bigvee F(U)\big) \otimes F(X)\text. \end{align*} Now consider morphisms of (broad) presheaves. Any $\alpha \colon \broad{U}{X} \to \broad{V}{Y}$ induces a cocone $\alpha_s = \alpha_{S \otimes X}(s \otimes X) \colon S \otimes X \to Y$ over $D(U,X)$, where, as in Lemma~\ref{lem:morphismsofbroadpresheaves}, each such map factors through $t \otimes Y$ for some $t \in V$. Hence $F(\alpha_s)$ factors through $F(t) \otimes F(Y)$ and hence $\colim D\big(F(V),F(Y)\big) = \overline{F}\broad{V}{Y}$, giving a morphism $\beta_s$ as below. 
\[\begin{pic}[xscale=5,yscale=1.5] \node (tl) at (0,2) {$F(S) \otimes F(X)$}; \node (tm) at (0.7,2) {$F(S \otimes X)$}; \node (tr) at (1.5,2) {$F(Y)$}; \node (bl) at (0,0) {$\overline{F}\broad{U}{X}$}; \node (br) at (1.5,0) {$\overline{F}\broad{V}{Y}$}; \node (ml) at (0,1) {$\overline{F}\broad{\{s\}}{X}$}; \node (mr) at (1.5,1) {{$\overline{F}(\widehat{Y})$}}; \draw[->] (tl) to node[above]{$\simeq$} (tm); \draw[->] (tm) to node[above]{$F(\alpha_s)$} (tr); \draw[->,dashed] (ml) to node[above]{$\beta_s$} (br); \draw[->] (tl) to node[left]{$\simeq$} (ml); \draw[>->] (br) to node[left]{} (mr); \draw[->,dashed] (bl) to node[below]{$\overline{F}(\alpha)$} (br); \draw[-, double distance=.75mm] (mr) to (tr); \draw[>->] (ml) to node[above]{$\overline{F}\big(\alpha_s \circ (-)\big)$} (mr); \draw[->] (ml) to (bl); \draw[>->] (br) to (mr); \end{pic}\] By Lemma~\ref{lem:cocone}, the upper row forms a cocone over $D\big(F(U),F(X)\big)$ with $s$ ranging over $U$. Because the vertical composite on the right is monic, the $\beta_s$ also form a cocone (after composition with the upper left vertical isomorphism). But $\overline{F}\broad{U}{X}$ is a colimit, so there is a mediating map $\overline{F}(\alpha)$ making the diagram commute. Uniqueness of this map makes $\overline{F}$ functorial. Given our definition of $\overline{F}$ on objects, this assignment $\overline{F}(\alpha)$ is unique with $\overline{F} \circ y = F$, since for each $s \in S$ the lower square commutes by functoriality, with the lower left vertical morphisms forming a colimit. 
Next, $\overline{F}$ may readily be checked to be (strong) braided monoidal: \begin{align*} \overline{F}(\broad{U}{X} \ensuremath{\mathop{\widehat{\otimes}}} \broad{V}{Y}) &\simeq \overline{F}\broad{U \otimes V}{X \otimes Y} \\ &\simeq \big(\bigvee_{s \in U, t \in V} F(s) \wedge F(t) \big) \otimes F(X) \otimes F(Y) \\ &\simeq \bigvee F(U) \otimes \bigvee F(V) \otimes F(X) \otimes F(Y) \\ &\simeq \overline{F}{\broad{U}{X}} \otimes \overline{F}{\broad{V}{Y}} \end{align*} By construction $\overline{F}$ preserves subunits because $\overline{F}\broad{U}{I} = \bigvee F(U)$, as well as their suprema: \[ \overline{F}\big(\bigvee_{U \in \mathcal{U}} \broad{U}{I}\big) = \overline{F}\broad{\bigcup \mathcal{U}}{I} \simeq \bigvee_{U \in \mathcal{U}} \bigvee_{s \in U} F(s) \simeq \bigvee_{U \in \mathcal{U}} \overline{F}\broad{U}{I} \] Hence $\overline{F}$ is indeed a morphism of {locale-based} categories. Finally, we must show for any other morphism $\overline{F}'$ with $\overline{F}' \circ y = F$ that there is a unique monoidal natural isomorphism $\gamma \colon \overline{F} \to \overline{F'}$ with $\gamma_y = \id[F]$. But this follows from the uniqueness of $\colim D\big(F(U),F(X)\big)$ up to unique isomorphism, and our statement above on the uniqueness of $\overline{F}(\alpha)$. \end{proof} We leave open the question how these completions relate to the free cocompletions in a left exact context in the case of toposes~\cite{menni:thesis}. Each construction is functorial; we consider the {locale-based} case in detail. Write {$\cat{LocBased}$} for the category of {locale-based} categories and their morphisms, and $\cat{Stiff}$ for the category of stiff categories and braided monoidal functors that preserve subunits. \begin{proposition} The map $\cat{C} \mapsto \Spsh{\cat{C}}$ defines a functor $\cat{Stiff} \to {\cat{LocBased}}$. 
\end{proposition} \begin{proof} For any $F \colon \cat{C} \to \cat{D}$ in $\cat{Stiff}$, define $\Spsh{\cat{C}} \to \Spsh{\cat{D}}$ on objects by $\broad{U}{X} \mapsto \broad{F(U)}{F(X)}$. We have seen that it suffices to consider when $U$ is idempotent. By Lemma~\ref{lem:morphismsofbroadpresheaves}, morphisms $\alpha \colon \broad{U}{X} \to \broad{V}{Y}$ are equivalently cocones over $D(U,X)$ each of whose legs factors over $t \otimes Y$ for some $t \in V$. Map such a cocone $c_s$ to the cocone $F(c_s)$ over $D(F(U),F(X))$. This is well-defined by Lemma~\ref{lem:cocone}, and clearly functorial. \end{proof} It follows from Theorem~\ref{thm:spatialcompletionuniversal} that the {locale-based} completion functor of the previous proposition is a left \emph{biadjoint} to the forgetful functor $\cat{{LocBased}} \to \cat{Stiff}$, when we make each category a strict 2-category with 2-cells being monoidal natural transformations (for this it suffices to check that each Yoneda embedding $\cat{C} \to \Spsh{\cat{C}}$ is a \emph{biuniversal arrow}~\cite[Theorem~9.16]{fiore2006pseudo}). The other constructions $\cat{C} \mapsto \FUsh{\cat{C}}$ and $\cat{C} \mapsto \DUsh{\cat{C}}$ similarly give left biadjoints; write $\cat{UnivFin}$ or $\cat{UnivDir}$ for the category of categories with universal finite or directed joins. \begin{theorem} The following cube of forgetful functors commutes, all functors in the top face have left biadjoints, and the rest have left adjoints. 
\[ \begin{pic}[xscale=5,yscale=2.5,cross line/.style={preaction={draw=white, -,line width=6pt}}] \node (slat) at (0,0) {$\cat{SemiLat}$}; \node (pref) at (.5,.5) {$\cat{PreFrame}$}; \node (fram) at (1.5,.5) {$\cat{Frame}$}; \node (dist) at (1,0) {$\cat{DistrLat}$}; \node (stif) at (0,1) {$\cat{Stiff}$}; \node (udir) at (.5,1.5) {$\cat{UnivDir}$}; \node (spat) at (1.5,1.5) {$\cat{{LocBased}}$}; \node (ufin) at (1,1) {$\cat{UnivFin}$}; \draw[->] (spat) to (ufin); \draw[->] (spat) to (udir); \draw[->] (udir) to (stif); \draw[->] (fram) to (pref); \draw[->] (fram) to (dist); \draw[->] (pref) to (slat); \draw[->] (stif) to (slat); \draw[->] (udir) to (pref); \draw[->] (dist) to (slat); \draw[->] (spat) to node[right]{$\ISub$} (fram); \draw[->, cross line] (ufin) to (dist); \draw[->, cross line] (ufin) to (stif); \end{pic} \] \end{theorem} \begin{proof} All functors in the bottom face have a left adjoint~\cite[Lemma~C1.1.3]{johnstone:elephant}. Explicitly: the free frame on a preframe is given by taking its Scott closed subsets~\cite[Proposition~1]{banaschewski:freeframe}, and we have already mentioned the free frame, preframe or distributive lattice on a semilattice. Observe that all these free constructions take certain types of downward closed subsets. Therefore they can be categorified from posets to categories that have universal joins of these types of subsets of subunits. The universal property of Theorem~\ref{thm:spatialcompletionuniversal} then holds in each case. Hence all functors in the top face of the cube have a left biadjoint. Finally, all vertical functors have a left adjoint as in {Proposition}~\ref{ex:semilattice}. \end{proof} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Synthetic media have become so realistic with the advancement of deep neural networks that they are often indiscernible from authentic content. However, synthetic media designed to deceive pose a dangerous threat to many communities around the world \cite{Cahlan,Ingram}. In this context, \textit{Deepfake} videos – which portray human subjects with altered identities or malicious/embarrassing actions – have emerged as a vehicle for misinformation. With the current advancement and growing availability of computing resources, sophisticated deepfakes have become more pervasive, especially to generate revenge pornography \cite{Hao} and defame celebrities or political targets \cite{Vaccari}. Hence, there is a critical need for automated systems that can effectively combat misinformation on the internet. To address this challenge, the vision community has conducted a series of excellent works on detecting deepfakes \cite{tolosana2020deepfakes,mirsky2021creation}. Sophisticated facial forgery detection tools \cite{Afchar,li2020face,liu2020global} and advanced training sets \cite{rossler2019faceforensics++v3,jiang2020deeperforensics10} were developed to train detectors capable of identifying deepfakes with high precision. Such research has also had real-world impact with Microsoft's release of Video Authenticator \cite{burt_horvitz_2020}, an automated tool trained on the publicly available FaceForensics++ dataset that analyzes a still photo or video and provides a percentage chance that the media is artificially manipulated. It works by detecting the blending boundary of the deepfake and subtle fading or grayscale elements that might not be detectable by the human eye.
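Video Authenticator's internals are not public; purely as an illustration, the sketch below shows one way per-frame detector scores could be pooled into a video-level ``percentage chance'' of manipulation. The sigmoid-and-average pooling here is our assumption, not Microsoft's published method.

```python
import math

def video_fakeness_percent(frame_logits):
    """Pool per-frame detector logits into one video-level
    'percentage chance the media is manipulated'.
    Pooling scheme is illustrative only."""
    probs = [1.0 / (1.0 + math.exp(-z)) for z in frame_logits]
    return 100.0 * sum(probs) / len(probs)

# Frames with confidently-fake logits yield a high video-level score.
score = video_fakeness_percent([2.0, 3.0, 1.5, 2.5])
```

Averaging probabilities rather than raw logits keeps the score bounded in $[0, 100]$ and limits the influence of a few extreme frames.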
On the other hand, Facebook has also been pioneering its own system to detect AI-generated profiles and ban hundreds of fake accounts, pages, posts, and social groups\footnote{https://about.fb.com/news/2019/12/removing-coordinated-inauthentic-behavior-from-georgia-vietnam-and-the-us/}, along with strengthening its policy on deepfakes and authentic media\footnote{https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/}. While these works have achieved good progress towards the prediction task, detecting fake videos at a low false-positive rate is still a challenging problem~\cite{li2020face}. Moreover, since most studies focus on the visual artifacts existing within deepfakes, little is discussed about how such systems perform on diverse groups of real people across gender and race, which is the common setting where personal profiles and videos are being audited en masse for authenticity via automated systems. In this context, a small percentage difference in false-positive rates between subgroups would mean that millions of people of a particular group are more likely to be mistakenly classified as fake. This draws a connection to fairness in machine learning, where growing concerns about unintended consequences from biased or flawed systems call for a careful and thorough examination of both datasets and models. Gender Shades \cite{gendershades} demonstrated how facial recognition systems discriminate across gender and race, showing a large gap in the accuracy of gender classifiers across different intersectional groups: darker-skinned females are misclassified in up to 34.7\% of cases, while the maximum error rate for lighter-skinned males is only 0.8\%. Others have shown that training with biased data has resulted in algorithmic discrimination \cite{bolukbasi2016man}.
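To make the scale argument concrete, here is a minimal sketch (toy labels and a hypothetical platform size, not data from any cited study) of how a small per-group gap in false-positive rate turns into absolute numbers of misflagged real users:

```python
def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN), computed over real (label 0) samples only."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

def expected_false_flags(fpr, group_population):
    """Real users of a group expected to be mistakenly flagged as fake."""
    return round(fpr * group_population)

# Toy audit on 100 real profiles per group: group B's FPR is 3 points higher.
fpr_a = false_positive_rate([0] * 100, [0] * 100)           # 0.00
fpr_b = false_positive_rate([0] * 100, [1] * 3 + [0] * 97)  # 0.03
# At a hypothetical 100 million users in group B, that 3-point gap means
# roughly 3 million extra real people flagged as fake.
misflagged = expected_false_flags(fpr_b - fpr_a, 100_000_000)
```

This is exactly why the audit below reports FPR per subgroup rather than only aggregate accuracy.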
Although many works have studied how to create fairer algorithms and benchmarked discrimination in various contexts \cite{Hardt2016EqualityOO,liu2018delayed}, few works have done this analysis for computer vision in the context of synthetic media and deepfakes. Our contributions are as follows: \begin{enumerate} \item We find that the FaceForensics++ dataset commonly used for training deepfake detectors is overwhelmingly composed of Caucasian subjects, with the majority (36.6\%) of videos featuring female Caucasian subjects. \item We also find that approaches to generate fake samples as \textit{positive} training signals tend to overwhelmingly produce ``irregular'' deepfakes, in which a person's face is swapped onto another person of a different race or gender, which leads to detectors learning spurious correlations between foreground faces and \textit{fakeness}. \item Using facial datasets balanced by gender and race, we find that classifiers designed to detect deepfakes have large predictive disparities across racial groups, with up to 10.7\% difference in error rate. \item Lastly, we observe that when detectors are trained with the Blended Images (BI) from Face X-Rays \cite{li2020face}, they develop systematic discrimination towards female Asian subjects. \end{enumerate} \section{Related Work} \subsection{Deepfake detection} Early deepfake forensic work focused on hand-crafted facial features such as eye colors, light reflections, and 3D head poses and movements. However, these approaches do not scale well to more advanced GAN-based deepfakes. To combat the new generation of deepfakes, researchers leverage deep learning and convolutional networks to automatically extract meaningful features for face forgery detection \cite{rossler2019faceforensics++v3}. Shallow networks such as MesoInception4 \cite{Afchar} and the patch-based CNN of \cite{chai2020makes} were developed to focus on low- and medium-level manipulation artifacts.
Deep networks, such as Xception \cite{rossler2019faceforensics++v3}, also demonstrated success, achieving state-of-the-art results by fine-tuning ImageNet-pretrained weights. Other lines of research examine resolution-inconsistent facial artifacts through spatial pyramid pooling modules (DSP-FWA \cite{Li}), blending artifacts via Face X-ray \cite{li2020face}, or temporal artifacts via dynamic prototypes \cite{trinhinterpretable}. FakeSpotter \cite{wang2020fakespotter} uses layer-wise neuron behaviors as features in addition to the output of the final layer. \subsection{Generalizability and robustness of detectors} With more advanced deepfake creations, recent works \cite{cozzolino2018forensictransfer,8553251} have shown that the performance of current detection models \textit{drops} drastically on new types of facial manipulations. A few works call for a closer investigation into the generalizability of deepfake detectors towards unseen manipulations. In particular, ForensicTransfer \cite{cozzolino2018forensictransfer} proposes an autoencoder-based network to transfer knowledge between different but related manipulations via the hidden latent space. Face X-ray \cite{li2020face} addressed the problem by focusing on the more general blending artifacts as well as creating a blended image dataset to help networks generalize across unseen manipulations. In addition to generalization, recent work has also demonstrated the vulnerability of deepfake detectors to adversarial attacks \cite{Carlini_2020_CVPR_Workshops}, where small tailored perturbations generated via either black-box or white-box attacks can easily fool the networks. This raises a concern about the robustness and commercial readiness of deepfake detectors. In contrast to complex adversarial attacks, our work examines the performance of deepfake detectors on natural images of subjects spanning different genders and diverse racial groups, and investigates the real-world consequences if deepfake detectors are commercially adopted.
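The attacks in \cite{Carlini_2020_CVPR_Workshops} target deep detector networks; as a self-contained stand-in, the sketch below applies the same gradient-sign idea (FGSM) to a two-feature logistic ``detector''. All weights and inputs here are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x is 'fake' under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One fast-gradient-sign step: x' = x + eps * sign(d loss / d x).
    For logistic regression with cross-entropy loss, the input
    gradient is (p - y) * w."""
    p = predict(w, b, x)
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

w, b = [1.0, -2.0], 0.0
x = [1.0, -0.5]                    # score 2.0 -> classified "fake"
x_adv = fgsm(w, b, x, y=1, eps=1.5)
# the perturbed input now scores below 0.5 -> classified "real"
```

Even this toy model flips its decision under a one-step perturbation, which is the phenomenon the cited work demonstrates at full scale on CNN detectors.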
\subsection{Algorithmic fairness and consequences} Concerns about malicious applications of AI and unintended consequences from flawed or biased systems have propelled many investigations into representational and algorithmic bias. Gender Shades \cite{gendershades} demonstrated how facial recognition systems discriminate across gender and race, especially for darker-skinned females. \cite{liu2018delayed} showed that common fairness criteria may in fact harm underrepresented or disadvantaged groups due to delayed outcomes. \cite{celis2019controlling} proposed a framework to combat echo chambers created by highly personalized recommendations on social media that reinforced people's biases and opinions. In terms of standardized approaches for the field, \cite{mitchell2019model} and \cite{gebru2018datasheets} recommend the usage of model cards and datasheets to better document the intended usage of models and data. Although many works have studied how to create fairer algorithms and benchmarked discrimination in various contexts \cite{Hardt2016EqualityOO,liu2018delayed}, we conduct a fairness analysis in the context of deepfake detection, which requires bookkeeping of the racial distribution of face swaps and providing subgroup-specific deepfakes for audit. \section{Deepfake Detection} We investigate 3 popular deepfake detection models of various sizes, architectures, and loss formulations, all with proven success in detecting deepfake videos. We trained MesoInception4 \cite{Afchar}, Xception \cite{rossler2019faceforensics++v3}, and Face X-Ray \cite{li2020face} on the FaceForensics++ dataset, which contains four variants of facial manipulations. For a fair comparison, we also cross-test the models' generalizability across datasets with unknown manipulations not seen in FaceForensics++, such as Google's DeepfakeDetection, Celeb-DF, and DeeperForensics-1.0. Our results match the state of the art; we then used these trained models to audit for fairness.
For more detailed information on training, testing, and cross evaluations, see Section \ref{appendix:detection_methods} and Table \ref{tab:gen} in the Appendix. \vspace{-0.1cm} \begin{figure*}[!htb] \centering \includegraphics[keepaspectratio=false,width=0.9\textwidth]{figures/dataset_statistics.png} \includegraphics[keepaspectratio=false,width=0.9\textwidth]{figures/dataset_sample.png} \vspace{-0.2cm} \caption{Examples and average faces of both the RFW and UTKFace database, along with their respective gender and racial distributions. In each row from top to bottom: Caucasian, African, Asian, Indian.} \label{fig:rfwutk} \vspace{-0.4cm} \end{figure*} \section{Deepfake Detection Audit} We evaluated the deepfake detectors in the previous section, trained using both the FF++ and Blended Image (BI) datasets. Overall, all detectors perform equally on real and deepfake images containing male and female subjects, and all detectors trained with BI perform worst on media with darker African faces. Further analysis of the intersectional subgroups reveals that media with male African faces have the lowest TPR and media with female Asian faces have the highest FPR. \subsection{Key findings on evaluated detectors} \begin{itemize} \item All detectors perform equally on male faces and female faces (0.1 - 0.3\% difference in error rate) \item All detectors trained with BI perform worst on darker faces from the African subgroup, especially male African faces (3.5 - 6.7\% difference in error rate) \item For detectors trained with BI, faces from the Asian subgroup have the highest FPR, especially female Asian faces (5.2 - 8.1\% diff.) \item For detectors trained with BI, faces from the African subgroup have the lowest TPR, especially male African faces (4.7 - 10.7\% diff.) \item FaceXRay + BI performs best on Caucasian faces, especially male Caucasian faces (9.8\%, 9.5\% error rate respectively). 
Meso4 and Xception detectors (with and without BI) perform best on Indian faces \item The maximum difference in error rate between the best and worst classified subgroups is 10.7\% \end{itemize} \subsection{Evaluation methodology} We describe in detail the datasets and metrics utilized in this work to audit deepfake detectors. We adapted racially aware and fair facial recognition datasets labeled with demographic information for our task. For evaluations, we measure AUC and binary classification metrics across different subgroups. \subsubsection{Auditing datasets} We utilized two face datasets labeled with demographic information: (1) Racial Face-in-the-Wild (RFW) \cite{RFW} and (2) UTKFace \cite{UTKFace}. RFW is a dedicated testing dataset manually created for studying racial bias in face recognition. RFW contains four testing subsets, namely Caucasian, Asian, Indian, and African, with images selected from MS-Celeb-1M. Each subset contains about 10K images of 3K individuals for face verification - all with similar distribution with respect to age, gender, yaw pose, and pitch pose. Images in RFW have been carefully and manually cleaned. UTKFace is a large-scale face dataset with a long age span. The dataset consists of over 20K face images with annotations of age, gender, and race. The race labels consist of five groups, namely Caucasian, African, Asian, Indian, and Others (like Hispanic, Latino, Middle Eastern). All images in UTKFace cover large variations in pose, facial expression, illumination, occlusion, and resolution. For both datasets, we preprocessed images similarly to the deepfake images used for detection training (see Section \ref{appendix:preprocessing}). Following RFW, we preserve the testing condition of the RFW dataset and do not alter the distribution of the images, which is adapted as the \textit{not-fake} portion of the testing dataset. For UTKFace, despite a large number of available images, it has a quite skewed distribution of racial groups. 
Hence, we did not use all labeled images but instead downsampled subgroups to achieve a balanced racial distribution similar to RFW. Figure \ref{fig:rfwutk} presents examples and average faces of both the RFW and UTKFace database, along with their respective gender and racial distributions. To obtain \textit{deepfakes} for the testing dataset along with their demographic labels, we utilized the provided 68 facial landmarks within UTKFace to construct blended images, following the exact methodology as in \cite{li2020face}. To remain ethnically aware and also maintain demographic information, pairs of faces selected for swapping via the Face X-Rays approach are constrained to be from within the same gender and racial group. We generated 40K blended images per subgroup for a balanced distribution (Figure \ref{fig:testing_samples}). Our goal is to utilize a deepfake dataset with faithful demographic labels to audit the detectors' performance on both real and manipulated images. \begin{figure}[!htb] \vspace{-0.1cm} \centering \includegraphics[keepaspectratio=false,width=.9\columnwidth]{figures/testing_good_bad_samples.png} \vspace{-0.2cm} \caption{Visualized blended images along with their Face X-Rays for images with low (top row) and high (bottom row) artifacts.} \label{fig:testing_samples} \vspace{-0.5cm} \end{figure} \renewcommand{\arraystretch}{1.0}{ \begin{table*}[!htb] \caption{Deepfake detection performance on gender and racial groups as measured by the area-under-the-ROC-curve (AUC), positive predictive value (PPV), error rate (1-PPV), true positive rate (TPR), and false positive rate (FPR) of the 3 evaluated deepfake detection models, trained using the standard and Blended Image (BI) approaches.} \vspace{-0.1cm} \label{tab:main} \begin{adjustbox}{width=\textwidth} \begin{tabular}{@{}ll|c|cc|cccc|cccccccc@{}} \toprule Model & Metric & ALL & M & F & Cau. & African & Asian & Indian & M/Cau. & F/Cau. 
& M/African & F/African & M/Asian & F/Asian & M/Indian & F/Indian \\ \midrule \multirow{5}{*}{Meso4} & AUC & 0.614 & \textbf{0.622} & 0.597 & 0.589 & 0.626 & 0.626 & 0.614 & 0.604 & 0.577 & 0.588 & 0.629 & \textbf{0.651} & 0.601 & 0.644 & 0.586 \\ & PPV & 0.784 & 0.764 & \textbf{0.802} & 0.779 & \textbf{0.794} & 0.789 & 0.774 & 0.795 & 0.764 & 0.661 & \textbf{0.940} & 0.807 & 0.772 & 0.797 & 0.754 \\ & Error Rate & 0.435 & \textbf{0.436} & 0.434 & \textbf{0.462} & 0.439 & 0.424 & 0.415 & 0.462 & \textbf{0.462} & 0.458 & 0.410 & 0.412 & 0.436 & 0.407 & 0.423 \\ & TPR & 0.541 & 0.517 & \textbf{0.565} & 0.493 & 0.524 & 0.555 & \textbf{0.592} & 0.476 & 0.510 & 0.457 & 0.591 & 0.556 & 0.554 & 0.578 & \textbf{0.605} \\ & FPR & 0.374 & 0.200 & \textbf{0.225} & 0.350 & 0.344 & 0.371 & \textbf{0.431} & 0.307 & 0.394 & 0.335 & 0.416 & 0.333 & 0.409 & 0.369 & \textbf{0.494} \\ \midrule \multirow{5}{*}{Xception} & AUC & 0.810 & 0.804 & \textbf{0.819} & 0.793 & 0.803 & 0.808 & \textbf{0.841} & 0.800 & 0.786 & 0.788 & 0.815 & 0.805 & 0.812 & 0.841 & \textbf{0.842} \\ & PPV & 0.856 & 0.827 & \textbf{0.888} & 0.863 & 0.838 & 0.861 & \textbf{0.865} & 0.864 & 0.861 & 0.739 & \textbf{0.957} & 0.857 & 0.865 & 0.863 & 0.868 \\ & Error Rate & 0.267 & \textbf{0.276} & 0.256 & \textbf{0.301} & 0.264 & 0.276 & 0.225 & 0.290 & \textbf{0.313} & 0.301 & 0.206 & 0.280 & 0.273 & 0.229 & 0.220 \\ & TPR & 0.753 & 0.749 & \textbf{0.758} & 0.687 & 0.783 & 0.731 & \textbf{0.812} & 0.704 & 0.671 & 0.755 & 0.812 & 0.730 & 0.733 & 0.808 & \textbf{0.816} \\ & FPR & 0.317 & \textbf{0.276} & 0.205 & 0.274 & \textbf{0.384} & 0.295 & 0.316 & 0.276 & 0.272 & 0.382 & \textbf{0.403} & 0.304 & 0.287 & 0.321 & 0.310 \\ \midrule \multirow{5}{*}{Meso4 + BI} & AUC & 0.795 & \textbf{0.811} & 0.765 & 0.798 & 0.79 & 0.775 & \textbf{0.821} & 0.818 & 0.785 & 0.766 & 0.78 & 0.82 & 0.73 & \textbf{0.845} & 0.8 \\ & PPV & 0.901 & \textbf{0.906} & 0.897 & 0.908 & \textbf{0.915} & 0.878 & 0.905 & 0.935 & 0.888 & 0.849 & 
\textbf{0.976} & 0.913 & 0.846 & 0.923 & 0.889 \\ & Error Rate & 0.356 & 0.355 & \textbf{0.357} & 0.362 & \textbf{0.385} & 0.358 & 0.319 & 0.384 & 0.34 & 0.367 & \textbf{0.413} & 0.341 & 0.374 & 0.324 & 0.314 \\ & TPR & 0.564 & 0.532 & \textbf{0.596} & 0.548 & 0.511 & 0.58 & \textbf{0.618} & 0.497 & 0.6 & 0.458 & 0.563 & 0.577 & 0.582 & 0.596 & \textbf{0.64} \\ & FPR & 0.155 & 0.062 & \textbf{0.105} & 0.138 & 0.12 & \textbf{0.201} & 0.162 & 0.087 & 0.19 & 0.116 & 0.153 & 0.138 & \textbf{0.264} & 0.123 & 0.2 \\ \midrule \multirow{5}{*}{Xception + BI} & AUC & 0.962 & \textbf{0.964} & 0.959 & 0.969 & 0.951 & 0.959 & \textbf{0.972} & 0.972 & 0.968 & 0.938 & 0.958 & 0.97 & 0.948 & \textbf{0.977} & 0.968 \\ & PPV & 0.952 & \textbf{0.956} & 0.949 & \textbf{0.963} & 0.957 & 0.933 & 0.957 & 0.971 & 0.955 & 0.928 & \textbf{0.987} & 0.956 & 0.912 & 0.969 & 0.945 \\ & Error Rate & 0.099 & 0.098 & \textbf{0.099} & 0.092 & \textbf{0.119} & 0.102 & 0.082 & 0.092 & 0.091 & \textbf{0.13} & 0.102 & 0.088 & 0.116 & 0.076 & 0.089 \\ & TPR & 0.907 & 0.896 & \textbf{0.919} & 0.907 & 0.873 & 0.923 & \textbf{0.926} & 0.898 & 0.916 & 0.846 & 0.901 & 0.918 & 0.927 & 0.923 & \textbf{0.93} \\ & FPR & 0.114 & 0.077 & \textbf{0.14} & 0.088 & 0.099 & \textbf{0.165} & 0.104 & 0.068 & 0.109 & 0.094 & 0.133 & 0.105 & \textbf{0.224} & 0.074 & 0.135 \\ \midrule \multirow{5}{*}{FaceXRay + BI} & AUC & 0.950 & 0.95 & 0.95 & \textbf{0.962} & 0.936 & 0.953 & 0.95 & \textbf{0.963} & 0.96 & 0.928 & 0.937 & 0.959 & 0.946 & 0.946 & 0.955 \\ & PPV & 0.939 & 0.932 & \textbf{0.947} & \textbf{0.951} & 0.944 & 0.931 & 0.933 & 0.95 & 0.952 & 0.906 & \textbf{0.985} & 0.938 & 0.923 & 0.932 & 0.933 \\ & Error Rate & 0.115 & \textbf{0.115} & 0.114 & 0.098 & \textbf{0.133} & 0.112 & 0.116 & 0.095 & 0.101 & \textbf{0.136} & 0.13 & 0.104 & 0.119 & 0.122 & 0.11 \\ & TPR & 0.897 & 0.895 & \textbf{0.899} & 0.91 & 0.865 & \textbf{0.912} & 0.902 & \textbf{0.915} & 0.905 & 0.858 & 0.872 & 0.914 & 0.909 & 0.894 & 0.911 \\ & FPR 
& 0.145 & 0.128 & \textbf{0.134} & 0.118 & 0.129 & \textbf{0.17} & 0.163 & 0.121 & 0.115 & 0.127 & 0.15 & 0.151 & \textbf{0.189} & 0.163 & 0.163 \\ \bottomrule \end{tabular} \end{adjustbox} \vspace{-0.3cm} \end{table*}} \subsubsection{Evaluation metrics} We analyze two sets of metrics, binary classification metrics and threshold agnostic Area under the ROC curve (AUC). For classification metrics, similar to Gender Shades \cite{gendershades}, we follow the gender classification evaluation precedent established by the National Institute for Standards and Technology (NIST) and assess the overall classification accuracy, along with the extension of true positive rate, false positive rate, and error rate (1-PPV) of the intersectional subgroups: \{male, female\} $\times$ \{Caucasian, African, Asian, Indian\}. Since the FaceForensics++ training dataset is heavily imbalanced, we set the threshold as the value in the range (0.01, 0.99, 0.01) that maximizes the balanced accuracy on the Faceforensics++ validation set. We also evaluated the AUC due to its robustness against class imbalance. \vspace{-0.1cm} \subsection{Audit results} Table \ref{tab:main} shows detection performances on gender and racial groups as measured by the AUC, positive predictive value (PPV), error rate, true positive rate (TPR), and false positive rate (FPR) of the 3 deepfake detection models, trained using the FF++ and Blended Image (BI) datasets. We observe disparities in predictive performances between racial groups, which is most apparent in models trained with the BI dataset. \subsubsection{Gender groups audit} From Table \ref{tab:main}, we observe that all detectors are \textit{equally} accurate in detecting manipulated images containing male and female subjects, with the difference in error rate as low as 0.1 - 0.3\%. For four out of five detectors, female subjects have both higher FPR and higher TPR. 
In the realistic setting where facial profiles on social media are automatically screened via deepfake detectors, the FPR indicates that the proportion of real subjects mistakenly identified as fake can be much larger for female subjects than for male subjects. This is especially true for the Xception + BI detector, which achieves the best result with error rates of 9.8\% on male subjects and 9.9\% on female subjects, but an FPR for female subjects (14.0\%) nearly twice that for male subjects (7.7\%). \begin{figure*}[!htb] \centering \includegraphics[keepaspectratio=false,width=0.95\textwidth]{figures/FPRTPR.png} \vspace{-0.2cm} \caption{Ratios of FPR (TPR) for each intersectional subgroup to a reference group. In this case, we have chosen ``M-Caucasian'' to be the reference group. Purple lines indicate the 20\% margins above and below. Red bars indicate violations of these margins.} \label{fig:TPRFPR} \vspace{-0.3cm} \end{figure*} \subsubsection{Racial and intersectional subgroups audit} We conduct an intersectional analysis of all detectors on all eight subgroups (M-Cau., F-Cau., M-African, F-African, M-Asian, F-Asian, M-Indian, and F-Indian). As seen in Table \ref{tab:main}, we observe large disparities in error rate across race, with the difference in error rate ranging from 3.5 - 7.6\% across all detectors. Of note, FaceXRay + BI performs best on Caucasian faces, especially male Caucasians (9.8\% and 9.5\% error rate, respectively). MesoNet and Xception detectors (with and without BI) perform best on Indian faces. Across all detectors, the maximum difference in error rate between the best and worst intersectional subgroups is 10.7\%. Figure \ref{fig:TPRFPR} presents the ratios of FPR and TPR of each subgroup to a reference group, which we have chosen as the ``M-Caucasian'' group.
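The margin check plotted in Figure \ref{fig:TPRFPR} amounts to a few lines of code. The sketch below (plain Python; the FPR values are hypothetical placeholders, not our measured rates) computes each subgroup's ratio to the reference group and flags violations of the $[0.8, 1.2]$ band:

```python
def disparity_ratios(rates, reference="M-Caucasian", lo=0.8, hi=1.2):
    """Ratio of each subgroup's rate (FPR or TPR) to the reference
    group's rate; subgroups falling outside [lo, hi] violate the
    Four-Fifths-style fairness margins."""
    ref = rates[reference]
    ratios = {group: rate / ref for group, rate in rates.items()}
    flagged = [g for g, r in ratios.items() if not (lo <= r <= hi)]
    return ratios, flagged

# hypothetical per-subgroup FPRs (placeholders, not the audited values)
fpr = {"M-Caucasian": 0.10, "F-Caucasian": 0.11,
       "F-Asian": 0.30, "M-African": 0.22}
ratios, flagged = disparity_ratios(fpr)
```

With these placeholder rates, `F-Asian` and `M-African` would be flagged, mirroring the kind of violations marked in red in the figure.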
We notice a stark contrast between FPR and TPR: subgroups with Asian or African racial backgrounds have false positive rates as high as three times that of the reference group. In contrast, the TPRs of all groups are either well within or around the accepted\footnote{Fairness for this metric is in [0.8, 1.2] w.r.t. the Four-Fifths rule.} 20\% margins of the reference group, indicated by the purple lines. In addition, there is a consistent trend: all detectors trained with BI perform worst on African faces, especially male African faces, with a 3.5 - 6.7\% difference in error rate to the best subgroup. On a closer look, we can see that MesoNet + BI performs worst on female African faces, with a 41.3\% error rate, while Xception + BI and FaceXRay + BI perform worst on male African faces, with 13.0\% and 13.6\% error rates respectively. We also observe that, for detectors trained with BI, Asian faces have the highest FPR, especially female Asian faces, with a 5.2 - 8.1\% difference in FPR across all BI-trained detectors. Similarly, faces from the African subgroup have the lowest TPR, especially male African faces, with a 4.7 - 10.7\% difference in TPR. This trend is uniquely consistent across all three detectors trained with BI (including the state-of-the-art Face X-Ray), even though the detectors have diverse architectures and training losses. \subsubsection{Analysis of results} We agree with the findings in \cite{gendershades} that a single performance metric such as AUC or detection accuracy over the entire dataset is not enough to justify massive commercial rollouts of deepfake detectors. Despite an AUC of up to 0.962 and detection accuracy of up to 90.1\% on our deepfake testing dataset, which would allow companies to claim commercial readiness for these detectors on all demographics represented, an intersectional analysis of the detectors shows otherwise.
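All subgroup numbers reported in this audit reduce to confusion-matrix counts restricted to one subgroup at a time. A minimal sketch (plain Python, with toy labels and predictions; the subgroup keys and values are purely illustrative):

```python
def subgroup_metrics(labels, preds, groups):
    """Per-subgroup TPR, FPR and error rate (1 - PPV) from shared
    ground-truth labels (1 = fake) and thresholded predictions.
    Assumes each subgroup contains both classes."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        y = [labels[i] for i in idx]
        p = [preds[i] for i in idx]
        tp = sum(1 for a, b in zip(y, p) if a == 1 and b == 1)
        fp = sum(1 for a, b in zip(y, p) if a == 0 and b == 1)
        fn = sum(1 for a, b in zip(y, p) if a == 1 and b == 0)
        tn = sum(1 for a, b in zip(y, p) if a == 0 and b == 0)
        out[g] = {"TPR": tp / (tp + fn),
                  "FPR": fp / (fp + tn),
                  "ErrorRate": fp / (tp + fp)}   # 1 - PPV
    return out

# hypothetical toy audit: two subgroups, four samples each
labels = [1, 1, 0, 0, 1, 1, 0, 0]
preds  = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["M-Caucasian"] * 4 + ["F-Asian"] * 4
metrics = subgroup_metrics(labels, preds, groups)
```

Here the error rate follows the 1 - PPV convention used throughout this audit.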
Our results also show indications of systematic bias in the learning process via generating and using manipulated images for training. Even though training with fake data generated via the BI process helped MesoNet, Xception, and Face X-Ray improve their overall predictive performances, it also negatively impacts predictions on real videos and images. Since fake artifacts are the focus of the detectors given how the training data was prepared, the absence of such artifacts in real and genuine media can lead to unintended consequences in prediction. The disparities in FPR in Figure \ref{fig:TPRFPR} suggest that in a real-world scenario, facial profiles of female Asian or female African subjects are 1.5 - 3 times more likely to be mistakenly labeled as fake than profiles of male Caucasian subjects. For large scale commercial applications, this would indicate bias against millions of people. However, we note that the disparities observed are not ``intentionally'' built into the detectors. Figure \ref{fig:TPRFPR} (bottom) also demonstrates that the models are indeed focusing on manipulation artifacts as intended: the ratios of TPR across intersectional subgroups stay well within the 20\% margins around the reference group. To the best of our knowledge, the closest work that mentions similar observations about performances on fake versus real images is that of Carlini and Farid \cite{Carlini_2020_CVPR_Workshops}, where the authors use adversarial attacks to change the detectors' predictions. The authors note, as surprising, that it is harder to cause real images to be misclassified as fake, requiring up to 7\% of image pixels to be flipped, than to cause fake images to be misclassified as real, which requires just 1\% of pixels to be flipped.
We posit that because of the networks' focus on the detection of fake artifacts, it is easier to quickly fool the network using its gradient. However, the reverse direction is harder, as the network has more trouble coming up with artifacts to ``manipulate'' a real image. \begin{figure*}[!htb] \centering \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=\linewidth]{figures/training_dist.png} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=\linewidth]{figures/FF++_dist.png} \label{fig:sub2} \end{subfigure}% \begin{subfigure}{.31\textwidth} \centering \includegraphics[width=\linewidth]{figures/BI_dist.png} \label{fig:sub3} \end{subfigure} \vspace{-0.6cm} \caption{(Left) Distribution of intersectional subgroups within FaceForensics++ real videos. (Middle, Right) Heatmaps of the distribution of pairwise swaps in the FF++ and Blended Image (BI) datasets. Numbers in each row are normalized row-wise to present the percentage of swaps with foreground faces belonging to a specific gender and racial group.} \label{fig:training} \vspace{-0.5cm} \end{figure*} \subsection{Training distribution and methodology bias} To further investigate potential sources of bias in the trained detectors, we analyze both the FaceForensics++ and Blended Image (BI) datasets with respect to their gender and racial distribution. We observe the following key findings: \begin{itemize} \item Within FF++, 61.7\% of all real videos contain a person from the Caucasian group, with 36.0\% being female Caucasians. \item For FF++ fake videos, 59.44\% are videos of ``irregular swaps''; the rest are regular. ``Irregular'' swaps are swaps in which a person's face is swapped onto another person's face of a different gender or race. \item For BI blended face images, 65.45\% of the images are ``irregular swaps''; the rest are regular.
\item For BI images with foreground female Asian faces, 35\% are swapped onto female Caucasian faces, 21\% onto female Asian faces, and 14\% onto female Hispanic faces. \end{itemize} \vspace{-0.2cm} \subsubsection{Evaluation Methodology} Since the FaceForensics++ dataset lacks demographic information for its videos, we manually collect ground-truth demographic labels. To do so, we annotate each subject into two groups of perceived gender \{male, female\}, and five groups of perceived race \{Caucasian, African, Asian, Indian, Others\}. Three graduate annotators are selected for the task, with the assumption that each is of the same skill level in determining gender and racial group. For each subject, the annotators are presented with 5 distinct frames at various times in the video, which display the subject under different lighting angles and poses. We utilize pairwise percent agreement for multiple raters to measure Inter-Rater Reliability (IRR), and the majority label for each subject is selected as the ground-truth demographic label. Our annotators achieve 75.93\% IRR, which is high for 2 genders and 5 racial groups. With the demographic labels, we evaluate the percentage of ``regular'' and ``irregular'' faceswaps, where ``irregular'' is defined as a swap in which a person's face is swapped onto another person's face of a different gender or race. FaceForensics++ provides the IDs for pairs of swaps for all four manipulation methods. Blended Images requires bookkeeping of the target and source faces selected via the BI methodology, which selects, out of 5000 images, the source face whose 68 facial landmarks are closest in Euclidean distance to the target face. \vspace{-0.1cm} \subsubsection{Results} Figure \ref{fig:training} presents the labeled distribution of intersectional subgroups within FaceForensics++ real videos.
Overall, we observe a strong representation imbalance of gender and racial groups, with the videos containing 58.3\% female subjects and 41.7\% male subjects. The majority of authentic videos are of subjects from the Caucasian group (61.7\%), with a major part being female Caucasians (36.0\%). Moreover, less than 5\% of the real videos contain subjects from the African or Indian groups, with the male Indian subgroup having the least representation. Figure \ref{fig:training} also plots the heatmap of the distribution of pairwise swaps for manipulated videos/images in the FF++ and Blended Image (BI) datasets. Numbers in each square are normalized row-wise to show the percentage of swaps with foreground faces belonging to a specific gender and racial group. For FF++ fake videos, 59.44\% (428/720) are videos of ``irregular swaps''; the rest (292/720) are ``regular'' swaps. 58.75\% (423/720) of fakes have female Caucasian and male Caucasian subjects as foreground faces. Zooming in, we can see that the majority (60\%, 154/255 videos) of fakes with female Caucasian foreground faces are swapped onto other female Caucasians. On the other hand, the majority (61\%, 40/66 videos) of fakes with female Asian foreground faces are swapped onto female Caucasian faces, with only 7\% swapped onto female Asian faces. We also observed other types of irregular swaps where female faces are swapped onto male background faces. In the BI dataset, networks are trained with millions of swapped images. Here we sampled 1,000,000 images from the same process to visualize the distribution. Similarly, 65.45\% (654,400) of the images are ``irregular swaps'', though at a much more massive scale than FF++. For BI blended face images with female Asian faces as foreground faces, the majority, 35.3\% (34,031/96,443 images), are swapped onto female Caucasian faces, 21\% onto other female Asian faces, and 14\% onto female Hispanic faces.
Hence, given that the networks see deepfakes in which female Asian faces are irregularly swapped most of the time, they are more likely to learn a correlation between fakeness and Asian facial features. Without a specific way to pinpoint the exact source of bias, BI alone may not be fully responsible for misclassification and large disparities in false positive rates. However, we caution against using it for improving deepfake detection performances. A more racially aware method of generating blended images could be an essential future direction. \vspace{-0.1cm} \section{Conclusion} As deepfakes become more pervasive, there is a growing reliance on automated systems to combat them. We argue that practitioners should investigate all societal aspects and consequences of these high-impact systems. In this work, we thoroughly measured the predictive performance of popular deepfake detectors on racially aware datasets balanced by gender and race. We found large disparities in predictive performance across races, as well as large representation bias in the widely used FaceForensics++ dataset. Moreover, a majority of fakes are composed of ``irregular'' swaps between faces of different genders and races. Our work echoes the importance of benchmark representation and intersectional auditing for increased demographic transparency and accountability in AI systems. \noindent \section*{Acknowledgment} \noindent \small This work is supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00111990059. The authors would like to express their appreciation to colleagues Caroline Johnston and Nathan Dennler for many useful inputs and valuable comments on this work. \small \bibliographystyle{named}
\section{Introduction} In the problem of one-dimensional randomly forced Burgers turbulence one studies the statistical properties of a velocity field $v(x,t)$ governed by the Burgers equation \cite{burgers_74} \begin{equation} \label{1} \partial_{t} v(x,t) + v(x,t)\partial_{x} v(x,t) = \nu \partial^{2}_{x} v(x,t) + f(x,t) \end{equation} where the parameter $\nu$ is the viscosity and $f(x,t)$ is the Gaussian distributed random force which is $\delta$-correlated in time and which is characterized by a finite correlation length $R$ in space: $\overline{f(x,t) f(x',t')} = u \delta(t-t') {\cal F}[(x-x')/R]$. Here ${\cal F}(x)$ is a smooth function decaying to zero fast enough at large arguments and the parameter $u$ is the injected energy density. This problem has been the subject of active investigation for more than six decades (see e.g. \cite{Sinai,Bouch-Mez-Par,Khanin} and references therein). In the framework of the celebrated Kolmogorov theory \cite{Kolmogorov1} one obtains the probability density function (PDF) of the velocity increment $w = v(x_{0}+x,t)-v(x_{0},t)$, such that at distances much smaller than the length scale $R$ of the random stirring force $f$, one finds simple scaling for the moments $\langle w^{q}\rangle \sim x^{\zeta(q)}$ with $\zeta(q) = q/3$ (in particular one can prove that $\zeta(3) =1$). This prediction is based on the assumption that the statistics of the velocity field are locally homogeneous, so that the corresponding PDF of $w$ depends only on $x$ and the average rate of energy dissipation. However, extensive studies during the last decades convincingly demonstrate that in fact the exponent $\zeta(q)$ significantly deviates from the Kolmogorov law $q/3$.
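A toy numerical illustration of such a deviation: if at small $x$ the moments take the bifractal form $\langle w^{q}\rangle \approx x^{q} + C\,x$ (with $C$ an arbitrary constant; a form of exactly this type appears below in eq.(\ref{16})), the effective exponent saturates at $1$ for $q>1$ instead of following $q/3$:

```python
import math

def effective_zeta(q, x, C=1.0):
    """Effective scaling exponent of m(q) = x**q + C*x as x -> 0:
    zeta(q) = ln m / ln x, which tends to min(q, 1)."""
    return math.log(x**q + C * x) / math.log(x)

x = 1e-8
# q < 1: the x**q term dominates and zeta(q) ~ q
# q > 1: the C*x term dominates and zeta(q) ~ 1, not q/3
```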
The physical reason for that is the so-called intermittency phenomenon, namely the formation of local coherent structures that drive a strong deviation from the mean fluctuation level of the velocity field \cite{Kolmogorov2,Obukhov,intermittency1,intermittency2,intermittency3,intermittency4,intermittency5,intermittency6,intermittency7}. In the present paper, using the formal equivalence of the above Burgers problem, eq.(\ref{1}), with the model of one-dimensional directed polymers in a random potential \cite{Bouch-Mez-Par} (see below), we are going to derive an explicit expression for the joint PDF $P(v, v')$ of two velocities separated by a distance $x$, as well as the corresponding PDF of the velocity increment $w = v-v'$. In particular, at distances much smaller than the scale of the stirring force, $x \ll R$, this allows us to demonstrate the typical intermittency behavior of the exponent $\zeta(q)$ (see Fig.1). \vspace{5mm} It is well known that the Burgers problem, eq.(\ref{1}), is formally equivalent to that of growing interfaces in a random environment described by the Kardar-Parisi-Zhang (KPZ) equation \cite{KPZ,hh_zhang}. Indeed, redefining \begin{equation} \label{2} v(x,t) = -\partial_{x} F(x,t) \end{equation} and $f(x,t) = -\partial_{x} V(x,t)$, and integrating eq.(\ref{1}) once, one gets the KPZ equation for the interface profile $F(x,t)$, \begin{equation} \label{3} \partial_{t} F(x,t) = \frac{1}{2} \bigl(\partial_{x} F(x,t)\bigr)^{2} + \nu \partial^{2}_{x} F(x,t) + V(x,t) \end{equation} where $V(x,t)$ is a random potential.
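This change of variables is easy to check on a concrete example: pick any profile $F(x,t)$, define $V = \partial_{t}F - \frac{1}{2}(\partial_{x}F)^{2} - \nu \partial^{2}_{x}F$ so that eq.(\ref{3}) holds by construction, and verify that $v = -\partial_{x}F$ then obeys eq.(\ref{1}) with $f = -\partial_{x}V$. A minimal numerical sketch with the arbitrary test profile $F = t\sin x$ (all derivatives written out by hand):

```python
import math

# Test profile F(x,t) = t*sin(x) with an arbitrary viscosity nu.
nu = 0.3

def v(x, t):          # v = -F_x = -t*cos(x)
    return -t * math.cos(x)

def burgers_residual(x, t):
    """v_t + v*v_x - nu*v_xx - f for v = -F_x; vanishes identically
    because F solves the KPZ equation with V chosen accordingly."""
    v_t = -math.cos(x)                  # d/dt of (-t*cos x)
    v_x = t * math.sin(x)
    v_xx = t * math.cos(x)
    # f = -V_x with V = F_t - F_x**2/2 - nu*F_xx, i.e.
    # V_x = cos x + t**2*sin x*cos x + nu*t*cos x
    f = -(math.cos(x) + t**2 * math.sin(x) * math.cos(x)
          + nu * t * math.cos(x))
    return v_t + v(x, t) * v_x - nu * v_xx - f
```

The residual vanishes (up to floating-point rounding) at every point $(x,t)$, as required by the equivalence.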
On the other hand, let us consider the one-dimensional directed polymer system defined in terms of the Hamiltonian \begin{equation} \label{4} H[\phi(\tau), V] = \int_{0}^{t} d\tau \Bigl\{\frac{1}{2} \bigl[\partial_\tau \phi(\tau)\bigr]^2 + V[\phi(\tau),\tau]\Bigr\}; \end{equation} where $\phi(\tau)$ is a scalar field defined within an interval $0 \leq \tau \leq t$ and $V(\phi,\tau)$ is the Gaussian distributed random potential with a zero mean, $\overline{V(\phi,\tau)}=0$, and the correlation function \begin{equation} \label{5} \overline{V(\phi,\tau)V(\phi',\tau')} = u \delta(\tau-\tau') U(\phi-\phi') \end{equation} Here the parameter $u$ defines the strength of the disorder and $U(\phi)$ is the spatial correlation function characterized by the correlation length $R$. For simplicity we take \begin{equation} \label{6} U(\phi) \; = \; \frac{1}{\sqrt{2\pi} \, R} \; \exp\Bigl\{-\frac{\phi^{2}}{2 R^{2}}\Bigr\} \end{equation} For a given realization of the random potential $V[\phi,\tau]$ the partition function of this system is defined as \begin{equation} \label{7} Z(x,t) = \int_{\phi(0)=0}^{\phi(t)=x} {\cal D}\phi(\tau) \exp\bigl\{-\beta H[\phi(\tau), V]\bigr\} \; = \; \exp\bigl\{-\beta F(x,t)\bigr\} \end{equation} where $\beta$ is the inverse temperature, $F(x,t)$ is the free energy and the integration is taken over all trajectories $\phi(\tau)$ with the boundary conditions $\phi(\tau=0) = 0$ and $\phi(\tau=t) = x$. One can easily show that the partition function $Z(x,t)$ defined above satisfies the linear differential equation \begin{equation} \label{8} \partial_{t} Z(x,t) \; = \; \frac{1}{2\beta} \partial^{2}_{x} Z(x,t) \; - \; \beta V(x,t) Z(x,t) \end{equation} Substituting here $Z(x,t) = \exp\bigl\{-\beta F(x,t)\bigr\}$, one easily finds that the free energy function $F(x,t)$ satisfies the KPZ equation (\ref{3}) with the viscosity parameter $\nu = \frac{1}{2\beta}$.
In other words, the original randomly forced Burgers problem, eq.(\ref{1}), is formally equivalent to the directed polymer system, eqs.(\ref{4})-(\ref{7}), such that the viscosity parameter $\nu$ in the Burgers equation is proportional to the temperature in the directed polymer system, $\nu = \frac{1}{2} T$, and the velocity $v(x,t)$ in the Burgers equation is the negative spatial derivative of the free energy $F(x,t)$ of the directed polymer system. The standard dimensionless parameter which characterizes the level of turbulence of the velocity field in the Burgers problem is called the Reynolds number $Re$, and it is defined as the ratio of typical values of the inertial forces to viscous forces. In the present notation it can be defined as $Re = v_{0} R/\nu$, where $v_{0}$ is the typical flow velocity at the characteristic linear dimension, which in the present case is the injection scale of the random force $R$. Using dimensional arguments one easily finds that \begin{equation} \label{9a} v_{0} \; \sim \; \Bigl(\frac{u}{R^{2}}\Bigr)^{1/3} \end{equation} Indeed, according to eq.(\ref{2}) the dimension of the velocity is $[v_{0}] = [F]/R$. On the other hand, according to eq.(\ref{4}), the dimension of the free energy is $[F] = [H] = t [V]$. Finally, according to eqs.(\ref{5})-(\ref{6}), the dimension of the random potential is $[V] = \sqrt{u/(Rt)}$. Combining all that together (and estimating the result at the characteristic turnover time $t \sim R/v_{0}$) one finds eq.(\ref{9a}). Therefore, in terms of the directed polymer notations the Reynolds number of the Burgers turbulence problem reads \begin{equation} \label{9} Re \; = \; \frac{v_{0} R}{\nu} \; = \; 2\beta \, \bigl(u R\bigr)^{1/3} \end{equation} It is evident that an increasing Reynolds number indicates increasingly turbulent flow, and the limit of strongly developed turbulence corresponds to $Re \to \infty$.
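As a one-line consistency check of this algebra (the numerical values of $u$, $R$ and $\beta$ below are arbitrary):

```python
# dimensional estimates from the text: v0 ~ (u/R**2)**(1/3) and
# nu = 1/(2*beta), hence Re = v0*R/nu should equal 2*beta*(u*R)**(1/3)
u, R, beta = 2.0, 0.5, 10.0
v0 = (u / R**2) ** (1.0 / 3.0)
nu = 1.0 / (2.0 * beta)
Re = v0 * R / nu
```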
Thus the strong turbulence Burgers regime corresponds to the zero-temperature limit in the directed polymer system, and it is this limit that will be studied in the present paper. As the velocity in the Burgers problem is given by the spatial derivative of the free energy of the directed polymer system, it can be expressed in terms of the difference of two free energies: \begin{equation} \label{10} v(x,t) \; = \; -\frac{\partial F(x,t)}{\partial x} \; = \; - \lim_{\epsilon\to 0} \frac{F(x+\epsilon, t) - F(x,t)}{\epsilon} \end{equation} In other words, the one-point velocity statistics are defined by the joint statistics of {\it two} free energies. Correspondingly, if we are going to study the joint statistical properties of two spatially separated velocities, in terms of the free energies of the directed polymers we have to study a four-point spatial object. In Section II we describe the general ideas and the main lines of the replica approach which will be used in the further derivations of the probability distribution functions. In Section III we describe the main points of the zero-temperature limit approach for the directed polymers with a finite correlation length of the random potential, eqs.(\ref{5})-(\ref{6}) (for details see \cite{zero-T}). The zero temperature limit of the joint probability distribution function of free energies defined at four spatial points is derived in Section IV. The explicit expression for the corresponding joint probability density function of two velocities $v$ and $v'$ separated by a distance $x$ is derived in Section V, eqs.(\ref{110})-(\ref{112}).
In Section VI it will be shown that the PDF for the velocity increment $w = v-v'$ has the following form: \begin{equation} \label{11} P_{x}(w) \; = \; p_{0}(x/R) \delta\Bigl(w - v_{0}\frac{x}{R}\Bigr) \; + \; {\cal P}_{x/R} \bigl(w/v_{0}\bigr) \, \theta\Bigl(v_{0}\frac{x}{R} - w\Bigr) \end{equation} where $\theta(z)$ is the Heaviside step function, $v_{0} \, \propto \, \bigl(u/R^{2}\bigr)^{1/3}$ (see eq.(\ref{9a})), \begin{equation} \label{12} p_{0}(x/R) \; = \; \int_{-\infty}^{+\infty} \frac{ds}{\sqrt{2\pi}} \; \frac{\exp\Bigl\{-\frac{1}{2} s^{2}\Bigr\}}{ \Biggl(1 \; + \; \frac{\zeta_{0}^{3/4} x}{R} \int_{0}^{+\infty} \frac{d\xi}{\sqrt{2\pi}} \, \xi \, \exp\Bigl\{-\frac{1}{2} (s+\xi)^{2}\Bigr\}\Biggr)} \end{equation} and \begin{equation} \label{13} {\cal P}_{x/R} \bigl(w/v_{0}\bigr) \; = \; \frac{\zeta_{0}^{3/2} x}{v_{0} R} \int_{-\infty}^{+\infty} \frac{ds}{\sqrt{2\pi}} \int_{0}^{\Delta(w/v_{0})} \frac{d\eta}{\sqrt{2\pi}} \frac{\exp\Bigl\{-\frac{1}{2} s^{2} -\frac{1}{2} \bigl(s-\Delta(w/v_{0})\bigr)^{2}\Bigr\}}{ \Biggl(1 + \frac{\zeta_{0}^{3/4} x}{R} \int_{0}^{+\infty} \frac{d\xi}{\sqrt{2\pi}} \, \xi \, \exp\Bigl\{-\frac{1}{2} \bigl(\xi + \eta + s - \Delta(w/v_{0}) \bigr)^{2}\Bigr\}\Biggr)^{2}} \end{equation} Here $\zeta_{0} \sim 1 $ is a number (see Section III) and \begin{equation} \label{14} \Delta(w/v_{0}) \; = \; \zeta_{0}^{3/4} \Bigl(\frac{x}{R} \; - \; \frac{w}{v_{0}} \Bigr) \end{equation} The above formulas, eqs.(\ref{11})-(\ref{14}), constitute the central result of the present research. The distribution function $P_{x}(w)$ has a rather specific structure (see Section VI, Fig.4). According to eq.(\ref{11}), for a given distance $x$ the values of the velocity increment $w$ are bounded from above: $w \leq \frac{x}{R} v_{0}$, where $v_{0} \propto \bigl(u/R^{2}\bigr)^{1/3}$ is the typical flow velocity at the injection scale $R$ of the random force of strength $u$.
Moreover, at $w = \frac{x}{R} v_{0}$ the distribution function exhibits a $\delta$-function singularity, which means that at a given distance $x$ the difference of two velocities $w = v-v'$ has a {\it finite} probability $p_{0}$, eq.(\ref{12}), to be equal to $\frac{x}{R} v_{0}$. The above result allows us to study the behavior of the moments of the velocity increment $\langle w^{q}\rangle$ at distances $x \ll R$. Introducing the reduced distance parameter $r = \zeta_{0}^{3/4} x/R$ and the reduced velocity increment $\omega = \zeta_{0}^{3/4} w/v_{0}$, in the limit $r \ll 1$ instead of eqs.(\ref{11})-(\ref{13}) we get: \begin{equation} \label{15} P_{r}(\omega) \; \simeq \; \Bigl(1-\frac{r}{\sqrt{\pi}}\Bigr) \delta(\omega - r) \; + \; \frac{r}{\sqrt{\pi}} (r-\omega) \exp\Bigl\{-\frac{1}{4}(r-\omega)^{2}\Bigr\} \; \theta(r-\omega) \end{equation} Then, for even moments of the reduced velocity increment we obtain: \begin{equation} \label{16} \langle \omega^{2n}\rangle \; \simeq \; r^{2n} \; + \; C(n) \, r \end{equation} where $C(n) = 2^{2n+1} \Gamma(1+n)$. The above result can be analytically continued for arbitrary real values $q$ of the parameter $2n \to q$. Then, introducing the exponent $\zeta(q)$ as $\langle \omega^{q}\rangle \; \simeq \; r^{\zeta(q)}$, according to eq.(\ref{16}) in the limit $r \ll 1$ we recover the typical strong intermittency behavior (see Fig.1): \begin{equation} \label{17} \zeta(q) \; \simeq \; \left\{ \begin{array}{ll} q \; , \; \; \mbox{for} \; q \; \leq \; 1 \, ; \\ \\ 1 \; , \; \; \mbox{for} \; q \; > \; 1 \, . \end{array} \right. \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=9.0cm]{Figure1.eps} \caption[]{Intermittency behavior of the exponent $\zeta(q)$, eq.(\ref{17}).
The dashed line represents the Kolmogorov scaling $\zeta(q) = q/3$.} \end{center} \label{figure1} \end{figure} It should be stressed that the above results, eqs.(\ref{11}) and (\ref{17}), are in remarkable agreement with the ones obtained for the same system many years ago in \cite{Bouch-Mez-Par} in the framework of the Gaussian variational method (which, formally, should be valid only in high dimensions). \section{Replica formalism} In this section we are going to describe the general scheme of calculations of the statistical properties of the Burgers velocity field $v(x,t)$, eq.(\ref{1}), in terms of the standard replica approach used for the directed polymer model, eqs.(\ref{4})-(\ref{7}). Using the relation between $v(x,t)$ and the free energy $F(x,t)$ of the corresponding directed polymer model, eq.(\ref{10}), for a finite value of the parameter $\epsilon$ (which should be taken to zero in the final result) we have \begin{equation} \label{18} \exp\bigl\{\beta\epsilon v(x,t) \bigr\} \; = \; \exp\bigl\{ -\beta F(x+\epsilon, t) + \beta F(x, t) \bigr\} \; = \; Z(x+\epsilon, t) \cdot Z^{-1}(x, t) \end{equation} Taking the integer power $N$ of both sides of the above relation and averaging over the disorder (which in what follows will be denoted by the overline, $\overline{(...)}$) we get \begin{equation} \label{19} \int dv \, P_{x,\epsilon,t}(v) \, \exp\bigl\{\beta N\epsilon \, v\bigr\} \; = \; \overline{Z^{N}(x+\epsilon, t) \cdot Z^{-N}(x, t)} \end{equation} where $P_{x,\epsilon,t}(v)$ is the PDF of the velocity $v$. Formally, the above relation can be represented as follows, \begin{equation} \label{20} \int dv \, P_{x,\epsilon,t}(v) \, \exp\bigl\{\beta N\epsilon \, v\bigr\} \; = \; \lim_{M\to 0} \, Z(M,N,x,\epsilon,t) \end{equation} where \begin{equation} \label{21} Z(M,N,x,\epsilon,t) \; \equiv \; \overline{Z^{N}(x+\epsilon, t) \cdot Z^{M-N}(x, t)} \end{equation} is the {\it two-point} replica partition function.
The general scheme of calculation of the velocity PDF defined by relation (\ref{20}) consists of several steps. First, for a given (finite) $\epsilon$ and {\it integers} $M$ and $N$, such that $M > N$, one has to compute the replica partition function $Z(M,N,x,\epsilon,t)$ as an analytic function of the parameters $M$ and $N$. Next, this function should be analytically continued for arbitrary complex values of $M$ and $N$, and the limits $M \to 0$ as well as $t\to\infty$ have to be taken. Then, to take the limit $\epsilon\to 0$, one introduces the parameter $s = \beta\epsilon N$ which has to be kept finite (this implies that together with the limit $\epsilon\to 0$ one simultaneously takes the limit $N\to\infty$). Thus, after performing these manipulations (provided all the above limits exist) the relation (\ref{20}) turns into a bilateral Laplace transform for the velocity PDF $P_{*}(v) = \lim_{\epsilon\to 0}\lim_{t\to\infty} P_{x,\epsilon,t}(v)$ (which for finite values of $x$ in the limit $t\to\infty$ should be $x$-independent): \begin{equation} \label{22} \int dv \, P_{*}(v) \, \exp\{s \, v\} \; = \; Z_{*}(s) \end{equation} where \begin{equation} \label{23} Z_{*}(s) \; = \; \lim_{\epsilon\to 0}\lim_{t\to\infty}\lim_{M\to 0} \; Z\Bigl(M,\frac{s}{\beta\epsilon},x,\epsilon,t\Bigr) \end{equation} In this way the PDF $P_{*}(v)$ can be recovered by the inverse Laplace transform. Note that this type of program has already been successfully implemented for the derivation of the Burgers two-point velocity PDF in the toy (Gaussian) Larkin model of random directed polymers \cite{burgulence}. In this paper we are going to derive the joint PDF of two velocities $v = v(x/2,t)$ and $v'=v(-x/2,t)$ at two points separated by a {\it finite} distance $x$.
In this case a straightforward generalization of the above replica scheme would require computation of the four-point replica partition function: \begin{equation} \label{24} \int dv dv' \, P_{x,\epsilon,t}(v, v') \, \exp\bigl\{\beta N_{1}\epsilon \, v + \beta N_{2}\epsilon \, v'\bigr\} = \lim_{M\to 0} \, \overline{Z^{N_{1}}(x/2,t) \, Z^{M-N_{1}}(x/2-\epsilon,t) \, Z^{N_{2}}(-x/2+\epsilon,t) \, Z^{M-N_{2}}(-x/2,t) } \end{equation} Technically, direct recovery (using the inverse Laplace transformation) of the two-velocity PDF from the above relation turns out to be a rather involved task which still remains to be done. On the other hand, experience shows that sometimes, in order to compute a complicated quantity, one first has to compute a more general object. In the present case, instead of the two-velocity PDF let us consider the joint distribution function of {\it three} free energy differences. Namely, for the four given spatial points, $-x/2, \; -x/2+\epsilon, \; x/2-\epsilon$ and $x/2$, let us define \begin{eqnarray} \nonumber f_{1} &=& F(x/2,t) \; - \; F(-x/2,t) \\ \label{25} f_{2} &=& F(x/2-\epsilon,t) \; - \; F(-x/2,t) \\ \nonumber f_{3} &=& F(-x/2+\epsilon,t) \; - \; F(-x/2,t) \end{eqnarray} In terms of the partition functions the above relations can be represented as follows: \begin{eqnarray} \nonumber Z(x/2,t) \, Z^{-1}(-x/2,t) &=& \exp\{-\beta f_{1}\} \\ \nonumber \\ \label{26} Z(x/2-\epsilon,t) \, Z^{-1}(-x/2,t) &=& \exp\{-\beta f_{2}\} \\ \nonumber \\ \nonumber Z(-x/2+\epsilon,t) \, Z^{-1}(-x/2,t) &=& \exp\{-\beta f_{3}\} \end{eqnarray} As the further considerations will be done in the zero temperature limit (which corresponds to the limit of large Reynolds numbers, eq.(\ref{9})), it turns out that the simplest way to derive the joint PDF $P_{x,\epsilon,t}\bigl(f_{1}, f_{2},f_{3}\bigr)$ is to use the generating function approach.
Namely, let us introduce the probability function \begin{equation} \label{27} W_{x,\epsilon,t}\bigl(f_{1}, f_{2},f_{3}\bigr) \; = \; \int_{-\infty}^{f_{1}}df_{1}' \int_{-\infty}^{f_{2}}df_{2}' \int_{-\infty}^{f_{3}}df_{3}' \; P_{x,\epsilon,t}\bigl(f_{1}', f_{2}',f_{3}'\bigr) \end{equation} One can easily see that in the zero temperature limit this function can be represented in the form of a series: \begin{eqnarray} \nonumber W_{x,\epsilon,t}\bigl(f_{1}, f_{2},f_{3}\bigr) &=& -\lim_{\beta\to\infty} \sum_{N_{1}=1}^{\infty} \frac{(-1)^{N_{1}}}{N_{1}!} \sum_{N_{2}=1}^{\infty} \frac{(-1)^{N_{2}}}{N_{2}!} \sum_{N_{3}=1}^{\infty} \frac{(-1)^{N_{3}}}{N_{3}!} \exp\bigl\{\beta N_{1} f_{1}+\beta N_{2} f_{2}+\beta N_{3} f_{3}\bigr\} \times \\ \label{28} \\ \nonumber &\times& \overline{\Bigl[Z(x/2,t) Z^{-1}(-x/2,t) \Bigr]^{N_{1}} \Bigl[Z(x/2-\epsilon,t) Z^{-1}(-x/2,t) \Bigr]^{N_{2}} \Bigl[Z(-x/2+\epsilon,t) Z^{-1}(-x/2,t) \Bigr]^{N_{3}} } \end{eqnarray} Indeed, substituting here eq.(\ref{26}) we get \begin{eqnarray} \nonumber W_{x,\epsilon,t}\bigl(f_{1}, f_{2},f_{3}\bigr) &=& -\lim_{\beta\to\infty} \int_{-\infty}^{+\infty}df_{1}' \int_{-\infty}^{+\infty}df_{2}' \int_{-\infty}^{+\infty}df_{3}' \; P_{x,\epsilon,t}\bigl(f_{1}', f_{2}',f_{3}'\bigr) \; \Biggl[ \sum_{N_{1}=1}^{\infty} \frac{(-1)^{N_{1}}}{N_{1}!} \exp\bigl\{\beta(f_{1}-f_{1}')N_{1}\bigr\} \Biggr] \times \\ \nonumber \\ \nonumber &\times& \Biggl[ \sum_{N_{2}=1}^{\infty} \frac{(-1)^{N_{2}}}{N_{2}!} \exp\bigl\{\beta(f_{2}-f_{2}')N_{2}\bigr\} \Biggr] \Biggl[ \sum_{N_{3}=1}^{\infty} \frac{(-1)^{N_{3}}}{N_{3}!} \exp\bigl\{\beta(f_{3}-f_{3}')N_{3}\bigr\} \Biggr] \\ \nonumber \\ \nonumber \\ \nonumber &=& -\lim_{\beta\to\infty} \int_{-\infty}^{+\infty}df_{1}' \int_{-\infty}^{+\infty}df_{2}' \int_{-\infty}^{+\infty}df_{3}' \; P_{x,\epsilon,t}\bigl(f_{1}', f_{2}',f_{3}'\bigr) \; \Biggl[ \exp\Bigl\{-\exp\bigl[\beta(f_{1}-f_{1}')\bigr] \Bigr\} \, - \, 1 \Biggr] \times \\ \nonumber \\ \nonumber &\times& \Biggl[
\exp\Bigl\{-\exp\bigl[\beta(f_{2}-f_{2}')\bigr] \Bigr\} \, - \, 1 \Biggr] \Biggl[ \exp\Bigl\{-\exp\bigl[\beta(f_{3}-f_{3}')\bigr] \Bigr\} \, - \, 1 \Biggr] \\ \nonumber \\ \nonumber \\ \label{29} &=& \int_{-\infty}^{+\infty}df_{1}' \int_{-\infty}^{+\infty}df_{2}' \int_{-\infty}^{+\infty}df_{3}' \; P_{x,\epsilon,t}\bigl(f_{1}', f_{2}',f_{3}'\bigr) \; \theta\bigl(f_{1}-f_{1}'\bigr) \, \theta\bigl(f_{2}-f_{2}'\bigr) \, \theta\bigl(f_{3}-f_{3}'\bigr) \end{eqnarray} which coincides with the definition (\ref{27}). Thus, according to eq.(\ref{28}), in terms of the replica technique the probability function, eq.(\ref{27}), can be represented as: \begin{equation} \label{30} W_{x,\epsilon,t}\bigl(f_{1}, f_{2},f_{3}\bigr) = -\lim_{\beta\to\infty} \lim_{M\to 0} \sum_{N_{1},N_{2},N_{3}=1}^{\infty} \frac{(-1)^{N_{1}+N_{2}+N_{3}}}{N_{1}!N_{2}!N_{3}!} \exp\bigl\{\beta N_{1} f_{1}+\beta N_{2} f_{2}+\beta N_{3} f_{3}\bigr\} \; Z_{x,\epsilon,t} \bigl(M,N_{1},N_{2},N_{3} \bigr) \end{equation} where \begin{equation} \label{31} Z_{x,\epsilon,t} \bigl(M,N_{1},N_{2},N_{3} \bigr) \; = \; \overline{Z^{N_{1}}(x/2,t) \, Z^{N_{2}}(x/2-\epsilon,t) \, Z^{N_{3}}(-x/2+\epsilon,t) \, Z^{M-N_{1}-N_{2}-N_{3}}(-x/2,t) } \end{equation} The further program of the calculations is as follows. The above {\it four-point} replica partition function has to be calculated for an integer $M > N_{1}+N_{2}+N_{3}$ as an analytic function of the parameter $M$. Then this function has to be analytically continued to arbitrary real values of $M$, and the limit $M\to 0$ has to be taken.
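The resummation in eqs.(\ref{28})-(\ref{29}) rests on the elementary identity $\sum_{N\geq 1}(-1)^{N}e^{\beta N a}/N! = e^{-e^{\beta a}}-1$, whose $\beta\to\infty$ limit is $-\theta(a)$. A minimal numerical sketch of this step:

```python
import math

# Truncated version of sum_{N>=1} (-1)^N exp(beta*N*a)/N!
def partial_sum(a, beta, n_terms=120):
    return sum((-1)**N * math.exp(beta * N * a) / math.factorial(N)
               for N in range(1, n_terms))

# closed form: exp(-exp(beta*a)) - 1
for a, beta in ((-0.3, 5.0), (0.1, 5.0), (0.2, 10.0)):
    closed = math.exp(-math.exp(beta * a)) - 1.0
    assert abs(partial_sum(a, beta) - closed) < 1e-9

# beta -> infinity: the closed form tends to -theta(a)
assert abs(math.exp(-math.exp(50.0 * (-0.2))) - 1.0) < 1e-4        # a < 0: -> 0
assert abs(math.exp(-math.exp(50.0 * 0.2)) - 1.0 + 1.0) < 1e-12    # a > 0: -> -1
```

In eq.(\ref{29}) this identity is applied independently to each of the three $N_{i}$-sums, which is what produces the product of three $\theta$-functions.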
Finally, after computing the series in eq.(\ref{30}) (in the limits $t\to\infty$ and $\beta\to\infty$), according to the definition (\ref{27}), the corresponding PDF $P_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr)$ can be obtained as \begin{equation} \label{32} P_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr) \; = \; \frac{\partial^{3}}{\partial f_{1} \; \partial f_{2} \; \partial f_{3}} {\cal W}_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr) \end{equation} where \begin{equation} \label{33} {\cal W}_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr) \; \equiv \; \lim_{\beta\to \infty} \lim_{t\to\infty} W_{x,\epsilon,t}\bigl(f_{1}, f_{2},f_{3}\bigr) \end{equation} According to the representation (\ref{10}) and the definitions (\ref{25}), the velocities $v \equiv v(x/2,t)$ and $v'\equiv v(-x/2,t)$ are defined as \begin{eqnarray} \label{34} v &=& - \lim_{\epsilon\to 0} \frac{f_{1} - f_{2}}{\epsilon} \\ \label{35} v' &=& - \lim_{\epsilon\to 0} \frac{f_{3}}{\epsilon} \end{eqnarray} Thus, the corresponding joint PDF of these two velocities, $P_{x}(v, v')$, can be obtained as \begin{equation} \label{36} P_{x}(v, v') \; = \; \lim_{\epsilon\to 0} \Biggl[\epsilon^{2} \int_{-\infty}^{+\infty} df_{2} \; P_{x,\epsilon}\bigl(f_{2}-\epsilon v,\, f_{2},\, -\epsilon v'\bigr) \Biggr] \end{equation} The above general program of computations will be implemented in the following sections.
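For clarity, the change of variables behind eq.(\ref{36}) can be written out explicitly (this is nothing but the standard rescaling of the $\delta$-functions at finite $\epsilon$):

```latex
P_{x}(v, v') \; = \; \lim_{\epsilon\to 0}
\int df_{1}\, df_{2}\, df_{3} \;
P_{x,\epsilon}\bigl(f_{1}, f_{2}, f_{3}\bigr) \,
\delta\Bigl(v + \frac{f_{1}-f_{2}}{\epsilon}\Bigr) \,
\delta\Bigl(v' + \frac{f_{3}}{\epsilon}\Bigr)
```

Since $\delta\bigl(v + (f_{1}-f_{2})/\epsilon\bigr) = \epsilon\,\delta\bigl(f_{1} - f_{2} + \epsilon v\bigr)$ and $\delta\bigl(v' + f_{3}/\epsilon\bigr) = \epsilon\,\delta\bigl(f_{3} + \epsilon v'\bigr)$, the integrations over $f_{1}$ and $f_{3}$ remove the two $\delta$-functions, produce the factor $\epsilon^{2}$, and pin $f_{1} = f_{2} - \epsilon v$ and $f_{3} = -\epsilon v'$, which is eq.(\ref{36}).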
\section{Zero temperature limit} To compute the replica partition function, eq.(\ref{31}), let us consider a more general object: \begin{equation} \label{37} \Psi\bigl(x_{1}, x_{2}, ..., x_{M}; \, t\bigr) \; = \; \overline{\Biggl(\prod_{a=1}^{M} Z(x_{a}, t) \Biggr)} \end{equation} Substituting here eqs.(\ref{7}) and (\ref{4}), and performing simple Gaussian averaging (using eq.(\ref{5})) we get \begin{equation} \label{38} \Psi\bigl(x_{1}, x_{2}, ..., x_{M}; \, t\bigr) \; = \; \prod_{a=1}^{M} \Biggl[\int_{\phi_{a}(0)=0}^{\phi_{a}(t)=x_{a}} {\cal D}\phi_{a}(\tau)\Biggr] \exp\Bigl\{-\beta H_{M} \bigl[\phi_{1}(\tau), ..., \phi_{M}(\tau)\bigr] \Bigr\} \end{equation} where \begin{equation} \label{39} \beta H_{M} \bigl[\phi_{1}(\tau), ..., \phi_{M}(\tau)\bigr] \; = \; \int_{0}^{t} d\tau \Bigl[\frac{1}{2} \beta \sum_{a=1}^{M}\bigl(\partial_\tau \phi_{a}(\tau)\bigr)^2 - \frac{1}{2} \beta^{2} u \sum_{a,b=1}^{M} U\bigl(\phi_{a}(\tau) - \phi_{b}(\tau)\bigr) \Bigr]; \end{equation} is the replica Hamiltonian with the attractive interaction potential $U(\phi)$ given in eq.(\ref{6}). One can easily show that the function $\Psi\bigl(x_{1}, x_{2}, ..., x_{M}; \, t\bigr)$ is the wave function of one-dimensional quantum bosons which satisfies the imaginary-time Schr\"odinger equation \begin{equation} \label{40} \beta \frac{\partial}{\partial t} \Psi({\bf x}; \, t) \; = \; \frac{1}{2}\sum_{a=1}^{M} \, \frac{\partial^{2}}{\partial x_{a}^{2}} \Psi({\bf x}; \, t) \; + \; \frac{1}{2} \, \beta^{3} u \, \sum_{a,b=1}^{M} U(x_{a} - x_{b}) \, \Psi({\bf x}; \, t) \end{equation} with the initial conditions $\Psi({\bf x}; \, 0) \; = \; \prod_{a=1}^{M}\, \delta(x_{a})$ (here we have introduced the vector notation ${\bf x} \equiv \{x_{1}, x_{2}, ..., x_{M}\}$). The high temperature limit of the replica problem formulated above is well studied (for a review see e.g. \cite{rev} and references therein).
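As an elementary consistency check of eq.(\ref{40}), consider the non-interacting single-replica case ($u=0$, $M=1$; a simplifying assumption made only for this check): the path integral (\ref{38}) then reduces to the free propagator, which indeed satisfies eq.(\ref{40}) with the potential term dropped.

```python
import sympy as sp

# For u = 0 and M = 1 the path integral (38) gives the free propagator
# Psi(x,t) = sqrt(beta/(2*pi*t)) * exp(-beta*x^2/(2*t)),
# which must satisfy  beta * dPsi/dt = (1/2) * d^2 Psi/dx^2   (eq.(40), U -> 0).
x, t, beta = sp.symbols('x t beta', positive=True)
Psi = sp.sqrt(beta / (2 * sp.pi * t)) * sp.exp(-beta * x**2 / (2 * t))
residual = beta * sp.diff(Psi, t) - sp.Rational(1, 2) * sp.diff(Psi, x, 2)
assert sp.simplify(residual) == 0
```

The propagator also tends to $\delta(x)$ as $t\to 0$, matching the stated initial condition.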
It can be shown that in the limit $\beta \to 0$ the interaction potential $U(x)$, eq.(\ref{6}), can be approximated by the $\delta$-function, and in this case the generic solution of the Schr\"odinger equation can be represented in terms of the Bethe ansatz eigenfunctions \cite{Lieb-Liniger,McGuire,Yang}. However, at low temperatures, $T \lesssim \bigl(u R\bigr)^{1/3}$, the typical distance between particles (defined by the wave function $\Psi({\bf x})$) becomes comparable with the size $R$ of the interaction potential $U(x)$, and its approximation by the $\delta$-function is no longer valid. The zero temperature limit of the considered system has been studied in \cite{zero-T,Lecomte}. In the limit of low temperatures it is convenient to redefine the parameters of the system in the following way: \begin{eqnarray} \nonumber \phi &=& R \, \tilde{\phi} \\ \label{41} \beta &=& T_{*}^{-1} \tilde{\beta} \\ \nonumber \tau &=& \tau_{*} \tilde{\tau} \end{eqnarray} where \begin{eqnarray} \label{42} T_{*} &=& \Bigl(\frac{uR}{\sqrt{2\pi}}\Bigr)^{1/3} \\ \nonumber \\ \label{43} \tau_{*} &=& \bigl(\sqrt{2\pi} R^{5} u^{-1} \bigr)^{1/3} \end{eqnarray} In the new notations the replica Hamiltonian (\ref{39}) reads \begin{equation} \label{44} \beta H_{M} \bigl[\tilde{\boldsymbol{\phi}} \bigr] \; = \; \int_{0}^{t/\tau_{*}} d\tilde{\tau} \Bigl[\; \frac{1}{2} \tilde{\beta} \sum_{a=1}^{M}\bigl(\partial_{\tilde{\tau}} \tilde{\phi}_{a}(\tilde{\tau})\bigr)^2 - \frac{1}{2} \tilde{\beta}^{2} \sum_{a,b=1}^{M} U_{0}\bigl(\tilde{\phi}_{a}(\tilde{\tau}) - \tilde{\phi}_{b}(\tilde{\tau})\bigr) \; \Bigr]; \end{equation} where \begin{equation} \label{45} U_{0}(\phi) \; = \; \exp\Bigl\{-\frac{1}{2} \phi^{2} \Bigr\} \end{equation} Accordingly, instead of eq.(\ref{40}) we get \begin{equation} \label{46} \tilde{\beta} \frac{\partial}{\partial \tilde{t}} \Psi({\bf \tilde{x}}; \, t) \; = \; \frac{1}{2}\sum_{a=1}^{M} \, \frac{\partial^{2}}{\partial \tilde{x}_{a}^{2}} \Psi({\bf \tilde{x}}; \, t) \; + \;
\frac{1}{2} \, \tilde{\beta}^{3} \, \sum_{a,b=1}^{M} U_{0}(\tilde{x}_{a} - \tilde{x}_{b}) \, \Psi({\bf \tilde{x}}; \, t) \end{equation} where $\tilde{t} = t/\tau_{*}$ and $\tilde{x} = x/R$. Substituting here $\Psi({\bf \tilde{x}}; \, t) \; = \; \psi({\bf \tilde{x}}) \, \exp\bigl\{-E \tilde{t}\bigr\}$ we obtain the following equation for the eigenfunctions $\psi({\bf \tilde{x}})$ and the eigenvalues (energy) $E$: \begin{equation} \label{47} -2\tilde{\beta} E \; \psi({\bf \tilde{x}}) \; = \; \sum_{a=1}^{M} \, \frac{\partial^{2}}{\partial \tilde{x}_{a}^{2}} \psi({\bf \tilde{x}}) \; + \; \tilde{\beta}^{3} \, \sum_{a,b=1}^{M} U_{0}(\tilde{x}_{a} - \tilde{x}_{b}) \, \psi({\bf \tilde{x}}) \end{equation} which is controlled by the only parameter \begin{equation} \label{48} \tilde{\beta} \; = \; \beta \, T_{*} \; = \; \beta \, \bigl(uR\bigr)^{1/3} \, (2\pi)^{-1/6} \end{equation} We see that $T_{*}$, eq.(\ref{42}), is the crossover temperature which separates the high-temperature, $T \gg T_{*}$, and the low-temperature, $T \ll T_{*}$, regimes. Note also that the dimensionless inverse temperature parameter $\tilde{\beta}$ introduced above, eq.(\ref{48}), coincides with the Reynolds number $Re$, eq.(\ref{9}), so that the limit of large Reynolds number in the Burgers problem corresponds to the zero temperature limit in the considered directed polymers model. Recently it has been demonstrated \cite{zero-T} that in the limit $\tilde{\beta}\to\infty$ the eigenfunction $\psi({\bf \tilde{x}})$ acquires a specific vector replica symmetry breaking (RSB) coordinate structure: its $M$ arguments $\{\tilde{x}_{1}, ... ,\tilde{x}_{M}\}$ split into $K=M/m$ groups, each consisting of $m$ particles.
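The rescaling (\ref{41})-(\ref{43}) can be verified term by term. Assuming the normalized Gaussian form $U(\phi) = \frac{1}{\sqrt{2\pi}\,R}\exp\{-\phi^{2}/2R^{2}\}$ for the potential of eq.(\ref{6}) (an assumption here, since eq.(\ref{6}) lies outside this section, but consistent with the rescaled form (\ref{45})), the kinetic and potential terms of eq.(\ref{39}) become dimensionless precisely when $\tau_{*}T_{*} = R^{2}$ and $u\,\tau_{*} = \sqrt{2\pi}\,R\,T_{*}^{2}$, which the definitions (\ref{42})-(\ref{43}) indeed satisfy:

```python
import sympy as sp

u, R = sp.symbols('u R', positive=True)
two_pi = 2 * sp.pi

# definitions (42)-(43)
T_star = (u * R / sp.sqrt(two_pi)) ** sp.Rational(1, 3)
tau_star = (sp.sqrt(two_pi) * R**5 / u) ** sp.Rational(1, 3)

# kinetic term: beta*R^2/tau_star = tilde_beta requires tau_star * T_star = R^2
assert sp.simplify((tau_star * T_star)**3 - R**6) == 0

# potential term (with the assumed normalization of U):
# beta^2*u*tau_star/(sqrt(2*pi)*R) = tilde_beta^2 requires
# u * tau_star = sqrt(2*pi) * R * T_star^2
assert sp.simplify((u * tau_star)**3 - (sp.sqrt(two_pi) * R * T_star**2)**3) == 0

# eq.(48): T_star = (u*R)^(1/3) * (2*pi)^(-1/6)
assert sp.simplify(T_star**3 - u * R / sp.sqrt(two_pi)) == 0
```

(The identities are checked after cubing, which removes the fractional powers.)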
In other words, to describe the coordinate structure of the eigenfunction $\psi({\bf \tilde{x}})$, instead of the particles coordinates $\{\tilde{x}_{a}\} \; \; (a = 1, ..., M)$ one introduces the coordinates of the centers of mass of the groups $\bigl\{ X_{\alpha}\bigr\} \; \; (\alpha = 1, ..., K) $ and the deviations $\{\xi^{\alpha}_{i}\} \; \; (i = 1, ..., m)$ of the particles of a given group $\alpha$ from the position of its center of mass: \begin{equation} \label{49} \tilde{x}_{a} \; \to \; X_{\alpha} + \xi^{\alpha}_{i} \, ; \; \; \; \; \; \alpha = 1, ..., M/m \, ; \; \; \; i = 1, ..., m \end{equation} where $\sum_{i=1}^{m} \xi^{\alpha}_{i} = 0$. It can be shown \cite{zero-T} that in the zero temperature limit the typical values of the deviations inside the groups are small, $\langle(\xi^{\alpha}_{i})^{2}\rangle\big|_{\tilde{\beta}\to\infty} \; \to \; 0$, while the typical distance between the groups remains finite. As these two spatial scales are well separated, the wave function $\psi({\bf \tilde{x}})$ factorizes into the product of two contributions: the "external" wave function which depends only on the coordinates $\{X_{\alpha}\}$ of the centers of mass of the groups, and the "internal" wave functions which depend only on the coordinates $\{\xi^{\alpha}_{i}\}$ of the particles inside the groups: \begin{equation} \label{50} \psi({\bf \tilde{x}}) \; \to \; \psi\bigl(X_{\alpha}; \; \xi^{\alpha}_{i}\bigr) \; \simeq \; \psi_{*}\bigl(X_{1}, ..., X_{M/m}\bigr) \times \prod_{\alpha=1}^{M/m} \psi_{0}\bigl(\xi^{\alpha}_{1}, ...
\xi^{\alpha}_{m}\bigr) \end{equation} As the values $\xi^{\alpha}_{i}$ are small the interaction potential, eq.(\ref{45}), between the particles inside groups can be approximated as \begin{equation} \label{51} U_{0}\bigl(\xi^{\alpha}_{i} - \xi^{\alpha}_{j}\bigr) \; \simeq \; 1 \; - \; \frac{1}{2} \bigl(\xi^{\alpha}_{i} - \xi^{\alpha}_{j}\bigr)^{2} \end{equation} Thus, according to eq.(\ref{47}), the corresponding equation for the "internal" eigenfunction $\psi_{0}\bigl(\boldsymbol{\xi}\bigr)$ of any group reads \begin{equation} \label{52} -2\tilde{\beta} E_{0} \; \psi_{0}\bigl(\boldsymbol{\xi}\bigr) \; = \; \sum_{i=1}^{m} \, \frac{\partial^{2}}{\partial \xi_{i}^{2}} \psi_{0}\bigl(\boldsymbol{\xi}\bigr) \; + \; \tilde{\beta}^{3} m^{2} \psi_{0}\bigl(\boldsymbol{\xi}\bigr) \; - \; \frac{1}{2} \tilde{\beta}^{3} \, \sum_{i,j=1}^{m} \bigl(\xi_{i} - \xi_{j}\bigr)^{2} \, \psi_{0}\bigl(\boldsymbol{\xi}\bigr) \end{equation} where $\boldsymbol{\xi} \; = \; \{\xi_{1}, \xi_{2}, ..., \xi_{m}\}$. One can easily show that this equation has the following exact (ground state) solution \begin{equation} \label{53} \psi_{0}\bigl(\boldsymbol{\xi}\bigr) \; = \; C \, \exp\Bigl\{ -\frac{1}{4} \tilde{\beta}^{2} \bigl(\tilde{\beta} m\bigr)^{-1/2} \sum_{i,j=1}^{m} \bigl(\xi_{i} - \xi_{j}\bigr)^{2} \Bigr\} \end{equation} where $C$ is the normalization constant and \begin{equation} \label{54} E_{0} \; = \; -\frac{1}{2} \bigl(\tilde{\beta} m\bigr)^{2} \; + \; \frac{1}{2} (m-1) \sqrt{\tilde{\beta} m} \end{equation} is the ground state energy. 
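The statement that (\ref{53})-(\ref{54}) solve eq.(\ref{52}) exactly can be verified by direct differentiation; a symbolic check for the particular value $m=3$ (any small integer would do, $m=3$ is an arbitrary illustrative choice):

```python
import sympy as sp

b = sp.symbols('beta', positive=True)     # stands for tilde beta
m = 3
xi = sp.symbols('xi1:4')                  # xi1, xi2, xi3

# the quadratic form sum_{i,j} (xi_i - xi_j)^2 and the wave function (53)
S = sum((xi[i] - xi[j])**2 for i in range(m) for j in range(m))
psi = sp.exp(-sp.Rational(1, 4) * b**2 / sp.sqrt(b * m) * S)

# ground state energy (54)
E0 = -sp.Rational(1, 2) * (b * m)**2 + sp.Rational(1, 2) * (m - 1) * sp.sqrt(b * m)

# eq.(52): -2*beta*E0*psi = sum_i psi'' + beta^3 m^2 psi - (1/2) beta^3 S psi
lhs = -2 * b * E0 * psi
rhs = (sum(sp.diff(psi, v, 2) for v in xi)
       + b**3 * m**2 * psi
       - sp.Rational(1, 2) * b**3 * S * psi)
assert sp.simplify((lhs - rhs) / psi) == 0
```

The same cancellation mechanism works for arbitrary $m$: the $S$-dependent terms cancel precisely for the Gaussian width chosen in eq.(\ref{53}), and what remains fixes $E_{0}$ to the value (\ref{54}).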
On the other hand, the "external" wave function $\psi_{*}\bigl({\bf X}\bigr)$ (with ${\bf X} = \{X_{1}, ..., X_{M/m} \}$) is defined by the equation \begin{equation} \label{55} -2 \bigl(\tilde{\beta} m\bigr) \, E_{*} \psi_{*}\bigl({\bf X}\bigr) \; = \; \sum_{\alpha=1}^{M/m} \, \frac{\partial^{2}}{\partial X_{\alpha}^{2}} \psi_{*}\bigl({\bf X}\bigr) \; + \; \frac{1}{2} \bigl(\tilde{\beta} m\bigr)^{3} \sum_{\alpha\not= \alpha'}^{M/m} U_{0}\bigl(X_{\alpha}-X_{\alpha'}\bigr) \psi_{*}\bigl({\bf X}\bigr) \end{equation} In terms of the replica approach, the parameter $m$ of the RSB ansatz described above is an integer such that $1 \leq m \leq M$ (so that $M/m$ is also an integer). In the framework of the standard replica technique, after computing the corresponding partition function and its analytic continuation to arbitrary (non-integer) values of $M$ and $m$, in the limit $M \to 0$ the parameter $m$ takes continuous (real) values in the interval $0 \leq m \leq 1$. Its actual physical value $m(\tilde{\beta})$ is fixed by the condition of the {\it maximum} of the total (linear in time $t \to \infty$) replica free energy. It can be shown \cite{zero-T} that in the limit $\tilde{\beta} \to \infty$ the value $m(\tilde{\beta})$ is defined by the relation \begin{equation} \label{56} \tilde{\beta} \, m \; = \; \zeta_{0} \end{equation} where $\zeta_{0}$ is a number of the order of one (such that $m(\tilde{\beta}) \to 0$ as $\tilde{\beta} \to \infty$). The exact value of $\zeta_{0}$ is yet to be computed, as it is defined by the exact solution of the "external" problem, eq.(\ref{55}), which at present is not known.
In terms of this RSB ansatz, in the zero temperature limit the replica partition function of the considered system, eq.(\ref{31}), factorizes into two parts: \begin{equation} \label{57} Z_{x,\epsilon,t} \bigl(M,N_{1},N_{2},N_{3} \bigr) \; \simeq \; Z_{*}\bigl[(\tilde{\beta} m), \; M/m, \; \tilde{t}\bigr] \times Z_{0}\bigl(M, m, \tilde{\beta}, N_{1}, N_{2}, N_{3}, x, \epsilon\bigr) \end{equation} where $Z_{*}$ is the "external" replica partition function: \begin{equation} \label{58} Z_{*} = \prod_{\alpha=1}^{M/m} \Biggl[\int_{\varphi_{\alpha}(0)=0}^{\varphi_{\alpha}(\tilde{t})=0} {\cal D}\varphi_{\alpha}(\tau)\Biggr] \exp\Biggl\{-\frac{1}{2} \int_{0}^{\tilde{t}} d\tau \Bigl[(\tilde{\beta} m) \sum_{\alpha=1}^{M/m} \bigl(\partial_{\tau} \varphi_{\alpha}\bigr)^{2} - (\tilde{\beta} m)^{2} \sum_{\alpha\not= \alpha'}^{M/m} U_{0}\bigl(\varphi_{\alpha} - \varphi_{\alpha'}\bigr) \Bigr] - \tilde{t} \frac{M}{m} \, E_{0} \Biggr\} \end{equation} where $E_{0}$ is given in eq.(\ref{54}). Note that in the limit $\tilde{t}\to\infty$ this partition function becomes independent of $x$ and $\epsilon$, as these parameters do not scale with $\tilde{t}$. The above "external" partition function $Z_{*}$ defines the part of the directed polymer free energy which is extensive (linear) in $\tilde{t}\to\infty$ and fixes the value of the parameter $m = m(\tilde{\beta})$, eq.(\ref{56}); it is in this way that the parameters of the large-scale random potential influence the small-scale statistics defined by the "internal" partition function (see below), which also depends on the value of $m(\tilde{\beta})$.
On the other hand, by definition, \begin{equation} \label{59} \lim_{M\to 0} \, Z_{*}\bigl[(\tilde{\beta} m), \; M/m, \; \tilde{t}\bigr] \; = \; 1 \end{equation} and therefore, except for fixing the value of the replica parameter $m(\tilde{\beta})$, this part of the total partition function does not contribute to the probability function $ {\cal W}_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr)$, eqs.(\ref{33}) and (\ref{30}). This probability function is defined only by the "internal" (independent of $\tilde{t}$) partition function \begin{eqnarray} \nonumber {\cal Z}_{0}\bigl(N_{1}, N_{2}, N_{3}; x, \epsilon\bigr) &=& \lim_{\tilde{\beta}\to\infty} \lim_{M\to 0} Z_{0}\bigl(M, m, \tilde{\beta}, N_{1}, N_{2}, N_{3}, x, \epsilon\bigr) \\ \nonumber \\ &=& \lim_{\tilde{\beta}\to\infty} \lim_{M\to 0} \Biggl[ \sum_{\{\tilde{\xi}^{\alpha}_{i}\}} \prod_{\alpha=1}^{M/m} \psi_{0}\bigl(\tilde{\xi}^{\alpha}_{1}, ..., \tilde{\xi}^{\alpha}_{m}\bigr)\Big|_{\{\tilde{\xi}^{\alpha}_{i}\} = (\tilde{x}/2; \; \tilde{x}/2 - \epsilon ; \; -\tilde{x}/2 + \epsilon ; \; -\tilde{x}/2)} \Biggr] \label{60} \end{eqnarray} where the explicit expression for $\psi_{0}\bigl(\boldsymbol{\tilde{\xi}}\bigr)$ is given in eq.(\ref{53}), and where we have to sum over all possible distributions of $M$ particle coordinates $\{\tilde{\xi}^{\alpha}_{i}\} \; \; (\alpha = 1, ..., M/m ; \; \; i = 1, ..., m)$ over four end-points $\tilde{x}/2; \; \tilde{x}/2 - \epsilon ; \; -\tilde{x}/2 + \epsilon$ and $ -\tilde{x}/2$ with $\tilde{x} = x/R$ and $\tilde{\xi}^{\alpha}_{i} = \xi^{\alpha}_{i}/R$. 
\section{Free energies probability distribution function} Substituting eqs.(\ref{53}) and (\ref{48}) as well as $\tilde{x} = x/R$ and $\tilde{\xi}^{\alpha}_{i} = \xi^{\alpha}_{i}/R$ into eq.(\ref{60}) we get \begin{equation} \label{61} {\cal Z}_{0}\bigl(N_{1}, N_{2}, N_{3}; x, \epsilon\bigr) \; = \; \lim_{\beta\to\infty} \lim_{M\to 0} \Biggl[ \sum_{\{\tilde{\xi}^{\alpha}_{i}\}} \prod_{\alpha=1}^{M/m} \exp\Bigl\{ -\frac{1}{4} \beta^{2} \gamma^{2} \sum_{i,j=1}^{m} \bigl(\xi^{\alpha}_{i} - \xi^{\alpha}_{j}\bigr)^{2} \Bigr\}\Big|_{\{\xi^{\alpha}_{i}\} = (x/2; \; x/2 - \epsilon ; \; -x/2 + \epsilon ; \; -x/2)} \Biggr] \end{equation} where \begin{equation} \label{62} \gamma \; = \; \frac{T_{*}}{R (\tilde{\beta} m)^{1/4}} \end{equation} and $T_{*}$ is given in eq.(\ref{42}). Note that the normalization factor $C$ of the wave function (\ref{53}) can be dropped in eq.(\ref{61}), as $\lim_{M\to 0} C^{M/m} = 1$. According to the definition, eq.(\ref{31}), in the summation over various distributions of $M$ end-points $\xi^{\alpha}_{i}$ over four spatial points the total numbers of $\xi^{\alpha}_{i}$'s attached to $x/2$, $ x/2 - \epsilon$, $-x/2 + \epsilon$ and $-x/2$ are equal to $N_{1}$, $N_{2}$, $N_{3}$ and $(M-N_{1}-N_{2}-N_{3})$, respectively. Let us denote the number of $\xi^{\alpha}_{i}$'s of the group $\alpha$ attached to the points $x/2$, $ x/2 - \epsilon$, $-x/2 + \epsilon$ and $-x/2$ by $k^{\alpha}_{1}$, $k^{\alpha}_{2}$, $k^{\alpha}_{3}$ and $k^{\alpha}_{4}$.
As the total number of particles in each group is equal to $m$, by definition, \begin{equation} \label{63} k^{\alpha}_{1} + k^{\alpha}_{2} + k^{\alpha}_{3} + k^{\alpha}_{4} \; = \; m \end{equation} and \begin{equation} \label{64} \left\{ \begin{array}{ll} \sum_{\alpha=1}^{M/m} k^{\alpha}_{1} \; = \; N_{1} \\ \\ \sum_{\alpha=1}^{M/m} k^{\alpha}_{2} \; = \; N_{2} \\ \\ \sum_{\alpha=1}^{M/m} k^{\alpha}_{3} \; = \; N_{3} \\ \\ \sum_{\alpha=1}^{M/m} k^{\alpha}_{4} \; = \; M-N_{1}-N_{2}-N_{3} \end{array} \right. \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=13.0cm]{Figure2.eps} \caption[]{Schematic representation of the replica structure of the partition function in eqs.(\ref{61})-(\ref{65}).} \end{center} \label{figure2} \end{figure} Schematically the above replica structure of the partition function (\ref{61}) is represented in Fig.2. Accordingly, the factors $\bigl(\xi^{\alpha}_{i} - \xi^{\alpha}_{j}\bigr)^{2}$ in eq.(\ref{61}) can take four possible values: $\epsilon^{2}$, $(x-\epsilon)^{2}$, $(x-2\epsilon)^{2}$ and $x^{2}$. Simple combinatoric considerations yield: \begin{eqnarray} \nonumber {\cal Z}_{0}\bigl(N_{1}, N_{2}, N_{3}; x, \epsilon\bigr) &=& \lim_{\beta\to\infty} \lim_{M\to 0} \Biggl\{ \frac{N_{1}! \, N_{2}! \, N_{3}! \, (M - N_{1}- N_{2}-N_{3})!}{M!} \times \\ \nonumber \\ \label{65} &\times& \prod_{\alpha=1}^{M/m} \Biggl[ \Bigl(\prod_{i=1}^{4} \sum_{k^{\alpha}_{i}=0}^{m} \Bigr) \frac{m!}{k^{\alpha}_{1}! \, k^{\alpha}_{2}! \, k^{\alpha}_{3}! 
\, k^{\alpha}_{4}!} \boldsymbol{\delta}\Bigl(\sum_{i=1}^{4}k^{\alpha}_{i}, \; m\Bigr) \exp\Bigl\{-\frac{1}{4} \beta^{2} \gamma^{2} \sum_{i,j=1}^{4} D_{ij} k^{\alpha}_{i}k^{\alpha}_{j} \Bigr\} \Biggr] \times \\ \nonumber \\ \nonumber &\times& \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{M/m}k^{\alpha}_{1}, \; N_{1}\Bigr) \; \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{M/m}k^{\alpha}_{2}, \; N_{2}\Bigr) \; \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{M/m}k^{\alpha}_{3}, \; N_{3}\Bigr) \Biggr\} \end{eqnarray} where $\boldsymbol{\delta}(p,q)$ is the Kronecker symbol and \begin{equation} \label{66} \hat{D} \; = \; \left( \begin{array}{cccc} 0 & \epsilon^{2} & (x-\epsilon)^{2} & x^{2} \\ \epsilon^{2} & 0 & (x-2\epsilon)^{2} & (x-\epsilon)^{2}\\ (x-\epsilon)^{2} & (x-2\epsilon)^{2} & 0 & \epsilon^{2} \\ x^{2} & (x-\epsilon)^{2} & \epsilon^{2} & 0 \\ \end{array} \right) \end{equation} Note that the last constraint in eq.(\ref{64}) can be dropped out of the expression (\ref{65}), as it is automatically fulfilled due to the previous three ones together with the condition (\ref{63}). Substituting the matrix (\ref{66}) into eq.(\ref{65}) we get \begin{eqnarray} \nonumber {\cal Z}_{0}\bigl(N_{1}, N_{2}, N_{3}; x, \epsilon\bigr) &=& \lim_{\beta\to\infty} \lim_{M\to 0} \Biggl\{ \frac{N_{1}! \, N_{2}! \, N_{3}! 
\, (M - N_{1}- N_{2}-N_{3})!}{M!} \times \\ \nonumber \\ \label{67} &\times& \exp\Bigl\{ -\frac{1}{2} \beta N_{1} (\beta m) \gamma^{2} x^{2} -\frac{1}{2} \beta N_{2} (\beta m) \gamma^{2} (x-\epsilon)^{2} -\frac{1}{2} \beta N_{3} (\beta m) \gamma^{2} \epsilon^{2} \Bigr\} \times \\ \nonumber \\ \nonumber &\times& \prod_{\alpha=1}^{M/m} \Biggl[ \sum_{k^{\alpha}_{1},k^{\alpha}_{2},k^{\alpha}_{3}=0}^{m} \; C^{m}_{k^{\alpha}_{1},k^{\alpha}_{2},k^{\alpha}_{3}} \exp\Bigl\{ \frac{1}{2} \beta^{2} \gamma^{2} \bigl(x k^{\alpha}_{1} + (x-\epsilon) k^{\alpha}_{2} + \epsilon k^{\alpha}_{3}\bigr)^{2} \Bigr\} \Biggr] \times \\ \nonumber \\ \nonumber &\times& \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{M/m}k^{\alpha}_{1}, \; N_{1}\Bigr) \; \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{M/m}k^{\alpha}_{2}, \; N_{2}\Bigr) \; \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{M/m}k^{\alpha}_{3}, \; N_{3}\Bigr) \Biggr\} \end{eqnarray} where \begin{equation} \label{68} C^{m}_{k^{\alpha}_{1},k^{\alpha}_{2},k^{\alpha}_{3}} \; = \; \frac{m!}{k^{\alpha}_{1}! \, k^{\alpha}_{2}! \, k^{\alpha}_{3}! \, \bigl(m-k^{\alpha}_{1}-k^{\alpha}_{2}-k^{\alpha}_{3}\bigr)!} \end{equation} Using the standard integral representation of the Kronecker symbol, \begin{equation} \label{69} \boldsymbol{\delta}(p, \, q) \; = \; \oint \frac{dz}{2\pi i z} \, z^{p-q} \end{equation} (where the contour of integration in the complex plane is a circle around zero) the partition function, eq.(\ref{67}), can be represented as follows: \begin{eqnarray} \nonumber {\cal Z}_{0}\bigl(N_{1}, N_{2}, N_{3}; x, \epsilon\bigr) &=& \lim_{\beta\to\infty} \lim_{M\to 0} \Biggl\{ \frac{N_{1}! \, N_{2}! \, N_{3}!
\, (M - N_{1}- N_{2}-N_{3})!}{M!} \exp\bigl\{ -\beta N_{1} f_{01} -\beta N_{2} f_{02}-\beta N_{3} f_{03} \bigr\} \times \\ \nonumber \\ \nonumber &\times& \frac{1}{(2\pi i)^{3}} \oint \frac{dz_{1}}{z_{1}} \, z_{1}^{-N_{1}} \oint \frac{dz_{2}}{z_{2}} \, z_{2}^{-N_{2}} \oint \frac{dz_{3}}{z_{3}} \, z_{3}^{-N_{3}} \times \\ \nonumber \\ \label{70} &\times& \Biggl[ \Biggl<\Bigl( 1 + z_{1}\exp\{\beta\gamma x \xi\} + z_{2}\exp\{\beta\gamma (x-\epsilon) \xi\} + z_{3}\exp\{\beta\gamma \epsilon \xi\} \Bigr)^{m}\Biggr>_{\xi} \Biggr]^{M/m}\; \; \Biggr\} \end{eqnarray} where \begin{eqnarray} \nonumber f_{01} &=& \frac{1}{2} (\beta m) \gamma^{2} x^{2} \\ \label{71} f_{02} &=& \frac{1}{2} (\beta m) \gamma^{2} (x-\epsilon)^{2} \\ \nonumber f_{03} &=& \frac{1}{2} (\beta m) \gamma^{2} \epsilon^{2} \end{eqnarray} and $\bigl< (...)\bigr>_{\xi}$ denotes the Gaussian average over the variable $\xi$: \begin{equation} \label{72} \bigl<(...)\bigr>_{\xi} \; \equiv \; \int_{-\infty}^{+\infty} \frac{d\xi}{\sqrt{2\pi}} \; (...) \exp\Bigl\{-\frac{1}{2} \xi^{2} \Bigr\} \end{equation} Now the expression for the replica partition function, eq.(\ref{70}), can be analytically continued to arbitrary non-integer values of the parameter $M$.
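The passage from eq.(\ref{65}) to eq.(\ref{70}) uses the decomposition $\sum_{i,j}D_{ij}k_{i}k_{j} = 2m\sum_{i}k_{i}d_{i}^{2} - 2\bigl(\sum_{i}k_{i}d_{i}\bigr)^{2}$, where $d = (x,\, x-\epsilon,\, \epsilon,\, 0)$ are the distances of the four end-points from $-x/2$ and $\sum_{i}k_{i}=m$, followed by the Gaussian linearization of the squared term via the average (\ref{72}). A brute-force numerical check of both steps (the sample values of $x$, $\epsilon$ and of the occupation numbers are arbitrary):

```python
import itertools, math, random

random.seed(0)

# (i) quadratic-form identity behind the step (65) -> (67):
# with d = (x, x-eps, eps, 0) and D_ij = (d_i - d_j)^2  (the matrix (66)),
# sum_ij D_ij k_i k_j = 2*m*sum_i k_i d_i^2 - 2*(sum_i k_i d_i)^2,  m = sum_i k_i
for _ in range(100):
    x, eps = random.uniform(0.5, 3.0), random.uniform(0.01, 0.4)
    d = (x, x - eps, eps, 0.0)
    k = [random.randint(0, 5) for _ in range(4)]
    m = sum(k)
    quad = sum(k[i] * k[j] * (d[i] - d[j])**2
               for i, j in itertools.product(range(4), repeat=2))
    lin = sum(k[i] * d[i] for i in range(4))
    sq = sum(k[i] * d[i]**2 for i in range(4))
    assert abs(quad - (2 * m * sq - 2 * lin**2)) < 1e-9

# (ii) Gaussian linearization used in eq.(70):  exp(a^2/2) = < exp(a*xi) >_xi
a, h = 0.7, 0.01
avg = sum(math.exp(a * n * h - (n * h)**2 / 2.0) * h
          for n in range(-1000, 1001)) / math.sqrt(2.0 * math.pi)
assert abs(avg - math.exp(a**2 / 2.0)) < 1e-6
```

Summed over the groups (with $\sum_{\alpha}k^{\alpha}_{i} = N_{i}$), the first term of the decomposition produces the $f_{0i}$-prefactor of eq.(\ref{70}), while the linearized squared term produces the $\xi$-average.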
In particular, the factorial prefactor \begin{equation} \label{73} \frac{(M - N_{1}- N_{2}-N_{3})!}{M!} \; \to \; \frac{\Gamma(M - N_{1}- N_{2}-N_{3} + 1)}{\Gamma(M+1)} \end{equation} Using the Gamma function relation, \begin{equation} \label{74} \Gamma(-z) \; = \; -\frac{\pi}{\Gamma(z+1) \sin(\pi z)} \end{equation} for {\it integer and positive} values of $N_{1,2,3}$ this prefactor can be represented as follows, \begin{eqnarray} \nonumber \frac{\Gamma(M - N_{1}- N_{2}-N_{3} + 1)}{\Gamma(M+1)} &=& -\frac{\pi}{\Gamma(M+1) \Gamma(N_{1}+N_{2}+N_{3}-M) \sin\bigl[\pi(N_{1}+N_{2}+N_{3}-1)-\pi M\bigr]} \\ \nonumber \\ \label{75} &=& \frac{\pi \, (-1)^{N_{1}+N_{2}+N_{3}-1}}{\sin\bigl(\pi M\bigr) \, \Gamma(N_{1}+N_{2}+N_{3}-M) \, \Gamma(M+1)} \end{eqnarray} so that in the limit $M \to 0$ we get \begin{equation} \label{76} \frac{\pi \, (-1)^{N_{1}+N_{2}+N_{3}-1}}{\sin\bigl(\pi M\bigr) \, \Gamma(N_{1}+N_{2}+N_{3}-M) \, \Gamma(M+1)}\Bigg|_{M\to 0} \; \to \; \frac{(-1)^{N_{1}+N_{2}+N_{3}-1}}{M \; \Gamma(N_{1}+N_{2}+N_{3})} \end{equation} On the other hand, for the last factor in the expression (\ref{70}) we find \begin{equation} \label{77} \bigl[...\bigr]^{M/m}\Big|_{M \to 0} \; \to \; 1 \; + \; \frac{M}{m} \ln\bigl[...\bigr] \end{equation} Substituting eqs.(\ref{76}) and (\ref{77}) into eq.(\ref{70}) and taking into account that for any nonzero {\it integer} $N$, \begin{equation} \label{78} \oint \frac{dz}{z} \, z^{-N} \; = \; 0 \end{equation} in the limit $M \to 0$ we get \begin{eqnarray} \nonumber {\cal Z}_{0}\bigl(N_{1}, N_{2}, N_{3}; x, \epsilon\bigr) &=& \lim_{\beta\to\infty} \Biggl\{ \frac{(-1)^{N_{1}+N_{2}+N_{3}-1} \Gamma(N_{1}+1) \, \Gamma(N_{2}+1) \, \Gamma(N_{3}+1)}{ m \, \Gamma(N_{1}+N_{2}+N_{3})} \exp\bigl\{ -\beta N_{1} f_{01} -\beta N_{2} f_{02}-\beta N_{3} f_{03} \bigr\} \times \\ \nonumber \\ \nonumber &\times& \frac{1}{(2\pi i)^{3}} \oint \frac{dz_{1}}{z_{1}} \, z_{1}^{-N_{1}} \oint \frac{dz_{2}}{z_{2}} \, z_{2}^{-N_{2}} \oint \frac{dz_{3}}{z_{3}} \,
z_{3}^{-N_{3}} \times \\ \nonumber \\ \label{79} &\times& \ln \Biggl[ \Biggl<\Bigl( 1 + z_{1}\exp\{\beta\gamma x \xi\} + z_{2}\exp\{\beta\gamma (x-\epsilon) \xi\} + z_{3}\exp\{\beta\gamma \epsilon \xi\} \Bigr)^{m}\Biggr>_{\xi} \Biggr]\; \; \Biggr\} \end{eqnarray} Substituting this expression into eqs.(\ref{30}) and (\ref{33}) for the free energy probability distribution function we obtain \begin{eqnarray} \nonumber {\cal W}_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr) &=& \lim_{\beta\to\infty} \Biggl\{ \sum_{N_{1},N_{2},N_{3}=1}^{\infty} \frac{\exp\bigl\{ \beta N_{1}(f_{1}- f_{01}) +\beta N_{2}(f_{2}- f_{02}) + \beta N_{3}(f_{3}- f_{03}) \bigr\}}{ m \, \Gamma(N_{1}+N_{2}+N_{3})} \times \\ \nonumber \\ \nonumber &\times& \frac{1}{(2\pi i)^{3}} \oint \frac{dz_{1}}{z_{1}} \, z_{1}^{-N_{1}} \oint \frac{dz_{2}}{z_{2}} \, z_{2}^{-N_{2}} \oint \frac{dz_{3}}{z_{3}} \, z_{3}^{-N_{3}} \times \\ \nonumber \\ \label{80} &\times& \ln \Biggl[ \Biggl<\Bigl( 1 + z_{1}\exp\{\beta\gamma x \xi\} + z_{2}\exp\{\beta\gamma (x-\epsilon) \xi\} + z_{3}\exp\{\beta\gamma \epsilon \xi\} \Bigr)^{m}\Biggr>_{\xi} \Biggr]\; \; \Biggr\} \end{eqnarray} The limit $\beta\to \infty$ is somewhat tricky: on the one hand, according to eq.(\ref{56}), in the zero temperature limit $m \propto 1/\beta \to 0$, while on the other hand there are several exponential factors in the above expression which are formally divergent in this limit.
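The analytic continuation of the factorial prefactor, eqs.(\ref{73})-(\ref{76}), can be spot-checked numerically; a minimal sketch (the particular values of $N = N_{1}+N_{2}+N_{3}$ are arbitrary):

```python
import sympy as sp

M = sp.symbols('M')
for N in (1, 2, 3, 5):
    # prefactor (73): Gamma(M - N + 1)/Gamma(M + 1) has a simple pole at M = 0;
    # eq.(76) states  M * prefactor -> (-1)^(N-1)/Gamma(N)  as M -> 0
    prefactor = sp.gamma(M - N + 1) / sp.gamma(M + 1)
    small = sp.Rational(1, 10**8)
    val = sp.N((M * prefactor).subs(M, small), 30)
    expected = sp.N((-1)**(N - 1) / sp.gamma(N), 30)
    assert abs(val - expected) < 1e-6
```

The evaluation at $M = 10^{-8}$ probes the residue of the pole of $\Gamma(M-N+1)$ at the negative integer $M-N+1 = 1-N$, which is exactly the content of the reflection formula (\ref{74}).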
To take the limit $m \propto 1/\beta \to 0$ the expression under the logarithm in eq.(\ref{80}) can be represented as follows: \begin{eqnarray} \nonumber &&\Biggl<\Bigl( 1 + z_{1}\exp\{\beta\gamma x \xi\} + z_{2}\exp\{\beta\gamma (x-\epsilon) \xi\} + z_{3}\exp\{\beta\gamma \epsilon \xi\} \Bigr)^{m}\Biggr>_{\xi} \; = \; \\ \nonumber \\ \label{81} &=& 1 + \sum_{k_{1}+k_{2}+k_{3}\geq 1}^{\infty} C^{m}_{k_{1}k_{2}k_{3}} \; z_{1}^{k_{1}}z_{2}^{k_{2}}z_{3}^{k_{3}} \; \Bigl< \exp\bigl\{ \beta k_{1}\gamma x \xi + \beta k_{2}\gamma (x-\epsilon) \xi + \beta k_{3}\gamma\epsilon \xi \bigr\} \Bigr>_{\xi} \end{eqnarray} where \begin{equation} \label{82} C^{m}_{k_{1}k_{2}k_{3}} \; = \; \frac{\Gamma(m+1)}{ \Gamma(k_{1}+1) \Gamma(k_{2}+1) \Gamma(k_{3}+1) \Gamma(m - k_{1}- k_{2}- k_{3} +1)} \end{equation} In the limit $m\to 0$ we get (cf. eqs.(\ref{75})-(\ref{76})) \begin{equation} \label{83} C^{m}_{k_{1}k_{2}k_{3}}\Big|_{m\to 0} \simeq m \frac{(-1)^{k_{1}+ k_{2}+ k_{3} -1}}{k_{1}+ k_{2}+ k_{3}} \; C^{0}_{k_{1}k_{2}k_{3}} \end{equation} where \begin{equation} \label{84} C^{0}_{k_{1}k_{2}k_{3}} \; = \; \frac{\Gamma(k_{1}+ k_{2}+ k_{3}+1)}{\Gamma(k_{1}+1) \Gamma(k_{2}+1) \Gamma(k_{3}+1)} \end{equation} Substituting eqs.(\ref{83}) and (\ref{81}) into eq.(\ref{80}) and expanding the logarithm after the integrations over $z_{1}$, $z_{2}$ and $z_{3}$ we obtain \begin{eqnarray} \nonumber {\cal W}_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr) &=& \lim_{\beta\to\infty} \Biggl\{ \frac{1}{(\beta m)} \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \times \\ \nonumber \\ \nonumber &\times& \prod_{\alpha=1}^{n} \Biggl[ (\beta m) \sum_{k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\geq 1}^{\infty} \frac{(-1)^{k_{1}^{\alpha}+ k_{2}^{\alpha}+ k_{3}^{\alpha} -1}}{ \beta(k_{1}^{\alpha}+ k_{2}^{\alpha}+ k_{3}^{\alpha})} \; C^{0}_{k^{\alpha}_{1}k^{\alpha}_{2}k^{\alpha}_{3}} \Bigl< \exp\bigl\{ \beta k_{1}^{\alpha}\gamma x \xi + \beta k_{2}^{\alpha}\gamma (x-\epsilon) \xi + \beta k_{3}^{\alpha}\gamma\epsilon \xi
\bigr\} \Bigr>_{\xi} \Biggr] \times \\ \nonumber \\ \nonumber &\times& \sum_{N_{1},N_{2},N_{3}=1}^{\infty} \frac{\beta(N_{1}+N_{2}+N_{3})}{\Gamma(N_{1}+N_{2}+N_{3}+1)} \exp\bigl\{ \beta N_{1}(f_{1}- f_{01}) +\beta N_{2}(f_{2}- f_{02}) + \beta N_{3}(f_{3}- f_{03}) \bigr\} \times \\ \nonumber \\ \label{85} &\times& \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{n}k^{\alpha}_{1}, \; N_{1}\Bigr) \; \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{n}k^{\alpha}_{2}, \; N_{2}\Bigr) \; \boldsymbol{\delta}\Bigl(\sum_{\alpha=1}^{n}k^{\alpha}_{3}, \; N_{3}\Bigr) \Biggr\} \end{eqnarray} Substituting here $\beta m \; = \; \tilde{\beta} m /T_{*} \; = \; \zeta_{0}/T_{*}$ (see eqs.(\ref{56}), (\ref{41}) and (\ref{42})) and resolving the Kronecker symbols in the summations over $N_{1}$, $N_{2}$ and $N_{3}$ we get \begin{eqnarray} \nonumber &&{\cal W}_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr) \; = \; \frac{T_{*}}{\zeta_{0}} \lim_{\beta\to\infty} \Biggl\{ \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \; \Bigl(\frac{\zeta_{0}}{T_{*}}\Bigr)^{n} \prod_{\alpha=1}^{n} \Biggl[ \sum_{k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\geq 1}^{\infty} \frac{(-1)^{k_{1}^{\alpha}+ k_{2}^{\alpha}+ k_{3}^{\alpha} -1}}{ \beta(k_{1}^{\alpha}+ k_{2}^{\alpha}+ k_{3}^{\alpha})} \; C^{0}_{k^{\alpha}_{1}k^{\alpha}_{2}k^{\alpha}_{3}} \times \\ \nonumber \\ \nonumber &\times& \Bigl< \exp\Bigl\{ \beta k_{1}^{\alpha}\bigl(\gamma x \xi + f_{1} - f_{01}\bigr) + \beta k_{2}^{\alpha}\bigl(\gamma (x-\epsilon) \xi + f_{2}-f_{02}\bigr) + \beta k_{3}^{\alpha}\bigl(\gamma\epsilon \xi + f_{3} - f_{03}\bigr) \Bigr\} \Bigr>_{\xi} \Biggr] \times \\ \nonumber \\ \label{86} &\times& \frac{\beta\sum_{\alpha=1}^{n}\bigl(k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\bigr)}{ \Gamma\Bigl[\sum_{\alpha=1}^{n}\bigl(k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\bigr) + 1\Bigr]} \; \boldsymbol{\theta}\Bigl(\sum_{\alpha=1}^{n}k^{\alpha}_{1}\, - \, 1\Bigr) \; \boldsymbol{\theta}\Bigl(\sum_{\alpha=1}^{n}k^{\alpha}_{2}\, - \, 1\Bigr) \;
\boldsymbol{\theta}\Bigl(\sum_{\alpha=1}^{n}k^{\alpha}_{3}\, - \, 1\Bigr) \Biggr\} \end{eqnarray} where the symbol $\boldsymbol{\theta}(p\, - \, 1)$ (the "discrete step function") indicates that $p \geq 1$. Note, however, that according to eq.(\ref{86}) the contributions with $\sum_{\alpha=1}^{n}k^{\alpha}_{i} = 0$ (such that all $k^{1}_{i} = k^{2}_{i} = ... = k^{n}_{i} = 0$) are independent of the corresponding free energy parameter $f_{i}$. On the other hand, the probability density function $P_{x,\epsilon}(f_{1}, f_{2}, f_{3})$ which we are aiming to derive is given by the derivatives of the above probability function $W$ over {\it all three} variables $f_{1}$, $f_{2}$ and $f_{3}$ (see eq.(\ref{32})). Therefore, as far as the PDF $P_{x,\epsilon}(f_{1}, f_{2}, f_{3})$ is concerned, the restrictions imposed by the last three "discrete step functions" in eq.(\ref{86}) can be omitted. The factor $\beta\sum_{\alpha=1}^{n}\bigl(k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\bigr)$ in the numerator of the last term in eq.(\ref{86}) can be obtained by taking the derivatives $\Bigl(\frac{\partial}{\partial f_{1}} + \frac{\partial}{\partial f_{2}} +\frac{\partial}{\partial f_{3}}\Bigr)$ of ${\cal W}_{x,\epsilon}\bigl(f_{1}, f_{2},f_{3}\bigr)$.
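The small-$m$ asymptotics of the multinomial coefficients, eqs.(\ref{83})-(\ref{84}), can be checked numerically with the Gamma function (a sketch; the value of $m$ and the test indices are arbitrary):

```python
import math

# Check of eqs.(83)-(84): for small m,
#   C^m_{k1 k2 k3}  ~  m * (-1)^(k1+k2+k3-1)/(k1+k2+k3) * C^0_{k1 k2 k3}.
def C(m, k1, k2, k3):
    # exact coefficient, eq.(82)
    return math.gamma(m + 1)/(math.gamma(k1 + 1)*math.gamma(k2 + 1)
                              *math.gamma(k3 + 1)*math.gamma(m - k1 - k2 - k3 + 1))

def C_small_m(m, k1, k2, k3):
    # asymptotic form, eqs.(83)-(84)
    K = k1 + k2 + k3
    C0 = math.gamma(K + 1)/(math.gamma(k1 + 1)*math.gamma(k2 + 1)*math.gamma(k3 + 1))
    return m*(-1)**(K - 1)/K*C0

m = 1e-7
for ks in [(1, 0, 0), (1, 1, 0), (2, 1, 1), (3, 0, 2)]:
    assert abs(C(m, *ks) - C_small_m(m, *ks)) < 1e-4*abs(C_small_m(m, *ks))
```

The check relies on $\Gamma(m-K+1)$ being evaluated close to (but not at) its poles at non-positive integers.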
On the other hand, \begin{equation} \label{87} \frac{1}{\beta \bigl(k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\bigr)} \; = \; \int_{0}^{+\infty} dy \exp\bigl\{ - \beta \bigl(k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\bigr) \; y \bigr\} \end{equation} Substituting this into eq.(\ref{86}) and then substituting the obtained expression into eq.(\ref{32}) for the PDF $P_{x,\epsilon}(f_{1}, f_{2}, f_{3})$ we get \begin{eqnarray} \nonumber &&P_{x,\epsilon}(f_{1}, f_{2}, f_{3}) \; = \; \frac{T_{*}}{\zeta_{0}} \frac{\partial^{3}}{\partial f_{1} \; \partial f_{2} \; \partial f_{3}} \Bigl(\frac{\partial}{\partial f_{1}} + \frac{\partial}{\partial f_{2}} +\frac{\partial}{\partial f_{3}}\Bigr) \lim_{\beta\to\infty} \Biggl\{ \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \; \Bigl(\frac{\zeta_{0}}{T_{*}}\Bigr)^{n} \times \\ \nonumber \\ \nonumber &\times& \prod_{\alpha=1}^{n} \Biggl[ \int_{0}^{+\infty} dy \sum_{k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\geq 1}^{\infty} (-1)^{k_{1}^{\alpha}+ k_{2}^{\alpha}+ k_{3}^{\alpha} -1}\; C^{0}_{k^{\alpha}_{1}k^{\alpha}_{2}k^{\alpha}_{3}} \times \\ \nonumber \\ \nonumber &\times& \Bigl< \exp\Bigl\{ \beta k_{1}^{\alpha}\bigl(\gamma x \xi + f_{1} - f_{01}-y\bigr) + \beta k_{2}^{\alpha}\bigl(\gamma (x-\epsilon) \xi + f_{2}-f_{02}-y\bigr) + \beta k_{3}^{\alpha}\bigl(\gamma\epsilon \xi + f_{3} - f_{03}-y\bigr) \Bigr\} \Bigr>_{\xi} \Biggr] \times \\ \nonumber \\ \label{88} &\times& \frac{1}{\Gamma\Bigl[\sum_{\alpha=1}^{n}\bigl(k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\bigr) + 1\Bigr]} \; \Biggr\} \end{eqnarray} In the limit $\beta\to\infty$ the summation of the series over $k^{\alpha}_{i}$ in the above expression can be done using their integral representation. Namely, let us consider the series of a general type \begin{equation} \label{89} R(\beta) \; = \; \sum_{k=0}^{\infty} (-1)^{k-1} \; \Phi\bigl(\beta k; \; k\bigr) \end{equation} where $\Phi(z, z')$ is a "good" analytic function in the complex plane.
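For a concrete convergent example, $\Phi(z; z') = e^{\lambda z}$ with $\lambda < 0$, the series (\ref{89}) is geometric, $R(\beta) = -1/(1 + e^{\lambda\beta})$, so that $\lim_{\beta\to\infty} R(\beta) = -\theta(-\lambda) = -1$, the value used later in eq.(\ref{95}). A quick numerical sketch:

```python
import math

# Alternating sum eq.(89) for Phi(beta*k; k) = exp(lam*beta*k), lam < 0:
# R(beta) = sum_{k>=0} (-1)^(k-1) exp(lam*beta*k) = -1/(1 + exp(lam*beta)),
# which tends to -1 = -theta(-lam) as beta -> infinity.
def R(beta, lam, kmax=2000):
    return sum((-1)**(k - 1)*math.exp(lam*beta*k) for k in range(kmax))

for beta in (10.0, 50.0, 200.0):
    # |R + 1| = t/(1+t) < t with t = exp(lam*beta)
    assert abs(R(beta, lam=-0.5) + 1.0) <= math.exp(-0.5*beta) + 1e-12
```

For $\lambda > 0$ the sum itself diverges and the limit $-\theta(-\lambda) = 0$ is defined through the contour representation below.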
One can easily see that the summation in eq.(\ref{89}) can be replaced by an integration in the complex plane: \begin{equation} \label{90} R(\beta) \; = \; \frac{1}{2i} \int_{{\cal C}} \frac{dz}{\sin(\pi z)} \; \Phi\bigl(\beta z; \; z\bigr) \end{equation} where the integration goes over the contour ${\cal C}$ shown in Fig.3, and it is assumed that the function $\Phi$ is such that its integration at infinity gives no contribution. Indeed, due to the sign-alternating contributions of the simple poles at integer $z = 1, 2, ...$ eq.(\ref{90}) reduces to eq.(\ref{89}). Then, redefining $z \to z/\beta$ we get \begin{equation} \label{91} \lim_{\beta\to\infty} R(\beta) \; = \; \frac{1}{2\pi i} \int_{{\cal C}} \frac{dz}{z} \; \lim_{\beta\to\infty} \Phi\bigl(z; \; z/\beta\bigr) \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=8.0cm]{Figure3.eps} \caption[]{The contour of integration in eq.(\ref{90})} \end{center} \label{figure3} \end{figure} In terms of the above integral representation, changing $k^{\alpha}_{i} \to z^{\alpha}_{i}/\beta$ for the Gamma function factors in eq.(\ref{88}) we have: \begin{equation} \label{92} \Gamma\Bigl[\sum_{\alpha=1}^{n}\bigl(k_{1}^{\alpha}+k_{2}^{\alpha}+k_{3}^{\alpha}\bigr) + 1\Bigr] \; \to \; \Gamma\Bigl[\sum_{\alpha=1}^{n}\bigl(z_{1}^{\alpha}+z_{2}^{\alpha}+z_{3}^{\alpha}\bigr)/\beta + 1\Bigr]\Big|_{\beta\to\infty} \; \to \; 1 \end{equation} and (see eq.(\ref{84})) \begin{equation} \label{93} C^{0}_{k^{\alpha}_{1}k^{\alpha}_{2}k^{\alpha}_{3}} \; \to \; \frac{\Gamma(z^{\alpha}_{1}/\beta+ z^{\alpha}_{2}/\beta+ z^{\alpha}_{3}/\beta+1)}{ \Gamma(z^{\alpha}_{1}/\beta+1) \Gamma(z^{\alpha}_{2}/\beta+1) \Gamma(z^{\alpha}_{3}/\beta+1)}\Big|_{\beta\to\infty} \; \to \; 1 \end{equation} Thus, after extracting the contributions with $k^{\alpha}_{1} = k^{\alpha}_{2} = k^{\alpha}_{3} = 0$, the expression in eq.(\ref{88}) reduces to \begin{eqnarray} \nonumber &&P_{x,\epsilon}(f_{1}, f_{2}, f_{3}) \; = \; \frac{T_{*}}{\zeta_{0}}
\frac{\partial^{3}}{\partial f_{1} \; \partial f_{2} \; \partial f_{3}} \Bigl(\frac{\partial}{\partial f_{1}} + \frac{\partial}{\partial f_{2}} +\frac{\partial}{\partial f_{3}}\Bigr) \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \; \Bigl(\frac{\zeta_{0}}{T_{*}}\Bigr)^{n} \times \\ \nonumber \\ \nonumber &\times& \Biggl[ \int_{0}^{+\infty} dy \Biggl< \Biggl( \int_{{\cal C}}\frac{dz_{1}}{2\pi i z_{1}} \exp\Bigl\{ z_{1}\bigl(\gamma x \xi + f_{1} - f_{01} - y \bigr)\Bigr\} \; \int_{{\cal C}}\frac{dz_{2}}{2\pi i z_{2}} \exp\Bigl\{ z_{2}\bigl(\gamma (x-\epsilon) \xi + f_{2} - f_{02} - y \bigr)\Bigr\} \times \\ \nonumber \\ \label{94} &\times& \int_{{\cal C}}\frac{dz_{3}}{2\pi i z_{3}} \exp\Bigl\{ z_{3}\bigl(\gamma \epsilon \xi + f_{3} - f_{03} - y \bigr)\Bigr\} \; + \; 1 \Biggr) \Biggr>_{\xi} \Biggr]^{n} \end{eqnarray} Taking into account that \begin{equation} \label{95} \int_{{\cal C}}\frac{dz}{2\pi i z} \exp\{ \lambda z\} \; = \; - \theta(-\lambda) \end{equation} we obtain \begin{equation} \label{96} P_{x,\epsilon}(f_{1}, f_{2}, f_{3}) \; = \; \frac{T_{*}}{\zeta_{0}} \frac{\partial^{3}}{\partial f_{1} \; \partial f_{2} \; \partial f_{3}} \Bigl(\frac{\partial}{\partial f_{1}} + \frac{\partial}{\partial f_{2}} +\frac{\partial}{\partial f_{3}}\Bigr) \; \ln\Bigl[ 1 \; + \; S(f_{1}, f_{2}, f_{3})\Bigr] \end{equation} or \begin{equation} \label{97} P_{x,\epsilon}(f_{1}, f_{2}, f_{3}) \; = \; \frac{\partial^{3}}{\partial f_{1} \; \partial f_{2} \; \partial f_{3}} \Biggl[ \Bigl(1 \; + \; S(f_{1}, f_{2}, f_{3})\Bigr)^{-1} \; G(f_{1}, f_{2}, f_{3}) \Biggr] \end{equation} where \begin{eqnarray} \nonumber S(f_{1}, f_{2}, f_{3}) &=& \frac{\zeta_{0}}{T_{*}} \int_{0}^{+\infty} dy \int_{-\infty}^{+\infty} \frac{d\xi}{\sqrt{2\pi}} \exp\Bigl\{-\frac{1}{2}\xi^{2}\Bigr\} \; \times \\ \nonumber \\ \label{98} &\times& \Biggl[ 1 \; - \; \theta\bigl(y + f_{01} - f_{1} - \gamma x \xi\bigr) \; \theta\bigl(y + f_{02} - f_{2} - \gamma (x-\epsilon) \xi\bigr) \; \theta\bigl(y + f_{03} - f_{3} - \gamma 
\epsilon \xi\bigr) \Biggr] \end{eqnarray} and \begin{eqnarray} \nonumber G(f_{1}, f_{2}, f_{3}) &=& \frac{T_{*}}{\zeta_{0}} \; \Bigl(\frac{\partial}{\partial f_{1}} + \frac{\partial}{\partial f_{2}} +\frac{\partial}{\partial f_{3}}\Bigr) \; S(f_{1}, f_{2}, f_{3}) \\ \nonumber \\ \label{99} &=& \int_{-\infty}^{+\infty} \frac{d\xi}{\sqrt{2\pi}} \exp\Bigl\{-\frac{1}{2}\xi^{2}\Bigr\} \Biggl[ 1 - \theta\bigl(f_{01} - f_{1} - \gamma x \xi\bigr) \; \theta\bigl(f_{02} - f_{2} - \gamma (x-\epsilon) \xi\bigr) \; \theta\bigl(f_{03} - f_{3} - \gamma \epsilon \xi\bigr) \Biggr] \end{eqnarray} \section{Two-velocity probability density function} In this Section, using the general result for the directed polymers three-point free energy distribution function, eqs.(\ref{97})-(\ref{99}), we are going to derive the two-velocity probability density function $P_{x}(v, v')$ of the corresponding randomly forced Burgers problem. According to the discussion of Section II, eqs.(\ref{34})-(\ref{36}), \begin{equation} \label{100} P_{x}(v, v') \; = \; \lim_{\epsilon\to 0} \Biggl[\epsilon^{2} \int_{-\infty}^{+\infty} df_{2} P_{x,\epsilon}\bigl(f_{2}-\epsilon v,\, f_{2},\, -\epsilon v'\bigr) \Biggr] \end{equation} where $v$ and $v'$ are two velocities at two spatial points separated by the distance $x$.
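The relation between $G$ and $S$ in the first line of eq.(\ref{99}) can be checked numerically: performing the $y$-integral in eq.(\ref{98}) first, $S$ reduces to a Gaussian average of $\max\bigl(0, \max_i a_i(\xi)\bigr)$ with $a_i = f_i - f_{0i} + ({\rm slope})_i\,\xi$, while the integrand of $G$ is the indicator of $\max_i a_i > 0$. A sketch with illustrative (assumed) parameter values:

```python
import numpy as np
from scipy.integrate import trapezoid

# Check of eq.(99): G = (T*/zeta0)*(d/df1 + d/df2 + d/df3) S, with S from eq.(98).
gamma, x, eps = 1.0, 1.5, 0.4                 # illustrative values
f0 = (0.3, 0.1, -0.2)                         # f_01, f_02, f_03 (illustrative)
slopes = (gamma*x, gamma*(x - eps), gamma*eps)
prefac = 1.0                                  # zeta0/T*, set to 1 for the check

xi = np.linspace(-10.0, 10.0, 200001)
phi = np.exp(-0.5*xi*xi)/np.sqrt(2.0*np.pi)   # standard normal density

def amax(f):
    return np.maximum.reduce([f[i] - f0[i] + slopes[i]*xi for i in range(3)])

def S(f):
    # eq.(98) after the y-integral: Gaussian average of max(0, max_i a_i)
    return prefac*trapezoid(phi*np.maximum(0.0, amax(f)), xi)

def G(f):
    # eq.(99): Gaussian probability that max_i a_i > 0
    return trapezoid(phi*(amax(f) > 0), xi)

f, h = (0.2, -0.1, 0.3), 1e-5
dS = sum((S(tuple(f[j] + h*(j == i) for j in range(3))) - S(f))/h for i in range(3))
assert abs(dS/prefac - G(f)) < 1e-3
```

The finite-difference sum of the three partial derivatives of $S$ reproduces $G$ at a generic point.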
Explicitly, the expression for the function $P_{x,\epsilon}(f_{1}, f_{2}, f_{3})$, eq.(\ref{97}), reads \begin{eqnarray} \nonumber P_{x,\epsilon}(f_{1}, f_{2}, f_{3}) &=& \bigl(1 + S\bigr)^{-1} \, G_{123}''' \; - \; \\ \nonumber \\ \nonumber &-& \bigl(1 + S\bigr)^{-2} \Bigl[S_{1}' G_{23}'' + S_{2}' G_{13}'' + S_{3}' G_{12}'' + S_{12}'' G_{3}' + S_{13}'' G_{2}' + S_{23}'' G_{1}' + S_{123}''' G \Bigr] \; + \; \\ \nonumber \\ \nonumber &+& 2 \bigl(1 + S\bigr)^{-3} \Bigl[S_{1}' S_{2}' G_{3}' + S_{1}' S_{3}' G_{2}' + S_{2}' S_{3}' G_{1}' + \bigl(S_{1}' S_{23}'' + S_{2}' S_{13}'' + S_{3}' S_{12}''\bigr) \, G \Bigr] \; - \; \\ \nonumber \\ \label{101} &-& 6 \bigl(1 + S\bigr)^{-4} \, S_{1}'S_{2}' S_{3}' \; G \end{eqnarray} where we have introduced the notations $\Phi_{i}' \, \equiv \, \frac{\partial}{\partial f_{i}} \, \Phi$ and the functions $S = S(f_{1}, f_{2}, f_{3})$ and $G = G(f_{1}, f_{2}, f_{3})$ are given in eqs.(\ref{98}) and (\ref{99}). Substituting this expression into eq.(\ref{100}) we find that the only non-zero contributions in the limit $\epsilon\to 0$ come from two terms in the r.h.s. 
of eq.(\ref{101}): $\bigl(1 + S\bigr)^{-1} \, G_{123}'''$ and $-\bigl(1 + S\bigr)^{-2} S_{12}'' G_{3}'$ (both of which $\propto 1/\epsilon^{2}$) where (see Appendix A), \begin{eqnarray} \label{102} G_{123}'''\Big|_{\epsilon\to 0} &=& \frac{x}{\epsilon^{2} \gamma \sqrt{2\pi}} \exp\Bigl\{-\frac{1}{2\gamma^{2}} (v')^{2}\Bigr\} \delta\bigl(f_{2} - f_{0} + x \, v'\bigr) \; \delta\bigl(x\, v' - x \, v - 2f_{0}\bigr) \\ \nonumber \\ \label{103} G_{3}'\Big|_{\epsilon\to 0} &=& \frac{1}{\epsilon \gamma \sqrt{2\pi}} \exp\Bigl\{-\frac{1}{2\gamma^{2}} (v')^{2}\Bigr\} \theta\bigl(f_{0} - f_{2} - x \, v'\bigr) \\ \nonumber \\ \label{104} S_{12}''\Big|_{\epsilon\to 0} &=& -\frac{\zeta_{0}}{\epsilon \gamma T_{*} \sqrt{2\pi}} \exp\Bigl\{-\frac{1}{2\gamma^{2} x^{2}} \bigl(x\, v + 2f_{0})^{2}\Bigr\} \theta\bigl(x\, v + f_{0} + f_{2}\bigr) \end{eqnarray} Here, according to eqs.(\ref{71}), (\ref{62}), (\ref{56}) and (\ref{48}), \begin{equation} \label{105} f_{0} \; \equiv \; f_{01} \; = \; \frac{1}{2} (\beta m) \gamma^{2} x^{2} \; = \; \frac{1}{2} \sqrt{\zeta_{0}} \; T_{*}\cdot \frac{x^{2}}{R^{2}} \end{equation} \begin{equation} \label{106} \gamma \; = \; \zeta_{0}^{-1/4} \frac{T_{*}}{R} \end{equation} and $T_{*}$ given in eq.(\ref{42}). 
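The quadruple-derivative expansion in eq.(\ref{101}) is a lengthy but straightforward application of the product and chain rules; as a sketch, it can be verified symbolically with $S$ and $G$ as abstract functions:

```python
import sympy as sp

# Symbolic check of eq.(101): d^3/(df1 df2 df3) [ G/(1+S) ] for abstract S, G.
f1, f2, f3 = sp.symbols('f1 f2 f3')
S = sp.Function('S')(f1, f2, f3)
G = sp.Function('G')(f1, f2, f3)

d = sp.diff
lhs = d(G/(1 + S), f1, f2, f3)

rhs = (d(G, f1, f2, f3)/(1 + S)
       - (d(S, f1)*d(G, f2, f3) + d(S, f2)*d(G, f1, f3) + d(S, f3)*d(G, f1, f2)
          + d(S, f1, f2)*d(G, f3) + d(S, f1, f3)*d(G, f2) + d(S, f2, f3)*d(G, f1)
          + d(S, f1, f2, f3)*G)/(1 + S)**2
       + 2*(d(S, f1)*d(S, f2)*d(G, f3) + d(S, f1)*d(S, f3)*d(G, f2)
            + d(S, f2)*d(S, f3)*d(G, f1)
            + (d(S, f1)*d(S, f2, f3) + d(S, f2)*d(S, f1, f3)
               + d(S, f3)*d(S, f1, f2))*G)/(1 + S)**3
       - 6*d(S, f1)*d(S, f2)*d(S, f3)*G/(1 + S)**4)

assert sp.simplify(lhs - rhs) == 0
```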
Using the explicit expression for $S(f_{1}, f_{2}, f_{3})$, eq.(\ref{98}), one finds \begin{equation} \label{107} \lim_{\epsilon\to 0} S\bigl(f_{2}-\epsilon v , f_{2}, -\epsilon v'\bigr) \; = \; \frac{\zeta_{0}}{\gamma T_{*} x} \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \, \xi \; \exp\Bigl\{-\frac{1}{2\gamma^{2} x^{2}} \bigl(\xi + f_{0}- f_{2})^{2}\Bigr\} \end{equation} Substituting eqs.(\ref{101})-(\ref{107}) into eq.(\ref{100}) we get \begin{eqnarray} \nonumber P_{x}(v, v') &=& \int_{-\infty}^{+\infty} d f_{2} \Biggl\{ \frac{x}{\gamma\sqrt{2\pi}} \; \frac{\exp\Bigl\{-\frac{1}{2\gamma^{2}} (v')^{2}\Bigr\} \delta\bigl(f_{2} - f_{0} + x \, v'\bigr) \; \delta\bigl(x\, v' - x \, v - 2f_{0}\bigr)}{ 1 + \frac{\zeta_{0}}{\gamma T_{*} x} \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \, \xi \; \exp\Bigl\{-\frac{1}{2\gamma^{2} x^{2}} \bigl(\xi + f_{0}- f_{2})^{2}\Bigr\}} \; + \; \\ \nonumber \\ \nonumber \\ \label{108} &+& \frac{\zeta_{0}}{2\pi \gamma^{2} T_{*}} \; \frac{\exp\Bigl\{-\frac{1}{2\gamma^{2} x^{2}} \bigl(x\, v + 2f_{0})^{2} -\frac{1}{2\gamma^{2}} (v')^{2} \Bigr\} \theta\bigl(x\, v + f_{0} + f_{2}\bigr) \theta\bigl(f_{0} - f_{2} - x \, v'\bigr) }{ \Bigl[1 + \frac{\zeta_{0}}{\gamma T_{*} x} \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \, \xi \; \exp\Bigl\{-\frac{1}{2\gamma^{2} x^{2}} \bigl(\xi + f_{0}- f_{2})^{2}\Bigr\}\Bigr]^{2}} \Biggr\} \end{eqnarray} Introducing the notation (cf. eq.(\ref{9a})) \begin{equation} \label{109} v_{0} \; = \; \zeta_{0}^{3/4}\gamma \; = \; \sqrt{\zeta_{0}}\, \frac{T_{*}}{R} \; = \; \Bigl(\frac{\zeta_{0}^{3}}{2\pi}\Bigr)^{1/6} \Bigl(\frac{u}{R^{2}}\Bigr)^{1/3} \end{equation} and changing the integration variables, $f_{2} \to f_{0} - x v_{0} \eta$, $\xi \to \gamma \, x \, \xi$ we eventually get the following result for the joint probability density function of two velocities at the distance $x$: \begin{equation} \label{110} P_{x}(v, v') \; = \; p_{0}\bigl(v, \, x\bigr) \, \delta\Bigl(v' - v - v_{0}\frac{x}{R}\Bigr) \; + \; {\cal P}_{x}\bigl(v,
\, v'\bigr)\, \theta\Bigl(v + v_{0}\frac{x}{R} - v'\Bigr) \end{equation} where \begin{equation} \label{111} p_{0}(v, x) \; = \; \frac{\zeta_{0}^{3/4}}{v_{0} \sqrt{2\pi}} \; \frac{\exp\Bigl\{-\frac{1}{2}\zeta_{0}^{3/2} \Bigl(\frac{v}{v_{0}} + \frac{x}{R}\Bigr)^{2}\Bigr\}}{ \Bigl[ 1 + \zeta_{0}^{3/4}\frac{x}{R} \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \; \xi \; \exp\Bigl\{ -\frac{1}{2} \Bigl[\xi + \zeta_{0}^{3/4}\Bigl(\frac{v}{v_{0}} + \frac{x}{R}\Bigr)\Bigr]^{2} \Bigr\} \Bigr] } \end{equation} and \begin{equation} \label{112} {\cal P}_{x}(v, v') \; = \; \frac{\zeta_{0}^{3} \, x}{2\pi \, v_{0}^{2} \, R} \int_{v'/v_{0}}^{v/v_{0} + x/R}d\eta \, \frac{\exp\Bigl\{ -\frac{1}{2}\zeta_{0}^{3/2} \Bigl[ \Bigl(\frac{v}{v_{0}} + \frac{x}{R}\Bigr)^{2} +\Bigl(\frac{v'}{v_{0}}\Bigr)^{2}\Bigr] \Bigr\}}{ \Bigl[ 1 + \zeta_{0}^{3/4}\frac{x}{R} \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \; \xi \; \exp\Bigl\{-\frac{1}{2} \bigl(\xi + \zeta_{0}^{3/4}\eta \bigr)^{2} \Bigr\} \Bigr]^{2} } \end{equation} \section{Probability distribution function of the velocity difference} Using the joint distribution function $P_{x}(v, \, v')$ of two velocities $v$ and $v'$ at distance $x$ derived above, eqs.(\ref{110})-(\ref{112}), the probability density function of the velocity difference $w = v' - v$ can be obtained as follows \begin{equation} \label{113} P_{x}(w) \; = \; \int_{-\infty}^{+\infty} dv \; P_{x}(v,\; v + w) \end{equation} Substituting here eqs.(\ref{110})-(\ref{112}) we get \begin{equation} \label{114} P_{x}(w) \; = \; p_{0}(x) \, \delta\Bigl(w - v_{0}\frac{x}{R}\Bigr) \; + \; {\cal P}_{x}(w)\, \theta\Bigl(v_{0}\frac{x}{R} - w\Bigr) \end{equation} where \begin{equation} \label{115} p_{0}(x) \; = \; \frac{\zeta_{0}^{3/4}}{v_{0} \sqrt{2\pi}} \; \int_{-\infty}^{+\infty} dv \frac{\exp\Bigl\{-\frac{1}{2}\zeta_{0}^{3/2} \Bigl(\frac{v}{v_{0}} + \frac{x}{R}\Bigr)^{2}\Bigr\}}{ \Bigl[ 1 + \zeta_{0}^{3/4}\frac{x}{R} \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \; \xi \; \exp\Bigl\{ -\frac{1}{2} 
\Bigl[\xi + \zeta_{0}^{3/4}\Bigl(\frac{v}{v_{0}} + \frac{x}{R}\Bigr)\Bigr]^{2} \Bigr\} \Bigr] } \end{equation} and \begin{equation} \label{116} {\cal P}_{x}(w) \; = \; \frac{\zeta_{0}^{3} \, x}{2\pi \, v_{0}^{2} \, R} \; \int_{-\infty}^{+\infty} dv \int_{(v+w)/v_{0}}^{v/v_{0} + x/R}d\eta \, \frac{\exp\Bigl\{ -\frac{1}{2}\zeta_{0}^{3/2} \Bigl[ \Bigl(\frac{v}{v_{0}} + \frac{x}{R}\Bigr)^{2} +\Bigl(\frac{v + w}{v_{0}}\Bigr)^{2}\Bigr] \Bigr\}}{ \Bigl[ 1 + \zeta_{0}^{3/4}\frac{x}{R} \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \; \xi \; \exp\Bigl\{-\frac{1}{2} \bigl(\xi + \zeta_{0}^{3/4}\eta \bigr)^{2} \Bigr\} \Bigr]^{2} } \end{equation} Changing the integration variables: $v = -\frac{x}{R} v_{0} + \zeta_{0}^{-3/4} v_{0} \, s$ and $\eta = (v+w)/v_{0} + \zeta_{0}^{-3/4} z$, and introducing the rescaled (dimensionless) distance \begin{equation} \label{117} r \; \equiv \; \zeta_{0}^{3/4} \frac{x}{R} \end{equation} and the rescaled (dimensionless) velocity difference \begin{equation} \label{118} \omega \; \equiv \; \zeta_{0}^{3/4} \frac{w}{v_{0}} \; = \; \zeta_{0}^{3/4} \frac{(v' - v)}{v_{0}} \end{equation} for the corresponding probability density function $P_{r}(\omega)$ we obtain the following final result: \begin{equation} \label{119} P_{r}(\omega) \; = \; p_{0}(r) \, \delta\bigl(\omega - r\bigr) \; + \; {\cal P}_{r}(\omega)\, \theta\bigl(r - \omega\bigr) \end{equation} where \begin{equation} \label{120} p_{0}(r) \; = \; \int_{-\infty}^{+\infty} \frac{ds}{\sqrt{2\pi}} \frac{\exp\Bigl\{-\frac{1}{2}s^{2}\Bigr\}}{ \Bigl[1 + r \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \; \xi \; \exp\Bigl\{-\frac{1}{2} \bigl(\xi + s\bigr)^{2}\Bigr\} \Bigr]} \end{equation} and \begin{equation} \label{121} {\cal P}_{r}(\omega) \; = \; r\, \int_{-\infty}^{+\infty} \frac{ds}{\sqrt{2\pi}} \int_{0}^{r-\omega} \frac{dz}{\sqrt{2\pi}} \, \frac{\exp\Bigl\{ -\frac{1}{2} s^{2} - \frac{1}{2} \bigl(s + \omega -r\bigr)^{2}\Bigr\}}{ \Bigl[ 1 + r \int_{0}^{\infty} \frac{d\xi}{\sqrt{2\pi}} \; \xi \;
\exp\Bigl\{-\frac{1}{2} \bigl(\xi + z + s + \omega -r\bigr)^{2} \Bigr\} \Bigr]^{2} } \end{equation} It is evident that this function is positive, and it can be easily checked that for any value of $r$ it is normalized: \begin{equation} \label{122} \int_{-\infty}^{+\infty} d\omega \, P_{r}(\omega) \, = \, 1 \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=13.0cm]{Figure4.eps} \caption[]{Probability density function $P_{r}(\omega)$, eqs.(\ref{119})-(\ref{121}) for: (a) $r = 0.1$; (b) $r=1$; (c) $r=3$. The vertical lines at $\omega = r$ represent the $\delta$-functions, and the difference in the thickness of these lines symbolizes the relative values of the corresponding weights $p_{0}(r)$, eq.(\ref{120}), which decrease with increasing $r$.} \end{center} \label{figure4} \end{figure} We see that the distribution function $P_{r}(\omega)$ has a rather specific structure (see Fig.4). According to eq.(\ref{119}), for a given (rescaled) distance $r$ the possible values of the (rescaled) velocity difference $\omega$ are bounded from above: $\omega \leq r$, or in terms of the original values, $(v'-v) \leq \frac{x}{R} v_{0}$. In other words, at a given distance $x$ between the two points at which we measure the two velocities $v$ and $v'$, their difference cannot be larger than $v_{0} \, x/R$, where, according to eq.(\ref{109}), $v_{0} \propto \bigl(u/R^{2}\bigr)^{1/3}$ is the typical flow velocity at the injection scale $R$ of the random force of the strength $u$. Moreover, at $\omega = r$ (or at $(v'-v) = \frac{x}{R} v_{0}$) the distribution function exhibits the $\delta$-function singularity. Let us investigate the statistical properties of the velocity difference at small distances, $x \ll R$, or $r \ll 1$.
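The normalization (\ref{122}) can be confirmed numerically (a sketch: the inner $\xi$-integral has the closed form $\int_0^\infty \frac{d\xi}{\sqrt{2\pi}}\,\xi\, e^{-(\xi+a)^2/2} = \phi(a) - a\,Q(a)$, with $\phi$ the standard normal density and $Q(a) = \frac{1}{2}\,{\rm erfc}(a/\sqrt{2})$; the value $r = 1$ and the integration cutoffs are illustrative choices):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid
from scipy.special import erfc

r = 1.0

def phi(a):
    return np.exp(-0.5*a*a)/np.sqrt(2.0*np.pi)

def I(a):
    # closed form of the inner xi-integral in eqs.(120)-(121)
    return phi(a) - a*0.5*erfc(a/np.sqrt(2.0))

# F(a): antiderivative of [1 + r*I(a)]^(-2), used for the z-integral in (121)
a = np.linspace(-25.0, 25.0, 20001)
F = cumulative_trapezoid((1.0 + r*I(a))**(-2), a, initial=0.0)

s = np.linspace(-8.0, 8.0, 801)
u = np.linspace(-12.0, 0.0, 1201)            # u = omega - r <= 0
S, U = np.meshgrid(s, u, indexing='ij')
# continuous part (121): r * phi(s) * phi(s+u) * int_0^{-u} dz [1+r*I(z+s+u)]^(-2)
P_cont = r*phi(S)*phi(S + U)*(np.interp(S, a, F) - np.interp(S + U, a, F))
weight_cont = trapezoid(trapezoid(P_cont, s, axis=0), u)
p0 = trapezoid(phi(s)/(1.0 + r*I(s)), s)     # delta-function weight, eq.(120)

assert abs(p0 + weight_cont - 1.0) < 5e-3
```

The delta-function weight $p_0(r)$ plus the weight of the continuous part adds up to unity within the numerical accuracy.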
In the limit of small values of the parameter $r$, the probability density function $P_{r}(\omega)$, eqs.(\ref{119})-(\ref{121}), takes a much simpler form: \begin{equation} \label{123} P_{r}(\omega) \; \simeq \; \Bigl(1 - \frac{1}{\sqrt{\pi}} \, r\Bigr) \, \delta\bigl(\omega - r\bigr) \; + \; \frac{1}{2\sqrt{\pi}} \, r \, \bigl(r - \omega\bigr) \, \exp\Bigl\{-\frac{1}{4} (r - \omega)^{2}\Bigr\} \, \theta\bigl(r - \omega\bigr) \end{equation} For the even moments of the velocity difference $\langle \omega^{2n}\rangle$ we find: \begin{equation} \label{124} \langle \omega^{2n}\rangle \; = \; \int_{-\infty}^{+\infty} d\omega \; \omega^{2n} \; P_{r}(\omega) \; \simeq \; r^{2n} \; + \; C(n) \, r \end{equation} where $C(n) \; = \; \frac{2^{2n}}{\sqrt{\pi}} \, \Gamma(1+n)$. Then, the analytic continuation of the above result to arbitrary real values of the parameter $2n \to q$, in the limit $r \ll 1$, yields: \begin{equation} \label{124a} \langle \omega^{q}\rangle \; \simeq \; r^{q} \; + \; C(q/2) \, r \; \simeq \; \left\{ \begin{array}{ll} r^{q} \; , \; \; \mbox{for} \; q \; \leq \; 1 \, ; \\ \\ C(q/2) \, r \; , \; \; \mbox{for} \; q \; > \; 1 \, . \end{array} \right. \end{equation} Finally, introducing the exponent $\zeta(q)$ according to the definition $\langle \omega^{q}\rangle = r^{\zeta(q)}$ we recover the typical strong intermittency behavior \cite{Bouch-Mez-Par} (see Fig.1): \begin{equation} \label{125} \zeta(q) \; \simeq \; \left\{ \begin{array}{ll} q \; , \; \; \mbox{for} \; q \; \leq \; 1 \, ; \\ \\ 1 \; , \; \; \mbox{for} \; q \; > \; 1 \, . \end{array} \right. \end{equation} \section{Conclusions} In this paper we studied the statistical properties of the velocity field $v(x,t)$ in the one-dimensional randomly forced Burgers turbulence, eq.(\ref{1}).
This system is known to be equivalent to the model of directed polymers in a random potential, eqs.(\ref{4})-(\ref{7}), such that the viscosity parameter $\nu$ in the Burgers equation is proportional to the temperature in the directed polymer system, $\nu = \frac{1}{2} T$, and the velocity $v(x,t)$ in the Burgers equation is the negative spatial derivative of the free energy $F(x,t)$ of the directed polymers. The parameter which characterizes the level of turbulence of the velocity field in the Burgers problem is the Reynolds number $Re$, which in terms of the directed polymer notation is expressed as $Re = 2\bigl(u R\bigr)^{1/3}/T$, where $R$ and $u$ are the correlation length and the strength of the random potential, eqs.(\ref{5})-(\ref{6}). Thus the strong turbulence regime where $Re \to \infty$ corresponds to the zero-temperature limit in the directed polymer system. In this limit, in terms of the replica technique, a general expression for the joint distribution function of two velocities $v(-x/2,t)$ and $v(x/2,t)$ separated by a finite distance $x$ has been derived, eqs.(\ref{110})-(\ref{112}). Besides, we have obtained an explicit expression for the probability density function of the corresponding velocity increment $w = v(-x/2,t)-v(x/2,t)$, eqs.(\ref{119})-(\ref{121}), which was shown to exhibit a rather specific structure. Namely, for any given distance $x$ the values of the velocity increment $w$ are bounded from above: $w \leq \frac{x}{R} v_{0}$, where $v_{0} \propto \bigl(u/R^{2}\bigr)^{1/3}$ is the typical flow velocity at the injection scale $R$ of the random potential. Moreover, at $w = \frac{x}{R} v_{0}$ the distribution function exhibits the $\delta$-function singularity, which means that at a given distance $x$ the difference of two velocities $w = v(-x/2,t)-v(x/2,t)$ has a {\it finite} probability to be equal to $\frac{x}{R} v_{0}$.
Using this distribution function at length scales much smaller than the injection length of the random potential, $x \ll R$, we have computed the moments of the velocity increment $\langle \omega^{q}\rangle$, eq.(\ref{124a}). Introducing the exponent $\zeta(q)$ according to the definition $\langle \omega^{q}\rangle \; \simeq \; r^{\zeta(q)}$ we have demonstrated that the function $\zeta(q)$ exhibits the behavior typical for strong intermittency phenomena, eq.(\ref{125}), Fig.1. Finally, a few remarks about the status of the obtained results. First of all, as the considerations have been performed in the framework of the heuristic replica method, the proposed derivation can not be considered rigorous. Moreover, at the moment it is also difficult to say whether the obtained results are exact or not: on one hand, no approximations have been used in the performed calculations, but on the other hand, the considered derivation is based on the unproved crucial assumption about the vector replica symmetry breaking structure of the $N$-particle bosonic wave function in the zero-temperature limit which, in particular, contains the undefined numerical factor $\zeta_{0}$, eq.(\ref{56}) (\cite{zero-T}, Section III). All that means that a further, more systematic study of the considered problem is required. \acknowledgments I am grateful to Kostya Khanin for numerous useful discussions. I would like to thank the mathematical research institute MATRIX in Australia where part of this research was performed.
\section{Introduction} \label{sec:intro} Supersymmetry (SUSY) is the most cherished and best studied vision of physics beyond the Standard Model (SM). SUSY tames the quadratic divergences that destabilize the electroweak (EW) scale, and results in a host of new particles which should be discovered in the near future if the SUSY vision of particle physics should prove correct. However, LEP-II has left the minimal supersymmetric standard model (MSSM) in an interesting situation \cite{Abbaneo:2001ix}. The minimal model predicts a light Higgs whose tree-level mass is at most $M_Z$, in contradiction with the LEP-II limit of $M_h^{(SM)} \geq 115$ GeV. In order to survive the LEP limit, one must either invoke very large radiative corrections from the top sector \cite{Carena:1995wu}, CP violation chosen in a very particular way \cite{Carena:2002bb}, or abandon the minimal model in favor of more ingredients \cite{Ellis:1988er,Batra:2003nj,Batra:2004vc,Harnik:2003rs,Chang:2004db,Casas:2003jx,Maloney:2004rc}. The invocation of large radiative corrections is particularly troublesome, because this tends to introduce unacceptably large corrections to the EW scale, recreating a ``little hierarchy problem''. While there is some uncertainty in the estimates for the lightest CP even Higgs mass originating in the uncertainty in measured top mass, it appears that the MSSM requires fine-tuning at the level of a few per cent if it is to be consistent with LEP data, and is uncomfortably fine-tuned. This is the ``Supersymmetric Little Hierarchy Problem''. The Fat Higgs (FH) \cite{Harnik:2003rs} is a particular, interesting solution to this dilemma. It proposes an alternative to the standard MSSM picture of electroweak symmetry breaking (EWSB) and results in a heavier ``light'' CP-even Higgs than can be realized in that standard scenario, thus naturally evading the LEP-II bounds. 
It originates from an $s$-confining theory, in which a number of fundamental preons charged under a strong $SU(2)$ form Higgs bosons as composites. A variation \cite{Chang:2004db} has a composite singlet from an $s$-confining $SU(4)$ theory, but the EWSB Higgses are fundamentals. Both theories have interesting distinctive SUSY Higgs phenomenology \cite{Batra:2004vc,Harnik:2003rs}, largely due to the fact that the Higgs quartic interaction may be much larger than is suggested by perturbative unification \cite{Haber:1986gz}. Both of these FH theories are challenged in producing large Yukawa interactions. The original FH must generate fermion masses through Yukawa interactions which couple the composite $H$ and $\overline{H}$ to the fundamental quarks and leptons. At the level of the preons, this is a non-renormalizable super-potential coupling, which the original FH generates from renormalizable interactions by integrating out a pair of Higgs-like fields uncharged under the strong $SU(2)$ (see Figure~\ref{fig:yukawa}). The resulting Yukawas thus depend on fundamental parameters as, \bea y_{eff} & \sim & \frac{y y^\prime}{4 \pi} \frac{\Lambda}{M_{H}} \eea in which $y$, $y^\prime$ are Yukawas between the preons and/or fundamental fermion superfields (at the compositeness scale $\Lambda$), $4 \pi$ is the naive dimensional analysis (NDA) counting \cite{Cohen:1997rt} for the coupling of a composite to fundamental fields, $\Lambda$ is the scale of $s$-confinement of the strong $SU(2)$ and $M_{H}$ is the (supersymmetric) mass of the Higgs-like fields. For the light fermions, this is not problematic. Small fermion masses are easily realized. 
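To make the tension quantitative, a sketch of the NDA estimate above (the numbers are illustrative assumptions, not values from the text):

```python
import math

# Illustration of y_eff ~ (y*y'/(4*pi)) * (Lambda/M_H):
# obtaining y_eff ~ 1 with Lambda ~ M_H requires the underlying Yukawas
# to compensate the 4*pi NDA suppression.
def y_eff(y, yp, lam_over_MH):
    return y*yp/(4.0*math.pi)*lam_over_MH

# Perturbative underlying couplings give a far-too-small top Yukawa:
assert y_eff(1.0, 1.0, 1.0) < 0.1

# y ~ y' ~ sqrt(4*pi) ~ 3.5 (moderately strong) is needed for y_eff ~ 1:
y_needed = math.sqrt(4.0*math.pi)
assert abs(y_eff(y_needed, y_needed, 1.0) - 1.0) < 1e-12
```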
For the top quark, producing a Yukawa coupling of order one requires tuning the scales $\Lambda$ and $M_{H}$ to be close to one another (which is somewhat counter-intuitive since they are in principle unrelated to one another, though it was argued in \cite{Harnik:2003rs} that the coincidence of scales could arise from a flavor symmetry) and that the underlying $y$ and $y^\prime$ be large at $\Lambda$ to compensate for the $4 \pi$. This last fact is also potentially a source of fine-tuning. The strong $SU(2)$ tends to renormalize $y$ and $y^\prime$ to strong coupling at low energies. This is helpful in that it compensates the suppression, but dangerous because a large super-potential coupling may ruin the conformal regime of the theory above $\Lambda$. While it is possible that interesting (and phenomenologically viable) low energy dynamics would emerge in this case, the additional strong $y$ (and/or in generalizations $y^\prime$) couplings potentially disrupt the low energy $s$-confinement solution, and make it difficult to draw firm conclusions about the low energy physics. One is thus forced to assume that $y$ and $y^\prime$ become moderately strong, but do not quite reach truly strong coupling before the $s$-confinement scale. Another way to consider the tension is to note\footnote{We are indebted to Kaustubh Agashe for discussions on this point.} that one must tune the original $y$ and $y^\prime$ to some very particular values in the UV such that they become large enough (but not too large) at $\Lambda$. The ``New Fat Higgs'' \cite{Chang:2004db} avoids this issue for the top Yukawa, because in that case the EW Higgses and the quarks are fundamental. Thus, the strong $SU(4)$ does not effectively drive that interaction strong at low energies.
However, it recreates the problem for the Higgs quartic itself, because now the quartic links the composite EW singlet $S$ to the fundamental EW Higgses $H$ and $\ov{H}$, and thus feels the same sort of tension when one tries to obtain a large Higgs quartic. \FIGURE[t]{\includegraphics[width=3.5in]{fig1.eps} \label{fig:yukawa}\caption{Example graph for how the top Yukawa coupling is generated in the Fat Higgs model by integrating out a pair of Higgs-like superfields ($\ov{H}^\prime$, $H^\prime$) to generate a non-renormalizable interaction between preons ($P_1$ and $P_2$) bound into a composite Higgs $H$. }} In this article, we explore a new incarnation of the Fat Higgs. Our theory is an $SU(3)_s$ SUSY gauge theory which $s$-confines, producing a composite singlet $S$ and doublets $H$ and $\ov{H}$ as in the original Fat Higgs. However, the additional preons are arranged such that they also produce a composite third generation quark doublet ($Q_3$) and up-type singlet ($t_R$). The dynamically generated super-potential contains the terms needed for FH-style EWSB, but it also includes the top Yukawa coupling\footnote{For pre-Fat Higgs SUSY models which realize the large top Yukawa coupling through $s$-confining dynamics, see \cite{Strassler:1995ia}.}. Since all fields requiring large Yukawa interactions are composite, we have removed the need for strong underlying Yukawa interactions, and thus the danger that the low energy physics could be spoiled by out-of-control non-perturbative couplings. Furthermore, while we will still need to invoke massive fields to generate the Yukawa interactions of the light fermions, there is considerably less need to fine-tune the mass of these ``spectator'' superfields ($M_{H}$) to the $s$-confinement scale $\Lambda$, and/or invoke underlying super-potential couplings which are dangerously large. In Sec.~\ref{sec:model}, we present the model and show how it gives rise to all of the required low energy structure of the MSSM. 
In Sec.~\ref{sec:uni}, we address some of the issues regarding high energy gauge coupling unification. In Sec.~\ref{sec:pheno} we discuss some of the distinctive phenomenology. And in Sec.~\ref{sec:concl} we conclude. \section{An $SU(3)$ Model} \label{sec:model} \begin{table} \centering \begin{tabular}{lccccc} & $SU(3)_s$ & $SU(3)_c$ & $SU(2)_W$ & $U(1)_Y$ & $Z_2$ \\ \hline $P_3$ & \yng(1) & \yng(1) & $\mathbf{1}$ & $0$ & $+$ \\ $P_1$ & \yng(1) & $\mathbf{1}$ & $\mathbf{1}$ & $-2/3$ & $-$ \\ $\ov{P}_2$ & $\ov{\yng(1)}$ & $\mathbf{1}$ & \yng(1) & $+1/6$ & $-$ \\ $\ov{P}_1$ & $\ov{\yng(1)}$ & $\mathbf{1}$ & $\mathbf{1}$ & $+2/3$ & $+$ \\ $\ov{P}_{\tilde 1}$ & $\ov{\yng(1)}$ & $\mathbf{1}$ & $\mathbf{1}$ & $-1/3$ & $-$ \\ \hline $P^\prime$ & \yng(1) & $\mathbf{1}$ & $\mathbf{1}$ & $+1/3$ & $-$ \\ $\ov{P}^\prime$ & $\ov{\yng(1)}$ & $\mathbf{1}$ & $\mathbf{1}$ & $-1/3$ & $-$ \\ \end{tabular} \caption{The $SU(3)_s$-charged Preons. The first set are those participating in the $s$-confining phase. The second category are integrated out, triggering $s$-confinement.} \label{tab:preons} \end{table} Our model has an extended gauge symmetry, \bea SU(3)_s \times SU(3)_c \times SU(2)_W \times U(1)_Y . \eea $SU(3)_s$ is a ``strong'' group which will be responsible for generating the MSSM Higgses, a Fat-Higgs like singlet, and top from a set of preons, and the remaining gauge groups are as in the MSSM. The particle content charged under $SU(3)_s$ consists of a set of preons listed in Table~\ref{tab:preons}. Since the matter is vector-like with respect to $SU(3)_s$, we follow the usual fashion and refer to it as a ``SUSY QCD'' theory, but this should not be confused with the usual color interaction of the MSSM, $SU(3)_c$. Note that the MSSM gauge groups are gauged sub-groups of the $SU(F) \times SU(F) \times U(1)_B$ chiral symmetries. 
The set of preons is non-anomalous (in fact, it is vector-like) with respect to $SU(3)_s$, and there are no mixed anomalies between $SU(3)_s$ and the MSSM gauge groups. However, the preon content by itself is anomalous under the MSSM gauge groups. This is in fact related to the point that the strong sector will eventually give rise to a composite $Q_3$, $t_R$, $\ov{H}$, $S$ and $H$, but not to $b_R$, $L_3$, or $e_3$. Thus, we introduce a set of fundamental fields uncharged under $SU(3)_s$ in Table~\ref{tab:fundamentals}. The first and second generation superfields appear as fundamental fields, as in the MSSM. Also indicated are the charges of the fields under a $Z_2$ ``$R$-parity'' which plays the same role in suppressing dangerous renormalizable baryon- and lepton-number violating processes as it does in the MSSM. The assignment of preon hypercharges is not completely determined by requiring the correct hypercharges for the composites, and the particular choice we make is based partly on aesthetics (requiring that all exotic colored particles have charges $\pm 1/3$ or $\pm 2/3$ and all exotic uncolored particles have charges $\pm 1$ or zero), and partly motivated by gauge coupling unification as we shall see below. Many fundamental Yukawa interactions can be formed out of these fields. To preserve readability, we discuss these in groups in the subsections below.
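These anomaly statements can be verified by simple bookkeeping. The fragment below (our own illustrative check; the labels and data layout are ours, with the preon data taken from Table~\ref{tab:preons}) confirms that the $SU(3)_s^3$ anomaly and the mixed $SU(3)_s^2$-$U(1)_Y$ anomaly both vanish:

```python
from fractions import Fraction as Fr

# Preons of the preon table, as (hypercharge, multiplicity under the
# other gauge groups, SU(3)_s rep: +1 fundamental, -1 antifundamental).
preons = [
    (Fr(0), 3, +1),      # P_3 (an SU(3)_c triplet: three flavors)
    (Fr(-2, 3), 1, +1),  # P_1
    (Fr(1, 6), 2, -1),   # Pbar_2 (an SU(2)_W doublet)
    (Fr(2, 3), 1, -1),   # Pbar_1
    (Fr(-1, 3), 1, -1),  # Pbar_1tilde
    (Fr(1, 3), 1, +1),   # P'
    (Fr(-1, 3), 1, -1),  # Pbar'
]

# SU(3)_s^3 anomaly: number of fundamentals minus antifundamentals.
su3s_cubed = sum(rep * mult for _, mult, rep in preons)

# Mixed SU(3)_s^2 - U(1)_Y anomaly: sum of Y over all SU(3)_s flavors
# (the Dynkin index is the same for a fundamental and antifundamental).
su3s_sq_Y = sum(y * mult for y, mult, _ in preons)
```

Both sums vanish, confirming that the strong sector is vector-like and free of mixed anomalies with hypercharge.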
\begin{table}[t] \centering \begin{tabular}{lccccc} & $SU(3)_s$ & $SU(3)_c$ & $SU(2)_W$ & $U(1)_Y$ & $Z_2$ \\ \hline $L_i$ & $\mathbf{1}$ & $\mathbf{1}$ & {\yng(1)} & $-1/2$ & $-$ \\ $e_i$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $+1$ & $-$ \\ $Q_{1,2}$ & $\mathbf{1}$ & $\yng(1)$ & $\yng(1)$ & $+1/6$ & $-$ \\ $d_i$ & $\mathbf{1}$ & {$\ov{\yng(1)}$} & $\mathbf{1}$ & $+1/3$ & $-$ \\ $u_{1,2}$ & $\mathbf{1}$ & {$\ov{\yng(1)}$} & $\mathbf{1}$ & $-2/3$ & $-$ \\ $\ov{q}_1$ & $\mathbf{1}$ & {$\ov{\yng(1)}$} & $\mathbf{1}$ & $-2/3$ & $+$ \\ $\ov{q}_2$ & $\mathbf{1}$ & {$\ov{\yng(1)}$} & $\mathbf{1}$ & $+1/3$ & $-$ \\ \hline $H^\prime$ & $\mathbf{1}$ & $\mathbf{1}$ & \yng(1) & $+1/2$ & $+$ \\ $\ov{H}^\prime$ & $\mathbf{1}$ & $\mathbf{1}$ & \yng(1) & $-1/2$ & $+$ \\ \end{tabular} \caption{Additional fundamental fields for the $SU(3)$ model. The index $i=1,2,3$ denotes the usual generation number.} \label{tab:fundamentals} \end{table} This theory is SUSY $SU(3)$ QCD with $5$ flavors, which is inside the conformal window \cite{Terning:2003th}. From any value of the $SU(3)_s$ gauge coupling at very high scales, the theory flows (assuming, as we do throughout, that the fundamental Yukawa interactions are not strong enough to disrupt the approximate scale invariance) at lower scales to the fixed point, \bea g_*^2 & \simeq & \frac{4 \pi^2}{3} ~. \eea We include a super-potential mass for $P^\prime$ (and for the uncolored $H^\prime$), \bea W_m = M_P \ov{P}^\prime P^\prime + M_H \ov{H}^\prime H^\prime . \eea Below $M_P$, the $P^\prime$, $\ov{P}^\prime$ flavor may be integrated out and the theory loses conformality, flowing to an $s$-confining phase \cite{Csaki:1996zb}. We denote the confinement scale by $\Lambda$, and estimate from the large fixed point coupling $g_*$ that the two scales are approximately equal, \bea \Lambda & \simeq & M_P ~. \eea The scale $M_P$ must be input by hand, and determines the strong coupling scale $\Lambda$.
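The flavor counting behind this flow can be made explicit. The sketch below (illustrative bookkeeping only, not part of the model) encodes the Seiberg conformal window and the $s$-confinement condition for SUSY $SU(N)$ QCD, together with the flavor count above and below $M_P$:

```python
# Flavor bookkeeping for the strong SU(3)_s sector.
# SUSY SU(N) QCD is in the conformal window for 3N/2 < F < 3N,
# and s-confines at F = N + 1 flavors.
N = 3

# Above M_P: P_3 contributes three flavors (it is an SU(3)_c triplet),
# plus P_1 and the massive P', giving F = 5.
F_above_MP = 3 + 1 + 1
# Below M_P the (P', Pbar') flavor is integrated out.
F_below_MP = F_above_MP - 1

def in_conformal_window(F, N=3):
    """Seiberg conformal window of SUSY SU(N) QCD."""
    return 3 * N / 2 < F < 3 * N

def s_confines(F, N=3):
    """s-confinement: confinement without chiral symmetry breaking."""
    return F == N + 1
```

With all five flavors light the theory sits inside the conformal window; removing the $P^\prime$, $\ov{P}^\prime$ flavor leaves $F = N + 1 = 4$, the $s$-confining case.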
\TABLE[t]{ \begin{tabular}{lccccc} & & $SU(3)_c$ & $SU(2)_W$ & $U(1)_Y$ & $Z_2$ \\ \hline $B_1 \lra t_R$ & $P_3 P_3 P_1$ & $\ov{\yng(1)}$ & $\mathbf{1}$ & $-2/3$ & $-$ \\ $B_2 \lra S$ & $ P_3 P_3 P_3$ & $\mathbf{1}$ & $\mathbf{1}$ & $0$ & $+$ \\ $\ov{B}_1 \lra H$ & $ \ov{P}_2 \ov{P}_1 \ov{P}_{\widetilde{1}}$ & $\mathbf{1}$ & \yng(1) & $+1/2$ & $+$ \\ $\ov{B}_2 \lra \psi$ & $\ov{P}_2 \ov{P}_2 \ov{P}_1$ & $\mathbf{1}$ & $\mathbf{1}$ & $+1$ & $+$ \\ $\ov{B}_3 \lra \chi$ & $\ov{P}_2 \ov{P}_2 \ov{P}_{\widetilde{1}}$ & $\mathbf{1}$ & $\mathbf{1}$ & $0$ & $-$ \\ $M_1 \lra Q_3$ & $P_3 \ov{P}_2$ & \yng(1) & \yng(1) & $+1/6$ & $-$ \\ $M_2 \lra {q}_1$ & $P_3 \ov{P}_1$ & \yng(1) & $\mathbf{1}$ & $+2/3$ & $+$ \\ $M_3 \lra {q}_2$ & $P_3 \ov{P}_{\widetilde{1}}$ & \yng(1) & $\mathbf{1}$ & $-1/3$ & $-$ \\ $M_4 \lra \ov{H}$ & $P_1 \ov{P}_2$ & $\mathbf{1}$ & $\yng(1)$ & $-1/2$ & $+$ \\ $M_5 \lra \ov{\chi}$ & $P_1 \ov{P}_1$ & $\mathbf{1}$ & $\mathbf{1}$ & $0$ & $-$ \\ $M_6 \lra \ov{\psi}$ & $P_1 \ov{P}_{\widetilde{1}}$ & $\mathbf{1}$ & $\mathbf{1}$ & $-1$ & $+$ \end{tabular} \hspace*{5in} \label{tab:composites} \caption{Composites of the $SU(3)$ model.}} \subsection{Composites and Dynamical Super-potential} Below the confinement scale, the theory can be described by composite $SU(3)_s$-invariant mesons ($M$) and baryons ($B$, $\ov{B}$), listed in Table~\ref{tab:composites}. A dynamical super-potential is generated, of the form \bea W_{dyn} & = & \frac{1}{\Lambda^5} \left\{ \ov{B} M B - {\rm det}~M \right\} \nonumber \\ & \rightarrow & \lambda \left\{ H Q_3 t_R + H \ov{H} S + \psi {q}_2 t_R + \psi \ov{\psi} S + \chi \ov{\chi} S + \chi {q}_1 t_R - \frac{\lambda}{\Lambda} {\rm det} M \right\} , \label{eq:wdyn} \eea where in the second line we rescaled the baryons and mesons to canonically normalized superfields.
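As a cross-check of Table~\ref{tab:composites}, a composite's hypercharge is simply the sum of its constituent preons' hypercharges from Table~\ref{tab:preons}. The snippet below (pure bookkeeping; the string labels are ours) verifies several representative entries with exact fractions:

```python
from fractions import Fraction as Fr

# Preon hypercharges from the preon table ("b" in a label means "bar").
Y = {"P3": Fr(0), "P1": Fr(-2, 3),
     "P2b": Fr(1, 6), "P1b": Fr(2, 3), "P1tb": Fr(-1, 3)}

def hypercharge(*constituents):
    """Hypercharge of a composite: the sum over its constituents."""
    return sum(Y[p] for p in constituents)

# A few composites from the composite table, with expected hypercharges.
checks = {
    "t_R":  (("P3", "P3", "P1"),     Fr(-2, 3)),
    "S":    (("P3", "P3", "P3"),     Fr(0)),
    "H":    (("P2b", "P1b", "P1tb"), Fr(1, 2)),
    "psi":  (("P2b", "P2b", "P1b"),  Fr(1)),
    "Q3":   (("P3", "P2b"),          Fr(1, 6)),
    "Hbar": (("P1", "P2b"),          Fr(-1, 2)),
}
```

The remaining entries of the table follow in the same way.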
It will not be very important for our purposes, but we note for completeness that one may express the irrelevant interactions as, \bea {\rm det}~M & = & \epsilon_{ij} \epsilon_{\alpha \beta \gamma} \left( \ov{H}^i Q_3^{\alpha j} q_1^\beta q_2^\gamma + \ov{\chi} Q_3^{\alpha i} Q_3^{\beta j} q_2^\gamma + \ov{\psi} Q_3^{\alpha i} Q_3^{\beta j} q_1^\gamma \right) , \eea suppressed by the confinement scale $\Lambda$. We have also provided the naive dimensional analysis (NDA) estimate for the coupling $\lambda \sim 4\pi$ \cite{Cohen:1997rt}. Thus, this model dynamically generates the Fat Higgs sector and super-potential, along with the top Yukawa coupling and several interactions involving the exotic superfields. Note that the exotics occur in pairs in these interactions, because they arise exclusively from composites which include an odd number of $\ov{P}_1$ and $\ov{P}_{\widetilde{1}}$. We shall see below that $q_1$ and $q_2$ receive masses of order $\Lambda$. Thus, below $\Lambda$ the relevant couplings in (\ref{eq:wdyn}) are the top Yukawa $y_t$, the $S H \ov{H}$ interaction $\lambda_H$, the $S \psi \ov{\psi}$ interaction $\lambda_\psi$, and the $S \chi \ov{\chi}$ interaction $\lambda_\chi$. All of these are equal and of order $\lambda \sim 4 \pi$ at the scale $\Lambda$, but because the $q$'s decouple at that scale, and because of our having gauged subgroups of the chiral symmetries of the SUSY QCD theory, they evolve apart at lower energies. \FIGURE[t]{\includegraphics[width=5.0in]{rge.eps} \label{fig:rge}\caption{The RGE evolution from $\Lambda = 1000$ TeV to $v$ of the strong coupling $g_3$ (solid curve), top Yukawa interaction $y_t$ (dashed curve), $S H \ov{H}$ interaction $\lambda_H$ (dotted curve), and $S \psi \ov{\psi}$ and $S \chi \ov{\chi}$ interactions $\lambda_\psi$ and $\lambda_{\chi}$ (dot-dashed curve). }} In order to discuss the top mass and EWSB, these should be evolved down to energy scales of order the electroweak scale $v$.
At one loop, below $\Lambda$, the dominant renormalization effects are from $y_t$ and $\lambda_{(H,\psi,\chi)}$ themselves, and from the $SU(3)_c$ coupling $g_3$. The one-loop renormalization group equations (RGEs) are \bea \frac{dg_3}{dt} & = & -\frac{3}{16\pi^2} g_3^3 \\ \frac{dy_t}{dt} & = & \frac{y_t}{16\pi^2} \left[ 6 |y_t|^2 + |\lambda_H|^2 - \frac{16}{3} g_3^2 \right] \\ \frac{d\lambda_H}{dt} & = & \frac{\lambda_H}{16\pi^2} \left[ 3 |y_t|^2 + 4 |\lambda_H|^2 + |\lambda_\psi|^2 + |\lambda_\chi|^2\right] \\ \frac{d\lambda_\psi}{dt} & = & \frac{\lambda_\psi}{16\pi^2} \left[ 2 |\lambda_H|^2 + 3 |\lambda_\psi|^2 + |\lambda_\chi|^2 \right] \\ \frac{d\lambda_\chi}{dt} & = & \frac{\lambda_\chi}{16\pi^2} \left[ 2 |\lambda_H|^2 + 3 |\lambda_\chi|^2 + |\lambda_\psi|^2 \right] \eea where $t$ is the renormalization scale $t \equiv \log \mu_R$. Since $\lambda_\psi = \lambda_\chi$ at scale $\Lambda$, these coupling strengths will remain equal up to very small effects from the different hypercharges of $\psi$ and $\chi$. The fact that the top mass has been measured at the Tevatron \cite{Hill:2004qu} allows us to approximately fix $\Lambda$, up to the choice of $\tan \beta$. As values of $\tan \beta \sim 1$ result in the largest light CP even Higgs masses, we make this choice for which the target $y_t$ is about $\sqrt{2}$. Solving the coupled equations numerically and imposing this requirement fixes $\Lambda \sim 10^4 \times v$ (i.e. $\Lambda \sim 1000$~TeV), and predicts that $\lambda_H$ will be somewhat less than $y_t$ itself. An example is shown in figure~\ref{fig:rge}. Note that there are order one uncertainties in $\lambda(\Lambda)$, which could easily modify our estimate for $\Lambda$ by an order of magnitude\footnote{There are also order one uncertainties in the RGE evolution from higher orders close to scale $\Lambda$, where the couplings are strong, as well.}.
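To see the focusing behavior numerically, one can integrate these RGEs down from $\Lambda$ with a simple Euler step, as sketched below. The boundary values $g_3(\Lambda) \simeq 1$ and $\lambda(\Lambda) \simeq 4\pi$ are rough inputs of ours (the latter carrying the $O(1)$ NDA uncertainty just noted), so the output is qualitative only:

```python
import math

def betas(g3, yt, lH, lp, lc):
    """One-loop beta functions quoted in the text (t = log mu)."""
    k = 1.0 / (16 * math.pi**2)
    return (
        -3 * k * g3**3,
        k * yt * (6 * yt**2 + lH**2 - (16.0 / 3) * g3**2),
        k * lH * (3 * yt**2 + 4 * lH**2 + lp**2 + lc**2),
        k * lp * (2 * lH**2 + 3 * lp**2 + lc**2),
        k * lc * (2 * lH**2 + 3 * lc**2 + lp**2),
    )

def run_down(decades=4.0, g3_at_L=1.0, lam_at_L=4 * math.pi,
             steps=120_000):
    """Euler integration from Lambda down to v = Lambda / 10**decades."""
    c = [g3_at_L, lam_at_L, lam_at_L, lam_at_L, lam_at_L]
    dt = -decades * math.log(10.0) / steps
    for _ in range(steps):
        c = [x + dt * b for x, b in zip(c, betas(*c))]
    return dict(zip(("g3", "yt", "lH", "lpsi", "lchi"), c))

low = run_down()   # rough couplings at the weak scale
```

For four decades of running this yields $y_t$ of order one at the low scale, with $\lambda_H$ somewhat below it, in line with the discussion above.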
Regardless, the prediction that the Higgs quartic is approximately locked to the top Yukawa interaction is an interesting feature of the model. \subsection{Electroweak Symmetry Breaking} We include a Yukawa coupling in the fundamental theory, \bea W_S & = & -y_S \epsilon_{\alpha \beta \gamma} P_3^{\alpha} P_3^{\beta} P_3^{\gamma} \nonumber \\ & \rightarrow & -\left(\frac{y_S}{4 \pi} \Lambda^2 \right) S ~, \eea (where $\alpha$, $\beta$, and $\gamma$ are $SU(3)_c$ indices, and the $SU(3)_s$ indices are similarly contracted anti-symmetrically but not shown for clarity) which becomes a tadpole for $S$ below $\Lambda$. Combined with $W_{dyn}$, this results in the Higgs super-potential, \bea W_H & = & \lambda_H S \left( H \ov{H} - v_0^2 \right) + \lambda_\psi S \psi \ov{\psi} + \lambda_\chi S \chi \ov{\chi} \eea where $v_0^2$ has the NDA estimate (at scale $\Lambda$), \bea v_0^2 & \sim & \frac{y_S}{\lambda \left(4 \pi\right)} \Lambda^2 \sim \frac{y_S}{\left(4 \pi\right)^2} \Lambda^2 \eea thus indicating that $v_0$ is naturally at least an order of magnitude below $\Lambda$, and will be smaller if $y_S$ takes a sufficiently small value (as we will assume it does in order to appropriately generate the EW scale). Aside from the presence of the additional superfields $\psi$, $\ov{\psi}$, $\chi$, $\ov{\chi}$, this is the super-potential of the Fat Higgs, leading to electroweak symmetry breaking even in the supersymmetric limit. The scalar Higgs potential consists of the contribution from the dynamical super-potential above, the MSSM $D$-terms, and the corrections from soft SUSY breaking. There is also an effective $\mu$ term induced by integrating out $H^\prime$ and $\ov{H}^\prime$ as described below in section~\ref{sec:residual}.
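Inverting the NDA estimate above shows just how small $y_S$ must be: a weak-scale $v_0$ with $\Lambda \sim 1000$ TeV requires $y_S \sim (4\pi)^2 v_0^2/\Lambda^2 \sim 10^{-5}$. The one-liner below (our arithmetic, order-of-magnitude only) makes this explicit:

```python
import math

def y_S_required(v0_GeV, Lambda_GeV):
    """Invert the NDA estimate v0^2 ~ y_S Lambda^2 / (4 pi)^2."""
    return (4 * math.pi) ** 2 * v0_GeV**2 / Lambda_GeV**2

# e.g. v0 ~ 250 GeV with Lambda ~ 1000 TeV needs y_S ~ 1e-5
```

For example, $v_0 \simeq 250$ GeV and $\Lambda = 10^6$ GeV give $y_S \approx 10^{-5}$.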
Altogether, this leads to a scalar potential, \bea V_H & = & | \lambda_H H \ov H + \lambda_\psi \psi \ov{\psi} + \lambda_\chi \chi \ov{\chi} - v_0^2 |^2 + \lambda_H^2 \left( | S H |^2 + |S \ov H|^2 \right) \nonumber \\ & & + \lambda_\psi^2 \left( | S \psi |^2 + |S \ov \psi|^2 \right) + \lambda_\chi^2 \left( | S \chi |^2 + |S \ov \chi|^2 \right) \nonumber \\ & & + \frac{g_2^2}{8} \left( H^\dagger \vec{\tau} H + \ov{H}^\dagger \vec{\tau} \ov{H} \right)^2 + \frac{g_1^2}{2} \left( \frac{1}{2} |H|^2 - \frac{1}{2} |\ov{H}|^2 + |\psi|^2 - |\ov{\psi}|^2 \right)^2 \nonumber \\ & & + \left( m_H^2 + |\mu|^2 \right) |H|^2 + \left( m_{\ov{H}}^2 + |\mu|^2 \right) |\ov{H}|^2 + m_S^2 |S|^2 \nonumber \\ & & + m_\psi^2 |\psi|^2 + m_{\ov{\psi}}^2 |\ov{\psi}|^2 + m_\chi^2 |\chi|^2 + m_{\ov{\chi}}^2 |\ov{\chi}|^2 \nonumber \\ & & + \left\{ A_{S} \left( \lambda_H S H \ov{H} + \lambda_\psi S \psi \ov{\psi} + \lambda_\chi S \chi \ov{\chi} \right) - T_S v_0^2 S + h.c. \right\} ~~, \eea where $g_{1,2}$ are the MSSM $U(1)/SU(2)$ gauge couplings, and the $m$'s, $A_S$, and $T_S$ are soft SUSY breaking parameters. We have assumed that the $A$ terms are locked together by the underlying chiral symmetries of the SUSY QCD theory, and in the same spirit ignored other potential SUSY breaking terms such as $B \mu$-like terms involving $H \ov{H}$, $\psi \ov{\psi}$, and $\chi \ov{\chi}$. Of course, we expect that the equality of the $A$ terms is only approximate, as the RGEs will split them apart just as they do the $\lambda$ interactions, but we continue to neglect such splittings to simplify the discussion. In general, the minimization conditions are quite complicated, but we sketch a solution below. To simplify matters, we begin by considering $m_H = m_{\ov H} = m_S \equiv m$, $m_\psi = m_{\ov{\psi}} = m_\chi = m_{\ov{\chi}} \equiv M$, $A_S = T_S = 0$, and ignore the MSSM $D$-terms. We will consider deviations from these assumptions below.
Under these conditions, the potential is symmetric under $H \lra \ov{H}$ and $\psi \lra \ov{\psi} \lra \chi \lra \ov{\chi}$. The SM-like Higgs is $h=(H^0 + \ov{H}^0) / \sqrt{2}$, and we denote the common vacuum expectation value (VEV) of $\psi$, $\ov{\psi}$, $\chi$, and $\ov{\chi}$ as $\phi/\sqrt{2}$. The scalar potential becomes \bea \left( \frac{\lambda_H^2}{4}h^4 + \lambda_\psi^2 \phi^4 + \lambda_H \lambda_\psi h^2 \phi^2 + 2 \lambda_\psi^2 |S|^2 \phi^2 \right) + m^2 |S|^2 \nonumber \\ + \left( m^2 + |\mu|^2 - \lambda_H^2 v_0^2 \right) h^2 + \left( M^2 - \lambda_\psi^2 v_0^2 \right) \phi^2 \eea and the vacuum crucially depends on the signs of the quantities $(m^2 + |\mu|^2 - \lambda_H^2 v_0^2)$ and $(M^2 - \lambda_\psi^2 v_0^2)$. Under the relatively mild requirement that the soft masses respect, \bea \left( m^2 +|\mu|^2 - \lambda_H^2 v_0^2 \right) & < & 0 \\ \label{eq:ewsbreq} \left( M^2 - \lambda_\psi^2 v_0^2 \right) & > & 0 \eea we arrive at the solution $\langle H \rangle = \langle \ov{H} \rangle = \sqrt{v_0^2 - ( m^2 + |\mu|^2 )/ \lambda_H^2}$, $\langle S \rangle = \langle \psi \rangle = \langle \ov{\psi} \rangle = \langle \chi \rangle = \langle \ov{\chi} \rangle = 0$, leading to viable\footnote{Note that a VEV for $\psi$ or $\ov{\psi}$ would lead to large (tree level) corrections to $\Delta \rho$.} EWSB. Including the $D$ terms and relaxing the universality among the soft masses will not disrupt this general feature, provided $m_\psi$, $m_{\ov{\psi}}$, $m_\chi$, and $m_{\ov{\chi}}$ continue to individually satisfy Eq.~(\ref{eq:ewsbreq}), though it will modify the expressions for the VEVs and cause $\tan \beta \equiv \langle H \rangle / \langle \ov{H} \rangle$ to deviate from unity. We also consider non-zero values for $A_S$ and $T_S$.
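As a numerical sanity check on this vacuum structure, one can minimize the simplified potential directly. The scan below uses illustrative parameter values of our own choosing (in units with $v_0 = 1$), written with the quartic couplings entering squared as in the $h^4$ term and chosen so that the two sign conditions above are satisfied; it confirms that the minimum sits at $\phi = S = 0$ with the quoted value of $\langle H \rangle$:

```python
import math

def V(h, phi, S, lH=1.0, lp=0.8, v0=1.0, m2=0.3, mu2=0.1, M2=1.0):
    """Simplified scalar potential; parameters chosen so that
    m2 + mu2 - lH^2 v0^2 < 0 and M2 - lp^2 v0^2 > 0."""
    return (lH**2 / 4 * h**4 + lp**2 * phi**4
            + lH * lp * h**2 * phi**2 + 2 * lp**2 * S**2 * phi**2
            + m2 * S**2
            + (m2 + mu2 - lH**2 * v0**2) * h**2
            + (M2 - lp**2 * v0**2) * phi**2)

def grid_minimum(n=60, span=2.0):
    """Coarse grid scan over (h, phi, S); adequate for a rough check."""
    pts = [span * i / n for i in range(n + 1)]
    return min((V(h, p, s), h, p, s)
               for h in pts for p in pts for s in pts)

_, h_min, phi_min, S_min = grid_minimum()
# Compare with <H> = sqrt(v0^2 - (m2 + mu2)/lH^2), using h = sqrt(2) <H>.
H_vev = math.sqrt(1.0 - 0.4)
```

The grid minimum lands at $h \simeq \sqrt{2}\,\langle H \rangle$ with $\phi = S = 0$, as expected. We return now to the effect of non-zero $A_S$ and $T_S$.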
Both of these terms, combined with the EWSB VEVs for $H$ and $\ov{H}$, generate tadpoles for $S$ which will generically result in $S$ acquiring a VEV of order the weak scale, further complicating the precise relation between the underlying parameters and $\langle H \rangle$ and $\langle \ov{H} \rangle$. The VEV for $S$ is crucial, because combined with the dynamical super-potential, it provides supersymmetric masses for the fermionic components\footnote{Alternately, one may introduce further spectators to marry $\psi$, $\ov{\psi}$, $\chi$, and $\ov{\chi}$ with masses of order $\Lambda$ through non-renormalizable operators mediated by a new set of spectator preons. While this results in a more minimal particle content below $\Lambda$ (and reproduces precisely the FH scalar potential), it requires many more ingredients, and thus we prefer to accept the extra light states at the weak scale.} of $\psi$, $\ov{\psi}$, $\chi$ and $\ov{\chi}$. Thus, we expect that at generic points in the parameter space, subject to quite mild constraints, phenomenologically viable EWSB and weak scale masses for the uncolored exotics result. \subsection{Light Fermion Masses} We have seen that the top Yukawa coupling and Higgs quartic are generated by the strong dynamics, and are naturally large. The remainder of the fermion masses can also be generated in the following ways. \subsubsection{Charged Leptons} The lepton sector is entirely fundamental, so the required operators are dimension five at the preon level, to connect $L_i$, $e_j$ and the composite Higgs $\ov{H}$. The needed underlying interactions are generated by integrating out the spectators $H^\prime$ and $\ov{H}^\prime$ (just as in the original FH), and result in, \bea W_L & = & y_{H^\prime} H^\prime P_1 \ov{P}_2 + y^e_{ij} \ov{H}^\prime L_i e_j \nonumber \\ & \rightarrow & \left( \frac{y^e_{ij} y_{H^\prime}}{4 \pi} \frac{\Lambda}{M_H} \right) \ov{H} L_i e_j ~.
\label{eq:ye} \eea As in the Fat Higgs case, this is suppressed by $\Lambda/M_H$. However, a wide range of parameters is permitted given the smallness of the observed charged lepton masses. \subsubsection{Down-type Quarks} The coupling of the fundamental left-handed quarks $Q_{1,2}$ to the fundamental right-handed down quarks $d_{1,2,3}$ is also a dimension-five operator. It can also be generated by the spectator Higgses, \bea W_{d1} & = & y^d_{ij} \ov{H}^\prime Q_{i} d_j \nonumber \\ & \rightarrow & \left( \frac{y_{H^\prime} y^d_{ij}}{4 \pi} \frac{\Lambda}{M_H}\right) \ov{H} Q_i d_j ~. \label{eq:yd1} \eea We also need couplings between $Q_3$ and $d_i$, in order to have a bottom quark mass. This requires a dimension-six interaction between preons, to connect $Q_3$ and $\ov{H}$ (both mesons) to $d_i$. This can be arranged by integrating out both $P^\prime$ and $H^\prime$, through the interactions, \bea W_{d2} & = & y_{\ov{H}^\prime} \ov{H}^\prime \ov{P}_2 P^\prime + y_{d_j} \ov{P}^\prime P_3 d_j \nonumber \\ & \rightarrow & \left( y_{H^\prime} y_{\ov{H}^\prime} y_{d_j} \frac{\Lambda^2}{M_P M_H} \right) \ov{H} Q_3 d_j ~. \eea Note that the NDA estimates do not include a $4\pi$ suppression in this case, which might point to bottom being naturally heavier than down or strange. At this point, the down-type quark mass matrix is generic: it contains no entries that are necessarily zero or very small. Thus, it is able to generate all of the down-type masses, and (after we generate the up and charm quark masses, below) is sufficient to generate the full CKM structure of the Standard Model. \subsubsection{Up-type Quarks} Finally, we need a mass for the up and charm quarks, the top quark mass having already been arranged through the dynamical super-potential. Since the CKM mixing has already been arranged in the down-type sector, we do not pursue masses linking $Q_3$ with $u_{1,2}$ (or $Q_{1,2}$ with $t_R$) but instead just masses connecting $Q_i$ with $u_j$ where $i,j=1,2$.
These can be generated by integrating out both $P^\prime$ and $H^\prime$, \bea W_u & = & y^u_{ij} H^\prime Q_{i} u_{j} + y_{P_1} \ov{P}^\prime \ov{P}_1 \ov{P}_{\widetilde{1}} \nonumber \\ & \rightarrow & \left( \frac{ y^u_{ij} y_{\ov{H}^\prime} y_{P_1}}{4 \pi} \frac{\Lambda^2}{M_P M_H} \right) H Q_{1,2} u_{1,2} \eea Thus, all of the Yukawa couplings can be built by integrating out the spectator preons $P^\prime$ and Higgses $H^\prime$. \subsubsection{Residual Interactions} \label{sec:residual} In addition to the light fermion Yukawa interactions described above, there are residual effects from integrating out the spectators $H^\prime$ and $\ov{H}^\prime$. The first is that these massive fields mediate flavor-violating interactions of the form, \bea W_{\not F} & = & \left( \frac{y^u_{ij} y^d_{kl}}{M_H} \right) Q_i u_j Q_k d_l + \left( \frac{y^u_{ij} y^e_{kl}}{M_H} \right) Q_i u_j L_k e_l ~. \eea While not a consequence of the composite sector in our model, these types of interactions are often referred to as ``compositeness operators'' \cite{Eichten:1983hw}. They lead to interactions involving two SM fermions and two of their scalar superpartners, and thus to anomalous flavor violation at the loop level. Given the large value of $M_H \gtrsim \Lambda \sim 1000$ TeV, they are not expected to be in contradiction with data, though they lie in a region where improved precision in future experiments could reveal some of their effects. The second operator is an induced $\mu$-term for the composite EWSB Higgses $H$ and $\ov{H}$, \bea W_{\mu} & = & \left( y_{P_1} y_{\ov{H}^\prime} y_{H^\prime} \frac{\Lambda^3}{M_H M_P} \right) \; H \ov{H} \equiv \mu \; H \ov{H} ~. \eea As we saw above, a large $\mu$ term would lead to EW fine-tuning, and so we assume that the Yukawa interactions and/or the suppression from $\Lambda / M_H$ is sufficient to bring this operator down to the weak scale.
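To see how constraining this is, one can invert the estimate: with $\Lambda \sim M_P \sim M_H \sim 1000$ TeV, the product of the three underlying Yukawas must be of order $10^{-4}$ to bring $\mu$ down to $\sim 100$ GeV, unless $M_H$ is raised well above $\Lambda$. The back-of-the-envelope helper below (our own arithmetic, not part of the model) makes this quantitative:

```python
def mu_eff(y3, Lambda_GeV=1.0e6, MH_GeV=1.0e6, MP_GeV=1.0e6):
    """Induced mu-term, mu ~ y3 * Lambda^3 / (M_H M_P), in GeV.
    y3 stands for the product y_P1 * y_Hbar' * y_H'."""
    return y3 * Lambda_GeV**3 / (MH_GeV * MP_GeV)

# With order-one underlying Yukawas, mu comes out at the confinement
# scale; y3 ~ 1e-4 brings it down to ~100 GeV.
```

Order-one Yukawas would put $\mu$ at the confinement scale, so either small couplings or a heavier $M_H$ is required.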
Both of these features are a consequence of our having taken a minimal approach to the question of flavor, and not an ``over-kill'' approach as proposed in \cite{Murayama:2003ag}. There is no obstacle to incorporating the over-kill framework in our $SU(3)$ model, though since the contributions are not sizable enough to be dangerous, we choose to present the simpler and potentially more phenomenologically interesting case here. \subsection{Exotic Quark Masses} We have already seen that the VEV for the singlet $S$ generates weak scale masses for the $\psi$ and $\chi$ superfields for fairly generic parameters. We also need masses for the exotic quarks $q_1$, $q_2$, in order to avoid having them appear at low energies. We introduce fundamental fields $\ov{q}_{(1,2)}$ to marry these exotics through the super-potential, \bea W_q & = & y_{q_1} \ov{q}_1 P_3 \ov{P}_1 + y_{q_2} \ov{q}_2 P_3 \ov{P}_{\widetilde{1}} \nonumber \\ & \rightarrow & \left( \frac{y_{q_1}}{4 \pi} \Lambda \right) \ov{q}_1 q_1 + \left( \frac{y_{q_2}}{4 \pi} \Lambda \right) \ov{q}_2 q_2 \label{eq:qmass} \eea where we continue to include the NDA $4 \pi$ estimates. Thus, we typically expect that $q_1$ and $q_2$ are the heaviest of the exotics. \section{Unification} \label{sec:uni} One of the hallmark successes of the MSSM is the prediction of the unification of the gauge couplings. In this section we demonstrate that this success can also be preserved in our $SU(3)$ FH model. Unlike the generations of the MSSM, our preons do not fill out complete $SU(5)$ representations, and so it is clear that the standard structural successes of four-dimensional GUTs are not present. However, it may be that unification of couplings results from ``string unification'' or from a higher-dimensional theory with orbifold GUT breaking \cite{Kawamura:1999nj}, in which case matter need not fill out complete representations. The evolution of the gauge couplings takes place in two steps.
Below the strong coupling scale $\Lambda$ the matter content is that of the MSSM, including the composite Higgses and top quark, plus the weak scale exotics $S$, $\psi$, $\ov{\psi}$, $\chi$, and $\ov{\chi}$. The fields $S$, $\chi$, and $\ov{\chi}$ are singlets under the MSSM gauge groups, and thus do not contribute to the evolution of couplings at one loop. Thus, the couplings evolve as, \bea \frac{dg_i}{dt} & = & \beta_i \frac{g_i^3}{16 \pi^2} \eea with \bea \beta_i & = & \left( -3, 1, 39/5 \right) \eea for $( SU(3)_C, SU(2)_W, U(1)_Y )$, and we have normalized the hypercharge coupling in the usual $SU(5)$ way, $\beta_1 = 3/5 \beta_Y$. Above the scale $\Lambda$ the evolution includes the extra composites $q_1$ and $q_2$ (and their partners). More correctly, one should consider the evolution in terms of the preons as the relevant degrees of freedom at large scales, but the two descriptions are equivalent because of holomorphicity. In order to recover unification of couplings, we also include two vector-like pairs of spectator ``unifons'' which do not participate in the strong dynamics, and are doublets under $SU(2)_W$ with no hypercharge. Thus, above $\Lambda$ we have, \bea \beta_i & = & \left( -2, 3, 9 \right) \, , \eea and combining these with $\Lambda \sim 1000$ TeV, we find unification of couplings at the level of $5\%$ at a scale of $3\times 10^{14}$ GeV. Such a low scale of unification could be problematic with respect to proton stability, but since there is no clear GUT structure the usual proton decay mediated by $X,Y$ GUT bosons may be absent, and the danger could be further evaded by imposing some type of baryonic symmetry. One might worry that the additional strong dynamics will spoil any true prediction of unification because of the extra strong dynamics threshold at $\Lambda$.
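These numbers can be reproduced with a simple two-stage, one-loop running. The script below takes standard values of the measured couplings at $M_Z$ as inputs (our numbers, quoted only to the precision needed here) together with the beta coefficients above; the pairwise crossings cluster in the low-$10^{14}$ GeV range with a spread of a few percent:

```python
import math

MZ, LAM = 91.2, 1.0e6   # GeV; Lambda ~ 1000 TeV
# 1/alpha_i(M_Z); hypercharge is GUT-normalized (rough standard values).
inv_alpha_MZ = {1: 59.0, 2: 29.6, 3: 8.4}
beta_low = {1: 39.0 / 5, 2: 1.0, 3: -3.0}    # M_Z < mu < Lambda
beta_high = {1: 9.0, 2: 3.0, 3: -2.0}        # mu > Lambda

def inv_alpha(i, mu):
    """One loop: d(1/alpha_i)/d log(mu) = -beta_i / (2 pi)."""
    top = min(mu, LAM)
    ia = inv_alpha_MZ[i] - beta_low[i] / (2 * math.pi) * math.log(top / MZ)
    if mu > LAM:
        ia -= beta_high[i] / (2 * math.pi) * math.log(mu / LAM)
    return ia

def crossing(i, j, lo=1.0e12, hi=1.0e17):
    """Bisect (in log mu) for the scale where alpha_i = alpha_j."""
    f = lambda mu: inv_alpha(i, mu) - inv_alpha(j, mu)
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)
```

The $\alpha_2$-$\alpha_3$ and $\alpha_1$-$\alpha_2$ crossings come out near $2\times 10^{14}$ and $6\times 10^{14}$ GeV respectively, consistent with the $5\%$ unification quoted above. This simple estimate says nothing, however, about the strong threshold at $\Lambda$ itself.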
In a supersymmetric theory, this is not a problem because the holomorphicity of the super-potential demands that the low energy couplings are determined only by the bare masses of the heavy fields \cite{Arkani-Hamed:1997ut}. Thus, our $SU(3)$ FH theory has true unification at a level comparable to the MSSM. \section{Phenomenology} \label{sec:pheno} This model has some distinctive phenomenology, which helps to distinguish it from other supersymmetric theories. The MSSM super-partner phenomenology depends (as usual) quite crucially on the mechanism by which SUSY breaking is communicated to the MSSM fields, and thus is model-dependent. In order to avoid EW fine-tuning, it is important that the scalar partners of top be no more than a few hundred GeV. This requirement, combined with a model of SUSY breaking at high scales will also favor a gluino mass in this region (see \cite{Kobayashi:2005mg} for models designed to evade this requirement). Stop masses of up to about 200 GeV (depending on decay mode and other super-partner masses) can be found in a variety of decay modes at the Tevatron \cite{Demina:1999ty}, which can also typically discover gluinos provided their mass is less than 400 GeV \cite{Abel:2000vs}. The LHC is expected to be sensitive to gluino masses up to about $2$ TeV \cite{Asai:2002xv}. \subsection{Higgs} Including the $S$ superfield, our theory has the additional singlet Higgs (containing additional neutral scalars and pseudo-scalars) which mixes through EWSB with the usual MSSM Higgses. This rich spectrum corresponds to various cases of the next-to-minimal supersymmetric standard model, and has been studied in great detail \cite{Miller:2003ay}. The mixing with the extra scalar state can lead to reduced $Z$-$Z$-$h^0$ and $W$-$W$-$h^0$ couplings, thus weakening the LEP II direct search limits. The fermionic component of $S$ will also mix with the MSSM neutralinos, leading to a modification of the MSSM neutralino properties \cite{Franke:2001nx}. 
The Higgs responsible for EWSB is generally quite a bit heavier than in the usual MSSM, because of the large value of $\lambda_H$ which contributes to the Higgs mass. For large $m_A$, $\tan \beta \sim 1$ and $\Lambda \sim 1000$ TeV, the mass is expected to be around 140 GeV, which is considerably higher than any reasonable value in the MSSM, and high enough that decays such as $H \rightarrow W W^*$ will begin to dominate. More exotic decay modes such as $H \rightarrow A^0 A^0$ may occur, and can be very challenging for LHC Higgs searches \cite{Dobrescu:2000jt}. In addition, large values of $\lambda_H$ can lead to the charged Higgs being the lightest one, something that never occurs in the MSSM \cite{Batra:2004vc,Harnik:2003rs}. \subsection{Exotics} \FIGURE[t]{\includegraphics[width=5.0in]{xsec.eps} \label{fig:xsec}\caption{The cross sections for producing $\psi^+ \psi^-$ and $\widetilde{\psi}^* \widetilde{\psi}$ + $\widetilde{\ov{\psi}}^* \widetilde{\ov{\psi}}$ at the Tevatron.}} \FIGURE[t]{\includegraphics[width=5.0in]{lhc.eps} \label{fig:lhc}\caption{The cross sections for producing $\psi^+ \psi^-$ and $\widetilde{\psi}^* \widetilde{\psi}$ + $\widetilde{\ov{\psi}}^* \widetilde{\ov{\psi}}$ at the LHC.}} The model also has a number of additional chiral multiplets. The colored quark singlets $q_1$ and $q_2$ have masses of order $\Lambda$ (and thus will probably not be produced at near-future colliders), whereas the color-neutral particles are expected to have masses $\lambda_\psi \langle S \rangle$, of order $v \sim 200$ GeV. We expect the lightest of these to be the singlet $\chi$ fields, and the charge $\pm 1$ fields $\psi$ should be slightly heavier, because of their non-zero hypercharge. We expect that the scalar components will be slightly heavier than their fermionic partners because of SUSY-breaking contributions to the scalar masses. The dynamically generated super-potential has a $Z_2$ symmetry under which all of the exotic particles couple in pairs.
This symmetry could be imposed exactly, but more likely will be broken by interactions such as $\ov{q}_1 d_i d_j$, which allow the scalar $\ov{q}_1$ to decay directly into down-type quarks (or the fermionic ${q}_1$ to decay into two quarks and a gaugino). Since all of the exotic states must decay through $\ov{q}_1$, whose mass is of order 1000 TeV, the exotics are typically very long-lived and have complicated multi-particle final states. In the case of $\psi$, this results in electrically charged fermions and their scalar partners which are stable on length scales of the order of the detector, and thus appear as massive charged objects. Studies in Ref. \cite{Culbertson:2000am} considered such objects in the context of certain gauge-mediated SUSY breaking models and concluded that the Tevatron can discover them with 2 ${\rm fb}^{-1}$ at the $5\sigma$ level provided the production cross section is larger than about 100 (10) fb for masses of 100 (250) GeV. In figure~\ref{fig:xsec} we plot the production cross sections for both the fermions ($\psi$) and the scalars ($\widetilde{\psi}$ and $\widetilde{\ov{\psi}}$) at the Tevatron \cite{Weiglein:2004hn}, through the partonic processes $q \ov{q} \rightarrow \gamma, Z \rightarrow \psi^+ \psi^-$, and so forth for the scalars. Note that the scalar cross sections are suppressed relative to the fermionic ones because of the intermediate vector boson, which requires that the scalars be produced in the $p$-wave to conserve angular momentum. For a wide variety of masses, the Tevatron should be able to probe this scenario with 2 ${\rm fb}^{-1}$ of collected luminosity. The LHC should be able to produce and detect the charged quasi-stable particles up to even larger masses. The cross sections at the LHC are plotted in figure~\ref{fig:lhc} \cite{Weiglein:2004hn}, and it is expected that the LHC will cover the entire parameter space \cite{Ambrosanio:2000zu}.
The $\chi$ and $\ov{\chi}$ particles will be produced much less copiously, and, being electrically neutral and quasi-stable, are very difficult to detect. \section{Conclusions} \label{sec:concl} The Fat Higgs is a fascinating alternative to the minimal supersymmetric standard model, which may naturally explain why LEP II did not discover the light CP even Higgs responsible for EWSB. In this article, we have examined an alternative to the minimal model based on an $s$-confining (at $\sim 1000$ TeV) $SU(3)$ group which generates not only the MSSM Higgses and a singlet, but also the top quark as composites in the low energy theory. This naturally generates the large top Yukawa coupling as a residual of the strong dynamics, perhaps explaining why top is so much more massive than any other fermion of the Standard Model. We are able to generate all of the observed flavor structure of the standard model, and predict that the Higgs mass and top mass are correlated because of the common origin of both couplings from the dynamical super-potential. This relieves some fine-tuning in the original FH model, and perhaps motivates the large top mass. Electroweak symmetry breaking happens in a way which is reminiscent of the FH, and does impose some mild conditions on the soft masses of the MSSM-like and exotic Higgses. The model is compatible with unification of couplings, and results in some weak scale exotic states not seen in the MSSM. These include quasi-stable electrically charged ($\pm 1$) objects for which there are good discovery prospects at the Tevatron run II once 2 ${\rm fb}^{-1}$ of data has been collected. These provide a means to distinguish this model from other supersymmetric theories, including the original Fat Higgs itself. There are also interesting modifications to Higgs physics, with the most important one being the fact that the lightest CP even Higgs will typically be heavier than in the MSSM, even at tree level.
Clearly, supersymmetric theories are likely to be richer than even the minimal models, and the next generation of colliders is likely to have an exciting time unraveling the physics at the TeV scale. \acknowledgments The authors have benefited from discussions with K. Agashe, P. Batra, J. Terning, H. Murayama, and C.E.M. Wagner, and are thankful to R. Harnik, and G. Kribs for discussions concerning the original Fat Higgs model. A.D. was partially supported by NSF Grants P420D3620414350 and P420D3620434350. Fermilab is operated by Universities Research Association Inc. under contract no. DE-AC02-76CH02000 with the DOE. Work at ANL is supported in part by the US DOE, Div.\ of HEP, Contract W-31-109-ENG-38.
\section{Introduction} \label{sec:intro} \PARstart{Q}{uantum} computers exist, and have been used to solve small problems~\cite{vandersypen:shor-experiment,gulde03:_implem_deuts_jozsa}. The range of potential uses includes some important problems such as Shor's algorithm for factoring large numbers and physical simulations of quantum systems; for a few applications, quantum computers may exhibit exponential speedup over classical computers~\cite{nielsen-chuang:qci,shor:factor,abrams99:_expo_quant_algo}. However, the engineering challenges of creating large-scale quantum computers are daunting~\cite{van-meter:qarch-impli,kok-2007-79}, and current capacities are only up to about 8-12 quantum bits, or {\em qubits}~\cite{negrevergne06:_12qubit-bench,haeffner05:qubyte}. Therefore, some researchers have suggested that networks of small quantum computers be used to overcome the limitations of individual machines, creating distributed quantum systems~\cite{grover97:_quant_telec,cleve1997sqe,cirac97:_distr_quant_comput_noisy_chann,van-meter07:_distr_arith_jetc,yepez01:_type_ii,lloyd:quantum-internet}. The goals of a quantum network are the same as any classical distributed system: to connect computational resources, data, or people so that the resulting system is more valuable than the sum of its parts. The distant systems may have access to different data, may provide different computational capabilities, or may simply increase total capacity. The first real-world deployments of quantum networks have already begun. 
The first and most developed application is \emph{quantum key distribution} (QKD), which uses a quantum channel and an authenticated (but not necessarily secret) classical channel to create shared, secret, random classical bits that can be used as a cryptographic key~\cite{bennett:bb84}\footnote{Note that QKD does not completely solve the security problems created by Shor's quantum algorithm for factoring large numbers and finding discrete logarithms; Shor impacts public-key encryption (which is used in authentication mechanisms) and the Diffie-Hellman key agreement protocol. QKD provides key exchange, but requires authentication~\cite{paterson04:why-qkd}.}. An experimental metropolitan-area QKD network has been developed and deployed in the Boston area~\cite{elliott:qkd-net}, and similar efforts are underway in Japan~\cite{nambu06:_one_quant_key_distr_system} and Europe~\cite{alleaume07:_secoqc_white}. Efforts to extend these networks to wider areas are constrained by loss in the communication channel, which results in exponential decay in throughput as distance increases. When the end points of a connection are far apart, the use of specialized devices called {\em quantum repeaters} may be required~\cite{briegel98:_quant_repeater,childress05:_ft-quant-repeater,hartmann06,van-loock06:_hybrid_quant_repeater,ladd06:_hybrid_cqed,yamamoto2003eee}. A quantum repeater is qualitatively different from a classical signal amplifier; it does not copy a quantum state or regenerate signal levels (as this is provably impossible in general~\cite{wootters:no-cloning}). Instead, quantum repeaters transfer quantum data via a distributed quantum algorithm called \emph{teleportation}~\cite{bennett:teleportation,bouwmeester:exp-teleport,furusawa98}, which allows the transfer of a quantum state via classical communication. Experimental progress toward the realization of such repeaters has recently been reported~\cite{Chin-WenChou06012007,zhao2003ere}. 
Teleportation consumes a special form of \emph{entangled state} known as a \emph{Bell pair}. In an entangled state, two quantum systems that may be physically separated share a non-local correlation that Einstein famously referred to as ``spooky action at a distance''. QKD does not directly require entangled states. However, the distributed Bell pairs created by repeaters will enable long-distance QKD, and most other applications of distributed quantum computation will use distributed Bell pairs as well~\cite{ekert1991qcb,yepez01:_type_ii,lloyd:quantum-internet,van-meter07:_distr_arith_jetc}. We would like to have perfect Bell pairs to use for our distributed computations. Unfortunately, perfect systems do not exist, so we must concern ourselves with the \emph{fidelity} of quantum states, a metric we will use to describe how near we are to perfect Bell states. The fidelity is defined as the probability that a perfect measurement of two qubits would show them to be in the desired Bell state. The fidelity is reduced by channel loss and imperfect control of qubits, but it may be improved by a form of error correction called \emph{purification}~\cite{bennett95:_concen,cirac97:_distr_quant_comput_noisy_chann,maneva2000itp,dur2007epa,pan03:_exper-purification}. The primary contribution of this paper is the introduction of the \emph{banded purification} algorithm, which improves the utilization of physical and temporal resources in a network of repeaters. Our simulations show that banded purification will improve performance by up to a factor of fifty compared to prior schemes. Banding restricts purification to using Bell pairs of similar fidelity, in order to improve both the probability of success of the purification and the resulting boost in fidelity when it succeeds. We have characterized expected gains for some system engineering trade-offs. 
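To make the fidelity metric concrete, the following toy sketch (plain Python) builds the density matrix of a noisy Bell pair under the textbook Werner noise model -- an illustrative assumption on our part, not a statement about the repeater hardware -- and recovers its fidelity with respect to $|\Phi^+\rangle$:

```python
# Fidelity of a noisy Bell pair, illustrated with a Werner state:
# rho = F |phi+><phi+| + (1-F)/3 (I - |phi+><phi+|).

def phi_plus():
    """State vector of the Bell state |phi+> = (|00> + |11>)/sqrt(2)."""
    s = 2 ** -0.5
    return [s, 0.0, 0.0, s]

def werner_density_matrix(f):
    """4x4 density matrix of a Werner state with fidelity f."""
    psi = phi_plus()
    proj = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]
    ident = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    return [[f * proj[i][j] + (1 - f) / 3 * (ident[i][j] - proj[i][j])
             for j in range(4)] for i in range(4)]

def fidelity(rho):
    """F = <phi+| rho |phi+>: the probability that a perfect Bell-state
    measurement finds the pair in the desired state."""
    psi = phi_plus()
    return sum(psi[i] * rho[i][j] * psi[j]
               for i in range(4) for j in range(4))

rho = werner_density_matrix(0.638)   # the base-pair fidelity quoted for 20 km
print(round(fidelity(rho), 3))       # recovers 0.638
```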
Our results increase the performance of the system and relax hardware constraints, improving the prospects for experimental implementation of wide-area quantum networks. We also provide a description of repeater operation as a network-centric layered protocol model, outlining the messages transferred, the need for layers to share an addressing scheme for qubits and the repeaters themselves, and buffer management. Section~\ref{sec:repeaters} presents the basic operation of quantum repeaters. Section~\ref{sec:stack} outlines a layered protocol architecture to support these operations. Section~\ref{sec:banded} describes prior work in scheduling purification, then presents our banded algorithm. Our simulation results are detailed in Section~\ref{sec:sims}, showing the improvement in performance using banding, as well as the hardware constraints and trade-offs we have identified. We conclude in Section~\ref{sec:conclusion}. \section{Quantum Repeater Basics} \label{sec:repeaters} A network of quantum repeaters supports distributed quantum computation by creating high-fidelity end-to-end Bell pairs. Once completed, these pairs can then be used to teleport application data, which is generally too valuable to risk in the error-prone process of hop-by-hop teleportation. Section~\ref{sec:intro} identified the three functions that a network of quantum repeaters must provide: a basic entangling mechanism, and the two distributed algorithms, purification and teleportation, which transform large numbers of short-distance, low-fidelity Bell pairs into smaller numbers of long-distance, high-fidelity pairs. A quantum repeater, which we also call a \emph{station}, is a small, special-purpose quantum computer, holding a few physical qubits that it can couple to a transmission medium. 
The hardware provides the basic capability of creating short-distance, low-fidelity (``base level'') Bell pairs via a physical entanglement mechanism~\footnote{When we speak of the ``creation'' and ``destruction'' of Bell pairs, we are referring to the \emph{state} of two qubits in separate repeaters; the physical qubits in the repeaters are not physically created or destroyed.}; the other two functions require classical communications and computation, and local quantum operations. In classical packet-switched networks, an in-flight packet consumes resources such as buffer space, computation, and bandwidth only at its current location (modulo end-to-end reliable delivery considerations, such as TCP). Quantum repeaters, in contrast, involve widely distributed quantum computation; each station participates repeatedly in building an end-to-end distributed Bell pair, through purification and the use of teleportation known as \emph{entanglement swapping}. In this section, we first describe how the base-level Bell pairs are created over a single hop, then how Bell pair fidelity is improved at all distances by using purification. With this background, we turn to teleportation and entanglement swapping. \subsection{Bell Pair Creation} \label{sec:epr-creation} Over distances greater than a few millimeters, the creation of Bell pairs is mediated by photons, which may be sent through free space or over a waveguide such as optical fiber. Schemes for Bell-pair creation may be divided very crudely into those that use very weak amounts of light -- single photons, pairs of photons, or laser pulses of very few photons~\cite{childress05:_ft-quant-repeater,childress2006ftq,cirac1997qst,duan04:_scalab,van-Enk:PhysRevLett.78.4293,duan2001ldq} -- and those that use laser pulses of many photons, which are called \emph{qubus} schemes~\cite{munro05:_weak,spiller05:_qubus,ladd06:_hybrid_cqed,van-loock06:_hybrid_quant_repeater}.
Qubus repeaters, also known as ``hybrid'' repeaters because they utilize some analog classical characteristics of light in conjunction with the digital characteristics of qubits, are currently being developed by a multi-institution collaboration involving the authors. The methods of creating photons, performing entangling operations, and making measurements are different in each type of repeater, but at the level relevant for this paper the architectures are similar. At the physical level, the relationship between the probability of successfully creating a Bell pair and the fidelity of the created pair is complex. Only Bell pairs with fidelity bounded well above 0.5 contain useful amounts of entanglement; as the fidelity degrades toward 0.5, we become unable to make use of the pair. Using the qubus scheme, the probability of successfully creating a Bell pair is high, but even when the operation succeeds the fidelity of the created Bell pair is low (these two parameters represent an engineering tradeoff we will not discuss here). For the parameter settings we have chosen, corresponding to the qubus scheme, Bell pairs are created with fidelities of 0.77 or 0.638 for 10km and 20km distances, respectively, and the creation succeeds on thirty-eight to forty percent of the attempts~\cite{ladd06:_hybrid_cqed}. Methods for Bell pair creation that utilize single photons have much lower success probabilities, but create very high-fidelity Bell pairs when they do succeed. \begin{figure} \centerline{\scalebox{0.5}{\hbox{ \input{pure-round-trips.pstex_t}}}} \caption{Messaging sequence for the lowest level of Bell pair creation and purification.} \label{fig:pure-round-trips} \end{figure} Figure~\ref{fig:pure-round-trips} shows the message sequence for creating base-level entangled pairs. The wavy lines in the figure (labeled PE, for Physical Entanglement) indicate the optical pulses that interact directly with the qubits, while the straight lines are classical communication. 
At the sender, an optical pulse is entangled with each separate physical qubit, then multiplexed into the long-distance fiber. The pulses are very short compared to the propagation delay of tens to hundreds of microseconds ($T_1$ in the figure), so we can treat the pulses as effectively being instantaneous. Upon arriving at the receiver, the pulses are demultiplexed, and an attempt is made to entangle each one with a free qubit. Certain properties of the pulse are then measured~\cite{childress05:_ft-quant-repeater,cirac1997qst,duan04:_scalab,van-Enk:PhysRevLett.78.4293}. The measurement results tell us if the entangling operation succeeded. If so, we have created a {\em Bell pair}, entangling a qubit at the sender with a qubit at the receiver. The receiver prepares ACK/NAK ``keep'' flags for each qubit and sends them back to the sender, letting the sender know which operations succeeded. This measurement and flag preparation is $T_2$ in the figure and the return message is labeled EC (Entanglement Control). \subsection{Purification} \label{sec:purification} If the two stations have successfully created more than one Bell pair, they can next begin the distributed quantum computation known as {\em purification}. Purification raises the fidelity of a Bell pair, essentially performing error correction on a test pattern, taking advantage of the specially-prepared initial state of the qubits. Purification takes two Bell pairs and attempts, via local quantum operations and classical communication, to combine them into one higher-fidelity pair, an operation that takes time $T_3$ in Figure~\ref{fig:pure-round-trips}. Two facets of purification determine its efficiency: the quantum algorithm used on each pair of Bell pairs, for which there are several methods~\cite{bennett1996pne,PhysRevLett.77.2818,dehaene2003lpp}, and the \emph{scheduling}~\cite{dur:PhysRevA.59.169}. 
Scheduling chooses which pairs to purify with each other, and has an enormous impact on the physical resources required and the rate at which the fidelity of a Bell pair grows. We will discuss scheduling in detail in Section~\ref{sec:banded}. The quantum algorithm used on each pair may be chosen to be the same regardless of each pair's history, as in Refs. \cite{bennett1996pne} and \cite{PhysRevLett.77.2818}, but additional efficiency is gained by tracking the noise accumulated in each pair as it has developed in the repeater and changing the algorithm appropriately. If the noise of the two input Bell pairs is known, one of a small, finite set of possible algorithms may be chosen which minimizes the noise of the resulting purified pair~\cite{dehaene2003lpp}. We use such an approach for our offline simulations, assuming the noise expected from qubus-based hardware~\cite{ladd06:_hybrid_cqed}. Such an approach is also possible in real time, but since we cannot directly measure the quantum parts of the system without disturbing the quantum state, the quantum state must be tracked by simultaneous classical calculations identical to our simulations. Purifying two pairs always destroys one Bell pair and returns its physical resources to the pool of free qubits. If the operation fails, both pairs are freed for reuse, but if the operation succeeds, the resulting higher-fidelity pair is either kept to await more purification (if the target fidelity for this distance has not yet been reached) or is passed to the next higher level in the protocol stack. $T_2$ and $T_3$ are both small compared to $T_1$, so we will ignore them in this paper. 
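Our simulator tracks the full noise state of each pair; as a simpler stand-in, the textbook BBPSSW recurrence for Werner-state inputs shows how both the success probability and the output fidelity depend on the two input fidelities (the numbers below therefore differ from those quoted later for the state-dependent protocol):

```python
def purify(f1, f2):
    """BBPSSW purification of two Werner pairs with fidelities f1, f2.
    Returns (success_probability, output_fidelity). Textbook formula,
    used here only as an illustration of the general behavior."""
    p_succ = (f1 * f2
              + f1 * (1 - f2) / 3
              + f2 * (1 - f1) / 3
              + 5 * (1 - f1) * (1 - f2) / 9)
    f_out = (f1 * f2 + (1 - f1) * (1 - f2) / 9) / p_succ
    return p_succ, f_out

p, f = purify(0.638, 0.638)     # two base pairs at the 20 km fidelity
print(round(p, 3), round(f, 3))
p, f = purify(0.71, 0.638)      # mismatched inputs: lower gain in fidelity
print(round(p, 3), round(f, 3))
```

Note how the mismatched attempt barely improves on the better input pair; this is the effect that motivates the scheduling discussion in Section~\ref{sec:banded}.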
\subsection{Teleportation and Swapping} \label{sec:teleport-and-swap} \begin{figure} \centering \subfigure[Operations in entanglement swapping.]{ \includegraphics[width=9cm]{entanglement-swapping.eps} \label{fig:swapping-ops}} \subfigure[Three levels of entanglement swapping.]{ \includegraphics[width=8cm]{entanglement-swapping-three-levels.eps} \label{fig:swapping-3-levels}} \caption{Entanglement swapping. Spiral lines represent distributed Bell pairs, and straight lines are classical communication.} \label{fig:swapping} \end{figure} The use of teleportation in repeaters, known as entanglement swapping, lengthens distributed Bell pairs by teleporting the state of one member of a Bell pair over progressively longer distances, until the pair stretches from end to end. Teleportation consumes Bell pairs; the repeaters are responsible for replenishing their supply of shorter-distance pairs in order to make the end-to-end Bell pairs. In teleportation, the state of a qubit is destroyed in one location and recreated in another. First, a Bell pair is distributed, with one member held at the source (Alice) and the other at the destination (Bob). The qubit to be teleported (which we will call the data qubit) is entangled with Alice's member of the Bell pair. Then both the data qubit and Alice's Bell qubit are measured. Each measurement results in one classical bit, and destroys the quantum state of the qubit. The two classical measurement results are communicated to Bob, who then uses them to decide what quantum operations on his Bell qubit will recreate the original state of the data qubit. The original creation of the Bell pair must begin with a quantum operation that entangles two distant qubits, as described in Section~\ref{sec:epr-creation}, but the teleportation operation itself requires only local quantum operations and classical communication between Alice and Bob. 
In a system of quantum repeaters, the use of teleportation moves the state of a single qubit from one station to another. If the qubit being teleported is a member of (another) Bell pair, that relationship is preserved, but one of the end points moves. Figure~\ref{fig:swapping-ops} illustrates this process, known as {\em entanglement swapping}. A Bell pair spanning nodes 0 and 1 ($0\leftrightarrow 1$) and a pair spanning nodes 1 and 2 ($1\leftrightarrow 2$) are used to create a single $0\leftrightarrow 2$ pair. The qubit in Station 1 in step 1 is teleported to Station 2, lengthening the distance between the end points of the black Bell pair. Step 1 is the creation of the base-level Bell pairs. Step 2 is local quantum operations at the middle node, including the measurement of both qubits, followed by classical communication to the end nodes, then possibly local operations at the destination node to complete the recreation of the teleported qubit. This teleportation destroys the right-hand Bell pair, and frees the two qubits in the middle for reuse. In theory, any three nodes with two Bell pairs can use entanglement swapping, but most designs presented to date assume that a chain of repeaters will double the distance between end points of the Bell pair each time swapping is performed, combining two $n$-hop Bell pairs into one $2n$-hop pair. Briegel {\em et al.}~\cite{briegel98:_quant_repeater} established the use of such a doubling architecture in early discussions of quantum repeaters, and showed that performance declines polynomially rather than exponentially with distance~\footnote{Portions of their analysis apply to the splicing of more than two links in each swapping step, but they always discuss a regular, exponential growth in the span of Bell pairs, and their most detailed analysis uses the doubling approach.}. 
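For a chain whose hop count is a power of two, the doubling schedule is easy to enumerate; the sketch below (the station numbering is ours) lists which station acts as the swap midpoint at each level:

```python
def doubling_schedule(n_hops):
    """Entanglement-swapping schedule for a line of n_hops = 2**k hops
    (stations numbered 0..n_hops). At each level, two pairs of equal
    span are spliced at a midpoint station, doubling the span.
    Returns a list of (level, midpoint, left_end, right_end)."""
    assert n_hops & (n_hops - 1) == 0, "doubling assumes a power of two"
    ops = []
    level, span = 1, 2
    while span <= n_hops:
        for left in range(0, n_hops, span):
            ops.append((level, left + span // 2, left, left + span))
        level, span = level + 1, span * 2
    return ops

for level, mid, left, right in doubling_schedule(4):
    print(f"level {level}: station {mid} splices {left}<->{mid} "
          f"and {mid}<->{right} into {left}<->{right}")
```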
If purification always succeeds, this logarithmic-depth combination of links intuitively appears to be optimal, though we know of no proof of this hypothesis. Jiang {\em et al.} have begun investigating relaxing that constraint dynamically, allowing neighboring Bell links of any length to combine~\cite{jiang2007oaq}. This approach is promising for probabilistic systems, and necessary when physical constraints dictate that the number of hops is not a power of two. In the simulations presented here, we assume the use of a basic doubling architecture for swapping. We call the number of times swapping has been performed the ``level'' of the Bell pair, with level 0 being our base Bell pairs at a distance of one hop. A Bell pair of level $i$ spans $2^i$ hops. Figure~\ref{fig:swapping-3-levels} shows three levels of Bell pairs, representing the state after zero, one, and two levels of swapping. In the end, one pair has been stretched to reach four hops, and the other three Bell pairs present at level 0 have been destroyed and the physical qubits freed for reuse. \section{Quantum Repeater Protocol Stack} \label{sec:stack} \begin{figure*} \hfill \subfigure[protocol stack]{ \includegraphics[width=6cm]{protocol-stack.eps} \label{fig:protocol-stack} } \hfill \subfigure[layer interactions]{ \includegraphics[width=8cm]{four-hop-protocol-connections.eps} \label{fig:protocol-interactions} } \hfill \caption{Protocol stack and layer interactions for quantum repeater operation.} \label{fig:protocol-stacks} \end{figure*} \begin{figure} \center{ \includegraphics[width=8cm]{entanglement-swapping-msg-seq.eps}} \caption{Example message sequence for purification control (PC) and entanglement swapping control (ESC). Numbers in parentheses are the level, or distance.} \label{fig:swapping-msg-seq} \end{figure} To give an overview of the processing and message flow in a repeater network, Section~\ref{sec:repeaters} discussed repeater behavior as an integrated phenomenon. 
However, the actions can be cleanly separated into a layered protocol stack, as shown in Figure~\ref{fig:protocol-stack}. The bottom, {\em physical entanglement} (PE) layer corresponds to the wavy lines in Figure~\ref{fig:pure-round-trips}, using strong laser pulses or single photons to create the shared quantum state between two distant qubits. The next layer, {\em entanglement control} (EC), consists primarily of the ``keep'' flags indicating the success or failure of entanglement attempts. These bottom two layers operate only across a single hop. Above these layers reside the {\em purification control} (PC) and {\em entanglement swapping control} (ESC) layers. PC consists of a series of messages indicating the pairs on which purification was attempted, and the results. PC must operate at each power of two distance, 1 to $2^n$, for a $2^n$-hop link. Figure~\ref{fig:pure-round-trips} shows the messaging sequence for PE, EC, and the lowest layer of PC. ESC, which supports the teleportation that splices two Bell pairs to create one pair that spans a greater distance, must involve three nodes, as shown in Figure~\ref{fig:swapping} and described in Section~\ref{sec:teleport-and-swap}. ESC must inform one of its partners (generally, the one on the right in the diagram, as we assume qubits are being teleported left to right) of the results of its local operations, which are probabilistic. The right-hand node may need to perform local operations based on the results received. The left-hand node must also be informed of the basic fact of the swapping operation. ESC at the middle station unconditionally returns the qubits just measured to PE for reuse. The left and right stations pass control of their qubits to the PC level above the current ESC, for purification at the new distance. 
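Which PC connections a given station must maintain follows from the doubling pattern; here is a simplified sketch for a line of $2^n$ hops (it captures the static pattern only, ignoring the fact that probabilistic failures reshuffle which pairs actually exist at any instant):

```python
def pc_partners(station, n_hops):
    """Stations with which `station` must run purification control (PC)
    on a line of n_hops = 2**k hops, keyed by span. Under the doubling
    pattern a station holds an end of a span-s pair only if its index
    is a multiple of s; its partner is s stations away."""
    partners = {}
    span = 1
    while span <= n_hops:
        if station % span == 0:
            partners[span] = [p for p in (station - span, station + span)
                              if 0 <= p <= n_hops]
        span *= 2
    return partners

print(pc_partners(2, 8))  # station 2 holds pair ends only at spans 1 and 2
print(pc_partners(0, 8))  # an end station participates at every span
```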
In normal operation, purification and swapping (PC and ESC) are repeated at each level until the top, end-to-end level is reached, as shown in Figures~\ref{fig:protocol-interactions} and \ref{fig:swapping-msg-seq}. At that final distance, purification (PC) may be repeated one more time to create the final end-to-end pair of the fidelity required by the application. Of course, purification can be omitted or repeated at any level, depending on the fidelity of the Bell pairs. In Figure~\ref{fig:swapping-msg-seq}, purification at level 0 is shown happening twice on the left. The actual timing of messages may vary somewhat; PC(0) can only be initiated after the status of qubits has been established by EC, as in Figure~\ref{fig:pure-round-trips}. Because the stations run a deterministic algorithm to select which pairs to purify, PC does not need to negotiate which operations to perform, only inform its partner of the outcomes. When the network is a single line of $N = 2^n$ hops, each station can easily determine the other stations to which it must build PC and ESC connections. In a network with a richer topology, this process must involve routing for the end-to-end connection. The middle meeting point of each entanglement swapping level must be identified. Some form of source routing or circuit setup will be required, especially when the number of hops is not a power of two; we defer this problem to future work. Both the stations themselves and the qubits they hold must be addressable. Because PC and ESC can involve any stations, the control protocols must be designed to include general station addresses. EC, PC, and ESC must also be able to address qubits at both ends of each connection and to share those addresses with other nodes and protocol layers. 
The addresses can be logical, and a station may relocate its half of any Bell pair from one internal qubit to another without notifying its partners, provided it can continue to match incoming and outgoing messages to the correct qubits. Once the base Bell pair is created, the qubits no longer need a direct connection to the long distance quantum communication channel. \section{Purification Scheduling} \label{sec:banded} Section~\ref{sec:purification} deferred discussion of a critical point: two stations trying to take a set of lower-fidelity pairs and create a higher-fidelity pair must decide which pairs to purify. The algorithm used determines the physical resources required and the speed of the convergence to the target fidelity. Our new banded purification scheduling algorithm raises the throughput of a given hardware configuration by a factor of up to fifty, and provides greater flexibility in hardware configuration. Before we present banding, we describe the \emph{symmetric} and \emph{pumping} scheduling algorithms, then our prior greedy algorithm. Symmetric purification, described by D\"ur \emph{et al.} as schemes A and B~\cite{dur:PhysRevA.59.169}, requires pairs to attempt to purify only with other pairs of the same fidelity. Figure~\ref{fig:scheduling-trees}a shows the evolution and history of a symmetrically-grown Bell pair. In the figure, for simplicity, base-level Bell pairs are created in odd-numbered time steps, and purification operations are attempted in even-numbered time steps. (The fidelities in the diagrams in this section are for illustration only, and are not exact.) At $t = 4$, the symmetric algorithm would not attempt to purify the fidelity 0.71 pair with the fidelity 0.638 pair, instead waiting for the development of a second fidelity 0.71 pair at $t = 6$. \begin{figure*} \center{ \includegraphics[width=17cm]{scheduling-trees.eps}} \caption{Different purification scheduling algorithms. Gray bars represent the \emph{history} of the pair. 
Black horizontal bars represent currently entangled Bell pairs. Numbers show the fidelity of the Bell pair. a) Logical evolution of a symmetrically-grown Bell pair. b) History tree of a Bell pair grown using the entanglement pumping algorithm. c) History tree of a Bell pair grown using the greedy algorithm. d) An example history of the evolution of a Bell pair using our new banded purification algorithm. If the boundary between two bands is placed at e.g. 0.66, at point A, the pairs 0.71 and 0.638 will not be allowed to purify. Dashed lines represent time that Bell pairs are forced to wait for a suitable partner to be created.} \label{fig:scheduling-trees} \end{figure*} Symmetric purification would take our starting fidelity of 0.638 to, e.g., a target fidelity of 0.98 after five rounds. If purification always succeeded, thirty-two ($2^5$) base-level Bell pairs would be required: $32\times 0.638 \rightarrow 16\times 0.71 \rightarrow 8\times 0.797 \rightarrow 4\times 0.867 \rightarrow 2\times 0.952 \rightarrow 1\times 0.988$. Unfortunately, purification is a state-dependent, probabilistic operation. When using our starting state, the first step (0.638 + 0.638) will succeed only 57\% of the time, while the last step will succeed 92\% of the time. In total, symmetric purification actually consumes, on average, more than 450 base-level Bell pairs to make one Bell pair of 0.98 fidelity. The principal drawbacks to the symmetric algorithm are the inflexible use of available resources, both time and space (as shown by e.g. the wait at $t = 4$ in the figure), and the fact that the truly symmetric history tree is effectively impossible to achieve. Memory degradation over time causes two pairs that arrived at different times to have different fidelities, so forcing exact matches only is impractical.
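The expected consumption follows from Wald's identity: each round needs, on average, $2/p$ input pairs per output pair, so the total cost is the product of $2/p$ over the rounds. A sketch, with a hypothetical flat success profile (the text quotes only the first- and last-round probabilities, so the intermediate values here are an assumption):

```python
def expected_base_pairs(success_probs):
    """Expected number of base-level Bell pairs consumed per purified
    output pair under symmetric purification, given the per-round
    success probabilities (Wald's identity: 2/p inputs per output)."""
    cost = 1.0
    for p in success_probs:
        cost *= 2.0 / p
    return cost

print(round(expected_base_pairs([1.0] * 5)))   # deterministic case: 32
# A hypothetical flat profile of ~59% per round lands near the >450
# pairs reported in the text:
print(round(expected_base_pairs([0.59] * 5)))
```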
Entanglement pumping, defined by D\"ur \emph{et al.} as Scheme C and shown in Figure~\ref{fig:scheduling-trees}b, can be done using only the minimum two qubits in each station~\cite{childress05:_ft-quant-repeater}. The fidelity of one Bell pair is pumped by purifying with base-level pairs created using the physical entanglement mechanism. This scheme uses physical resources efficiently, but improves the fidelity of entanglement only slowly; when the fidelity difference between the base pairs and the final target fidelity is large, pumping is ineffective. Previous work~\cite{ladd06:_hybrid_cqed} considered a greedy algorithm for purification scheduling: at each time step, all available resources are purified, never deferring immediate actions in favor of potential later operations. Figure~\ref{fig:scheduling-trees}c shows one possible history tree. When the fidelity of a base Bell pair is high, above $\sim 0.75$, this scheme works well. However, when the fidelity is lower, because of longer distances or loss elsewhere in the system, a greedy algorithm results in attempts to purify a high-fidelity pair with a low-fidelity pair, as at point A in the figure. Using a low-fidelity pair both has a lower probability of success and gives only a small boost in fidelity when it succeeds. Thus, the effective floor for the fidelity of base pairs when using the greedy algorithm is high. We have seen that the greedy algorithm and entanglement pumping sometimes match Bell pairs with very different fidelities, resulting in low probability of success for the purification operation and giving only a limited boost in fidelity even on success. The fully symmetric tree is impractical: it imposes strict minimum hardware requirements, cannot allocate resources flexibly, and in practice cannot take into account memory degradation. A new approach is required. We have developed {\em banded} purification to match purification pairs efficiently but flexibly.
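In brief, the scheduler buckets pairs by fidelity band and matches them only within a band. A minimal sketch (the band edge of 0.66 is taken from the two-band example below; the greedy within-band matching is our simplification of the full scheduler):

```python
def band_of(fidelity, band_edges):
    """Index of the band containing `fidelity`, given ascending interior
    band edges (e.g. [0.66] splits fidelity space into two bands)."""
    return sum(1 for edge in band_edges if fidelity >= edge)

def schedule_purifications(pair_fidelities, band_edges):
    """Match pairs within each band, highest fidelity first. Returns
    (matches, leftovers); a leftover pair waits for a partner to be
    created in its own band rather than burning a better pair."""
    bands = {}
    for f in sorted(pair_fidelities, reverse=True):
        bands.setdefault(band_of(f, band_edges), []).append(f)
    matches, leftovers = [], []
    for members in bands.values():
        while len(members) >= 2:
            matches.append((members.pop(0), members.pop(0)))
        leftovers.extend(members)
    return matches, leftovers

# Two bands split at 0.66: the 0.71 pair is *not* sacrificed on the
# 0.638 pair; the two 0.638 pairs match, and the 0.71 pair waits.
print(schedule_purifications([0.71, 0.638, 0.638], [0.66]))
```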
We divide the fidelity space into several regions, or bands, and only allow Bell pairs within the same band to purify with each other. Figure~\ref{fig:scheduling-trees}d shows a simple example, assuming two bands divided at a fidelity of 0.66. At the point A, the greedy algorithm would attempt to purify the 0.638 pair with the 0.71 pair (as shown at A in Figure~\ref{fig:scheduling-trees}c). When using banding, the band boundary at 0.66 prevents those pairs from purifying, and so the system waits for the creation of another fidelity 0.638 pair, then purifies the two 0.638 pairs. If that purification is successful, resulting in a second 0.71 pair, then purification will be attempted at point B using the two 0.71 pairs. At point C, the banding structure allows the new 0.71 pair to purify with the 0.797 pair, whereas the symmetric algorithm would block temporarily. Unlike the greedy and pumping algorithms, the banded approach treats high-fidelity pairs as more valuable than low-fidelity pairs, and only uses them when another similarly high pair is available, making those operations more likely to succeed and providing a larger boost in fidelity. Recall that the purification operations can fail, but their probability of success increases as the fidelity of the pairs involved increases. Any attempt to predict the exact best sequence of purification operations from a given state, therefore, must take into account which resources are currently busy, the fidelities of all available Bell pairs, the probability of success of possible purification choices, and the probability that currently unentangled qubits will be successfully entangled in the near future using the physical entanglement mechanism. The banded and symmetric algorithms are potentially subject to deadlock, but the problem is easily solved for the banded algorithm. If a repeater has e.g. seven qubits and seven bands (or seven rounds of purification for the symmetric case), one Bell pair could be in each band. 
Each pair would have no possible purification partner, and no free qubits would be available to create new pairs to add to the bottom band. Each swapping level is independent, so the minimum number of qubits per station must actually be the number of bands times the number of levels, plus one, for the receive half and send half of the repeater. In our simulations, we select a hardware configuration, then restrict the number of bands used to a number that will not deadlock. The symmetric algorithm has no such flexibility. \section{Simulation Results} \label{sec:sims} We have simulated repeater chains for a broad range of the parameters discussed in prior sections. The majority of our simulations utilize banded purification, and the greedy algorithm is simulated for comparison. We simulate the quantum mechanics of the physical interactions and operations, but a large fraction of the code (7,000 lines of C++) and execution time (several weeks on eight 3.0GHz+ Intel processors) are dedicated to managing the messages that are transferred station to station. The metric we use to evaluate quantum networks is the throughput, measured in Bell pairs per second of a certain fidelity over a given distance. We have chosen a target fidelity of 0.98, and simulate for distances up to 20,000 kilometers. Unless otherwise specified, the simulations presented here are for 64 links of 20 kilometers each with one hundred qubits per station (50 for receive and 50 for send, except at the end points where all 100 can be used for one direction). Our simulations all assume $0.17$ dB/km loss and a signal propagation speed of $0.7c$, corresponding to telecommunications fiber. In the hybrid quantum repeater schemes we simulate, fiber loss translates to reduced fidelity for a Bell pair, rather than a lower success rate~\cite{ladd06:_hybrid_cqed,van-loock06:_hybrid_quant_repeater}. 
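The quoted fiber parameters translate into simple link-budget numbers; a back-of-envelope sketch follows (note that in the hybrid scheme the channel loss shows up as reduced fidelity rather than a reduced success rate, so the raw transmittance is only an input to that calculation):

```python
def link_budget(length_km, atten_db_per_km=0.17, speed_fraction=0.7):
    """One-way transmittance and latency for a fiber link, using the
    attenuation and propagation speed quoted in the text."""
    c_km_per_s = 299_792.458
    transmittance = 10 ** (-atten_db_per_km * length_km / 10)
    latency_s = length_km / (speed_fraction * c_km_per_s)
    return transmittance, latency_s

t, lat = link_budget(20)   # one 20 km hop
print(f"20 km hop: transmittance {t:.2f}, one-way latency "
      f"{lat * 1e6:.0f} microseconds")
```

The 20 km latency comes out just under 100 microseconds, consistent with the round-trip timing discussed below.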
For 20km links at $0.7c$, the one-way latency for signals is just under 100$\mu$sec, so the ``clock rate'' for these simulations is about 10kHz. As noted above, the pulses are very short compared to the propagation latency, and for the settings we use, entanglement is successful about forty percent of the time. With these settings, in the first time step, each station will attempt to entangle fifty qubits, successfully creating about twenty base-level Bell pairs on each link. In successive time steps, the number of attempts on each link is capped by the number of available qubits at each station. Our code is capable of simulating imperfect local gates, but to isolate the individual factors presented here, the simulations in this paper assume perfect local quantum operations and memory. Our simulations have shown that gate errors of 0.1\% result in about a factor of two reduction in the performance of the system, with performance degrading rapidly and a final fidelity of 0.98 being unattainable with gate errors of 0.3\%. A complete discussion of error mechanisms in quantum computing and the current experimental state of the art is beyond the scope of this paper, but this level of quality is well beyond what is currently possible; the number 0.1\% should be viewed as a \emph{target} which experimentalists should strive to achieve. As a rough approximation, the gate error rate can be considered to be the \emph{combination} of both local gate errors and memory errors. With one-way latency in fiber of approximately 6msec at 1,280km, memory must be able to retain its state for times on the order of seconds to meet the above constraint. Hartmann \emph{et al.} have recently examined the role of memory errors in quantum repeaters~\cite{hartmann06}, finding that memory that can successfully retain a quantum state for about one second can support ultimate repeater distances of 5-20,000km, albeit at a large cost in resources and with a cap on the achievable fidelity.
If memory times are substantially shorter, then local quantum error correction should be added, which will add substantial additional complexity to the system design. For each banded data point in the graphs presented here, extensive runs over large parameter spaces (up to 800 or so separate sets of parameter settings) were executed to find a good set of bands, and to find a good set of thresholds for entanglement swapping at different distances. Each data point represents a single run in which 200 end-to-end Bell pairs of final fidelity 0.98 or better are created, with the exception of a few of the slowest data points, which were terminated early. The throughput is calculated by linear regression to fit a line to the arrival times of the Bell pairs~\cite{jain:perf-anal}. Error bars are included for all graphs except Figure~\ref{fig:variable-hops-latency} but are almost too small to be seen at many data points; they represent the standard deviation of the fitted slope for that run. The coefficient of determination is above 0.996 for almost every fit except the three data points with the largest error bars in Figure~\ref{fig:variable-hops}, for which it is 0.95, 0.80, and 0.78. These fits confirm that despite the stochastic nature of the quantum operations, the mean arrival rate is constant after the initial transient startup latency. Runs of fewer than 200 Bell pairs were found to have unacceptably large variability. Data, log files, and parameter settings for all runs are available from the authors. First we analyze the performance of the greedy algorithm, then present our primary results, comparing the throughput of greedy and banded purification. We backtrack to explain how bands are selected, then compare several options for setting the fidelity target at each swapping level. The final two subsections explore the hardware configuration, assessing the importance of the number of qubits per station and the trade-off of distance versus repeater size. 
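The banded partner-selection rule described above can be sketched compactly. The following Python fragment is illustrative only (the simulator itself is written in C++, and the function names here are our own): available Bell pairs are grouped by band, and purification partners are chosen only within a band, so a pair whose band holds no partner simply waits.

```python
import bisect

def band_of(fidelity, boundaries):
    """Index of the band containing this fidelity (boundaries sorted ascending)."""
    return bisect.bisect_right(boundaries, fidelity)

def banded_partners(fidelities, boundaries):
    """Group Bell-pair fidelities by band and pair them up within each band.

    Returns a list of (f1, f2) purification candidates.  A pair whose band
    holds no partner is left waiting -- this is how banding differs from the
    greedy algorithm, which would pair it with whatever is available.
    """
    bands = {}
    for f in sorted(fidelities):
        bands.setdefault(band_of(f, boundaries), []).append(f)
    candidates = []
    for members in bands.values():
        while len(members) >= 2:
            candidates.append((members.pop(0), members.pop(0)))
    return candidates
```

With the two-band example of Figure~\ref{fig:scheduling-trees}d (boundary at 0.66), a lone 0.638 pair and a 0.71 pair produce no candidates, while two 0.638 pairs are paired immediately.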
\subsection{Greedy Algorithm} \begin{figure} \center{ \includegraphics[width=8cm]{variable-hops-20km-compare.eps}} \caption{Throughput versus distance for the banded algorithm using five bands, compared to the greedy bottom up and greedy top down algorithms. The final fidelity is 0.98.} \label{fig:variable-hops} \end{figure} The performance of the greedy top down algorithm, corresponding to our prior work, is the bottom line in Figure~\ref{fig:variable-hops}~\cite{ladd06:_hybrid_cqed}. Throughput in end-to-end Bell pairs created per second is plotted against distance. The X axis is labeled with both the number of hops and total distance in kilometers; the rightmost point of 1,024 hops or 20,000 kilometers corresponds roughly to the distance halfway around the world. For the greedy top down algorithm, throughput is about 21 Bell pairs/second for two hops, and declines to almost exactly 1 Bell pair/second for 1,024 hops. The decline shows a distinct stair-step structure, caused by the discrete nature of purification and our choice to purify until a final fidelity of 0.98 is reached. At a particular length, a certain number of purification steps is required to achieve the final fidelity. As the number of hops increases, the same number of purification steps may continue to serve, until the fidelity drops below the target and an additional round of purification must be added. When this happens, the performance drops by roughly a factor of two, as two high-quality pairs up near the target are required. The greedy algorithm sorts the Bell pairs by fidelity, and pairs them starting with the two highest-fidelity pairs. We discovered that pairing beginning from the bottom of the list, which we term greedy bottom up, increases performance by a factor of three to eight, as the middle curve in Figure~\ref{fig:variable-hops} shows. We attribute this improvement to increased conservatism on the use of the highest-fidelity pair. 
Beginning at the bottom will bring other pairs up toward the fidelity of the highest pair, perhaps even surpassing it, but first risking the failure of lower-fidelity pairs which have cost less to build. At the left-hand edge of the graph, the greedy top down algorithm declines from 400 pairs/second for one hop to 21 for two hops, almost a factor of twenty worse. For this graph, our hardware is assumed to have one hundred qubits per station. For one hop, all one hundred qubits can directly connect to qubits at the far end. For two hops, the middle station must split the use of its one hundred qubits, fifty for the left-hand link and fifty for the right-hand link. The difference is due to more efficient purification pairings as the number of available qubits grows. This effect is assessed in more detail in Section~\ref{sec:qubits-per-station}. \subsection{Banded Performance v. Total Distance} \begin{figure} \center{ \includegraphics[width=8cm]{variable-hops-20km-latency-compare.eps}} \caption{Startup latency versus distance for the banded, greedy bottom up, and greedy top down algorithms.} \label{fig:variable-hops-latency} \end{figure} The top line in Figure~\ref{fig:variable-hops} graphs the performance of our banded algorithm. Throughput starts at 1060 Bell pairs/second for one hop, plateaus at about 100 for 32 to 128 hops, then declines to 20 pairs/second for 1,024 hops. Due to the stair-step behavior, the benefit compared to the greedy top down algorithm varies from a factor of fifteen to a factor of fifty, with the advantage growing unevenly as distance increases. Compared to the greedy bottom up algorithm, banded is 2.5 to 9.3 times better, also increasing unevenly with distance. Entanglement pumping and symmetric scheduling are not shown in the figure. Entanglement pumping cannot effectively create pairs of fidelity 0.98 with our starting fidelity of 0.638.
For the particular configuration shown here, the symmetric algorithm would perform similarly to banding. However, as noted in Section~\ref{sec:banded}, the fully symmetric algorithm cannot be achieved in practice. An important question is whether band structure changes when the total distance (number of hops) is increased. If the band structure does not change, then we can simulate short lines, and apply the simulation results directly to much longer lines, dramatically reducing the amount of computation time needed in simulations. Likewise, in real-world operational environments, distance-independent system controls would be a boon. Unfortunately, our simulations have shown that the banding structure does vary somewhat at different distances. The performance for nearby banding structures can be a factor of two worse, meaning that a careful search is necessary for each specific link configuration. Because the Bell pairs created are a generic resource that do not initially carry application data, the normal operation mode for the system will be steady-state, continuous operation, buffering prepared Bell pairs to the extent possible during times when applications are not consuming them. As noted in Section~\ref{sec:intro}, the distributed nature of repeater operations means that there is no true ``in flight'' time for a qubit. Nevertheless, a quick look at the latency to start up the system is in order. Figure~\ref{fig:variable-hops-latency} shows the latency from the time the system is started until the first end-to-end Bell pair is created. The values graphed are the latency for the first Bell pair for each of the runs in Figure~\ref{fig:variable-hops}. For the banded algorithm, start-up latency is about fifteen times the one-way latency for two hops, declining to about four times the latency for 1,024 hops. 
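As described earlier, throughput is extracted by fitting a line to the arrival times of the end-to-end Bell pairs, with the coefficient of determination confirming steady-state behavior. A minimal sketch of that computation (an ordinary least-squares fit of arrival time against pair index; the function name is ours, not the simulator's) might look like:

```python
def throughput_fit(arrival_times):
    """Least-squares fit of arrival time versus pair index.

    The slope of the fitted line is seconds per Bell pair, so its inverse
    is the throughput in Bell pairs per second.  Also returns R^2, the
    coefficient of determination used as a steady-state sanity check.
    """
    n = len(arrival_times)
    xs = list(range(n))                      # pair index 0..n-1
    mx = sum(xs) / n
    my = sum(arrival_times) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, arrival_times))
    slope = sxy / sxx                        # seconds per Bell pair
    intercept = my - slope * mx
    # R^2 from the residuals of the fitted line
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(xs, arrival_times))
    ss_tot = sum((y - my) ** 2 for y in arrival_times)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 / slope, r2
```

A perfectly regular arrival stream of one pair every 0.1 seconds yields a throughput of 10 pairs/second with R$^2$ of 1; stochastic arrivals lower R$^2$, which is why runs with large variability were rejected.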
\subsection{Finding the Bands} \label{sec:band-finding} \begin{figure} \center{ \includegraphics[width=8cm]{bandscan-64by20km00sl2bands989898t50q-arrow.eps}} \caption{Finding the best band boundary for a 2-band arrangement, for 64 hops of 20km each.} \label{fig:bandscan} \end{figure} \begin{figure}[t] \center{ \includegraphics[width=8cm]{number-of-bands-1-6-bar.eps}} \caption{The best band throughput for different numbers of bands, for 64 hops of 20km each.} \label{fig:number-of-bands} \end{figure} We can theoretically place the boundaries that separate bands at almost any level. To determine a placement that gives good performance, we have performed nearly exhaustive searches over many possibilities, for configurations with two to six bands. Figure~\ref{fig:bandscan} shows a two-band setup. In this figure, we vary the boundary in steps of 0.01, but in most other graphs the steps are 0.02 or 0.04. At the left edge, the division between the two bands is below the initial threshold of 0.638 generated by our physical entanglement process, and at the right edge the division is above the delivery threshold for our final qubits, resulting in the equivalent of the bottom up greedy algorithm for the first and last data points. The performance peaks when the band boundary is 0.87-0.89, showing clearly that the operational imperative is protecting the high-fidelity pairs from purifying with low-fidelity pairs. Increasing the number of bands gives a smooth increase in performance for up to five bands, which perform nearly 50\% better than two bands. Figure~\ref{fig:number-of-bands} shows the increase in performance for increasing numbers of bands. Moving from one band (equivalent to greedy bottom up) to two increases performance by more than a factor of three. The performance has saturated with six bands; it is not clearly better than five bands, because the behavior has essentially been constrained to that of a symmetric tree. 
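The boundary search itself is a brute-force enumeration: each candidate placement of the interior band boundaries on a grid corresponds to one simulation run. A sketch of the enumeration (the ranges and step size here are illustrative, not the exact values used in every search) is:

```python
import itertools

def boundary_grid(n_bands, lo=0.60, hi=0.98, step=0.02):
    """Enumerate strictly increasing band-boundary tuples on a grid.

    n_bands bands need n_bands - 1 interior boundaries; each returned
    tuple would be handed to one simulation run.
    """
    k = round((hi - lo) / step)
    levels = [round(lo + i * step, 2) for i in range(k + 1)]
    return list(itertools.combinations(levels, n_bands - 1))
```

Because $n$ bands require $n-1$ strictly increasing boundaries drawn from the grid, the number of candidate settings grows rapidly with the number of bands, which is why coarser boundary steps are used for three or more bands.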
For more than two bands, the number of simulation runs to cover the space increases geometrically, so the granularity of our boundary steps is somewhat larger. For three bands, for example, we tried all combinations of boundaries with the lower bound varying 0.60 to 0.95, and the upper boundary varying from 0.80 to 0.99, in steps of 0.02. \subsection{Varying Swapping Thresholds} \begin{figure} \center{ \includegraphics[width=8cm]{threshold-compare-abc.eps}} \caption{Comparing different distance-swapping thresholds.} \label{fig:threshold-compare} \end{figure} Recall the distinction between the purification bands and thresholds at different distances: the former governs purification decisions within PC, while the latter governs the promotion of pairs from PC to ESC for entanglement swapping at the next-higher distance. The experiments in the previous subsections were performed with each of the distance thresholds set to 0.98. In this section, we evaluate several possible sets of thresholds that seem like plausible candidates for good configurations: \begin{itemize} \item[{\bf a.}] 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.98:\\ purify only to an intermediate fidelity of 0.9 at distance 1, 2, 4, 8, 16, and 32, then push to the final fidelity of 0.98 at the full distance of 64 hops; \item[{\bf b.}] 0.98, 0.9, 0.9, 0.9, 0.9, 0.9, 0.98:\\ purify to fidelity 0.98 at distance 1, then allow the fidelity to slip as far as 0.9 at intermediate distances, before pushing back up to 0.98 at 64 hops; and \item[{\bf c.}] 0.98, 0.98, 0.98, 0.98, 0.98, 0.98, 0.98:\\ purify to fidelity 0.98 at distance 1, then maintain that fidelity by purifying as necessary at each distance. \end{itemize} Figure~\ref{fig:threshold-compare} shows clearly that the preferred method of managing the fidelity of a pair as it hops across the network is case {\bf c}, purifying to the desired level at distance one, and maintaining that fidelity at all distances. 
Case {\bf a} proved to perform so poorly that the simulations were unable to complete. The other two cases are shown in the figure. This data supports the intuitive idea that purifying over short distances will be more efficient than purifying over long distances. D\"ur \emph{et al.} referred to this approach as maintaining a ``working fidelity''~\cite{dur:PhysRevA.59.169}. They did not report on any alternative schemes, but our data confirms that their approach is correct. In addition, we investigated several other candidate schemes, all of which performed worse than maintaining a working fidelity; those results confirm our findings presented here. Because the curves for {\bf b} and {\bf c} have the same shape, despite radically different distance thresholds, Figure~\ref{fig:threshold-compare} also suggests that changing the pattern of fidelity thresholds at different distances is {\em independent} of the choice of the bands for banded purification. That is, a good choice of bands should remain good regardless of the thresholds at various distances. This fact should allow us to optimize these two parameters independently for a given physical configuration. \subsection{Number of Qubits per Station} \label{sec:qubits-per-station} \begin{figure}[t] \center{ \includegraphics[width=8cm]{number-of-qubits-compare.eps}} \caption{Comparing different numbers of qubits per station, for five bands, greedy bottom up, and greedy top down. Simulations are 64 hops of 20km each.} \label{fig:number-of-qubits} \end{figure} \begin{figure}[t] \center{ \includegraphics[width=8cm]{number-of-qubits-compare-ratio.eps}} \caption{Performance ratio of banded purification compared to the two forms of greedy purification, bottom up and top down.} \label{fig:number-of-qubits-ratio} \end{figure} The principal constraint on throughput is the number of qubits per station. What will happen as our hardware capabilities grow? How should we distribute limited physical resources? 
This subsection and the next address these two important questions. The throughput achieved for two bands with varying numbers of qubits per half-station is shown in Figure~\ref{fig:number-of-qubits}. The throughput achievable with five-band purification scheduling is linear in the number of qubits. Banding is especially valuable in the near term, when station capacities are expected to be one of the principal engineering constraints. The greedy top down algorithm performs poorly with small numbers of qubits per station. With larger numbers of qubits in various stages of development, simply ordering the list and partnering Bell pairs bottom up will naturally tend to use qubits that are of similar fidelity, giving similar behavior to banding without a formal band structure. Figure~\ref{fig:number-of-qubits-ratio} shows this effect, with the banded algorithm outperforming the greedy algorithms at all station sizes, but by a smaller ratio as the station capacity grows. With 50 qubits per half-station, the greedy top down algorithm struggles to meet our fidelity goal of 0.98, and banded performs thirty-seven times better. \subsection{Varying Number of Stations} \label{sec:varying-stations} Because physical qubits may be the scarce resource in a repeater system, it makes sense to ask how best to spread the qubits out along a link to achieve the maximum throughput. With the exception of Section~\ref{sec:qubits-per-station}, most of the experiments presented so far in this paper have used 64 hops of 20km each with 50 qubits per half-station, but what if we were to split each repeater and create 128 hops of 10km with 25 qubits per half-station, or 256 hops of 5km with 13 qubits? These three cases are shown in Figure~\ref{fig:fixed-capacity}. As the number of qubits per half-station decreases, we must restrict the number of bands in order to avoid deadlock. 
For 64 hops and 50 qubits, we can use five bands; for 128 hops and 25 qubits, only three; and for 256 hops with 13 qubits, only a single band. This fact gives us an engineering trade-off; bands are especially useful in lower-fidelity hops, helping to offset the decrease in throughput that comes from lengthening the hops. This section has shown some preliminary explorations of this question, but the space of possibilities is large, and for both long and short hops, additional factors such as memory errors and local gate errors will likely play larger roles. A more complete analysis would require a combinatorial increase in the number of simulations performed; the total simulation time of more than half a CPU-year would be multiplied by the number of swapping thresholds tested for each of seven thresholds in the configurations above. We defer a more complete analysis to future work. \begin{figure} \center{ \includegraphics[width=8cm]{fixed-capacity-6500q-no-sysloss.eps}} \caption{Best throughput for about 6,500 qubits spread over 1,280km.} \label{fig:fixed-capacity} \end{figure} \section{Conclusion} \label{sec:conclusion} The banded purification algorithm and hardware parameters presented here represent a step forward in quantum repeater network design, as shown by the gains in throughput we report, especially with intermediate numbers of qubits per station. Banded purification provides throughput essentially identical to fully symmetric purification. Symmetric purification, however, cannot be achieved in practical systems, due to memory degradation and the possibility of deadlock. These gains are more than hypothetical; the improved operation at low initial fidelities will assist the first laboratory experiments of a complete repeater network, which inevitably will operate at the very edge of a functional system. Although the basic concepts of quantum repeaters are simple, physical realizations remain some years away.
The dynamic behavior is analytically intractable and the range of engineering parameters broad, making simulation a valuable tool. Our simulations have helped us to identify important hardware constraints and test possible protocols, allowing us to find improvements that raise performance by a factor of fifteen to fifty across a broad range of distances and parameters, and to extend the possible operating range to lower fidelities (down to a fidelity of below 0.55, compared to greater than 0.7 for prior simulations). We have laid out a rudimentary architecture for the protocols necessary to operate a network of repeaters. We know that buffer qubits must have addresses at the entanglement control (EC) level and above. At the purification control (PC) level and above, stations must also have addresses. These addresses must be shareable across layers of the protocol stack. Software-selectable characteristics of the protocols, such as bands and thresholds for promotion to longer swap distances may be locally-held information only, decided out of band, or dynamically negotiated through an additional session control protocol; we defer such design issues until experimental progress demands. Banded purification will be useful for quantum system-area networks (SANs), as well as wide-area quantum networks. In wide-area quantum networks, loss is dominated by the length of the fiber. In SANs for quantum multicomputers~\cite{van-meter07:_distr_arith_jetc,van-meter07:_commun_links_distr_quant_comput}, fiber losses will be low, but losses elsewhere in the system (e.g., the qubit-fiber coupling or node-to-node switching) will be present, requiring the use of purification. Our results will assist the development of distributed quantum computing systems with node-to-node distances ranging from a handspan to intercontinental, helping to usher in the era of quantum computation. \section*{Acknowledgments} The authors thank NICT for partial support for this research. We thank Kohei M. 
Itoh, Peter van Loock, John Heidemann and Joe Touch for valuable discussions.
\section{Introduction} Social media platforms have seen growing usage and adoption over time. This growth was further accelerated by the pandemic lockdowns, when people had little choice but to interact and express themselves online, leading to an increase in both new users and daily active users. India has seen tremendous growth in social media usage since the Jio revolution, when Reliance Jio made internet charges affordable. By democratizing internet access, Reliance Jio has benefited the social media platforms operating in India. With the rise of internet access among Indian users, social media platforms are seeing a rise in native Indic content. It is in the interest of these platforms to adapt their systems to understand regional content, both to serve their users better and to attract a larger user base. One company that has focused on serving regional content since its foundation is Sharechat, an Indian company that is among the leaders in catering to regional content. Its short-video app, Moj, went viral during the lockdown and has since seen tremendous growth in its user base and content. Increasing social media usage has made it important to keep platforms safe from abusive content and thereby protect individual safety online. There is more abusive content in the online world than in the offline world because the Internet enables anonymity, which can be exploited to share abusive content without being held responsible. Text analysis of English social media content is a well-explored domain \cite{sentiment-svm}, \cite{sentiment}. Multilingual NLP is a relatively new subfield that has been picking up pace due to globalization \cite{xlm-r-twitter}. Most multilingual transformer models were trained on Wikipedia or Common Crawl data \cite{mbert}, \cite{xlm-r}.
This is different from training on a social media text dataset, as social media content contains a vast number of accepted misspellings. On top of this, non-English social media content poses its own unique set of challenges, comprising code-mixing, transliteration, and the mixing of scripts within the same sentence. Code-mixing is the use of different languages in the same sentence or utterance. It occurs mostly in informal conversation and on social media, and the phenomenon exists in multilingual societies where speakers are fluent in more than one language \cite{bertologicomix}. Transliteration is a way of writing a native language in the script of a different language: the pronunciation of the transliterated text, read by the rules of the script used, is similar to that of the text written in the native script. The most common form of transliteration is to use the Latin script to express a non-English sentence. Transliteration aggravates misspellings and variants of the same word, because not all syllables and sounds of the native language are available in the Latin script, so the writer has to approximate them in non-standard ways. Prior works have performed text analysis on non-English languages and have attempted to tackle a subset of the above challenges associated with non-English social media text \cite{2020-semeval}. \cite{sentiment-arabic}, \cite{sentiment-czech}, and \cite{sentiment-telegu} have extended text analysis to the Arabic, Czech, and Telugu languages. In our work, we identify the common challenges associated with non-English social media text and propose an approach that tackles all of these challenges in a multilingual setting. Our contributions include a spell-correction algorithm that does not rely on a manually defined dictionary but instead infers its own from the training data. This allows our spell-correction algorithm to learn new words by retraining on any new corpus.
This is in contrast to the traditional dictionary-based approach, where words need to be added manually. Our approach is thus scalable in an age of neologism, which is prevalent on social media \cite{neologism}. The paper is organized as follows. Sec. \ref{sec:dataset} describes the Moj dataset used in this work. Sec. \ref{sec:problem-statement} defines the problem statement and the challenges associated with it. Sec. \ref{sec:proposed-method} explains our approach in detail. Sec. \ref{sec:exp-setup} provides the experimental setup. Sec. \ref{sec:result} lists the results of our approach, and in Sec. \ref{sec:ablation} we perform ablation studies. Sec. \ref{sec:future-work} discusses future work, and we conclude in Sec. \ref{sec:conclusion}. \section{Dataset} \label{sec:dataset} The dataset provided by Moj consists of multilingual comments in $10+$ low-resource Indic languages. The comments in the dataset have been human-annotated with a label indicating whether the comment is abusive or not. The dataset also has \textit{meta-data} such as the \textit{language} of the comment and the number of \textit{likes} and \textit{reports} on each comment and on the corresponding post on which the comment was made.
The label distribution across different languages is shown in Table \ref{tab:label-dist}. \begin{table}[h] \centering \caption{Label distribution across languages in the Moj dataset} \label{tab:label-dist} \bgroup \def\arraystretch{1.2} \begin{tabular}{ccc} \hline \textbf{Language} & \textbf{Non-Abusive} & \textbf{Abusive}\\ \hline Assamese & 1496 & 1284\\ Bengali & 11428 & 11407\\ Bhojpuri & 2917 & 2887\\ Gujarati & 4426 & 4402\\ Haryanvi & 4395 & 4417\\ Hindi & 153433 & 153747\\ Kannada & 6954 & 6989\\ Malayalam & 31749 & 9216\\ Marathi & 44677 & 27367\\ Odia & 5475 & 5499\\ Rajasthani & 2183 & 2185\\ Tamil & 34792 & 34705\\ Telugu & 48461 & 48551\\ \hline \end{tabular} \egroup \end{table} \section{Problem Statement} \label{sec:problem-statement} The main objective of this paper is to develop a multilingual approach to identify abusive comments on social media platforms. Upon exploring the Moj dataset, we identify the following challenges associated with multilingual non-English social media content: \begin{itemize} \item \textbf{Code-Mixing}: Social media text is not written in just one language in multilingual societies. \item \textbf{Transliteration}: People often use the Latin script to write sentences in their native language. The pronunciation of the original native sentence and the transliterated sentence is almost identical. \item \textbf{Mixing of Scripts}: Some people write sentences that contain words from different scripts. \item \textbf{Misspellings and variations} in social media text. \end{itemize} Fig. \ref{fig:challenge} shows different representations of the same Hindi sentence. Our approach attempts to tackle these challenges in a multilingual setting. \begin{figure}[!ht] \includegraphics[width=0.43\textwidth]{image/challenge-fig.png} \caption{\label{fig:challenge} Example of challenges described in Sec.
\ref{sec:problem-statement}.} \end{figure} \section{Proposed Method} \label{sec:proposed-method} \subsection{Data Cleaning} \label{ssec:data-cleaning} The dataset includes multiple comments from the same post. We replace the post-level meta-data of each comment, i.e., the post like count and post report count, with the maximum value of the respective meta-data feature among all the comments corresponding to the same post. We pre-process the comments in the Moj dataset with the following steps: \begin{itemize} \item Unicode normalization using the \verb|unicodedata| Python package \item Remove special characters, e.g., $@$, $"$, $\$$, $\#$ \item Remove emoji using the \verb|demoji|\footnote{https://pypi.org/project/demoji/} Python package \item Replace characters that occur more than twice consecutively with a single occurrence, e.g., hellooo \textrightarrow{} hello \item Transliterate the comments into the native language's script using the \verb|indic-trans| Python package \cite{indic-trans}, to tackle the challenge of transliteration mentioned in Sec. \ref{sec:problem-statement}. The effect of this pre-processing step is discussed in Sec. \ref{sec:ablation}. \end{itemize} \begin{figure*}[!ht] \centering \includegraphics[width=1\textwidth]{image/graph.png} \caption{\label{fig:spell-correct} Overview of the training phase of Spell Correction using Graph Clustering: (a) A graph is constructed using words in the training corpus. Two words have an edge in the graph if they satisfy the nearness criteria specified in Sec. \ref{sssec:train-spell-correct}. (b) A maximal clique is found in this graph. The circled subgraph in the figure is the maximal clique in this graph. This subgraph is considered to contain words that are misspellings of one another and represent the same concept. (c) This subgraph/cluster is removed from the graph and is processed to obtain its \textit{Parent Word} and \textit{Anchor Words} as explained in Sec. \ref{sssec:train-spell-correct}.
The whole process is then repeated on the remaining smaller graph until all the nodes have been processed. Please refer to Sec. \ref{ssec:spell-correct} for more details.} \end{figure*} \subsection{Spell Correction using Graph Clustering} \label{ssec:spell-correct} Our approach builds a graph of words from the training corpus of the Moj dataset. Sec. \ref{sssec:train-spell-correct} shows how graph clustering is used to identify sets of related words and the correct spelling of those words. Sec. \ref{sssec:test-spell-correct} shows the strategy used to apply the correct spelling during inference. This approach works even for misspelled words that were not seen during training, as long as the correct word was seen in training. Sec. \ref{sec:ablation} shows the utility of our approach. \subsubsection{Training: Correct Spelling Identification} \label{sssec:train-spell-correct} We intend to spell-correct abusive words and ignore words that occur predominantly in non-abusive comments. This is done to prevent non-abusive words from being auto-corrected to abusive words or vice versa. Since this is a classification problem, the presence of an abusive word is more important than the presence of a non-abusive word: any combination of abusive and non-abusive words will be abusive in nature. We consider words that have occurred a minimum of $5$ times in the abusive comments of the training corpus, and ignore words that have length less than $6$ and fewer than $4$ consonants. We construct a graph whose nodes are these words, and connect two words with an edge if they have the same first $k$ letters and the Levenshtein similarity score between them is above a threshold $t$. In this work, $k$ was $3$ and $t$ was $85$, based on hyper-parameter tuning on a validation set. The similarity score was computed using \verb|rapidfuzz|\footnote{https://pypi.org/project/rapidfuzz/}. Algorithm \ref{alg:Graph} gives pseudo-code for the graph construction.
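As a minimal sketch, the graph construction just described might be implemented as follows. This is illustrative only: we substitute difflib's similarity ratio, scaled to 0-100, as a stand-in for the rapidfuzz score, and assume the frequency and consonant filters have already been applied to the word list.

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a, b):
    # Stand-in for the rapidfuzz normalized Levenshtein ratio (0-100 scale).
    return 100.0 * SequenceMatcher(None, a, b).ratio()

def build_word_graph(words, k=3, t=85):
    """Adjacency sets: an edge joins two words that share their first k
    letters and whose similarity score reaches the threshold t."""
    graph = {w: set() for w in words}
    for a, b in combinations(words, 2):
        if a[:k] == b[:k] and similarity(a, b) >= t:
            graph[a].add(b)
            graph[b].add(a)
    return graph
```

For the bewakoof example discussed below, bewakoof gains edges to both bewakoofi and bewakouf, while an unrelated word such as idiot remains isolated.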
Find the maximal clique from the above-constructed graph of words. A maximal clique is the largest subgraph in which every pair of nodes is connected by an edge. This maximal clique represents a subgraph of words which our algorithm thinks are variants or misspellings of one another and represent the same thing, e.g., \verb|bewakoof|, \verb|bewakoofi|, \verb|bewakouf|, where \verb|bewakoof| means ``idiot'' in Hindi. The word in this cluster with the highest frequency in the training corpus is chosen as the correct spelling for this cluster. We refer to this word as the \textit{Parent Word} of the cluster. Next, we find the \textit{Anchor Words} for this cluster. \textit{Anchor Words} are those words which will represent this cluster during the testing phase explained in Sec. \ref{sssec:test-spell-correct}. The top five most frequent words of this cluster are chosen as the candidates for the \textit{Anchor Words}. Among them, those words whose frequency is above one-fourth of that of the \textit{Parent Word} are included in the \textit{Anchor Words}. The above maximal clique is removed from the graph once that cluster is processed and its \textit{Parent Word} and \textit{Anchor Words} have been computed. This process of finding a cluster using a maximal clique, processing the cluster, and removing it is repeated on the remaining smaller graph obtained from the previous rounds of clustering until all the nodes in the original graph have been processed. Algorithm \ref{alg:train} shows the pseudo-code for the training phase. \subsubsection{Testing: Auto-correction during inference} \label{sssec:test-spell-correct} By the end of the training phase, we have a set of clusters, each with a \textit{Parent Word} and its \textit{Anchor Words}, that will be used during inference to spell-correct the words in the train and test corpora. Given a word $w$ during inference, we find out the clusters whose \textit{Parent Word} has the same first $k$ letters as that of word $w$.
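The training loop of Sec. \ref{sssec:train-spell-correct} (largest clique, \textit{Parent Word}, \textit{Anchor Words}, removal, repeat) can be sketched in pure Python. This is an illustrative reconstruction, not the authors' code: the exact clique search and all function names are our assumptions.

```python
# Sketch of the clustering phase: repeatedly extract the largest clique,
# take its most frequent word as Parent Word, keep the frequent members
# (top 5 candidates, frequency > parent_freq / 4) as Anchor Words, then
# remove the clique from the graph.

def largest_clique(adj):
    """Exact Bron-Kerbosch search; adequate for the small toy graphs here."""
    best = []
    def bk(r, p, x):
        nonlocal best
        if not p and not x:
            if len(r) > len(best):
                best = list(r)
            return
        for v in list(p):
            bk(r + [v], p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    bk([], set(adj), set())
    return best

def train_clusters(adj, freq, top_n=5, anchor_ratio=0.25):
    adj = {u: set(vs) for u, vs in adj.items()}   # work on a copy
    parents, anchors = [], []
    while adj:
        clique = largest_clique(adj)
        clique.sort(key=lambda w: -freq[w])       # most frequent first
        parent = clique[0]
        candidates = clique[:top_n]
        anchor = [w for w in candidates
                  if freq[w] > anchor_ratio * freq[parent]]
        parents.append(parent)
        anchors.append(anchor)
        for w in clique:                          # remove processed clique
            adj.pop(w)
        for vs in adj.values():
            vs.difference_update(clique)
    return parents, anchors
```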
The association $score_{wi}$ of a cluster $C_{i}$ obtained from the previous filtering step is given as: \begin{equation} \label{eq:y} score_{wi} = \max_{a \in A(C_{i})} \mathit{LevenshteinScore}(w, a) \end{equation} where $A(C_{i})$ is the set of \textit{Anchor Words} of cluster $C_{i}$. The cluster $C_{j}$ that has the highest association $score_{wj}$ with the word $w$ is chosen as the appropriate cluster $C^{*}$ if the score $score_{wj}$ is above a threshold $t$. The word $w$ is replaced with the \textit{Parent Word} of the cluster $C^{*}$ if the word $w$ was able to find its appropriate cluster $C^{*}$, e.g., \verb|bewakoufi| will be corrected to \verb|bewakoof|. We spell-correct all the comments in both training and testing data before using them for classification. Algorithm \ref{alg:test} shows the pseudo-code for the testing phase. \subsection{Multilingual Models} \label{ssec:model} Given that our task involves low-resource Indic languages, we use multiple pre-trained multilingual transformer models and finetune them on the comments of the Moj dataset. The following models were used in our work: \begin{itemize} \item \textbf{mBERT} \cite{mbert}: is a multilingual variant of BERT \cite{bert} trained on multilingual Wikipedia data from $104$ languages. \item \textbf{XLM-R} \cite{xlm-r}: is trained on a cross-lingual masked language modeling task on $100$ languages. \item \textbf{XLM-T} \cite{xlm-r-twitter}: is XLM-R trained on Twitter comments. \item \textbf{Multilingual DistilBERT} \cite{distilbert}: is a distilled version of mBERT. \item \textbf{MURIL} \cite{muril}: is a variant of BERT trained on monolingual, translated and transliterated data from $17$ Indic languages. \item \textbf{Indic-BERT} \cite{indic-bert}: is a variant of ALBERT \cite{albert} trained on the IndicCorp dataset \cite{indic-bert}. \item \textbf{Canine} \cite{canine}: is a tokenizer- and vocabulary-free transformer model which operates directly on sequences of characters.
Canine-S and Canine-C are two variants trained on different loss functions defined in \cite{canine}. \end{itemize} We also trained a logistic regression model on an embedding created by using Naive-Bayes over TF-IDF features \cite{nb-svm}\footnote{https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline}. \subsection{Meta-Data} The meta-data associated with the comments, i.e., likes and report count of the comments as well as the post on which the comment was made, was leveraged to further improve the predictions. We trained a Random Forest classifier on this meta-data alone to provide predictions. These meta-data specific predictions are later combined with the predictions of the comment-specific models, as discussed in Sec. \ref{sec:result}. \begin{algorithm}[!ht] \caption{Graph Construction} \label{alg:Graph} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{~Words $W$ present in abusive sentences; Frequency count $F$ containing number of times a word has occurred in the corpus; Length of words $L$; Number of consonants in words $C$; Function $T(w)$ gives the first three letters of input word $w$; Levenshtein similarity function $score(a, b)$} \Output{~Word Graph $G(V, E)$ where $V$ is the set of nodes and $E$ is the set of edges of the graph $G$} \BlankLine $V \leftarrow \phi$ \\ \For{$i=1...|W|$} { \If{$F_{i}\geq 5$} { \If{$L_{i} \geq 6$} { \If{$C_{i} \geq 4$} { $k \leftarrow |V| + 1$\\ $V_{k} \leftarrow W_{i}$\\ \For{$j=1...k-1$} { \If{$T(W_{i}) = T(V_{j})$} { \If{$score(W_{i}, V_{j}) \ge 85$} { $E_{j} \leftarrow E_{j} + V_{k}$\\ $E_{k} \leftarrow E_{k} + V_{j}$\\ } } } } } } } \end{algorithm} \begin{algorithm}[!ht] \caption{\textsc{Training: Correct Spelling Identification} algorithm} \label{alg:train} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}\SetKwInOut{Intermediate}{Intermediate} \Input{~Graph $G(V,E)$; Frequency count $F$ containing number of times a word has occurred in the corpus}
\Intermediate{~$C$ is the maximal cluster obtained in a particular step; $p$ is the parent element for a particular cluster; $a$ is the set of anchor elements for a particular cluster} \Output{~Parent $P$ containing the list of parent members for the clusters that will be obtained; Anchor $A$ containing the list of anchor members for the clusters that will be obtained} \BlankLine $G' \leftarrow G$\\ $A \leftarrow \phi$\\ $P \leftarrow \phi$\\ \While{$G' \ne \phi$} { $C(V^{c}, E^{c}) = MaximalClique(G')$\\ $j = argmax(\{F_{V^{c}_{i}} | i \in \{1...|V^{c}|\} \})$\\ $p \leftarrow V^{c}_{j}$\\ $p_{freq} = F_{V^{c}_{j}}$\\ $a \leftarrow \phi$\\ \For{$k \in \{1...|V^{c}|\}$} { \If{$F_{V^{c}_{k}} > \frac{p_{freq}}{4}$} { $a \leftarrow a + V^{c}_{k}$\\ } } $G' \leftarrow G' - C$\\ $P \leftarrow P + p$\\ $A \leftarrow A + a$\\ } \end{algorithm} \begin{algorithm}[!ht] \caption{\textsc{Testing: Auto-Correction during Inference} algorithm} \label{alg:test} \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \Input{~$w_{T}$ is the test word that needs to be corrected; Parent $P$ containing the list of parent members for the clusters obtained during training; Anchor $A$ containing the list of anchor members for the clusters obtained during training; $score(a, b)$ is the Levenshtein similarity score between words $a$ and $b$; Function $T(w)$ gives the first three letters of input word $w$} \Output{~$w^{*}$ is the corrected word for $w_{T}$} \BlankLine $w^{*} = w_{T}$\\ $score^{*} = 0$\\ \For{$i \in \{1...|P|\}$} { \If{$T(w_{T}) = T(P_{i})$} { \For{$j \in \{1...|A_{i}|\}$} { $s = score(w_{T}, A_{i,j})$\\ \If{$s \ge 85$} { \If{$s \ge score^{*}$} { $score^{*} = s$\\ $w^{*} = P_{i}$ \\ } } } } } \end{algorithm} \section{Experimental Setup} \label{sec:exp-setup} All transformer models except for Canine are finetuned using the Adam optimizer \cite{adam} with a learning rate of $1e$-$5$. Canine models were finetuned using the AdamW optimizer \cite{adamw} with a learning rate of $2e$-$5$.
All the models were trained for $3$ epochs with binary cross-entropy as the loss function. A threshold of $0.5$ was used to get the output class from the output probabilities. The Huggingface library \cite{huggingface} was used for training all the transformer models. \section{Result} \label{sec:result} \begin{table}[h] \centering \caption{Comparison of Models on Moj Dataset} \label{tab:model-result} \bgroup \def\arraystretch{1.2} \begin{tabular}{ccc} \hline \textbf{Model} & \textbf{Val. F1} & \textbf{Test F1}\\ \hline MURIL & \textbf{0.8675} & - \\ Indic-BERT & 0.8475 & - \\ XLM-R & 0.8623 & - \\ XLM-T & 0.8633 & - \\ Multilingual DistilBERT & 0.8553 & - \\ mBERT & 0.8551 & - \\ Canine-C & 0.8646 & - \\ Canine-S & 0.8622 & - \\ Logistic Regression + NB + TF-IDF & 0.8559 & - \\ Random Forest (meta-data) & 0.7109 & - \\ Ensemble & - & \textbf{0.8933}\\ \hline \end{tabular} \egroup \end{table} We create a stratified validation set from $10\%$ of the training set. Table \ref{tab:model-result} shows the comparison of all the models discussed in Sec. \ref{ssec:model}. Logistic Regression + Naive Bayes on TF-IDF provides a good baseline even though it is a very simple model. MURIL achieves the highest F1 score on the validation set. XLM-T performed slightly better than XLM-R, as XLM-T is trained on multilingual Twitter comments, which are closer to the Moj Dataset than the Wikipedia data on which XLM-R was trained. The meta-data contains signals essential for this task, as the Random Forest obtains an F1 score of $0.71$ using meta-data alone. Our final model is an ensemble of all the models in Sec. \ref{ssec:model}, using checkpoints from all $3$ epochs. A logistic regression model was trained on the validation set to combine the predictions from all models for ensembling.
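The ensembling step can be sketched as follows. The paper does not specify its implementation, so this is a minimal from-scratch logistic-regression stacker over base-model probabilities; the learning rate, epoch count, and toy data are illustrative assumptions of ours.

```python
# Stacking sketch: each column of prob_matrix is one base model's predicted
# probability for a sample; a logistic regression is fit on these features
# (here by plain batch gradient descent) to produce the ensemble prediction.
import math

def fit_stacker(prob_matrix, labels, lr=0.5, epochs=2000):
    """prob_matrix: rows = samples, cols = base-model probabilities."""
    n_feat = len(prob_matrix[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n_feat, 0.0
        for x, y in zip(prob_matrix, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - y                      # gradient of log-loss w.r.t. z
            gb += err
            for j in range(n_feat):
                gw[j] += err * x[j]
        b -= lr * gb / len(labels)
        for j in range(n_feat):
            w[j] -= lr * gw[j] / len(labels)
    return w, b

def predict(w, b, x, threshold=0.5):
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return int(1.0 / (1.0 + math.exp(-z)) >= threshold)
```

In practice this would be fit on the validation-set probabilities of all base models, then applied to the test set.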
\section{Ablation Analysis} \label{sec:ablation} \begin{table}[h] \centering \caption{Effect of Data Cleaning, Native Transliteration, and Spell Correction using Logistic Regression} \label{tab:ablation} \bgroup \def\arraystretch{1.2} \begin{tabular}{cc} \hline \textbf{Model} & \textbf{Val. F1}\\ \hline Raw Dataset & 0.8493 \\ Cleaned Dataset & 0.8508 \\ Cleaned + Native Transliteration Dataset & 0.8547 \\ Cleaned + Native Transliteration + Spell-Corrected Dataset & \textbf{0.8567} \\ Cleaned + English Transliteration Dataset & 0.8496 \\ \hline \end{tabular} \egroup \end{table} \subsection{Data Cleaning} Logistic regression was used to measure the effect of our pre-processing steps as it is simple to train, test, and iterate with. Table \ref{tab:ablation} reports the validation F1 score on raw data and after the various pre-processing steps. It shows that using the data cleaning as specified in Sec. \ref{ssec:data-cleaning} improves the F1 score. \subsection{Transliteration} \cite{how-multilingual-is-mbert} shows that mBERT is unable to perform well on transliterated data as it was not trained on such data. Since only MURIL was originally trained on transliterated data, other transformer models would face domain mismatch when finetuned on the Moj Dataset. To tackle transliteration and the mixture of scripts in the same sentence, as discussed in Sec. \ref{sec:problem-statement}, we experimented with transliterating the comments before feeding them to our classifiers. Table \ref{tab:ablation} shows that transliterating all the comments to the native script yields the largest improvement among all the steps. We also experimented with transliterating everything to English so that there is knowledge sharing between languages that have common words but different scripts. This would increase the amount of training data for lower-resource languages. However, this led to a decrease in F1 score.
Transliterating everything to the native script may work better than transliterating to English because there could be same-sounding words that mean different things in different languages, e.g., \verb|kutta| means ``dog'' in Hindi whereas it means ``new'' in Telugu. Transliterating to English also leads to a loss of syllables that are not present in the English language but are present in the native language. \subsection{Spell Correction} Spelling correction allows the number of training samples per concept to increase as different variants of words are mapped to the same underlying concept, e.g., \verb|bakwas| and \verb|bakvas|, where \verb|bakwas| means ``useless'' in Hindi. An increased number of training samples per concept allows the model to learn a better representation for that particular concept. Table \ref{tab:ablation} indicates that our clustering approach in Sec. \ref{ssec:spell-correct} boosts the F1 score. \section{Future Work} \label{sec:future-work} Table \ref{tab:label-dist} shows that there is class imbalance for the Malayalam language. Dravidian-CodeMix-FIRE \cite{dravidiancodemix2021-overview} can be used to augment the Moj Dataset to overcome this. Adversarial training \cite{miyato2016adversarial}, \cite{pk-nutcracker}, \cite{pk-paw} can be used to increase the robustness of the system. Emojis could be replaced with words instead of completely removing them as done in Sec. \ref{ssec:data-cleaning}. Incorporating homophones in the graph construction in Sec. \ref{ssec:spell-correct} might bring more gains. \section{Conclusion} \label{sec:conclusion} In this paper, we propose an approach to identify abusive comments from a multilingual dataset. We leverage pre-trained multilingual transformer models along with classical machine learning models for abusive comment identification. We identify the inherent challenges associated with this task, such as code-mixing, transliteration, mixing of scripts, and misspellings.
Our approach attempts to tackle all of these challenges to build a robust system. We propose a spell correction algorithm using graph clustering that does not need manual creation of a dictionary for each language. It can also correct unseen incorrect words during inference if the correct word was observed in training. Results show that our approach is well-suited for non-English social media text analysis. \bibliographystyle{./IEEEtran}
\section{Introduction} There are many astrophysical scenarios governed by relativistic magnetohydrodynamical processes such as, e.g., the production of relativistic jets emanating from Active Galactic Nuclei, the structure and dynamics of pulsar wind nebulae, the mechanisms triggering the explosion in core-collapse supernovae, or the production of Gamma Ray Bursts. These scenarios are nowadays the subject of intensive research by means of numerical simulations thanks to recent advances in numerical relativistic magnetohydrodynamics (RMHD) that exploit the fact that the RMHD equations obeying a causal equation of state (EOS) form a hyperbolic system of conservation laws~\cite{Anile89}. Matter at densities higher than nuclear matter density can undergo first-order phase transitions to various phases of matter, such as pion condensates~\cite{Kostyuk01}, hyperonic matter~\cite{Oertel13} or deconfined quark matter~\cite{Gorenstein05,Gorenstein12}. Several authors~\cite{Harry09,Bombaci12,Peres13} have studied, from different points of view, the influence that those exotic states of matter at extremely high densities have on, e.g., the dynamics of stellar core collapse supernovae, the evolution of proto-neutron stars, or the collapse to a black hole. The classical Van der Waals (VdW) EOS is a well-known example of EOS displaying a first-order phase transition.
Fluids having a thermodynamics governed by a VdW-like EOS exhibit, outside the region of the phase transition, non-classical gasdynamic behaviours in a range of thermodynamic conditions characterized by the negative value of the so-called fundamental derivative, $\mathcal G$~\cite{Thompson43, Menikoff89, Guardone10} \begin{equation} {\mathcal G} := - \frac{1}{2} \,V \,\displaystyle{\frac{\displaystyle{\left.\frac{\partial^2 p}{\partial V^2}\right|_s}}{\displaystyle{\left.\frac{\partial p}{\partial V}\right|_s}}} \label{G1} \end{equation} \noindent $p$ being the pressure, $V:= 1/\rho$ the specific volume ($\rho$ is the rest-mass density) and $s$ the specific entropy. The fundamental derivative measures the convexity of the isentropes in the $p-V$ plane and if ${\cal G} > 0$ then the isentropes in the $p-V$ plane are convex, leading to expansive rarefaction waves (and compressive shocks) \cite{RZ13}. In a VdW-like EOS, or in general in a non-convex EOS, rarefaction waves can change to compressive and shock waves to expansive depending on the specific thermodynamical state of the system. These non-classical phenomena have been observed experimentally and their study is, currently, of interest in many engineering applications~\cite{Cinnella07, Cinnella11}. Besides this thermodynamical interpretation of convexity, there is an equivalent definition due to Lax~\cite{Lax57} that connects with the mathematical properties of the hyperbolic system. According to Lax's approach, a hyperbolic system of conservation laws\footnote{The books by LeVeque~\cite{LeVeque92} and Toro~\cite{Toro09} are recommended references for those readers interested in the basic theory of hyperbolic systems of conservation laws. The monograph of \cite{Le02} on finite-volume methods for hyperbolic problems pays special attention to non-convex flux functions (see their Sects.
13.8.4 -definitions of genuine non-linearity and linear degeneracy, and their relationship with convexity-, and 16.1 -devoted entirely to the study of scalar conservation laws with non-convex flux functions-).} is convex if all its characteristic fields are either genuinely non-linear or linearly degenerate. A characteristic field $\lambda$ is said to be genuinely non-linear or linearly degenerate if, respectively, \begin{equation} \label{GNLfieldpm} {\mathcal P} := \vec{\nabla}_{\bf u} \lambda \cdot {\bf r} \ne 0, \end{equation} \begin{equation} \label{GNLfield0} {\mathcal P} := \vec{\nabla}_{\bf u} \lambda \cdot {\bf r} = 0, \end{equation} \noindent for all ${\bf u}$, where $\vec{\nabla}_{\bf u} \lambda$ is the gradient of $\lambda({\bf u})$ in the space of conserved variables, ${\bf r}$ is the corresponding eigenvector, and the dot stands for the inner product in the space of physical states. In a non-convex system, non-convexity is associated with those states ${\bf u}$ for which the factor ${\mathcal P}$ corresponding to a genuinely non-linear field, Eq.~(\ref{GNLfieldpm}), is zero and changes sign in a neighbourhood of ${\bf u}$. A virtue of Lax's approach is that it can be applied to other hyperbolic systems in which the convex or non-convex character of the dynamics is governed by other ingredients beyond the EOS. Among these systems are those of relativistic hydrodynamics (RHD) and classical magnetohydrodynamics (MHD). In these two cases, the convexity of the system has been characterized by the sign of a generalized fundamental derivative that includes an extra term depending on the local speed of sound (in the case of RHD~\cite{Ibanez13}) and the magnetic field (in the case of MHD~\cite{Serna14}).
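As a concrete classical illustration (our addition, using Eq.~(\ref{G1}); not taken from the cited references), consider a polytropic gas, whose isentropes satisfy $p = K V^{-\gamma}$ with $K = K(s)$ and adiabatic index $\gamma > 1$. Then
\begin{equation*}
\left.\frac{\partial p}{\partial V}\right|_s = -\gamma K V^{-\gamma-1}, \qquad
\left.\frac{\partial^2 p}{\partial V^2}\right|_s = \gamma(\gamma+1) K V^{-\gamma-2},
\end{equation*}
so that Eq.~(\ref{G1}) gives
\begin{equation*}
{\mathcal G} = -\frac{V}{2}\,\frac{\gamma(\gamma+1) K V^{-\gamma-2}}{-\gamma K V^{-\gamma-1}} = \frac{\gamma+1}{2} > 0,
\end{equation*}
i.e., the isentropes are convex everywhere and only classical (expansive) rarefactions and compressive shocks occur; non-convex behaviour requires an EOS, such as the VdW one, for which ${\mathcal G}$ can change sign.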
In this work we use the approach of Lax to characterize, from a theoretical point of view, the effects of magnetic fields on the convexity properties of the RMHD system of equations, as a preliminary step towards exploring its possible impact on the dynamical evolution of different astrophysical scenarios. The result is presented in the form of an extended fundamental derivative whose sign determines the convex/non-convex character of the RMHD system at a given state. Our result recovers the proper non-relativistic and unmagnetized limits. The paper is organized as follows. In Sect.~2, the equations of RMHD are introduced as a hyperbolic system of conservation laws. The transformations between primitive and conserved variables are explicitly written. In Sect.~3 the characteristic structure of the RMHD equations is discussed and the analysis of convexity in non-degenerate states is presented. In Sect.~4 the analysis of convexity is extended to degenerate states. The non-relativistic and unmagnetized limits are recovered in Sect.~5. Section~6 includes a short summary and presents the conclusions. Finally, there is an Appendix that displays the Jacobian matrices of the RMHD system in quasi-linear form, necessary for the characteristic analysis of Sect.~3.
\section{The equations of ideal relativistic magnetohydrodynamics} Let $J^{\mu}$, $T^{\mu \nu}$ and $^*F^{\mu \nu}$\footnote{Throughout this paper, Greek indices will run from 0 to 3, while Roman run from 1 to 3, or, respectively, from $t$ to $z$ and from $x$ to $z$, in Cartesian coordinates.} be the components of the rest-mass current density, the energy--momentum tensor and the Maxwell tensor of an ideal (infinite conductivity) magneto-fluid, respectively \begin{equation} J^\mu = \rho u^\mu \end{equation} \begin{equation} T^{\mu \nu} = \rho h^* u^\mu u^\nu + g^{\mu \nu} p^* - b^\mu b^\nu \end{equation} \begin{equation} ^*F^{\mu \nu} = u^\mu b^\nu - u^\nu b^\mu, \end{equation} \noindent where $\rho$ is the proper rest-mass density, $h^* =1 + \epsilon + p/\rho + b^2/\rho$ is the specific enthalpy including the contribution from the magnetic field ($b^2$ stands for $b^\mu b_\mu$), $\epsilon$ is the specific internal energy, $p$ is the thermal pressure, $p^* = p + b^2/2$ is the total pressure, and $g^{\mu \nu}$ is the metric of the space-time where the fluid evolves. Throughout the paper we use units in which the speed of light is $c=1$ and the $(4 \pi)^{1/2}$ factor is absorbed in the definition of the magnetic field. The four-vectors representing the fluid velocity, $u^\mu$, and the magnetic field measured in the comoving frame, $b^\mu$, satisfy the conditions $u^\mu u_\mu = -1$ and $u^\mu b_\mu = 0$. The equations of ideal RMHD correspond to the conservation of rest-mass and energy-momentum, and the Maxwell equations. 
In a flat space-time and Cartesian coordinates, these equations read: \begin{equation} \label{cont} J^\mu_{\,\,\,\,,\mu} = 0 \end{equation} \begin{equation} \label{e-mom} T^{\mu \nu}_{\,\,\,\,\,\,,\mu} = 0 \end{equation} \begin{equation} \label{Maxwell} ^*F^{\mu \nu}_{\,\,\,\,\,\,\,\,,\mu} = 0, \end{equation} \noindent where subscript $(\,_{,\mu}\,)$ denotes partial derivative with respect to the corresponding coordinate, $(t,x,y,z)$, and the standard Einstein sum convention is assumed. The above system can be written as a system of conservation laws as follows \begin{equation} \frac{\partial {\bf U}}{\partial t} + \frac{\partial {\bf F}^{i}}{\partial x^{i}} = 0 \label{e:system} \end{equation} \noindent where ${\bf V} = (\rho, v^i, \epsilon, B^i)^T$ is the set of primitive variables. The state vector (the set of conserved variables) ${\bf U}$ and the fluxes, ${\bf F}^i$, are, respectively: \begin{eqnarray} {\bf U} & = & \left(\begin{array}{c} D \\ S^i \\ \tau \\ B^i \end{array}\right), \label{state_vector} \end{eqnarray} \begin{eqnarray} {\mathbf F}^i & =& \left(\begin{array}{c} D v^i \\ S^j v^i + p^{*} \delta^{ij} - b^j B^i/W \\ \tau v^i + p^{*} v^i - b^0 B^i/W \\ v^i B^k - v^k B^i \end{array}\right). \label{flux2} \end{eqnarray} In the preceding equations, $D$, $S^j$ and $\tau$ stand, respectively, for the rest-mass density, the momentum density of the magnetized fluid in the $j$-direction, and its total energy density, all of them measured in the laboratory (i.e., Eulerian) frame: \begin{equation} \label{eq:D} D = \rho W, \end{equation} \begin{equation} \label{eq:Sj} S^i = \rho h^* W^2 v^i - b^0 b^i, \end{equation} \begin{equation} \label{eq:tau} \tau = \rho h^* W^2 - p^* - (b^0)^2 - D. 
\end{equation} \noindent The components of the fluid velocity trivector, $v^i$, as measured in the laboratory frame, are related to the components of the fluid four-velocity according to the following expression: $u^\mu = W(1, v^i)$, where $W$ is the flow Lorentz factor, $W^2=1/(1-v^i v_i)$. The components of the magnetic field four-vector in the comoving frame and the three-vector components $B^i$ measured in the laboratory frame satisfy the relations: \begin{eqnarray} \label{b0} b^0 & = & W v_k B^k, \\ \label{bi} b^i & = & \frac{B^i}{W} + b^0 v^i. \end{eqnarray} Finally, the square of the modulus of the magnetic field can be written as \begin{equation} b^2 = \frac{B_k B^k}{W^2} + (v_k B^k)^2. \label{e:b2} \end{equation} The preceding system must be complemented with the time component of equation~(\ref{Maxwell}), that becomes the usual divergence constraint \begin{equation} \label{eq:divb} \frac{\partial B^i}{\partial x^i} = 0. \end{equation} An EOS $p=p(\rho,\epsilon)$ closes the system. Accordingly, the (relativistic) sound speed $a_{s}:=\displaystyle{\sqrt{\left.\frac{\partial p}{\partial e}\right|_s}}$, $e$ being the mass-energy density of the fluid $e=\rho(1+\epsilon)$, satisfies $\displaystyle{h a_{s}^{2} = \chi + \frac{p}{\rho^{2}} \, \kappa}$, with $\displaystyle{\chi := \left.\frac{\partial\,p}{\partial\,\rho}\right|_{\epsilon}}$ and $\displaystyle{\kappa := \left.\frac{\partial\,p}{\partial\,\epsilon}\right|_{\rho}}$.
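As a quick numerical sanity check of the last relation (our addition, not part of the paper), one can verify $h a_{s}^{2} = \chi + (p/\rho^{2})\,\kappa$ for the ideal-gas EOS $p = (\gamma - 1)\rho\epsilon$, for which the relativistic sound speed takes the standard closed form $a_s^2 = \gamma p/(\rho h)$:

```python
# Check h * a_s^2 = chi + (p / rho^2) * kappa for an ideal gas
# p = (gamma - 1) * rho * eps (unmagnetized specific enthalpy h).

def sound_speed_relation(rho, eps, gamma):
    """Return (h * a_s^2, chi + (p / rho^2) * kappa)."""
    p = (gamma - 1.0) * rho * eps
    h = 1.0 + eps + p / rho            # specific enthalpy (no magnetic field)
    chi = (gamma - 1.0) * eps          # dp/drho at fixed eps
    kappa = (gamma - 1.0) * rho        # dp/deps at fixed rho
    as2 = gamma * p / (rho * h)        # relativistic sound speed squared
    return h * as2, chi + p / rho**2 * kappa

lhs, rhs = sound_speed_relation(rho=1.3, eps=0.7, gamma=5.0 / 3.0)
assert abs(lhs - rhs) < 1e-12          # both sides equal gamma*(gamma-1)*eps
```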
\section{Characteristic structure of the RMHD equations and analysis of convexity in non-degenerate states} \label{s:csrmhde} The characteristic information of the system of RMHD (\ref{e:system}) is contained in the set of eigenvalues and right eigenvectors $\{\lambda_{\alpha}, {\bf r}_{\alpha}\}_{\alpha =1}^8$ of $\zeta_k{\bf {\cal B}}^k$, where ${\bf {\cal B}}^i := \displaystyle{\frac{\partial {\bf F}^i}{\partial {\bf U}}}$ are the Jacobian matrices of the vectors of fluxes along the coordinate directions, and $\zeta_i$ is an arbitrary unitary 3-vector. Since the dependence on ${\bf U}$ of the fluxes ${\bf F}^i$ is implicit, it is useful to write the Jacobian matrices ${\bf {\cal B}}^i$ in terms of matrices involving only explicit derivatives with respect to the primitive variables, ${\bf V}$. If we define ${\bf {\cal A}}^0 := \displaystyle{\frac{\partial {\bf U}}{\partial {\bf V}}}$, and ${\bf {\cal A}}^i := \displaystyle{\frac{\partial {\bf F}^i}{\partial {\bf V}}}$, then we have that ${\bf {\cal B}}^i = {\bf {\cal A}}^i ({\bf {\cal A}}^0)^{-1}$. Now, the sets of eigenvalues and right eigenvectors of the system in conservation form, $\{\lambda_{\alpha}, {\bf r}_{\alpha}\}_{\alpha =1}^8$, and of the system in quasi-linear form, $\{\lambda^*_{\alpha}, {\bf r}^*_{\alpha}\}_{\alpha =1}^8$, satisfying $(\zeta_k{\bf {\cal A}}^k - \lambda^*_{\alpha} {\bf {\cal A}}^0) {\bf r}^*_{\alpha} = 0$, are related according to $\{\lambda_{\alpha}, {\bf r}_{\alpha}\}_{\alpha =1}^8 = \{\lambda^*_{\alpha}, {\bf {\cal A}}^0 {\bf r}^*_{\alpha}\}$. Matrices ${\bf {\cal A}}^0$ and $\zeta_k {\bf {\cal A}}^k$ are displayed in the Appendix. Once the eigenvalues and eigenvectors are known, we can analyze the convexity of the system by studying the expression ${\mathcal P}_{\alpha} = \vec{\nabla}_{\bf U} \lambda_{\alpha} \cdot {\bf r}_{\alpha}$ (see the Introduction).
Finally, we can take advantage of the fact that, since ${\bf {\cal A}}^0$ is non-singular, then ${\mathcal P}_{\alpha} \neq 0$ if, and only if, ${\mathcal P}_{\alpha}^* := \vec{\nabla}_{\bf V} \lambda_{\alpha} \cdot {\bf r}^*_{\alpha} \neq 0$, and perform the analysis of convexity in terms of ${\mathcal P}_{\alpha}^*$. The eigenvalues $\lambda_{\alpha}$ are the solutions of the following polynomial expression for $\lambda$ \bea \lambda a \Big({\cal E} a^2 - \mathcal{B}^2\Big) \Big((b^2 + \rho h a_s^2) a^2 G - W_s^{-2} \rho h a^4 - a_s^2 G \mathcal{B}^2\Big) & = & 0, \label{caract} \eea \noindent where ${\cal E} := \rho h + b^2$, $W_s^{-2} := 1 - a_s^2$ and quantities $a$, $G$ and $\mathcal{B}$ were defined in ref.~\cite{Anile89}, $a := \phi_\alpha u^\alpha$, $G := \phi_\alpha \phi^\alpha$, $\mathcal{B} := \phi_\alpha b^\alpha$, being, in our case, $\phi_\alpha := (-\lambda, \zeta_i)$ the normal to the wavefront propagating with speed $\lambda$ in the spatial direction given by the unit vector $\zeta_i$. As is well known, the system of (R)MHD is not strictly hyperbolic~\cite{Brio88}. This means that in some cases, two or more eigenvalues can be equal, leading to well-studied cases of degeneracy (see refs.~\cite{Anile89,Anton10} for the relativistic case). In Type I degeneracy, the magnetic field is normal to the propagation direction of the wavefront (i.e., $\zeta_k B^k = 0$). In Type II degeneracy, $\zeta_k B^k \neq 0$, but the eigenvalues associated with, at least, one Alfv\'en wave and one magnetosonic wave are degenerate. Leaving aside the particular cases associated with both degeneracy types, that will be discussed later, the following list compiles the roots of the characteristic equation (\ref{caract}), $\lambda_\alpha$ ($=\lambda^*_\alpha$), the right eigenvectors, ${\bf r}^*_{\alpha}$ \footnote{The expressions of the eigenvectors have been obtained after tedious algebraic manipulations.
They can be verified by direct substitution into the eigenvalue equation, $(\zeta_k{\bf {\cal A}}^k - \lambda^*_{\alpha} {\bf {\cal A}}^0) {\bf r}^*_{\alpha} = 0$.}, and their corresponding scalar products, ${\mathcal P}_{\alpha}^*$ \footnote{For the scalar products ${\cal P}_{a_{\pm}}^*$ and ${\cal P}_{m_{\pm}}^*$, the partial derivatives of the corresponding eigenvalues with respect to the primitive variables, ${\bf V}$, have been computed by implicit differentiation of the characteristic equations for $\lambda_{a_{\pm}}$ and $\lambda_{f_{\pm}}$, respectively, i.e., $\mathcal{A} = 0$ and $\mathcal{N}_4 = 0$ (see below).}, in the non-degenerate, general case. \begin{itemize} \item[i)] $\lambda = \lambda_{\rm null} := 0$. In this case, ${\mathcal P}_{\rm null}^*$ is trivially zero. This eigenvalue is spurious and is associated with the fact that although the RMHD system (\ref{e:system}) consists of eight conservation equations, only seven components of the fluxes are non-trivial. Due to the antisymmetric character of the induction equation, the flux of $\zeta_k B^k$ in the $\zeta^k$-direction is identically zero. \item[ii)] $\lambda = \lambda_0 := \zeta_k v^k$ is the eigenvalue associated with the material waves. The corresponding eigenvector is ${\bf r}^*_0 = (-\kappa, 0^i, \chi, 0^i)^T$, where $\kappa$ and $\chi$ are thermodynamical derivatives defined at the end of the previous Section, and $0^i = 0$ ($i = 1, 2, 3$). The scalar product is ${\mathcal P}_{0}^* = 0$ and, consequently, the characteristic field defined by $\lambda_0$ is linearly degenerate. \item[iii)] $\lambda = \lambda_{a_{\pm}}$ are the roots of the second-order polynomial in $\lambda$, $\mathcal{A}$, \be \mathcal{A} := {\cal E} a^2 - \mathcal{B}^2. \label{e:alf} \ee \indent They define the Alfv\'en waves.
Since $\zeta_k B^k \neq 0$, then $a\neq 0$ and the corresponding eigenvectors are \bea {\bf r}^*_{a_{\pm}} & = & (0, r_2^i, 0, r_4^i)^T, \eea where $r_2^i = a_1 B^i + a_2 v^i + a_3 \zeta^i$, $r_4^i = W a^{-1} (r_2^i \zeta_k B^k - B^i \zeta_k r_2^k)$. The coefficients $a_p \,(p=1,2,3)$ are such that $v_k r_2^k = 1$, $\zeta_k r_2^k = - W a$, and $B_k r_2^k = - v_k B^k W^2$. The scalar products are \be {\cal P}_{a_{\pm}}^* =\Big(\frac{\partial \lambda_{a_{\pm}}}{\partial v^i}\Big)\, r_2^i + \Big(\frac{\partial \lambda_{a_{\pm}}}{\partial B^i}\Big)\, r_4^i \propto \Big(\zeta_k r_2^k + W\, a\, (v_k r_2^k)\Big) = 0, \ee in agreement with the linearly degenerate character of the Alfv\'en waves. \item[iv)] The four eigenvalues $\lambda_{f_{\pm}}$, $\lambda_{s_{\pm}}$, are the roots of the fourth-order polynomial in $\lambda$, $\mathcal{N}_4$, \be \mathcal{N}_4 := (b^2 + \rho h a_s^2) a^2 G - W_s^{-2} \rho h a^4 - a_s^2 G \mathcal{B}^2, \label{e:pol-lambda4} \ee associated with the fast and slow magnetosonic wavespeeds, respectively. Since $\zeta_k B^k \neq 0$, then $a \neq 0$ and the corresponding eigenvectors are \be {\bf r}^*_{m_\pm} = (r_1, r_2^i, r_3, r_4^i)^T, \label{e:mev} \ee ($m = f,s$), where \bea r_1 & = & \rho W^3 \Big(\rho h a (G + a^2 ) - G \mathcal{B}^2 / a \Big) , \nonumber \\ r_2^i & = & W \Big( G \mathcal{B} B^i + \rho h W a^2 (\lambda_{m_\pm} v^i - \zeta^i) \Big), \nonumber \\ r_3 & = & r_1 p / \rho^2, \nonumber \\ r_4^i & = & \rho h W^3 a \Big((\lambda_{m_\pm} \, v^i - \zeta^i) \zeta_k B^k - B^i (\lambda_{m_\pm} a W^{-1} - G)\Big). 
\eea The scalar products are \bea {\cal P}_{m_{\pm}}^* & = & \Big(\frac{\partial \lambda_{m_\pm}}{\partial \rho}\Big)\, r_1 + \Big(\frac{\partial \lambda_{m_\pm}}{\partial v^i}\Big)\, r_2^i + \Big(\frac{\partial \lambda_{m_\pm}}{\partial \epsilon}\Big)\, r_3 + \Big(\frac{\partial \lambda_{m_\pm}}{\partial B^i}\Big)\, r_4^i \nonumber \\ & = & \frac{W^3 a^4 G^2}{2 a_s^2 d} {\cal P}^*_1 \,{\cal P}^*_2, \label{e:pm} \eea where $d := \mathcal{N}_4'(\lambda_{m_\pm})$, the derivative of $\mathcal{N}_4$ with respect to $\lambda$ evaluated at $\lambda = \lambda_{m_\pm}$ ($m = f, s$), is \be d = a_s^2 G^2 \mathcal{B} (\zeta_k B^k) - (G-\lambda_{m_\pm} a W^{-1}) \rho h W W_s^{-2} a^4, \label{e:d} \ee and \be {\cal P}^*_1 = b^2 G - \rho h a^2, \label{e:p1} \ee \bea \hspace{-2.5cm} {\cal P}^*_2 = \left(\rho \left. \frac{\partial a_s^2}{\partial \rho}\right|_\epsilon + \frac{p}{\rho} \left. \frac{\partial a_s^2}{\partial \epsilon}\right|_\rho\right) W_s^2 \left(\frac{\mathcal{B}^2}{ a^2} - {\cal E}\right) - b^2 (3-a_s^2) - 2 \rho h a_s^2 + \frac{a_s^2 (5 - 3a_s^2) \mathcal{B}^2}{a^2}. & & \nonumber \\ & & \label{e:p2} \eea It is interesting to note that $d$ can only be zero in degenerate states, since only in these states are both $\mathcal{N}_4 (\lambda) = 0$ and $\mathcal{N}_4' (\lambda) = 0$ satisfied simultaneously. Let us now discuss the conditions under which the remaining factors in Eq.~(\ref{e:pm}) can become zero. Quantity $a$ is non-zero as long as $\zeta_k B^k \ne 0$. On the other hand, it can be proven by simple algebraic manipulation of the equations $\mathcal{A}(\lambda) = 0$ and $\mathcal{N}_4(\lambda) = 0$ that ${\cal P}^*_1 = 0$ if and only if the corresponding magnetosonic eigenvalue is also an Alfv\'en eigenvalue (i.e., Type II degeneracy). Since we are avoiding degenerate states, and $G$ is always non-zero, we shall concentrate on the changes of sign of ${\cal P}^*_2$, in order to analyze the possible loss of convexity associated with the magnetosonic waves.
Since, in the case of zero magnetic field, the purely relativistic result has to be recovered, we now rewrite expression (\ref{e:p2}) in terms of the relativistic fundamental derivative \be \tilde{\cal G} = 1+ \frac{\rho}{2 a_s^2} \left. \frac{\partial a_s^2}{\partial \rho} \right|_s - a_s^2 \label{Gtilde} \ee derived in ref.~\cite{Ibanez13}. The sought expression is \be {\cal P}^*_2 = - 2 a_s^2 W_s^2 {\cal E} (1 - R) \, {\tilde{\cal G}_{\rm M}}, \label{e:p2new} \ee with ${\tilde{\cal G}_{\rm M}}$, the fundamental derivative for relativistic, magnetized fluids, being \be {\tilde{\cal G}_{\rm M}} := { \tilde{\cal G}} + F, \label{C-RMHD-4} \ee where \be F:= \displaystyle{\frac{3}{2} W_s^{-4} \left(\frac{\displaystyle{c_a^2/a_s^2 - R}}{1 - R}\right)}. \label{F-RMHD} \ee In the previous expressions, $R := \displaystyle{\frac{\mathcal{B}^2}{ {\cal E} a^2}}$, and $c_a^2 := \displaystyle{\frac{b^2}{{\cal E}}}$ stands for the square of the Alfv\'en velocity. Moreover, in deriving expression (\ref{e:p2new}) from (\ref{e:p2}) we have used the following relation among thermodynamical derivatives: $\displaystyle{\left. \frac{\partial \,\,\,}{\partial \rho}\right|_s = \left. \frac{\partial \,\,\,}{\partial \rho}\right|_\epsilon + \frac{p}{\rho^2} \left. \frac{\partial \,\,\,}{\partial \epsilon}\right|_\rho }$. It is important to note that $R=1$ if and only if the eigenvalue corresponds to an Alfv\'en wavespeed (i.e., it satisfies the equation $\mathcal{A}(\lambda) = 0$). Since we are not considering degeneracies, we conclude that $R \ne 1$ for magnetosonic waves and, consequently, 1) the denominator in the second term of ${\tilde{\cal G}_{\rm M}}$ is well defined, and 2) ${\cal P}^*_2 = 0$ if and only if ${\tilde{\cal G}_{\rm M}} = 0$. The price to pay for using primitive (or conserved) variables in our analysis of convexity is the loss of covariance and a dependence of the fundamental derivative ${\tilde{\cal G}_{\rm M}}$ on kinematics through quantity $R$.
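As a quick numerical illustration (a hedged sketch, not part of the derivation), the following Python fragment evaluates Eqs.~(\ref{C-RMHD-4})--(\ref{F-RMHD}) and checks that the magnetic correction $F$ vanishes in the unmagnetized limit $c_a^2 = R = 0$, so that ${\tilde{\cal G}_{\rm M}}$ reduces to ${\tilde{\cal G}}$. All numerical input values below are arbitrary illustrative assumptions, not values taken from any equation of state.

```python
def F_correction(ca2, as2, R, Ws2):
    """Magnetic term F of Eq. (F-RMHD):
    F = (3/2) W_s^{-4} (c_a^2/a_s^2 - R) / (1 - R).
    Here Ws2 stands for W_s^2, so W_s^{-4} = Ws2**(-2).
    Valid only for R != 1, i.e., away from degenerate states."""
    return 1.5 * (ca2 / as2 - R) / ((1.0 - R) * Ws2**2)

def G_M(G_tilde, ca2, as2, R, Ws2):
    """Fundamental derivative for relativistic magnetized fluids,
    Eq. (C-RMHD-4): G_M = G_tilde + F."""
    return G_tilde + F_correction(ca2, as2, R, Ws2)

# Unmagnetized limit: b^2 = 0 implies c_a^2 = 0 and R = 0, hence F = 0
# and G_M reduces to the purely hydrodynamic fundamental derivative.
G_tilde = 0.7                          # illustrative value only
assert F_correction(0.0, 0.3, 0.0, 1.2) == 0.0
assert G_M(G_tilde, 0.0, 0.3, 0.0, 1.2) == G_tilde
```

The sketch makes explicit that the correction $F$ is a pure function of the four scalars $c_a^2$, $a_s^2$, $R$ and $W_s^2$, which is the dependence on kinematics discussed above.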
For fast and slow magnetosonic fields, let us carry out the analysis of the magnetic correction to the purely hydrodynamic (relativistic) fundamental derivative (Eq. (\ref{F-RMHD})) in the comoving frame (CF, $u^{\mu} = \delta_0^{\mu}$), which we will name $F_{{\rm CF},m}$ ($m= f, s$) henceforth. A simple algebraic calculation leads to \be F_{{\rm CF}, m} = \frac{3}{2} W_\omega^{-2} \left(\frac{c_m^2 - a_s^2}{c_m^2 - c_a^2}\right), \label{F1} \ee where $c_m^2$ are the solutions of the quadratic equation in $\lambda^2$, $\mathcal{N}_{4, {\rm CF}} (\lambda) = 0$, namely \be c_{m}^2 = \frac{1}{2} \left((\omega^2 + a_s^2 \, c_A^2) \pm \left((\omega^2 + a_s^2 \, c_A^2)^2 - 4 a_{s}^2 \, c_A^2 \right)^{1/2} \right), \label{N4LRF} \ee with $c_A^2 = \displaystyle{\frac{(\zeta_k B^k)^2}{\mathcal{E}}}$ and $W_\omega^{-2} := 1 - \omega^2$, $\omega^2 = a_s^2 + c_a^2 - a_s^2 c_a^2$. Taking into account that, for non-degenerate states, $a_s^2, c_a^2 \in (c_s^2, c_f^2)$\footnote{In the CF it can be easily proven that $\mathcal{N}_{4, {\rm CF}} (a_s) < 0$ and $\mathcal{N}_{4, {\rm CF}} (c_a) < 0$, implying that both $a_s^2$ and $c_a^2$ are between the roots of $\mathcal{N}_{4, {\rm CF}} (\lambda) = 0$, namely $c_s^2$, $c_f^2$.}, we have that $F_{{\rm CF}, m} > 0$ ($m = f, s$). Now, the transformation of $R$ as a scalar ensures that $F_{m} > 0$ ($m = f,s$) in any reference frame, with important consequences for the influence of the magnetic field on the convexity of the system. \end{itemize} \section{Analysis of convexity in degenerate states} \label{s:cds} \subsection{Type I degeneracy} This degeneracy appears in states in which $\zeta_k B^k = 0$. Now, the roots of the characteristic equation~(\ref{caract}), the right eigenvectors, and the corresponding scalar products have the following properties: \begin{itemize} \item[i)] $\lambda = \lambda_{\rm null} := 0$. It is again the spurious eigenvalue analyzed in the previous Section associated with the null flux component. 
${\mathcal P}_{\rm null}^*$ is trivially zero. \item[ii)] The eigenvalue $\lambda = \lambda_0 := \zeta_k v^k$ has multiplicity 5. The corresponding eigenvectors are of the form ${\bf r}^*_0 = (r_1, a_1 B^i + a_2 \zeta_\bot^i, r_3, a_3 B^i + a_4 \zeta_\bot^i)^T$, where $\zeta_\bot^i$ is an arbitrary vector orthogonal to $\zeta^i$ and $B^i$, and $r_1$, $r_3$ and $a_p$ ($p=1,2,3,4$) are functions of the primitive variables. Since only the derivative $\partial \lambda/ \partial v^k$ ($= \zeta_k$) is different from zero, the scalar product is \be {\cal P}_{0}^* = \zeta_i (a_1 B^i + a_2 \zeta_\bot^i) = 0. \ee Hence, the characteristic fields defined by $\lambda_0$ are linearly degenerate. \item[iii)] $\lambda_{f_\pm}$ are the solutions of the quadratic equation in $\lambda$ \be \left(b^2 + \rho h a_s^2 - a_s^2 (v_k B^k)^2\right) G - W_s^{-2} \rho h a^2 = 0, \label{e:char_deg_i} \ee and are associated with the fast magnetosonic wavespeeds. The explicit expression of these eigenvalues when $\zeta_k = (1,0,0)$ can be found in ref.~\cite{Leismann05}. The corresponding eigenvectors can be obtained from those of the fast magnetosonic eigenvalues in the general case (see Eq.~(\ref{e:mev})) by setting $\zeta_kB^k = 0$, i.e., $\mathcal{B} = a (v_kB^k)$. The scalar products are \footnote{As in the non-degenerate case, for the scalar products ${\cal P}_{f_{\pm}}^*$, the partial derivatives of the corresponding eigenvalues with respect to the primitive variables, ${\bf V}$, have been computed by implicit differentiation of the characteristic equation~(\ref{e:char_deg_i}).} \be {\cal P}_{f_{\pm}}^* = \frac{W_s^2 G^2}{2 \rho h } {\cal P}^*_1 \,{\cal P}^*_2, \label{e:pm-degi} \ee where \be {\cal P}_1^* = \displaystyle{\frac{{\cal E} - (v_k B^k)^2 }{1 - \zeta_k v^k} } \ee \bea {\cal P}_2^* & = & \left( \rho \left. \frac{\partial a_s^2}{\partial \rho}\right|_\epsilon + \frac{p}{\rho} \left.
\frac{\partial a_s^2}{\partial \epsilon}\right|_\rho \right) W_s^2 \left((v_k B^k)^2 - {\cal E}\right) \nonumber \\ & & - b^2 (3-a_s^2) - 2 \rho h a_s^2 + a_s^2 (5 - 3a_s^2) (v_k B^k)^2. \eea From Eq.~(\ref{e:b2}), $b^2 - (v_kB^k)^2 \ge 0$, and thus ${\cal P}_1^*$ is always positive. Hence the possible changes of sign of ${\cal P}_{f_{\pm}}^*$ coincide with those of ${\cal P}_2^*$. Let us note that the expression for ${\cal P}_2^*$ coincides with that of the general case (Eq.~\ref{e:p2}) upon setting $\mathcal{B} = a (v_kB^k)$. Then, proceeding in exactly the same way as in the general case, we conclude that the fundamental derivative for relativistic, magnetized fluids for Type I degenerate states is \be {\tilde{\cal G}_{\rm M, deg \, I}} = { \tilde{\cal G}} + \displaystyle{\frac{3}{2} W_s^{-4} \left(\frac{\displaystyle{c_a^2/a_s^2 - R_{\rm deg \, I}}}{1 - R_{\rm deg \, I}}\right)}, \label{C-RMHD-4_deg_I} \ee where now $R_{\rm deg \, I} = \displaystyle{\frac{(v_k B^k)^2}{{\cal E}}}$. As discussed in the non-degenerate case, $R_{\rm deg \, I} \ne 1$, and the corresponding factor is $F_{\rm deg \, I} > 0$. The special case when $v_k B^k = 0$ is obtained by setting $R_{\rm deg \, I} = 0$ in the previous expression. The same result for this case is obtained through a purely hydrodynamical approach (see Appendix in ref.~\cite{Romero05}) by building up a thermodynamically consistent EOS incorporating the effects of the magnetic field. \end{itemize} \subsection{Type II degeneracy} Now, $\zeta_k B^k\ne 0$ and at least one eigenvalue associated with an Alfv\'en wave coincides with an eigenvalue associated with a magnetosonic wave. Three cases are distinguished. In cases 1 ($c_a > a_s$) and 2 ($c_a < a_s$), one fast or slow magnetosonic eigenvalue, respectively, coincides with an Alfv\'en eigenvalue.
In these cases, as discussed in the previous Section, the quantity ${\cal P}^*_1$ defined in Eq.~(\ref{e:p1}) is zero for the degenerate eigenvalues and, hence, the corresponding characteristic fields are linearly degenerate. When $c_a = a_s$ (case 3), an Alfv\'en eigenvalue coincides with a pair (slow and fast) of magnetosonic eigenvalues. Now, quantity $d$ defined in Eq.~(\ref{e:d}) is also zero, and ${\cal P}_{m_{\pm}}^*$ (Eq.~\ref{e:pm}) becomes indeterminate. In this case, we have checked that the dot product of the magnetosonic eigenvectors associated with the degenerate fields and the gradient of the Alfv\'en eigenvalue is zero, which means that the degenerate characteristic field is again linearly degenerate. \section{Purely hydrodynamical and classical limits} \label{s:limits} The purely hydrodynamical (relativistic) limit can be obtained as a particular case of the Type I degeneracy in which, besides having $\zeta_k B^k = 0$ and $v_k B^k = 0$, we set $b^2 = 0$. Hence, from Eq.~(\ref{C-RMHD-4}), setting $R = 0$ and $c_a = 0$, we have ${\tilde{\cal G}_{{\rm M}, b^2=0}} = { \tilde{\cal G}}$. We now discuss the classical (magnetized) limits for both degenerate and non-degenerate states. These limits are obtained by expanding all the quantities in the definition of the fundamental derivative in powers of $1/c^2$ ($c$ being the speed of light) and keeping the leading term. On the one hand, the relativistic (non-magnetized) fundamental derivative is ${\tilde{\cal G}} = {\cal G} + \mathcal{O}(1/c^2)$, where ${\cal G}$ is the classical (non-magnetized) counterpart~\cite{Ibanez13}.
On the other hand, $R = (\zeta_k B^k)^2/(\rho c_{m, {\rm cl}}^2) + \mathcal{O}(1/c^2)$, where $c_{m, {\rm cl}}$ ($m=f,s$) is given by $c_{m, {\rm cl}} = \displaystyle{\frac{1}{\sqrt{2}}} \left(a_{s, {\rm cl}}^2 + B^2/\rho \pm \sqrt{(a_{s, {\rm cl}}^2 + B^2/\rho)^2 - 4 a_{s, {\rm cl}}^2 (\zeta_k B^k)^2/\rho} \right)^{1/2}$, and $a_{s, {\rm cl}}$ stands for the classical definition of the sound speed. Hence, we get from Eq.~(\ref{C-RMHD-4}) \be {\tilde{\cal G}_{\rm M, cl}} := {\cal G} + \displaystyle{\frac{3}{2} \left(\frac{\displaystyle{c_{a, {\rm cl}}^2/a_{s, {\rm cl}}^2 - (\zeta_k B^k)^2/(\rho c_m^2)}}{1 - (\zeta_k B^k)^2/(\rho c_m^2)}\right)}. \label{C-RMHD-4_cl} \ee In the previous expression, $c_{a, {\rm cl}}$ stands for the classical definition of the Alfv\'en speed, $\sqrt{B^2/\rho}$. It can be shown that, taking $\zeta_k = (1,0,0)$, the resulting expression of ${\tilde{\cal G}_{\rm M, cl}}$ is proportional to the non-linearity factor for the non-linear fields of the (classical) MHD system obtained in ref.~\cite{Serna14} (see their equation~(17)). For Type I degenerate states, since $R = \mathcal{O}(1/c^2)$, \be {\tilde{\cal G}_{\rm M, deg \, I, cl}} = {\cal G} + \displaystyle{\frac{3}{2} \left(\frac{c_{a, {\rm cl}}^2}{a_{s, {\rm cl}}^2}\right)}, \label{C-RMHD-4_deg_Ib} \ee proportional to the corresponding result obtained in ref.~\cite{Serna14} (see their table~I). Finally, for Type II degenerate states, the degenerate eigenvalues lead to characteristic fields which are linearly degenerate, whereas the (hypothetical) non-degenerate magnetosonic field (cases 1 and 2) is genuinely non-linear and its properties in relation to convexity are governed by the fundamental derivative in Eq.~(\ref{C-RMHD-4_cl}), with $c_{m, {\rm cl}} = c_{s, {\rm cl}}$ (case 1), and $c_{m, {\rm cl}} = c_{f, {\rm cl}}$ (case 2).
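The positivity of the magnetic correction carries over to the classical limit, and can be checked numerically from the explicit formulas above. The following Python sketch (illustrative only; the function names and the random sampling are our own assumptions) draws random classical states, computes the fast and slow speeds $c_{m,{\rm cl}}$ from the quadratic quoted above, and verifies that the correction term in Eq.~(\ref{C-RMHD-4_cl}) is positive on both magnetosonic branches:

```python
import numpy as np

def classical_speeds(as2, rho, B, zeta):
    """Fast/slow magnetosonic speeds squared: roots of the classical
    quadratic in lambda^2 quoted in the text for c_{m,cl}."""
    Bn2 = np.dot(zeta, B) ** 2 / rho    # (zeta_k B^k)^2 / rho
    ca2 = np.dot(B, B) / rho            # c_{a,cl}^2 = B^2 / rho
    s = as2 + ca2
    cf2 = 0.5 * (s + np.sqrt(s * s - 4.0 * as2 * Bn2))
    cs2 = as2 * Bn2 / cf2               # product of the roots = a_s^2 (zeta.B)^2 / rho
    return cf2, cs2, ca2, Bn2

def F_classical(as2, cm2, ca2, Bn2):
    """Correction term of Eq. (C-RMHD-4_cl):
    (3/2) (c_a^2/a_s^2 - R_cl) / (1 - R_cl), R_cl = (zeta.B)^2/(rho c_m^2)."""
    R = Bn2 / cm2
    return 1.5 * (ca2 / as2 - R) / (1.0 - R)

rng = np.random.default_rng(42)
min_F = np.inf
for _ in range(10_000):
    as2 = rng.uniform(0.1, 2.0)         # arbitrary, generic (non-degenerate) states
    rho = rng.uniform(0.1, 5.0)
    B = rng.normal(size=3)
    zeta = rng.normal(size=3)
    zeta /= np.linalg.norm(zeta)
    cf2, cs2, ca2, Bn2 = classical_speeds(as2, rho, B, zeta)
    # a_s^2 and c_{a,cl}^2 both lie between the slow and fast roots
    assert cs2 <= as2 <= cf2 and cs2 <= ca2 <= cf2
    min_F = min(min_F, F_classical(as2, cf2, ca2, Bn2),
                F_classical(as2, cs2, ca2, Bn2))

print(min_F > 0)   # the correction is positive on both branches
```

The check mirrors the analytical argument: on the fast branch $R_{\rm cl} = c_{s,{\rm cl}}^2/a_{s,{\rm cl}}^2$ and on the slow branch $R_{\rm cl} = c_{f,{\rm cl}}^2/a_{s,{\rm cl}}^2$, so the numerator and denominator of the correction always share the same sign.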
\section{Summary and conclusions} \label{s:concl} In this paper we have analyzed the influence of the magnetic field on the convexity properties of the RMHD equations. For this purpose, we have used the approach of Lax, based on the analysis of the linearly degenerate/genuinely non-linear nature of the characteristic fields. Degenerate and non-degenerate states have been discussed separately, and the non-relativistic, unmagnetized limits are properly recovered. The characteristic fields corresponding to the material and Alfv\'en waves are linearly degenerate and, therefore, not affected by the convexity issue. The analysis of the characteristic fields associated with the magnetosonic waves reveals, however, a dependence of the convexity condition on the magnetic field. The result is expressed in the form of a generalized fundamental derivative, Eq.~(\ref{C-RMHD-4}), written as the sum of two terms. The first one is the generalized fundamental derivative in the case of purely hydrodynamical (relativistic) flow already obtained in ref.~\cite{Ibanez13}. The second one contains the effects of the magnetic field. The analysis of this term in the comoving frame (extendable to any other reference frame given the scalar nature of the term) shows that it is always positive, leading to the remarkable result that the presence of a magnetic field in the fluid reduces the domain of thermodynamical states for which the EOS is non-convex, as happens in the non-relativistic MHD limit~\cite{Serna14}. We speculate that our findings may be relevant in the context of massive stellar core collapse. Depending mostly on the pre-collapse stellar magnetic field and on the gradient of the rotational velocity, dynamically relevant magnetic fields may develop after the core bounce (see, e.g., \cite{Akiyama03,Obergaulinger06,Sawai13}).
Should these magnetic fields become as large as existing numerical models suggest, then our results indicate that the loss of convexity would be rather limited, if present at all. However, the actual level of magnetic field saturation due to the action of the Magneto-Rotational Instability (MRI; see, e.g., \cite{Obergaulinger09,Pessah10}) is still a matter of debate, and hence, so is whether or not the MRI-amplified magnetic field may have sufficient strength to impede the development of non-convex regions in the collapsed core. It is very likely that under the most common conditions (namely, non-rotating or slowly rotating cores), the magnetic field will not play a central dynamical role in the post-collapse evolution, though it may set the time scale for supernova explosions (e.g., \cite{Obergaulinger14}). In such cases, we foresee that there might exist a range of physical conditions in which a non-convex EOS may produce a loss of convexity in the post-collapse core that cannot be compensated by the growth of pre-collapse magnetic fields, e.g., in slowly rotating (including non-rotating) massive stellar cores. Addressing this issue by means of numerical simulations is beyond the scope of the present work, and will be considered elsewhere. \ack The authors acknowledge financial support from the Spanish Government (grants AYA2013-40979-P and AYA2013-42184-P) and from the local Autonomous Government (Generalitat Valenciana, grant Prometeo-II/2014/069). I. C.-C. acknowledges support from the SN2NS project ANR-10-BLAN-0503, the ARC convention No. 11/15-040, and the Fonds de la Recherche Scientifique (FNRS) under grant 4.4501.05. M.A.A. and I.C.-C. acknowledge support from the European Research Council (ERC) through the Starting Independent Researcher Grant CAMAP-259276. \newpage \noindent {\bf Appendix.
Jacobian matrices of the RMHD system in quasi-linear form} \noindent Matrices ${\bf {\cal A}}^0$ and $\zeta_k {\bf {\cal A}}^k$ associated with the system~(\ref{e:system}) in quasilinear form are: \bea {\bf {\cal A}}^0 = \left( \begin{tabular}{cccc} $W$ & $\rho W^3 v_j$ & $0$ & $0_j$ \\ $({\bf {\cal A}}^0)^{S^i}_\rho$ & $({\bf {\cal A}}^0)^{S^i}_{v^j}$ & $({\bf {\cal A}}^0)^{S^i}_\epsilon$ & $({\bf {\cal A}}^0)^{S^i}_{B^j}$ \\ $({\bf {\cal A}}^0)^\tau_\rho$ & $({\bf {\cal A}}^0)^\tau_{v^j}$ & $({\bf {\cal A}}^0)^\tau_\epsilon$ & $({\bf {\cal A}}^0)^\tau_{B^j}$ \\ $0^i$ & $0^i_j$ & $0^i$ & $\delta^i_j$ \end{tabular} \right), \nonumber \eea where \bea ({\bf {\cal A}}^0)^{S^i}_\rho &=& (1+\epsilon+\chi) W^2 v^i, \nonumber \\ ({\bf {\cal A}}^0)^{S^i}_{v^j} &=& B^i B_j + B^2 \delta^i_j + h W^2 (\delta^i_j + 2 W^2 v^i v_j), \nonumber \\ ({\bf {\cal A}}^0)^{S^i}_\epsilon &=& (\rho+\kappa) W^2 v^i, \nonumber \\ ({\bf {\cal A}}^0)^{S^i}_{B^j} &=& -\delta^i_j v_k B^k - B^i v_j + 2 v^i B_j, \nonumber \\ ({\bf {\cal A}}^0)^\tau_\rho &=& (1+\epsilon) W^2 - W + \chi (W^2-1), \nonumber \\ ({\bf {\cal A}}^0)^\tau_{v^j} &=& -B_j v_k B^k + v_j [B^2 + \rho W^3 (2 h W-1)], \nonumber \\ ({\bf {\cal A}}^0)^\tau_\epsilon &=& \rho W^2 + \kappa (W^2-1), \nonumber \\ ({\bf {\cal A}}^0)^\tau_{B^j} &=& -v_j v_k B^k + B_j (2-1/W^2). 
\nonumber \eea \bea \zeta_k {\bf{\cal A}}^k = \left( \begin{tabular}{cccc} $W \zeta_k v^k$ & $(\zeta_k {\bf{\cal A}}^k)^D_{v^j}$ & $0$ & $0_j$ \\ $(\zeta_k {\bf{\cal A}}^k)^{S^i}_\rho$ & $(\zeta_k {\bf{\cal A}}^k)^{S^i}_{v^j}$ & $(\zeta_k {\bf{\cal A}}^k)^{S^i}_\epsilon$ & $(\zeta_k {\bf{\cal A}}^k)^{S^i}_{B^j}$ \\ $(\zeta_k {\bf{\cal A}}^k)^\tau_\rho$ & $(\zeta_k {\bf{\cal A}}^k)^\tau_{v^j}$ & $(\zeta_k {\bf{\cal A}}^k)^\tau_\epsilon$ & $(\zeta_k {\bf{\cal A}}^k)^\tau_{B^j}$ \\ $0^i$ & $B^i \zeta_j - \delta^i_j \zeta_k B^k$ & $0^i$ & $\delta^i_j \zeta_k v^k - v^i \zeta_j$\end{tabular} \right), \nonumber \eea where \bea (\zeta_k {\bf{\cal A}}^k)^D_{v^j} &=& \rho W (W^2 v_j \zeta_k v^k + \zeta_j), \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^{S^i}_\rho &=& (1+\epsilon+\chi) W^2 v^i \zeta_k v^k + \chi \zeta^i, \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^{S^i}_{v^j} &=& (\zeta_i B_j - \delta^i_j \zeta_l B^l) v_k B^k + B^2 (\delta^i_j \zeta_k v^k - \zeta^i v_j + v^i \zeta_j) \nonumber \\ &&- B^i (\zeta_j v_k B^k - 2 v_j \zeta_k B^k + B_j \zeta_k v^k) - v^i B_j \zeta_k B^k \nonumber \\ &&+ \rho h W^2 (\delta^i_j \zeta_k v^k + v^i \zeta_j + 2 W^2 v^i v_j \zeta_k v^k), \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^{S^i}_\epsilon &=& v^i (\rho+\kappa) W^2 \zeta_k v^k + \zeta^i \kappa, \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^{S^i}_{B^j} &=& \zeta^i v_j v_k B^k - \delta^i_j v_k B^k \zeta_l v^l - B^i v_j \zeta_k v^k \nonumber \\ &&- v^i (\zeta_j v_k B^k + v_j \zeta_k B^k - 2 B_j \zeta_k v^k) - W^{-2}(B^i \zeta_j - \zeta^i B_j + \delta^i_j \zeta_k B^k), \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^\tau_\rho &=& (1+\epsilon+\chi) W^2 \zeta_k v^k - W \zeta_k v^k, \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^\tau_{v^j} &=& -B_j \zeta_k B^k + B^2 \zeta_j + \rho W [\zeta_j (h W-1) + v_j \zeta_k v^k W^2 (2 h W-1)], \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^\tau_\epsilon &=& (\rho+\kappa) W^2 \zeta_k v^k, \nonumber \\ (\zeta_k {\bf{\cal A}}^k)^\tau_{B^j} &=& 2 B_j \zeta_k v^k - v_j \zeta_k B^k - \zeta_j v_k B^k. 
\nonumber \eea All the quantities appearing in the definition of the matrices are defined in the body of the paper; in addition, $0^i = (0,0,0)^T$, $0_j = (0,0,0)$, and $0^i_j$ is the null $3 \times 3$ matrix. \newpage \section*{References}
\section{Introduction} We consider a variant of the pursuit-evasion game in which multiple pursuers must simultaneously reach the evader's location to capture it. Specifically, an evader $\mathbf{e}$, who is free to move in an $m$-dimensional Euclidean space, is being pursued by $n$ agents $\mathbf{p}_1, \ldots, \mathbf{p}_n$. The evader and the pursuers have identical motion capabilities and, in particular, have equal maximum speed. Unlike classical pursuit evasion, our game requires at least $k$ pursuers to \emph{simultaneously} reach the evader's location to capture it, for some given value of $k \leq n$. If fewer than $k$ pursuers attack (reach) the evader, then those pursuers are destroyed by the evader. We assume that no two players ever occupy the same position in the environment \emph{except} at the moment of capture; that is, co-location either ends the game or only one player survives among the co-located ones. By disallowing co-location, we are assuming a weaker model of pursuers, which may also be more realistic because in many physical systems only one agent can occupy a point in space. We call this version the \emph{$k$-capture pursuit evasion}, and investigate necessary and sufficient conditions, as well as worst-case time bounds, for the $k$-capture. \medskip Pursuit-evasion games provide an elegant setting to study algorithmic and strategic questions of exploration or monitoring by autonomous agents. Their rich mathematical history can be traced back to at least the 1930s, when Rado posed the now-classical ``Lion-and-Man'' problem~\cite{JEL:86}: \emph{a lion and a man in a closed arena have equal maximum speeds; what tactics should the lion employ to be sure of his meal?} The problem was settled by Besicovitch, who showed that the man can escape regardless of the lion's strategy~\cite{JEL:86}.
An important aspect of this pursuit-evasion problem, and its solution, is the assumption of \emph{continuous time}: each player's motion is a continuous function of time, which allows the lion to get arbitrarily close to the man but never capture him. If, however, the players move in discrete time steps, taking alternating turns but still in continuous space, the outcome is different, as first conjectured by Gale~\cite{RKG:91} and proved by Sgall~\cite{JS:01}. \medskip The distinction between continuous and discrete time models is significant albeit subtle. Formulations based on continuous time lead to differential games, whose solution requires solving the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation. This is a partial differential equation, whose solution becomes intractable in complex scenarios. (See the seminal work of Isaacs~\cite{RI:65} on several continuous time classical games including the Homicidal Chauffeur game and the Game of Two Cars.) Besides this \emph{theoretical} difficulty, one also faces the \emph{practical} problem that continuous time solutions are usually expressed as a feedback law requiring an \emph{instantaneous} measurement of each player's position and its communication to the opponent. This is impractical from an implementation point of view, and especially problematic for non-smooth motions. \medskip Consequently, discrete-time, alternate-move versions of pursuit evasion have been favored in the recent past, especially due to their algorithmic tractability. In these formulations, the evader and the pursuers move in alternating time instants, with the evader moving first. We note that a capture in this formulation is equivalent to the evader being inside a specified small neighborhood of the pursuer in the continuous time formulation.
In the discrete time model, Sgall~\cite{JS:01} is able to circumvent the problem of the lion approaching but never reaching the man in the continuous formulation, and shows that the lion can always capture the man in finite time inside a semi-open bounded environment. \medskip When the evader is free to move inside an \emph{unbounded} environment, multiple pursuers are clearly required to keep the evader from escaping. The capture condition is the same as before: if, at some time $t$, \emph{any} pursuer can reach the position of the evader, then the latter is captured. In this setting, it is known that the evader can be captured if and only if it lies in the convex hull of the pursuers~\cite{SK-CVR:05}. Many other pursuit evasion problems have also been studied, with focus on different types of environments~\cite{LA-ASG-EMR:92,VI-SK-SK:05}, characterization of environments in which a certain capture strategy works~\cite{SBA-RB-RG:09}, visibility-based pursuit-evasion~\cite{LJG-JCL-SML-DL-RM:98}, sensing limitations~\cite{SDB-FB-JPH:07o,NK-VI:08}, etc. Finally, if both time \emph{and} space are assumed to be discrete, then the underlying space is represented as a graph with nodes and edges, and on each move a player can move from one node to another by traversing the edge(s) connecting them. The techniques in this formulation tend to be different, and we refer the reader to a representative set of papers~\cite{TDP:78, MA-MF:84, VI-NK:08, KK-SS:10}. \medskip Our objective in this paper is to study the $k$-capture problem in the unbounded continuous space and discrete time framework. In particular, we assume that a group of $n$ pursuers (hyenas) wishes to capture an evader (lion) who is free to move in the $m$-dimensional Euclidean space. The players take turns: the evader moves first, the pursuers move next, and all of the pursuers move simultaneously. On its turn, each player can move anywhere inside a unit disk centered at its current position.
(In other words, the maximum speed of the players is normalized to one.) We assume that no two players may occupy the same position in the environment \emph{except} at the moment of capture. Technically, this assumption is used only to rule out the possibility of a trivial pursuer strategy in which the pursuers partition themselves into size-$k$ subgroups, with each subgroup moving as a ``meta pursuer.'' Co-location may also be unrealistic in many physical systems, and by disallowing it we only strengthen our results because pursuers without co-location are weaker in power than those with co-location. \medskip We say that the evader is $k$-captured, for some specified value of $k$, if, after a finite time, at least $k$ pursuers reach the evader's location. However, if fewer than $k$ pursuers reach the evader's location, then the evader is able to capture (or destroy) those pursuers. In other words, if, at the end of a pursuer move, the evader occupies the same position as some of the pursuers, then the game either ends ($k$-capture occurs), or all $j < k$ pursuers at that location are captured, leaving only the evader. We study the necessary and sufficient conditions under which such a $k$-capture is possible, and derive bounds on the worst-case time needed to achieve this. Additionally, we address a version of this problem played in a compact and convex subset of a Euclidean space. \medskip In particular, our paper makes four main contributions. First, we show that a necessary condition for $k$-capture is that the evader must be located inside the $k$-Hull of the pursuers at the beginning of every evader move. The $k$-Hull is the set of all points $p$ such that any hyperplane through $p$ divides the given points into two sets of at least $k$ points each. Second, we show that this simple $k$-Hull condition is also sufficient.
In other words, if there is ever a time when this condition is satisfied, and in particular if it holds at time $t=0$, then the pursuers can $k$-capture the evader in finite time. Our proof of sufficiency is constructive, and based on a new multi-pursuer strategy. Third, we derive an upper bound for the time needed to capture the evader, as a function of the initial positions of the pursuers and of the evader. Finally, for a version of this problem played in a compact and convex environment in a Euclidean space, we design a novel strategy and show that the evader is $k$-captured using $k$ pursuers. \medskip This paper is organized as follows. The problem formulation and the necessity of the $k$-Hull condition for capture are presented in Section~\ref{sec:problem}. Our multi-pursuer capture strategy and the sufficiency of the $k$-Hull condition are presented in Section~\ref{sec:suff}. A version of this problem played in a compact and convex environment is analyzed in Section~\ref{sec:compact}. The conclusions and future directions for this work are summarized in Section~\ref{sec:conclusions}. \section{Problem Formulation and Necessary Condition for $k$-Capture} \label{sec:problem} \medskip Our pursuit-evasion game is played in an $m$-dimensional Euclidean space, with $n$ pursuers $\mathbf{p}_1, \mathbf{p}_2, \dots,\mathbf{p}_n$ and a single evader $\mathbf{e}$. The positions of these agents at any time $t$ are denoted as $\mathbf{p}_j (t)$, for $j=1, 2, \ldots, n$, and $\mathbf{e}(t)$, where $t \in \mathbb{Z}_{\geq 0}$. In Section~\ref{sec:compact}, we also consider the capture problem in a compact convex environment. \medskip We assume that the game is played in discrete time using alternate moves: on a turn, the evader moves first, and then all the pursuers move simultaneously. We assume a normalized maximum speed of one, meaning that each player can move to any position inside a closed ball of radius one centered at the player's current position.
More precisely, the players' motions are described by the following equations: \begin{align*} \mathbf{e}(t+1) &= \mathbf{e}(t) + \mathbf{u}_e(t,\mathbf{p}_1(t),\dots,\mathbf{p}_n(t)), \\ \mathbf{p}_j(t+1) &= \mathbf{p}_j(t) + \mathbf{u}_{\mathbf{p}_j}(t,\mathbf{e}(t),\mathbf{e}(t+1)), \end{align*} where $\mathbf{u}_e$ and $\mathbf{u}_{\mathbf{p}_j}$ are vectors of norm at most one, termed the \emph{strategies} of the evader $\mathbf{e}$ and the pursuer $\mathbf{p}_j$, respectively. These motion equations say that each agent's strategy depends on the current positions of all other players, and that each agent can move to any position within distance one of its current position. (The apparent asymmetry in the equations of the evader and the pursuers is due to the fact that the evader moves first, so the pursuers' moves can depend on the evader's positions at times $t$ \emph{and} $t+1$.) \medskip The capture occurs when the evader is at the same location as some of the pursuers. The $k$-capture of the evader requires at least $k$ pursuers to simultaneously reach its location, while fewer than $k$ attacking pursuers are themselves captured by the evader.\footnote{% We remark, however, that in the discrete time alternate moves model, the evader cannot force a pursuer's capture because the pursuers move \emph{after} the evader. Indeed, if the evader moves to the current location of a pursuer $\mathbf{p}$, then $\mathbf{p}$ can always move away from the evader at its turn. However, one cannot rule out a pursuers' strategy that involves \emph{sacrificing} some of them to ultimately achieve $k$-capture. } Formally, we say that the evader is \emph{$k$-captured} if there exists a finite time $T$ and a subset $C \subset \{\mathbf{p}_1, \ldots, \mathbf{p}_n \}$ of $k$ pursuers such that $\norm{\mathbf{p}_j (T)-\mathbf{e}(T)}\;=\;0$ but $\norm{\mathbf{p}_j (t)-\mathbf{e}(t)}>0$, for all $t<T$ and all $\mathbf{p}_j \in C$.
In other words, the $k$-capture occurs at a time $T$ if at least $k$ pursuers simultaneously reach the evader's location at time $T$, and none of these pursuers have ever been captured in the past.\footnote{% While it is sufficient to ensure the safety of only the $k$ pursuers who perform the $k$-capture, in our strategy, \emph{all} the pursuers will remain safe.} We say that the evader \emph{escapes} if there exists no finite time at which the pursuers $k$-capture the evader. Finally, we require that no two players occupy the same point in the environment \emph{except} at the time of capture. \medskip We now formulate a necessary condition for $k$-capture, which is then complemented by Section~\ref{sec:suff} that shows that this condition is also sufficient. Our necessary condition prescribes the location of the evader relative to the locations of the pursuers for the $k$-capture to occur. This condition is independent of the pursuers' strategy: that is, if the condition is violated, then there always exists an evader strategy for escape regardless of the pursuers' strategy. \medskip Naturally, the convex hull of the pursuers' locations plays a key role in the game. This is not surprising because the convex hull is precisely the set of all evader locations that are capturable in the classical single pursuer game, as is well-known~\cite{SK-CVR:05}. \begin{lemma} \label{lem:conv} If the evader's initial location is not inside the interior of the convex hull of the pursuers, then it cannot be $k$-captured, even for $k=1$. \end{lemma} \begin{proof} If the evader is not in the interior of the convex hull, then there exists a hyperplane through the evader's location such that all the pursuers lie in one (closed) half-space defined by the hyperplane. The evader simply escapes by moving perpendicular to this hyperplane, away from the pursuers, at maximum speed. 
\end{proof} \medskip \subsection{The {\Large $k$}-Hull} \medskip When $k > 1$, we need a generalized notion of the convex hull. The standard convex hull of a point set can be defined as the set of points $p$ such that any hyperplane through $p$ contains at least one point of the set in each of the two closed half-spaces. If we require that at least $k$ points lie in each half-space, then we get a structure called the $k$-Hull, introduced by Cole, Sharir, and Yap~\cite{RC-MS-CKY:87}, which also has intimate connections to other fundamental structures in computational geometry, such as $k$-levels and $k$-sets~\cite{HE:87}. \begin{definition}[$k$-Hull]\label{def:khull} Let $S$ be a set of $n$ points in the plane, and let $k$ be an integer. The $k$-Hull of $S$, denoted by $\operatorname{Hull}_k(S)$, is the set of points $p$ such that, for any hyperplane $\ell(p)$ through $p$, there are at least $k$ points of $S$ in each closed half-space of $\ell(p)$. \end{definition} Clearly, the standard convex hull is the same as the $1$-Hull, and it is also easy to see that the $(k+1)$-Hull is contained in the $k$-Hull. One can also show, using Helly's Theorem~\cite{EH:23}, that the $k$-Hull is always non-empty for $k \leq \lceil n/(m+1)\rceil$, where $m$ is the dimension of the underlying Euclidean space. We, therefore, assume throughout this paper that $1 \leq k \leq \lceil n/(m+1) \rceil$. In particular, the standard convex hull in two dimensions is well-defined for $3$ or more non-collinear points, but the $2$-Hull requires at least $n=5$ points in the plane. Fig.~\ref{fig:constraint} shows the two possible configurations for $n=5, k=2$ for a planar environment. \begin{figure}[htbp] \centering \includegraphics[width=0.8\columnwidth]{khull} \caption{Illustrating the $k$-Hull.
Configurations for the $2$-Hull of $n=5$ points in the plane.} \label{fig:constraint} \end{figure} \begin{remark}[$k$-Hull Computation] While computational complexity is not the focus of our paper, we do point out that $k$-Hulls are efficiently computable. Under the point-hyperplane duality, they correspond to the level $k$ in an arrangement of hyperplanes, and can therefore be computed easily in $O(n^m)$ time in $m$ dimensions. The bound can be improved somewhat using more sophisticated algorithms and analysis. For instance, in the two-dimensional plane, the $k$-Hull contains at most $O(n k^{1/3})$ vertices, and using dynamic convex hull data structures, it can be computed in worst-case time $O(n k^{1/3} \log^2 n)$~\cite{RC-MS-CKY:87}. \end{remark} \medskip \subsection{The Necessary Condition} \medskip In a pleasing generalization of 1-capture, it turns out that the $k$-Hull of the pursuers' locations is precisely the set of evader locations that are $k$-capturable. Throughout, we will use the notation $\intHull{k}$ to denote the \emph{interior} of the $k$-Hull. We have the following theorem stating our necessary condition. \begin{theorem} \label{thm:necessary} The evader $\mathbf{e}$ can be $k$-captured by pursuers $\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_n$ only if $$\mathbf{e}(t) \:\in\: \intHull{k} (\mathbf{p}_1(t),\dots,\mathbf{p}_n(t)),$$ at all times $t\in \mathbb{Z}_{\geq 0}$, where $\intHull{k}$ is the interior of the $k$-Hull. \end{theorem} \begin{proof} The proof is by contradiction. Suppose that the evader's position $\mathbf{e}(t)$ violates the condition of the theorem at some time $t$. Then, there exists a hyperplane $\bar H$ through $\mathbf{e}(t)$ that separates a subset $C$ of the pursuers from the rest, which we call $\bar{C}$, such that $|C| < k$, and therefore $|\bar{C}| \geq n-k+1$.
In this case, the evader can escape by moving according to the following strategy: \begin{quote} {\sl Move with maximum speed in the direction normal to $\bar H$ towards the side containing $C$.} \end{quote} We observe that $\mathbf{e}(t) \not\in \intHull{1} (\bar{C})$---this is because $\mathbf{e}(t)$ lies on $\bar H$, the hyperplane defining the half-space that contains $\bar {C}$. By Lemma~\ref{lem:conv}, therefore, none of the $n-k+1$ pursuers in $\bar{C}$ can catch the evader when the evader uses the above-mentioned strategy. Therefore, only the (at most) $k-1$ pursuers in $C$ can reach the evader, and the $k$-capture of the evader is not possible. This completes the proof. \end{proof} The necessary condition asserts that if there is ever a time when the evader is outside the $k$-Hull of the pursuers, then it has an escape strategy. The main result of our paper, presented in the following section, shows that this necessary condition is also sufficient. In particular, if the evader lies in the pursuers' $k$-Hull at the initial time instant $t=0$, then the pursuers are able to $k$-capture it. (Clearly, if the evader is not inside the $k$-Hull initially, then it can escape, unless it plays sub-optimally and moves inside the pursuers' $k$-Hull at a later time, allowing them to capture it.) \section{Proof of Sufficiency}\label{sec:suff} In this section, we prove our main result, which is to show that the necessary condition of Theorem~\ref{thm:necessary} is also sufficient. The proof, which is constructive, outlines a strategy for the pursuers and derives an upper bound on the time needed for the capture. Our analysis exploits properties of the pursuers' $k$-Hull, and so we begin with some geometric preliminaries. \medskip \subsection{Geometric Preliminaries and an Orientation-Preserving Strategy} \medskip In general, the orientations of the pursuers with respect to the evader will change once the pursuit begins.
We will show, however, that the pursuers can coordinate their moves to preserve their individual directions relative to the evader's location. Such a strategy will allow us to conclude that if the evader is in the $k$-Hull of the pursuers at the initial instant, then it will remain in the $k$-Hull at all subsequent instants. \medskip Let us call a pursuers' strategy \emph{orientation-preserving} if the orientations of the vectors $\mathbf{p}_i-\mathbf{e}$ are preserved throughout the pursuit. We will prove that there is an orientation-preserving $k$-capture strategy for the pursuers. But first, we establish a key geometric lemma about such a strategy. \medskip Let $\mathbf{u}_e(t)$ denote the evader's move at time $t$, where the vector $\mathbf{u}_e$ is a point on the unit sphere $\mathbb{S}$ in $\mathbb{R}^m$. Let $\theta_i(\mathbf{u}_e)$ denote the (smaller) angle between the vectors $\mathbf{p}_i(t)-\mathbf{e}(t)$ and $\mathbf{u}_e(t)$. Define \[ g(\mathbf{u}_e):= \operatorname{max}_k \{\cos\theta_1(\mathbf{u}_e),\ldots,\cos\theta_n(\mathbf{u}_e)\}, \] where $\operatorname{max}_k$ refers to the $k$-th maximum of the $n$ quantities. The following result states that, as long as the pursuers follow an orientation-preserving strategy, one can always find $k$ favorable pursuers at each instant of time, for whom the $k$ respective $\theta$'s are all at most a number which remains invariant over time and is strictly less than $\pi/2$. \begin{lemma} \label{lem:initial_k} Suppose that the evader lies inside the $k$-Hull of the pursuers' initial locations, and the pursuers follow an orientation-preserving strategy throughout the pursuit. Then, the following facts hold at all times: \begin{itemize} \item There exists a $\subscr{\beta}{max} \:<\: \pi/2$, such that at every instant of time, \begin{equation}\label{eq:beta_k} \subscr{\beta}{max}:= \arccos\left(\min_{\mathbf{u}_e \in \mathbb{S}}g(\mathbf{u}_e)\right).
\end{equation} \item After any move by the evader at time $t+1$, there exist at least $k$ pursuers $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ such that $\theta_{j} \;\leq\; \subscr{\beta}{max}$, for all $j \in \{i_1,\ldots,i_k\}$. \end{itemize} \end{lemma} \begin{proof} Since $\mathbf{e}(0) \in \intHull{k} (\mathbf{p}_1(0),\dots,\mathbf{p}_n(0))$, an orientation-preserving strategy will ensure that $\mathbf{e}(t) \in \intHull{k} (\mathbf{p}_1(t), \dots,\mathbf{p}_n(t))$, for all time instants $t$. Thus, for any $t$, the function $g(\mathbf{u}_e)$ is identical to its counterpart at $t=0$, and therefore, for the first claim, it suffices to show the existence of a $\subscr{\beta}{max} < \pi/2$ at time $t=0$ which satisfies Eq.~\eqref{eq:beta_k}. To see this, we can write $g(\mathbf{u}_e)$ at time $t=0$ as \[ g(\mathbf{u}_e) = \operatorname{max}_k \left \{\frac{(\mathbf{p}_1(0)-\mathbf{e}(0)) \cdot \mathbf{u}_e}{\norm{\mathbf{p}_1(0)-\mathbf{e}(0)}},\ldots,\frac{(\mathbf{p}_n(0)-\mathbf{e}(0)) \cdot \mathbf{u}_e}{\norm{\mathbf{p}_n(0)-\mathbf{e}(0)}}\right\}, \] and therefore $g(\cdot)$ is a continuous function of $\mathbf{u}_e$. Since $\mathbf{u}_e \in \mathbb{S}$, which is a compact set, $g(\cdot)$ attains a minimum for some $\mathbf{u}_e^*$ in $\mathbb{S}$. It now remains to show that $g(\mathbf{u}_e^*)>0$. Now, for every choice of $\mathbf{u}_e$, we must have at least $k$ pursuers $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ such that $\theta_{j} \;<\; \pi/2$, for all $j \in \{i_1,\ldots,i_k\}$. If this were not the case for some $\mathbf{\bar u}_e$, then the hyperplane perpendicular to $\mathbf{\bar u}_e$ through $\mathbf{e}$ would have fewer than $k$ pursuers strictly on one of its sides, implying that $\mathbf{e}\notin \intHull{k} (\mathbf{p}_1,\dots,\mathbf{p}_n)$. Thus, for every $\mathbf{u}_e \in \mathbb{S}$, $g(\mathbf{u}_e)>0$ and, in particular, $g(\mathbf{u}_e^*) > 0$. Thus, $\subscr{\beta}{max} = \arccos(g(\mathbf{u}_e^*)) < \pi/2$.
Thus, the first claim is established. \medskip The second claim follows by taking the $k$ pursuers that achieve the $k$ largest cosines: for each such pursuer, $\cos\theta_{j} \geq g(\mathbf{u}_e) \geq \cos\subscr{\beta}{max}$, and hence $\theta_{j} \leq \subscr{\beta}{max}$. \end{proof} \begin{remark}[General Position] Throughout this section, we assume that no two pursuers are collinear with the evader, which implies that the vectors $\mathbf{p}_i(0)-\mathbf{e}(0)$, for all $1\leq i \leq n$, have distinct orientations at $t=0$. We could easily ensure this condition by an initial move by the pursuers, as follows. Suppose $\angle \mathbf{p}_i(0)\mathbf{e}(0)\mathbf{p}_{j}(0) = 0$, for some $i,j$, where the notation $\angle \mathbf{p}\,\mathbf{x}\,\mathbf{q}$ denotes the (smaller) angle between the vectors $\mathbf{p}-\mathbf{x}$ and $\mathbf{q}-\mathbf{x}$. Suppose the evader's initial move is from position $\mathbf{e}(0)$ to $\mathbf{e}(1)$. Then, all the pursuers except $\mathbf{p}_i$ move parallel to $\mathbf{e}(1)-\mathbf{e}(0)$ with step size $\norm{\mathbf{e}(1)-\mathbf{e}(0)}$. The pursuer $\mathbf{p}_i$ also moves with step size $\norm{\mathbf{e}(1)-\mathbf{e}(0)}$, but in a direction making a sufficiently small but positive angle $\alpha$ with $\mathbf{e}(1)-\mathbf{e}(0)$. Since $\intHull{k}(\mathbf{p}_1,\ldots,\mathbf{p}_n)$ is an open set and depends continuously on the pursuer locations, there exists a sufficiently small but positive angle $\alpha$ so that $\mathbf{e}(1)$ still lies inside $\intHull{k}(\mathbf{p}_1,\ldots,\mathbf{p}_n)$ at time $t=1$. If there are multiple collinearities, then the same strategy can be used to break all of them while preserving the invariant that the evader lies inside the $k$-Hull. \end{remark} We are now ready to describe our $k$-capture strategy and prove its correctness.
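To illustrate Lemma~\ref{lem:initial_k}, the quantity $\subscr{\beta}{max}$ can be estimated numerically in the plane. The following Python sketch is ours, not the paper's: a fine angular grid stands in for the sphere $\mathbb{S}$, so the result is approximate. It evaluates $g$ (the $k$-th largest cosine) over candidate directions and returns the arccosine of its minimum:

```python
import math

def beta_max(e, pursuers, k, n_dirs=3600):
    """Numerically estimate beta_max = arccos(min over unit u_e of g(u_e)),
    where g(u_e) is the k-th largest of cos(theta_i) over the pursuers.
    Planar sketch: a fine grid of directions approximates the unit sphere.
    Assumes no pursuer coincides with the evader e."""
    ex, ey = e
    worst = 1.0
    for i in range(n_dirs):
        a = 2 * math.pi * i / n_dirs
        ux, uy = math.cos(a), math.sin(a)          # candidate evader direction
        cosines = []
        for (x, y) in pursuers:
            dx, dy = x - ex, y - ey
            r = math.hypot(dx, dy)
            cosines.append((dx * ux + dy * uy) / r)  # cos(theta_i)
        cosines.sort(reverse=True)
        worst = min(worst, cosines[k - 1])           # k-th maximum
    return math.acos(max(-1.0, min(1.0, worst)))
```

For five pursuers at the vertices of a regular pentagon centered on the evader and $k=2$, the estimate comes out to $2\pi/5$, which is strictly less than $\pi/2$, consistent with the lemma.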
\subsection{A Strategy for {\large $k$}-Capture}\label{sec:strategy} \medskip One simple-minded strategy for capture is to let each pursuer maximally advance towards the evader's new position at each move. Because the evader lies in $\intHull{k}$, this strategy reduces at least one pursuer's distance to $\mathbf{e}$. But it does not ensure that $k$ pursuers reach the evader simultaneously, and so cannot guarantee $k$-capture. Instead, we let only those pursuers that are \emph{not the closest} to the evader execute this kind of move, while those closest to the evader carry out a \emph{parallel} move that maintains their distance and angle to the evader. We call this the \emph{Advance move}. More specifically, the pursuers who are closest to the evader move to maintain their distance and angle to the evader, while the remaining pursuers advance towards the evader. Unfortunately, while this strategy keeps the pursuers safe, it also keeps them away from the evader, and in the worst case all the pursuers may become equidistant from the evader and then stagnate. We, therefore, introduce a second move, called the \emph{Cone move}, which ensures that the distance of the closest pursuers itself decreases, but in such a way that at least $k$ pursuers remain closest to the evader. \medskip The following algorithm describes the overall strategy at a pseudo-code level. The terms $\subscr{\mathbf{P}}{closest}$ and $\operatorname{Cone}$, respectively, denote the set of closest pursuers and a Cone region, and are defined precisely following the algorithm.
\begin{algorithm}[htbp] \KwAssumes{$\mathbf{e}(0)$ satisfies the $k$-Hull necessary condition.} % \textbf{For} each $t = 1,2, \ldots$ and for each $j\in\{1,\dots,n\}$,\\ Determine $\subscr{d}{min}(t) = \norm{\mathbf{p}-\mathbf{e}(t)}$, where $\mathbf{p} \in \subscr{\mathbf{P}}{closest}(t)$\\ \eIf{\textup{$\mathbf{p}_j$ is among $k$ pursuers $\mathbf{p}_{i_1},\ldots,\mathbf{p}_{i_k}$ that are in $\subscr{\mathbf{P}}{closest}(t)$ and in $\operatorname{Cone}(k,t)$}} { $\mathbf{p}_j$ uses Cone move corresponding to $\mathbf{p}_{i_1},\ldots,\mathbf{p}_{i_k}$ }{ $\mathbf{p}_j$ uses Advance move with parameter $\subscr{d}{min}(t)$ } \textbf{end for} \\ \caption{\bf $k$-Capture} \label{algo:strategy_k} \end{algorithm} In the following, we give precise definitions of the Advance move and the Cone move. Informally, the \emph{Advance} move is used by a pursuer to reduce its distance from the evader if it is sufficiently far from the evader. The \emph{Cone} move is used by a pursuer \emph{together} with at least $k-1$ other pursuers, if all of them are among the closest to the evader, and if the evader has made a move which is favorable for those pursuers. When both moves are possible for a pursuer, the Cone move takes priority. \begin{definition}[Advance Move]\label{def:planes} Suppose the evader moves from $\mathbf{e}(t)$ to $\mathbf{e}(t+1)$. Then, given a parameter $d\geq 0$, the \emph{Advance move} of a pursuer $\mathbf{p}_j$ is the following: \begin{itemize} \advance\itemsep by -4pt \item Draw a line through $\mathbf{e}(t+1)$ parallel to the vector $\mathbf{p}_j(t)-\mathbf{e}(t)$. \item Move to the position $\mathbf{p}_j(t+1)$ on this line for which $\abs{d-\norm{\mathbf{e}(t+1)-\mathbf{p}_j(t+1)}}$ is minimized. \end{itemize} \end{definition} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{advance} \caption{Illustrating the Advance move.} \label{fig:planes} \end{figure} Fig.~\ref{fig:planes} illustrates the Advance move.
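As a concrete planar illustration of Definition~\ref{def:planes}, one can sketch the Advance move numerically. This is our own illustrative code, not the paper's: a grid search stands in for the exact one-dimensional minimization, and the unit step bound, which is implicit in the motion model, is made explicit:

```python
import math

def advance_move(p_old, e_old, e_new, d, n_steps=2000):
    """Sketch of the Advance move in the plane: the pursuer moves to the point
    on the line through e_new, parallel to (p_old - e_old), that makes
    |d - dist(e_new, p_new)| smallest among points reachable with a unit step.
    A simple grid search over positions on the line stands in for the exact
    minimization."""
    vx, vy = p_old[0] - e_old[0], p_old[1] - e_old[1]
    r = math.hypot(vx, vy)
    vx, vy = vx / r, vy / r                 # orientation to preserve
    best, best_err = p_old, float("inf")
    smax = r + 2.0                          # generous search range along the line
    for i in range(n_steps + 1):
        s = smax * i / n_steps              # candidate distance from e_new
        q = (e_new[0] + s * vx, e_new[1] + s * vy)
        if math.hypot(q[0] - p_old[0], q[1] - p_old[1]) <= 1.0:  # unit step
            err = abs(d - s)
            if err < best_err:
                best, best_err = q, err
    return best
```

With $d$ set to the pursuer's current distance, this reproduces the parallel move: the pursuer translates by the evader's displacement and keeps both its distance and its orientation to the evader unchanged.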
Our modification to the original Planes algorithm~\cite{SK-CVR:05} is the inclusion of the parameter $d$. This parameter keeps a pursuer $\mathbf{p}_j$ from moving straight towards $\mathbf{e}$ \emph{if} that pursuer is the closest one to the evader. Therefore, with the parameter setting $d = \norm{\mathbf{e}(t)-\mathbf{p}_j(t)}$, the Advance move forces the pursuer $\mathbf{p}_j$ to move \emph{parallel} to the evader, and with exactly the same step size as the evader. This is shown in the right subfigure. The left subfigure shows a generic application of the Advance move. We now describe the Cone move, which is used by $k$ or more pursuers when they are among the closest pursuers to the evader, and when they are located inside a Cone region, which we define next. We show later (cf.~Lemma~\ref{lem:strategy_k}) that after a finite time, there will be at least $k$ closest pursuers, so the following discussion focuses on such pursuers. \medskip Let $\subscr{\mathbf{P}}{closest}(t)$ denote the set of pursuers that are closest to the evader $\mathbf{e}(t)$ at time $t$. That is, \[ \subscr{\mathbf{P}}{closest}(t):= \{\mathbf{p}_i(t) \,: \, i \in \operatorname{argmin}_{1,\dots,n} \norm{\mathbf{p}_i(t)-\mathbf{e}(t)} \}. \] \begin{definition}[Cone] The closed positive cone formed with vertex at $\mathbf{e}(t)$, the axis along $\mathbf{e}(t+1)-\mathbf{e}(t)$ (i.e., along $\mathbf{u}_e(t)$), and with half angle equal to $\subscr{\beta}{max}$ is called the $\operatorname{Cone}(k,t)$. \end{definition} \begin{definition}[Cone Move]\label{def:cone_k} If some $k$ pursuers $\mathbf{p}_{i_1}(t),\ldots,\\ \mathbf{p}_{i_k}(t)$ are in $\subscr{\mathbf{P}}{closest}(t)$ and also in $\operatorname{Cone}(k,t)$, then the Cone move for $\mathbf{p}_{i_1},\ldots,\mathbf{p}_{i_k}$ is defined as follows: \begin{itemize}\advance\itemsep by -4pt \item draw a line $l_j$ through $\mathbf{e}(t+1)$, parallel to $\mathbf{p}_j(t)-\mathbf{e}(t)$, for all $j \in \{i_1,\ldots, i_k\}$. 
\item $\mathbf{p}_j(t+1)$ is the point on line $l_j$ that minimizes $\norm{\mathbf{p}_j(t+1)-\mathbf{e}(t+1)}$ subject to the constraint that \\ $\norm{\mathbf{p}_{i_1}(t+1)-\mathbf{e}(t+1)}=\ldots=\norm{\mathbf{p}_{i_k}(t+1)-\mathbf{e}(t+1)}$. \end{itemize} \end{definition} \begin{figure}[htbp] \centering \includegraphics[width=0.6\columnwidth]{cone} \caption{Illustrating the Cone Move for $k$ pursuers. The shaded region is $\operatorname{Cone}(k,t)$. If the evader moves into $\operatorname{Cone}(k,t)$, then the pursuers $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ move so that their distances to the evader decrease by the same finite amount.} \label{fig:closein_k} \end{figure} Fig.~\ref{fig:closein_k} offers a geometric interpretation of the Cone move. The key intuition behind the Cone move is that the $k$ closest pursuers in $\operatorname{Cone}(k,t)$ can reduce their distance to the evader, and remain in $\subscr{\mathbf{P}}{closest}(t+1)$, using the Cone move. We can give a more precise expression for the new locations in a Cone move, as follows. \medskip Assume without loss of generality that $\theta_{i_1}\geq \theta_{j}$, for all $j \in \{i_1,\ldots,i_{k}\}$. Then, choose $\mathbf{p}_{i_1}(t+1)$ satisfying the following conditions: \begin{itemize} \advance\itemsep by -4pt \item $\norm{\mathbf{p}_{i_1}(t+1)-\mathbf{p}_{i_1}(t)} = 1$, and \item $\angle \mathbf{p}_{i_1}(t+1)\mathbf{p}_{i_1}(t)\mathbf{e}(t) = \arcsin(u_e(t)\sin\theta_{i_1})$, where $u_e(t) = \norm{\mathbf{e}(t+1)-\mathbf{e}(t)}$. \end{itemize} The positions $\mathbf{p}_j(t+1)$, for all $j \in\{i_2,\ldots, i_k\}$, are chosen to satisfy the following conditions: \begin{itemize} \advance\itemsep by -4pt \item$\norm{\mathbf{p}_{j}(t+1)-\mathbf{p}_{j}(t)}^2 = 1 + u_e^2(\cos\theta_{j}-\cos\theta_{i_1})^2 + 2u_e(\cos\theta_{j}-\cos\theta_{i_1})\sqrt{1-u_e^2\sin^2\theta_{i_1}}$. \item $\angle\mathbf{p}_{j}(t+1)\mathbf{p}_{j}(t)\mathbf{e}(t) = \arcsin(u_e(t)\sin\theta_{j})$.
\end{itemize} \medskip \subsection{Proof of {\large $k$}-Capture Sufficiency}\label{sec:strategy_k} \medskip In the rest of this section, we prove that Algorithm~\ref{algo:strategy_k} succeeds. We begin with the observation that $k$-Capture is orientation-preserving, since throughout the algorithm, the direction of the vectors $\mathbf{p}_j-\mathbf{e}$ remains invariant for each $j$. \begin{proposition}[Orientation Preserving]\label{prop:invariants_k} The Algorithm $k$-Capture is orientation-preserving. \end{proposition} Our proof of $k$-Capture depends on three technical lemmas showing, respectively, that some $k$ pursuers become closest to the evader, that every Cone move reduces the minimum distance by a finite amount, and that irrespective of the evader's strategy, the minimum distance decreases by a finite amount. Throughout the following discussion, it is assumed that the pursuers all follow the Algorithm $k$-Capture. \medskip The bound on the capture time depends on $\subscr{d}{min}(0):=\min_{i=1}^n \norm{\mathbf{p}_i(0)-\mathbf{e}(0)}$ and $\subscr{d}{max}(0):=\max_{i=1}^n \norm{\mathbf{p}_i(0)-\mathbf{e}(0)}$, which are the minimum and the maximum distances between a pursuer and the evader at the initial time $t=0$. The following lemma proves the closest pursuers property. \begin{lemma}[$\subscr{\mathbf{P}}{closest}$ cardinality]\label{lem:strategy_k} After a finite time upper bounded by $n(1+\subscr{d}{max}/\cos\subscr{\beta}{max})$, some $k$ pursuers are in the set $\subscr{\mathbf{P}}{closest}$. \end{lemma} \begin{proof} From statement 2 of Lemma~\ref{lem:initial_k}, at every instant of time and for any move of the evader, there exist some $k$ pursuers $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ such that for all $j \in \{ i_1,\ldots, i_k\}$, $\theta_j \leq \subscr{\beta}{max}$. If all of these $k$ pursuers are in $\subscr{\mathbf{P}}{closest}(t)$, then the result follows.
Otherwise, for every $t$, there exists some pursuer (say $\mathbf{p}_j(t)$) out of the $k$ pursuers, which is not in $\subscr{\mathbf{P}}{closest}(t)$, and is such that $\theta_j \leq \subscr{\beta}{max}$. So at time $t+1$, the Advance move by $\mathbf{p}_j$ will ensure that either $\norm{\mathbf{p}_j(t+1)-\mathbf{e}(t+1)}\leq \norm{\mathbf{p}_j(t)-\mathbf{e}(t)}-\cos\subscr{\beta}{max}$ or $\mathbf{p}_j(t+1)\in\subscr{\mathbf{P}}{closest}(t+1)$. \medskip Thus, in the worst case, after at most $n(1+\subscr{d}{max}/\cos\subscr{\beta}{max})$ time instants, some $k$ pursuers must be in $\subscr{\mathbf{P}}{closest}$. \end{proof} Let $\subscr{d}{min}(t)$ be the distance of the closest pursuer from the evader at time $t$. Once $k$ pursuers are in $\subscr{\mathbf{P}}{closest}$, the following lemma establishes a lower bound on the decrease of $\subscr{d}{min}$, assuming that a Cone move, which is favorable for the pursuers, occurs. \begin{lemma} \label{lem:simcap_k} Let $\mathbf{p}_{i_1}, \dots, \mathbf{p}_{i_k} \in \subscr{\mathbf{P}}{closest}$ be $k$ pursuers closest to the evader at time $t$. If these pursuers' next move is a Cone move, then after the pursuers' move, we have \[ \subscr{d}{min}(t+1) \leq \subscr{d}{min}(t) - \cos\subscr{\beta}{max}. \] \end{lemma} \begin{proof} Let $\theta_j$ be the largest among the angles $\theta_{i_1},\dots,\theta_{i_k}$. Using the new locations of the pursuers in the Cone move, we obtain \begin{align*} \subscr{d}{min}(t)-\subscr{d}{min}(t+1) &= u_e\cos\theta_j + 1\cdot \cos \angle \mathbf{p}_j(t+1)\mathbf{p}_j(t)\mathbf{e}(t) \\ &= u_e\cos\theta_j+\sqrt{1-u_e^2\sin^2\theta_j}\\ &\geq \cos\theta_j \geq \cos\subscr{\beta}{max}, \end{align*} since $\theta_j\leq \subscr{\beta}{max}$ from the definition of the Cone region. The lemma follows. \end{proof} Finally, the next lemma derives a lower bound on the decrease of $\subscr{d}{min}$ for the worst-case evader move, while the pursuers follow the strategy of Algorithm $k$-Capture.
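The elementary inequality driving Lemma~\ref{lem:simcap_k}, namely $u_e\cos\theta + \sqrt{1-u_e^2\sin^2\theta} \geq \cos\theta$ for $u_e \in [0,1]$ and $\theta < \pi/2$, can also be sanity-checked numerically. The snippet below is illustrative only and not part of the proof:

```python
import math

def cone_decrease(u_e, theta):
    """Distance decrease of a closest pursuer in one Cone move, as in the
    proof: u_e*cos(theta) + sqrt(1 - u_e^2 * sin(theta)^2)."""
    return u_e * math.cos(theta) + math.sqrt(1.0 - (u_e * math.sin(theta)) ** 2)

# Sanity check on a grid: for every evader step size u_e in [0, 1] and every
# angle theta < pi/2, the decrease is at least cos(theta).
ok = all(
    cone_decrease(u / 100.0, t * (math.pi / 2) / 100.0)
    >= math.cos(t * (math.pi / 2) / 100.0) - 1e-12
    for u in range(101)
    for t in range(100)
)
```

Note the two extremes: a stationary evader ($u_e = 0$) yields a decrease of exactly $1$, and an evader moving straight at the pursuer ($u_e = 1$, $\theta = 0$) yields a decrease of $2$.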
\begin{lemma} \label{lem:dist_k} If some $k$ pursuers become closest to the evader at some time $t$, then the following holds: \begin{itemize} \advance\itemsep by -4pt \item after every subsequent pursuer move, some $k$ pursuers are in $\subscr{\mathbf{P}}{closest}$, and \item after at most $n(1+\subscr{d}{max}/\cos\subscr{\beta}{max})$ pursuer moves, $\subscr{d}{min}$ decreases by at least $\cos\subscr{\beta}{max}$. \end{itemize} \end{lemma} \begin{proof} Let $A$ and $B$ be two groups of pursuers in $\subscr{\mathbf{P}}{closest}$ at time $t$, of which group $A$ comprises some $k$ pursuers. If all pursuers of group $A$ are in the Cone region at time $t$, then group $A$ will make a Cone move, which ensures that all pursuers in $A$ are in $\subscr{\mathbf{P}}{closest}$ at time $t+1$. Thus, the first claim trivially holds. Otherwise, all pursuers in $A$ move parallel to the evader at time $t+1$. Now, if group $B$ does not contain $k$ pursuers, then at time $t+1$, all pursuers in group $B$ are forced to move parallel to the evader, since they do not satisfy the criterion to make a Cone move. Thus, the pursuers in group $A$ satisfy the first claim at time $t+1$. Finally, if group $B$ contains some $k$ pursuers and they are in the Cone region at time $t$, then these $k$ pursuers make a Cone move and satisfy the first claim at time $t+1$. Thus, the first claim holds at all times. \medskip Now, let us consider the second claim. From Proposition~\ref{prop:invariants_k} and statement 2 of Lemma~\ref{lem:initial_k}, at every instant of time and for any move of the evader, there exist some $k$ pursuers $\mathbf{p}_{i_1},\dots,\mathbf{p}_{i_k}$ such that $\theta_j(t) \leq \subscr{\beta}{max}$, for all $j\in\{i_1,\dots, i_k\}$.
We need to consider two cases: \begin{itemize} \item {[All of $\mathbf{p}_{i_1}(t),\dots,\mathbf{p}_{i_k}(t)$ are in $\subscr{\mathbf{P}}{closest}(t)$:]}\\ In this case, the claim follows from Lemma~\ref{lem:simcap_k} because all of these pursuers lie in $\operatorname{Cone}(k,t)$. \item {[At least one out of the $k$ pursuers, say $\mathbf{p}_j(t)$, is not in $\subscr{\mathbf{P}}{closest}(t)$:]}\\ In this case, at time $t+1$, the Advance move by $\mathbf{p}_j$ will ensure that either $\norm{\mathbf{p}_j(t+1)-\mathbf{e}(t+1)}\leq \norm{\mathbf{p}_j(t)-\mathbf{e}(t)}-\cos\subscr{\beta}{max}$ or $\mathbf{p}_j(t+1)\in\subscr{\mathbf{P}}{closest}(t+1)$. Thus, in the worst case, it requires at most $n(1+\subscr{d}{max}/\cos\subscr{\beta}{max})$ moves before all $n$ pursuers are in $\subscr{\mathbf{P}}{closest}$. Then, the next pursuer move is necessarily a Cone move, because for any choice of the evader move, there exist some $k$ pursuers, now equidistant from the evader, which lie in the Cone region. By Lemma~\ref{lem:simcap_k}, the distance of the $k$ closest pursuers from the evader strictly decreases by at least $\cos\subscr{\beta}{max}$. \end{itemize} This completes the proof of the lemma. \end{proof} We can now state our main theorem on $k$-Capture. \begin{theorem}\label{thm:suff_k} If the evader lies in the interior of the pursuers' $k$-Hull at $t=0$, i.e., $\mathbf{e}(0) \in \intHull{k} (\mathbf{p}_1 (0), \ldots, \mathbf{p}_n (0))$, then it can be $k$-Captured in at most $n(1+ \subscr{d}{max}/\cos\subscr{\beta}{max})^2$ moves. \end{theorem} \begin{proof} By Lemma~\ref{lem:strategy_k}, after at most \[ n(1+\subscr{d}{max}/\cos\subscr{\beta}{max}) \] moves, some $k$ pursuers are in $\subscr{\mathbf{P}}{closest}$.
Thereafter, Lemma~\ref{lem:dist_k} ensures that the distance of some $k$ closest pursuers to the evader decreases by at least $\cos\subscr{\beta}{max}$ after every $n(1+ \subscr{d}{max}/ \cos\subscr{\beta}{max})$ moves. Since capture is defined after the pursuers' move, after at most $n(1+ \subscr{d}{max}/\cos\subscr{\beta}{max})\subscr{d}{max}/\cos\subscr{\beta}{max}$ pursuer moves, we obtain $\subscr{d}{min} = 0$, that is, the evader and some $k$ pursuers are coincident, which satisfies the conditions of $k$-capture. An upper bound on the time taken for the $k$-capture of the evader follows by summing the bounds of Lemma~\ref{lem:strategy_k} and Lemma~\ref{lem:dist_k}. This completes the proof of the theorem. \end{proof} \begin{remark}[Lower bound on Capture time] A lower bound on the time taken to capture is $\subscr{d}{max}/\cos\subscr{\beta}{max}$. To see this, consider the following initial condition and evader strategy. The evader's strategy is to move along a fixed vector $\mathbf{u}_e$ with unit step. Let $\mathbf{p}_1, \dots, \mathbf{p}_k$ be furthest from the evader initially, and be located on the boundary of the resulting $\operatorname{Cone}(k,0)$. The rest of the pursuers are located outside $\operatorname{Cone}(k,0)$. This evader strategy and the initial pursuer locations ensure that the evader is captured after a time of at least $\subscr{d}{max}/\cos\subscr{\beta}{max}$, independent of the pursuers' strategy. \end{remark} \section{Bounded Environments}\label{sec:compact} In this section, we show a simple strategy for $k$-capture that always succeeds in a compact and convex subset of a Euclidean space. If every pursuer were to use an established strategy by Sgall~\cite{JS:01} independently of the other pursuers, at each instant of time, then the distance between each pursuer and the evader would decrease to zero, but at different instants in time. 
Although this approach does not guarantee $k$-capture in general, it suggests that, intuitively, it should be possible to coordinate the moves of the pursuers to achieve $k$-capture from any set of initial locations in the environment. Therefore, in contrast with the previous sections, wherein there existed a necessary condition for $k$-capture, we will now directly present a strategy which requires $k$ pursuers, and which achieves $k$-capture of the evader in at most $O(D^2)$ time steps, where $D$ is the diameter of the environment. \medskip Our strategy comprises two phases. The first phase is an \emph{initializing} move, which gets the pursuers into a favorable formation so that they can apply the steps in the second phase. In particular, the initializing move will show that it is possible to achieve a configuration of the pursuers and the evader such that $k-1$ pursuers are located between a \emph{lead} pursuer and the evader. \medskip The second phase will mimic Sgall's strategy~\cite{JS:01} for the lead pursuer, while the other $k-1$ pursuers will maintain the invariant of being located between the lead pursuer and the evader at all times. Since the pursuers' initial locations are sufficiently close to each other, the evader gets captured if it moves to the location of any pursuer. We show that this phase terminates with the evader being $k$-captured. Let us begin with the Initializing move. \subsection{Initializing Move} \medskip In this phase, the pursuers first group themselves such that they are located inside a sphere of radius one-half. This essentially means that every pursuer can reach the location of any other pursuer in one time step. \medskip Now, consider a closed sphere $O$ of radius one-half which contains the pursuers at time $t = 0$. Let $\ell$ denote the intersection of the sphere $O$ with the line joining the evader's location at time $t=1$ to the center of $O$.
Now, independent of the location $\mathbf{e}(1)$, it is always possible to find $k$ distinct locations $\mathbf{p}_1(1),\dots,\mathbf{p}_k(1)$, each contained in $\ell$, such that $\mathbf{p}_1(1),\dots,\mathbf{p}_k(1)$ are collinear with $\mathbf{e}(1)$ and $\mathbf{p}_2(1),\dots,\mathbf{p}_k(1)$ lie between $\mathbf{p}_1(1)$ and $\mathbf{e}(1)$. Figure~\ref{fig:initial} shows an illustration of this move. \begin{figure}[h] \centering \includegraphics[width=0.3\columnwidth]{initial} \caption{Illustrating the initializing move. It is always possible to ensure that the pursuers are collinear with the evader and within a unit distance of each other. In this figure, the circle centered at $O$ has radius one-half.} \label{fig:initial} \end{figure} This terminates the initializing move, and we are now ready to present the $k$-capture strategy. \medskip \subsection{An algorithm for $k$-Capture} \medskip At each time instant $t$, $\mathbf{p}_1$ makes the Sgall move, described as follows. \begin{enumerate} \item Join $\mathbf{e}(t-1)$ and $\mathbf{p}_1(t-1)$, and extend this line beyond $\mathbf{p}_1(t-1)$ to intersect the boundary of the environment at $C$. \item Move to the point on the line joining $\mathbf{e}(t)$ and $C$ that is closest to $\mathbf{e}(t)$. \end{enumerate} All other pursuers pick distinct points between $\mathbf{p}_1(t)$ and $\mathbf{e}(t)$. This strategy is illustrated in Figure~\ref{fig:sgall}, and is summarized in Algorithm~\ref{algo:compact}. \begin{figure}[htbp] \centering \includegraphics[width=0.3\columnwidth]{lion} \caption{Illustrating a move of Algorithm~\ref{algo:compact}.
Pursuer $\mathbf{p}_1$ follows the Sgall move, while all the others pick distinct points between $\mathbf{p}_1$ and $\mathbf{e}$ to move to.} \label{fig:sgall} \end{figure} \begin{algorithm}[h] \KwAssumes{The players are in a configuration resulting from the Initializing move.} % \textbf{For} each $t = 1,2, \ldots$,\\ \quad $\mathbf{p}_1$ makes the \emph{Sgall Move}.\\ \quad \textbf{For} each $j\in \{2,\dots,k\}$,\\ \qquad $\mathbf{p}_j$ moves to the furthest point from $\mathbf{p}_1$ between $\mathbf{p}_1(t)$ and $\mathbf{e}(t)$, and on the line joining $\mathbf{p}_1(t)$ and $\mathbf{e}(t)$. \\ \quad \textbf{end for} \\ \textbf{end for} \\ \caption{\bf Sgall-like strategy} \label{algo:compact} \end{algorithm} Thus, we obtain the following result. \begin{proposition} With the initializing move and subsequently Algorithm~\ref{algo:compact}, the pursuers $k$-capture the evader in $O(D^2)$ time steps, where $D$ is the diameter of the compact environment. \end{proposition} \begin{proof} Since $\mathbf{p}_1$ uses the Sgall move throughout the pursuit, $\norm{\mathbf{e}-\mathbf{p}_1}$ becomes equal to zero in at most $O(D^2)$ time steps. Each pursuer move in step 4 of the algorithm exists since the environment is convex. Thus, the remaining $k-1$ pursuers ensure that the evader is $k$-captured when $\mathbf{p}_1$ becomes coincident with $\mathbf{e}$. \end{proof} \section{Closing Remarks}\label{sec:conclusions} In this paper, we introduced a new variant of the classical pursuit-evasion problem in an $m$-dimensional Euclidean space, which requires multiple pursuers to simultaneously reach the evader for capture. We showed that, for $k$-capture to occur, the evader must lie inside the $k$-Hull, in a pleasing generalization of the convex hull rule for single-pursuer capture. The main result of the paper was to show that this simple necessary condition is also sufficient.
The proof of this sufficiency relied on a new pursuit strategy combining an Advance move, which is a modified version of a known Planes algorithm, with a new type of Cone move, which requires careful coordination among the pursuers. For a version of this problem played in a compact and convex environment, we showed that $k$-capture is always possible. \medskip Our work suggests a number of intriguing problems for future research. Interesting directions include improving the upper bound on the time taken to capture the evader and addressing versions of this problem in general environments with obstacles.
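As an illustration of the strategy analyzed above, one round of the Sgall move can be sketched in code. The sketch below is ours, not code from the paper: it assumes a disk environment of radius $R$ centered at the origin, unit-speed players, and hypothetical helper names (`sgall_move`).

```python
import math

def sgall_move(p1, e_prev, e_new, R, speed=1.0):
    """One round of the Sgall move in a disk of radius R centered at the
    origin (an illustrative special case of a compact convex environment).

    Step 1: extend the ray from e_prev through p1 until it meets the
    boundary at C.  Step 2: move to the point on the line through e_new
    and C that is closest to e_new among the points within `speed` of p1.
    """
    # Step 1: boundary point C on the ray e_prev + t * u, t >= 0.
    ux, uy = p1[0] - e_prev[0], p1[1] - e_prev[1]
    n = math.hypot(ux, uy)
    ux, uy = ux / n, uy / n
    b = e_prev[0] * ux + e_prev[1] * uy
    t = -b + math.sqrt(b * b - (e_prev[0] ** 2 + e_prev[1] ** 2 - R * R))
    C = (e_prev[0] + t * ux, e_prev[1] + t * uy)

    # Step 2: parametrize the target line as q(s) = e_new + s * w, s >= 0.
    wx, wy = C[0] - e_new[0], C[1] - e_new[1]
    m = math.hypot(wx, wy)
    wx, wy = wx / m, wy / m
    px, py = e_new[0] - p1[0], e_new[1] - p1[1]
    gap2 = px * px + py * py
    if gap2 <= speed * speed:
        return e_new  # the evader itself is reachable: capture
    # |q(s) - p1| = speed gives s^2 + 2*B*s + (gap2 - speed^2) = 0;
    # the smaller nonnegative root is the reachable point closest to
    # e_new (real roots are expected while the configuration holds).
    B = wx * px + wy * py
    s = max(0.0, -B - math.sqrt(B * B - (gap2 - speed * speed)))
    return (e_new[0] + s * wx, e_new[1] + s * wy)
```

The lead pursuer lands on the line through the evader's new position and the boundary point $C$, never travels more than its speed bound, and strictly reduces its distance to the evader.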
\section{Introduction} \label{p} Standard systems of equations in fluid mechanics, including the Navier--Stokes--Fourier system governing the motion of a compressible, viscous, and heat conducting fluid, are well posed in the class of strong solutions on a possibly short time interval $[0,T_{\rm max})$. The recent results of Merle et al. \cite{MeRaRoSz}, \cite{MeRaRoSzbis} strongly indicate that $T_{\rm max}$ may be finite, at least in the idealized case of ``isentropic'' viscous flow. Conditional regularity results guarantee that a blow up will not occur as soon as some lower order norms of solutions are controlled. We consider the \emph{Navier--Stokes--Fourier system} governing the time evolution of the mass density $\varrho = \varrho(t,x)$, the (absolute) temperature $\vartheta = \vartheta(t,x)$, and the velocity $\vc{u} = \vc{u}(t,x)$ of a compressible, viscous, and heat conducting fluid: \begin{mdframed}[style=MyFrame] \begin{align} \partial_t \varrho + {\rm div}_x (\varrho \vc{u}) &= 0, \label{i1} \\ \partial_t (\varrho \vc{u}) + {\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x p(\varrho, \vartheta) &= {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u}) + \varrho \vc{f},\ \mathbb{D}_x \vc{u} = \frac{1}{2} \left( \nabla_x \vc{u} + \nabla_x^t \vc{u} \right), \label{i2} \\ \partial_t (\varrho e(\varrho, \vartheta)) + {\rm div}_x (\varrho e (\varrho, \vartheta) \vc{u}) + {\rm div}_x \vc{q}(\nabla_x \vartheta) &= \mathbb{S} (\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} - p (\varrho, \vartheta) {\rm div}_x \vc{u}. \label{i3} \end{align} \end{mdframed} \noindent The fluid is Newtonian; the viscous stress $\mathbb{S}$ is given by Newton's rheological law \begin{equation} \label{i4} \mathbb{S}(\mathbb{D}_x \vc{u}) = 2\mu \left( \mathbb{D}_x \vc{u} - \frac{1}{3} {\rm div}_x \vc{u} \mathbb{I} \right) + \eta {\rm div}_x \vc{u} \mathbb{I},\ \mu > 0,\ \eta \geq 0.
\end{equation} The heat flux obeys Fourier's law \begin{equation} \label{i5} \vc{q}(\nabla_x \vartheta) = - \kappa \nabla_x \vartheta,\ \kappa > 0. \end{equation} The equation of state for the pressure $p$ and the internal energy $e$ is given by the standard Boyle--Mariotte law of perfect gas, \begin{equation} \label{i6} p(\varrho, \vartheta) = \varrho \vartheta, \ e (\varrho, \vartheta) = c_v \vartheta,\ c_v > 0. \end{equation} For the sake of simplicity, we suppose that the viscosity coefficients $\mu$, $\eta$, the heat conductivity coefficient $\kappa$, as well as the specific heat at constant volume $c_v$ are constant. There is a large number of recent results concerning conditional regularity for the Navier--Stokes--Fourier system in terms of various norms. Fan, Jiang, and Ou \cite{FaJiOu} consider a bounded fluid domain $\Omega \subset R^3$ with the conservative boundary conditions \begin{equation} \label{i7} \vc{u}|_{\partial \Omega} = 0, \ \nabla_x \vartheta \cdot \vc{n}|_{\partial \Omega} = 0. \end{equation} The same problem is studied by Sun, Wang, and Zhang \cite{SuWaZh} and later by Huang, Li, and Wang \cite{Huang}. There are results for the Cauchy problem $\Omega = R^3$ by Huang and Li \cite{HuaLi}, and Jiu, Wang, and Ye \cite{JiuWanYe}. Possibly the best result so far has been established in \cite{FeWeZh}, where the blow up criterion for both the Cauchy problem and the boundary value problem \eqref{i7} is formulated in terms of the maximum of the density and a Serrin type regularity for the temperature: \[ \limsup_{t \to T_{\rm max}-} \left( \| \varrho(t, \cdot) \|_{L^\infty} + \| \vartheta - \vartheta_\infty \|_{L^s(0,t; L^r)} \right) = \infty, \ \frac{3}{2} < r \leq \infty,\ 1 \leq s \leq \infty,\ \frac{2}{s} + \frac{3}{r} \leq 2, \] where $\vartheta_\infty$ denotes the far field temperature in the Cauchy problem, cf. also the previous results by Wen and Zhu \cite{WenZhu1}, \cite{WenZhu2}.
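The exponent constraints in this Serrin type condition are easy to check mechanically. The following helper is our own illustration (not part of any cited work), treating an $L^\infty$ norm as the exponent $\infty$:

```python
def serrin_admissible(s, r):
    """Check the exponent constraints of the Serrin type criterion:
    3/2 < r <= infinity, 1 <= s <= infinity, and 2/s + 3/r <= 2.
    Pass float('inf') for an L^infinity norm in time or space.
    """
    if not (r > 1.5 and s >= 1.0):
        return False
    # Interpret 1/infinity as 0.
    inv = lambda q: 0.0 if q == float("inf") else 1.0 / q
    return 2.0 * inv(s) + 3.0 * inv(r) <= 2.0
```

For instance, the pair $(s, r) = (2, 6)$ is admissible since $2/2 + 3/6 = 3/2 \leq 2$, while $(s, r) = (1, 2)$ is not.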
Much less is known in the case of the Dirichlet boundary conditions \begin{equation} \label{i8} \vc{u}|_{\partial \Omega} = \vc{u}_B,\ \vartheta|_{\partial \Omega} = \vt_B. \end{equation} Fang, Zi, and Zhang \cite{FaZiZh} showed that a strong solution of the Navier--Stokes--Fourier system remains regular up to a time $T > 0$ if (i) $\Omega \subset R^2$ is a bounded domain, (ii) $\vc{u}_B = 0$, $\vt_B = 0$, and (iii) \begin{equation} \label{i9} \limsup_{t \to T-} \left( \| \varrho \|_{L^\infty} + \| \vartheta \|_{L^\infty} \right) < \infty. \end{equation} All results mentioned above describe fluids in a conservative regime, meaning solutions are close to equilibrium in the long run. However, many real-world applications concern fluids out of equilibrium driven by possibly large driving forces $\vc{f}$ and/or inhomogeneous boundary conditions. The iconic examples are the Rayleigh--B\' enard and Taylor--Couette flows, where the fluid is driven to a turbulent regime by a large temperature gradient and a large boundary velocity, respectively; see Davidson \cite{DAVI}. Motivated by these physically relevant examples, we consider a fluid confined to a bounded domain $\Omega \subset R^3$ with \emph{impermeable boundary}, where the temperature and the (tangential) velocity are given on $\partial \Omega$, \begin{mdframed}[style=MyFrame] \begin{align} \vartheta|_{\partial \Omega} &= \vt_B,\ \vt_B = \vt_B(x),\ \vt_B > 0 \ \mbox{on}\ \partial \Omega, \label{i10} \\ \vc{u}|_{\partial\Omega} &= \vc{u}_B,\ \vc{u}_B = \vc{u}_B(x),\ \vc{u}_B \cdot \vc{n} = 0 \ \mbox{on}\ \partial \Omega. \label{i11} \end{align} \end{mdframed} \noindent The initial state of the fluid is prescribed: \begin{mdframed}[style=MyFrame] \begin{equation} \label{i12} \varrho(0, \cdot) = \varrho_0,\ \varrho_0 > 0 \ \mbox{in}\ \Ov{\Omega},\ \vartheta(0, \cdot) = \vartheta_0,\ \vartheta_0 > 0 \ \mbox{in}\ \Ov{\Omega},\ \vc{u}(0, \cdot) = \vc{u}_0.
\end{equation} \end{mdframed} \noindent The initial and boundary data are supposed to satisfy suitable \emph{compatibility conditions} specified below. The existence of local in time strong solutions for the problem \eqref{i1}--\eqref{i6}, endowed with the inhomogeneous boundary conditions \eqref{i10}, \eqref{i11}, was established by Valli \cite{Vall2}, \cite{Vall1}, see also Valli and Zajaczkowski \cite{VAZA}. The solution exists on a maximal time interval $[0, T_{\rm max})$, $T_{\rm max} > 0$. Our goal is to show that if $T_{\rm max} < \infty$, then necessarily \begin{equation} \label{i13} \limsup_{t \to T_{\rm max}-} \Big( \| \varrho (t, \cdot) \|_{L^\infty(\Omega)} + \| \vartheta (t, \cdot) \|_{L^\infty(\Omega)} + \| \vc{u} (t, \cdot) \|_{L^\infty(\Omega; R^3)} \Big) = \infty. \end{equation} The proof is based on deriving suitable {\it a priori} bounds assuming boundedness of all norms involved in \eqref{i13} as well as the norm of the initial/boundary data in a suitable function space. Although our approach shares some similarity with that of Fang, Zi, and Zhang \cite{FaZiZh}, essential modifications must be made to accommodate the inhomogeneous boundary data as well as the driving force $\vc{f}$. The importance of conditional regularity results in numerical analysis of flows with uncertain initial data was discussed recently in \cite{FeiLuk2021}. The paper is organized as follows. In Section \ref{M}, we introduce the class of strong solutions to the Navier--Stokes--Fourier system and state our main result concerning conditional regularity. The remaining part of the paper is devoted to the proof of the main result -- deriving suitable {\it a priori} bounds. In Section \ref{e} we recall the standard energy estimates that hold even in the class of weak solutions. Section \ref{g} is the heart of the paper. We establish the necessary estimates on the velocity gradient by means of the celebrated Gagliardo--Nirenberg interpolation inequality.
In Section \ref{s}, higher order estimates on the velocity gradient are derived, and, finally, the estimates are closed by proving bounds on the temperature time derivative in Section \ref{d}. This last part borrows the main ideas from \cite{FeNoSun1}. \section{Strong solutions, main result} \label{M} We start the analysis by recalling the concept of strong solution introduced by Valli \cite{Vall1}. Similarly to the boundary data $\vc{u}_B$, $\vt_B$, we suppose that the driving force $\vc{f} = \vc{f}(x)$ is independent of time, meaning we deal with an autonomous problem. Following \cite{Vall1}, we suppose that $\Omega \subset R^3$ is a bounded domain with $\partial \Omega$ of class $C^4$. We assume the data belong to the following class: \begin{align} \varrho_0 &\in W^{3,2}(\Omega),\ 0 < \underline{\varrho}_0 \leq \min_{x \in \Omega} \varrho_0 (x), \nonumber \\ \vartheta_0 &\in W^{3,2}(\Omega),\ 0 < \underline{\vartheta}_0 \leq \min_{x \in \Omega} \vartheta_0 (x), \nonumber \\ \vc{u}_0 &\in W^{3,2}(\Omega; R^3), \nonumber \\ \vt_B &\in W^{\frac{7}{2},2} (\partial \Omega),\ 0 < \underline{\vartheta}_B \leq \min_{x \in \partial \Omega} \vt_B (x), \nonumber \\ \vc{u}_B &\in W^{\frac{7}{2},2} (\partial \Omega; R^3),\ \vc{u}_B \cdot \vc{n} = 0, \nonumber \\ \vc{f}& \in W^{2,2}(\Omega; R^3). \label{M1} \end{align} In addition, the data must satisfy the compatibility conditions \begin{align} \vartheta_0 = \vt_B,\ \vc{u}_0 &= \vc{u}_B \ \mbox{on}\ \partial \Omega, \nonumber \\ \varrho_0 \vc{u}_0 \cdot \nabla_x \vc{u}_0 + \nabla_x p(\varrho_0, \vartheta_0) &= {\rm div}_x \mathbb{S} (\mathbb{D}_x \vc{u}_0) + \varrho_0 \vc{f} \ \mbox{on}\ \partial \Omega, \nonumber \\ \varrho_0 \vc{u}_0 \cdot \nabla_x \vartheta_0 + {\rm div}_x \vc{q}(\nabla_x \vartheta_0) &= \mathbb{S}(\mathbb{D}_x \vc{u}_0) : \mathbb{D}_x \vc{u}_0 - p(\varrho_0, \vartheta_0) {\rm div}_x \vc{u}_0 \ \mbox{on}\ \partial \Omega.
\label{M2} \end{align} We set \begin{equation} \label{M3} \mathcal{D}_0 = \max\left\{ \| (\varrho_0, \vartheta_0, \vc{u}_0) \|_{W^{3,2}(\Omega; R^5)}, \frac{1}{\underline{\varrho}_0},\ \frac{1}{\underline{\vartheta}_0}, \frac{1}{\underline{\vartheta}_B}, \| \vt_B \|_{W^{\frac{7}{2},2}(\partial \Omega)}, \| \vc{u}_B \|_{W^{\frac{7}{2},2}(\partial \Omega; R^3)}, \| \vc{f} \|_{W^{2,2}(\Omega; R^3)} \right\}. \end{equation} \subsection{Local existence} The following result was proved by Valli \cite[Theorem A]{Vall1} (see also \cite{Vall2}). \begin{Theorem} \label{TVal} {\bf (Local existence of strong solutions)} Let $\Omega \subset R^3$ be a bounded domain of class $C^4$. Suppose that the data $(\varrho_0, \vartheta_0, \vc{u}_0)$, $(\vt_B, \vc{u}_B)$ and $\vc{f}$ belong to the class \eqref{M1} and satisfy the compatibility conditions \eqref{M2}. Then there exists a maximal time $T_{\rm max} > 0$ such that the Navier--Stokes--Fourier system \eqref{i1}--\eqref{i6}, with the boundary conditions \eqref{i10}, \eqref{i11}, and the initial conditions \eqref{i12}, admits a solution $(\varrho, \vartheta, \vc{u})$ in $[0, T_{\rm max}) \times \Omega$, unique in the class \begin{align} \varrho,\ \vartheta &\in C([0,T]; W^{3,2}(\Omega)),\ \vc{u} \in C([0,T]; W^{3,2}(\Omega; R^3)), \nonumber \\ \vartheta &\in L^2(0,T; W^{4,2}(\Omega)),\ \vc{u} \in L^2(0,T; W^{4,2}(\Omega; R^3)) \label{M4} \end{align} for any $0 < T < T_{\rm max}$. The existence time $T_{\rm max}$ is bounded below by a quantity $c(\mathcal{D}_0)$ depending solely on the norms of the data specified in \eqref{M3}. In particular, if $T_{\rm max} < \infty$, then \begin{equation} \label{M5} \lim_{\tau \to T_{\rm max}-} \| (\varrho, \vartheta, \vc{u}) (\tau, \cdot) \|_{W^{3,2}(\Omega; R^5)} = \infty. \end{equation} \end{Theorem} \subsection{Blow up criterion, conditional regularity} \label{c} Our goal is to show the following result.
\begin{mdframed}[style=MyFrame] \begin{Theorem} \label{MT} {\bf (Blow up criterion)} Under the hypotheses of Theorem \ref{TVal}, suppose that the maximal existence time $T_{\rm max} < \infty$ is finite. Then \begin{equation} \label{M6} \limsup_{\tau \to T_{\rm max}-} \left\| (\varrho, \vartheta, \vc{u})(\tau, \cdot) \right\|_{L^\infty (\Omega; R^5)} = \infty. \end{equation} \end{Theorem} \end{mdframed} Theorem \ref{MT} is in the spirit of the blow up criteria for general parabolic systems -- the solution remains regular as long as it is bounded. Of course, our problem in question is of mixed hyperbolic--parabolic type. The proof of Theorem \ref{MT} follows from suitable {\it a priori} bounds applied on a compact time interval. \begin{Proposition} \label{PT} {\bf (Conditional regularity)} \noindent Under the hypotheses of Theorem \ref{TVal}, let $(\varrho, \vartheta, \vc{u})$ be the strong solution of the Navier--Stokes--Fourier system belonging to the class \eqref{M4} and satisfying \begin{equation} \label{c1} \sup_{(\tau,x) \in [0,T) \times \Omega} \varrho (\tau,x) \leq \Ov{\varrho},\ \sup_{(\tau,x) \in [0,T) \times \Omega} \vartheta (\tau,x) \leq \Ov{\vartheta},\ \sup_{(\tau ,x) \in [0,T) \times \Omega} | \vc{u} (\tau,x) | \leq \Ov{u} \end{equation} for some $T < T_{\rm max}$. Then there is a quantity $c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u})$, bounded for bounded arguments, such that \begin{equation} \label{M7} \sup_{\tau \in [0,T)} \max \left\{ \| (\varrho, \vartheta, \vc{u}) (\tau, \cdot) \|_{W^{3,2}(\Omega; R^5)} ; \sup_{x \in \Omega} \frac{1}{\varrho (\tau,x) } ; \sup_{x \in \Omega} \frac{1}{\vartheta (\tau,x) } \right\} \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u}). \end{equation} \end{Proposition} In view of Theorem \ref{TVal}, the conclusion of Theorem \ref{MT} follows from Proposition \ref{PT}. The rest of the paper is therefore devoted to the proof of Proposition \ref{PT}. 
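For the reader's convenience, let us spell out the standard contradiction argument behind this implication. If \eqref{M6} failed, there would exist constants $\Ov{\varrho}$, $\Ov{\vartheta}$, $\Ov{u}$ such that the hypothesis \eqref{c1} holds for every $T < T_{\rm max}$; as the quantity in \eqref{M7} is bounded for bounded arguments, Proposition \ref{PT} then yields
\[
\sup_{\tau \in [0, T_{\rm max})} \| (\varrho, \vartheta, \vc{u})(\tau, \cdot) \|_{W^{3,2}(\Omega; R^5)} \leq c(T_{\rm max}, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u}) < \infty,
\]
in direct contradiction with the blow up of the $W^{3,2}$ norm stated in \eqref{M5}.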
\begin{Remark} \label{Rem1} As observed in \cite{FeiLuk2022}, the conditional regularity results established in Proposition \ref{PT} give rise to \emph{stability} with respect to the data. More specifically, the maximal existence time $T_{\rm max}$ is a lower semicontinuous function of the data with respect to the topologies in \eqref{M1}. \end{Remark} \begin{Remark} \label{Rem2} Conditional regularity results in combination with the weak--strong uniqueness principle in the class of measure--valued solutions are an efficient tool for proving convergence of numerical schemes; see \cite[Chapter 11]{FeLMMiSh}. The concept of measure--valued solutions to the Navier--Stokes--Fourier system with inhomogeneous Dirichlet boundary conditions has been introduced recently by Chaudhuri \cite{Chaudh}. \end{Remark} \section{Energy estimates} \label{e} To begin, it is convenient to extend the boundary data into $\Omega$. For definiteness, we consider the (unique) solutions of the Dirichlet problem \begin{equation} \label{e5} \begin{aligned} \Delta_x \tilde \vartheta &= 0 \ \mbox{in}\ \Omega,\ \tilde \vartheta|_{\partial \Omega} = \vt_B, \\ {\rm div}_x \mathbb{S}(\mathbb{D}_x {\tilde \vc{u}}) &= 0 \ \mbox{in}\ \Omega,\ {\tilde \vc{u}}|_{\partial \Omega} = \vc{u}_B. \end{aligned} \end{equation} By abuse of notation, we use the same symbols $\vt_B$, $\vc{u}_B$ for both the boundary values and their $C^1$ extensions $\tilde \vartheta = \tilde \vartheta(x)$, ${\tilde \vc{u}} = {\tilde \vc{u}}(x)$ inside $\Omega$.
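For orientation, the first equation in \eqref{e5} is a standard Dirichlet problem for Laplace's equation. The following finite-difference sketch (ours, purely illustrative, with boundary data of our own choosing) shows the type of computation involved; it is in no way part of the analysis.

```python
def harmonic_extension(boundary, n=21, iters=2000):
    """Extend Dirichlet data harmonically into the unit square:
    a plain Jacobi iteration for Laplace's equation (a discrete,
    illustrative analogue of the temperature extension in (e5))."""
    h = 1.0 / (n - 1)
    u = [[0.0] * n for _ in range(n)]
    # Impose the boundary data on the four edges of the grid.
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j] = boundary(i * h, j * h)
    for _ in range(iters):
        v = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Discrete mean value property of harmonic functions.
                v[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                  + u[i][j - 1] + u[i][j + 1])
        u = v
    return u
```

With the (already harmonic) data $g(x,y) = x$, the iteration reproduces the extension $\tilde \vartheta(x,y) = x$ up to iteration error.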
We start with the ballistic energy equality, see \cite[Section 2.4]{ChauFei}, \begin{align} \frac{{\rm d} }{\,{\rm d} t } &\intO{ \left( \frac{1}{2} \varrho |\vc{u} - \vc{u}_B|^2 + \varrho e - \vt_B \varrho s \right) } +\intO{ \frac{\vt_B}{\vartheta} \left( \mathbb{S}(\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} + \kappa \frac{ |\nabla_x \vartheta|^2 }{\vartheta} \right) } \nonumber \\ &= - \intO{ \Big( \varrho \vc{u} \otimes \vc{u} + p \mathbb{I} - \mathbb{S} (\mathbb{D}_x \vc{u}) \Big) : \mathbb{D}_x \vc{u}_B } + \frac{1}{2} \intO{ \varrho \vc{u} \cdot \nabla_x |\vc{u}_B|^2 } \nonumber \\ &+ \intO{ \varrho (\vc{u} - \vc{u}_B) \cdot \vc{f} } - \intO{ \varrho s \vc{u} \cdot \nabla_x \vt_B } + \kappa \intO{ \frac{\nabla_x \vartheta}{\vartheta} \cdot \nabla_x \vt_B }, \label{e1} \end{align} where we have introduced the entropy \[ s = c_v \log (\vartheta) - \log(\varrho). \] Thus the choice \eqref{e5} yields the following bounds: \begin{align} \sup_{t \in [0,T) } \intO{ \varrho | \log(\vartheta) | (t, \cdot) } \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{e2}\\ \int_0^T \intO{ |\nabla_x \vc{u} |^2 } \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \ \Rightarrow \ \int_0^T \| \vc{u} \|^2_{W^{1,2}(\Omega; R^3) } \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{e3}\\ \int_0^T \intO{ \left( |\nabla_x \vartheta |^2 + |\nabla_x \log(\vartheta) |^2 \right) } \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \nonumber \\ \Rightarrow \ \int_0^T \| \vartheta \|^2_{W^{1,2}(\Omega) } \,{\rm d} t + \int_0^T \| \log (\vartheta) \|^2_{W^{1,2}(\Omega) } \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \label{e4} \end{align} \section{Estimates of the velocity gradient} \label{g} This section is the heart of the paper.
In principle, we follow arguments similar to those of Fang, Zi, and Zhang \cite[Section 3]{FaZiZh}, here adapted to the inhomogeneous boundary conditions. \subsection{Estimates of the velocity material derivative} Let us introduce the material derivative of a function $g$, \[ D_t g = \partial_t g + \vc{u} \cdot \nabla_x g. \] Accordingly, we may rewrite the momentum equation \eqref{i2} as \begin{equation} \label{g1} \varrho D_t \vc{u} + \nabla_x p = {\rm div}_x \mathbb{S} + \varrho \vc{f}. \end{equation} Now, consider the scalar product of the momentum equation \eqref{g1} with $D_t (\vc{u} - \vc{u}_B)$, \begin{equation} \label{g1b} \varrho |D_t \vc{u}|^2 + \nabla_x p \cdot D_t (\vc{u} - \vc{u}_B) = {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u}) \cdot D_t (\vc{u} - \vc{u}_B) + \varrho \vc{f} \cdot D_t (\vc{u}- \vc{u}_B) + \varrho D_t \vc{u} \cdot D_t \vc{u}_B. \end{equation} The next step is integrating \eqref{g1b} over $\Omega$. Here and hereafter we use the hypothesis $\vc{u}_B \cdot \vc{n}|_{\partial \Omega} = 0$; as $\vc{u}$ coincides with the tangential field $\vc{u}_B$ on $\partial \Omega$ and $\vc{u} - \vc{u}_B$ vanishes there, we get \begin{equation} \label{g1a} D_t (\vc{u} - \vc{u}_B) |_{\partial \Omega} = \left( \partial_t \vc{u} + \vc{u} \cdot \nabla_x (\vc{u} - \vc{u}_B) \right) |_{\partial \Omega} = \vc{u}_B \cdot \nabla_x (\vc{u} - \vc{u}_B)|_{\partial \Omega} = 0.
\end{equation} Writing \[ {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u}) = \mu \Delta_x \vc{u} + \left( \eta+ \frac{\mu}{3} \right) \nabla_x {\rm div}_x \vc{u}, \] and making use of \eqref{g1a} we obtain \begin{align} &\intO{ {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{u}) \cdot D_t (\vc{u} - \vc{u}_B) } \nonumber \\ =& - \intO{ \mathbb{S}(\mathbb{D}_x \vc{u}) : \nabla_x \partial_t \vc{u} } \nonumber \\ &- \mu \intO{ \nabla_x \vc{u} : \nabla_x \big(\vc{u} \cdot \nabla_x (\vc{u} - \vc{u}_B) \big) } - \left( \eta+ \frac{\mu}{3} \right) \intO{ {\rm div}_x \vc{u} \ {\rm div}_x \big(\vc{u} \cdot \nabla_x (\vc{u} - \vc{u}_B)\big) } \nonumber \\ =& - \frac{1}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ \mathbb{S} (\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } \nonumber \\ &-\mu \intO{ \nabla_x \vc{u} : \nabla_x \big(\vc{u} \cdot \nabla_x (\vc{u} - \vc{u}_B) \big) } - \left( \eta+\frac{\mu}{3} \right) \intO{ {\rm div}_x \vc{u} \ {\rm div}_x \big(\vc{u} \cdot \nabla_x (\vc{u} - \vc{u}_B) \big) }, \label{g1g} \end{align} where, furthermore, \begin{align} \intO{\nabla_x \vc{u} : \nabla_x (\vc{u} \cdot \nabla_x \vc{u}) } &= \intO{\nabla_x \vc{u} : ( \nabla_x \vc{u} \cdot \nabla_x \vc{u}) } + \frac{1}{2} \intO{ \vc{u} \cdot \nabla_x |\nabla_x \vc{u}|^2 } \nonumber \\ &= \intO{\nabla_x \vc{u} : ( \nabla_x \vc{u} \cdot \nabla_x \vc{u}) } - \frac{1}{2} \intO{ {\rm div}_x \vc{u} |\nabla_x \vc{u}|^2 }. \label{g1c} \end{align} Note carefully that we have used $\vc{u} \cdot \vc{n}|_{\partial \Omega} = 0$ in the last integration. Similarly, \begin{align} \intO{ {\rm div}_x \vc{u} \ {\rm div}_x (\vc{u} \cdot \nabla_x \vc{u}) } = \intO{ {\rm div}_x \vc{u} \ \nabla_x \vc{u} : \nabla_x^t \vc{u} } - \frac{1}{2} \intO{ ({\rm div}_x \vc{u} )^3 }.
\label{g1ca} \end{align} Thus summing up the previous observations, we get \begin{align} \frac{1}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ \mathbb{S} (\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } &+ \frac{1}{2} \intO{ \varrho |D_t \vc{u}|^2 } + \intO{ \nabla_x p \cdot D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left(1 + \intO{ |\nabla_x \vc{u} |^3 } \right). \label{g2} \end{align} Moreover, \begin{align} \intO{ \nabla_x p \cdot D_t (\vc{u} - \vc{u}_B) } &= - \intO{ p \ {\rm div}_x ( D_t (\vc{u} - \vc{u}_B) ) } \nonumber \\ &= - \intO{ p \ {\rm div}_x D_t \vc{u} } + \intO{ p \ {\rm div}_x (\vc{u} \cdot \nabla_x \vc{u}_B)}, \label{g3a} \end{align} where \begin{align*} p \ {\rm div}_x D_t \vc{u} &= \partial_t (p \ {\rm div}_x \vc{u}) - \big( \partial_t p + {\rm div}_x (p \vc{u}) \big) {\rm div}_x \vc{u} + {\rm div}_x (p \vc{u}) {\rm div}_x \vc{u} + p \ {\rm div}_x (\vc{u} \cdot \nabla_x \vc{u}) \\[0.1cm] &= \partial_t (p \ {\rm div}_x \vc{u}) - \big( \partial_t p + {\rm div}_x (p \vc{u}) \big) {\rm div}_x \vc{u} + p \nabla_x \vc{u} : \nabla_x^t \vc{u} + {\rm div}_x \big( p\vc{u} \ {\rm div}_x \vc{u} \big). \nonumber \end{align*} As $\vc{u} \cdot \vc{n}|_{\partial \Omega} = 0$, we have \[ \intO{ {\rm div}_x \big( p \vc{u} \ {\rm div}_x \vc{u} \big) } = 0, \] and the above estimates together with \eqref{g2} give rise to \begin{align} \frac{1}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ \mathbb{S} (\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } &- \frac{{\rm d} }{\,{\rm d} t } \intO{ p {\rm div}_x \vc{u} } + \frac{1}{2} \intO{ \varrho |D_t \vc{u}|^2 } \nonumber \\ &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left(1 + \intO{ |\nabla_x \vc{u} |^3 } \right) -\intO{ \big( \partial_t p + {\rm div}_x (p \vc{u}) \big) {\rm div}_x \vc{u} }. 
\nonumber \end{align} Finally, we realize \[ \partial_t p + {\rm div}_x (p \vc{u}) = \varrho D_t \vartheta \] to conclude \begin{align} \frac{1}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ \mathbb{S} (\mathbb{D}_x \vc{u}) : \mathbb{D}_x \vc{u} } &- \frac{{\rm d} }{\,{\rm d} t } \intO{ p {\rm div}_x \vc{u} } + \frac{1}{2} \intO{ \varrho |D_t \vc{u}|^2 } \nonumber \\ &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left(1 + \intO{ \varrho |D_t \vartheta| |\nabla_x \vc{u}| } + \intO{ |\nabla_x \vc{u} |^3 } \right). \label{g2a} \end{align} \subsection{Higher order velocity material derivative estimates} Following \cite[Section 3, Lemma 3.3]{FaZiZh}, see also Hoff \cite{HOF1}, we deduce \begin{align} \varrho &D^2_t \vc{u} + \nabla_x \partial_t p + {\rm div}_x (\nabla_x p \otimes \vc{u}) \nonumber \\ &= \mu \Big( \Delta_x \partial_t \vc{u} + {\rm div}_x (\Delta_x \vc{u} \otimes \vc{u}) \Big) + \left( \eta + \frac{\mu}{3} \right) \Big( \nabla_x {\rm div}_x \partial_t \vc{u} + {\rm div}_x \left( (\nabla_x {\rm div}_x \vc{u}) \otimes \vc{u} \right) \Big) + \varrho \vc{u} \cdot \nabla_x \vc{f}. \label{g3} \end{align} Next, we compute \begin{align} D_t \vc{u}_B = \vc{u} \cdot \nabla_x \vc{u}_B, \quad D^2_t \vc{u}_B &= \partial_t \vc{u} \cdot \nabla_x \vc{u}_B + \vc{u} \cdot \nabla_x (\vc{u} \cdot \nabla_x \vc{u}_B) \nonumber \\ &= D_t \vc{u} \cdot \nabla_x \vc{u}_B - ( \vc{u} \cdot \nabla_x \vc{u} ) \cdot \nabla_x \vc{u}_B + \vc{u} \cdot \nabla_x (\vc{u} \cdot \nabla_x \vc{u}_B) \nonumber \\ &= D_t \vc{u} \cdot \nabla_x \vc{u}_B + (\vc{u} \otimes \vc{u}) : \nabla^2_x \vc{u}_B. 
\label{g3b} \end{align} Consequently, we may rewrite \eqref{g3} in the form \begin{align} \varrho &D^2_t ( \vc{u} - \vc{u}_B) + \nabla_x \partial_t p + {\rm div}_x (\nabla_x p \otimes \vc{u}) \nonumber \\ &= \mu \Big( \Delta_x \partial_t \vc{u} + {\rm div}_x (\Delta_x \vc{u} \otimes \vc{u}) \Big) + \left( \eta + \frac{\mu}{3} \right) \Big( \nabla_x {\rm div}_x \partial_t \vc{u} + {\rm div}_x \left( (\nabla_x {\rm div}_x \vc{u}) \otimes \vc{u} \right) \Big) + \varrho \vc{u} \cdot \nabla_x \vc{f} \nonumber \\ &- \varrho D_t \vc{u} \cdot \nabla_x \vc{u}_B - \varrho (\vc{u} \otimes \vc{u}) : \nabla^2_x \vc{u}_B. \label{g3c} \end{align} The next step is considering the scalar product of \eqref{g3c} with $D_t (\vc{u} - \vc{u}_B)$ and integrating over $\Omega$. The resulting integrals can be handled as follows: \begin{align} \varrho D^2_t (\vc{u} - \vc{u}_B) \cdot D_t (\vc{u} - \vc{u}_B) &= \varrho \frac{1}{2} D_t | D_t (\vc{u} - \vc{u}_B) |^2 \nonumber \\ &= \frac{1}{2} \varrho \left( \partial_t | D_t (\vc{u} - \vc{u}_B) |^2 + \vc{u} \cdot \nabla_x | D_t (\vc{u} - \vc{u}_B) |^2 \right) \nonumber \\ &= \frac{1}{2} \partial_t \left( \varrho | D_t (\vc{u} - \vc{u}_B) |^2 \right) + \frac{1}{2} {\rm div}_x \left( \varrho \vc{u} | D_t (\vc{u} - \vc{u}_B) |^2 \right), \nonumber \end{align} where we have used the equation of continuity \eqref{i1}. Seeing that $\vc{u} \cdot \vc{n}|_{\partial \Omega} = 0$ we get \begin{equation} \label{G1} \intO{ \varrho D^2_t (\vc{u} - \vc{u}_B) \cdot D_t (\vc{u} - \vc{u}_B) } = \frac{{\rm d} }{\,{\rm d} t } \frac{1}{2} \intO{ \varrho |D_t (\vc{u} - \vc{u}_B) |^2 }. 
\end{equation} Similarly, \begin{align} &\intO{ \Big( \nabla_x \partial_t p + {\rm div}_x (\nabla_x p \otimes \vc{u}) \Big) \cdot D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &=- \intO{ \Big( \partial_t p + {\rm div}_x (p \vc{u}) \Big) {\rm div}_x D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &+ \intO{ \Big( {\rm div}_x (p\vc{u}) {\rm div}_x D_t (\vc{u} - \vc{u}_B) - \nabla_x p \otimes \vc{u} : \nabla_x D_t (\vc{u} - \vc{u}_B) \Big) }, \label{G2} \end{align} where \begin{align} &\intO{ \nabla_x p \otimes \vc{u} : \nabla_x D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &= - \intO{ p \nabla_x \vc{u} : \nabla_x D_t (\vc{u} - \vc{u}_B) } + \intO{ \nabla_x ( p \vc{u} ) : \nabla_x D_t (\vc{u} - \vc{u}_B) }. \nonumber \end{align} In addition, as $D_t (\vc{u} - \vc{u}_B)$ vanishes on $\partial \Omega$, we can integrate by parts in the last integral, obtaining \[ \intO{ \nabla_x ( p \vc{u} ) : \nabla_x D_t (\vc{u} - \vc{u}_B) } = \intO{ {\rm div}_x ( p \vc{u} ) {\rm div}_x D_t (\vc{u} - \vc{u}_B) }. \] Thus, similarly to the preceding section, we conclude \begin{align} &\intO{ \Big( \nabla_x \partial_t p + {\rm div}_x (\nabla_x p \otimes \vc{u}) \Big) \cdot D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &= - \intO{ \varrho D_t \vartheta {\rm div}_x D_t (\vc{u} - \vc{u}_B) } \ + \intO{ p \nabla_x \vc{u} : \nabla_x D_t (\vc{u} - \vc{u}_B) }.
\label{G3} \end{align} Analogously, \begin{align} &\intO{ \Big( \Delta_x \partial_t \vc{u} + {\rm div}_x (\Delta_x \vc{u} \otimes \vc{u}) \Big) \cdot D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &= - \intO{ \nabla_x \partial_t \vc{u} : \nabla_x D_t (\vc{u} - \vc{u}_B) } - \intO{ ( \Delta_x \vc{u} \otimes \vc{u} ) : \nabla_x D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &= - \intO{ \nabla_x D_t \vc{u} : \nabla_x D_t (\vc{u} - \vc{u}_B) } - \intO{ \Big( \Delta_x \vc{u} \otimes \vc{u} - \nabla_x ( \vc{u} \cdot \nabla_x \vc{u}) \Big) : \nabla_x D_t (\vc{u} - \vc{u}_B) }, \label{G4} \end{align} where, using summation convention, \begin{align} &\intO{ \big(\Delta_x \vc{u} \otimes \vc{u}\big) : \nabla_x D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &= \intO{ \partial_{x_k} \Big( u_j\partial_{x_k} u_i \Big) \partial_{x_j} D_t( \vc{u} - \vc{u}_B)_i } - \intO{ \partial_{x_k} u_i \partial_{x_k} u_j \partial_{x_j} D_t( \vc{u} - \vc{u}_B)_i } \nonumber \\ &= \intO{ \partial_{x_j} \Big( u_j \partial_{x_k} u_i \Big) \partial_{x_k} D_t( \vc{u} - \vc{u}_B)_i } - \intO{ \partial_{x_k} u_i \partial_{x_k} u_j \partial_{x_j} D_t( \vc{u} - \vc{u}_B)_i } \nonumber \\ &= \intO{ {\rm div}_x \vc{u} \ \nabla_x \vc{u} : \nabla_x D_t( \vc{u} - \vc{u}_B) } \nonumber \\ &+ \intO{ \Big( u_j \partial_{x_k} \partial_{x_j} u_i \Big) \partial_{x_k} D_t( \vc{u} - \vc{u}_B)_i } - \intO{ \partial_{x_k} u_i \partial_{x_k} u_j \partial_{x_j} D_t( \vc{u} - \vc{u}_B)_i } \nonumber \\ &= \intO{ \nabla_x (\vc{u} \cdot \nabla_x \vc{u} ) : \nabla_x D_t( \vc{u} - \vc{u}_B) } + \intO{ {\rm div}_x \vc{u} \ \nabla_x \vc{u} : \nabla_x D_t( \vc{u} - \vc{u}_B) } \nonumber \\ &-\intO{ \partial_{x_j} u_i \partial_{x_k} u_j \partial_{x_k} D_t( \vc{u} - \vc{u}_B)_i } - \intO{ \partial_{x_k} u_i \partial_{x_k} u_j \partial_{x_j} D_t( \vc{u} - \vc{u}_B)_i }. 
\label{G5} \end{align} Summing up \eqref{G4}, \eqref{G5} we conclude \begin{align} &\intO{ \Big( \Delta_x \partial_t \vc{u} + {\rm div}_x (\Delta_x \vc{u} \otimes \vc{u}) \Big) \cdot D_t (\vc{u} - \vc{u}_B) } \nonumber \\ &= - \intO{ \nabla_x D_t \vc{u} : \nabla_x D_t (\vc{u} - \vc{u}_B) } -\intO{ {\rm div}_x \vc{u} \ \nabla_x \vc{u} : \nabla_x D_t( \vc{u} - \vc{u}_B) } \nonumber \\ &+\intO{ \partial_{x_j} u_i \partial_{x_k} u_j \partial_{x_k} D_t( \vc{u} - \vc{u}_B)_i } + \intO{ \partial_{x_k} u_i \partial_{x_k} u_j \partial_{x_j} D_t( \vc{u} - \vc{u}_B)_i }. \label{G6} \end{align} Estimating the remaining integrals in \eqref{g3c} in a similar manner we may infer \begin{align} \frac{1}{2} \frac{{\rm d}}{\,{\rm d} t } &\intO{ \varrho |D_t (\vc{u} - \vc{u}_B) |^2 } + \mu \intO{ |\nabla_x D_t (\vc{u} - \vc{u}_B) |^2 } + \left( \eta +\frac{\mu}{3}\right) \intO{ |{\rm div}_x D_t ( \vc{u} - \vc{u}_B )|^2 } \nonumber \\ &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left( 1 + \intO{ \varrho |D_t \vartheta |^2 } + \intO{ |\nabla_x \vc{u} |^4 } + \intO{ \varrho |D_t \vc{u} |^2 } \right). \label{g4} \end{align} cf. \cite[Section 3, Lemma 3.3]{FaZiZh}. \subsection{Velocity decomposition} Following the original idea of Sun, Wang, and Zhang \cite{SuWaZh1}, we decompose the velocity field in the form: \begin{align} \vc{u} &= \vc{v} + \vc{w}, \label{g5} \\ {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{v} ) &= \nabla_x p \ \mbox{in}\ (0,T) \times \Omega ,\ \vc{v}|_{\partial \Omega} = 0, \label{g6} \\ {\rm div}_x \mathbb{S}(\mathbb{D}_x \vc{w} ) &= \varrho D_t \vc{u} - \varrho \vc{f} \ \mbox{in}\ (0,T) \times \Omega,\ \vc{w}|_{\partial \Omega} = \vc{u}_B. 
\label{g7} \end{align} Since \[ {\rm div}_x \mathbb{S}(\mathbb{D}_x \partial_t \vc{v} ) = \nabla_x \partial_t p \ \mbox{in}\ (0,T) \times \Omega,\ \partial_t \vc{v}|_{\partial \Omega} = 0, \] we get \begin{equation} \label{g8} \intO{ \partial_t p \ {\rm div}_x \vc{v} } = - \intO{ \nabla_x \partial_t p \cdot \vc{v} } = \frac{1}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ \mathbb{S}(\mathbb{D}_x \vc{v}) : \mathbb{D}_x \vc{v} }. \end{equation} Moreover, the standard elliptic estimates for the Lam\' e operator yield: \begin{align} \| \vc{v} \|_{W^{1,q}(\Omega; R^3)} &\leq c(q, \Ov{\varrho}, \Ov{\vartheta}) \ \mbox{for all}\ 1 \leq q < \infty, \label{g9} \\ \| \vc{v} \|_{W^{2,q}(\Omega; R^3)} &\leq c(q, \Ov{\varrho}, \Ov{\vartheta}) \left( \| \nabla_x \varrho \|_{L^q(\Omega; R^3)} + \| \nabla_x \vartheta \|_{L^q(\Omega; R^3)} \right),\ 1 < q < \infty. \label{g10} \end{align} Similarly, \begin{equation} \label{g11} \| \vc{w} \|_{W^{2,2}(\Omega; R^3)} \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left( 1 + \| \sqrt{\varrho} \partial_t \vc{u} \|_{L^2(\Omega; R^3)} + \| \nabla_x \vc{u} \|_{L^2(\Omega; R^{3 \times 3})} \right). \end{equation} The estimates \eqref{g9}--\eqref{g11} are uniform in the time interval $[0,T)$.
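Let us briefly indicate where \eqref{g10} comes from. By the equation of state \eqref{i6} and the uniform bounds \eqref{c1},
\[
\nabla_x p = \vartheta \nabla_x \varrho + \varrho \nabla_x \vartheta, \ \mbox{whence}\ \| \nabla_x p \|_{L^q(\Omega; R^3)} \leq \Ov{\vartheta} \| \nabla_x \varrho \|_{L^q(\Omega; R^3)} + \Ov{\varrho} \| \nabla_x \vartheta \|_{L^q(\Omega; R^3)},
\]
and \eqref{g10} follows from the $L^q$ elliptic theory applied to the Lam\' e system \eqref{g6} with right-hand side $\nabla_x p$.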
\subsection{Temperature estimates} Similarly to Fang, Zi, and Zhang \cite[Section 3, Lemma 3.4]{FaZiZh}, we multiply the internal energy equation \eqref{i3} by $\partial_t \vartheta$ and integrate over $\Omega$, obtaining \begin{align} c_v \intO{ \varrho |D_t \vartheta|^2 } & + \frac{\kappa}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ |\nabla_x \vartheta |^2 } \nonumber \\ &= c_v \intO{ \varrho D_t \vartheta \ \vc{u} \cdot \nabla_x \vartheta } - \intO{ \varrho \vartheta \ {\rm div}_x \vc{u} \ D_t \vartheta} + \intO{ \varrho \vartheta \ {\rm div}_x \vc{u}\ \vc{u} \cdot \nabla_x \vartheta } \nonumber \\ &+ \frac{{\rm d} }{\,{\rm d} t } \intO{ \vartheta \ \mathbb{S}(\mathbb{D}_x \vc{u}) : \nabla_x \vc{u} } \nonumber \\ &- \mu \intO{ \vartheta \left( \nabla_x \vc{u} + \nabla_x^t \vc{u} - \frac{2}{3} {\rm div}_x \vc{u} \mathbb{I} \right): \left( \nabla_x \partial_t \vc{u} + \nabla_x^t \partial_t \vc{u} - \frac{2}{3} {\rm div}_x \partial_t \vc{u} \mathbb{I} \right) } \nonumber \\ &- 2 \eta \intO{ \vartheta \ {\rm div}_x \vc{u} \ {\rm div}_x \partial_t \vc{u} }. \label{g12} \end{align} Indeed, the term involving the boundary integral is handled as \[ - \kappa \intO{ \Delta_x \vartheta \ \partial_t \vartheta } = - \kappa \int_{\partial \Omega} \partial_t \vt_B \nabla_x \vartheta \cdot \vc{n} \ {\rm d} S_x + \frac{\kappa}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ |\nabla_x \vartheta |^2 }, \] where \[ \int_{\partial \Omega} \partial_t \vt_B \nabla_x \vartheta \cdot \vc{n} \ {\rm d} S_x = 0 \] as the boundary temperature is independent of $t$.
Similarly to Fang, Zi, Zhang \cite[Section 3, Lemma 3.4]{FaZiZh}, we have to show that the integrals \[ \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x \partial_t \vc{u} },\ \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x^t \partial_t \vc{u} },\ \mbox{and} \ \intO{ \vartheta \ {\rm div}_x \vc{u} \ {\rm div}_x \partial_t \vc{u} } \] can be rewritten in a form compatible with \eqref{g4}, meaning with the time derivatives replaced by material derivatives. Fortunately, this step can be carried out in the present setting using only the boundary condition $\vc{u} \cdot \vc{n}|_{\partial \Omega} = 0$. Indeed, we get \[ \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x \partial_t \vc{u} } = \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x (D_t \vc{u}) } - \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x (\vc{u} \cdot \nabla_x \vc{u})}, \] where \begin{align*} &\intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x (\vc{u} \cdot \nabla_x \vc{u})} \nonumber \\ &= \intO{\vartheta \ \nabla_x \vc{u} : (\nabla_x \vc{u} \cdot \nabla_x \vc{u})} + \frac{1}{2} \intO{ \vartheta \ \vc{u} \cdot \nabla_x | \nabla_x \vc{u} |^2 } \nonumber \\ &= \intO{\vartheta \ \nabla_x \vc{u} : (\nabla_x \vc{u} \cdot \nabla_x \vc{u})} - \frac{1}{2} \intO{ |\nabla_x \vc{u}|^2 \ \nabla_x \vartheta \cdot \vc{u} } - \frac{1}{2} \intO{ |\nabla_x \vc{u}|^2 \ \vartheta {\rm div}_x \vc{u} }. 
\end{align*} Similarly, \[ \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x^t \partial_t \vc{u} } = \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x^t (D_t \vc{u}) } - \intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x^t (\vc{u} \cdot \nabla_x \vc{u})}, \] where \begin{align} &\intO{ \vartheta \ \nabla_x \vc{u} : \nabla_x^t (\vc{u} \cdot \nabla_x \vc{u})} \nonumber \\ &= \intO{\vartheta \ \nabla_x \vc{u} : (\nabla_x^t \vc{u} \cdot \nabla_x^t \vc{u})} + \frac{1}{2} \intO{ \vartheta \ \vc{u} \cdot \nabla_x ( \nabla_x \vc{u} : \nabla_x^t \vc{u} ) } \nonumber \\ &= \intO{\vartheta \ \nabla_x \vc{u} : (\nabla_x^t \vc{u} \cdot \nabla_x^t \vc{u})}- \frac{1}{2} \intO{ ( \nabla_x \vc{u} : \nabla_x^t \vc{u} ) \ \nabla_x \vartheta \cdot \vc{u} }- \frac{1}{2} \intO{ ( \nabla_x \vc{u} : \nabla_x^t \vc{u} ) \ \vartheta {\rm div}_x \vc{u} }. \nonumber \end{align} Finally, \[ \intO{ \vartheta \ {\rm div}_x \vc{u} \ {\rm div}_x \partial_t \vc{u} } = \intO{ \vartheta \ {\rm div}_x \vc{u} \ {\rm div}_x D_t \vc{u} } - \intO{ \vartheta \ {\rm div}_x \vc{u} \ {\rm div}_x (\vc{u} \cdot \nabla_x \vc{u}) }, \] where \begin{align} &\intO{ \vartheta \ {\rm div}_x \vc{u} \ {\rm div}_x (\vc{u} \cdot \nabla_x \vc{u}) } \nonumber \\ &= \intO{ \vartheta \ {\rm div}_x \vc{u} \ (\nabla_x \vc{u} : \nabla_x^t \vc{u})} + \frac{1}{2} \intO{ \vartheta \vc{u} \cdot \nabla_x |{\rm div}_x \vc{u} |^2 } \nonumber \\ &= \intO{ \vartheta \ {\rm div}_x \vc{u} \ (\nabla_x \vc{u} : \nabla_x^t \vc{u})} - \frac{1}{2} \intO{ |{\rm div}_x \vc{u} |^2 \ \nabla_x \vartheta \cdot \vc{u} }-\frac{1}{2} \intO{ |{\rm div}_x \vc{u} |^2 \ \vartheta {\rm div}_x \vc{u} }. \nonumber \end{align} We conclude, using \eqref{g2}, \eqref{g4}, and \eqref{g12}, \begin{align} \intO{ |\nabla_x \vartheta |^2 (\tau, \cdot) } &+ \int_0^\tau \intO{ \varrho |D_t \vartheta |^2 } \,{\rm d} t \nonumber \\ &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left(1 + \int_0^\tau \intO{ |\nabla_x \vc{u} |^4 } \,{\rm d} t \right). 
\label{g13} \end{align} Next, by virtue of the decomposition $\vc{u} = \vc{v} + \vc{w}$ and the bound \eqref{g9}, \begin{equation} \label{g14} \intO{ |\nabla_x \vc{u} |^4 } \stackrel{<}{\sim} \intO{ |\nabla_x \vc{v} |^4 } + \intO{ |\nabla_x \vc{w} |^4 } \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left( 1 + \intO{ |\nabla_x \vc{w} |^4 } \right), \end{equation} and, similarly, \begin{equation} \label{g15} \| \vc{w} \|_{L^\infty (\Omega; R^3) } \leq \| \vc{u} \|_{L^\infty (\Omega; R^3) } + \| \vc{v} \|_{L^\infty (\Omega; R^3) } \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) . \end{equation} Recalling the Gagliardo--Nirenberg interpolation inequality in the form \begin{equation} \label{g16} \| \nabla_x U \|_{L^4(\Omega; R^3)}^2 \leq \| U \|_{L^\infty(\Omega)} \| \Delta_x U \|_{L^2(\Omega)} \ \mbox{whenever}\ U|_{\partial \Omega} = 0, \end{equation} we may use \eqref{g14}, \eqref{g15} to rewrite \eqref{g13} in the form \begin{align} \intO{ |\nabla_x \vartheta |^2 (\tau, \cdot) } &+ \int_0^\tau \intO{ \varrho |D_t \vartheta |^2 } \,{\rm d} t \nonumber \\ &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left(1 + \int_0^\tau \intO{ |\nabla_x \vartheta |^2 } \,{\rm d} t + \int_0^\tau \| \vc{w} \|_{W^{2,2}(\Omega; R^3)}^2 \,{\rm d} t \right). \label{g17} \end{align} Finally, we use the elliptic estimates \eqref{g11} to conclude \begin{align} &\intO{ |\nabla_x \vartheta |^2 (\tau, \cdot) } + \int_0^\tau \intO{ \varrho |D_t \vartheta |^2 } \,{\rm d} t \nonumber \\ &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \left(1 + \int_0^\tau \intO{ \left( |\nabla_x \vartheta |^2 + |\nabla_x \vc{u} |^2 \right) } \,{\rm d} t + \int_0^\tau \| \sqrt{\varrho} \partial_t \vc{u} \|_{L^2(\Omega; R^3)}^2 \,{\rm d} t \right). 
\label{g18} \end{align} Summing up \eqref{g2}, \eqref{g4}, and \eqref{g18} we may apply Gronwall's lemma to obtain the following bounds: \begin{align} \sup_{t \in [0,T)} \| \vc{u} (t, \cdot) \|_{W^{1,2}(\Omega; R^3)} &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{g19}\\ \sup_{t \in [0,T)} \|\sqrt{\varrho} D_t \vc{u} (t, \cdot) \|_{L^{2}(\Omega; R^3)} &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{g20} \\ \sup_{t \in [0,T)} \| \vartheta (t, \cdot) \|_{W^{1,2}(\Omega)} &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{g21}\\ \int_0^T \intO{ |\nabla_x D_t \vc{u} |^2 } \,{\rm d} t &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{g22} \\ \int_0^T \intO{ \varrho |D_t \vartheta |^2 } \,{\rm d} t &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \label{g23} \end{align} Moreover, it follows from \eqref{g9}, \eqref{g16}, \eqref{g20} \begin{equation} \label{g24} \sup_{t \in [0,T) } \| \nabla_x \vc{u} (t, \cdot) \|_{L^4(\Omega; R^{3\times 3})} \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \end{equation} In addition, \eqref{g23}, \eqref{g24} and the standard parabolic estimates applied to the internal energy balance \eqref{i3} yield \begin{equation} \label{g25} \int_0^T \| \vartheta \|^2_{W^{2,2}(\Omega)} \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \end{equation} \section{Second energy bound} \label{s} It follows from \eqref{g11}, \eqref{g20} that \begin{equation} \label{s1} \sup_{t \in [0,T) } \| \vc{w}(t, \cdot) \|_{W^{2,2}(\Omega; R^3)} \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ); \end{equation} whence, by virtue of \eqref{g9} and Sobolev embedding $W^{1,2}(\Omega) \hookrightarrow L^6(\Omega)$, \begin{equation} \label{s2} \sup_{t \in [0,T)} \| \nabla_x \vc{u} (t, \cdot) \|^2_{L^6(\Omega; R^{3\times 3})} \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). 
\end{equation} Moreover, as a consequence of \eqref{g22}, $D_t \vc{u}$ is bounded in $L^2(L^6)$, which, combined with \eqref{s2}, gives rise to \begin{equation} \label{s3} \int_0^T \left\| \partial_t \vc{u} \right\|^2_{L^6(\Omega; R^3)} \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \end{equation} Finally, going back to \eqref{g7} we conclude \begin{equation} \label{s4} \int_0^T \| \vc{w} \|_{W^{2,6}(\Omega; R^3)}^2 \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u}), \end{equation} and \begin{equation} \label{s5} \int_0^T \| \vc{u} \|_{W^{1,q}(\Omega; R^3)}^2 \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u},q ) \ \mbox{for any} \ 1 \leq q < \infty. \end{equation} \section{Estimates of the derivatives of the density} \label{d} Using \eqref{s4}, \eqref{s5}, we may proceed as in \cite[Section 5]{SuWaZh} to deduce the bounds \begin{equation} \label{d1} {\rm sup}_{t \in [0,T)} \left( \| \partial_t \varrho (t, \cdot) \|_{L^6(\Omega)} + \| \varrho (t,\cdot) \|_{W^{1,6} (\Omega)} \right) \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \end{equation} Revisiting the momentum equation \eqref{i2} we use \eqref{d1} together with the other bounds established above to obtain \begin{equation} \label{d2} \int_0^T \| \vc{u} \|^2_{W^{2,6}(\Omega; R^3)} \,{\rm d} t \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \end{equation} \subsection{Positivity of the density and temperature} It follows from \eqref{d2} that ${\rm div}_x \vc{u}$ is bounded in $L^1(0,T; L^\infty(\Omega))$. Thus the equation of continuity \eqref{i1} yields a positive lower bound on the density \begin{equation} \label{d3} \inf_{(t,x) \in [0,T)\times\Ov{\Omega}}\ \varrho(t,x) \geq \underline{\varrho} > 0, \end{equation} where the lower bound depends on the data as well as on the length $T$ of the time interval. 
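Indeed, along the characteristic flow $\vc{X}$, $\partial_t \vc{X}(t,x) = \vc{u}(t, \vc{X}(t,x))$, $\vc{X}(0,x) = x$, the equation of continuity \eqref{i1} yields the standard representation (a sketch of the computation behind \eqref{d3}, stated here for the reader's convenience): \[ \varrho \big( t, \vc{X}(t,x) \big) = \varrho(0,x) \exp \left( - \int_0^t {\rm div}_x \vc{u} \big( s, \vc{X}(s,x) \big) \,{\rm d} s \right) \geq \inf_{y \in \Ov{\Omega}} \varrho(0,y) \ \exp \left( - \| {\rm div}_x \vc{u} \|_{L^1(0,T; L^\infty(\Omega))} \right); \] whence positivity of the initial density is propagated on $[0,T)$.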
Similarly, rewriting the internal energy balance equation \eqref{i3} in the form \begin{equation} \label{d4} c_v \left( \partial_t \vartheta + \vc{u} \cdot \nabla_x \vartheta \right) - \frac{\kappa}{\varrho} \Delta_x \vartheta = \frac{1}{\varrho} \mathbb{S} : \mathbb{D}_x \vc{u} - \vartheta {\rm div}_x \vc{u} \end{equation} we may apply the standard parabolic maximum/minimum principle to deduce \begin{equation} \label{d5} \inf_{(t,x) \in [0,T)\times\Ov{\Omega}} \ \vartheta(t,x) \geq \underline{\vartheta} > 0. \end{equation} \section{Parabolic regularity for the heat equation} \label{p} We rewrite the parabolic equation \eqref{d4} in terms of $\Theta = \vartheta - \vt_B$. Recalling $\Delta_x \vt_B = 0$ we get \begin{equation} \label{p2} c_v \left( \partial_t \Theta + \vc{u} \cdot \nabla_x \vartheta \right) - \frac{\kappa}{\varrho} \Delta_x \Theta = \frac{1}{\varrho} \mathbb{S} : \mathbb{D}_x \vc{u} - \vartheta {\rm div}_x \vc{u} \end{equation} with the \emph{homogeneous} Dirichlet boundary conditions \begin{equation} \label{p3} \Theta|_{\partial \Omega} = 0. \end{equation} Now, we can apply all arguments of \cite[Sections 4.6, 4.7]{FeSu2015_N1} to $\Theta$ obtaining the bounds \begin{align} \| \vartheta \|_{C^\alpha([0,T] \times \Ov{\Omega})} &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \ \mbox{for some}\ \alpha > 0, \label{p4} \\ \| \vartheta \|_{L^p(0,T; W^{2,3}(\Omega))} + \| \partial_t \vartheta \|_{L^p(0,T; L^{3}(\Omega))} &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \ \mbox{for all}\ 1 \leq p < \infty, \label{p5} \end{align} together with \begin{equation} \label{p6} \| \vc{u} \|_{L^p(0,T; W^{2,6}(\Omega;R^3))} + \| \partial_t \vc{u} \|_{L^p(0,T; L^{6}(\Omega;R^3))} \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) \ \mbox{for any}\ 1 \leq p < \infty. 
\end{equation} \section{Final estimates} \label{f} The bounds \eqref{p6} imply, in particular, \begin{equation} \label{f1} \sup_{(t,x) \in [0,T)\times \Ov{\Omega}}|\nabla_x \vc{u} (t,x) | \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \end{equation} Thus, the desired higher-order estimates can be obtained exactly as in \cite[Section 4.6]{FeNoSun1}. Indeed, the arguments of \cite[Section 4.6]{FeNoSun1} are based on differentiating the equation \eqref{p2} with respect to time, which gives rise to a parabolic problem for $\partial_t \vartheta$ with the \emph{homogeneous} Dirichlet boundary conditions $\partial_t \vartheta|_{\partial \Omega} = 0$. Specifically, we get \begin{equation*} \begin{aligned} { c_v \partial_{tt}^2 \vartheta + c_v \vc{u} \cdot \nabla_x \partial_t \vartheta - \frac{\kappa}{\varrho} \Delta_x \partial_t\vartheta =} & { - c_{v} \partial_t \vc{u} \cdot \nabla_x \vartheta - \frac{1}{\varrho^2}\partial_t \varrho \left( \kappa \Delta_x \vartheta + \mathbb{S} (\mathbb{D}_x \vc{u}): \mathbb{D}_x \vc{u} \right)} \\ &{ + \frac{2}{\varrho}\ \mathbb{S} (\mathbb{D}_x\vc{u}): \mathbb{D}_x \partial_t\vc{u}-\partial_t \vartheta \ {\rm div}_x \vc{u} - \vartheta \ {\rm div}_x \partial_t \vc{u}.} \end{aligned} \end{equation*} The estimates obtained in the previous sections imply that the right-hand side of the above equation is bounded in $L^2(0,T; L^2(\Omega))$. Thus, multiplying the equation by $\Delta_x \partial_t \vartheta$ and integrating by parts in the standard way, we get the desired estimates as in \cite[Section 4.6]{FeNoSun1}. 
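Schematically, this testing step reads as follows (a sketch of the standard argument, with $h$ abbreviating the right-hand side of the differentiated equation): \[ \frac{c_v}{2} \frac{{\rm d} }{\,{\rm d} t } \intO{ |\nabla_x \partial_t \vartheta |^2 } + \intO{ \frac{\kappa}{\varrho} | \Delta_x \partial_t \vartheta |^2 } \leq \intO{ \left( |h| + c_v | \vc{u} \cdot \nabla_x \partial_t \vartheta | \right) | \Delta_x \partial_t \vartheta | }, \] where the boundary terms vanish thanks to $\partial_t \vartheta|_{\partial \Omega} = 0$; Young's inequality, the lower bound \eqref{d3}, and Gronwall's lemma then yield the conclusion.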
The remaining estimates are obtained exactly as in \cite[Section 4.6]{FeNoSun1} : \begin{align} \label{f2} \sup_{t \in [0,T) } \| \vartheta (t, \cdot) \|_{W^{3,2}(\Omega)} + \sup_{t \in [0,T) } \| \partial_t \vartheta (t, \cdot) \|_{W^{1,2}(\Omega)} &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \\ \int_0^T \left( \| \partial_t \vartheta \|^2_{W^{2,2}(\Omega)} + \| \vartheta \|^2_{W^{4,2}(\Omega)} \right) \,{\rm d} t &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{f3} \\ \sup_{t \in [0,T) } \| \vc{u} (t, \cdot) \|_{W^{3,2}(\Omega; R^3)} + \sup_{t \in [0,T) } \| \partial_t \vc{u} (t, \cdot) \|_{W^{1,2}(\Omega; R^3)} &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ) , \label{f4} \\ \int_0^T \left( \| \partial_t \vc{u} \|^2_{W^{2,2}(\Omega; R^3)} + \| \vc{u} \|^2_{W^{4,2}(\Omega; R^3)} \right) \,{\rm d} t &\leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ), \label{f5} \end{align} and \begin{equation} \label{f6} \sup_{t \in [0,T) } \| \varrho (t, \cdot) \|_{W^{3,2}(\Omega)} \leq c(T, \mathcal{D}_0, \Ov{\varrho}, \Ov{\vartheta}, \Ov{u} ). \end{equation} We have completed the proof of Proposition \ref{PT}. \def\cprime{$'$} \def\ocirc#1{\ifmmode\setbox0=\hbox{$#1$}\dimen0=\ht0 \advance\dimen0 by1pt\rlap{\hbox to\wd0{\hss\raise\dimen0 \hbox{\hskip.2em$\scriptscriptstyle\circ$}\hss}}#1\else {\accent"17 #1}\fi}
\section{Introduction} \label{sec:intro} Bookshare~\cite{bookshare} and Sugamya Pustakalaya~\cite{sugamyapustakalaya} are libraries of accessible content used by blind, low-vision, and otherwise print-disabled (BLV) individuals. Volunteers for these non-profits scan and digitize books requested by their members. However, it currently takes volunteers close to one month to create accessible books despite the availability of highly-accurate cloud-based OCR services~\cite{google,acs}. This slow pace of making content accessible naturally results in an acute lack of accessible content for BLV individuals. Our goal in this paper is to design tools that empower volunteers to make content accessible an order of magnitude faster. Making OCR output \emph{navigable} is the primary time-consuming task volunteers perform. This includes extracting chapter, heading, and subheading structure, which is necessary for navigating directly to sections of interest or skipping over content. Automatically detecting heading levels is, in principle, easy using heuristics on font size and weight. A significantly harder problem in making content navigable is resolving internal references such as citations, footnotes, and references to tables, images, and equations in a form that screen-readers can make accessible to BLV individuals. To illustrate the (in-)accessibility of internal references, consider the following sentence from a research paper: ``[52] presents a breakthrough result''. A sighted reader can effortlessly flip to the references at the back, look up reference~[52], and continue reading within just a few seconds. This effortless lookup is not possible using screen-readers that, in general, do not have a notion of internal references. While internal hyperlinks can be used to approximate the task of jumping to the references, and then navigating back, the interaction disrupts the flow of reading. A better approach is to replace the original sentence with: ``XYZ presents a breakthrough result''. 
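The identification problem illustrated above can be sketched with simple textual patterns. The following regexes are illustrative assumptions covering only a few common reference styles; they are a minimal stand-in, not the learned detection method developed later in the paper:

```python
import re

# Simplified patterns for common in-text internal references.
# Real documents require far more robust, layout-aware detection.
PATTERNS = {
    "citation": re.compile(r"\[(\d+(?:\s*,\s*\d+)*)\]"),    # e.g. [52] or [3, 7]
    "figure":   re.compile(r"\b(?:Fig\.|Figure)\s*(\d+)"),  # e.g. Fig. 3
    "table":    re.compile(r"\bTable\s*(\d+)"),             # e.g. Table 1
}

def find_internal_references(text):
    """Return (kind, key, span) triples for every reference marker in `text`,
    ordered by position in the text."""
    refs = []
    for kind, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            refs.append((kind, m.group(1), m.span()))
    return sorted(refs, key=lambda r: r[2])
```

For the example sentence above, `find_internal_references("[52] presents a breakthrough result")` returns a single `("citation", "52", (0, 4))` triple, which downstream steps would then resolve to the corresponding bibliography entry.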
In order to automatically replace internal references, one must be able to \begin{inparaenum}[i)] \item identify internal references in text, \item resolve them to the destination content, \item construct a short readable representation, and \item replace the original reference with the short representation in a grammatically correct manner. \end{inparaenum} In this paper, we focus on solving the first two problems: identifying and resolving internal references to citations, tables and figures, and footnotes. In doing so, we discover a lack of tools and datasets for training the necessary models. Overall, this paper makes three contributions. First, we present our tool for generating ground-truth data on internal references by analysing digital PDFs. Second, we present a dataset for training computer vision models for internal references by applying our tool to several thousand arXiv papers. And finally, we train a vision model on this dataset and apply it to a sampling of images of scanned research papers to demonstrate the viability of our approach. \section{Background and Related Work} PDF has established itself as the de facto standard for fixed-format document exchange and publication, but when it comes to reading these documents, a vast majority (75.1 percent) of screen reader users believe that PDF documents are extremely or moderately likely to cause severe accessibility concerns \cite{ref4}. In two earlier research studies, BLV users also reported inaccessible PDFs as a major barrier to understanding content within a document \cite{ref5}, \cite{ref6}, as assistive technologies experience significant difficulty trying to interpret the information when PDF documents do not comply with accessibility standards. Most PDF documents are intrinsically inaccessible because visual layout information is intertwined with semantic content. 
The PDF Association has also created the Matterhorn Protocol \cite{ref7}, which defines a precise list of 136 test requirements that PDF documents must meet in order to be accessible. It contains detailed guidance for content authors on how to add specifications to their documents and make them compatible with assistive technologies \cite{ref8}. Despite the existence of these thorough guidelines, improving the accessibility of a PDF document remains a challenging research problem due to a combination of limited tools, lack of knowledge, and the structure of the PDF format itself. Although some tools, such as PAC 3~\cite{ref9} and WebAIM’s WAVE~\cite{ref10}, can identify accessibility problems, few tools can remediate the issues they identify. Very few tools are open source or offer a free version. One of the most commonly used tools for remediation is Adobe Acrobat Pro/DC, but its accessibility toolkit is part of the paid version, so few people have access to it. Most research articles delivered today exist in PDF format, and a recent study showed that only 2.4\% of them are accessible to people with disabilities \cite{ref11}. Recent research has focused on addressing some important aspects of paper accessibility, such as how screen readers should comprehend and read mathematical equations \cite{ref12, ref13, ref14, ref15}, how to generate figure captions automatically \cite{ref16, ref17}, and how to explain graphs and charts \cite{ref18, ref19, ref20}. Recent work has also gained traction in improving the accessibility of various types of media content such as images \cite{ref21, ref22} and videos \cite{ref23}, and in automatic classification of the content of figures \cite{ref24}. Other work has focused on improving automatic text and layout detection in scanned documents \cite{ref25} as well as table content extraction \cite{ref26, ref27} within them. 
To date, little work has been done on introducing navigability within a document, which enables skimming and scanning \cite{ref28}, both currently under-supported by PDF documents. Our work therefore takes a first step towards automatically replacing internal references, thereby improving the reading experience of a wide range of users. \section{Methodology} \label{method-section} This section describes the tool we created to generate ground-truth data on internal references by analyzing digital PDFs. The process for building this tool is divided into three phases: First, we segment the document into regions of interest. Second, we identify explicit internal reference keys for these regions. And finally, we infer implicit keys associated with references using the hyperlink text. These three key elements are depicted in Fig. \ref{fig:methodology}. \textbf{Regions.} A research paper can be decomposed into the following elements: headings, text blocks, figures, tables, lists and equations. For detecting internal references within these elements, we need to first identify the regions containing internal references, and we pose this problem as an instance segmentation task. We use a pre-trained model based on the Mask R-CNN architecture from the Detectron2 model zoo to decompose a document into five categories: title, text block, list, figure and table. The model is based on the ResNet50 feature pyramid network (FPN) base config and is trained on the PubLayNet dataset for document layout analysis. Similarly, we use another instance segmentation model of the same base config to detect equations; the only difference is that it has been trained on the MFD (Mathematical Formula Detection) dataset, with the output layer modified to predict only one category: equation. The result obtained from these models is depicted in Fig. \ref{fig:segmentation model}. 
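Since the two segmentation models run independently, their detections must be merged into a single region list before further processing. A minimal pure-Python sketch of this bookkeeping step follows; the tuple format, category-id mapping, and score threshold are our illustrative assumptions, not fixed by the models:

```python
# Each model emits (category_id, score, box) with box = (x0, y0, x1, y1).
LAYOUT_CATEGORIES = {0: "text", 1: "title", 2: "list", 3: "table", 4: "figure"}

def merge_regions(layout_preds, equation_preds, score_threshold=0.5):
    """Combine layout-model and equation-model detections into one list,
    keeping only confident predictions, sorted top-to-bottom."""
    regions = []
    for cat_id, score, box in layout_preds:
        if score >= score_threshold:
            regions.append({"category": LAYOUT_CATEGORIES[cat_id],
                            "score": score, "box": box})
    for _, score, box in equation_preds:
        if score >= score_threshold:
            regions.append({"category": "equation", "score": score, "box": box})
    # Reading-order approximation: sort by the top edge of each box.
    return sorted(regions, key=lambda r: r["box"][1])
```

Sorting by the top edge is only a single-column reading-order approximation; multi-column layouts need a more careful ordering.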
\begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{images/Picture1.png} \caption{Result from document segmentation model: A document page is segmented into six categories: Text, Title, List, Table, Figure, and Equation.} \label{fig:segmentation model} \vspace{-10pt} \end{figure} \textbf{Explicit Keys.} After extracting the regions of interest, our focus is to extract explicit internal reference keys for each of these regions. This involves identification of reference items, table captions, figure captions, equation numbers and footnote markers. However, there is a dearth of tools and data, which makes it difficult to extract them directly. Hence, we use different heuristics for extracting such elements. Since we already know certain standardized keywords which mark the beginning of these elements, we use OCR to extract text within the regions and create appropriate annotations for the internal references. These keywords are extracted as explicit keys, used for resolving reference items later. For instance, a table or figure is generally accompanied by its caption, which can be segmented as a text block by the document segmentation model. Applying OCR to these text blocks followed by simple keyword matching then yields the appropriate caption. Similarly, superscript detection provides us with the footnote markers. Using the annotations obtained from this process, we create a structured representation for all reference items with fields for bounding box coordinates as well as text contained within them. \vspace{-10pt} \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{images/original-ann.jpg} \caption{A document page decomposed into regions of interest (yellow box), explicit keys (green box) and implicit keys (red box) for tables, figures, references and footnotes. 
} \label{fig:methodology} \vspace{-5pt} \end{figure} \textbf{Implicit Keys.} Within a scientific document, there can be multiple hyperlinks associated with the same internal reference. Since the objective of our work is to create a dataset for resolving internal references with their corresponding in-text citations and vice versa, we need to obtain all hyperlink texts, which act as implicit keys for internal references. First, we read the document’s metadata using the Python module fitz \cite{fitz} from the open-source project PyMuPDF. In this way, we collect information regarding the source location as well as the target point location for all types of embedded links. For each target point location, we find the nearest reference item from all the items extracted earlier. Also, we extract all implicit keys by applying OCR at all hyperlink locations obtained from fitz. Finally, a ground-truth dataset is created by combining source keys (implicit and explicit keys) with the annotated reference bounding box and associated text. \section{Experiments and Results} This section focuses on applying our tool to generate ground-truth annotated data for bibliographic reference items present in scientific documents. Further, we demonstrate the viability of our approach by training a vision model on this data for bibliographic reference item detection and applying it to both born-digital as well as scanned documents. The entire pipeline is described in the following sub-sections. \subsection{Data} \label{data} The first step is to construct a dataset of research papers by sampling PDFs published in the years 2016--2021 from the arXiv dataset published on Kaggle \cite{kaggle}, stratified across subjects including physics, computer science, mathematics, statistics, quantitative biology, economics and others. We sampled papers from each field of study, with the fewest drawn from Physics and the most from Economics and Computer Science. 
The resulting dataset consisted of 22,081 papers. It is used for generating ground-truth labels for bibliographic reference items and for training the vision model to detect bibliographic reference items. \subsection{Ground-Truth Labels} This section describes the generation of ground-truth labels for bibliographic reference items by applying our tool to the research paper dataset described in Section \ref{data}. To identify the regions of interest for bibliographic references, we utilized the document segmentation model described in Section \ref{method-section} to detect two classes of elements, Title and List blocks, within a research paper. Then, we extracted text within the title blocks using OCR to check for the references section. Further, a keyword-search-based heuristic is used to identify phrases that are likely to appear as a title for the references section. Once we detected the section, we segmented out list blocks within it; in this way, we extracted 88,786 list images containing references. Next, we segmented reference items out of the list images by exploiting some natural properties of reference items, such as the use of certain characters that mark the beginning of a reference item. We have also assumed that they are written in a consistent manner, as is required by most academic venues. But since these items can span multiple lines, detection becomes difficult. Hence, we used a heuristic approach: identifying these start characters using OCR and labeling the region between consecutive start characters as a reference item. Further, these start characters are extracted as an explicit key for the reference. To extract implicit keys, we obtained embedded link information using the fitz module. Then, we kept only those in-text hyperlinks that point to pages at or beyond the references section. 
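The keyword heuristics used above, detecting the references section title and the start markers of individual reference items, can be sketched in a few lines. The keyword set and marker patterns below are illustrative assumptions, not our full lists:

```python
import re

# Phrases that commonly title a references section.
SECTION_KEYWORDS = {"references", "bibliography", "works cited"}

# Characters that commonly mark the start of a reference item,
# e.g. "[1] ...", "(1) ...", "1. ...".
START_MARKER = re.compile(r"^\s*(\[\d+\]|\(\d+\)|\d+\.)\s+")

def is_references_title(ocr_text):
    """Decide whether an OCR'd title block begins the references section."""
    return ocr_text.strip().rstrip(":").lower() in SECTION_KEYWORDS

def explicit_key(ocr_line):
    """Extract the start marker of a reference item, if any."""
    m = START_MARKER.match(ocr_line)
    return m.group(1) if m else None
```

Lines for which `explicit_key` returns `None` are treated as continuations of the previous reference item, which is how multi-line items are grouped.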
For each hyperlink, we detected its corresponding reference item by finding the annotated bounding box nearest to the hyperlink's target point location. Also, we obtained implicit keys by extracting hyperlink text from all the locations pointing to a bibliographic reference. Finally, we combined all these key elements to generate annotated ground-truth data for bibliographic reference items. \subsection{Model} To generalize the process of detecting all types of references and linking them back to their in-text citations, we trained a vision-based model for the extraction of bibliographic reference items along with their source keys. We used the base config from the Faster R-CNN model present in the model zoo of the Detectron2 library. It is pre-trained on the COCO dataset for the object detection task and is based on the ResNet101 feature pyramid network (FPN) base config. This backbone network is used to extract features from an image, followed by a Region Proposal Network for generating region proposals and a box head for refining the bounding boxes. To train this model on our dataset, we first divided our ground-truth dataset into training and validation sets with a split ratio of 0.85. Out of the 88,786 list images and their annotations, 75,468 were used for training and the remaining 13,318 were utilized for evaluation purposes. The model was trained for 20k iterations. 
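The nearest-bounding-box lookup used above to link each hyperlink's target point to its reference item can be sketched in pure Python. This is a simplified stand-in for our implementation; the point-to-box distance convention is our assumption:

```python
def nearest_reference_item(target_point, reference_boxes):
    """Return the index of the reference bounding box closest to a link's
    target point. Boxes are (x0, y0, x1, y1); a point inside a box has
    distance zero."""
    tx, ty = target_point

    def distance(box):
        x0, y0, x1, y1 = box
        # Euclidean distance from the point to the box (0 if inside).
        dx = max(x0 - tx, 0, tx - x1)
        dy = max(y0 - ty, 0, ty - y1)
        return (dx * dx + dy * dy) ** 0.5

    return min(range(len(reference_boxes)),
               key=lambda i: distance(reference_boxes[i]))
```

In practice, link targets typically land at the top-left corner of the destination item, so ties and near-misses are rare; restricting candidates to boxes on the target page keeps the lookup cheap.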
\begin{table}[h] \centering \setlength{\tabcolsep}{4pt} \begin{tabular}{@{}c|ccccc@{}} \toprule \footnotesize Model & \footnotesize AP & \footnotesize AP@$.50$ & \footnotesize AP@$.75$ & \footnotesize AP$_{m}$ & \footnotesize AP$_l$ \\ \midrule \footnotesize Reference segmentation & \footnotesize 81.70 & \footnotesize 86.98 & \footnotesize 84.47 & \footnotesize 74.68 & \footnotesize 84.72 \\ \bottomrule \end{tabular} \caption{Average precision values at different IoU thresholds} \label{tab:apscore} \vspace{-5pt} \end{table} \subsection{Results} We evaluated our model on the validation dataset using the hold-out validation technique. Usually, object detection models are evaluated following the COCO standards of evaluation. Hence, mAP (mean average precision) is an important metric for evaluating the performance of the model. The average precision values at different IoU thresholds are presented in Table \ref{tab:apscore}. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{images/Picture2.png} \caption{Output for research paper image from born-digital PDF } \label{fig:sample output1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{images/Picture3.png} \caption{Output for scanned research paper image } \label{fig:sample output2} \end{figure} We also validated our approach by applying it in two different settings: 1) images from born-digital PDFs and 2) images from a flatbed scanner. The output is shown in Fig. \ref{fig:sample output1} and Fig. \ref{fig:sample output2}, respectively. In both cases, we were able to extract reference items and their source keys successfully. This demonstrates the viability of our approach; hence, it can be used to identify and resolve other types of internal references as well. \section{Discussion} From Fig. \ref{fig:sample output1} and Fig. 
\ref{fig:sample output2}, we observe that the performance of our model is significantly better on images from born-digital PDFs than on real-world scanned images. Hence, one of the future directions is to make our model more robust to data distribution shifts through data augmentation. For some research paper images, we also observed false negatives for reference items continuing from the previous page. We plan to handle this case by adding more data annotations of this type in the future. Also, we found that exploiting some natural properties of reference items, such as the use of certain keywords at the beginning of a reference item, makes heuristic approaches very effective for our task. However, in the future, we plan to use a considerably expanded set of keywords to make our approach effective on more kinds of papers. \section{Conclusion} In this paper, we presented our ongoing work in making published documents accessible to blind, low-vision, and print-disabled individuals. We specifically focused on the problem of poor accessibility of internal references such as citations, footnotes, and table and figure references. We presented a vision-based technique to extract the metadata needed to make these internal references accessible. We successfully applied our technique to extract the requisite metadata from the bibliography section. We continue to work on summarizing the referenced content and inlining it into the audio narration to make internal references fully accessible to print-impaired individuals. \iffalse 
\end{abstract} \section{Introduction} \label{sec:intro} Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version. \subsection{Language} All manuscripts must be in English. \subsubsection{Dialect} Hello world. \subsection{Dual submission} Please refer to the author guidelines on the CVPR\ 2022\ web page for a discussion of the policy on dual submissions. \subsection{Paper length} Papers, excluding the references section, must be no longer than eight pages in length. The references section will not be included in the page count, and there is no limit on the length of the references section. For example, a paper of eight pages with two pages of references would have a total length of 10 pages. {\bf There will be no extra page charges for CVPR\ 2022.} Overlength papers will simply not be reviewed. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven. \subsection{The ruler} The \LaTeX\ style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document using a non-\LaTeX\ document preparation system, please arrange for an equivalent ruler to appear on the final output pages. 
The presence or absence of the ruler should not change the appearance of any other content on the page. The camera-ready copy should not contain a ruler. (\LaTeX\ users may use options of cvpr.sty to switch between different versions.) Reviewers: note that the ruler measurements do not align well with lines in the paper --- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. Just use fractional references (\eg, this line is $087.5$), although in most cases one would expect that the approximate location will be adequate. \subsection{Paper ID} Make sure that the Paper ID from the submission system is visible in the version submitted for review (replacing the ``*****'' you see in this document). If you are using the \LaTeX\ template, \textbf{make sure to update paper ID in the appropriate place in the tex file}. \subsection{Mathematics} Please number all of your sections and displayed equations as in these examples: \begin{equation} E = m\cdot c^2 \label{eq:important} \end{equation} and \begin{equation} v = a\cdot t. \label{eq:also-important} \end{equation} It is important for readers to be able to refer to any particular equation. Just because you did not refer to it in the text does not mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). All authors will benefit from reading Mermin's description of how to write mathematics: \url{http://www.pamitc.org/documents/mermin.pdf}. \subsection{Blind review} Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work---in fact it is often impossible to review a paper unless the previous citations are known and available. 
Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for tech reports.) Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith; it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper just asking to be rejected: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Removed for blind review \end{quote} An example of an acceptable paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith \etal [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission~\cite{Authors14} as supplemental material and cite it as \begin{quote} [1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324, Supplied as supplemental material {\tt fg324.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. 
For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a tech report for further details. Thus, you may say in the body of the paper ``further details may be found in~\cite{Authors14b}''. Then submit the tech report as supplemental material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool that is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the CVPR70 audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus \etal. You can handle this paper like any other. Do not write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] did not handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours, which show how well we solved cases A and B: ... \end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus \etal, but cannot make any decision based on that guess. 
He or she would have to be sure that no other authors could have been contracted to solve problem B. \medskip \noindent FAQ\medskip\\ {\bf Q:} Are acknowledgements OK?\\ {\bf A:} No. Leave them for the final copy.\medskip\\ {\bf Q:} How do I cite my results reported in open challenges? {\bf A:} To conform with the double-blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\ \begin{figure}[t] \centering \fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}} \caption{Example of caption. It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:onecol} \end{figure} \subsection{Miscellaneous} \noindent Compare the following:\\ \begin{tabular}{ll} \verb'$conf_a$' & $conf_a$ \\ \verb'$\mathit{conf}_a$' & $\mathit{conf}_a$ \end{tabular}\\ See The \TeX book, p165. The space after \eg, meaning ``for example'', should not be a sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided \verb'\eg' macro takes care of this. When citing a multi-author paper, you may save space by using ``et alia'', shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word). If you use the \verb'\etal' macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher \etal. However, use it only when there are three or more authors. Thus, the following is correct: ``Frobnication has been trendy lately. It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.'' This is incorrect: ``... 
subsequently developed by Alpher \etal~\cite{Alpher03} ...'' because reference~\cite{Alpher03} has just two authors. \begin{figure*} \centering \begin{subfigure}{0.68\linewidth} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \caption{An example of a subfigure.} \label{fig:short-a} \end{subfigure} \hfill \begin{subfigure}{0.28\linewidth} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \caption{Another example of a subfigure.} \label{fig:short-b} \end{subfigure} \caption{Example of a short caption, which should be centered.} \label{fig:short} \end{figure*} \section{Formatting your paper} \label{sec:formatting} All text must be in a two-column format. The total allowable size of the text area is $6\frac78$ inches (17.46 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the first page) should begin 1 inch (2.54 cm) from the top edge of the page. The second and following pages should begin 1 inch (2.54 cm) from the top edge. On all pages, the bottom margin should be $1\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately $1\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the page. \subsection{Margins and page numbering} All printed material, including text, illustrations, and charts, must be kept within a print area $6\frac{7}{8}$ inches (17.46 cm) wide by $8\frac{7}{8}$ inches (22.54 cm) high. Page numbers should be in the footer, centered and $\frac{3}{4}$ inches from the bottom of the page. The review version should have page numbers, yet the final version submitted as camera ready should not show any page numbers. The \LaTeX\ template takes care of this when used properly. \subsection{Type style and fonts} Wherever Times is specified, Times Roman may also be used. 
If neither is available on your word processor, please use the font closest in appearance to Times to which you have access. MAIN TITLE. Center the title $1\frac{3}{8}$ inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type. Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title. AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. This information is to be followed by two blank lines. The ABSTRACT and MAIN TEXT are to be in a two-column format. MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx.~$\frac{1}{6}$ inch or 0.422 cm). Make sure your text is fully justified---that is, flush left and flush right. Please do not place any additional blank lines between paragraphs. Figure and table captions should be 9-point Roman type as in \cref{fig:onecol,fig:short}. Short captions should be centred. \noindent Callouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings. FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after. SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line. 
\subsection{Footnotes} Please use footnotes\footnote{This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced. For the benefit of author(s) and readers, please use the {\small\begin{verbatim} \cref{...} \end{verbatim}} command for cross-referencing to figures, tables, equations, or sections. This will automatically insert the appropriate label alongside the cross-reference as in this example: \begin{quotation} To see how our method outperforms previous work, please see \cref{fig:onecol} and \cref{tab:example}. It is also possible to refer to multiple targets as once, \eg~to \cref{fig:onecol,fig:short-a}. You may also return to \cref{sec:formatting} or look at \cref{eq:also-important}. \end{quotation} If you do not wish to abbreviate the label, for example at the beginning of the sentence, you can use the {\small\begin{verbatim} \Cref{...} \end{verbatim}} command. Here is an example: \begin{quotation} \Cref{fig:onecol} is also quite important. \end{quotation} List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors14}. Where appropriate, include page numbers and the name(s) of editors of referenced books. When you cite multiple papers at once, please make sure that you cite them in numerical order like this \cite{Alpher02,Alpher03,Alpher05,Authors14b,Authors14}. If you use the template as advised, this will be taken care of automatically. 
\begin{table} \centering \begin{tabular}{@{}lc@{}} \toprule Method & Frobnability \\ \midrule Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \bottomrule \end{tabular} \caption{Results. Ours is better.} \label{tab:example} \end{table} \subsection{Illustrations, graphs, and photographs} All graphics should be centered. In \LaTeX, avoid using the \texttt{center} environment for this purpose, as this adds potentially unwanted whitespace. Instead use {\small\begin{verbatim} \centering \end{verbatim}} at the beginning of your figure. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths that render effectively in print. Readers (and reviewers), even of an electronic copy, may choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.pdf} \end{verbatim} } \subsection{Color} Please refer to the author guidelines on the CVPR\ 2022\ web page for a discussion of the use of color in your document. If you use color in your plots, please keep in mind that a significant subset of reviewers and readers may have a color vision deficiency; red-green blindness is the most frequent kind. Hence avoid relying only on color as the discriminative feature in plots (such as red \vs green lines), but add a second discriminative feature to ease disambiguation. \section{Final copy} You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings. 
Please direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society Press: \url{https://www.computer.org/about/contact}. \fi {\small \bibliographystyle{ieee_fullname} \section{Introduction} After receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\bf one page} PDF file. Please follow the steps and style guidelines outlined below for submitting your author response. The author rebuttal is optional and, following similar guidelines to previous CVPR conferences, is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers. It is NOT intended to add new contributions (theorems, algorithms, experiments) that were absent in the original submission and NOT specifically requested by the reviewers. You may optionally add a figure, graph, or proof to your rebuttal to better illustrate your answer to the reviewers' comments. Per a passed 2018 PAMI-TC motion, reviewers should refrain from requesting significant additional experiments for the rebuttal or penalize for lack of additional experiments. Authors should refrain from including new experimental results in the rebuttal, especially when not specifically requested to do so by the reviewers. Authors may include figures with illustrations or comparison tables of results reported in the submission/supplemental material or in other papers. Just like the original submission, the rebuttal must maintain anonymity and cannot include external links that reveal the author identity or circumvent the length restriction. The rebuttal must comply with this template (the use of sections is not required, though it is recommended to structure the rebuttal for ease of reading). \subsection{Response length} Author responses must be no longer than 1 page in length including any references and figures. Overlength responses will simply not be reviewed. 
This includes responses where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. \section{Formatting your Response} {\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.} All text must be in a two-column format. The total allowable size of the text area is $6\frac78$ inches (17.46 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The top margin should begin 1 inch (2.54 cm) from the top edge of the page. The bottom margin should be $1\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately $1\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the page. Please number any displayed equations. It is important for readers to be able to refer to any particular equation. Wherever Times is specified, Times Roman may also be used. Main text should be in 10-point Times, single-spaced. Section headings should be in 10 or 12 point Times. All paragraphs should be indented 1 pica (approx.~$\frac{1}{6}$ inch or 0.422 cm). Figure and table captions should be 9-point Roman type as in \cref{fig:onecol}. List and number all bibliographical references in 9-point Times, single-spaced, at the end of your response. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Alpher05}. Where appropriate, include the name(s) of editors of referenced books. \begin{figure}[t] \centering \fbox{\rule{0pt}{0.5in} \rule{0.9\linewidth}{0pt}} \caption{Example of caption. 
It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:onecol} \end{figure} To avoid ambiguities, it is best if the numbering for equations, figures, tables, and references in the author response does not overlap with that in the main paper (the reviewer may wonder if you talk about \cref{fig:onecol} in the author response or in the paper). See \LaTeX\ template for a workaround. \subsection{Illustrations, graphs, and photographs} All graphics should be centered. Please ensure that any point you wish to make is resolvable in a printed copy of the response. Resize fonts in figures to match the font in the body text, and choose line widths which render effectively in print. Readers (and reviewers), even of an electronic copy, may choose to print your response in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it is almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.pdf} \end{verbatim} } {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Hilbert $C^{\ast }$-modules are generalizations of Hilbert spaces obtained by allowing the inner product to take values in a $C^{\ast }$-algebra rather than in the field of complex numbers. The theory of Hilbert $C^{\ast }$-modules differs, however, from the theory of Hilbert spaces; for example, not every closed submodule of a Hilbert $C^{\ast }$-module is complemented. The notion of frames in Hilbert $C^{\ast }$-modules was introduced, and some of their properties were investigated, in \cite{2,3,13}. Using the well-known Kasparov stabilisation theorem \cite{10}, which states that any countably generated Hilbert $C^{\ast }$-module over $A$ is unitarily equivalent to a complemented submodule of $H_{A}$, Frank and Larson \cite{2} showed that any countably generated Hilbert module has a standard normalised frame. In \cite{13}, Raeburn and Thompson showed that the Kasparov stabilisation theorem is valid for Hilbert $C^{\ast }$-modules countably generated in the multiplier module. They also defined the concept of a standard frame of multipliers for a Hilbert $C^{\ast }$-module, and showed that every Hilbert $C^{\ast }$-module countably generated in the multiplier module admits a frame of multipliers, thus generalizing results of Frank and Larson \cite{2}. Pro-$C^{\ast }$-algebras are generalizations of $C^{\ast }$-algebras: instead of being given by a single $C^{\ast }$-norm, the topology of a pro-$C^{\ast }$-algebra is given by a directed family of $C^{\ast }$-seminorms. In the literature, pro-$C^{\ast }$-algebras appear under different names, such as $b^{\ast }$-algebras (C. Apostol), $LMC^{\ast }$-algebras (G. Lessner, K. Schm\"{u}dgen) or locally $C^{\ast }$-algebras (A. Inoue, M. Fragoulopoulou, A. Mallios, etc.). Hilbert modules over pro-$C^{\ast }$-algebras were considered independently by Mallios and Phillips \cite{12}.
Phillips showed that the Kasparov stabilisation theorem is valid for countably generated Hilbert modules over metrizable pro-$C^{\ast }$-algebras, and we showed that this theorem is valid for countably generated Hilbert modules over arbitrary pro-$C^{\ast }$-algebras \cite{7}. In this paper we extend some results from \cite{2,3,13} to the context of Hilbert modules over pro-$C^{\ast }$-algebras. The paper is organized as follows. In Section 2 we recall some facts about pro-$C^{\ast }$-algebras and Hilbert modules over pro-$C^{\ast }$-algebras. In Section 3, we introduce the concept of a frame of multipliers for Hilbert modules over pro-$C^{\ast }$-algebras and prove that any Hilbert module over a pro-$C^{\ast }$-algebra which is countably generated in the multiplier module admits a standard normalised frame of multipliers. We also show that the reconstruction formula is valid for standard normalised frames of multipliers and prove the existence of the frame transform. It is known that the bounded part $b(E)$ of a Hilbert module $E$ over a pro-$C^{\ast }$-algebra $A$ is a Hilbert $C^{\ast }$-module over $b(A)$, and that if $b(E)$ is countably generated in the multiplier module, then $E$ is countably generated in the multiplier module. We show that if $b(E)$ admits a standard frame of multipliers, then $E$ admits a standard frame of multipliers. In Section 4, we introduce the notion of a dual frame of multipliers and prove a necessary and sufficient condition for two frames of multipliers to be dual to each other. \section{Preliminaries} Let $A$ be a pro-$C^{\ast }$-algebra. The set $S(A)$ of all continuous $C^{\ast }$-seminorms on $A$ is directed ($p\geq q$ if $p(a)\geq q(a)$ for all $a\in A$). For each $p\in S(A)$, the quotient $\ast $-algebra $A/\ker p$, where $\ker p=\{a\in A;p(a)=0\}$, denoted by $A_{p}$, is a $C^{\ast }$-algebra in the $C^{\ast }$-norm induced by $p$. The canonical map from $A$ onto $A_{p}$ is denoted by $\pi _{p}^{A}$.
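A standard illustration of this construction is the following. If $X$ is a locally compact Hausdorff space, then the $\ast $-algebra $C(X)$ of all continuous complex-valued functions on $X$, with the topology of uniform convergence on compact subsets, is a pro-$C^{\ast }$-algebra whose topology is given by the $C^{\ast }$-seminorms
\begin{equation*}
p_{K}(f)=\sup \{\left\vert f(x)\right\vert ;x\in K\},\;K\subseteq X\text{ compact},
\end{equation*}
and, for each compact subset $K$ of $X$, the $C^{\ast }$-algebra $C(X)_{p_{K}}$ can be identified with $C(K)$ via $\pi _{p_{K}}^{C(X)}(f)\mapsto f|_{K}$.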
For $p,q\in S(A)$ with $p\geq q$, there is a canonical surjective morphism of $C^{\ast }$-algebras $\pi _{pq}^{A}:A_{p}\rightarrow A_{q}$ such that $\pi _{pq}^{A}(\pi _{p}^{A}(a))=\pi _{q}^{A}(a)$ for all $a\in A$, and $\{A_{p};\pi _{pq}^{A}\}_{p,q\in S(A),p\geq q}$ is an inverse system of $C^{\ast }$-algebras. Moreover, the pro-$C^{\ast }$-algebras $A$ and $\lim\limits_{\underset{p\in S(A)}{\leftarrow }}A_{p}$ can be identified. An element $a\in A$ is bounded if \begin{equation*} \left\Vert a\right\Vert _{\infty }=\sup \{p(a);p\in S(A)\}<\infty . \end{equation*} The set $b(A)$ of all bounded elements in $A$ is a $C^{\ast }$-algebra in the $C^{\ast }$-norm $\left\Vert \cdot \right\Vert _{\infty }$. Moreover, $b(A)$ is dense in $A$. Here we recall some facts about Hilbert modules over pro-$C^{\ast }$-algebras from \cite{7,12}. \begin{definition} A pre-Hilbert $A$-module is a complex vector space $E$ which is also a right $A$-module, compatible with the complex algebra structure, equipped with an $A$-valued inner product $\left\langle \cdot ,\cdot \right\rangle _{E}:E\times E\rightarrow A$ which is $\mathbb{C}$- and $A$-linear in its second variable and satisfies the following relations: \begin{enumerate} \item $\left\langle \xi ,\eta \right\rangle _{E}^{\ast }=\left\langle \eta ,\xi \right\rangle _{E}$ for every $\xi ,\eta \in E$; \item $\left\langle \xi ,\xi \right\rangle _{E}\geq 0$ for every $\xi \in E$; \item $\left\langle \xi ,\xi \right\rangle _{E}=0$ if and only if $\xi =0.
$ \end{enumerate} We say that $E$ is a Hilbert $A$-module if $E$ is complete with respect to the topology determined by the family of seminorms $\{\overline{p}_{E}\}_{p\in S(A)}$, where $\overline{p}_{E}(\xi )=\sqrt{p\left( \left\langle \xi ,\xi \right\rangle _{E}\right) },\xi \in E$.\smallskip \end{definition} An element $\xi $ in a Hilbert $A$-module $E$ is bounded if \begin{equation*} \left\Vert \xi \right\Vert _{\infty }=\sup \{\overline{p}_{E}(\xi );p\in S(A)\}<\infty . \end{equation*} The set $b(E)$ of all bounded elements is a Hilbert $b(A)$-module which is dense in $E$. A Hilbert $A$-module $E$ is countably generated if there is a countable set $\{\xi _{n}\}_{n}$ in $E$ such that the closed submodule of $E$ generated by $\{\xi _{n}a;a\in A,n=1,2,...\}$ is the whole of $E$. If $A$ is a pro-$C^{\ast }$-algebra, then $A$ is a Hilbert $A$-module with $\left\langle a,b\right\rangle _{A}=a^{\ast }b$, and the set $H_{A}$ of all sequences $(a_{n})_{n}$ with $a_{n}\in A$ such that $\tsum\limits_{n}a_{n}^{\ast }a_{n}$ converges in $A$ is a Hilbert $A$-module with the action of $A$ on $H_{A}$ defined by $(a_{n})_{n}b=(a_{n}b)_{n}$ and the inner product defined by $\left\langle (a_{n})_{n},(b_{n})_{n}\right\rangle _{H_{A}}=\tsum\limits_{n}a_{n}^{\ast }b_{n}$. Moreover, if $A$ has a countable approximate unit, then the Hilbert $A$-modules $A$ and $H_{A}$ are countably generated.
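Note that, for $E=A$, the seminorms determined by the inner product coincide with the original seminorms, since
\begin{equation*}
\overline{p}_{A}(a)=\sqrt{p\left( \left\langle a,a\right\rangle _{A}\right) }=\sqrt{p(a^{\ast }a)}=p(a)
\end{equation*}
for all $a\in A$ and $p\in S(A)$, by the $C^{\ast }$-condition $p(a^{\ast }a)=p(a)^{2}$.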
Let $E\;$be a Hilbert $A$-module.\ For $p\in S(A)$,$\;$the quotient vector space $E/\ker \left( \overline{p}_{E}\right) ,$ where $\ker \left( \overline{% p}_{E}\right) =\{\xi \in E;\overline{p}_{E}(\xi )=0\}$,$\;$denoted by $E_{p}$% , is a Hilbert $A_{p}$-module with $(\xi +\ker \left( \overline{p}% _{E}\right) {})\pi _{p}^{A}(a)=\xi a+\ker \left( \overline{p}_{E}\right) {}\; $and $\left\langle \xi +\ker \left( \overline{p}_{E}\right) {},\eta +\ker \left( \overline{p}_{E}\right) {}\right\rangle _{E_{p}}=\pi _{p}^{A}(\left\langle \xi ,\eta \right\rangle _{E})$.\ The canonical map from $E\;$onto $E_{p}$ is denoted by $\sigma _{p}^{E}$. For $p,q\in S(A)$ with $p\geq q\;$there is a canonical surjective morphism of vector spaces $% \sigma _{pq}^{E}\;$from $E_{p}\;$onto $E_{q}\;$such that $\sigma _{pq}^{E}(\sigma _{p}^{E}(\xi ))=\sigma _{q}^{E}(\xi )$ for all $\xi \in E,\; $and $\ \{E_{p};A_{p};\sigma _{pq}^{E},\pi _{pq}^{A}$\ $\}_{p,q\in S(A),p\geq q}$ is an inverse system of Hilbert $C^{\ast }$-modules in the following sense: $\sigma _{pq}^{E}(\xi _{p}a_{p})=\sigma _{pq}^{E}(\xi _{p})\pi _{pq}^{A}(a_{p}),\xi _{p}\in E_{p},a_{p}\in A_{p};$ $\left\langle \sigma _{pq}^{E}(\xi _{p}),\sigma _{pq}^{E}(\eta _{p})\right\rangle _{E_{q}}=\pi _{pq}^{A}(\left\langle \xi _{p},\eta _{p}\right\rangle _{E_{p}}),$\ $\ $\ $\xi _{p},\eta _{p}\in E_{p};$ $\sigma _{pp}^{E}(\xi _{p})=\xi _{p},\;\xi _{p}\in E_{p}\;$and $\sigma _{qr}^{E}\circ \sigma _{pq}^{E}=\sigma _{pr}^{E}\;$if $p\geq q\geq r$.$\ $The Hilbert $A$-modules $% \lim\limits_{\underset{p\in S(A)}{\leftarrow }}E_{p}$ and $E$ can be identified. 
Given two Hilbert $A$-modules $E$ and $F$, a module morphism $T:E\rightarrow F$ is continuous if for each $p\in S(A)$ there is $M_{p}>0$ such that $\overline{p}_{F}\left( T\xi \right) \leq M_{p}\overline{p}_{E}\left( \xi \right) $ for all $\xi \in E$, and it is adjointable if there is a module morphism $T^{\ast }:F\rightarrow E$ such that $\left\langle T\xi ,\eta \right\rangle =\left\langle \xi ,T^{\ast }\eta \right\rangle $ for every $\xi \in E$ and $\eta \in F$. Any adjointable module morphism is continuous. The set of all adjointable module morphisms from $E$ to $F$ is denoted by $L\left( E,F\right) $. For each $p\in S(A)$, there is a linear map $\left( \pi _{p}^{E,F}\right) _{\ast }:L(E,F)\rightarrow L\left( E_{p},F_{p}\right) $ defined by \begin{equation*} \left( \pi _{p}^{E,F}\right) _{\ast }\left( T\right) \left( \sigma _{p}^{E}(\xi )\right) =\sigma _{p}^{F}\left( T\left( \xi \right) \right) \end{equation*} for $T\in L(E,F)$ and $\xi \in E$. The vector space $L(E,F)$ is a complete locally convex space with respect to the topology defined by the family of seminorms $\{\widetilde{p}_{L(E,F)}\}_{p\in S(A)}$, where $\widetilde{p}_{L(E,F)}(T)=\left\Vert (\pi _{p}^{E,F})_{\ast }(T)\right\Vert _{L(E_{p},F_{p})}$, $T\in L(E,F)$. If $E=F$, $L(E,E)$ is denoted by $L(E)$ and it is a pro-$C^{\ast }$-algebra. For $p,q\in S(A)$ with $p\geq q$, there is a linear map $\left( \pi _{pq}^{E,F}\right) _{\ast }:L(E_{p},F_{p})\rightarrow L(E_{q},F_{q})$ such that \begin{equation*} \left( \pi _{pq}^{E,F}\right) _{\ast }\left( T_{p}\right) \left( \sigma _{q}^{E}\left( \xi \right) \right) =\sigma _{pq}^{F}\left( T_{p}\left( \sigma _{p}^{E}\left( \xi \right) \right) \right) \end{equation*} for all $T_{p}\in L(E_{p},F_{p})$ and $\xi \in E$, and $\{L(E_{p},F_{p});(\pi _{pq}^{E,F})_{\ast }\}_{p,q\in S(A),p\geq q}$ is an inverse system of Banach spaces.
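A basic example of an adjointable module morphism is the \textquotedblleft rank one\textquotedblright\ morphism $\theta _{\xi ,\eta }:F\rightarrow E$ associated with $\xi \in E$ and $\eta \in F$, defined by $\theta _{\xi ,\eta }(\zeta )=\xi \left\langle \eta ,\zeta \right\rangle _{F}$. A direct computation shows that $\theta _{\xi ,\eta }\in L(F,E)$ with $\theta _{\xi ,\eta }^{\ast }=\theta _{\eta ,\xi }$, since
\begin{equation*}
\left\langle \theta _{\xi ,\eta }(\zeta ),\mu \right\rangle _{E}=\left\langle \eta ,\zeta \right\rangle _{F}^{\ast }\left\langle \xi ,\mu \right\rangle _{E}=\left\langle \zeta ,\eta \right\rangle _{F}\left\langle \xi ,\mu \right\rangle _{E}=\left\langle \zeta ,\theta _{\eta ,\xi }(\mu )\right\rangle _{F}
\end{equation*}
for all $\zeta \in F$ and $\mu \in E$.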
Moreover, the complete locally convex spaces $\lim\limits_{% \underset{p\in S(A)}{\leftarrow }}L(E_{p},F_{p})$ and $L(E,F)$ are isomorphic. The set $b(L(E,F))$ of all bounded elements in $L(E,F)$ ($T\in b(L(E,F))$ if $\sup \{\widetilde{p}_{L(E,F)}(T);p\in S(A)\}<\infty $) is a Banach space which is isomorphic to $L(b(E),b(F))$. Now we recall some facts about multiplier modules from \cite{8,13}. The set $% L(A,E)$ of all adjointable module morphisms from $A$ to $E$ is a Hilbert $% L(A)$-module with the action of $L(A)$ on $L(A,E)$ defined by \begin{equation*} L(A,E)\times L(A)\backepsilon \left( T,S\right) \mapsto T\cdot S=T\circ S\in L(A,E) \end{equation*}% and the inner product defined by \begin{equation*} L(A,E)\times L(A,E)\backepsilon \left( T,R\right) \mapsto \left\langle T,R\right\rangle =T^{\ast }\circ R\in L(A). \end{equation*}% Since the locally $C^{\ast }$-algebras $L(A)$ and $M(A),$ the multiplier algebra of $A$, can be identified \cite{12,7}, the Hilbert $L(A)$-module $% L(A,E)$ can be regarded as a Hilbert $M(A)$-module. The Hilbert $M(A)$% -module $L(A,E)$ is called \textit{the multiplier module} of $E$ and it is denoted by $M(E)$. Moreover, the topology on $M(E)$ induced by the inner product coincides with the topology defined by the family of seminorms $\{% \overline{p}_{M(E)}\}_{p\in S(A)}$, with \begin{equation*} \overline{p}_{M(E)}(h)=\widetilde{p}_{L(A,E)}\left( h\right) \end{equation*}% for all $h\in M(E)\;$and for all $p\in S(A)$. The map i$_{E}:E\rightarrow M(E)$ defined by \begin{equation*} \text{i}_{E}(\xi )\left( a\right) =\xi a,\xi \in E,a\in A \end{equation*}% identifies $E$ with a Hilbert submodule of $M(E)$ and then \begin{equation*} \left\langle h,\xi \right\rangle _{M(E)}=h^{\ast }\left( \xi \right) \end{equation*}% for all $h\in M(E)$ and $\xi \in E$. Moreover, if $a\in A$ and $h\in M(E)$, then $h\cdot a$ can be identified with $h(a)$.
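The following brief computation is only a sketch illustrating these identifications (it is not used later): the embedding i$_{E}$ preserves inner products up to the canonical embedding of $A$ in $M(A)$. Indeed, for $\xi ,\eta \in E$ and $a\in A$, \begin{equation*} \left\langle \text{i}_{E}(\xi ),\text{i}_{E}(\eta )\right\rangle _{M(E)}\left( a\right) =\left( \text{i}_{E}(\xi )^{\ast }\circ \text{i}_{E}(\eta )\right) \left( a\right) =\text{i}_{E}(\xi )^{\ast }\left( \eta a\right) =\left\langle \xi ,\eta a\right\rangle _{E}=\left\langle \xi ,\eta \right\rangle _{E}a, \end{equation*}% where we used that $\text{i}_{E}(\xi )^{\ast }\left( \chi \right) =\left\langle \xi ,\chi \right\rangle _{E}$ for all $\chi \in E$, which follows from $\left\langle \text{i}_{E}(\xi )(b),\chi \right\rangle _{E}=b^{\ast }\left\langle \xi ,\chi \right\rangle _{E}=\left\langle b,\left\langle \xi ,\chi \right\rangle _{E}\right\rangle _{A}$ for all $b\in A$. Thus $\left\langle \text{i}_{E}(\xi ),\text{i}_{E}(\eta )\right\rangle _{M(E)}$ acts on $A$ as left multiplication by $\left\langle \xi ,\eta \right\rangle _{E}$.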
\begin{definition} A Hilbert $A$-module $E$ is countably generated in $M(E)$ if there is a countable set $\{h_{n};$ $h_{n}\in M(E),$ $n=1,2,...\}$ such that the closed submodule of $M(E)$ generated by $\{h_{n}\cdot a;$ $a\in A,$ $n=1,2,...\}$ is the whole of $E$. \end{definition} \begin{remark} If the Hilbert $A$-module $E$ is countably generated in $M(E)$, then, for each $p\in S(A),$ $\{(\pi _{p}^{A,E})_{\ast }(h_{n});$ $h_{n}\in M(E),$ $% n=1,2,...\}\subseteq M(E_{p})$ is a generating set for $E_{p}$, since $(\pi _{p}^{A,E})_{\ast }(h_{n})\cdot \pi _{p}^{A}(a)=\sigma _{p}^{E}\left( h_{n}\cdot a\right) $ for all $n=1,2,...$ and for all $a\in A$, and since $% \sigma _{p}^{E}(E)=E_{p}$. Therefore the Hilbert $A_{p}$-module $E_{p}$ is countably generated in $M(E_{p})$ for each $p\in S(A)$. \end{remark} \begin{example} If $A$ is a pro-$C^{\ast }$-algebra, then the Hilbert $A$-module $A$ is countably generated in $M(A)$, $\{1_{M(A)}\}$ being a generating set. \end{example} \begin{example} For any pro-$C^{\ast }$-algebra $A$, the Hilbert $A$-module $H_{A}$ is countably generated in $M(H_{A})$. Indeed, for each positive integer $n$ consider the linear map $% e_{n}:A\rightarrow H_{A}$ defined by $e_{n}(a)=(0,...0,a,0,...)$, the element in $% H_{A}$ all of whose components are $0$ except the $n^{\text{th}}$ component, which is $a$. Clearly, $e_{n}$ is a module morphism. Moreover, $e_{n}$ is adjointable and $e_{n}^{\ast }\left( \left( a_{m}\right) _{m}\right) =$ $% a_{n}$. Let $\left( a_{n}\right) _{n}\in H_{A}$.
Since \begin{equation*} \overline{p}_{H_{A}}\left( \left( a_{n}\right) _{n}-\sum\limits_{k=1}^{m}e_{k}\cdot a_{k}\right) ^{2}=p\left( \sum\limits_{k=m+1}^{\infty }a_{k}^{\ast }a_{k}\right) \end{equation*}% for all $p\in S(A)$ and for every positive integer $m,$ $\left( a_{n}\right) _{n}=\sum\limits_{k}e_{k}\cdot a_{k}.$ Therefore, the Hilbert submodule of $% M(H_{A})$ generated by $\{e_{m}\cdot a;a\in A,m=1,2,...\}$ is $H_{A}$, and so $\{e_{m},m=1,2,...\}$ $\subseteq M(H_{A})$ is a generating set for $H_{A}$% . \end{example} \begin{remark} If $E$ is a countably generated Hilbert $A$-module, then $E$ is countably generated in $M(E)$. In general, $E$ need not be countably generated when $E$ is countably generated in $M(E)$. \textbf{Example.} Let $A$ be a pro-$C^{\ast }$-algebra which does not have a countable approximate unit. We have seen that the Hilbert $A$-module $A$ is countably generated in $M(A)$ but it is not countably generated. \end{remark} \begin{remark} If $E$ is a Hilbert $A$-module such that $b(E)$ is countably generated in $% M(b(E)),$ then $E$ is countably generated in $M(E)$. \end{remark} \section{Frame transform and reconstruction frame} Let $E$ be a Hilbert $A$-module. \begin{definition} A sequence $\{h_{n}\}_{n}$ in $M(E)$ is a standard frame of multipliers in $% E $ if for each $\xi \in E$, $\sum\limits_{n}\left\langle \xi ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\xi \right\rangle _{M(E)}$ converges in $A$, and there are two positive constants $C$ and $D$ such that \begin{equation*} C\left\langle \xi ,\xi \right\rangle _{E}\leq \sum\limits_{n}\left\langle \xi ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\xi \right\rangle _{M(E)}\leq D\left\langle \xi ,\xi \right\rangle _{E} \end{equation*}% for all $\xi \in E$. If $D=C=1$ we say that $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers. \end{definition} \begin{remark} Let $\{h_{n}\}_{n}$ be a sequence in $M(E)$.
If $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$, then $\{\left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) \}_{n}$ is a standard frame of multipliers in $% E_{p}$ for each $p\in S(A)$. \end{remark} \begin{remark} Let $\{h_{n}\}_{n}$ be a sequence in $M(E)$. Then $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$ if and only if $\{\left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) \}_{n}$ is a standard normalised frame of multipliers in $E_{p}$ for each $p\in S(A).$ \end{remark} \begin{remark} If $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$, then $h_{n}\in b(M(E))$ for every positive integer $n$. Indeed, let $\xi \in b(E)$ and let $m$ be a positive integer. From \begin{equation*} 0\leq \left\langle \xi ,h_{m}\right\rangle _{M(E)}\left\langle h_{m},\xi \right\rangle _{M(E)}\leq \sum\limits_{n}\left\langle \xi ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\xi \right\rangle _{M(E)}\leq D\left\langle \xi ,\xi \right\rangle _{E} \end{equation*}% and \cite{4}, we deduce that \begin{equation*} p\left( \left\langle h_{m},\xi \right\rangle _{M(E)}\right) ^{2}\leq Dp\left( \left\langle \xi ,\xi \right\rangle _{E}\right) \end{equation*}% for all $p\in S(A)$. Then \begin{equation*} \overline{p}_{A}\left( h_{m}^{\ast }\left( \xi \right) \right) ^{2}\leq D% \overline{p}_{E}\left( \xi \right) ^{2}\leq D\left\Vert \xi \right\Vert _{\infty }^{2} \end{equation*}% for all $p\in S(A)$. This implies that $h_{m}^{\ast }\in b(M(E))$. Therefore $h_{m}\in b(M(E))$. \end{remark} \begin{example} For any pro-$C^{\ast }$-algebra $A$, $\{e_{n}\}_{n}$ is a standard normalised frame of multipliers in $H_{A}$.
Indeed, if $\left( a_{n}\right) _{n}\in H_{A}$ then, since \begin{equation*} \left\langle \left( a_{m}\right) _{m},e_{n}\right\rangle _{M\left( H_{A}\right) }\left\langle e_{n},\left( a_{m}\right) _{m}\right\rangle _{M\left( H_{A}\right) }=a_{n}^{\ast }a_{n} \end{equation*}% for each positive integer $n$, we have \begin{equation*} \sum\limits_{n}\left\langle \left( a_{m}\right) _{m},e_{n}\right\rangle _{M\left( H_{A}\right) }\left\langle e_{n},\left( a_{m}\right) _{m}\right\rangle _{M\left( H_{A}\right) }=\sum\limits_{n}a_{n}^{\ast }a_{n}=\left\langle \left( a_{m}\right) _{m},\left( a_{m}\right) _{m}\right\rangle _{H_{A}} \end{equation*}% and so $\{e_{n}\}_{n}$ is a standard normalised frame of multipliers in $% H_{A}$. \end{example} \begin{proposition} Any countably generated Hilbert $A$-module $E$ in $M(E)$ admits a standard normalised frame of multipliers. \end{proposition} \proof% Indeed, let $P:H_{A}\rightarrow E$ be the projection of $H_{A}$ on $E$ \cite[% Theorem 4.2]{8}. Then $\{\left( \pi _{p}^{H_{A},E}\right) _{\ast }(P)\circ \left( \pi _{p}^{A,H_{A}}\right) _{\ast }\left( e_{n}\right) \}_{n}$ is a standard normalised frame of multipliers in $E_{p}$ for each $p\in S(A)$ \cite[Corollary 3.3]{13}. From this fact, Remark 3.3 and taking into account that $\left( \pi _{p}^{H_{A},E}\right) _{\ast }(P)\circ \left( \pi _{p}^{A,H_{A}}\right) _{\ast }\left( e_{n}\right) =$ $\left( \pi _{p}^{A,E}\right) _{\ast }\left( P\circ e_{n}\right) $ for all $n$ and for all $p\in S(A)$ we conclude that $\{P\circ e_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$.% \endproof% \begin{theorem} ( The reconstruction formula) Let $E$ be a countably generated Hilbert $A$% -module in $M(E)$ and let $\{h_{n}\}_{n}$ be a sequence in $M(E)$. 
Then $% \{h_{n}\}_{n}$ is a standard normalised frame of multipliers if and only if for all $\xi \in E$, $\sum\limits_{n}h_{n}\cdot \left\langle h_{n},\xi \right\rangle _{M(E)}$ converges in $E$ and moreover, \begin{equation*} \xi =\sum\limits_{n}h_{n}\cdot \left\langle h_{n},\xi \right\rangle _{M(E)}% \text{.} \end{equation*} \end{theorem} \proof By Remark 3.3 and \cite[Theorem 3.4]{13}, $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$ if and only if $\sum\limits_{n}\left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) \cdot \left\langle \left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) ,\sigma _{p}^{E}\left( \xi \right) \right\rangle _{M(E_{p})}$ converges in $E_{p}$ for all $\xi \in E$ and for each $p\in S(A)$, and moreover, \begin{equation*} \sigma _{p}^{E}\left( \xi \right) =\sum\limits_{n}\left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) \cdot \left\langle \left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) ,\sigma _{p}^{E}\left( \xi \right) \right\rangle _{M(E_{p})}\text{.} \end{equation*}% From this fact and taking into account that \begin{eqnarray*} &&\overline{p}_{E}\left( \xi -\sum\limits_{k=1}^{n}h_{k}\cdot \left\langle h_{k},\xi \right\rangle _{M(E)}\right) \\ &=&\left\Vert \sigma _{p}^{E}\left( \xi \right) -\sigma _{p}^{E}\left( \sum\limits_{k=1}^{n}h_{k}\cdot \left\langle h_{k},\xi \right\rangle _{M(E)}\right) \right\Vert _{E_{p}} \\ &=&\left\Vert \sigma _{p}^{E}\left( \xi \right) -\sum\limits_{k=1}^{n}\left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{k}\right) \cdot \left\langle \left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{k}\right) ,\sigma _{p}^{E}\left( \xi \right) \right\rangle _{M(E_{p})}\right\Vert _{E_{p}} \end{eqnarray*}% for all $\xi \in E$, for all $p\in S(A)$ and for every positive integer $n$, we deduce that $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$ if and only if $\sum\limits_{n}h_{n}\cdot \left\langle h_{n},\xi \right\rangle _{M(E)}$ converges in $E$ for all $\xi \in E$, and moreover, $% \xi
=\sum\limits_{n}h_{n}\cdot \left\langle h_{n},\xi \right\rangle _{M(E)}$ for all $\xi \in E$.% \endproof% \begin{remark} If $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$, then $\tsum\limits_{n}\left( h_{n}\circ h_{n}^{\ast }\right) \left( \xi \right) =\xi $ for all $\xi \in E$, since $\left( h_{n}\circ h_{n}^{\ast }\right) \left( \xi \right) =h_{n}\cdot \left\langle h_{n},\xi \right\rangle _{M(E)}$ for each positive integer $n$. Therefore, $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$ if and only if $\tsum\limits_{n}\left( h_{n}\circ h_{n}^{\ast }\right) \left( \xi \right) $ converges in $E$ for each $\xi \in E$ and moreover, $\tsum\limits_{n}\left( h_{n}\circ h_{n}^{\ast }\right) \left( \xi \right) =\xi $. \end{remark} \begin{corollary} If $b(E)$ admits a standard normalised frame of multipliers, then $E$ admits a standard normalised frame of multipliers. \end{corollary} \proof We will show that if $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers in $b(E)$, then $\{\widetilde{h_{n}}\}_{n}$, where $\widetilde{% h_{n}}$ is the extension of $h_{n}$ to an element in $M(E)$, is a standard normalised frame of multipliers in $E$. Let $\xi \in E$, $p\in S(A)$ and $\varepsilon >0$. Since $b(E)$ is dense in $% E$, there is $\xi _{0}\in b(E)$ such that $\overline{p}_{E}\left( \xi -\xi _{0}\right) \leq \varepsilon /3$. Since $\{h_{n}\}_{n}$ is a standard normalised frame of multipliers in $b(E)$, there is $n_{0}$ such that \begin{equation*} \left\Vert \xi _{0}-\tsum\limits_{k=1}^{n}\left( h_{k}\circ h_{k}^{\ast }\right) \left( \xi _{0}\right) \right\Vert _{\infty }\leq \varepsilon /3 \end{equation*}% for all $n$ with $n\geq n_{0}$.
Then \begin{eqnarray*} \overline{p}_{E}\left( \xi -\tsum\limits_{k=1}^{n}\left( \widetilde{h_{k}}% \circ \widetilde{h_{k}}^{\ast }\right) \left( \xi \right) \right) &\leq &% \overline{p}_{E}\left( \xi -\xi _{0}\right) +\overline{p}_{E}\left( \xi _{0}-\tsum\limits_{k=1}^{n}\left( h_{k}\circ h_{k}^{\ast }\right) \left( \xi _{0}\right) \right) \\ &&+\overline{p}_{E}\left( \tsum\limits_{k=1}^{n}\left( \widetilde{h_{k}}% \circ \widetilde{h_{k}}^{\ast }\right) \left( \xi -\xi _{0}\right) \right) \\ &\leq &\varepsilon /3+\left\Vert \xi _{0}-\tsum\limits_{k=1}^{n}\left( h_{k}\circ h_{k}^{\ast }\right) \left( \xi _{0}\right) \right\Vert _{\infty } \\ &&+\widetilde{p}_{L(E)}\left( \tsum\limits_{k=1}^{n}\widetilde{h_{k}}\circ \widetilde{h_{k}}^{\ast }\right) \overline{p}_{E}\left( \xi -\xi _{0}\right) \\ &\leq &\varepsilon /3\left( 2+\left\Vert \tsum\limits_{k=1}^{n}\widetilde{% h_{k}}\circ \widetilde{h_{k}}^{\ast }\right\Vert _{\infty }\right) \\ &=&\varepsilon /3\left( 2+\left\Vert \tsum\limits_{k=1}^{n}h_{k}\circ h_{k}^{\ast }\right\Vert \right) \\ &\leq &\varepsilon \end{eqnarray*}% for all $n$ with $n\geq n_{0}.$ This shows that $\tsum\limits_{n}\left( \widetilde{h_{n}}\circ \widetilde{h_{n}}^{\ast }\right) \left( \xi \right) $ converges to $\xi $ in $E$ for each $\xi \in E$ and so $\{\widetilde{h_{n}}% \}_{n}$ is a standard normalised frame of multipliers in $E.$ \endproof% If $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$, then $% \sum\limits_{n}\left\langle \xi ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\xi \right\rangle _{M(E)}$ converges in $A$ for all $\xi \in E$. From this fact and taking into account that $\left\langle h_{n},\xi \right\rangle _{M(E)}\in A$ for every positive integer $n$, we conclude that $\left( \left\langle h_{n},\xi \right\rangle _{M\left( E\right) }\right) _{n}\in H_{A}$.
Thus we can define a linear map $\theta :E\rightarrow H_{A}$ by \begin{equation*} \theta \left( \xi \right) =\left( \left\langle h_{n},\xi \right\rangle _{M\left( E\right) }\right) _{n}. \end{equation*}% Moreover, $\theta $ is a continuous module morphism, since \begin{equation*} \theta \left( \xi a\right) =\left( \left\langle h_{n},\xi a\right\rangle _{M\left( E\right) }\right) _{n}=\left( \left\langle h_{n},\xi \right\rangle _{M\left( E\right) }a\right) _{n}=\theta \left( \xi \right) a \end{equation*}% for all $\xi \in E$ and for all $a\in A$ and \begin{equation*} \overline{p}_{H_{A}}\left( \theta \left( \xi \right) \right) ^{2}=p\left( \sum\limits_{n}\left\langle \xi ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\xi \right\rangle _{M(E)}\right) \leq D\overline{p}_{E}\left( \xi \right) ^{2} \end{equation*}% for all $\xi \in E$ and for all $p\in S(A)$. \begin{definition} Let $\{h_{n}\}_{n}$ be a standard frame of multipliers in $E$. The module morphism $\theta :E\rightarrow H_{A}$ defined by $\theta \left( \xi \right) =\left( \left\langle h_{n},\xi \right\rangle _{M\left( E\right) }\right) _{n} $ is called the frame transform for $\{h_{n}\}_{n}$. \end{definition} \begin{theorem} ( The frame transform) Let $E$ be a countably generated Hilbert $A$-module in $M(E)$ and let $\{h_{n}\}_{n}$ be a standard frame of multipliers in $E$. The frame transform $\theta $ is an adjointable module morphism which realizes an embedding of $E$ onto an orthogonal summand of $H_{A}$, and $% \theta ^{\ast }\circ e_{n}=h_{n}$ for all $n$. Moreover, $\theta ^{\ast }\circ \theta $ is an invertible element in $b(L\left( E\right) )$. \end{theorem} \proof% Since $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$, $\{\left( \left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) \right) \}_{n}$ is a standard frame of multipliers in $E_{p}$ for each $p\in S(A)$. Let $p\in S(A)$.
The frame transform for $\{\left( \left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) \right) \}_{n}$ is an adjointable operator $\theta _{p}:E_{p}\rightarrow H_{A_{p}}$ defined by \begin{equation*} \theta _{p}\left( \sigma _{p}^{E}\left( \xi \right) \right) =\left( \left\langle \left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) ,\sigma _{p}^{E}\left( \xi \right) \right\rangle _{M(E_{p})}\right) _{n}. \end{equation*}% Moreover, $\theta _{p}$ preserves the inner product and $\theta _{p}^{\ast }\circ \left( \pi _{p}^{A,H_{A}}\right) _{\ast }\left( e_{n}\right) =\left( \pi _{p}^{A,E}\right) _{\ast }\left( h_{n}\right) $ \cite[Theorem 3.5]{13}. Since \begin{equation*} \left( \pi _{pq}^{E,H_{A}}\right) _{\ast }\left( \theta _{p}\right) =\theta _{q} \end{equation*}% for all $p,q\in S(A)$ with $p\geq q$, there is $\theta \in L(E,H_{A})$ such that \begin{equation*} \left( \pi _{p}^{E,H_{A}}\right) _{\ast }\left( \theta \right) =\theta _{p} \end{equation*}% for all $p\in S(A)$. Clearly \begin{equation*} \theta \left( \xi \right) =\left( \left\langle h_{n},\xi \right\rangle _{M\left( E\right) }\right) _{n} \end{equation*}% for all $\xi \in E$. Therefore $\theta $ is the frame transform for $% \{h_{n}\}_{n}$. Moreover, $\theta $ preserves the inner product and $\theta ^{\ast }\circ e_{n}=h_{n}$ for all $n$. From \begin{eqnarray*} C\overline{p}_{E}\left( \xi \right) ^{2} &=&Cp\left( \left\langle \xi ,\xi \right\rangle _{E}\right) \leq p\left( \sum\limits_{n}\left\langle \xi ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\xi \right\rangle _{M(E)}\right) \\ &=&\overline{p}_{H_{A}}\left( \theta \left( \xi \right) \right) ^{2}\leq Dp\left( \left\langle \xi ,\xi \right\rangle _{E}\right) =D\overline{p}% _{E}\left( \xi \right) ^{2} \end{eqnarray*}% for all $p\in S(A)$ and for all $\xi \in E$, we conclude that $\theta $ is an injective adjointable module morphism from $E$ to $H_{A}$ with closed range. Moreover, $\theta \in b(L(E,H_{A}))$.
By \cite[Theorem 2.2]{5}, $% \theta \left( E\right) $ has an orthogonal complement in $H_{A},$ $\theta ^{\ast }$ is surjective and the restriction $\theta ^{\ast }|_{\theta (E)}$ of $\theta ^{\ast }$ on $\theta (E)$ is an invertible element in $b(L(\theta (E),E))$. Then $\theta ^{\ast }\circ \theta $ is invertible, and moreover, $\theta ^{\ast }\circ \theta \in b(L(E))$. \endproof% \begin{theorem} Let $E$ be a countably generated Hilbert $A$-module in $M(E)$ and let $% \{h_{n}\}_{n}$ be a sequence in $M(E)$. Then $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$ if and only if there is an invertible element $T$ in $b(L(E))$ such that $\{T\circ h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$. \end{theorem} \proof% Suppose that $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$. Let $% \theta $ be the frame transform. Then $\theta ^{\ast }\circ \theta $, and so $\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}$, is an invertible element in $b(L(E))$. Let $\xi \in E$.
Then there is $\eta \in E$ such that $\left( \theta ^{\ast }\circ \theta \right) ^{\frac{1}{2}}\left( \eta \right) =\xi $ and \begin{eqnarray*} \left\langle \xi ,\xi \right\rangle _{E} &=&\left\langle \left( \theta ^{\ast }\circ \theta \right) ^{\frac{1}{2}}\left( \eta \right) ,\left( \theta ^{\ast }\circ \theta \right) ^{\frac{1}{2}}\left( \eta \right) \right\rangle _{E}=\left\langle \theta \left( \eta \right) ,\theta \left( \eta \right) \right\rangle _{H_{A}} \\ &=&\sum\limits_{n}\left\langle \eta ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\eta \right\rangle _{M(E)} \\ &=&\sum\limits_{n}\left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-% \frac{1}{2}}(\xi ),h_{n}\right\rangle _{M(E)}\left\langle h_{n},\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}(\xi )\right\rangle _{M(E)} \\ &=&\sum\limits_{n}\left\langle \xi ,\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}\circ h_{n}\right\rangle _{M(E)}\left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}\circ h_{n},\xi \right\rangle _{M(E)}\text{.} \end{eqnarray*}% Therefore $\{\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}\circ h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$. Conversely, suppose that there is an invertible element $T$ in $b(L(E))$ such that $\{T\circ h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$. 
Since $T\in b(L(E))$,% \begin{eqnarray*} \pi _{p}^{A}\left( \left\langle T(\xi ),T(\xi )\right\rangle _{E}\right) &=&\left\langle \left( \pi _{p}^{E,E}\right) _{\ast }(T)(\sigma _{p}^{E}\left( \xi \right) ),\left( \pi _{p}^{E,E}\right) _{\ast }(T)(\sigma _{p}^{E}\left( \xi \right) )\right\rangle _{E_{p}} \\ &&\text{( see, for example, \cite[Proposition 1.2]{11})} \\ &\leq &\left\Vert \left( \pi _{p}^{E,E}\right) _{\ast }(T)\right\Vert ^{2}\left\langle \sigma _{p}^{E}\left( \xi \right) ,\sigma _{p}^{E}\left( \xi \right) \right\rangle _{E_{p}} \\ &=&\widetilde{p}_{L(E)}(T)^{2}\pi _{p}^{A}\left( \left\langle \xi ,\xi \right\rangle _{E}\right) \\ &\leq &\left\Vert T\right\Vert _{\infty }^{2}\pi _{p}^{A}\left( \left\langle \xi ,\xi \right\rangle _{E}\right) \end{eqnarray*}% for all $p\in S(A)$ and for all $\xi \in E,$ and then by \cite{4} \begin{equation*} \left\langle T(\xi ),T(\xi )\right\rangle _{E}\leq \left\Vert T\right\Vert _{\infty }^{2}\left\langle \xi ,\xi \right\rangle _{E} \end{equation*}% for all $\xi \in E.$ Then, since $T$ is invertible we have \begin{equation*} \left\Vert T\right\Vert _{\infty }^{-2}\left\langle \xi ,\xi \right\rangle _{E}\leq \left\langle \left( T^{\ast }\right) ^{-1}\left( \xi \right) ,\left( T^{\ast }\right) ^{-1}\left( \xi \right) \right\rangle _{E}\leq \left\Vert T^{-1}\right\Vert _{\infty }^{2}\left\langle \xi ,\xi \right\rangle _{E} \end{equation*}% for all $\xi \in E$. 
From these relations and taking into account that \begin{eqnarray*} &&\left\langle \left( T^{\ast }\right) ^{-1}\left( \xi \right) ,\left( T^{\ast }\right) ^{-1}\left( \xi \right) \right\rangle _{E} \\ &&\text{( since }\{T\circ h_{n}\}_{n}\ \text{is a standard normalised frame of multipliers in }E\text{)} \\ &=&\sum\limits_{n}\left\langle \left( T^{\ast }\right) ^{-1}\left( \xi \right) ,T\circ h_{n}\right\rangle _{M(E)}\left\langle T\circ h_{n},\left( T^{\ast }\right) ^{-1}\left( \xi \right) \right\rangle _{M(E)} \\ &=&\lim_{n}\sum\limits_{k=1}^{n}\left\langle \left( T^{\ast }\right) ^{-1}\left( \xi \right) ,T\circ h_{k}\right\rangle _{M(E)}\left\langle T\circ h_{k},\left( T^{\ast }\right) ^{-1}\left( \xi \right) \right\rangle _{M(E)} \\ &=&\lim_{n}\sum\limits_{k=1}^{n}\left\langle \xi ,T^{-1}\circ \left( T\circ h_{k}\right) \right\rangle _{M(E)}\left\langle T^{-1}\circ \left( T\circ h_{k}\right) ,\xi \right\rangle _{M(E)} \\ &=&\lim_{n}\sum\limits_{k=1}^{n}\left\langle \xi ,h_{k}\right\rangle _{M(E)}\left\langle h_{k},\xi \right\rangle _{M(E)} \end{eqnarray*}% for all $\xi \in E$, we deduce that $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$. \endproof% \begin{corollary} If $b(E)$ admits a standard frame of multipliers, then $E$ admits a standard frame of multipliers. \end{corollary} \proof Let $\{h_{n}\}_{n}$ be a standard frame of multipliers in $b(E)$. By Theorem 3.12 there is an invertible element $T\in L(b(E))$ such that $\{T\circ h_{n}\}_{n}$ is a standard normalised frame of multipliers in $b(E)$.
Since $% \widetilde{T\circ h_{n}}=\widetilde{T}\circ \widetilde{h_{n}}$ ($\widetilde{% T\circ h_{n}}$ denotes the extension of $T\circ h_{n}$ to an element in $% M(E) $ ) for each positive integer $n$, by Corollary 3.9, $\{\widetilde{T}% \circ \widetilde{h_{n}}\}_{n}$ is a standard normalised frame of multipliers in $E, $ and then, by Theorem 3.12, $\{\widetilde{h_{n}}\}_{n}$ is a standard frame of multipliers in $E$, since $\widetilde{T}$ is an invertible element in $b(L(E))$.% \endproof% \section{Dual frames} \begin{lemma} Let $E$ be a countably generated Hilbert $A$-module in $M(E)$, let $% \{h_{n}\}_{n}$ be a standard frame of multipliers in $E$, and let $S$ be an invertible positive element in $b(L(E))$. Then $\{S\circ h_{n}\}_{n}$ is a standard frame of multipliers in $E$. \end{lemma} \proof% Let $\xi \in E$. Since \begin{equation*} \sum\limits_{k=1}^{n}\left\langle \xi ,S\circ h_{k}\right\rangle _{M(E)}\left\langle S\circ h_{k},\xi \right\rangle _{M(E)}=\sum\limits_{k=1}^{n}\left\langle S^{\ast }\left( \xi \right) ,h_{k}\right\rangle _{M(E)}\left\langle h_{k},S^{\ast }\left( \xi \right) \right\rangle _{M(E)} \end{equation*}% and since $\{h_{n}\}_{n}$ is a standard frame of multipliers in $E$, $% \sum\limits_{n}\left\langle \xi ,S\circ h_{n}\right\rangle _{M(E)}\left\langle S\circ h_{n},\xi \right\rangle _{M(E)}$ converges in $A$% . Moreover, \begin{equation*} C\left\langle S^{\ast }(\xi ),S^{\ast }(\xi )\right\rangle _{E}\leq \sum\limits_{n}\left\langle \xi ,S\circ h_{n}\right\rangle _{M(E)}\left\langle S\circ h_{n},\xi \right\rangle _{M(E)}\leq D\left\langle S^{\ast }(\xi ),S^{\ast }(\xi )\right\rangle _{E}. 
\end{equation*}% Since $S$ is an invertible positive element in $b(L(E))$, $S^{\ast }$ is an invertible element in $b(L(E))$ and then \begin{equation*} \left\Vert S^{-1}\right\Vert _{\infty }^{-2}\left\langle \xi ,\xi \right\rangle _{E}\leq \left\langle S^{\ast }(\xi ),S^{\ast }(\xi )\right\rangle _{E}\leq \left\Vert S\right\Vert _{\infty }^{2}\left\langle \xi ,\xi \right\rangle _{E}. \end{equation*}% From these facts we conclude that $\{S\circ h_{n}\}_{n}$ is a standard frame of multipliers in $E$. \endproof% \begin{remark} Let $E$ be a countably generated Hilbert $A$-module in $M(E)$ and let $% \{h_{n}\}_{n}$ be a standard frame of multipliers in $E.$ If $\theta $ is the frame transform of $\{h_{n}\}_{n},$ then, by Theorem 3.11, $\left( \theta ^{\ast }\circ \theta \right) ^{-1}$ is an invertible positive element in $b(L(E))$ and by Lemma 4.1, $\{\left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{n}\}_{n}$ is a standard frame of multipliers in $E$. Let $\xi \in E$. Since $\{\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}% }\circ h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$ ( see the proof of Theorem 3.12), we have \begin{equation*} \left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}\left( \xi \right) =\sum\limits_{n}\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}% }\circ h_{n}\cdot \left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-% \frac{1}{2}}\circ h_{n},\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1% }{2}}\left( \xi \right) \right\rangle _{M(E)}.
\end{equation*}% From this fact and Theorem 3.7, we obtain \begin{eqnarray*} \xi &=&\left( \theta ^{\ast }\circ \theta \right) ^{\frac{1}{2}}\left( \left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}\left( \xi \right) \right) \\ &=&\left( \theta ^{\ast }\circ \theta \right) ^{\frac{1}{2}}\left( \sum\limits_{n}\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}% }\circ h_{n}\cdot \left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-% \frac{1}{2}}\circ h_{n},\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1% }{2}}\left( \xi \right) \right\rangle _{M(E)}\right) \\ &=&\left( \theta ^{\ast }\circ \theta \right) ^{\frac{1}{2}}\left( \sum\limits_{n}\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}% }\circ h_{n}\cdot \left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{n},\xi \right\rangle _{M(E)}\right) \\ &=&\left( \theta ^{\ast }\circ \theta \right) ^{\frac{1}{2}}\left( \lim\limits_{n}\sum\limits_{k=1}^{n}\left( \theta ^{\ast }\circ \theta \right) ^{-\frac{1}{2}}\circ h_{k}\cdot \left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{k},\xi \right\rangle _{M(E)}\right) \\ &=&\lim\limits_{n}\sum\limits_{k=1}^{n}h_{k}\cdot \left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{k},\xi \right\rangle _{M(E)}. \end{eqnarray*}% This shows that $\sum\limits_{n}h_{n}\cdot \left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{n},\xi \right\rangle _{M(E)}$ converges to $\xi .$ \end{remark} \begin{definition} Let $E$ be a countably generated Hilbert $A$-module in $M(E)$, let $% \{h_{n}\}_{n}$ be a standard frame of multipliers in $E$. We say that a standard frame of multipliers $\{t_{n}\}_{n}$ in $E$ is a dual frame of multipliers of $\{h_{n}\}_{n}$ if $\sum\limits_{n}h_{n}\cdot \left\langle t_{n},\xi \right\rangle _{M(E)}$ converges in $E$ for all $\xi \in E$ and moreover, \begin{equation*} \xi =\sum\limits_{n}h_{n}\cdot \left\langle t_{n},\xi \right\rangle _{M(E)}. 
\end{equation*}% The standard frame of multipliers $\{\left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{n}\}_{n}$ in $E$, where $\theta $ is the frame transform of $\{h_{n}\}_{n}$, is called the canonical dual frame of multipliers of $\{h_{n}\}_{n}$ and $\left( \theta ^{\ast }\circ \theta \right) ^{-1}$ is called the frame operator of the standard frame of multipliers $\{h_{n}\}_{n}$. \end{definition} \begin{theorem} ( Reconstruction formula) Let $E$ be a countably generated Hilbert $A$% -module in $M(E)$ and let $\{h_{n}\}_{n}$ be a standard frame of multipliers in $E$. Then there is a unique invertible positive element $S\in b(L\left( E\right) )$ such that $\sum\limits_{n}h_{n}\cdot \left\langle S\circ h_{n},\xi \right\rangle _{M(E)}$ converges in $E$ and moreover, \begin{equation*} \xi =\sum\limits_{n}h_{n}\cdot \left\langle S\circ h_{n},\xi \right\rangle _{M(E)} \end{equation*}% for all $\xi \in E.$ \end{theorem} \proof By Theorem 3.12 there is an invertible element $T\in b(L(E))$ such that $% \{T\circ h_{n}\}_{n}$ is a standard normalised frame of multipliers in $E$. By Theorem 3.7, $\sum\limits_{n}T\circ h_{n}\cdot \left\langle T\circ h_{n},\xi \right\rangle _{M(E)}$ converges in $E$ for each $\xi \in E$, and moreover, \begin{equation*} \xi =\sum\limits_{n}T\circ h_{n}\cdot \left\langle T\circ h_{n},\xi \right\rangle _{M(E)}.
\end{equation*}% Then \begin{eqnarray*} &&\overline{p}_{E}\left( \xi -\sum\limits_{k=1}^{n}h_{k}\cdot \left\langle \left( T^{\ast }\circ T\right) \circ h_{k},\xi \right\rangle _{M(E)}\right) \\ &=&\overline{p}_{E}\left( \xi -\sum\limits_{k=1}^{n}h_{k}\cdot \left\langle T\circ h_{k},T(\xi )\right\rangle _{M(E)}\right) \\ &=&\overline{p}_{E}\left( T^{-1}\left( T(\xi )-\sum\limits_{k=1}^{n}T\circ h_{k}\cdot \left\langle T\circ h_{k},T(\xi )\right\rangle _{M(E)}\right) \right) \\ &\leq &\widetilde{p}_{L(E)}(T^{-1})\overline{p}_{E}\left( T(\xi )-\sum\limits_{k=1}^{n}T\circ h_{k}\cdot \left\langle T\circ h_{k},T(\xi )\right\rangle _{M(E)}\right) \\ &\leq &\left\Vert T^{-1}\right\Vert _{\infty }\overline{p}_{E}\left( T(\xi )-\sum\limits_{k=1}^{n}T\circ h_{k}\cdot \left\langle T\circ h_{k},T(\xi )\right\rangle _{M(E)}\right) \end{eqnarray*}% for all $\xi \in E$, for all $p\in S(A)$ and for every positive integer $n$. From this fact and taking into account that \begin{equation*} T(\xi )=\sum\limits_{n}T\circ h_{n}\cdot \left\langle T\circ h_{n},T(\xi )\right\rangle _{M(E)} \end{equation*}% for all $\xi \in E$, we deduce that $\sum\limits_{n}h_{n}\cdot \left\langle \left( T^{\ast }\circ T\right) \circ h_{n},\xi \right\rangle _{M(E)}$ converges in $E$, and moreover, \begin{equation*} \xi =\sum\limits_{n}h_{n}\cdot \left\langle \left( T^{\ast }\circ T\right) \circ h_{n},\xi \right\rangle _{M(E)}.
\end{equation*}% Let $S=T^{\ast }\circ T.$ Clearly, $S$ is a positive invertible element in $% b(L(E)).$ Moreover, $\sum\limits_{n}h_{n}\cdot \left\langle S\circ h_{n},\xi \right\rangle _{M(E)}$ converges in $E$, and $\xi =\sum\limits_{n}h_{n}\cdot \left\langle S\circ h_{n},\xi \right\rangle _{M(E)}$ for all $\xi \in E.$ To show that $S$ is unique with the above properties, suppose that there are two positive invertible elements $S_{1}$ and $S_{2}$ in $b(L(E))$ such that for each $\xi \in E$, $\sum\limits_{n}h_{n}\cdot \left\langle S_{1}\circ h_{n},\xi \right\rangle _{M(E)}$ and $\sum\limits_{n}h_{n}\cdot \left\langle S_{2}\circ h_{n},\xi \right\rangle _{M(E)}$ converge in $E$, and \begin{equation*} \xi =\sum\limits_{n}h_{n}\cdot \left\langle S_{1}\circ h_{n},\xi \right\rangle _{M(E)}=\sum\limits_{n}h_{n}\cdot \left\langle S_{2}\circ h_{n},\xi \right\rangle _{M(E)}. \end{equation*}% Then \begin{eqnarray*} \xi &=&\sum\limits_{n}h_{n}\cdot \left\langle S_{1}\circ h_{n},\xi \right\rangle _{M(E)}=\sum\limits_{n}h_{n}\cdot \left\langle S_{1}\circ S_{2}^{-1}\circ S_{2}\circ h_{n},\xi \right\rangle _{M(E)} \\ &=&\sum\limits_{n}h_{n}\cdot \left\langle S_{2}\circ h_{n},\left( S_{2}^{-1}\circ S_{1}\right) \left( \xi \right) \right\rangle _{M(E)}=\left( S_{2}^{-1}\circ S_{1}\right) \left( \xi \right) \end{eqnarray*}% for all $\xi \in E$. This implies that $S_{1}=S_{2}$ and the uniqueness is proved. \endproof% \begin{remark} The dual frame of multipliers of a given standard frame of multipliers is unique. \end{remark} \begin{proposition} Let $E$ be a countably generated Hilbert $A$-module in $M(E)$ and let $% \{h_{n}\}_{n}$ and $\{t_{n}\}_{n}$ be two standard frames of multipliers in $% E$ with the frame transforms $\theta _{1}$ and $\theta _{2}$.
Then these frames of multipliers are duals to each other if and only if $\theta _{1}^{\ast }\circ \theta _{2}=$id$_{E}.$ \end{proposition} \proof% First we suppose that the standard frames of multipliers $\{h_{n}\}_{n}$ and $\{t_{n}\}_{n}$ are duals to each other. Then \begin{eqnarray*} \left\langle \left( \theta _{1}^{\ast }\circ \theta _{2}\right) \left( \xi \right) ,\eta \right\rangle _{E} &=&\left\langle \theta _{2}\left( \xi \right) ,\theta _{1}\left( \eta \right) \right\rangle _{H_{A}} \\ &=&\dsum\limits_{n}\left\langle \xi ,t_{n}\right\rangle _{M(E)}\left\langle h_{n},\eta \right\rangle _{M(E)}=\lim_{n}\dsum\limits_{k=1}^{n}\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle h_{k},\eta \right\rangle _{M(E)} \\ &=&\lim_{n}\left\langle \dsum\limits_{k=1}^{n}h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)},\eta \right\rangle _{M(E)} \\ &=&\left\langle \lim_{n}\left( \dsum\limits_{k=1}^{n}h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)}\right) ,\eta \right\rangle _{M(E)} \\ &=&\left\langle \xi ,\eta \right\rangle _{E} \end{eqnarray*}% for all $\xi ,\eta \in E,$ and so $\theta _{1}^{\ast }\circ \theta _{2}=$id$% _{E}$. Conversely, suppose that $\theta _{1}^{\ast }\circ \theta _{2}=$id$_{E}$. 
Let $\xi \in E$. From $\overline{p}_{E}\left( \dsum\limits_{k=n+1}^{\infty }h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)}\right) $ $=\sup \{p\left( \left\langle \dsum\limits_{k=n+1}^{\infty }h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)},\eta \right\rangle _{E}\right) ;\eta \in E,\overline{p}_{E}\left( \eta \right) \leq 1\}$ $=\sup \{p\left( \dsum\limits_{k=n+1}^{\infty }\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle h_{k},\eta \right\rangle _{M(E)}\right) ;\eta \in E,\overline{p}_{E}\left( \eta \right) \leq 1\}$ $\leq \sup \{p\left( \dsum\limits_{k=n+1}^{\infty }\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle t_{k},\xi \right\rangle _{M(E)}\right) ^{\frac{1}{2}}p\left( \dsum\limits_{k=n+1}^{\infty }\left\langle \eta ,h_{k}\right\rangle _{M(E)}\left\langle h_{k},\eta \right\rangle _{M(E)}\right) ^{\frac{1}{2}};\ \ $ $\eta \in E,\overline{p}_{E}\left( \eta \right) \leq 1\}$ $\leq p\left( \dsum\limits_{k=n+1}^{\infty }\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle t_{k},\xi \right\rangle _{M(E)}\right) ^{\frac{1}{2}}\sup \{p\left( \dsum\limits_{n}\left\langle \eta ,h_{n}\right\rangle _{M(E)}\left\langle h_{n},\eta \right\rangle _{M(E)}\right) ^{\frac{1}{2}};$ $\ \eta \in E,\overline{p}_{E}\left( \eta \right) \leq 1\}$ $\leq p\left( \dsum\limits_{k=n+1}^{\infty }\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle t_{k},\xi \right\rangle _{M(E)}\right) ^{\frac{1}{2}}\sup \{p\left( D_{1}\left\langle \eta ,\eta \right\rangle _{E}\right) ^{\frac{1}{2}};\eta \in E,\overline{p}_{E}\left( \eta \right) \leq 1\}$ $\leq D_{1}p\left( \dsum\limits_{k=n+1}^{\infty }\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle t_{k},\xi \right\rangle _{M(E)}\right) ^{\frac{1}{2}}$ for all $p\in S(A)$, and taking into account that $\dsum\limits_{n}\left% \langle \xi ,t_{n}\right\rangle _{M(E)}\left\langle t_{n},\xi \right\rangle _{M(E)}$ converges in $A$, we deduce that $\dsum\limits_{n}h_{n}\cdot \left\langle t_{n},\xi \right\rangle
_{M(E)}$ converges in $E$. Since \begin{equation*} \left\langle \xi ,\eta \right\rangle _{E}=\left\langle \left( \theta _{1}^{\ast }\circ \theta _{2}\right) \left( \xi \right) ,\eta \right\rangle _{E}=\dsum\limits_{n}\left\langle \xi ,t_{n}\right\rangle _{M(E)}\left\langle h_{n},\eta \right\rangle _{M(E)} \end{equation*}% for all $\eta \in E$, we have \begin{eqnarray*} &&p\left( \left\langle \xi -\dsum\limits_{k=1}^{n}h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)},\eta \right\rangle _{E}\right) \\ &=&p\left( \left\langle \xi ,\eta \right\rangle _{E}-\left\langle \dsum\limits_{k=1}^{n}h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)},\eta \right\rangle _{E}\right) \\ &=&p\left( \dsum\limits_{n}\left\langle \xi ,t_{n}\right\rangle _{M(E)}\left\langle h_{n},\eta \right\rangle _{M(E)}-\left\langle \dsum\limits_{k=1}^{n}h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)},\eta \right\rangle _{E}\right) \\ &=&p\left( \dsum\limits_{n}\left\langle \xi ,t_{n}\right\rangle _{M(E)}\left\langle h_{n},\eta \right\rangle _{M(E)}-\dsum\limits_{k=1}^{n}\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle h_{k},\eta \right\rangle _{M(E)}\right) \\ &=&p\left( \dsum\limits_{k=n+1}^{\infty }\left\langle \xi ,t_{k}\right\rangle _{M(E)}\left\langle h_{k},\eta \right\rangle _{M(E)}\right) \end{eqnarray*}% for all positive integers $n$, for all $\eta \in E$ and for all $p\in S(A)$, and since $\dsum\limits_{n}\left\langle \xi ,t_{n}\right\rangle _{M(E)}$ $% \left\langle h_{n},\eta \right\rangle _{M(E)}$ converges in $A$, \begin{equation*} \lim\limits_{n}\left\langle \xi -\dsum\limits_{k=1}^{n}h_{k}\cdot \left\langle t_{k},\xi \right\rangle _{M(E)},\eta \right\rangle _{E}=0 \end{equation*}% for all $\eta \in E.$ From these facts, we deduce that $\dsum% \limits_{n}h_{n}\cdot \left\langle t_{n},\xi \right\rangle _{M(E)}$ converges to $\xi $ and so the standard frames of multipliers $\{h_{n}\}_{n}$ and $\{t_{n}\}_{n}$ are duals to each other.% \endproof% \begin{corollary} Let
$E$ be a countably generated Hilbert $A$-module in $M(E)$. The canonical bi-dual frame of a standard frame of multipliers $\{h_{n}\}_{n}$ in $E$ is the frame itself. \end{corollary} \proof% Let $\xi \in E$. If $\theta ^{\prime }$ is the frame transform of the dual frame of multipliers $\{\left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{n}\}_{n},$ then \begin{eqnarray*} \theta ^{\prime }\left( \xi \right) &=&\left( \left\langle \left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{n},\xi \right\rangle \right) _{n}=\left( \left\langle h_{n},\left( \theta ^{\ast }\circ \theta \right) ^{-1}\left( \xi \right) \right\rangle \right) _{n} \\ &=&\theta \left( \left( \theta ^{\ast }\circ \theta \right) ^{-1}\left( \xi \right) \right) =\left( \theta \circ \left( \theta ^{\ast }\circ \theta \right) ^{-1}\right) \left( \xi \right) . \end{eqnarray*}% This shows that $\theta ^{\prime }=\theta \circ \left( \theta ^{\ast }\circ \theta \right) ^{-1}$. A simple calculation shows that \begin{equation*} \left( \left( \theta ^{\prime }\right) ^{\ast }\circ \theta ^{\prime }\right) ^{-1}=\theta ^{\ast }\circ \theta \end{equation*}% and then \begin{equation*} \left( \left( \theta ^{\prime }\right) ^{\ast }\circ \theta ^{\prime }\right) ^{-1}\circ \left( \theta ^{\ast }\circ \theta \right) ^{-1}\circ h_{n}=h_{n} \end{equation*}% for all positive integers $n$. Moreover, \begin{equation*} \left( \theta ^{\prime }\right) ^{\ast }\circ \theta =\text{id}_{E}\text{.} \end{equation*}% From these facts and Proposition 4.6, we conclude that the canonical bi-dual frame of multipliers of the standard frame of multipliers $\{h_{n}\}_{n}$ in $E$ is the frame itself.% \endproof%
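The reconstruction and bi-duality identities above admit a quick finite-dimensional sanity check. The sketch below takes $A=\mathbb{R}$ and $E=\mathbb{R}^{d}$, so that multipliers are ordinary vectors, the frame operator $\theta^{\ast}\circ\theta$ is a matrix, and the canonical dual frame is $\{(\theta^{\ast}\circ\theta)^{-1}h_{k}\}_{k}$; all names and the random setup are illustrative only.

```python
import numpy as np

# Finite-dimensional sanity check (A = R, E = R^d) of the canonical
# dual-frame reconstruction: xi = sum_k h_k <S^{-1} h_k, xi> with
# frame operator S = theta^T theta.
rng = np.random.default_rng(0)
d, n = 4, 7
H = rng.standard_normal((n, d))        # row k is the frame vector h_k
S = H.T @ H                            # frame operator theta* theta (d x d)
dual = H @ np.linalg.inv(S)            # row k is the canonical dual S^{-1} h_k

xi = rng.standard_normal(d)
recon = H.T @ (dual @ xi)              # sum_k h_k * <dual_k, xi>
assert np.allclose(recon, xi)

# Bi-duality: the frame operator of the dual frame is S^{-1}, so taking
# the canonical dual twice returns the original frame.
S_dual = dual.T @ dual                 # equals S^{-1}
bidual = dual @ np.linalg.inv(S_dual)
assert np.allclose(bidual, H)
```

The second pair of lines mirrors the corollary: applying the canonical-dual construction to the dual frame recovers $\{h_{k}\}_{k}$.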
\section{Conclusions} Humans are the bottleneck when proofreading segmentation data, and minimizing this manual labor is the goal. Our classifiers suggest potential errors and corrections better than existing methods. This reduces the time spent finding and correcting errors. Our experiments also show that automatic proofreading has the potential to further reduce human involvement. This will be the target of future research. We provide our framework and data as free and open research at \url{http://rhoana.org/guidedproofreading/}. \section*{Acknowledgements} We would like to thank Stephen Plaza for detailed explanations of focused proofreading and Toufiq Parag for the configuration of the NeuroProof classifier. \section{Evaluation} \label{sec:evaluation} We evaluate guided proofreading on multiple real-world connectomics datasets of different species. All datasets were acquired using either serial section electron microscopy (ssEM) or serial section transmission electron microscopy (ssTEM). We perform experiments with the selection oracle, with automatic selection with threshold, and in the forced choice setting via a between-subjects user study with both novice and expert participants. \subsection{Datasets} \paragraph{L. Cylinder.} We use the left part of the 3-cylinder mouse cortex volume of Kasthuri \etal~\cite{kasthuri2015saturated} ($2048\times2048\times300$ voxels). The tissue is dense mammalian neuropil from layers 4 and 5 of the S1 primary somatosensory cortex, acquired using ssEM. The dataset resolution is $3\times3\times30~\text{nm}^3\text{/voxel}$. Image data and a manually-labeled expert `ground truth' segmentation are publicly available\footnote{\scriptsize{\url{https://software.rc.fas.harvard.edu/lichtman/vast/}}}. \paragraph{AC4 subvolume.} This is part of a publicly-available dataset of mouse cortex that was published for the ISBI 2013 challenge ``SNEMI3D: 3D Segmentation of neurites in EM images''.
The dataset resolution is $6\times6\times30~\text{nm}^3\text{/voxel}$ and it was acquired using ssEM. Haehn~\etal~\cite{haehn_dojo_2014} found the most representative subvolume ($400\times400\times10$ voxels) of this dataset with respect to the distribution of object sizes, and used it for their interactive connectomics proofreading tool experiments. We use their publicly available data, labeled ground truth, and study findings\footnote{\scriptsize{\url{http://rhoana.org/dojo/}}}. \paragraph{Automatic segmentation pipeline.} We use a state-of-the-art method to create a dense automatic segmentation of the data. Membrane probabilities are generated using a CNN based on the U-net architecture (trained exclusively on different data than the GP classifiers)~\cite{RonnebergerFB15}. The probabilities are used to seed watershed and generate an oversegmentation using superpixels. Agglomeration is then performed by the GALA active learning classifier with a fixed agglomeration threshold of 0.3~\cite{nunez2014graph}. We describe this approach in the supplemental material. \subsection{Classifier Training} We train our split error classifier on the L. Cylinder dataset. We use the first 250 sections of the data for training and validation. For $n$-fold cross-validation, we select one quarter of this data and re-select after each epoch. We minimize cross-entropy loss and update using stochastic gradient descent with Nesterov momentum~\cite{nesterov}. To generate training data, we identify correct regions and split errors in the automatic segmentation by intersection with ground truth regions. This is required since extracellular space is not labeled in the ground truth, but is in our dense automatic segmentation. From these regions, we sample 112,760 correct and 112,760 split error patches, each with four channels (Sec.~\ref{sec:spliterrordetection}). The patches are normalized.
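The ground-truth intersection test can be sketched as follows. This is a hedged illustration: the majority-overlap rule and all names are assumptions for the sketch, not taken from the paper's pipeline. Each automatic segment is mapped to the ground-truth segment it overlaps most; a boundary between two automatic segments is labeled a split error exactly when both map to the same ground-truth segment.

```python
import numpy as np

def majority_gt_label(auto_seg, gt_seg, seg_id):
    """Ground-truth label with the largest overlap with segment `seg_id`."""
    gt_under = gt_seg[auto_seg == seg_id]
    values, counts = np.unique(gt_under, return_counts=True)
    return values[np.argmax(counts)]

def is_split_error(auto_seg, gt_seg, id_a, id_b):
    """True iff the boundary between `id_a` and `id_b` is a split error,
    i.e. both automatic segments cover the same ground-truth segment."""
    return (majority_gt_label(auto_seg, gt_seg, id_a) ==
            majority_gt_label(auto_seg, gt_seg, id_b))

# Toy example: one ground-truth segment oversegmented into labels 1 and 2.
gt = np.ones((4, 4), dtype=int)
auto = np.ones((4, 4), dtype=int)
auto[:, 2:] = 2
assert is_split_error(auto, gt, 1, 2)   # the boundary 1|2 is a split error
```

Boundaries that pass this test become split-error patches; boundaries between segments mapping to different ground-truth segments become correct patches.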
To augment our training data, we rotate patches within each mini-batch by $k*90$ degrees with randomly chosen integer $k$. The training parameters such as filter size, number of filters, learning rate, and momentum are the result of intuition and experience, studying recent machine learning research, and a limited brute force parameter search (see supplementary material). Table~\ref{tab:parameters} lists the final parameters. Our CNN configuration results in 171,474 learnable parameters. We assume that training has converged if the validation loss does not decrease for 50 epochs. We test the CNN by generating a balanced set of 8,780 correct and 8,780 error patches using unseen data of the left cylinder dataset. \begin{table}[t] \caption{Training parameters, cost, and results of our guided proofreading classifier versus focused proofreading by Plaza~\cite{focused_proofreading}. Both methods were trained on the same mouse brain dataset using the same hardware (Tesla X GPU).} \small{ \begin{tabular}{ll} \toprule \begin{tabular}{l} \textbf{Guided Proofreading} \\ \midrule \emph{Parameters} \\ \midrule Filter size: 3x3 \\ No. Filters 1: 64 \\ No. Filters 2--4: 48 \\ Dense units: 512 \\ Learning rate: 0.03--0.00001\\ Momentum: 0.9--0.999\\Mini-Batchsize: 128 \\ \end{tabular} & \begin{tabular}{l} \vspace{0.2mm} \\ \midrule \emph{Results---Test Set} \\ \midrule Cost [m]: 383 \\ Val. loss: 0.0845 \\ Val. acc.: 0.969 \\ Test. acc.: 0.94 \\ Prec./Recall: 0.94/0.94 \\ F1 Score: 0.94 \\ ~ \\ \end{tabular} \end{tabular} \vspace{0.5mm} \begin{tabular}{ll} \toprule \begin{tabular}{l} \textbf{Focused Proofreading}\\ \midrule \emph{Parameters} \\ \midrule Iterations: 3 \\ Learning strategy: 2\\ Mito agglomeration: Off~~~~~~ \\ Threshold: 0.0\\~\\ \end{tabular} & \begin{tabular}{l} \vspace{0.2mm} \\ \midrule \emph{Results---Test Set} \\ \midrule Cost [m]: 217 \\ Val. acc.: 0.99 \\ Test. 
acc.: 0.68 \\ Prec./Recall: 0.58/0.56 \\ F1 Score: 0.54 \\ \end{tabular} \end{tabular} \hrule } \label{tab:parameters} \end{table} \subsection{Baseline Comparisons} \paragraph{Interactive proofreading.} Haehn~\etal's comparison of interactive proofreading tools concludes that novices perform best when using Dojo~\cite{haehn_dojo_2014}. We studied the publicly available findings of their user study and use the data of all Dojo users in aggregate as a baseline. \paragraph{Computer-aided proofreading.} We compare against focused proofreading by Plaza~\cite{focused_proofreading}. Focused proofreading performs graph analysis on the output from NeuroProof~\cite{neuroproof2013}, instead of our GALA approach. Therefore, for training our focused proofreading baseline, we replace GALA in our automatic segmentation pipeline with NeuroProof but use exactly the same input data including membrane probabilities. We obtained the best possible parameters for NeuroProof by consulting the developers (Tab.~\ref{tab:parameters}). Rather than using Raveler as the frontend, we use our own interface (Fig.~\ref{fig:ui}) to compare only the classifier from Plaza's approach. \subsection{Experiments} \paragraph{Selection oracle evaluation.} We use the selection oracle as described in Sec.~\ref{sec:errorcorrection} for the decision whether to accept or reject a correction. The purpose of this experiment is to investigate how many corrections are required to reach the best possible outcome. This is a direct comparison of the guided proofreading and focused proofreading classifiers but can only be performed if ground truth data is available. We perform this experiment on all datasets listed above. \paragraph{Automatic method evaluation.} For this experiment, we accept all suggested corrections if the rankings are above a configured threshold $p_t=.95$ (Sec.~\ref{sec:errorcorrection}). 
We observed this value to be stable in previous experiments with the guided proofreading classifiers (see supplementary material). We compare against the focused proofreading classifier and perform this experiment on all reported datasets. \paragraph{Forced choice user experiments.} We conducted a quantitative user study to evaluate the forced choice setting (Sec.~\ref{sec:errorcorrection}). In particular, we evaluated how participants perform while correcting an automatic segmentation using the guided proofreading and focused proofreading tools. We designed a single-factor between-subjects experiment with the factor \textit{proofreading classifier}, and asked participants to proofread the AC4 subvolume in a fixed time frame of 30 minutes. To enable comparison against the interactive proofreading study by~Haehn~\etal~\cite{haehn_dojo_2014}, we use the exact same study conditions, dataset, and time limit. The experiment was performed on a machine with standard off-the-shelf hardware. All participants received monetary compensation. \paragraph{Novice study design.} We recruited participants with no experience in electron microscopy data or proofreading through flyers, mailing lists, and personal interaction. Based on sample size calculation theory, we estimated that the study needed ten users per proofreading tool including four potential dropouts~\cite{samplesize1,samplesize2}. All twenty participants completed the study ($N=20$, 10 female; 19--65 years old, $M$=30). Each study session began with a five-minute standardized explanation of the task. Then, the participants were asked to perform a 3-minute proofreading task on separate but representative data using focused proofreading. The participants were allowed to ask questions during this time. The classifier did not matter in this case since the user interface was the same.
The experimenter then loaded the AC4 subvolume with initial pre-computed classifications by either guided proofreading or focused proofreading depending on assignment. After 30 minutes, the participants completed the raw NASA-TLX standard questions for task evaluation~\cite{NASATLX}. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{ac4trails_combined.pdf} \caption{Performance comparison of Plaza's focused proofreading (red) and our guided proofreading (blue) on the AC4 subvolume. All measurements are reported as median VI, the lower the better. We compare different approaches of accepting or rejecting corrections for each method: automatic selection with threshold (green line), forced choice by ten novice users, forced choice by two domain experts, and the selection oracle. In all cases, guided proofreading yields better results with fewer corrections.} \label{fig:ac4trails} \end{figure*} \paragraph{Expert study design.} We recruited 4 domain experts to evaluate the performance of both guided and focused proofreading. We obtained study consent and randomly assigned 2 experts to proofread using each classifier. The experts performed the 3 minute test run on different data prior to proofreading for 30 minutes. After the task ended, the experts were asked to complete the raw NASA-TLX questionnaire. \paragraph{Evaluation metric.} We measure the similarity between proofread segmentations and the manual `ground truth' labelings using \textit{variation of information} (VI). VI is a measure of the distance between two clusterings, closely related to mutual information (the lower, the better). \section{Introduction} In connectomics, neuroscientists annotate neurons and their connectivity within 3D volumes to gain insight into the functional structure of the brain. Rapid progress in automatic sample preparation and electron microscopy (EM) acquisition techniques has made it possible to image large volumes of brain tissue at nanometer resolution. 
With a voxel size of $4\times4\times40~\text{nm}^3$, a cubic millimeter volume is one petabyte of data. With so much data, manual annotation is not feasible, and automatic annotation methods are needed~\cite{jain2010,Liu2014,GALA2014,kaynig2015large}. Automatic annotation by segmentation and classification of brain tissue is challenging~\cite{isbi_challenge} and all available methods make errors, so the results must be \emph{proofread} by humans. This crucial task serves two purposes: 1) to correct errors in the segmentation, and 2) to increase the body of labeled data from which to train better automatic segmentation methods. Recent proofreading tools provide intuitive user interfaces to browse segmentation data in 2D and 3D and to identify and manually correct errors~\cite{markus_proofreading,raveler,mojo2,haehn_dojo_2014}. Many kinds of errors exist, such as inaccurate boundaries, but the most common are \emph{split errors}, where a single segment is labeled as two, and \emph{merge errors}, where two segments are labeled as one (Fig.~\ref{fig:merge_and_slit_errors}). With user interaction, split errors can be joined, and the missing boundary in a merge error can be defined with manually-seeded watersheds~\cite{haehn_dojo_2014}. However, the visual inspection to find errors takes the majority of the time, even with semi-automatic correction tools~\cite{proofreading_bottleneck}. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{merge_and_split_errors.pdf} \end{center} \vspace{-4mm} \caption{The most common proofreading corrections are fixing split errors (red arrows) and merge errors (yellow arrow). A fixed segmentation matches the cell borders.} \label{fig:merge_and_slit_errors} \end{figure} Our goal is to automatically detect potential split and merge errors to reduce visual inspection time. Further, to reduce correction time, we propose corrections to the user to accept or reject. We call this process \textit{guided proofreading}. 
We train a classifier for split error detection with a convolutional neural network (CNN). This takes as input patches of membrane segmentation probabilities, cell segmentation masks, and boundary masks, and outputs a split-probability score. As we must process large data, this classifier only operates on cell boundaries, which reduces computation over methods that analyze every pixel. For merge errors, we invert and reuse the split classification network, and ask it to rate a set of generated boundaries that hypothesize a split. Possible erroneous regions are sorted by their score, and a candidate correction is generated for each region. Then, a user works through this list of regions and corrections. In a forced choice setting, the user either selects a correction or skips it to advance to the next region. In an automatic setting, errors with a high probability are automatically corrected first, given an appropriate probability threshold, after which the user would take over. Finally, to test the limits of performance, we create an oracle which only accepts corrections that improve the segmentation, based on knowledge of the ground truth. This is guided proofreading with a perfect user. We evaluate these methods on multiple connectomics datasets. For the forced choice setting, we perform a quantitative user study with 20 novice users who have no previous experience of proofreading EM data. We ask participants to proofread a small segmentation volume in a fixed time frame. In a between-subjects design, we compare guided proofreading to the semi-automatic \textit{focused proofreading} approach by Plaza~\cite{focused_proofreading}. In addition, we compare against the manual interactive proofreading tool \textit{Dojo} by Haehn~\etal~\cite{haehn_dojo_2014}. We also asked four domain experts to use guided proofreading and focused proofreading for comparison. This paper makes the following contributions. 
First, we present a CNN-based boundary classifier for split errors, plus a merge error classifier that inverts the split error classifier. This is used to propose merge error corrections, removing the need to manually draw the missing edge. These classifiers perform well without much training data, which is expensive to collect for connectomics data. Second, we develop a guided proofreading approach to correcting segmentation volumes, and an assessment scenario comparing forced-choice interaction with automatic and oracle proofreading. Third, we present the results of a quantitative user study assessing guided proofreading. Our method is able to reduce segmentation error faster than state-of-the-art semi-automatic tools for both novice and expert users. Guided proofreading is applicable to all existing automatic segmentation methods that produce a label map. As such, we believe that our approach is a promising direction to proofread segmentations more efficiently and better tackle large volumes of connectomics imagery. \section{Method} \label{sec:methods} \subsection{Split Error Detection} \label{sec:spliterrordetection} We build a split error classifier with output $p$ using a CNN to check whether an edge within an existing automatic segmentation is valid ($p=0$) or not ($p=1$). Rather than analyzing every input pixel, the classifier operates only on segment boundaries, which requires less pixel context and is faster. In contrast to Bogovic \etal~\cite{BogovicHJ13}, we work with 2D slices rather than 3D volumes. This enables proofreading prior to, or in parallel with, a computationally expensive stitching and 3D alignment of individual EM images. \paragraph{CNN Architecture.} Boundary split error detection is a binary classification task since the boundary is either correct or erroneous. However, in reality, the score $p$ is between 0 and 1.
In connectomics, classification complexity arises from hundreds of different cell types, rather than from the classification decision itself. Intuitively, this yields a wider architecture with more filters rather than a deeper architecture with more layers. We explored different architectures---including residual networks~\cite{resnet}---with brute force parameter searches and precision and recall comparisons (see supplementary materials). Our final CNN configuration for split error detection has four convolutional layers, each followed by max pooling with dropout regularization to prevent overfitting due to limited training data (Fig.~\ref{fig:architecture}). \paragraph{Classifier Inputs.} To train the CNN, we consider boundary context in the decision making process via a $75\times75$ patch over the center of an existing boundary. This size covers approximately $80\%$ of all boundaries in the 6~nm Mouse S1 AC3 Open Connectome Project dataset. If the boundary is not fully covered, we sample up to 10 non-overlapping patches along the boundary, and average the resulting scores weighted by the boundary length coverage per patch. Similar to Bogovic~\etal~\cite{BogovicHJ13}, we use grayscale image data, corresponding boundary probabilities, and a single binary mask combining the two neighboring labels as inputs to our CNN. However, we observed that the boundary probability information generated from EM images is often misleading due to noise or artifacts in the data. This can result in merge errors within the automatic segmentation. To better direct our classifier to train on the true boundary, we extract the border between two segments. Then, we dilate this border by 5 pixels to consider slight edge ambiguities as well as cover extra-cellular space, and use this binary mask as an additional input. This creates a stacked 4-channel input patch. 
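A minimal sketch of assembling such a 4-channel input is given below; the function name and toy data are illustrative, while the channel contents (grayscale image, membrane probability, merged binary label mask, and the shared border dilated by 5 pixels) follow the description above.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def four_channel_patch(image, membrane_prob, seg, id_a, id_b):
    """Stack the 4 input channels for the boundary between two segments."""
    mask_a = seg == id_a
    mask_b = seg == id_b
    merged = (mask_a | mask_b).astype(np.float32)
    # border: pixels of one segment touching the other, then 5-pixel dilation
    border = (binary_dilation(mask_a) & mask_b) | (binary_dilation(mask_b) & mask_a)
    border = binary_dilation(border, iterations=5).astype(np.float32)
    return np.stack([image, membrane_prob, merged, border])  # shape (4, H, W)

# Toy 75x75 patch with a vertical boundary between labels 1 and 2.
seg = np.ones((75, 75), dtype=int)
seg[:, 40:] = 2
img = np.random.rand(75, 75).astype(np.float32)
prob = np.random.rand(75, 75).astype(np.float32)
patch = four_channel_patch(img, prob, seg, 1, 2)
assert patch.shape == (4, 75, 75)
```

The dilated border channel is nonzero only in a band around the candidate boundary, which is what directs the classifier to the edge in question.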
Fig.~\ref{fig:cnn_inputs} shows examples of correct and erroneous input patches and their corresponding automatic segmentation and ground truth. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{cnn_inputs.pdf} \caption{Example inputs for learning correct splits and split errors (candidate segmentation versus the ground truth). Image, membrane probabilities, merged binary labels, and a dilated border mask provide 4-channel input patches.} \label{fig:cnn_inputs} \end{figure} \subsection{Merge Error Detection} Identification and correction of merge errors is more challenging than finding and fixing split errors, because we must look inside segmentation regions for missing or incomplete boundaries and then propose the correct boundary. However, we can reuse the same trained CNN for this task. Similar to guided volume editing by Karimov~\etal~\cite{karimov_guided_volume_editing}, we generate potential borders within a segment. For each segmentation label, we dilate the label by 20 pixels and generate 50 potential boundaries through the region by randomly placing watershed seed points on opposite sides of the label boundary. We perform watershed on the inverted grayscale EM image. This yields 50 candidate splits. Dilation of the segment prior to watershed is motivated by our observation that the generated splits tend to attach to real membrane boundaries. These boundaries are then individually rated using our split error classifier. For this, we invert the probability score such that a correct split (previously encoded as $p=0$) is most likely a candidate for a merge error (now encoded as $p=1$). In other words, if a generated boundary is ranked as correct, it probably should be in the segmentation. Fig. \ref{fig:merge_error} illustrates this procedure. Pseudocode is available as supplemental material to promote understanding.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{merge_error_v6.pdf} \caption{Merge error detection: Potential borders are generated using inverted images by randomly placing watershed seeds (green) on the boundary of a dilated segment. The best-ranked seeds and border (both in red) result in the shown error correction.} \label{fig:merge_error} \end{figure} \subsection{Error Correction} \label{sec:errorcorrection} We combine the proposed classifiers to perform corrections of split and merge errors in automatic segmentations. For this, we first perform merge error detection for all existing segments in a dataset and store the inverted rankings $1-p$ as well as potential corrections. After that, we perform split error detection and store the ranking $p$ for all neighboring segments in the segmentation. Then, we sort the merge and split error rankings separately from highest to lowest. For error correction, first we loop through the potential merge error regions and then through the potential split error regions. During this process, each error is now subject to a yes/no decision which can be provided in different ways: \paragraph{Selection oracle.} If ground truth data is available, the selection oracle \textit{knows} whether a possible correction improves an automatic segmentation. This is realized by simply comparing the outcome of a correction using a defined measure. The oracle only accepts corrections which improve the automatic segmentation---all others are discarded. This is guided proofreading with a perfect user, and allows us to assess the upper limit of improvements. \paragraph{Automatic selection with threshold.} The decision whether to accept or reject a potential correction is made by comparing rankings to a threshold $p_t$. If the inverted score $1-p$ of a merge error is higher than a threshold $1-p_t$, the correction is accepted. Similarly, a correction is accepted for a split error if the ranking $p$ is higher than $p_t$.
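Taken literally, this acceptance rule amounts to the following sketch (function names are illustrative; $p_t=0.95$ is the value used in the automatic-selection experiments):

```python
# Accept/reject rule for automatic selection with threshold p_t.
P_T = 0.95

def accept_split_correction(p, p_t=P_T):
    # a split correction is accepted if its ranking p exceeds p_t
    return p > p_t

def accept_merge_correction(p, p_t=P_T):
    # a merge correction is accepted if its inverted score 1 - p
    # exceeds the threshold 1 - p_t
    return (1 - p) > (1 - p_t)

assert accept_split_correction(0.97)
assert not accept_split_correction(0.80)
assert accept_merge_correction(0.02)
assert not accept_merge_correction(0.99)
```

Corrections that fall below the threshold are left for a human to decide.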
Our experiments have shown that the threshold $p_t$ is the same for merge and split errors for a balanced classifier that has been trained on equal numbers of correct and error patches. \paragraph{Forced choice setting.} We present a user with the choice to accept or reject a correction. All potential split errors are shown. Inspecting all merge errors is not possible for users due to the sheer number of generated borders. Therefore, we only present merge errors that have a probability higher than $1-p_t$. \noindent \newline In all cases, a decision has to be made to advance to the next possible erroneous region. If a merge error correction is accepted, the newly found boundary is added to the segmentation data. This partially updates the merge error and split error ranking with respect to the new segment. If a split error correction is accepted, two segments are merged in the segmentation data and the disappearing segment is removed from all error rankings. Then, we perform merge error detection on the now larger segment and update the ranking. We also update the split error rankings to include all new neighbors, and re-sort. The error with the next highest ranking then forces a choice. \subsection{User Interface} We integrate guided proofreading into an existing large data connectomics workflow. The web-based system is designed with a novice-friendly user interface (Fig.~\ref{fig:ui}). We show the current labeling of a cell boundary outline and its proposed correction overlaid on EM image data. The user cannot distinguish the current labeling from the proposed correction to avoid selection bias. We also show a solid overlay of the current and the proposed labeling. In addition, we show the image without overlays to provide an unoccluded view. User interaction is simple and involves one mouse click on either the current labeling or the correction. After interaction, the next potential error is shown.
\begin{figure}[t] \includegraphics[width=\linewidth]{user_interface_split.pdf} \caption{User interface. A candidate error region is shown on the left. The user must choose between the region being a split error which needs correcting (center) or not (right). Confirming the choice advances to the next potential error.} \label{fig:ui} \end{figure} \section{Related Work} \textbf{Automatic Segmentation.} Multi-terabyte EM brain volumes require automatic segmentation~\cite{jain2010,Liu2014,NunezIglesias2013Machine,GALA2014}, but can be hard to classify due to ambiguous intercellular space: the 2013 IEEE ISBI neurites 3D segmentation challenge~\cite{isbi_challenge} showed that existing algorithms that learn from expert-segmented training data still exhibit high error rates. Many works tackle this problem. NeuroProof~\cite{neuroproof2013} decreases error rates by learning an agglomeration on over-segmentations of images based on a random forest classifier. Vazquez-Reina \etal~\cite{amelio_segmentation} consider whole EM volumes rather than a per-section approach, then solve a fusion problem with a global context. Kaynig \etal~\cite{kaynig10} propose a random forest classifier coupled with an anisotropic smoothing prior in a conditional random field framework with 3D segment fusion. Bogovic \etal~\cite{BogovicHJ13} learn 3D features unsupervised, and show that they can be better than by-hand designs. It is also possible to learn segmentation classification features directly from images with CNNs. Ronneberger \etal~\cite{RonnebergerFB15} use a contracting/expanding CNN path architecture to enable precise boundary localization with small amounts of training data. Lee \etal~\cite{lee2015recursive} recursively train very deep networks with 2D and 3D filters to detect boundaries. All these approaches make good progress; however, in general, proofreading is still required to correct errors. 
\paragraph{Interactive Proofreading.} While proofreading is very time consuming, it is fairly easy for humans to perform corrections through splitting and merging segments. One expert tool is Raveler, introduced by Chklovskii~\etal~\cite{chklovskii2010, raveler}. Raveler is used today by professional proofreaders, and it offers many parameters for tweaking the process. Similar systems exist as products or plugins to visualization systems,~\eg, V3D~\cite{proofreading_bottleneck} and AVIZO~\cite{markus_proofreading}. Recent papers have tackled the problem of proofreading massive datasets through crowdsourcing with novices~\cite{saalfeld09,anderson2011,Giuly2013DP2}. One popular platform is EyeWire, by Kim \etal~\cite{eyewire_nature}, where participants earn virtual rewards for merging over-segmented labelings to reconstruct retina cells. Between expert systems and online games sit Mojo and Dojo, by Haehn \etal~\cite{haehn_dojo_2014,Neuroblocks}, which use simple scribble interfaces for error correction. Dojo extends this to distributed proofreading via a minimalistic web-based user interface. The authors define requirements for general proofreading tools, and then evaluate the accuracy and speed of Raveler, Mojo, and Dojo through a quantitative user study (Sec.~3 and 4, ~\cite{haehn_dojo_2014}). Dojo had the highest performance. In this paper, we use Dojo as a baseline for interactive proofreading. All interactive proofreading solutions require the user to find potential errors manually, which takes the majority of the time~\cite{proofreading_bottleneck,haehn_dojo_2014}. Recent papers propose computer-aided proofreading systems to quicken this visual search task. \paragraph{Computer-aided Proofreading.} Uzunbas \etal~\cite{uzunbas} showed that potential labeling errors can be found by considering the merge tree of an automatic segmentation method. The authors track uncertainty throughout the automatic labeling by training a conditional random field. 
This segmentation technique produces uncertainty estimates, which inform potential regions for proofreading to the user. While this applies to isotropic volumes, more work is needed to apply it to the typically anisotropic connectomics dataset volumes. Karimov \etal~\cite{karimov_guided_volume_editing} propose guided volume editing, which measures the difference in histogram distributions in image data to find potential split and merge errors in the corresponding segmentation. This lets expert users correct labeled computer-tomography datasets, using several interactions per correction. To correct merge errors, the authors create a large number of superpixels within a single segment and then successively group them based on dissimilarities. We were inspired by this approach, but instead we generate single watershed boundaries to handle the intracellular variance in high-resolution EM images (Sec.~\ref{sec:methods}). Most closely related to our approach is the work of Plaza~\cite{focused_proofreading}, who proposed \textit{focused proofreading}. This method generates affinity scores by analyzing a region adjacency graph across slices, then finds the largest affinities based on a defined impact score. This yields edges of potential split errors which can be presented to the proofreader. Plaza reports that additional manual work is required to find and correct merge errors. Focused proofreading builds upon NeuroProof~\cite{neuroproof2013} as its agglomerator, and is open source with integration into Raveler. As the closest related work, we wish to use this method as a baseline to evaluate our approach (Sec.~\ref{sec:evaluation}). However, we separate the backend affinity score calculation from the Raveler expert-level front end, and present our own novice-friendly interface (Sec.~\ref{sec:evaluation}). \section{Results and Discussion} Additional plots are available as supplemental material due to limited space. 
\subsection{Classification Performance} \paragraph{L.~Cylinder.} Evaluation was performed on previously unseen sections of the mouse cortex volume from Kasthuri~\etal~\cite{kasthuri2015saturated}. We generated a dataset of 81,184 correct and 8,780 split error patches with respect to the ground truth labeling. Then, we classified each patch using both focused proofreading and guided proofreading, and compared their performance (Fig. \ref{fig:pr}). Our method exhibits higher sensitivity and lower fall-out. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{roc.pdf} \caption{Receiver Operating Characteristic curves comparing focused proofreading and guided proofreading automatic correction. We evaluate on unbalanced test sets of the AC4 subvolume (darker colors) and the L. Cylinder volume (lighter colors). Guided proofreading performs better.} \label{fig:pr} \end{figure} \paragraph{AC4 subvolume.} We generated 3,488 correct and 332 error patches (10 merge errors, 322 split errors). Guided proofreading achieves better classification performance (Fig. \ref{fig:pr}). \subsection{Forced Choice User Experiment} We performed a user study to evaluate the forced choice error correction method among novices and experts. To be comparable to Haehn~\etal's Dojo user study~\cite{haehn_dojo_2014}, participants were asked to proofread the AC4 subvolume for 30 minutes. We counted 10 merge errors and 322 split errors by computing the maximum overlap of the initial segmentation with respect to the ground truth labeling (provided in \cite{haehn_dojo_2014}). For evaluation, we measure the performance of proofreading quantitatively by comparing VI scores of segmentations. The initial automatic segmentation had median VI $=0.476$ ($SD=0.089$) and mean VI $=0.512$ ($SD=0.09$). Most novices and all experts were able to improve upon this score with both focused proofreading and guided proofreading (Fig.~\ref{fig:ac4trails}).
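The VI measure used throughout the evaluation can be computed from the label co-occurrence counts of two segmentations. The sketch below uses base-2 logarithms and flat label lists; the actual evaluation pipeline may use a different convention:

```python
from collections import Counter
from math import log2

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A|B) + H(B|A) from label co-occurrence counts of
    two flat label lists; 0 for identical segmentations, lower is
    better."""
    n = len(seg_a)
    joint = Counter(zip(seg_a, seg_b))    # co-occurrence counts
    pa, pb = Counter(seg_a), Counter(seg_b)  # marginal label counts
    vi = 0.0
    for (a, b), c in joint.items():
        p = c / n
        vi -= p * (log2(p / (pa[a] / n)) + log2(p / (pb[b] / n)))
    return vi
```

For example, splitting one segment evenly in two costs exactly one bit of VI.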
\paragraph{Novice performance.} Participants using focused proofreading were able to reduce the median VI of the automatic segmentation to $0.469$ ($SD=0.87$). On average, users viewed $423.4$ corrections and accepted $45.8$. Participants using guided proofreading were able to reduce the median VI to $0.424$ ($SD=0.037$). Here, users viewed on average $353.4$ corrections and accepted $106.9$. While three users of focused proofreading made the initial segmentation worse, all participants using guided proofreading were able to improve it. In comparison to the results of Haehn~\etal, focused and guided proofreading outperform interactive proofreading with Dojo (median VI $0.535$, $SD=0.055$). The slope of VI score per correction (Fig.~\ref{fig:ac4trails}) and average timings (Tab.~\ref{tab:correctiontimes}) show that guided proofreading enables improvements with fewer corrections than the other tools. Interestingly, novice performance decreases after approximately $300$ corrections. There are two explanations for this: user fatigue, and increasing uncertainty during error suggestion from the classifier. \begin{table}[t] \caption{Average proofreading speed for novice users of Dojo, Focused Proofreading (FP) and our Guided Proofreading (GP). Our system achieves significantly higher VI reduction per minute (7.5$\times$) over state-of-the-art FP, while being slightly slower per correction.} \resizebox{\linewidth}{!}{ \begin{tabular}{lrrrr} \toprule \makecell{Approach\\(Novice)} & \makecell{Time Per\\Correction (s)} & \makecell{VI Reduction\\Per Minute} & \makecell{Improvement} \\ \midrule \emph{Dojo} & 30.5 & -0.00200 & $-8.7\times$ \\ \emph{FP} & 4.9 & 0.00023 & $1.0\times$\\ \emph{GP} & 6.2 & 0.00173 & $7.5\times$\\ \bottomrule \end{tabular} } \label{tab:correctiontimes} \end{table} \paragraph{Expert performance.} Domain experts were able to improve the initial segmentation. With focused proofreading, the median VI of the automatic segmentation was $0.439$ ($SD=0.084$). 
With guided proofreading, the median VI was $0.396$ ($SD=0.032$, Fig.~\ref{fig:ac4boxplot}). \paragraph{Subjective responses.} We used the NASA-TLX workload index to record subjective responses. Mental, physical, and temporal demands were reported slightly higher for participants using focused proofreading. However, these differences were not statistically significant. This is unsurprising as the user interface was the same for both groups. \subsection{Automatic Error Correction} \paragraph{Selection oracle.} As expected, the selection oracle yields the best performance on all datasets. Fig.~\ref{fig:ac4trails} shows VI reduction using the selection oracle on the AC4 subvolume (initial median VI $0.476$, $SD=0.089$). With focused proofreading, the selection oracle reaches a median VI of $0.353$ ($SD=0.037$) after $1600$ corrections. With guided proofreading, the oracle reaches a minimum median VI of $0.342$ ($SD=0.03$) after $800$ corrections. Both results are close to the best possible median VI of $0.334$ (calculated by computing maximum overlap with the ground truth). The slope of the trails in Fig.~\ref{fig:ac4trails} shows that guided proofreading requires fewer corrections to reach a reasonable reduction in VI. Fig.~\ref{fig:ac4boxplot} shows the VI distribution across methods. On the L.~Cylinder dataset (initial VI $0.379$, $SD=0.118$), focused proofreading reduces the median VI to $0.298$ ($SD=0.075$) after $26,170$ corrections ($2,419$ accepted). Guided proofreading reaches the minimum median VI $0.2996$ ($SD=0.073$) after $10,000$ corrections (in total $27,491$, $2,696$ accepted). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{ac4boxplot.pdf} \caption{VI distributions of guided proofreading (GP), focused proofreading (FP) and Dojo output across slices of the AC4 subvolume, with different error correction approaches. 
The variation in performance of FP with automatic selection is $4.5\times$ higher than that of GP (as indicated by the arrow), with median VI of $1.9$ and $SD=0.496$.} \label{fig:ac4boxplot} \end{figure} \paragraph{Automatic selection with threshold.} Focused proofreading was not designed to run automatically. This explains the poor performance on the AC4 subvolume (VI of $1.9$, $SD=0.496$) and on the L.~Cylinder dataset (VI of $2.75$, $SD=0.789$). For guided proofreading, we set $p_t=0.95$ for both datasets. This reduces median VI in the AC4 subvolume to $0.398$ ($SD=0.068$). This result is comparable to expert performance. Guided proofreading also reduces VI in the L.~Cylinder data to $0.352$ ($SD=0.087$). \paragraph{Merge Error Detection.} Guided proofreading performs merge error detection prior to split error detection. The classifier found 10 merge errors in the AC4 subvolume, of which 4 reduced VI. Automatic selection with $p_t=0.95$ corrected 6 of these errors (Prec./Recall 0.87/0.80, F1-score 0.80). This was not captured in median VI, but resulted in a mean VI reduction from $0.512$ ($SD=0.09$) to $0.509$ ($SD=0.086$). The selection oracle reduced mean VI with only merge errors to $0.508$ ($SD=0.086$). In the forced choice user study, novices marked on average 1.9 merge errors for correction and reduced mean VI to $0.502$ (experts marked 2, VI $0.503$, $SD=0.086$). This shows how hard it is to identify merge errors. In 50 sections of the L.~Cylinder dataset, 151 merge errors were automatically found, of which 17 reduced VI. Automatic selection with $p_t=0.95$ corrected 6 true VI-reducing errors and 30 VI-increasing ones (Prec./Recall 0.82/0.73, F1-score 0.77), with negligible net effect on VI.
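The Prec./Recall and F1 values quoted above follow the standard definitions from true-positive, false-positive, and false-negative counts; a minimal sketch (the counts in the test are illustrative, not the study's):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from confusion counts; F1 is the
    harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```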
\section{Introduction} Heartbeat stars are eccentric ellipsoidal variables (where the majority have eccentricities \,$\gtrsim$\,0.3) that undergo strong tidal interactions at the time of periastron passage, relative to the rest of the orbit. A consequence of these tidal interactions is that (for both components) the stellar cross-section changes shape, and the temperature across the stellar surface varies due to reflection and gravity darkening. These variations appear in the light curve primarily during periastron and the morphology of the variation depends on the eccentricity (and, in some cases, reflection). Heartbeat stars were initially identified as a separate class of binary star by \citet{Thompson2012} based on their unusual light curve morphology, specifically, their prominent periastron variations. Prior to this classification, there were two obvious cases in the literature: HD 174884 and KOI-54, which were reported by \citet{Maceroni2009} and \citet{Welsh2011}, respectively. Following this, many more were identified, the majority of which were found with the \kep\ satellite: KIC\,4544587 by \citet{Hambleton2013}; 17 red giant \hb\ stars by \citet{Beck2014}; KIC\,10080943 by \citet{Schmid2015}; six by \citet{Smullen2015} and 19 by \citet{Shporer2016}\footnote{\sf http://web.gps.caltech.edu/$\sim$shporer/heartbeatstars/}. The most up-to-date and extensive list of \kep\ \hb\ stars, containing \hbno\ objects, has been published by \citet{Kirk2015} and can be found at the \kep\ eclipsing binary website\footnote{\sf http://keplerEBs.villanova.edu}. Heartbeat stars have also been identified using other missions including seven with the Optical Gravitational Lensing Experiment, OGLE \citep{Nicholls2012}; one with CoRoT by \citet{Hareter2014}; and one discovered using {\sc most} and followed up with the CHARA array \citep{Richardson2016}. 
\begin{figure*} \hfill{} \centering \includegraphics[width=\hsize]{figures/phase_8164.eps} \small\caption{Left panel: The observed \kep\ light curve of KIC\,8164262 for a single orbit of 87.45\,d during Quarter 9. Right panel: a magnified region of the phase-binned \kep\ light curve from Quarters 0--17, containing the ellipsoidal variation at phase zero and showing the prominent pulsation variations. } \label{fig:lc} \hfill{} \centering \end{figure*} Heartbeat stars are a diverse collection of objects, some of which display additional interesting characteristics such as solar-like oscillations \citep{Beck2014}, rapid apsidal motion \citep{Hambleton2013,Hambleton2016b} and tidally induced pulsations \citep{Welsh2011}. Tidally induced pulsations, initially theorised by \citet{Zahn1975}, \citet{Goldreich1989} and \citet{Witte2002}, are pulsations driven by the varying tidal forces that occur as the stars orbit each other. They were hypothesised to cause the circularisation of binary star orbits and the spin-up of the stellar components, although, until \kep, their presence had only been identified in HD\,174884 \citep{Maceroni2009}. Thanks to \kep, we now have a plethora of objects with tidally induced pulsations; approximately 20\% of the current \kep\ \hb\ star sample show tidally induced pulsations, providing us with a range of pulsation frequencies ($\lesssim$\,10\,\cd) and amplitudes ($\lesssim$\,1\,mmag) to investigate. Interestingly, the phase of each tidally induced pulsation, relative to the phase of periastron, is determined by the azimuthal order of the mode \citep{Burkart2012}. To determine the azimuthal order, however, extremely high precision is required for both the argument of periastron and the pulsation phases. Tidally induced modes are forced to oscillate at integer multiples of the orbital frequency, and their amplitude is determined by the difference between the tidal forcing frequency and a mode's natural frequency (Kumar et al. 1995).
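The steep dependence of amplitude on this frequency difference can be illustrated with a toy damped, driven harmonic oscillator; this is a pedagogical sketch with an arbitrary damping rate, not the stellar oscillation calculation of Kumar et al.:

```python
import numpy as np

def mode_amplitude(omega_force, omega_nat, gamma=1e-3):
    """Steady-state amplitude of a damped oscillator under unit forcing
    at omega_force; the response peaks sharply as the forcing frequency
    approaches the natural frequency omega_nat."""
    return 1.0 / np.sqrt((omega_nat**2 - omega_force**2)**2
                         + (gamma * omega_force)**2)

# Forcing at exact resonance versus detuned by 10 per cent:
a_res = mode_amplitude(1.0, 1.0)
a_off = mode_amplitude(0.9, 1.0)
```

For this choice of damping the resonant amplitude exceeds the detuned one by roughly two orders of magnitude, which is why near-exact resonances are required for large observed amplitudes.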
Very large amplitude tidally induced modes are unlikely because they require a near-exact resonance between tidal forcing and one of the star's natural mode frequencies. Interestingly, however, with \kep\ we have observed thirteen objects that exhibit a dominant high-amplitude pulsation mode that appears to be in resonance with the binary star orbit (these are classified as resonant modes). KIC\,8164262, the focus of this work, is an extreme case with a dominant, high-amplitude ($\sim$1\,mmag) mode. Here we propose the theory of resonance locking, hypothesised by \citet{Witte1999}, \citet{Witte2001}, \citet{Fuller2012}, \citet{Burkart2012} and \citet{Burkart2014}, as the mechanism for KIC\,8164262 maintaining its resonant mode \citep{Zahn1975,Zahn1977}. The proposed mechanism of resonance locking achieves this by locking the evolution of the binary star orbital period with the evolution of the eigenmodes as the stars evolve. Resonance locking can occur because a star's oscillation mode frequencies change due to stellar evolution. The tidal forcing frequencies also change due to orbital evolution caused by tidal dissipation, at a rate that depends on how close mode frequencies are to resonance. Near resonances, these rates of evolution can be equal to one another, allowing for a stable equilibrium in which the star and orbit evolve in tandem such that a tidally forced mode remains near resonance \citep{Witte1999}. Consequently, rather than passing through resonance, tidally induced pulsations can be locked in resonance and are more likely to be observed at large amplitudes. A more extensive theoretical discussion of resonance locking, both generally and for the case of KIC\,8164262, is described in the companion paper by Fuller et al.
(2017, submitted; hereafter F17), who provide theoretical models showing that the prominent pulsation in KIC\,8164262 aligns with the predictions of the resonance locking mechanism, given the fundamental stellar parameters (mass and radius of the primary component) we have determined. Here we present the observations and characterisation of KIC\,8164262. In \S\ref{sec:obs} we describe the ground- and space-based observations; in \S\ref{sec:binary} we outline the detailed binary model of KIC\,8164262; in \S\ref{sec:pulse} we discuss the pulsations; and in \S\ref{sec:summary} we discuss and summarise our findings. \section{Observations} \label{sec:obs} KIC\,8164262 (see Fig.\,\ref{fig:lc}) was initially identified as a heartbeat star by the Planet Hunters\footnote{\sf https://www.planethunters.org} and forwarded to the Kepler Eclipsing Binary Working Group where it was subsequently added to the Kepler Eclipsing Binary Catalog \citep{Kirk2015}. It was also previously identified in the \citet{Slawson2011} catalog, but with an erroneous period based on the prominent pulsation rather than the binary features. KIC\,8164262 was selected for detailed study primarily due to its high-amplitude, resonantly excited mode, which makes it a strong candidate for resonance locking. \subsection{\kep\ Data} The \kep\ telescope \citep{Borucki2010, Gilliland2010,Batalha2010} observed KIC\,8164262, which has a \kep\ magnitude of Kp\,=\,13.36, nearly continuously for 1470.5\,d or 17 Quarters. The observations of KIC\,8164262 were obtained using the long cadence (hereafter LC) data mode at a mean sampling interval of 29.4244\,min. All observations were obtained from the Mikulski Archive for Space Telescopes and were a part of Data Releases 21--23 \citep{DRN21, DRN22, DRN23,DRN25}.
To create a time series of the relative flux variations of KIC\,8164262, we used barycentric times as reported in the {\sc time} column and the fluxes reported in the {\sc pdcsap\_flux} data column of the \kep\ data files. These data have been processed through the \kep\ pipeline \citep{DataProcessingHandbook}, including the PDC (Presearch Data Conditioning) module of the pipeline, which uses a Bayesian, multi-scale principal component analysis to remove common features from the time series \citep{Smith2012,Stumpe2012,Stumpe2014}. We then fitted a low order ($<$4) polynomial to the time series of each Quarter individually. Our final light curve was created by dividing by this fit to yield the fractional variation in the flux. As each \kep\ pixel is 4\,$\times$\,4\,arcsec, it is possible that some contamination may occur within the photometric mask in the form of light from an additional object. The maximum reported statistical contamination value for KIC\,8164262 is 0.002 for all observed Quarters on a scale of 0 to 1, where 0 implies no contamination and 1 implies complete contamination of the CCD pixels by other stars in the aperture. The low value of 0.002 suggests that KIC\,8164262 suffers minimally from third light, if at all. To assess the flux incident on each individual pixel we used pyKE \citep{Still2012} to generate the per-pixel light curve plots and examine the flux distribution over the newly defined masks. From this we visually confirmed that no other source is contaminating our observations. \subsection{Period Determination} \label{sec:period} Period analysis was performed to identify the orbital period of the binary system. 
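The quarter-by-quarter detrending described above can be sketched as follows; an illustrative {\sc numpy} stand-in, with the convention (an assumption) that the fractional variation is expressed about zero:

```python
import numpy as np

def detrend_quarter(time, flux, order=3):
    """Fit a low-order (<4) polynomial to one Quarter of fluxes and
    divide it out, leaving the fractional flux variation."""
    coeffs = np.polyfit(time, flux, order)
    trend = np.polyval(coeffs, time)
    return flux / trend - 1.0
```

Each Quarter would be detrended independently and the resulting pieces concatenated into the final light curve.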
The analysis was done on all data (Quarters 0--17) using {\sc kephem} \citep{Prsa2011}, an interactive package with a graphical user interface that incorporates three period-finding methods: Lomb-Scargle (LS; Lomb 1976; Scargle 1982),\nocite{Lomb1976, Scargle1982} Analysis of Variance (AoV; Schwarzenberg-Czerny 1989),\nocite{Schwarzenberg-Czerny1989} and Box-fitting Least Squares (BLS; Kov{\'a}cs et al. 2002),\nocite{Kovacs2002} as implemented in the {\sc vartools} package (Hartman et al. 1998)\nocite{Hartmann1998}. Using {\sc kephem}, the period and time of the deepest minimum of the ellipsoidal variation were found interactively. The ephemeris was found to be:\newline \newline Min\Rmnum{1} = BJD 2455668.82920+87.4549(6) $\times$ E\newline \newline \noindent where Min\Rmnum{1} refers to the deepest minimum of the ellipsoidal variation (identified by eye) counted from the centre of the data set. The value in parentheses gives the 1$\sigma$ uncertainty in the last digit. The period uncertainty was obtained by applying an adaptation of the Period Error Calculator algorithm of \citet{Mighell2013}, as specified by \citet{Kirk2015}. \subsection{Ground-Based Spectroscopy} We obtained three sets of spectra: fifteen spectra from the HIRES spectrograph on the Keck telescope, Mauna Kea; two spectra using the Tull spectrograph on the 2.7-m telescope at McDonald Observatory; and two spectra using the Echelle Spectrograph on the 4-m Mayall telescope, Kitt Peak National Observatory (KPNO). The object was determined to be a single-lined spectroscopic binary system, as described in \S\ref{sec:todcor}. The radial velocities derived from the three sets of observations are reported in Table\,\ref{tab:rvs}. Keck observations were taken with the standard setup of the California Planet Search \citep{Howard2010} over the course of three months, beginning in 2015 May.
Exposure times were between 120 and 180\,s and each spectrum has a signal-to-noise ratio (SNR) of $\sim$52 per resolution element at 5500\,\AA\ with a resolution of $\sim$60\,000. In order to calculate the systemic radial velocity, we utilize the telluric A and B absorption features that fall at 7594--7621\,\AA\ and 6867--6884\,\AA, respectively. Using the method from \citet{Chubak2012}, the positions of the primary star's spectral lines were measured relative to the telluric features. The positions of the spectral lines were converted into radial velocities and an offset was applied to place the relative radial velocities on the standard scale used by \citet{Nidever2002} and \citet{Latham2002}. We further observed KIC\,8164262 with the Tull Coud\'e Spectrograph mounted on the Harlan J. Smith 2.7-m Telescope \citep{Tull1995} at McDonald Observatory. The Tull spectrograph covers the entire optical spectrum at a resolving power of R\,=\,60\,000. We collected two spectra in 2015 August using exposure times of 800~s. The resulting spectra have SNRs of 23 to 26 per resolution element at 5650\,\AA. For each target visit we also obtained a spectrum of HD~182488, the radial velocity standard star used for the \kep\ field, which we used to measure absolute radial velocities by cross-correlating the target star's spectra with this standard star spectrum. The KPNO observations were taken in sets of back-to-back exposures on 2013 May 29--30 (900\,s each) and 2013 June 17--18 (750\,s each). Calibration exposures were taken using a ThAr lamp prior to each exposure. Using the echelle spectrograph, a wavelength coverage of 4600--9050\,\AA\ was obtained with a resolving power of R\,$\sim$20\,000. The signal-to-noise ratio obtained was $\sim$40 per resolution element. The data were reduced using the {\sc iraf} (Image Reduction and Analysis Facility) software package \citep{Tody1986, Tody1993}.
\begin{figure} \hfill{} \hspace{-0.9cm} \includegraphics[width=10cm]{figures/spectrum_fit.eps} \small\caption{A section of a KPNO spectrum from 5190--5245\,\AA\ (blue line) with the best fit model (red line) using \tdcr\ combined with \mcmc. The entire fitted region extends from 4800\,\AA\ to 6750\,\AA.} \label{fig:spectra} \hfill{} \end{figure} \begin{figure} \hfill{} \centerline{\includegraphics[width=9cm]{figures/posteriors_weight.eps}} \small\caption{Posterior probability distribution functions produced by applying \tdcr\ combined with \mcmc\ to the spectra obtained using the Mayall telescope at KPNO. Lower left subplots: two-dimensional cross-sections of the posterior probability distribution functions. The crosses show the 1$\sigma$ (red) and 2$\sigma$ (green) uncertainties, and are centred on the minima. Diagonal subplots from top left to bottom right: histograms displaying the probability distributions of the effective temperature, $T_{\rm eff}$ (K); the surface gravity, $\log g$ (dex); and the projected rotational velocity, $v \sin i$ or $v_{\rm rot}$ ($\kms$); for the primary component, and $\alpha$, the fractional light contribution, $f_2/(f_1+f_2)$, where $f_1$ and $f_2$ are the light contributions of the primary and secondary, respectively. Upper right subplots: the correlations for the two-dimensional cross-sections mirrored in the diagonal line where 1 is a direct correlation and $-1$ is a direct anti-correlation. The values above the plot give the mean value and 1$\sigma$ uncertainty for each parameter, based on the fitted Gaussians. For the $\alpha$ parameter, the peak of the distribution is at $\sim$0 and thus the normal distribution is not a good approximation.
For this reason, the fitted value above the plot is not representative.} \label{fig:tdcr} \hfill{} \end{figure} \subsubsection{Deriving Fundamental Parameters and Radial Velocities from the KPNO Spectra} \label{sec:todcor} The radial velocity data from the KPNO observations were generated using the 2-D cross-correlation technique as implemented in \tdcr\ \citep{Zucker1994} combined with {\sc emcee}, the {\sc python} implementation of the affine-invariant Markov chain Monte Carlo ({\sc mcmc}) ensemble sampler proposed by \citet{Goodman2010} and implemented by \citet{DFM2013}. By combining these software packages, we were able to simultaneously obtain the posteriors of the fundamental parameters: effective temperature, $T_{\rm eff}$, projected rotational velocity, $v$\,$\sin$\,$i$, and gravity, $\log g$; and obtain radial velocity distributions (distributions of possible radial velocities based on the range of possible spectral models and \tdcr\ uncertainties; \citealt{Hambleton2016a}). Although \tdcr\ is designed for double-lined spectroscopic binaries, by applying it to KIC\,8164262, we were able to marginalize over the parameters relating to the unconstrained secondary component and thus propagate them forward. \mcmc\ is used to explore the binary star parameter space using a set of Markov chains, in this case 128. These chains begin with random distributions based only on their prior probability distribution functions. For both components, we provided uniform priors for $T_{\rm eff}$, $v \sin i$ and $\log g$: 5000--8500\,K, 0--100\,km\,s$^{-1}$ and 2--5\,dex, for the primary component; and 3000--5000\,K, 0--100\,km\,s$^{-1}$ and 4--5\,dex for the invisible secondary component. We also fitted the fractional light contribution, $\alpha = f_2/(f_1+f_2)$, where $f_1$ and $f_2$ are the light contributions of the primary and secondary, respectively, and provided a prior of 0--0.1.
While the results of the secondary component are inconclusive due to the low light contribution ($<$\,0.5 per cent), we marginalized (integrated) over all possible values of the secondary star's atmospheric parameters to avoid biasing our results by selecting a specific spectrum. Consequently, the uncertainty stemming from the unconstrained secondary star parameters is propagated forward. At each step, two spectra were generated (one for each component), from a grid of templates that are synthesized with {\sc spectrum} \citep{Gray1999} using \citet{Castelli2004} model atmospheres. The radial velocities of the primary component were then determined by applying \tdcr\ to the observations using the templates (adjusted to account for their light contributions). The $\chi^2$ value was determined between the shifted, synthetic spectra and the observed spectra. We specified a global per-point uncertainty for the two spectra of $\sigma$\,=\,0.03, which we determined by considering the noise level of the spectra. Each $\chi^2$ value was then multiplied by -0.5 to obtain the log likelihood for each observed spectrum and the results were summed over all spectra. At each iteration the radial velocities and associated errors produced by \tdcr\ were also stored. The radial velocity distributions were then determined by combining the \tdcr\ radial velocity values and errors with the spread caused by the uncertainty in the model spectra. The outcome is a distribution of radial velocities that is marginalized over the model spectra and includes the uncertainties from \tdcr. During the initial burn-in time, the Markov chains converge towards their maximum likelihood value. The statistics of a large number of iterations ($\sim$10\,000, excluding the burn-in time), provide posterior probability distributions for the model parameters. We applied this scheme to the two high resolution KPNO spectra of KIC\,8164262 using the spectral range of 4800--6750\,\AA. 
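The structure of this scheme (uniform priors, $\chi^2$ scaled by $-0.5$ for the log likelihood, burn-in, then posterior statistics) can be illustrated with a minimal single-chain Metropolis sampler on a toy one-parameter problem. This is a simplified stand-in for the 128-walker {\sc emcee} ensemble, with a hypothetical flat `spectrum' model in place of the synthetic spectra:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_likelihood(model, data, sigma=0.03):
    # chi^2 multiplied by -0.5, with the global per-point sigma of 0.03.
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

def log_prior(teff, lo=5000.0, hi=8500.0):
    # Uniform prior on T_eff over 5000--8500 K, as in the text.
    return 0.0 if lo <= teff <= hi else -np.inf

def toy_model(teff, npix=50):
    # Hypothetical stand-in for a synthetic spectrum: a flat continuum
    # whose level encodes the temperature.
    return np.full(npix, teff / 7000.0)

def metropolis(data, theta0, nsteps=4000, step=50.0):
    """Single-chain Metropolis sampler (emcee instead uses an ensemble
    of affine-invariant walkers; the prior + likelihood structure is
    the same)."""
    theta = theta0
    lp = log_prior(theta) + log_likelihood(toy_model(theta), data)
    chain = []
    for _ in range(nsteps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_prior(prop)
        if np.isfinite(lp_prop):
            lp_prop += log_likelihood(toy_model(prop), data)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

# Posterior statistics are taken after discarding the burn-in.
data = toy_model(6890.0)
chain = metropolis(data, theta0=6500.0)
posterior = chain[len(chain) // 2:]
```

The posterior mean recovers the input temperature of the toy data to well within its statistical uncertainty.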
Due to its slow rotation (relative to stars above the \citet{Kraft1967} break), we anticipated that KIC\,8164262 would be a metal rich star. Consequently, we repeated the aforementioned spectral fitting for a range of metallicities: [Fe/H] = -0.2 to +0.5 in steps of 0.1. By comparing the log likelihoods in each case, we found that the metallicity of [Fe/H]\,=\,0.50\,$\pm$\,0.05 provided the best fit (where the \citet{Castelli2004} model libraries have a maximum metallicity of [Fe/H]\,=\,0.5). As we fit the entire spectrum, rather than specific lines, we cannot precisely infer the metallicity using this method, although we conclude that the spectrum of KIC\,8164262 is metal rich. Using this method we determined the KPNO radial velocities provided in Table\,\ref{tab:rvs} and found that KIC\,8164262 is a single-lined spectroscopic binary with the fundamental parameters listed in Table\,\ref{tab:fundamental}. The best-fit model to the spectrum is depicted in Fig.\,\ref{fig:spectra}. The posterior distributions of the spectral parameters are depicted in Fig.\,\ref{fig:tdcr} and are all Gaussian with the exception of the light ratio, which is consistent with $\sim$0, suggesting that the light from the secondary component is negligible. This demonstrates that our model is well constrained. \begin{table} \setlength{\tabcolsep}{2pt} \centering \caption[]{Original, unshifted radial velocities and their uncertainties for the primary component of KIC\,8164262. 
The spectral observations were taken using the echelle spectrograph on the 4-m Mayall telescope, Kitt Peak (KPNO), the HIRES spectrograph on Keck and Cross-Dispersed Echelle Spectrograph on the 2.7-m telescope at the McDonald Observatory, Fort Davis.} \begin{tabular}{l l r c l} \hline \multicolumn {2} {l}{Time (BJD)}&\multicolumn {3} {c} {RV1 (km\,s$^{-1}$)}\\\hline Keck & & & \\\hline 2457151.059455 & & $24.9$& $\pm$& 5.7 \\ 2457202.892222 & & $15.3$& $\pm$& 3.1 \\ 2457228.978804 & & $17.6$& $\pm$& 1.8 \\ 2457237.073545 & & $28.4$& $\pm$& 6.0 \\ 2457239.988444 & & $33.5$& $\pm$& 0.7 \\ 2457241.068023 & & $37.2$& $\pm$& 0.8 \\ 2457243.031959 & & $15.2$& $\pm$& 9.5 \\ 2457244.791056 & & $-9.8$& $\pm$& 0.5 \\ 2457247.027160 & & $-0.5$& $\pm$& 2.5 \\ 2457255.883316 & & $3.0$& $\pm$& 0.9 \\ 2417568.929067 & & $23 $& $\pm$& 4 \\ 2417592.053308 & & $36 $& $\pm$& 7 \\ 2417596.036900 & & $-8 $& $\pm$& 4 \\ 2417599.999567 & & $4.0$& $\pm$& 0.9 \\ 2417621.935110 & & $16 $& $\pm$& 2 \\ \hline \multicolumn {4} {l}{KPNO}\\\hline 2456442.7814 & & 16.4& $\pm$& 2.1 \\ 2456461.7050 & & -2.3& $\pm$& 2.0 \\ \hline \multicolumn {4}{l}{McDonald}\\\hline 2457242.6297& & 29.3 &$\pm$& 1.3 \\ 2457250.8001& & 1.81 &$\pm$& 0.94 \\ \hline \end{tabular} \label{tab:rvs} \end{table} \begin{table} \setlength{\tabcolsep}{2pt} \centering \caption[]{Fundamental parameters determined using \tdcr\ combined with {\sc emcee}. The method was applied to the spectral range 4800--6750\,\AA. 
} \begin{tabular}{l l r c l} \hline \multicolumn {2} {l}{Parameters} & \multicolumn {3}{c}{Values}\\\hline $T_{\rm eff}$ (K) & & 6890 &$\pm$& 80 \\ $\log\,g$ (dex) & & 3.9 &$\pm$& 0.1\\ $v \sin i$ (km\,s$^{-1}$) & & 23 &$\pm$& 1\\ \hline \end{tabular} \label{tab:fundamental} \end{table} \section{Binary Star Model} \label{sec:binary} \subsection{Creating a Binary Model} \label{sec:ph} We fitted the light curve of KIC\,8164262 using the binary modelling code {\sc phoebe} \citep{Prsa2005}, which is an extension of the Wilson-Devinney code \citep{Wilson1971,Wilson1979,Wilson2004}. {\sc phoebe} combines the complete treatment of the Roche potential with the detailed treatment of surface and horizon effects such as limb darkening, reflection, and gravity brightening to derive an accurate model of the binary star. The implementation used here (version 1.0) relies on the Wilson-Devinney method of summing over the discrete rectangular surface elements that cover the distorted stellar surfaces. An accurate representation of the total observed flux and consequently a complete set of stellar and orbital parameters is then obtained by integrating over the visible elements. {\sc phoebe} incorporates all the functionality of the Wilson-Devinney code, but also provides an intuitive graphical user interface alongside many other extensions, including updated filters and bindings that enable interfacing between {\sc phoebe} and {\sc python}. As modelling a large number of data points is computationally expensive, we elected to phase-bin the data for the purpose of binary modelling. This is appropriate for KIC\,8164262, as the binary features and tidally induced pulsation both repeat precisely every orbital cycle. Furthermore, KIC\,8164262 has no significant temporal variations that would affect the binned light curve, such as apsidal motion, which would cause a change in the shape of the ellipsoidal variation as a function of time.
We also note that this method significantly weakens the rotational signal due to spots, which is advantageous as we do not fit the rotation signal as part of our model (rather, we assumed pseudo-synchronous rotation based on this signal). As the information content of the light curve peaks at the time of periastron passage, we did not bin the data between the phases -0.01 and 0.01 where we kept all the data points. At all other phases we binned the data into bins of 100 points, thus significantly reducing the number of data points in these regions. Rather than having discrete segments in the light curve with different cadences, we used a sigmoid function to bridge the number of data points between regions. By using this method, we avoided discrete jumps in the number of data points. Finally, we removed any obvious outliers ($\sim$100 data points) from the data by eye. \begin{table} \caption{ \label{tab:fix} \small Fixed parameters and coefficients for the {\sc phoebe} model to the light and radial velocity curves for all available quarters. 
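The sigmoid bridging of the bin density described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sigmoid width (0.005 in phase) is an assumed value, since the text specifies only full resolution within phases $\pm$0.01, 100-point bins elsewhere, and a smooth bridge between the two regimes.

```python
import numpy as np

def effective_bin_size(phase, keep_half_width=0.01, max_bin=100, width=0.005):
    """Number of points averaged per bin as a smooth function of orbital phase.

    Near periastron (|phase| < keep_half_width) every point is kept
    (bin size 1); far from periastron, 100 points are averaged per bin.
    A sigmoid bridges the two regimes so the point density has no
    discrete jumps. The sigmoid width is an illustrative choice.
    """
    d = np.abs(phase) - keep_half_width   # distance beyond the full-resolution zone
    s = 1.0 / (1.0 + np.exp(-d / width))  # 0.5 at the boundary, -> 1 far away
    return 1.0 + (max_bin - 1.0) * np.clip(2.0 * (s - 0.5), 0.0, 1.0)
```

At phase 0 this returns 1 (all points kept), at phase 0.5 it saturates at 100, and the transition in between is smooth.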
As the secondary component contributes an insignificant amount of light, the secondary parameters are also insignificant; however, the parameter values that we selected (based on estimates) are presented here for completeness.} \begin{center} \begin{tabular}{||l|r||} \hline Parameter & Value\\ \hline Orbital Period (d) & 87.4549\\ Time of primary minimum (BJD) & 2455668.829\\ Primary $T_{\mathrm{eff}}$ (K), $T_{1}$ & 6890\\ Primary synchronicity parameter, $F$ & 29.2547\\ Primary bolometric albedo & 0.6\\ Primary gravity brightening & 0.32\\ Secondary $T_{\mathrm{eff}}$ (K), $T_2$ & 3500\\ Secondary radius (\Rsun), $R_2$ & 0.3\\ Secondary synchronicity parameter, $F$ & 29.2547\\ Secondary bolometric albedo & 0.6\\ Secondary gravity brightening & 0.32\\ Third light & 0.0\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \hfill{} \caption{ \label{tab:free} \small Adjusted parameters and coefficients of the best-fit model to the light and radial velocity curves for the phased light curve data and all radial velocity measurements. The limb darkening coefficients were calculated using \ph. The passband luminosities were derived using the assumed temperature and radius of the secondary. The RV shift is a vertical shift applied to {\sc kpno} and McDonald radial velocities to shift them onto the standard scale used by \citet{Nidever2002} and $F$ is the stellar to orbital rotation rate. 
The fit was performed using MCMC methods and the values in the brackets denote the 1\,$\sigma$ uncertainties.} \begin{center} \begin{tabular}{||l|r||} \hline Parameter & {Value}\\ \hline Mass ratio, $q$ & 0.20(4)\\ Primary mass ($\Msun$), $M_1$ & 1.70(9)\\ Secondary mass ($\Msun$), $M_2$ & 0.36(2)\\ Primary radius (\Rsun), $R_1$ & 2.4(1)\\ Phase shift, $\phi$ & 0.014(1)\\ Orbital eccentricity, $e$ & 0.886(3)\\ Argument of periastron (rad), $\omega$ & 1.48(1)\\ Orbital inclination (degrees), $incl$ & 65(1)\\ Primary passband luminosity (\%), $L_1$ & 98.9(2)\\ Secondary passband luminosity (\%), $L_2$ & 1.1(1)\\ Semi-major axis (\Rsun), $sma$ & 106(2)\\ Primary $\log$\,$g$ (cgs), $\log g1$ & 3.90(3)\\ Primary linear limb darkening coeff. & 0.647\\ Secondary linear limb darkening coeff. & 0.714\\ Primary logarithmic limb darkening coeff. & 0.220\\ Secondary logarithmic limb darkening coeff. & 0.148\\ KPNO RV shift ($\kms$), & 2.7(1)\\ McDonald RV shift ($\kms$), & 0.01(2)\\ \hline \end{tabular} \hfill{} \end{center} \end{table} The final binary star model was converged using a combination of \ph\ and \emcee. However, to understand our model parameters, we initially created a binary star model using \ph's GUI (graphical user interface). For this initial stage we prewhitened the primary pulsation from the light curve. When using the \ph\ GUI, we identified the parameters that significantly impact the light curve shape of KIC\,8164262. As this object is a single-lined non-eclipsing spectroscopic binary, this excludes the majority of parameters that pertain solely to the secondary component, with the exception of the upper limit on the secondary star's relative luminosity. To ensure that this was the case for the secondary radius, we computed the results with a radius of $R_2$\,=\,0.3\,\Rsun\ and $R_2 = 0.5$\,\Rsun\ and found that the results were identical within statistical uncertainties. 
The parameters that were found to affect the binary star light and radial velocity curves are the eccentricity, inclination, argument of periastron, primary radius, primary gravity brightening exponent, luminosity ratio, mass ratio, semi-major axis and systemic velocity, where the phase shift is a convenience parameter that shifts the model horizontally to keep the minimum of the ellipsoidal variation at phase 0.0. As the gravity darkening exponent is degenerate with the primary star's radius, we elected to fix the gravity darkening exponent to 0.32, which is the theoretical value for stars with convective outer envelopes \citep{Lucy1967}, even when the envelope is very thin. As the secondary is small and cool, the amount of reflection in the light curve is negligible. As such, we elected to fix the albedo to the theoretical value for stars with convective envelopes, 0.6 \citep{Lucy1967}, for both components. A list of all the fixed parameters in our binary model can be found in Table\,\ref{tab:fix}. \subsection{Parameter Space Sampling} To create the final model we combined \ph\ with \mcmc\ to integrate the power of \ph\ as a binary modelling code with Bayesian inference. This was possible due to the recent update of \ph\ to include \python\ interfacing. We again elected to use \emcee, which is discussed in detail in \S\ref{sec:todcor}. In addition to the standard functionality of \ph, our models include the ability to fit Doppler boosting, as described by \citet{Bloemen2011}, and tidally induced pulsations. For KIC\,8164262 we elected to fit the high-amplitude prominent pulsation simultaneously with the binary star features. The signature of a tidally induced pulsation is a frequency that is a precise multiple of the orbital frequency. The prominent pulsation in KIC\,8164262 is 228.999(2)\,$\nu_{orb}$ which is equal to 229\,$\nu_{orb}$, given the uncertainty on the orbital period and pulsation frequency. 
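The identification quoted above can be reproduced directly: expressing the pulsation frequency in units of the orbital frequency gives the harmonic number. A minimal check using the values from the text ($P_{orb}$ = 87.4549\,d and the prominent peak at 2.6184922\,\cd):

```python
def harmonic_ratio(nu_puls, p_orb):
    """Pulsation frequency in units of the orbital frequency: n = nu * P_orb."""
    return nu_puls * p_orb

# Values from the text: P_orb = 87.4549 d, prominent peak at 2.6184922 c/d
ratio = harmonic_ratio(2.6184922, 87.4549)   # ~228.999, i.e. the 229th harmonic
```

The ratio differs from the integer 229 by far less than the quoted uncertainty, consistent with a tidally induced pulsation.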
In our model we fixed the frequency of the pulsation to the multiple of the orbital frequency and fitted the phase and amplitude to create a comprehensive binary star model. Doppler boosting is proportional to the radial velocity of the two stars and is the combined effect of shifting the stars' spectral energy distributions (relative to the \textit{Kepler}\ passband), aberration and an altered photon arrival rate. The net result of Doppler boosting is an increase in the observed flux from a star when it moves towards the observer, and a decrease when it moves away from the observer. It was predicted to be seen in the \textit{Kepler}\ data by \citet{Loeb2003} and \citet{Zucker2007}, and has been observed in several systems from ground-based data, as well as \textit{Kepler}\ and CoRoT light curves (see e.g. \citealt{Mazeh2010,van-Kerkwijk2010,Shporer2010,Bloemen2011,Shporer2016}). To determine the Doppler boosting coefficients, we used look-up tables, based on each component's effective temperature and $\log g$. These look-up tables take into account the spectrum of the star and the wavelength of the observations, and were computed from Kurucz 2004 model spectra \citep{Castelli2004}, using Eq.\,(3) of \citet{Bloemen2011}. The Doppler boosting contribution was estimated to be $B\sim$400\,ppm, which is significant given the peak-to-peak amplitude of the light curve is $\sim 4000$\,ppm. The calculation for Doppler boosting was performed at each iteration. \begin{figure} \centerline{\includegraphics[height=7cm]{figures/model_8164.eps}} \centerline{\includegraphics[height=7cm]{figures/model_8164_zoom.eps}} \small\caption{Upper panel: The best-fit light curve model (red line) to the \kep\ data of KIC\,8164262 (black). The 1\,$\sigma$ uncertainties are denoted on the plot. The residuals of the best-fit model are provided below the model and have been binned for visual purposes. The red dashed line denotes zero flux.
Lower panel: A magnified section of the periastron variation displaying the best-fit model. The blue dotted vertical line denotes the time of periastron passage, which is slightly offset from the observed time of minimum} \label{fig:lc_model} \end{figure} In our model we restricted the log\,$g$ of the primary component to that determined from spectral fitting, $\log g1$\,=\,3.9\,$\pm$\,0.3 (where $\sigma = 0.1$). Consequently, at each iteration we calculated the primary star's gravitational potential (an input for \ph\ that is a proxy for the stellar radius), which is a function of the mass ratio, instantaneous separation, spin-to-orbital period ratio and log\,g. We calculated stellar luminosity, thus reducing the number of fitted parameters to twelve. Of these fitted parameters, eight are binary star parameters: the inclination, eccentricity, argument of periastron, phase shift, mass ratio, semi-major axis, systemic velocity and log\,$g$ of the primary component. Two pulsation parameters are the amplitude and phase of the high-amplitude, tidally-induced pulsation, and two are vertical radial velocity shifts to account for having radial velocity data from three different telescopes (these shifts were added to the original radial velocity values that are presented in Table\,\ref{tab:rvs}). These parameters were selected based on their significant contribution to the light curve. Other important parameters that were not fitted include: the primary effective temperature, which we fixed based on the spectral information; the period and zero point in time, which were fixed based on our period determination; the stellar rotation rate, which we fixed based on the stellar rotation signature in the light curve due to spots; and the aforementioned primary gravity darkening exponent, which we fixed to the theoretically determined value of 0.32. 
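The link between the spectroscopic $\log g$ constraint and the primary radius can be sketched with the spherical-star relation $R = \sqrt{GM/g}$; this is a simplification of the Roche-potential calculation that \ph\ actually performs, shown here only to illustrate the scaling.

```python
import math

G_CGS = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
MSUN_G = 1.989e33     # solar mass in g
RSUN_CM = 6.957e10    # solar radius in cm

def radius_from_logg(mass_msun, logg_cgs):
    """Stellar radius (in Rsun) implied by a mass and a surface gravity,
    via R = sqrt(G M / g) for a spherical star."""
    g = 10.0 ** logg_cgs
    return math.sqrt(G_CGS * mass_msun * MSUN_G / g) / RSUN_CM

# Primary component: M1 = 1.70 Msun, log g = 3.90 (values from the tables)
r1 = radius_from_logg(1.70, 3.90)
```

This recovers $R_1 \approx 2.4$\,\Rsun, consistent with the fitted primary radius.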
For each parameter we used a flat, uniform prior where the ranges were selected to be as large as possible without creating unphysical models, with the exception of log\,$g$, which we constrained to be within three sigma of the value obtained through spectral fitting. The log-likelihood was computed as $-0.5$ times the $\chi^2$ value from the light curve data plus $-0.5$ times the $\chi^2$ value from the radial velocity data. Fig.\,\ref{fig:lc_model} depicts the model fit to the light curve. The light curve fit obtained is well constrained, as shown by the residuals presented in the lower panels. It can be seen that the amplitudes of the pulsations surrounding the periastron variation are slightly underestimated. We believe that a comprehensive treatment of pulsations within the binary model framework would alleviate this discrepancy. However, this is beyond the scope of this work. Fig.\,\ref{fig:rv_model} depicts the radial velocity curve (red line) and data. \begin{figure*} \hfill{} \centerline{\includegraphics[height=12cm]{figures/8164model_rv.eps}} \small\caption{Upper panel: The best-fit radial velocity curve (red) to the KPNO radial velocity points (green), Keck radial velocity points (blue) and McDonald radial velocity points (cyan). Vertical shifts, which were fitted simultaneously with the binary star model, have been applied to the McDonald and KPNO data to align the radial velocity points onto a single scale. The vertical shifts are attributed to the use of different telescopes. Bottom panel: the residuals of the best-fit model (note the change of scale).
The error bars denote the one sigma uncertainty on the radial velocities.} \label{fig:rv_model} \hfill{} \end{figure*} \begin{figure*} \hfill{} \centerline{\includegraphics[height=\hsize]{figures/posteriors_beam.eps}} \small\caption{Posterior distributions of the binary star parameters for KIC\,8164262, where $i$ is the inclination of the binary star orbit in degrees; $\omega$ is the argument of periastron in radians; $ecc$ is the eccentricity; $\phi$ is the orbital phase shift; $q$ is the mass ratio ($m_2/m_1$); $sma$ is the semi-major axis in \Rsun; $vga$ is the gamma velocity in $\kms$; $logg1$ is the surface gravity in dex; and $amp1$ and $phase1$ are the amplitude (in mmag) and phase of the high-amplitude pulsation. The best fit values and uncertainties are given at the top of each column. The layout is analogous to that in Fig.\,\ref{fig:tdcr}.} \label{fig:posteriors} \hfill{} \end{figure*} The posteriors for all parameters shown in Fig.~\ref{fig:posteriors}, are well approximated by Gaussians. The list of adjustable parameters and their final values from our \mcmc\ model fit are presented in Table\,\ref{tab:free}. The parameters are indicative of a slightly evolved F star primary component. The light ratio provides an upper estimate of the secondary component's light contribution, which is suggestive of a main sequence M star. From the parameters obtained, the stars are 12.1(2)\,$\Rsun$ apart at periastron and 200(4)\,$\Rsun$ apart at apastron. This significant variation in separation is the driving force of the tidally induced pulsations observed in KIC\,8164262. \subsection{Stellar Rotation} \label{sec:rotation} After pre-whitening the light curve by our orbital model, we identified two peaks in the amplitude spectrum ($\nu$\,=\,0.3345\,\cd\ and $\nu$\,=\,0.6690\,\cd) that are not orbital harmonics. The second peak is the harmonic of the first, which suggests that the peaks may be formed by rotational variations in the light curve due to spots. 
Furthermore, both the amplitude and phase of the peaks were found to change over the duration of the data set, consistent with spot activity. \citet{Zimmerman2017} identified 24 heartbeat stars with similar harmonic features, all of which were attributed to rotation. We thus obtain a rotational period of 2.98942(6)\,d. This is significantly faster than the pseudo-synchronous period, 4.6(2)\,d \citep{Hut1981}, in line with the findings of \citet{Zimmerman2017}. In our binary star model we fixed the rotational period of both components to the value of 2.98942(6)\,d or $F = 29.2548(6)$, where $F$ is the ratio of the orbital period to the rotational period. Currently, we are unable to discern which star has the spots that are providing the rotational signal. As M stars are known to have significant spot coverage, the secondary, although extremely faint, could produce variations on the order of 40\,ppm. However, if we assume that the spots are on the primary star, considering the spot rotation period combined with the primary star radius determined in \S\,\ref{sec:ph} ($R_1$\,=\,2.4(1)\,$\Rsun$) and $v \sin i$ determined through spectra ($v \sin i$\,=\,23(1)\,$\kms$), we infer that the inclination of the primary star would be $i= 35(3)$\degs, compared to the orbital inclination of $incl = 65(1)$\degs. This suggests that if the spots originate on the primary star, the sky-projected inclination angle difference, $\lambda$, is 30(3)\degs. Similar angles have been observed in objects such as DI Her, which is also a detached, eccentric ($e = 0.489$) binary system \citep{Albrecht2009} and CV Velorum, which is slightly more evolved, but shows an obliquity of $\sim$65\degs \citep{Albrecht2014}. However, we cannot rule out that the spots originate from the secondary, for which we are unable to calculate the sky-projected inclination angle difference ($\lambda$).
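The spin-axis inclination quoted above follows from comparing $v \sin i$ with the equatorial velocity implied by the spot rotation period and the fitted radius; a sketch using the values from the text:

```python
import math

RSUN_KM = 6.957e5   # solar radius in km
DAY_S = 86400.0     # seconds per day

def spin_inclination_deg(vsini_kms, p_rot_days, radius_rsun):
    """Spin-axis inclination from v sin i, rotation period and radius."""
    v_eq = 2.0 * math.pi * radius_rsun * RSUN_KM / (p_rot_days * DAY_S)
    return math.degrees(math.asin(vsini_kms / v_eq))

# v sin i = 23 km/s, P_rot = 2.98942 d, R1 = 2.4 Rsun (values from the text)
i_star = spin_inclination_deg(23.0, 2.98942, 2.4)
```

This yields $i \approx 35$\degs, which, compared to the orbital inclination of 65\degs, gives the quoted $\sim$30\degs\ difference.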
\section{Pulsation Characteristics} \label{sec:pulse} \begin{table} \caption{ \label{tab:pulse} \small Frequencies extracted from the masked light curve of KIC\,8164262. The majority of the frequencies extracted are multiples of the orbital frequency, with the exception of the two rotation peaks and three frequencies under 1\,d$^{-1}$. The asterisks denote that the frequency extracted is not resolved from the large amplitude 229$^{th}$ orbital harmonic. The phase is relative to the time of periastron (2455668.7898(2)). The values in parentheses denote the uncertainty in the last digits of the previous value.} \begin{center} \begin{tabular}{|l|c|r|r|} \hline Freq &Notes&Amp & Phase \\ (c\,d$^{-1}$)&& (ppm) & (rad) \\\hline 2.6184922(3)&229\,$\nu_{orb}$ & 1010(20)& 2.844(1)\\ 0.334512(7)&rotation & 41.0(8) & 2.40(2)\\ 2.755699(9)&241\,$\nu_{orb}$ & 35.3(8) & -1.74(2)\\ 1.40645(1)&123\,$\nu_{orb}$ & 22.9(9) & -2.51(3)\\ 2.61912(2)&229\,$\nu_{orb}$* & 15(2) & -0.40(9)\\ 1.80665(2)&158\,$\nu_{orb}$ & 15.2(8) & 2.08(5)\\ 1.41786(2)&124\,$\nu_{orb}$ & 15.1(9) & -0.93(6)\\ 1.50933(2)&132\,$\nu_{orb}$ & 13.3(9) & -1.00(6)\\ 0.66907(2)&rotation & 12.4(8) & -2.70(7)\\ 2.21831(2)&194\,$\nu_{orb}$ & 12.3(8) & 1.23(7)\\ 1.46360(3)&128\,$\nu_{orb}$ & 11.8(9) & 2.63(7)\\ 2.61832(3)&229\,$\nu_{orb}$* & 11(1) & 2.7(1)\\ 3.62472(3)&317\,$\nu_{orb}$ & 9.5(8) & 2.41(8)\\ 1.47501(4)&129\,$\nu_{orb}$ & 8.3(9) & -0.7(1)\\ 0.28033(4)& -- & 8.3(8) & -2.0(1)\\ 0.28383(4)& -- & 7.6(8) & 0.7(1)\\ 1.42931(4)&125\,$\nu_{orb}$ & 6.9(9) & -0.5(1)\\ 1.56644(4)&137\,$\nu_{orb}$ & 6.8(8) & 2.3(1)\\ 0.28504(5)& -- & 6.6(8) & -0.9(1)\\ 1.30355(5)&114\,$\nu_{orb}$ & 6.4(8) & -1.5(1)\\ 2.61901(5)&229\,$\nu_{orb}$* & 6(1) & 2.9(2)\\ 3.01870(6)&264\,$\nu_{orb}$ & 5.6(8) & -3.1(1)\\ 0.25121(5)&22\,$\nu_{orb}$ & 5.6(8) & -1.7(1)\\ \hline \end{tabular} \end{center} \end{table} \begin{figure*} \hfill{} \centerline{\includegraphics[height=10cm]{figures/ft_8164_stack.eps}} \small\caption{Amplitude spectra 
showing the frequency spectrum at different stages. Starting from the top, depicted are the amplitude spectra with: no peaks removed (note the single prominent peak at 2.6\cd, the 229$^{th}$ orbital harmonic); the binary model and primary pulsation subtracted (note the change of scale), where the two rotation peaks at 0.3345\,\cd\ and 0.6690\cd\ are highlighted in red; all peaks removed to an amplitude of 4\,$\mu$mag.} \label{fig:ft} \hfill{} \end{figure*} The light curve of KIC\,8164262 contains one high-amplitude, tidally-excited mode ($\nu$ = 229\,$\nu_{orb}$, cf. Fig.\,\ref{fig:lc_model}), which we fitted simultaneously with the binary star model. The amplitude and phase of the 229$^{th}$ orbital harmonic were found to be $A = 1.01(2)$\,mmag and $\phi = 5.19(6)$\,rad relative to the zero point specified in \S\,\ref{sec:period} or $\phi = 2.844(1)$ relative to periastron. In Fig.\,\ref{fig:ft}, amplitude spectra with no peaks removed (top panel), with the binary model and 229$^{th}$ orbital harmonic subtracted (middle panel) and with all the peaks to an amplitude of 4\,$\mu$mag removed can be seen. The stunning prominence of the 229$^{th}$ orbital harmonic can be seen in the top panel (note the change of scale for the different panels). The binary star model, including the high amplitude mode, was then subtracted from the time-series light-curve data and the residuals were analysed. To remove any residual information from the binary star features, the data points were removed from the region between phases -0.01 to 0.01, the phases of the ellipsoidal variation in the original time series. In the amplitude spectra, these gaps appear in the window pattern, separated from the main peak by the orbital frequency. However, as the binary orbital period is long compared to the duration of the ellipsoidal variation, the removal of these points did not create a window pattern with significant additional amplitude in the sidelobes. 
We calculated amplitude spectra of these data and found that the highest amplitude pulsation peak that remained (after the significant high amplitude peak had been removed) had an amplitude of $35$\,ppm, which is 3.5 per cent of the dominant pulsation. We individually extracted each peak from the amplitude spectra until we reached an amplitude of 5\,$\mu$mag. Beyond this point we were unable to distinguish between pulsation frequencies and noise. Table\,\ref{tab:pulse} provides a list of the extracted frequencies, amplitudes and phases relative to periastron. We identified three peaks that are unresolved from the prominent 229th orbital harmonic. It is likely that these are due to phase and/or amplitude variation, which is common in \DS\ stars \citep{Bowman2014, Bowman2016}. We also provide the orbital harmonic number for each pulsation frequency -- all peaks are orbital harmonics with the exception of three at $\nu = 0.28033(4)$\,\cd, $\nu = 0.28383(4)$\,\cd and $\nu = 0.28504(5)$\,\cd, and the two rotational peaks at $\nu =$\,0.3345\,\cd\ and $\nu =$\,0.6690\,\cd, discussed in \S\ref{sec:rotation}. We speculate that the former three pulsations are either the unresolved rotation signal of the secondary component, or naturally occurring g-mode pulsations, as the primary star temperature is consistent with that of a \GD\ pulsator, which pulsates at frequencies on the order of 1\,\cd \citep{Grigahcene2010a}. We extended our frequency search beyond the Nyquist frequency to ensure that we had identified all the peaks and that the selected peaks were not Nyquist aliases. All peaks beyond the Nyquist frequency showed a multiplet structure caused by the irregular time sampling of \kep\ data due to satellite motion \citep{Murphy2013}. Thus we conclude that the identified peaks are real.
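The harmonic identifications in Table\,\ref{tab:pulse} amount to testing each extracted peak against the nearest integer multiple of the orbital frequency. A sketch, where the tolerance is an assumed frequency resolution of order $1/T$ for the $\sim$4-yr \kep\ baseline:

```python
def is_orbital_harmonic(nu, nu_orb, tol):
    """True if nu lies within tol of a positive integer multiple of nu_orb."""
    n = round(nu / nu_orb)
    return n >= 1 and abs(nu - n * nu_orb) < tol

nu_orb = 1.0 / 87.4549   # orbital frequency in cycles/day
tol = 1.0 / 1460.0       # ~Rayleigh resolution of a 4-yr baseline (assumption)

# Two extracted peaks: a harmonic (241 nu_orb) and a likely g-mode frequency
flag_harmonic = is_orbital_harmonic(2.755699, nu_orb, tol)
flag_gmode = is_orbital_harmonic(0.28033, nu_orb, tol)
```

The 2.755699\,\cd\ peak falls within a few times $10^{-6}$\,\cd\ of 241\,$\nu_{orb}$, while the 0.28033\,\cd\ peak is several m\cd\ from any harmonic.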
\section{Comparison with Stellar Evolutionary Models} To assess the consistency of our models and to ascertain the age of the pulsating primary star, we generated evolutionary models and isochrones and compared them with the results for the primary component. This was not possible for the secondary component, as we were unable to determine its radius and effective temperature due to its low luminosity (relative to the primary component). We used the {\sc {mist}} ({\sc {mesa}} Isochrones and Stellar Tracks) software \citep{Dotter2016,Choi2016}, which is based on the {\sc {mesa}} (Modules for Experiments in Stellar Astrophysics) package \citep{Paxton2011,Paxton2013,Paxton2015}. We generated evolutionary models with [Fe/H]\,=\,0.5, the metallicity determined when fitting the KPNO spectra (see \S\ref{sec:todcor}). The models incorporated time-dependent, diffusive convective overshoot \citep{Herwig2000}, \citet{asplund2009} solar abundances and the OPAL \citep{Iglesias1993, Iglesias1996} opacities. Fig.\,\ref{fig:evo} shows the evolutionary tracks for stars ranging from 1.63 to 1.73\,$\Msun$ in steps of 0.01\,$\Msun$, which bracket the results of the binary model fit ($M_1 = 1.7 \pm 0.1\,\Msun$). The boxes represent the one, two and three\,$\sigma$ results from the model fit in the Teff--radius plane. \begin{figure} \centerline{\includegraphics[height=6cm]{figures/rad_teff_2.eps}} \small\caption{Main-sequence and post-main-sequence evolutionary tracks from the {\sc {mist}} series for the measured mass of the primary component of KIC\,8164262 in the Teff--radius plane. The lines represent evolutionary models in one sigma increments about the central mass value of $M_1 = 1.7 \pm 0.1 \Msun$. The boxes represent the 1$\sigma$ (white), 2$\sigma$ (blue) and 3$\sigma$ (grey) results from the binary model fit.
The legend provides the mass for each evolutionary track in units of solar mass.} \label{fig:evo} \end{figure} To determine the age of the primary component, we compared the binary model results with stellar isochrones, again with [Fe/H]\,=\,0.5. As seen in Fig.\,\ref{fig:iso}, the models suggest the primary component is log(Age[yr])\,=\,9.1 or Age\,=\,1.2\,Gyr. The models also show that the primary component is nearing the end of the main sequence. \begin{figure} \centerline{\includegraphics[height=6cm]{figures/iso_zoom2.eps}} \small\caption{Stellar isochrones in the Teff--radius plane to determine the age of the primary component of KIC\,8164262. The lines represent log(Age[yr]) and the legend provides the ages in log(Age[yr]). The best matching age is log(Age[yr])\,=\,9.1 or 1.2\,Gyr. The boxes represent the 1$\sigma$ (white), 2$\sigma$ (blue) and 3$\sigma$ (grey) results from the binary model fit. } \label{fig:iso} \end{figure} \section{Discussion and Conclusions} \label{sec:summary} KIC\,8164262 is an extreme \hb\ star, in that its orbital period and eccentricity are larger than most of the heartbeat stars discovered so far. Its most striking feature is the high amplitude ($\sim$1\,ppt) tidally excited pulsation at 229 $\nu_{\rm orb}$ --- the largest amplitude tidally excited pulsation observed to date. The frequency of this pulsation is not unusual (frequencies of $0.5 \, {\rm d}^{-1} \lesssim \nu_{\rm orb} \lesssim 3 \, {\rm d}^{-1}$ are common in \hb\ stars), it simply occurs at a large orbital harmonic because of the small orbital frequency. However, the amplitude of the pulsation is exceptional, as it is over twenty times larger than any other pulsation in KIC\,8164262, and roughly four times larger than any pulsations in KOI-54. The LC \kep\ data of Quarters 0--17 and radial velocities from three different telescopes (Keck, the 4-m Mayall telescope at KPNO and the 2.7-m telescope on the McDonald Observatory) were modelled using \ph\ combined with \emcee. 
We further augmented the software to add sine waves to the light curve, which we used to model the prominent tidally induced pulsation at 229\,$\nu_{orb}$, and to model Doppler boosting. The results of the spectral analysis on the KPNO spectra, specifically the effective temperature and log\,$g$ of the primary component, were also incorporated into the modelling effort to fully constrain the fundamental parameters. Using these combined software packages, we determined that KIC\,8164262 contains a slightly evolved F star, which is experiencing the tidally induced pulsations, and a tentatively classified M dwarf star. Comparing the model results with MIST stellar evolution tracks and isochrones confirms the primary is approaching the end of the main sequence and suggests an age of 1.2 Gyr. We performed pulsational analysis on the complete \kep\ light curve with the binary star model removed. We found that all the identified peaks, with the exception of five, were multiples of the orbital frequency, thus we conclude that these are all tidally induced pulsations. Of the remaining five, we identified three peaks at $\nu = 0.28033(4)$\,\cd, $\nu = 0.28383(4)$\,\cd and $\nu = 0.28504(5)$\,\cd, which are likely g-mode pulsations. The remaining two peaks have frequencies at $\nu =$\,0.3345\,\cd\ and $\nu =$\,0.6690\,\cd, where the latter is the harmonic of the former, suggestive of rotational variations due to spots \citep{Zimmerman2017}. The large-amplitude mode requires explanation and may yield clues to tidal dissipation processes in binary star systems. In a companion paper, F17, Fuller et al. calculate the expected frequencies and amplitudes of tidally excited pulsations from theoretical considerations. Fuller et al. find that an extremely finely tuned resonance is required to tidally excite a mode to the observed amplitude, and such a resonance is unlikely to occur by chance.
Instead, they find that the pulsation is well explained (in both frequency and amplitude) as a resonantly locked mode. In this scenario, the combined effects of stellar evolution and spin-down are balanced by ongoing tidal circularization and synchronization in a self-regulating process such that the frequency of a single pulsation mode is held at resonance with the tidal forcing. The result is an increased rate of tidal dissipation compared to conventional expectations (see \citet{Zahn2008} for a review). For A--F stars, tidal interactions are expected to be weak due to the absence of a thick convective envelope and the presence of only a small convective core, entailing an effective tidal quality factor (which measures the efficiency of tidal dissipation) of $ Q \gtrsim 10^6$. However, for KIC\,8164262 F17 calculate the effective tidal quality factor to be $Q \sim 5\times 10^4$ while the resonance lock is active, corresponding to an orbital circularization timescale of $\sim 5\, {\rm Gyr}$. Furthermore, in the absence of the prominent pulsation, F17 finds that the circularization timescale is $\sim500$ times longer. This is suggestive of the importance of resonance locking for the acceleration of orbital circularization. Further details are presented in F17. \section{Acknowledgments} The authors express their sincere thanks to NASA and the \kep\ team for the high quality \kep\ data. The \kep\ mission is funded by NASA's Science Mission Directorate. We thank the Planet Hunters for finding this object and Professor Dan Fabrycky for notifying us about it. This work was also supported by the STFC (Science and Technology Funding Council). KH, ST and JF acknowledge support through NASA K2 GO grant (11-KEPLER11-0056) and NASA ADAP grant (16-ADAP16-0201). AP and KH acknowledge support from the NSF (grant \#1517460). JF acknowledges partial support from NSF under grant nos. AST-1205732, PHY-1125915, and through a Lee DuBridge Fellowship at Caltech. 
We would like to thank the RAS for providing grants which enabled KH's attendance to conferences and thus enabled the development of collaborations and the successful completion of this work. AP acknowledges support through NASA K2 GO grant (NASA 14-K2GO1\_2-0057). We acknowledge the observations taken using the 4-m Mayall telescope, KPNO (NOAO 2011A-0022); the Keck telescope, Mauna Kea; and the 2.7-m telescope at the McDonald Observatory. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. \bibliographystyle{mn2e}
\section{Introduction} \vspace{0.7cm} There is a general existence theorem for solutions of analytic differential equations, but we have no general existence theorem for analytic difference equations. For example, we consider the following first order nonlinear difference equation $$ x(t+1) = 2 x(t) + x(t)^2. \eqno{(*)} $$ Putting $x(t) = -1 + y(t),$ we get $y(t+1) = y(t)^2$ and $\log y(t+1) = 2 \log y(t).$ Then $u(t) = 2^t$ is a (particular) solution of the equation $u(t+1) = 2u(t).$ Putting $C(t) = (\log y(t))/u(t),$ we have $C(t+1) = C(t),$ that is $C(t) = \pi(t),$ where $\pi(t)$ is an arbitrary entire function with period $1.$ Therefore, a general solution of $(*),$ which tends to $0$ as $t \to - \infty,$ is given by $x(t) = \exp[\pi(t) 2^t] - 1.$ This simple example $(*)$ illustrates the whole make-up of the present paper. $0$ is the equilibrium point of $(*)$, with characteristic value $2.$ A formal solution of it is obtained by putting $x(t) = \sum_{n=1}^{\infty} a_n (2^t)^n.$ If its convergence is shown, then we have a solution $x(t)$ of the initial value problem, which tends to $0$ as $t \to - \infty.$ Further we proceed to seek general solutions. For analytic differential equations, a solution of an initial value problem is always represented by a power series. This is the reason that the general existence theorem can be established for differential equations. But for difference equations, this is not the case. Next we consider the following first order difference equation $$ x(t+1) = x(t) + x(t)^2, \eqno{(**)} $$ for which $0$ is the equilibrium point with characteristic value $1,$ but we cannot put its formal solution in the form $\sum_{n=1}^{\infty} a_n (1^t)^n.$ That is, the selection of an appropriate formal solution depends on the problem.
Of course, by \cite{Kimu} p.237 Theorem 14.2, $(**)$ has a local solution with the asymptotic expansion $$ x(t) \sim -\frac{1}{t} \left\{1 + \sum_{j+k \geq 1} \hat{q}_{jk} t^{-j} \left(\frac{\log t}{t} \right)^k \right\}^{-1}, $$ where $\hat{q}_{jk}$ are constants. \par Further, for analytic differential equations, the solution is determined uniquely by the initial condition. For analytic difference equations, however, a solution cannot be determined by the (initial) condition $x(t) \to 0$ as $t \to - \infty,$ hence we need to consider general solutions. In this paper, we consider the following second order nonlinear difference equation, \begin{equation} u(t+2)=f(u(t),u(t+1)), \label{1.1} \end{equation} where $f(x,y)$ is an entire function of $x$, $y$. We assume that there is an equilibrium point $u^*: u^* = f(u^*, u^*).$ We can take $u^* = 0,$ that is, $f(0,0)= 0,$ without loss of generality. \par Many studies of difference equations treat a discrete variable; indeed, equation (\ref{1.1}) is often considered for $t\in {\Bbb N}$. In our study, however, we consider the difference equation (\ref{1.1}) with a continuous variable $t$. If ``$t$'' in equation (\ref{1.1}) represents ``time'', then $t$ is of course a real variable. Hereafter, however, $t$ in (\ref{1.1}) represents a complex variable, because we aim at more general theorems. \par Our aim is to obtain analytic general solutions $u(t)$ of (\ref{1.1}) such that $u(t+n) \to 0$ as $n \to +\infty$ or $n \to - \infty.$ \par We write $f(x,y)$ in (\ref{1.1}) as \begin{equation} f(x,y) = - \beta x - \alpha y + g(x,y), \quad \beta \ne 0, \label{1.2} \end{equation} where $g$ consists of higher order terms in $x$, $y$ such that $g(x,y)=\sum_{i,j\geqq 0, i+j\geqq 2} b_{i,j} x^iy^j\not\equiv 0,$ and $\alpha$, $\beta$, $b_{i,j}$ are constants.
Further we assume that at least one of the moduli of the characteristic values is neither $0$ nor $1.$ The case that both characteristic values equal $1$ will be treated in another paper. \par \quad The present work proceeds as follows: {\bf 1)} determination of formal solutions, {\bf 2)} obtaining a particular solution by Schauder's Fixed Point Theorem in a locally convex topological space, {\bf 3)} obtaining general solutions by the method of Kimura \cite{Kimu} and Yanagihara \cite{Ya}. \par \section{Analytic Solutions} \subsection{ A formal solution.} The characteristic equation of (\ref{1.1}) with (\ref{1.2}) is \begin{equation} D(\lambda)=\lambda^2+\alpha\lambda+\beta=0.\label{2.1} \end{equation} Let $\lambda_1$, $\lambda_2$ be the roots of the characteristic equation with $|\lambda_1|\leqq |\lambda_2|$. We consider the following two cases, i) $|\lambda_1|< 1$ and ii) $|\lambda_2|>1$. Of course, some characteristic equations satisfy both i) and ii). \par In case i), we consider solutions such that $$u(t+n)\to \,0,\quad \text {as}\,\,\,n\to\, +\infty.$$ In case ii), we consider solutions such that $$u(t+n)\to \,0,\quad \text {as}\,\,\,n\to\, -\infty.$$ In case i) we put $\lambda=\lambda_1$, and in case ii) we put $\lambda=\lambda_2$. In both cases we seek a formal solution of (\ref{1.1}) of the form $$u(t)=\sum_{n=1}^{\infty} a_n\lambda^{nt}.$$ We substitute $u(t)=\sum_{n=1}^{\infty} a_n\lambda^{nt}$, $u(t+1)=\sum_{n=1}^{\infty} a_n\lambda^{n(t+1)}$, $u(t+2)=\sum_{n=1}^{\infty} a_n\lambda^{n(t+2)}$ into (\ref{1.1}).
Comparing the coefficients of $\lambda^{nt}$, $(n=1,2,\cdots)$, we obtain, with $D(\lambda)$ as in (\ref{2.1}), \begin{equation} \left\{ \begin{array}{l} a_1\cdot D(\lambda)=0,\notag\\ a_2\cdot D(\lambda^2)= a_1^2(b_{2,0}+b_{1,1} \lambda+ b_{0,2}\lambda^2), \\ a_3\cdot D(\lambda^3)= b_{2,0}2 a_1a_2+b_{1,1} a_1 a_2\lambda(\lambda+1) +b_{0,2}2 a_1 a_2\lambda^3\notag\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\,\,\,\,\,+ a_1^3(b_{3,0}+b_{2,1}\lambda+b_{1,2}\lambda^2+b_{0,3}\lambda^3),\notag \\ \cdots\\ a_k\cdot D(\lambda^k)=C_k(a_1,\cdots,a_{k-1}),\quad\notag\\ \cdots,\notag \end{array} \right.% \end{equation} where $C_k(a_1,\cdots,a_{k-1})$ are polynomials in $a_1,\cdots,a_{k-1}$ with coefficients $b_{i,j}\lambda^l$, $0\leqq i \leqq k$, $0\leqq j\leqq k$, $0\leqq l\leqq k$, $2\leqq i+j\leqq k$. By the definition of $\lambda$ and $D$, we have $D(\lambda)=0$ and $D(\lambda^k)\neq 0\, (k\geqq 2)$, hence $a_1$ is arbitrary. \par Here we suppose that $a_1\neq 0$. Then we have \begin{equation} a_k=\frac{a_1^k}{D(\lambda^k)}C^*_k(b_{i,j},\lambda^l), \,\, k\geqq 2, \label{2.2} \end{equation} where $C^*_k(b_{i,j},\lambda^l)$ are constants determined by the function $f$; they consist of $b_{i,j}$, $2\leqq i+j\leqq k$, and $\lambda^l$, $0\leqq l\leqq k$. Hence we can determine a formal solution of (\ref{1.1}), \begin{equation} u(t)=\sum_{n=1}^{\infty} a_n\lambda^{nt},\label{2.3} \end{equation} in both cases i) and ii). Here $a_1$ is an arbitrary nonzero constant, and the $a_k$ are determined by $a_1$. \subsection{ Existence of an analytic solution } Here we put $u(t)=s,u(t+1)=w, u(t+2)=z$, and $H(s,w,z)=-z+f(s,w)$. Then the equation (\ref{1.1}) can be written as \begin{equation} H(u(t),u(t+1),u(t+2))=0.\label{2.4} \end{equation} $H(s,w,z)$ is holomorphic in a neighborhood of $(0,0,0)$, and clearly $H(0,0,0)=0$.
Furthermore we have $ \frac{\partial H}{\partial s}(0,0,0) =\frac{\partial f}{\partial s} \Bigr |_{s=w=0} =-\beta \neq 0$ by (\ref{1.2}). By the implicit function theorem, for the equation $H(s,w,z)=0$ we have a holomorphic function $\phi$ such that \begin{equation} s=\phi(w,z) \quad \mbox{for}\quad |w|,\,|z|\leqq \rho\label {2.5} \end{equation} for some $\rho>0$. Furthermore we have a constant $K$ such that \begin{equation} |s|=|\phi(w,z)|\leqq K(|w|+|z|) \quad \mbox{for}\quad |w|,|z|\leqq \rho.\label{2.6} \end{equation} \quad Let $N$ be a positive integer. Put the partial sum of the formal solution as $P_N(t)=\sum_{n=1}^N a_n \lambda^{nt}$, and put $p_N(t)=u(t)-P_N(t)$. Here we abbreviate $p(t)=p_N(t)$. \par Moreover we define the following sets, \begin{align} &S(\eta)=\{t\in \Bbb{C}: |\lambda^t|\leqq \eta \},\notag\\ &J(A,\eta)=\{p:p(t) \,\text{is holomorphic and }\, |p(t)|\leqq A|\lambda^t|^{N+1} \, \mbox{for }\, t\in S(\eta)\},\notag \end{align} in which $A>0$ and $\eta$, $0<\eta<1$, are constants. These constants are determined in the proof of the existence of a fixed point of the maps $T_i$ ($i=1,2$) below.\par Suppose there exists a solution $u(t)$ of (\ref{1.1}) in $S(\eta)$. Then $p_N(t)=u(t)-P_N(t)$ belongs to $J(A,\eta)$ for suitably chosen constants $A$, $\eta$, and satisfies the equation \begin{equation} p(t+2)=f(p(t)+P_N(t),p(t+1)+P_N(t+1))-P_N(t+2),\label{2.7} \end{equation} with $p(t)=p_N(t)$. Conversely, if there exists a solution $p(t)$ of (\ref{2.7}), then $u(t)=p(t)+P_N(t)$ is a solution of (\ref{1.1}). So, hereafter we concentrate on proving the existence of $p(t)\in J(A,\eta)$ such that $u(t)=p(t)+P_N(t)$ satisfies (\ref{2.7}).
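The role of the remainder $p_N$ can be illustrated numerically: by the construction (\ref{2.2}) of the coefficients, the partial sum $P_N$ satisfies (\ref{1.1}) up to a residual of order $|\lambda^t|^{N+1}$. The sketch below is ours, for the concrete choice $f(x,y)=-\beta x-\alpha y+x^2$ (so $b_{2,0}=1$ and all other $b_{i,j}=0$) and the illustrative roots $\lambda_1=0.5$, $\lambda_2=2$.

```python
# Case i): lam = lam1 with |lam1| < 1.  (All numerical choices are illustrative.)
lam1, lam2 = 0.5, 2.0
alpha, beta = -(lam1 + lam2), lam1 * lam2          # D(x) = x^2 + alpha x + beta
D = lambda x: x * x + alpha * x + beta

# For g(x, y) = x^2, comparing coefficients of lam1^{nt} gives
#   a_n D(lam1^n) = sum_{i+j=n} a_i a_j   (n >= 2),  a_1 arbitrary.
N = 10
a = [0.0, 1.0]                                     # a_1 = 1
for n in range(2, N + 1):
    a.append(sum(a[i] * a[n - i] for i in range(1, n)) / D(lam1**n))

def P(tau, shift=0):
    """Partial sum P_N(t + shift) written in the variable tau = lam1^t."""
    return sum(a[n] * lam1 ** (shift * n) * tau**n for n in range(1, N + 1))

def residual(tau):
    # P_N(t+2) + alpha P_N(t+1) + beta P_N(t) - P_N(t)^2, of order tau^{N+1}
    return P(tau, 2) + alpha * P(tau, 1) + beta * P(tau) - P(tau) ** 2

# Halving tau shrinks the residual by roughly 2^(N+1).
print(abs(residual(0.1)), abs(residual(0.05)))
```

The residual is exactly the high-order tail of $P_N(t)^2$, which is why it scales like $|\lambda^t|^{N+1}$, matching the bound imposed on $p$ in $J(A,\eta)$.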
In case i) $|\lambda|<1$, the existence of a solution $p(t)$ of (\ref{2.7}) is equivalent to the existence of $p(t)$ which satisfies $$p(t) =\phi(p(t+1)+P_N(t+1),p(t+2)+P_N(t+2))-P_N(t).$$ For $p(t)\in J(A,\eta)$, we put \begin{equation} T_1[p](t)=\phi(p(t+1)+P_N(t+1),p(t+2)+P_N(t+2))-P_N(t).\label{2.8} \end{equation} Then we can prove that $T_1$ maps $J(A,\eta)$ into itself (see Appendix A). The map $T_1$ is obviously continuous if $J(A,\eta)$ is endowed with the topology of uniform convergence on compact sets in $S(\eta)$. Furthermore $J(A,\eta)$ is clearly convex, and is relatively compact by the theorem of Montel \cite{Ahlfors}.\par By Schauder's fixed point theorem \cite{Dug}(p.74), \cite{Smar}(p.32), we obtain the existence of a fixed point $p(t)=p_N(t)\in J(A,\eta)$ of $T_1$ in $S(\eta)$. Moreover we can prove uniqueness of the fixed point (see Appendix B) and independence from $N$ (see Appendix C). Hence we have an analytic solution $u(t)$ in $S(\eta)$. \par \vspace{0.5cm} \quad In case ii) $|\lambda|>1$, the solvability of (\ref{2.7}) is equivalent to the existence of $p(t)$ which satisfies $$p(t)=f(p(t-2)+P_N(t-2),p(t-1)+P_N(t-1))-P_N(t).$$ For $p(t)\in J(A,\eta)$, we put \begin{equation} T_2[p](t)=f(p(t-2)+P_N(t-2),p(t-1)+P_N(t-1))-P_N(t).\notag \end{equation} Then we can prove the existence of an analytic solution $u(t)$ in $S(\eta)$ by arguments similar to those above. \par \vspace{0.7cm} Thus we have the following Theorem 1. \par \vspace{0.7cm} {\bf Theorem 1.} \em Let $\lambda_1,\, \lambda_2$ be the roots of $D(\lambda)=0$ in (\ref{2.1}), with $|\lambda_1|\leqq|\lambda_2|$. Suppose $|\lambda_1|<1$ or $|\lambda_2|>1$. Put $\lambda=\lambda_1$ for the former, and $\lambda=\lambda_2$ for the latter. We assume that $\lambda_1^k\neq \lambda_2$ and $\lambda_2^k\neq \lambda_1$ for any $k\in {\mathbb N}$. Then there is an $\eta>0$ such that we have a holomorphic solution $u(t)=\sum^{\infty}_{n=1}a_n\lambda^{nt}$ in $S(\eta)=\{t;|\lambda^t|<\eta\}$.
\em\par \vspace{0.5cm} {\bf In case ii).} The solution $u(t)$ can be analytically continued to the whole plane by making use of the equation (\ref{1.1}), $u(t+2)=f(u(t),u(t+1)).$\par {\bf In case i).} The function $\phi(w,z)$ in (\ref{2.5}), $s=\phi(w,z)$ for $|w|,\,|z|\leqq \rho$, is defined only locally, though we can also analytically continue $u(t)$, keeping away from branch points. The solution obtained may be multi-valued. \par \vspace{0.5cm} \subsection{Particular solutions.} In this subsection, we consider solutions $u_1(t)$ and $u_2(t)$ which depend on $\lambda_1$ and $\lambda_2$, respectively. Putting formal solutions $u_1(t) = \sum_{n=1}^{\infty} a_{1,n} \lambda_1^{nt} $ and $u_2(t) = \sum_{n=1}^{\infty} a_{2,n} \lambda_2^{nt}$, we have $a_{m,k}\cdot D(\lambda_m^k)=C_{m,k}(a_{m,1},\cdots,a_{m,k-1}),$ $(m=1,2; k\in \Bbb{N})$ by arguments similar to those in subsection {\bf 2.1}, where $C_{m,k}(a_{m,1},\cdots,a_{m,k-1})$ are polynomials in $a_{m,1},\cdots,a_{m,k-1}$ with coefficients $b_{i,j}\lambda_m^l$, $0\leqq i \leqq k$, $0\leqq j\leqq k$, $0\leqq l\leqq k$, $2\leqq i+j\leqq k$. Furthermore, if we take $a_{m,1}\neq 0$, then we have \begin{equation} a_{m,k}D(\lambda_m^k)=a_{m,1}^k C^*_{m,k}(b_{i,j},\lambda_m^l), \,\, m=1,2;\,\,k\geqq 2, \label{2.9} \end{equation} where $C^{*}_{m,k}(b_{i,j},\lambda_m^l)$ are constants determined by the function $f$; they consist of $b_{i,j}$, $2\leqq i+j\leqq k$, and $\lambda_m^l$, $0\leqq l\leqq k$. Then, by arguments similar to those in {\bf 2.1}--{\bf 2.2}, we have the following Lemmas 2 and 3.\par \vspace{0.7cm} {\bf Lemma 2.} \em Let $\lambda_1, \lambda_2$ be roots of (\ref{2.1}) with $|\lambda_1| \leqq |\lambda_2| < 1$.
If $\lambda_2^k \ne \lambda_1$ for any positive integer $k$ greater than $1$, then there are constants $\eta_1,\, \eta_2>0$ such that we have the following two holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}), \begin{gather} u_m(t) = \sum_{n=1}^{\infty} a_{m,n} \lambda_m^{nt} \quad \text{in} \quad S(\eta_m)=\{t;|\lambda_m^t|<\eta_m\}, \quad (m=1,2),\notag \end{gather} in which $a_{1,1}$ and $a_{2,1}$ can be taken to be arbitrary non-zero constants.\par In the case $\lambda_2^k = \lambda_1$ for some $k \in {\mathbb N}:$ if $C^*_{2,k}(b_{i,j},\lambda_2^l)=0$, with $C^*_{2,k}$ given in (\ref{2.9}), then we take $a_{2,1}\neq 0$ and $a_{2,k} \ne 0$ arbitrary, and have the solution $u_2(t)$ as above. On the other hand, if $C^*_{2,k}(b_{i,j},\lambda_2^l)\neq 0$ for this $k$, then we take $a_{2,j}=0$ for $j\ne kn$, $n\in \Bbb{N}$, can take $a_{2,k} \ne 0$ arbitrary, and determine the coefficients $a_{2,kn}$ for $n \geqq 2$ as above. Hence there is an $\eta_2>0$ such that we have holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}), $$u_2(t) = \sum_{n=1}^{\infty} a_{2,kn} \lambda_2^{knt} \quad\text{in} \,\,S(\eta_2),$$ as well as $u_1(t) = \sum_{n=1}^{\infty} a_{1,n} \lambda_1^{nt}$ in $S(\eta_1). $ In the case $\lambda_2^k=\lambda_1$ and $C^*_{2,k}(b_{i,j},\lambda_2^l)\neq 0$, if we take $a_{2,k}=a_{1,1}$, then $u_2(t)=u_1(t)$ in $S(\eta_1)\cap S(\eta_2)$. \par Thus, in both cases, $u_1(t + n) \to 0, u_2(t + n) \to 0$ as $n \to \infty$ uniformly on any compact subset of the $t$-plane.
\em\par \vspace{0.5cm} \quad {\bf Proof.}\quad If $\lambda_2^k \ne \lambda_1$ for any $k \in {\mathbb N},$ then we can determine the formal solution $u_2(t) = \sum_{n=1}^{\infty} a_{2,n} \lambda_2^{nt}$ as in subsection {\bf 2.1}, with $\lambda = \lambda_2$ instead of $\lambda_1,$ and we can show that it is an actual solution as in subsections {\bf 2.1-2.2}. \par In the case $\lambda_2^k = \lambda_1$ for some $k \in {\mathbb N},$ if we take $a_{2,1}\neq 0$, from (\ref{2.9}) we have \begin{equation} a_{2,k}D(\lambda_2^k)=a_{2,k}D(\lambda_1)=a_{2,1}^k C^*_{2,k}(b_{i,j},\lambda_2^l)=0.\label{2.10} \end{equation} If $C^*_{2,k}(b_{i,j},\lambda_2^l)= 0$, we can take $a_{2,k}$ arbitrary, and determine $a_{2,n}$, $2\leqq n \leqq k-1, n\geqq k+1$, from $a_{2,1}$ as in (\ref{2.9}).\par However, if $C^*_{2,k}(b_{i,j},\lambda_2^l)\neq 0$, the equation (\ref{2.10}) is a contradiction. Thus we must take $a_{2,1}=0$, and then $a_{2,n}=0$ for $n\leqq k-1$ by $a_{2,k}\cdot D(\lambda_2^k)=C_{2,k}(a_{2,1},\cdots,a_{2,k-1}).$ Then we can take $a_{2,k}$ to be an arbitrary non-zero constant, and determine the coefficients $a_{2,n}$ as follows, \begin{equation} a_{2,n}= \begin{cases} 0,\quad (n\neq km , \,\, m\in \Bbb{N}),\notag\\ a_{2,k}^m\frac{C_{2,m}^*(b_{i,j}, \lambda_2^{lk})}{D(\lambda_2^{km})} =a_{2,k}^m\frac{C_{2,m}^*(b_{i,j}, \lambda_1^{l})}{D(\lambda_1^m)}, \quad (n= km , \,\, m\in \Bbb{N}),\notag \end{cases} \end{equation} where $C^*_{2,m}(b_{i,j},\lambda_1^l)$ are the constants defined in (\ref{2.9}). Hence we can determine a formal solution $u_2(t)$ such that $$u_2(t) = \sum_{n=1}^{\infty} a_{2,kn} \lambda_2^{knt}.$$ If we take $a_{2,k}=a_{1,1}$, then we have only one solution in $S(\eta_1)\cap S(\eta_2)$. Furthermore, in both cases of $\lambda_2^k=\lambda_1$, we can prove by arguments similar to those in {\bf 2.2} that there is an $\eta_2>0$ such that we have a holomorphic solution $u_2=\sum_{n=1}^{\infty} a_{2,kn} \lambda_2^{knt}$ in $S(\eta_2)$.
\par Obviously $u_1(t + n) \to 0, u_2(t + n) \to 0$ as $n \to +\infty$ uniformly on any compact subset of the $t$-plane. $\square$\par \vspace{0.7cm} {\bf Lemma 3.} \em Let $\lambda_1, \lambda_2$ be roots of (\ref{2.1}) with $1<|\lambda_1| \leqq |\lambda_2|$. If $\lambda_1^k \ne \lambda_2$ for any positive integer $k$ greater than $1$, then there are constants $\eta_1,\, \eta_2>0$ such that we have the following two holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}), \begin{gather} u_m(t) = \sum_{n=1}^{\infty} a_{m,n} \lambda_m^{nt} \quad \text{in} \quad S(\eta_m)=\{t;|\lambda_m^t|<\eta_m\}, \quad (m=1,2),\notag \end{gather} in which $a_{1,1}$ and $a_{2,1}$ can be taken to be arbitrary non-zero constants.\par In the case $\lambda_1^k = \lambda_2$ for some $k \in {\mathbb N}:$ if $C^*_{1,k}(b_{i,j},\lambda_1^l)=0$, with $C^*_{1,k}$ given in (\ref{2.9}), then we take $a_{1,1}\neq 0$ and $a_{1,k}\neq 0$ arbitrary, and have the solution $u_1(t)$ as above. On the other hand, if $C^*_{1,k}(b_{i,j},\lambda_1^l)\neq 0$ for this $k$, then we take $a_{1,j}=0$ for $j\ne kn$, $n\in \Bbb{N}$, can take $a_{1,k} \ne 0$ arbitrary, and determine the coefficients $a_{1,kn}$ for $n \geqq 2$ as above. Hence there is an $\eta_1>0$ such that we have holomorphic solutions $u_1$ and $u_2$ of (\ref{1.1}), $$u_1(t) = \sum_{n=1}^{\infty} a_{1,kn} \lambda_1^{knt} \quad\text{in} \,\,S(\eta_1),$$ as well as $u_2(t) = \sum_{n=1}^{\infty} a_{2,n} \lambda_2^{nt}$ in $S(\eta_2). $ In the case $\lambda_1^k=\lambda_2$ and $C^*_{1,k}(b_{i,j},\lambda_1^l)\neq 0$, if we take $a_{1,k}=a_{2,1}$, then $u_1(t)=u_2(t)$ in $S(\eta_1)\cap S(\eta_2)$. \par Thus, in both cases, $u_1(t - n) \to 0, u_2(t - n) \to 0$ as $n \to \infty$ uniformly on any compact subset of the $t$-plane. \par \em\par \vspace{0.5cm} \quad {\bf Proof.}\quad The proof is similar to that of Lemma 2. $\square$\par \vspace{0.7cm} The analytic solutions $u_1$ and $u_2$ obtained in Lemmas 2-3 are ``particular solutions'' of (\ref{1.1}).
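As a numerical illustration of Lemma 2 (the sketch and all parameter choices are ours, for the hypothetical nonlinearity $g(x,y)=0.1\,x^2$), one can build the truncated series for $u_1$, feed its values at $t_0$, $t_0+1$ into the recurrence $u(t+2)=f(u(t),u(t+1))$, and observe that the forward iterates follow the series and tend to $0$:

```python
# Lemma 2 setting: both roots inside the unit disk, lam2^k != lam1 for all k.
lam1, lam2 = 0.3, 0.7
alpha, beta = -(lam1 + lam2), lam1 * lam2
D = lambda x: x * x + alpha * x + beta

# Coefficients of u_1(t) = sum a_n lam1^{nt} for
# u(t+2) = -beta u(t) - alpha u(t+1) + 0.1 u(t)^2.
N = 25
a = [0.0, 1.0]
for n in range(2, N + 1):
    a.append(0.1 * sum(a[i] * a[n - i] for i in range(1, n)) / D(lam1**n))

series = lambda t: sum(a[n] * lam1 ** (n * t) for n in range(1, N + 1))

# Start the recurrence from the series values at t0 and t0 + 1 ...
t0 = 2.0
u_prev, u_curr = series(t0), series(t0 + 1)
for k in range(2, 21):
    u_prev, u_curr = u_curr, -beta * u_prev - alpha * u_curr + 0.1 * u_prev**2
    # ... the iterates track the series, which tends to 0.
    assert abs(u_curr - series(t0 + k)) < 1e-12
print(u_curr)  # of order lam1**22, i.e. essentially 0
```

Because both characteristic values lie inside the unit disk, small errors in the initial data are contracted along the orbit, which is why the truncated series remains an accurate seed for the recurrence.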
\section{Analytic General Solutions} Analytic general solutions of nonlinear difference equations have been investigated, for example, by Harris \cite{Harris}, \cite{Harris2}, and others, but we cannot make use of their method. Here we follow the method of Kimura \cite{Kimu} and Yanagihara \cite{Ya}, where general solutions of first order difference equations are studied. \par In this section we consider the following case, $$ |\lambda_1|<1<|\lambda_2|. $$ For the other cases, we study general solutions of the difference equation (\ref{1.1}) in other papers. \par \vspace{0.4cm} For a linear second order difference equation, general solutions are written in terms of two particular solutions. For a nonlinear second order difference equation, in the present case, general solutions which converge to an equilibrium point of the equation are written in terms of one of the two particular solutions $u_1$ or $u_2$ of the difference equation.\par \vspace{0.7cm} Let $u(t)$ be a solution of (\ref{1.1}), and $w(t) = u(t+1).$ Then (\ref{1.1}) can be written as the system \begin{equation} \begin{pmatrix} u(t+1) \\ w(t+1) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ - \beta & - \alpha \end{pmatrix} \begin{pmatrix} u(t) \\ w(t) \end{pmatrix} + \begin{pmatrix} 0 \\ g(u(t), w(t)) \end{pmatrix}. \label{3.1} \end{equation} Let $\lambda_1, \lambda_2$ be the roots of the equation (\ref{2.1}) and $P = \begin{pmatrix} 1 & 1 \\ \lambda_1 & \lambda_2 \end{pmatrix}.$ Put \begin{equation} \begin{pmatrix} u \\ w \end{pmatrix} = P \begin{pmatrix} x \\ y \end{pmatrix}. \label{3.2} \end{equation} Since $\lambda_1\neq\lambda_2$, we can transform the coefficient matrix of the linear part of (\ref{3.1}) into diagonal form, i.e., (\ref{3.1}) is transformed to the following system with respect to $x, y:$ \begin{equation} \left\{ \begin{aligned} x(t + 1) &= \lambda_1 x(t) + \sum_{i+j \geq 2} c_{ij} x(t)^i y(t)^j = X(x(t), y(t)), \\ y(t + 1) &= \lambda_2 y(t) + \sum_{i+j \geq 2} d_{ij} x(t)^i y(t)^j = Y(x(t), y(t)).
\end{aligned} \right. \label{3.3} \end{equation} On the other hand, let $Q = \begin{pmatrix} 1 & 1 \\ \lambda_2 & \lambda_1 \end{pmatrix}.$ Put \begin{equation} \begin{pmatrix} u \\ w \end{pmatrix} = Q \begin{pmatrix} x \\ y \end{pmatrix}. \label{3.4} \end{equation} Then (\ref{3.1}) is transformed to the system with respect to $x, y:$ \begin{equation} \left\{ \begin{aligned} x(t + 1) &= \lambda_2 x(t) + \sum_{i+j \geq 2} c'_{ij} x(t)^i y(t)^j = X'(x(t), y(t)), \\ y(t + 1) &= \lambda_1 y(t) + \sum_{i+j \geq 2} d'_{ij} x(t)^i y(t)^j = Y'(x(t), y(t)). \end{aligned} \right. \label{3.5} \end{equation} We will show the following Theorem 4. \vspace{0.7cm} {\bf Theorem 4.} \em Let $\lambda_1,\, \lambda_2$ be the roots of the characteristic equation of (\ref{1.1}) such that $|\lambda_1|<1< |\lambda_2|$. Suppose that $u_1(t)$ and $u_2(t)$ are solutions of (\ref{1.1}) which have the expansions $u_1(t)=\sum^{\infty}_{n=1} a_{1,n}\lambda_1^{nt}$ in $S(\eta_1)=\{t;|\lambda_1^t|<\eta_1 \}$, $u_2(t)=\sum^{\infty}_{n=1} a_{2,n}\lambda_2^{nt}$ in $S(\eta_2)=\{t;|\lambda_2^t|<\eta_2 \}$ with some constants $\eta_1,\eta_2>0$. Further suppose that $\Upsilon(t)$ is an analytic solution of (\ref{1.1}) such that $\Upsilon(t+n)\to 0$ as $n\to +\infty$ or as $n\to -\infty$, uniformly on any compact subset of the $t$-plane. If the solution $\Upsilon$ of (\ref{1.1}) satisfies $\Upsilon(t+(-1)^{m-1}n)\to 0$, $(m=1,2)$ as $n\to +\infty$, then there is a periodic entire function $\pi_m(t),(\pi_m(t+1)=\pi_m(t))$, such that \begin{align} \Upsilon(t)&=\frac{1}{\lambda_{m+1}-\lambda_m}( \lambda_{m+1}\sum^{\infty}_{n=1} a_{m,n}\lambda_m^{n(t+\pi_m(t))}-\sum^{\infty}_{n=1} a_{m,n}\lambda_m^{n(t+\pi_m(t)+1)})\notag\\ &\qquad+\Psi_m\Biggl( \frac{1}{\lambda_{m+1}-\lambda_m}( \lambda_{m+1}\sum^{\infty}_{n=1} a_{m,n}\lambda_m^{n(t+\pi_m(t))}-\sum^{\infty}_{n=1} a_{m,n}\lambda_m^{n(t+\pi_m(t)+1)}) \Biggr), \label{3.6} \end{align} in $S(\eta_m)$, with the convention that $\lambda_3$ means $\lambda_1$.
Further we have $\frac{\Upsilon(t+1+(-1)^{m-1}n)}{\Upsilon(t+(-1)^{m-1}n)}\to \lambda_m$, ($m=1,2$), as $n\to +\infty$. When $m=1$, $\Psi_1$ is a solution of \begin{equation} \Psi(X(x,\Psi(x)))=Y(x,\Psi(x)),\label{3.7} \end{equation} and when $m=2$, $\Psi_2$ is a solution of \begin{equation} \Psi(X'(x,\Psi(x)))=Y'(x,\Psi(x)),\label{3.8} \end{equation} in which $X$, $Y$ are defined in (\ref{3.3}), and $X'$, $Y'$ are defined in (\ref{3.5}). \par Conversely, a function $\Upsilon(t)$ which is represented as in (\ref{3.6}) in $S(\eta_m)$ for some $\eta_m>0$, where $\pi_m(t)$ is a periodic function with period one, is a solution of (\ref{1.1}) such that $\Upsilon(t+(-1)^{m-1}n)\to 0$ and $\frac{\Upsilon(t+1+(-1)^{m-1}n)}{\Upsilon(t+(-1)^{m-1}n)}\to \lambda_m$ as $n \to +\infty$ with $m=1,2$. \par \em\par \medskip \vspace{0.7cm} {\bf Proof.}\quad First we prove the case $m=1$.\par Let $u(t)$ be the solution of (\ref{1.1}) constructed in Section 2, and suppose that $\Upsilon(t)$ is a solution of (\ref{1.1}) such that $\Upsilon(t+n)\to 0$ as $n\to +\infty$ uniformly on any compact subset of the $t$-plane.\par First we consider the meaning of the functional equation (\ref{3.7}). \par Suppose (\ref{3.3}) admits a solution $(x(t), y(t))$. If $\frac{dx}{dt}\neq 0$, then we can write $t=\psi(x)$ with a function $\psi$ in a neighborhood of $x_0=x(t_0)$, and we can write \begin{equation} y(t)=y(\psi(x))=\Psi(x),\label{3.9} \end{equation} as far as $\frac{dx}{dt}\neq 0$. Then the function $\Psi$ satisfies the functional equation (\ref{3.7}).\par Conversely, assume that a function $\Psi$ is a solution of the functional equation (\ref{3.7}). If the first order difference equation \begin{equation} x(t+1)=X(x(t),\Psi(x(t))),\label{3.10} \end{equation} has a solution $x(t)$, then we put $y(t)=\Psi(x(t))$ and have a solution $(x(t), y(t))$ of (\ref{3.3}).
From \cite{Ya} we see that the first order difference equation (\ref{3.10}) has an analytic solution.\par This relation is the key point of our method.\par \vspace{0.7cm} Put $\omega(t)=\Upsilon(t+1)$ and \begin{equation} \begin{pmatrix}\chi\\ \nu \end{pmatrix} =P^{-1}\begin{pmatrix} \Upsilon\\\omega\end{pmatrix}.\label{3.11} \end{equation} Then we have $\chi(t)=\frac{1}{\lambda_2-\lambda_1}(\lambda_2\Upsilon(t)-\omega(t))$. Since $\Upsilon(t+n)\to 0$ and $\omega(t+n)\to 0$ as $n\to \infty$, we have $\chi(t+n) \to 0$ as $n\to \infty$.\par Let $u(t)$ be the solution given in Section 2, $$u(t)=\sum^{\infty}_{n=1}a_{1,n} \lambda^{nt}\qquad\qquad (\lambda=\lambda_1).$$ Then, by (\ref{3.11}), since $\lambda_1 = \lambda$ and $u(t)$ is a function of $\lambda^t,$ we can write \begin{equation} x(t)=\frac{1}{\lambda_2-\lambda}(\lambda_2 u(t)-u(t+1)) =\frac{1}{\lambda_2-\lambda} \Biggl ( \sum_{n=1}^{\infty}(\lambda_2a_{1,n}-a_{1,n}\lambda^n)(\lambda^t)^n \Biggr ) =\Tilde{U}(\lambda^t),\label{3.12} \end{equation} where $\zeta = {\tilde U}(\tau)$ is a function of $\tau = \lambda^t$ with ${\tilde U}'(0) = a_{1,1} \ne 0$ and ${\tilde U}(0) = 0.$ Since ${\tilde U}(\tau)$ is an open map, for any $\eta_1 > 0$ there is an $\eta_2 > 0$ such that $$ {\tilde U}(\{ |\tau| < \eta_1 \}) \supset \{|\zeta| < \eta_2 \}.
$$ Since $\chi(t+n) \to 0$ as $n \to \infty,$ supposing that $t$ belongs to a compact set $K,$ there is an $n_0 \in {\mathbb N}$ such that for $t' \in K$ $$|\chi(t'+n)|<\eta_2\quad (n\geqq n_0).$$ Thus there is a $\tau'=\lambda^{\sigma}$ such that \begin{equation} \chi(t'+n)=\Tilde{U}(\tau')=\Tilde{U}(\lambda^{\sigma}).\label{3.13} \end{equation} Since $\Tilde{U}'(0)=a_{1,1}\neq 0$, by the inverse function theorem we have a $\Tilde{U}^{-1}$ such that $$\lambda^{\sigma}=\Tilde{U}^{-1}(\chi(t'+n)).$$ Put $t=t'+n$, then $\lambda^{\sigma}=\Tilde{U}^{-1}(\chi(t))$, and we write \begin{equation} \sigma=\log_{\lambda}\Tilde{U}^{-1}(\chi(t))=\ell(t).\label{3.14} \end{equation} \quad When there is a solution $\chi(t)$ of (\ref{3.3}), from (\ref{3.10}), (\ref{3.12}-\ref{3.13}) we have \begin{align} \chi(t+1) &=X(\chi(t),\Psi(\chi(t)) )\notag\\ &=X(\Tilde{U}(\lambda^{\sigma}),\Psi(\Tilde{U}(\lambda^{\sigma})) )\notag\\ &=X(x(\sigma),\Psi(x(\sigma)))\notag\\ &=x(\sigma+1)=\Tilde{U}(\lambda^{\sigma+1}).\notag \end{align} Hence $$\sigma+1=\ell(t+1),\,\,\ell(t)+1=\ell(t+1).$$ If we put $\pi(t) = \ell(t) - t,$ then we obtain $\pi(t+1) = \ell(t+1) - (t+1) = \ell(t) - t = \pi(t),$ and we can write \begin{equation} \ell(t) = t + \pi(t), \label{3.15} \end{equation} where $\pi(t)$ is defined on a compact set $K$ with $\Re[t]$ sufficiently large. Furthermore we can continue $\pi(t)$ analytically to a periodic function with period $1.$ Thus we have $$ \sigma = t + \pi(t).
$$ From (\ref{3.13}) and (\ref{3.12}), $\chi(t)$ can be written as $$ \chi(t)=\Tilde{U}(\lambda^{t+\pi(t)})=x(t+\pi(t)) =\frac{1}{\lambda_2-\lambda_1} (\lambda_2u(t+\pi(t))-u(t+1+\pi(t))).$$ Making use of the equation (\ref{3.11}), we have the following equations, \begin{align} \Upsilon(t)&=\chi(t)+\nu(t)\notag\\ &=\chi(t)+\Psi(\chi(t))\notag\\ &=x(t+\pi(t))+\Psi(x(t+\pi(t)))\notag\\ &=\frac{1}{\lambda_2-\lambda_1}(\lambda_2\sum^{\infty}_{n=1} a_{1,n}\lambda^{n(t+\pi(t))}-\sum^{\infty}_{n=1} a_{1,n}\lambda^{n(t+\pi(t)+1)})\notag\\ &\qquad+\Psi\Biggl( \frac{1}{\lambda_2-\lambda_1}(\lambda_2 \sum^{\infty}_{n=1} a_{1,n}\lambda^{n(t+\pi(t))}-\sum^{\infty}_{n=1} a_{1,n}\lambda^{n(t+\pi(t)+1)}) \Biggr),\notag \end{align} where $\pi(t)$ is defined for $t\in\cup_{n\in \Bbb{Z}}(K + n)$ with a compact set $K.$ Since $K$ is arbitrary, we can continue $\pi(t)$ analytically to a periodic entire function with period $1,$ and $\Psi$ is a solution of (\ref{3.7}). By making use of the Theorem in \cite{Suzu1}, (\cite{Suzu3}), $\Psi$ is obtained, in a neighborhood of $x=0$, in the form \begin{equation} \Psi(x)=\sum_{n=2}^{\infty} \gamma_n x^n,\label{3.16} \end{equation} that is, the expansion begins with $x^2$. From $\chi(t+1)=X(\chi(t),\Psi(\chi(t)))$, we have $$\chi(t+1)=\lambda_1\chi(t)+\sum_{i+j\geqq 2}c_{ij}\chi(t)^i \Psi(\chi(t))^j,$$ and $$\frac{\chi(t+1)}{\chi(t)}=\lambda_1+\sum_{i+j\geqq 2}c_{ij} \chi(t)^{i-1}\Psi(\chi(t))^j.
$$ Since $\chi(t+n)\to 0$ as $n\to +\infty$, by (\ref{3.16}) we have $$\frac{\Psi(\chi(t+n))}{\chi(t+n)}\to\, 0,\, \frac{\chi(t+1+n)}{\chi(t+n)}\to \, \lambda_1,\quad \text{ as} \quad n\to +\infty.$$ From $\Upsilon(t)=\chi(t)+\Psi(\chi(t))$, we have \begin{align} \frac{\Upsilon(t+n+1)}{\Upsilon(t+n)}= \frac{\chi(t+n+1)+\Psi(\chi(t+n+1))}{\chi(t+n)+\Psi(\chi(t+n))}&= \frac{\frac{\chi(t+n+1)}{\chi(t+n)}+ \frac{\Psi(\chi(t+n+1))}{\chi(t+n+1)}\cdot \frac{\chi(t+n+1)}{\chi(t+n)} } {1+\frac{\Psi(\chi(t+n))}{\chi(t+n)}}\notag\\ &\to\, \lambda_1,\,\, \text{as} \,\, n\,\to\, +\infty.\notag \end{align} Conversely, if we define $\Upsilon(t)$ by (\ref{3.6}), where $\pi$ is an arbitrary periodic entire function and $\Psi$ is a solution of (\ref{3.7}), then $\Upsilon(t)$ is a solution of (\ref{1.1}) such that $\Upsilon(t+n)\to 0$ as $n\to \,+\infty$. Furthermore we then have a solution $\chi$ of (\ref{3.3}) such that $$\Upsilon(t)=\chi(t)+\Psi(\chi(t)),$$ where $\chi(t+n)\to 0$ as $n\to +\infty$. Hence we have $\frac{\Upsilon(t+1+n)}{\Upsilon(t+n)}\to \lambda_1$ as $n\to +\infty$. \par \vspace{0.5cm} Similarly to the proof of the above case, we can prove the case $m=2$, making use of the equations in (\ref{3.4}) and (\ref{3.5}). $\square$ \par \vspace{0.7cm}
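The transformations (\ref{3.2}) and (\ref{3.4}) rest on the fact that the columns $(1,\lambda_i)^T$ of $P$ are eigenvectors of the companion matrix of the linear part of (\ref{3.1}); this is quick to confirm numerically (a sketch with illustrative roots of our choosing, not taken from the paper):

```python
# Roots with |lam1| < 1 < |lam2| (illustrative values); D(x) = x^2 + alpha x + beta.
lam1, lam2 = 0.5, 2.0
alpha, beta = -(lam1 + lam2), lam1 * lam2

A = [[0.0, 1.0], [-beta, -alpha]]   # companion matrix of the linear part of (3.1)
P = [[1.0, 1.0], [lam1, lam2]]
det = lam2 - lam1
Pinv = [[lam2 / det, -1.0 / det], [-lam1 / det, 1.0 / det]]

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# A (1, lam)^T = (lam, -beta - alpha*lam)^T = lam (1, lam)^T whenever D(lam) = 0,
# so P^{-1} A P is the diagonal matrix diag(lam1, lam2).
Dmat = matmul(Pinv, matmul(A, P))
print(Dmat)  # ≈ [[0.5, 0.0], [0.0, 2.0]]
```

The same computation with the columns swapped gives the matrix $Q$ and the system (\ref{3.5}), with the roles of $\lambda_1$ and $\lambda_2$ interchanged.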
\section{Introduction}\label{sec:intro} The lattice QCD (LQCD\xspace) community has traditionally been an early adopter of new computing and network architectures. This typically requires major efforts in porting simulation code or even communication libraries. The Regensburg lattice group (RQCD) has been involved in such efforts, as well as supercomputer development, for more than a decade. While the first computer in the QPACE series \cite{Goldrian:2008vlh, Baier:2009yq} was based on IBM's Cell processor and an FPGA-based custom interconnect, the subsequent machines use Intel's Xeon Phi series with standard interconnects (see Sec.~\ref{subsec:overview}). To satisfy the increasing demands of the RQCD physics program we use a state-of-the-art method, \mbox{DD-$\alpha$AMG}\xspace \cite{Frommer:2013fsa}, to solve the discretized form of the Dirac equation. The high-performance implementation of this solver on \qp{2} is described in \cite{Heybrock:2014iga, Heybrock:lat15, Richtmann:2016kcq, Georg:2017diz}. The present contribution focuses on the software efforts we made to efficiently run this implementation on \qp{3}. This paper is structured as follows. In Sec.~\ref{sec:qpace3} we give an overview of \qp3 and highlight the differences to \qp2 in terms of processor and network. We discuss the network technology in some detail because it has changed rather drastically. In Sec.~\ref{sec:portingourimplementation} we describe how our solver and our communication library were adapted to the new technologies. In Sec.~\ref{sec:performancefigures} we present single-node and multi-node benchmarks of the solver on \qp3 and compare the results with numbers obtained on \qp2. In Sec.~\ref{sec:conclusions} we conclude and give an outlook on future work.
\section{\qp{3}}\label{sec:qpace3} \subsection{Overview}\label{subsec:overview} While \qp{2} \cite{Arts:2015jia} is based on the Knights Corner (KNC\xspace) version of the Intel Xeon Phi processor series and an FDR InfiniBand\xspace network, its successor \qp{3} utilizes the current Xeon Phi processor, Knights Landing (KNL\xspace), and Intel's new Omni-Path\xspace fabric. \qp{3} was installed in two phases. Phase 1 consists of 352 nodes, each equipped with a 64-core Xeon Phi 7210 running at a clock frequency of \SI{1.3}{\giga\hertz}. Each core can run up to 4 hardware threads, giving a total of 256 threads per chip. The KNLs\xspace have \SI{16}{\giga\byte} of on-package high-bandwidth memory, denoted MCDRAM\xspace by Intel, as well as \SI{48}{\giga\byte} of DDR4\xspace memory. The Omni-Path\xspace network is arranged in a 2:1 blocking-tree topology, where we use edge switches exclusively. This system was ranked 5th in the November 2016 issue of the Green 500 list and 18th in June 2017, both times being the most energy-efficient KNL\xspace system on the list. Phase 2 consists of 320 nodes with almost the same configuration. The only differences from Phase 1 are twice the amount of DDR4\xspace memory (\SI{96}{\giga\byte} instead of \num{48}) and a blocking factor of 5:1. Given a fixed budget, these choices were made to be able to efficiently run ensemble-generation jobs that require strong scaling to many nodes on Phase 1, as well as weak-scaling analysis jobs that require more memory per node on Phase 2. All data shown in these proceedings were obtained on \qp3 (or on \qp2 when comparisons are made).
\subsection{KNC\xspace vs.\ KNL\xspace hardware comparison}\label{sec:knlhardwarefeatures} While the KNL\xspace is quite similar to the KNC\xspace from the point of view of a software developer, with the exception that the KNL\xspace is now self-bootable,\footnote{A PCIe card was also planned but never reached the mass market.} the hardware has been changed quite significantly. Starting at the innermost level, the first improvement is the addition of a second vector processing unit (VPU\xspace), which enables a KNL\xspace core to issue twice as many floating-point operations per clock. Another improvement is that these cores are now able to execute instructions out of order, which reduces stalling penalties when cache misses occur. Furthermore, a KNC\xspace core could issue instructions from a given thread only every other cycle, which led to the need to have at least two threads running on the same core to be able to issue instructions in each cycle. This restriction is no longer present on the KNL\xspace. Concerning the cache structure, the per-core size of the L1 and L2 caches stays constant at \SI{32}{\kilo\byte} and \SI{512}{\kilo\byte}, respectively. However, with the KNL\xspace, Intel introduces the concept of a tile, which bundles two cores together sharing \SI{1}{\mega\byte} of L2 cache. In contrast to standard Xeon processors, there is no shared L3 cache. However, Intel makes up for that by adding an on-package high-bandwidth memory (MCDRAM\xspace) with a capacity of \SI{16}{\giga\byte} and a bandwidth of about \SI{420}{\giga\byte\per\second}, in addition to a standard DDR4\xspace memory interface with 6 channels and a bandwidth of about \SI{80}{\giga\byte\per\second} (both numbers are for the KNL\xspace 7210, to be compared with about \SI{160}{\giga\byte\per\second} for the KNC\xspace 7120). The MCDRAM\xspace is probably the most significant new feature of the KNL\xspace. There are different usage models for it, called memory modes. 
It can either be used as a large L3 cache (Cache mode), as a directly mapped NUMA\xspace node yielding an extra \SI{16}{\giga\byte} of memory in addition to the DDR4\xspace memory (Flat mode), or a combination of both, called Hybrid mode. The tiles, the MCDRAM\xspace and DDR4\xspace memory controllers, and the distributed tag directory are connected in a two-dimensional mesh, in contrast to the KNC\xspace's ring bus. According to Intel, this yields higher bandwidth and lower latency between the cores. This feature is by now also incorporated in the Xeon server architecture. The 2D mesh increases the flexibility of configuration options of the KNL\xspace even further with so-called cluster modes. It enables the chip to be used either as is (All-to-All) or divided into two/four equal parts which are then either software transparent (Hemisphere/Quadrant) or exposed to the operating system as separate NUMA\xspace domains (SNC2/4). From first to last, these modes increase the affinity between tile, distributed tag directory, and memory, thus yielding lower latency and higher bandwidth. \subsection{InfiniBand\xspace vs.\ Omni-Path\xspace hardware comparison}\label{subsec:opavsib} \qp3 utilizes the Omni-Path\xspace interconnect, replacing the previously used InfiniBand\xspace, for communication in multi-node jobs and access to the shared network storage. As shown in previous work \cite{Georg:2017diz} optimization for a particular network may yield significant improvements compared to relying on plain MPI. Therefore our \mbox{DD-$\alpha$AMG}\xspace implementation uses a custom communication library, pMR\xspace. Previously, pMR\xspace only supported InfiniBand\xspace and local inter-process communication, leveraging Linux CMA. The idea now is to add support for Omni-Path\xspace to pMR\xspace. To do that properly, it is crucial to understand the hardware, especially the differences to the well-known InfiniBand\xspace hardware. 
Apart from many differences that do not affect us, there are two main differences between these competing technologies that we need to take a closer look at: connection-oriented vs.\ connectionless communication and interconnect offloading vs.\ onloading. \subsubsection{Connection-oriented vs.\ connectionless communication}\label{subsubsec:connection-oriented-vs-connectionless-communication} Omni-Path\xspace implements connectionless communication only, while InfiniBand\xspace mainly relies on connection-oriented communication for reliable data transfer. The InfiniBand\xspace specification also includes connectionless reliable and unreliable data transfers, but the former is not implemented in any well-known hardware. For simplicity, we ignore unreliable data transfer as it is not used in any communication pattern of interest to us. Connection-oriented communication requires one connection for each pair of processes that communicate with each other. Each connection consumes a certain amount of host memory, and the total memory utilization scales linearly with the number of connections. The InfiniBand\xspace Host Channel Adapter (HCA\xspace) uses on-chip memory to cache connection-related data, but if the cache is full, it has to exchange data with the host's main memory via PCIe, resulting in a performance penalty. It is therefore sensible to minimize the number of connections. In a connectionless approach it is only necessary to set up communication endpoints and exchange address information with all other peers once (with MPI, typically during the initialization phase). This allows for scaling of applications to an arbitrary number of processes without any noticeable increase in the resources required per process. The preceding discussion implies that connectionless communication should be superior. For communication patterns that rely heavily on all-to-all communication, this indeed seems to be the case \cite{Mamidala:2007:UCV:1229428.1229437}.
However, there are two reasons why connection-oriented communication can still obtain similar or better performance. First, many stencil-type applications, including LQCD\xspace, only require a limited subset of communication patterns for performance-relevant parts, in particular nearest-neighbor halo exchanges and global reductions. The former requires a maximum of $2d$ connections per process in a $d$-dimensional theory. The latter is often implemented using the recursive-doubling algorithm, which in turn consists of nearest-neighbor exchanges in a $\log_2(p)$-dimensional grid, where $p$ is the number of processes. Hence, it only adds $\log_2(p)$ connections, some of which might even be identical to the connections already set up for halo exchanges. Assuming a 256-node job with one process per node, no more than $8 + \log_2(256) = 16 $ connections are required to be set up for those two communication patterns. For other non performance-relevant parts, e.g., parallel input/output to the shared network storage, connections can be set up dynamically without any noticeable impact on wall-clock time. Second, a number of hard- and software features have been developed to reduce the number of connections and thus alleviate the main drawback of connection-oriented communication. In software, many MPI implementations allow for changing from a static to a dynamic connection setup. In hardware, new connection modes have been added, such as the Extended Reliable Connected (XRC) Transport Service that enables processes running on the same node to share certain connections \cite{koop:2008} and the Dynamically Connected (DC) Transport service that hands over the connection management to the HCA\xspace, which then sets up or tears down connections dynamically as required \cite{Subramoni:2014:DML:2769884.2769903}. 
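The connection-count estimate above can be restated in a few lines of Python. This is purely illustrative: it encodes the $2d$ nearest-neighbor bound plus the $\log_2(p)$ recursive-doubling bound quoted in the text, nothing more.

```python
import math

def rc_connections(dims, procs):
    """Upper bound on reliable-connected (RC) connections per process:
    2*dims nearest-neighbor halo partners plus log2(procs) partners for a
    recursive-doubling reduction (some of which may coincide with the
    halo-exchange neighbors, so the true count can be lower)."""
    halo = 2 * dims                        # two neighbors per dimension
    reduction = int(math.log2(procs))      # one partner per doubling step
    return halo + reduction

# 256-node job, one process per node, 4-dimensional lattice
print(rc_connections(4, 256))  # → 16
```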
However, if the number of connections is low, as in stencil-type applications, these new modes are not necessary and the traditional reliable connected (RC) mode still yields best performance. \subsubsection{Interconnect offloading vs.\ onloading}\label{interconnect-offloading-vs-onloading} Interconnect offloading/onloading specifies whether network functions are offloaded to the network adapter (InfiniBand\xspace HCA\xspace), or onloaded onto the CPU (Omni-Path\xspace). Onloading interconnect technology tends to be less complex and hence easier to build. However, while the CPU is managing and executing network operations it is not available for other tasks, most importantly computations. Whether this CPU overhead is significant depends on the particular application. Offloading hardware does not block any compute resources and can be beneficial in two ways. First, if an application is able to overlap communication and computation, the CPU can continue to execute computation tasks while the network adapter performs communication. This is true for LQCD\xspace simulations, where many algorithms allow for overlap of computation and halo exchanges. Second, even if no overlap is possible, offloading enables several technologies that can improve performance. One example is RDMA, which reduces the latency involved in non-overlapping halo exchanges. Another example is relevant for the other important communication pattern in LQCD\xspace, i.e., global reductions. These can hardly be overlapped with computation, as the result is either required immediately or is used as a stopping criterion in iterative algorithms. Most InfiniBand\xspace adapters nowadays support offloading collective routines to the HCA\xspace, reducing the involvement of the host CPU in these operations \cite{Graham:2010:CIM:1844765.1845221}. 
For global reductions, this approach is taken even further with the most recent InfiniBand\xspace networks, which now support in-network computing and in-network memory to reduce data movement \cite{Graham:2016:SHA:3018058.3018059}. \section{Porting of simulation code and communication library}\label{sec:portingourimplementation} \subsection{\boldmath\mbox{DD-$\alpha$AMG}\xspace for Xeon Phi} The starting point of the present work is our implementation \cite{Heybrock:lat15,Richtmann:2016kcq} of the \mbox{DD-$\alpha$AMG}\xspace algorithm for \qp{2}. This implementation contains a number of optimizations with respect to the original Wuppertal code (which was neither threaded nor vectorized). In the following we describe these optimizations, most of which are now part of the official \mbox{DD-$\alpha$AMG}\xspace code base \cite{ddalphaamggithubrepo}. We adapted the data layout to be able to make efficient use of the hardware (i.e., the caches and the \SI{512}{\bit} vector registers), performed extensive vectorization using compiler intrinsics, and inserted many software prefetching directives to overcome the limitations of the KNC\xspace. Furthermore, we implemented an MPI-like threading model with fully persistent threads and further enhanced the use of mixed precision by adding support for storing some data structures in half precision, i.e., \SI{16}{\bit} floating-point numbers. The subject of this contribution is the porting of our existing code base to \qp{3}. Reference~\cite{Heybrock:2014iga}, which was prepared in collaboration with Intel engineers and is part of our code base as the fine-grid smoother, states that the port from KNC\xspace to KNL\xspace will require only a modest effort since the instruction set architecture of these two processors is quite similar. Mainly, we need to replace the explicit IMCI intrinsics scattered throughout the code with the corresponding AVX512 intrinsics.
We realize this by introducing a lightweight abstraction layer consisting of small functions. This layer wraps the bare intrinsics, which is mostly straightforward except for the permutation intrinsics. At the time of this writing, no further KNL\xspace-specific optimizations have been performed. An issue of particular interest for our porting efforts is the support for half precision. The KNC\xspace does not support computations in half precision, but is able to do on-the-fly up/down conversions in hardware when loading/storing data from/to memory. We utilize this feature in our code to reduce the working set size, and as a consequence the requirements on cache capacity and memory bandwidth. However, on the KNL\xspace the up/down conversions are no longer part of the instruction set. We attempted to work around this problem by utilizing legacy Intel processor instructions, as depicted in Listing~\ref{lst:half-prec-masked-store}. Unfortunately, this attempt actually degrades the performance, see the benchmarks below. \begin{lstlisting}[float,caption={Wrapper function for a masked store operation from a SIMD register to memory in half precision. The KNL\xspace code is inspired by the up/down conversion in the code generator of the QPhiX library \cite{qphixgithubrepo}.},captionpos=b,label={lst:half-prec-masked-store}] inline void store(half *data, maskF mask, regF &reg) { #if defined(KNC) _mm512_mask_extstore_ps(data, mask, reg, _MM_DOWNCONV_PS_FLOAT16, 0); #elif defined(KNL) _mm256_store_si256( (__m256i *)data, _mm512_cvtps_ph( _mm512_mask_blend_ps( mask, _mm512_cvtph_ps(_mm256_load_si256((__m256i const *)data)), reg), _MM_FROUND_TO_NEAREST_INT)); #endif } \end{lstlisting} In contrast to the KNC\xspace, where only the Intel compiler and its runtime libraries are usable, the KNL\xspace is supported by all three major compiler suites: GCC, Clang, and Intel. 
In Tables~\ref{tab:comparison-single-core-numbers-mr-inversion} and \ref{tab:comparison-single-core-numbers-schwarz-method} we compare the single-core performance of two important code parts, the MR inversion within a domain and the entire Schwarz method, obtained with these compilers.\footnote{Using GCC we were unable to compile the code with half-precision support turned on.} \begin{table}[thb] \begin{center} \begin{tabular}{clrr} Processor & Compiler & GFlop/s single & GFlop/s half \\ \midrule KNC\xspace & Intel 15.0.3 & 8.9 & 10.3 \\ KNL\xspace & GCC 6.2.1 & 8.5 & X \\ KNL\xspace & Clang 4.0.0 & 13.8 & 11.3 \\ KNL\xspace & Intel 17.0.2 & 16.6 & 11.4 \\[-5mm] \end{tabular} \end{center} \caption{Single-core performance for the MR inversion within a domain.\vspace*{-5mm}} \label{tab:comparison-single-core-numbers-mr-inversion} \end{table} \begin{table}[thb] \begin{center} \begin{tabular}{clrr} Processor & Compiler & GFlop/s single & GFlop/s half \\ \midrule KNC\xspace & Intel 15.0.3 & 7.4 & 9.0 \\ KNL\xspace & GCC 6.2.1 & 5.6 & X \\ KNL\xspace & Clang 4.0.0 & 9.2 & 7.6 \\ KNL\xspace & Intel 17.0.2 & 9.7 & 7.6 \\[-5mm] \end{tabular} \end{center} \caption{Single-core performance for the Schwarz method.} \label{tab:comparison-single-core-numbers-schwarz-method} \end{table} \noindent We see that the Intel compiler yields the best performance, closely followed by Clang. On KNL\xspace, half precision deteriorates the performance, rather than improving it, in contrast to KNC\xspace. This may be a problem with our implementation and will be investigated further in the future. The KNC\xspace has no L1 hardware prefetcher, and the performance of the L2 prefetcher is not optimal. Therefore software prefetching of data to the caches is essential for achieving good performance on KNC\xspace. Although the KNL\xspace now features an L1 hardware prefetcher, we still use manual software prefetching since our code base already contains these directives. 
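The half-precision storage scheme discussed above, i.e., computing in single precision while storing data in \SI{16}{\bit} floats, can be illustrated without intrinsics. The following sketch uses the fact that Python's standard \texttt{struct} module supports the same IEEE 754 binary16 format; it only demonstrates the halved storage footprint and the bounded rounding error, and is unrelated to the actual SIMD code path in Listing~\ref{lst:half-prec-masked-store}.

```python
import struct

# Single-precision working values.
values = [0.0, 0.25, 1.0 / 3.0, 1.0]

# "Down-convert" on store: pack into IEEE 754 half precision (2 bytes each),
# then "up-convert" on load: unpack back to full-width floats for computation.
stored = struct.pack("<4e", *values)
loaded = struct.unpack("<4e", stored)

print(len(stored))  # 8 bytes instead of 16: half the memory traffic
print(max(abs(a - b) for a, b in zip(values, loaded)))  # small rounding error
```

Exactly representable values (0.0, 0.25, 1.0) survive the round trip unchanged; the others pick up a relative error of order $2^{-11}$, which is the trade-off exploited to reduce the working-set size.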
\subsection{pMR\xspace communication library}\label{subsec:opasoftware} In Sec.~\ref{subsec:opavsib} we have discussed the hardware differences of InfiniBand\xspace and Omni-Path\xspace. In this section we take a look at the Omni-Path\xspace software stack to identify necessary modifications of our own communication library. There are currently two APIs available to work with Omni-Path\xspace. The first API, which MPI implementors have been encouraged to rely on, is defined by libfabric, which is a core component of OpenFabrics Interfaces (OFI), a common framework for various interconnects. This API is supposed to be hardware agnostic and yield good performance on any supported hardware. However, this is currently not the case. Although one can, in theory, use the same user code on top of this API on, e.g., InfiniBand\xspace and Omni-Path\xspace hardware, this leads to severe performance degradation. Therefore, in practice the user code needs to be adapted to the hardware to achieve good performance. The second API is Performance Scaled Messaging 2 (PSM2\xspace), which is used by libfabric under the hood for Omni-Path\xspace. Due to the current limitations of libfabric just described, it is sensible to use PSM2\xspace directly to avoid performance degradation due to unsuitable abstraction. This is in fact what many MPI implementations still do, and we also follow this path for pMR\xspace. The PSM2\xspace API supports a tag-matched two-sided messaging model that is very similar to the MPI two-sided messaging API. In contrast to InfiniBand\xspace it is not necessary to register any memory region for use as send or receive buffers. Although PSM2\xspace is the lowest-level Omni-Path\xspace API, it not only supports Omni-Path\xspace but also includes a self (for intra-process communication) and a shared-memory (for local inter-process communication) provider. 
Furthermore, PSM2\xspace over Omni-Path\xspace supports two send methods, programmed input/output (PIO) and Direct Memory Access (DMA), and two receive methods, Token ID (TID, commonly known as rendezvous) and eager. These methods are mainly chosen by globally set thresholds and can hardly be influenced by the user code. The send method has an impact on the CPU utilization. As for the receive methods, the eager protocol can reduce network latency at the cost of an additional memory copy on the receiving side. \begin{figure}[t] \begin{center} \includegraphics{ping-ping-benchmark} \end{center} \caption{Ping-ping benchmark of PSM2\xspace send and receive methods as a function of the KNL\xspace memory modes, cache and flat. For flat mode, the application was bound once to DDR4\xspace and once to MCDRAM\xspace. The combination PIO/TID is not available.} \label{fig:hfimethods} \end{figure} In Fig.~\ref{fig:hfimethods} we compare the performance of all available combinations of PSM2\xspace send and receive methods as a function of the KNL\xspace memory modes. For all three combinations there are only marginal differences between MCDRAM\xspace and DDR4\xspace, which presumably are due to the slightly lower latency of DDR4\xspace. For PIO/eager and DMA/TID there is no significant dependence on the memory mode, while for DMA/eager the performance of the additional memory copy depends on the memory mode, with cache being worse than flat. Note that this synthetic benchmark is only useful to identify the best memory mode. It cannot identify the best combination of send/receive methods because it neglects the CPU utilization. The global thresholds at which PSM2\xspace switches between different methods should be tuned using benchmarks of the actual application. \begin{figure}[t] \begin{center} \includegraphics{halo-exchange-benchmark} \end{center} \caption{Halo-exchange benchmark running on 16 KNL\xspace nodes.
Inter-node message size per process was set to \SI{512}{\kibi\byte} / (number of processes per node) to have a constant amount of data per node transferred via Omni-Path\xspace. For the \SI{0}{\kibi\byte} intra-node message size, the processes were synchronized to ensure that all processes are communicating at the same time and not sequentially. The \SI{16}{\kibi\byte} intra-node message size was chosen to simulate real-world applications.} \label{fig:hfiprocs} \end{figure} Since the PSM2\xspace API is very similar to MPI it is easy to port existing MPI software to run directly on top of PSM2\xspace. However, there is a limitation in PSM2\xspace which potentially impacts performance: PSM2\xspace is limited to open only one endpoint per process,\footnote{After completion of the work presented here, which is based on Omni-Path\xspace Fabric Software 10.3, this limitation has been removed in version 10.5. See Sec.~\ref{sec:conclusions} for future opportunities utilizing the updated software.} but one endpoint is not sufficient to achieve full bandwidth utilization (see Fig.~\ref{fig:hfiprocs} below). For non-threaded MPI applications this is obviously not an issue since there are many processes, and therefore many endpoints, per node. However, all of our software is based on hybrid MPI + OpenMP \cite{Heybrock:2014iga,Heybrock:lat15}. To obtain full bandwidth we need to open more than one endpoint, and thus we need more than one process per node. This in turn adds inter-process communication overhead within every node. This intra-node overhead is not negligible but can be reduced by utilizing fully threaded communication. In Fig.~\ref{fig:hfiprocs} we show the effective Omni-Path\xspace bandwidth per KNL\xspace depending on the number of processes per KNL\xspace. The effective bandwidth increases with the number of processes (= endpoints), and we need at least 16 processes per KNL\xspace to get close to peak bandwidth. 
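The endpoint scaling just described can be captured in a toy model. The per-endpoint rate below is an assumed number, chosen only so that roughly 16 endpoints saturate the \SI{100}{\giga\bit\per\second} link as observed in Fig.~\ref{fig:hfiprocs}; it is not a measured value.

```python
def effective_bandwidth(endpoints, per_endpoint_gbps=6.25, link_gbps=100.0):
    """Toy model: aggregate inter-node bandwidth grows with the number of
    PSM2 endpoints (= processes per node) until the Omni-Path link rate
    is reached. All rates in Gbit/s; per-endpoint rate is an assumption."""
    return min(endpoints * per_endpoint_gbps, link_gbps)

for n in (1, 2, 4, 8, 16, 32):
    print(n, effective_bandwidth(n))
```

The model reproduces the qualitative picture: a single endpoint utilizes only a small fraction of the link, and adding endpoints beyond the saturation point gains nothing while still incurring intra-node overhead.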
Apart from the performance issues just discussed, implementation issues arise as well. We want to use PSM2\xspace directly for data transfer in performance-critical parts but still rely on MPI for all other parts. Because of the limitation to one endpoint per process, either our communication library pMR\xspace or the MPI implementation can open the endpoint, but not both. The other party has to look up the existing endpoint, and access to the endpoint has to be managed. To circumvent this issue we have introduced a new library between PSM2\xspace and the upper layers (i.e., pMR\xspace and MPI) which is responsible for opening the endpoint and managing access to it. This library is injected using the preloading feature of the dynamic linker. \section{Performance figures for \boldmath \mbox{DD-$\alpha$AMG}\xspace}\label{sec:performancefigures} \subsection{On-chip strong scaling}\label{subsec:onchipstrongscaling} We first investigate the on-chip scaling behavior of \mbox{DD-$\alpha$AMG}\xspace, i.e., the scaling of the wall-clock time for a single solve with the number of cores utilized on a single Xeon Phi. Both on KNC\xspace and KNL\xspace we use the standard hybrid approach with one process per chip and threads on all cores (in our case, 4 threads per core). All results are for a $16^3\times32$ lattice, which fits in the \SI{16}{\giga\byte} corresponding to the total memory of a KNC\xspace or the MCDRAM\xspace of a KNL\xspace. Before discussing the results we remind the reader of the relevant peak performance figures. The ratio of the peak floating-point performance of KNL\xspace and KNC\xspace is 2.2. The memory bandwidth is about \SI{420}{\giga\byte\per\second} for KNL\xspace with MCDRAM\xspace, \SI{80}{\giga\byte\per\second} for KNL\xspace with DDR4\xspace, and \SI{160}{\giga\byte\per\second} for KNC\xspace, respectively, i.e., the ratio of KNL\xspace with MCDRAM\xspace to KNC\xspace is about 2.6. 
Our results are depicted in Fig.~\ref{fig:on-chip-scaling-speedup-vs-cores}, where all numbers are normalized to the value of a single KNC\xspace core. We first notice that on the KNL\xspace there is no significant difference between cache mode and flat mode from MCDRAM\xspace, and therefore we do not consider cache mode further. The remaining results should be interpreted with care. Note that we are benchmarking a complex code and not just a simple kernel. The \mbox{DD-$\alpha$AMG}\xspace code contains many parts, some of which are memory-bandwidth bound, while others are compute bound. Microbenchmarks have shown that the total memory bandwidth of both KNC\xspace and KNL\xspace scales roughly with the number of cores utilized. (For KNL\xspace with DDR4\xspace this statement only holds for low core count.) Therefore, for both memory-bandwidth-bound and compute-bound code, the ideal scaling would be linear in the number of cores. However, once more and more cores are utilized, the overhead for synchronization between threads leads to a flattening of the speedup curve. This is indeed what we observe qualitatively in all cases. \begin{figure}[thb] \begin{center} \includegraphics{onchip-scaling} \end{center} \caption{On-chip strong scaling of the \mbox{DD-$\alpha$AMG}\xspace solver for a $16^3\times32$ lattice on KNC\xspace and on KNL\xspace in different memory modes.} \label{fig:on-chip-scaling-speedup-vs-cores} \end{figure} The case of KNL\xspace with DDR4\xspace deserves special attention. For low core count the memory bandwidth per core is the same for MCDRAM\xspace and DDR4\xspace, and therefore the performance is the same in this case. Since both the compute power and the memory bandwidth per core are higher than on the KNC\xspace, the performance is also higher than on the KNC\xspace.
At $10\sim15$ cores we are starting to approach the sustainable DDR4\xspace bandwidth of about \SI{80}{\giga\byte\per\second} (measured with the STREAM benchmark), which explains why the red curve flattens much earlier than the green curve. At maximum core count, KNL\xspace with DDR4\xspace and KNC\xspace achieve about the same performance. We regard this as an interesting coincidence. For our particular code, it seems that the two competing effects of higher compute power and lower memory bandwidth, which affect different code parts in different ways, just compensate each other in terms of the total solve time. Finally, the maximum performance on KNL\xspace with MCDRAM\xspace is a factor of 2.1 higher than on KNC\xspace, which is roughly consistent with the factors of 2.2 or 2.6 based on peak performance and memory bandwidth, respectively. \subsection{Multi-node benchmarks}\label{sec:solverruntimemultinode} Having evaluated Omni-Path\xspace using synthetic benchmarks, we now move to real-world benchmarks for our \mbox{DD-$\alpha$AMG}\xspace implementation. In Fig.~\ref{fig:wmgprocs} we study the dependence of the performance on the number of processes per KNL\xspace node. Depending on the number of processes per node, different cluster modes are chosen: quadrant mode for a single process, SNC2 for two processes, and SNC4 for four or more processes. The results can be summarized as follows: single-node performance is best with a single process, while multi-node performance can benefit from using four or more processes per node. The former is as expected, and the latter is consistent with the synthetic benchmark in Fig.~\ref{fig:hfiprocs}. As explained in Sec.~\ref{subsec:opasoftware}, there are two competing effects: using more processes per KNL\xspace makes more endpoints available and thus increases bandwidth utilization, but the resulting intra-node overhead reduces this effect for a low number of KNLs\xspace.
\begin{figure}[t] \begin{center} \includegraphics{processes-per-chip-multi-knl} \end{center} \caption{\mbox{DD-$\alpha$AMG}\xspace solver with Omni-Path\xspace on various numbers of KNLs\xspace vs.\ number of processes per KNL\xspace. The lattice sizes are $16^3\times32$ for a single node and $32^3\times96$ for multiple nodes.} \label{fig:wmgprocs} \end{figure} In the second benchmark, see Fig.~\ref{fig:wmgstrongscaling}, we plot the off-chip strong scaling of our \mbox{DD-$\alpha$AMG}\xspace implementation, comparing KNL\xspace and Omni-Path\xspace to KNC\xspace and InfiniBand\xspace. The KNC\xspace benchmarks were run on \qp2, where each KNC\xspace shares its dual-port FDR InfiniBand\xspace adapter with three other KNCs\xspace. Thus, the network bandwidth per KNC\xspace is limited to \SI{28}{\giga\bit\per\second} compared to \SI{100}{\giga\bit\per\second} in the case of KNL\xspace. We first observe that KNL\xspace with DDR4\xspace gives almost the same performance as KNC\xspace, which is consistent with Fig.~\ref{fig:on-chip-scaling-speedup-vs-cores}, where the single-node performance is also the same. This means that the network bandwidth is not the limiting factor for our particular application, since quadrupling the network bandwidth from InfiniBand\xspace to Omni-Path\xspace does not improve the scaling behavior. The real potential of the KNL\xspace can be unleashed by utilizing the MCDRAM\xspace either as cache or exclusively in flat mode. The Omni-Path\xspace performance is slightly worse using cache mode than running exclusively from MCDRAM\xspace, as already indicated by the benchmark in Fig.~\ref{fig:hfimethods}. Running in flat mode, we indeed achieve the expected speedup of about 2.1 over KNC\xspace. \begin{figure}[thb] \begin{center} \includegraphics{offchip-scaling} \end{center} \caption{Off-chip strong scaling of \mbox{DD-$\alpha$AMG}\xspace for a $32^3\times96$ lattice on KNL\xspace with Omni-Path\xspace and on KNC\xspace with InfiniBand\xspace.
The following parameters have been tuned to achieve the best performance: number of processes per KNL\xspace, Omni-Path\xspace threshold values (see Sec.~\ref{subsec:opasoftware}), and mapping of lattice to processes. The parameters of the solver algorithm are identical for all runs.} \label{fig:wmgstrongscaling} \end{figure} So far we have not discussed the performance drops at 6 and 12 KNLs\xspace with DDR4\xspace. These drops can be explained by a non-optimal mapping of the lattice to processes (i.e., MPI ranks) as done by QDP++ \cite{Edwards:2004sx}, which is hard to explain in words but is easily shown in a picture; see Fig.~\ref{fig:distribution}. This can only happen if there is more than one process per node and if the number of nodes contains a prime factor (three in our case) that is not contained in the number of processes per node. The problem can easily be fixed by changing the order in which the lattice is distributed to processes; see Fig.~\ref{fig:distribution} again. This would have to be done either in the QDP++ library or by instructing the process manager to do so.\footnote{Unfortunately, Intel MPI 2017, which was used to perform the benchmarks, contains a bug that prevents us from reversing the order in which MPI ranks are distributed to nodes.} An additional reason for the performance drop is that our communication is not yet fully threaded, e.g., it can happen that an inter-node data transfer is stalled by a blocking intra-node data transfer. This is only the case when using pMR\xspace, as the user is responsible for all threading. In the case of MPI, even if communication is not threaded in an application, the MPI implementation may have some internal threading. \begin{figure}[thb] \begin{center} \includegraphics{data-distribution} \end{center} \caption{Depiction of mapping of lattice to processes with multiple processes per node. Each rectangle represents the data of one process. Rectangles of the same color represent processes running on the same node.
Left: non-optimal default distribution, right: optimal distribution.} \label{fig:distribution} \end{figure} \section{Conclusions and future opportunities}\label{sec:conclusions} The subject of this contribution was the port of our existing code base for \qp{2} to our new machine \qp{3}. We performed a minimal-effort port of the \mbox{DD-$\alpha$AMG}\xspace solver by adapting the code base to the KNL\xspace instruction set architecture and retaining already existing optimizations, but not implementing any new KNL\xspace-specific optimizations. On KNC\xspace we could achieve a significant performance gain using half precision, but on KNL\xspace half precision deteriorates performance rather than improving it, at least with our current implementation. Trying out different compilers, which was not possible on KNC\xspace, we found that the Intel compiler yields the best performance but faces fair competition from Clang, which is thus a valuable open-source alternative. In terms of interconnect hardware, we found that for \mbox{DD-$\alpha$AMG}\xspace the network bandwidth is not a bottleneck. The key performance factors are network latency and message rate. When running on one or a small number of KNLs\xspace, we found that the standard hybrid parallelization approach with one MPI rank per processor and threads on each core still gives the best performance for our code. However, when running on many nodes, the best performance was obtained using multiple MPI ranks per processor, due to limitations of the Omni-Path\xspace software stack used in this work. The total speedup factor going from \qp2 to \qp3 is about 2.1, which we only reach when running from MCDRAM\xspace. Our future strategy thus includes optimizing for flat mode, which means that the entire solver, which is an external library used by Chroma \cite{Edwards:2004sx}, is allocated in MCDRAM\xspace by default.
The corresponding data copying is not an issue at all, since data layout transformations are required at solver entry anyway. The limitation on the number of endpoints per process imposed by the Omni-Path\xspace software stack has been removed in a recent major software update. Hence, we are no longer required to have more than one process per KNL\xspace to achieve full Omni-Path\xspace performance but can use threaded communication and have each communication thread open its own endpoint. Finally, we plan to improve our multi-node scaling behavior by applying domain decomposition to the coarsest grid of the multigrid method, a project that has already been started but is still work in progress.
\section{Motivation} Data analysts in the field of High Energy Physics (HEP) routinely deal with terabyte- and petabyte-scale datasets, but access them as objects persisted in files, rather than in databases. Thus, they miss out on advantages enjoyed by data analysts in other fields, such as automated scale-out, data replication, primary key indexing for faster selections, and term rewriting and query planning of high-level queries. A major reason for this is that HEP data do not fit any of the existing database models, including common non-relational (NoSQL) ones: HEP data are hierarchically nested with arbitrary-length collections within collections, not the rectangular tables served well by SQL. This feature suggests a document store like MongoDB\cite{mongodb}, except that HEP data have regular structure that is sparsely accessed: only a few of the object attributes are needed in each query, which would waste disk access if all attributes of an object are stored contiguously, rather than in ``columns,'' in which all values of a given attribute, across objects, are contiguous on disk. Since HEP objects typically have hundreds of attributes, accessing a few of them in a typed columnar store is orders of magnitude faster than in a schemaless document store. Unlike key-value tables, HEP data must be scanned in bulk; unlike graph databases, HEP data cross-linking is limited to small, disconnected graphs called ``events'' no larger than hundreds of kilobytes. Furthermore, most of the well-known database systems use SQL or an SQL variant as the query language, but even the simplest HEP analysis functions are awkward and possibly hard to optimize as SQL. HEP analysis functions typically iterate over combinations of objects in different subcollections within each event, which would require multiple SQL explodes and joins.
Although the hierarchically nested structure can be described in modern SQL, \vspace{0.15 cm} \input{minted-1.tex} \vspace{0.15 cm} \noindent often the first exploratory action in a HEP analysis is to plot a distribution of the highest particle {\tt pt} {\it per event}. In SQL, one would have to explode the {\tt muons} into a virtual table and aggregate for maximum {\tt pt}, grouped by {\tt eventId}, but the database system should be made aware that the millions of events are small and should each remain local to avoid millions of collations over the network. To search for short-lived particles that might have decayed into one muon and one jet, the analyst would have to iterate over all possible pairs of {\tt muons} and {\tt jets}, computing the mass\footnote{$\sqrt{2 {p_1}^{p_T} {p_2}^{p_T} (\cosh({p_1}^\eta - {p_2}^\eta) - \cos({p_1}^\phi - {p_2}^\phi))}$ for $p_1$ and $p_2$} of each combination, with or without constraining to one candidate per event. Searches for particles decaying into two muons must avoid double-counting unordered muon pairs, etc. In general, exploding subcollections into virtual tables and joining on {\tt eventId} is both syntactically inconvenient for the data analyst and introduces an optimization problem for the database engineer. A short functional or procedural program applied to each event is much more natural than shoehorning the problem into SQL. However, even query language agnostic systems like Apache Drill\cite{drill} make SQL-motivated assumptions about the structure of queries in the query planning and distribution. Apache Spark\cite{spark} drops from efficient SparkSQL\cite{sparksql} processing to a slower mode when the user needs to apply an arbitrary function. This is unnecessary. The features that accelerate scans over tables, namely a columnar data layout, Just-In-Time (JIT) compilation, and avoiding row materialization, can be applied to generic programming languages. 
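For comparison, the muon--jet pair search sketched above is a few lines of straightforward per-event Python. This is a minimal illustration with hypothetical dict-based events and an arbitrary target mass of 90; it is not the paper's benchmark code:

```python
import math

def mass(p1, p2):
    # Invariant mass of two (approximately massless) particles from
    # pt, eta, phi, as in the footnote above.
    return math.sqrt(2.0 * p1["pt"] * p2["pt"] *
                     (math.cosh(p1["eta"] - p2["eta"]) -
                      math.cos(p1["phi"] - p2["phi"])))

def best_candidates(events, target=90.0):
    # For each event, scan every (muon, jet) pair and keep the mass
    # closest to a hypothetical resonance at `target`.
    out = []
    for event in events:
        best = None
        for muon in event["muons"]:
            for jet in event["jets"]:
                m = mass(muon, jet)
                if best is None or abs(m - target) < abs(best - target):
                    best = m
        out.append(best)  # None if the event has no muon-jet pair
    return out
```

The nested loop over pairs, the per-event candidate selection, and the locality of each event are all explicit here, with none of the explode-and-join machinery SQL would require.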
This paper describes one component of a database/fast query system for HEP data, which is in early development. Such a system will involve distributed processing, Hadoop-style data locality, indexing, and data management with columnar, rather than file, granularity. However, this paper focuses only on the execution engine, which performs JIT compilation without object materialization on hierarchically nested data structures stored in a columnar layout. JIT compilation is central to machine learning tools like H2O\cite{h2o} and Theano\cite{theano}, as well as Julia\cite{julia}, a scientific programming language, and ROOT\cite{root}, the analysis framework that is ubiquitous in HEP. However, these tools do not avoid object materialization, even if the data are persisted in a columnar layout, as in the case of ROOT. Most HEP data are currently stored as ROOT files, which represent hierarchically nested objects in columnar form, yet the ROOT framework materializes them as C++ objects before applying analysis functions. We have augmented ROOT\cite{bulkio} to avoid this object materialization and modify the user's analysis function instead. Leaving the data in columnar arrays, we walk over an Abstract Syntax Tree (AST) of the user's analysis function, replacing object references with array element retrievals, then pass the transformed function to a traditional compiler. For the analyst's convenience, we transform functions written in Python and compile the result with Numba\cite{numba}, which allows high-level querying yet produces bytecode comparable to a compiled C function. The techniques described here could be applied to any language, but Python is popular in HEP for data exploration. 
We should also note that the code transformation technique described here is similar to that of Mattis {\it et al}.\cite{columnarobjects}, though we statically transform and compile Python functions, whereas Mattis {\it et al}.\ implemented object proxies in PyPy and let PyPy's tracing JIT compiler dynamically optimize them. This technique can be viewed as a general alternative to object deserialization: when faced with user code that expects objects but the data are in another form, one could either transform the data to fit the code's expectation (deserialization) or transform the code to fit the data. When manipulating code, the data format is only interpreted once at compile-time; when manipulating data, the format is interpreted once per object, with extra memory allocations and copying, so code transformation is preferable when possible. Code transformation for columnar data is becoming a common technique in databases and search engines\cite{searchengine}, and we apply it to HEP data with a view toward building it into a HEP database, comparing its performance to analysis on materialized C++ objects in ROOT. \section{Data Representation} \subsection{PLUR Type System} \label{type-system} We begin by describing the scope of data types we are considering. To simplify the transformations, we restrict this set as much as possible while still being useful. In particular, the data types described here only encode data; they don't determine how data are used, such as which functions can be legally called on them. However, an interpretation layer can be overlaid on this representation without affecting the format or code transformations. The set of possible types is generated by four constructors: \begin{itemize} \item {\bf Primitive:} fixed byte-width booleans, integers, floating point numbers, and characters. Even fixed-size matrices of numbers could be considered primitives--- the important point is that the width is known to the compiler. 
\item {\bf List:} arbitrary-length, ordered collections of other types. Each instance may have a different width, including empty lists, and the list may contain any other type, including nested lists and Lists of Records. All objects in the list must have the same type (homogeneous). \item {\bf Union:} an object that may be one of several types (``sum types'' in type theory). Each instance can have a different type, but its type must be chosen from a predetermined list. This provides more flexibility than class inheritance (e.g.\ {\tt Particles} that may be {\tt Muons} or {\tt Jets}), but less than dynamically typed Python. \item {\bf Record:} containers mapping field names to types (``product types'' in type theory). Each instance must contain all fields, like a class in C++ or Python. \end{itemize} We call this type system PLUR, an acronym of the four constructors. A complete type schema in PLUR is a tree of Lists, Unions, and Records whose leaves are Primitive types. One thing to notice is that this system does not allow for recursively defined types. For instance, one cannot make a {\tt Tree} Record containing {\tt Trees}. Thus, all data structures have a finite maximum depth determined by the schema. In practice, this is not a limitation, as trees and even arbitrary graphs can be (and routinely are) built in HEP by pointing to members of other subcollections with list indices. Another thing to notice is that we have chosen not to require names for Records, as classes in C++ and Python must be named. This is to allow for more flexible type-checking, in which the structure of a Record (a minimum set of field names and types) is sufficient to determine if it can be used in a function. For example, a {\tt mass} function only needs to verify that the two particles it is given have {\tt pt}, {\tt eta}, and {\tt phi} fields, instead of having to introduce type annotations or explicit polymorphism into Python code. 
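To make the four constructors concrete, the following sketch builds a PLUR schema in plain Python. The constructor names mirror the text, but this encoding is illustrative only, not OAMap's actual API:

```python
from collections import namedtuple

# The four PLUR type constructors as simple value types.
Primitive = namedtuple("Primitive", ["dtype"])
List      = namedtuple("List", ["of"])
Union     = namedtuple("Union", ["possibilities"])
Record    = namedtuple("Record", ["fields"])   # fields: dict of name -> type

# Schema for events containing muons, plus a "particles" slot whose
# items may be either muons or jets (a Union of two Records).
muon = Record({"pt":  Primitive("float64"),
               "eta": Primitive("float64"),
               "phi": Primitive("float64")})
jet  = Record({"pt":  Primitive("float64"),
               "eta": Primitive("float64"),
               "phi": Primitive("float64"),
               "btag": Primitive("float64")})
events_schema = List(Record({"muons": List(muon),
                             "particles": List(Union([muon, jet]))}))
```

Note that the schema is a finite tree: Lists, Unions, and Records nest, and every leaf is a Primitive, so the maximum depth is fixed by the schema as described above.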
Stronger type safety can be applied by overlaying this type system with names and adding dispatch rules. For instance, a List$\langle${\tt byte}$\rangle$ can be distinguished from a string of text with a name like {\tt UTF8String}. Functions like {\tt capitalize} would be applicable to strings in a way that they are not applicable to List$\langle${\tt byte}$\rangle$, and functions like {\tt len} could return the number of variable-width Unicode characters, rather than the number of raw bytes. However, these details are not important for the columnar representation: {\tt UTF8Strings} and List$\langle${\tt bytes}$\rangle$ are stored and accessed the same way, so the PLUR type system does not make a distinction. Similarly, unordered collections like sets and key-value mappings are just Lists at the storage level. \subsection{OAMap: Objects to Arrays} Any data that can be described by a PLUR schema can be represented in columnar arrays. This mapping from objects to arrays is analogous to Object Relational Mapping (ORM) of databases, so we call it Object Array Mapping (OAM). HEP data in ROOT files are encoded in an OAM, though ROOT's encoding transforms any object with a C++ type into arrays. Converting the full C++ type system from data transformation rules into code transformation rules would be a large project, so we limit our discussion to the PLUR type system. The data transformation rules we describe below happen to be very similar to ROOT's and are also a subset of Apache Arrow's\cite{arrow-layout}. We call it OAMap, and provide an implementation\cite{oamap} on GitHub. In our tests, we convert ROOT data into OAMap on-the-fly. OAMap does not specify a storage mechanism: any means of storing arrays in a namespace may be used. This could be a Python dict of Numpy arrays, an HDF5 file, a filesystem directory of raw files, or a distributed object store.
Only a PLUR schema is required to interpret the data, and this schema can even be encoded as a naming convention in the names of the arrays, eliminating the need for type metadata. The following set of rules transform an object of type $T$ with name $N$ into a set of named arrays. {\bf If $T$ is a Primitive}, append the primitive value to an array named $N$, creating it if it doesn't yet exist. {\bf If $T$ is a List} with contained type $T'$ and length $\ell$, find an array named $N + \mbox{``-Lo''}$ (list offset). If it does not yet exist, create it with a single element $0$. Then select the last element $e$ from this array and append $\ell + e$. Next, iterate through each item in the List and apply the rule for $T'$ with name $N + \mbox{``-Ld''}$ (list data). {\bf If $T$ is a Union} with possible types $T_1, \ldots, T_n$ and the value has actual type $T_t$ (where $t \in [1, n]$), find or create an array named $N + \mbox{``-Ut''}$ (union tag) and append $t$. Next, follow the rule for name $N + \mbox{``-Ud''} + t$ (union data~$t$) and type $T_t$. {\bf If $T$ is a Record} with field names and field types $(N_1, T_1), \ldots (N_n, T_n)$, follow the rule for each pair $N_f$, $T_f$ (where $f \in [1, n]$), using $N + \mbox{``-R\_''} + N_f$ (record field $N_f$) as a name and $T_f$ as a type. A Record does not generate any arrays to represent its structure (as Lists and Unions do); the connection between fields in an array is entirely encoded in the PLUR schema or naming convention. To ensure that array names can be properly parsed, field names must not contain the character ``-'' (or a different delimiter should be chosen). One must be sure to include empty/trivial arrays for types that were not touched due to missing data (Lists that are all empty at a given level or Union type possibilities that never occur in the data) or make the reading procedure insensitive to missing arrays. 
A simple way to include all arrays is to create them with a first pass over the type schema and only append to them in the pass over data. \subsection{OAMap: Arrays to Objects} The procedure described above losslessly stores the PLUR schema in array names and the object data in arrays. To demonstrate this, we describe an algorithm below that would materialize objects from arrays. As explained in the introduction, we prefer code transformation to object materialization, but it is important to prove that the transformation is indeed lossless. First, select arrays whose names begin with the prefix $N$ and pop the prefix from their names. For each of these arrays $a$, create an index $i_a$ whose initial value is 0. Then recursively apply the following rules. {\bf If only one array exists and its name is the empty string}, then the type is a Primitive. Return the value at index $i_a$ and increment $i_a$ by~1. {\bf If one array name begins with ``-Lo'' and all others begin with ``-Ld''}, then the type is a List. Take $a[i_a + 1] - a[i_a]$ as the length $\ell$ of the List and increment $i_a$ by~1 for the array $a$ that begins with ``-Lo''. Pop the ``-Ld'' from the beginning of all other array names and apply the rule for that set of arrays $\ell$ times to fill the List's contents. {\bf If one array name begins with ``-Ut'', and all others begin with ``-Ud''}, then the type is a Union. Take $a[i_a]$ as the tag $t$ for the datum and increment $i_a$ by~1 for the array $a$ that begins with ``-Ut''. Pop the $\mbox{``-Ud''} + t$ from the beginning of all other array names associated with tag $t$ and apply the rule for that subset. {\bf If all array names begin with ``-R\_''}, then the type is a Record. Partition the set of arrays by field name $N_f$, pop the $\mbox{``-R\_''} + N_f$ from the beginning of the array names, and apply the rule separately for each partition to fill each field. If any other configuration of arrays and names is encountered, the arrays are malformed. 
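The two passes can be sketched together as a round trip in plain Python, with types encoded as hypothetical tagged tuples ({\tt ('P',)} for Primitive, {\tt ('L', inner)} for List, {\tt ('R', fields)} for Record) and Unions omitted for brevity:

```python
def to_arrays(obj, tpe, name, arrays):
    # Objects-to-arrays rules from the previous subsection.
    kind = tpe[0]
    if kind == 'P':                              # Primitive: append the value
        arrays.setdefault(name, []).append(obj)
    elif kind == 'L':                            # List: extend the offset array,
        offsets = arrays.setdefault(name + "-Lo", [0])
        offsets.append(offsets[-1] + len(obj))   # then descend into the contents
        for item in obj:
            to_arrays(item, tpe[1], name + "-Ld", arrays)
    elif kind == 'R':                            # Record: no structure array;
        for fname, ftype in tpe[1].items():      # just descend into each field
            to_arrays(obj[fname], ftype, name + "-R_" + fname, arrays)
    return arrays

def from_arrays(tpe, name, arrays, cursors):
    # Inverse walk: materialize one object, advancing one cursor per array.
    kind = tpe[0]
    if kind == 'P':
        i = cursors.get(name, 0)
        cursors[name] = i + 1
        return arrays[name][i]
    elif kind == 'L':
        oname = name + "-Lo"
        i = cursors.get(oname, 0)
        length = arrays[oname][i + 1] - arrays[oname][i]
        cursors[oname] = i + 1
        return [from_arrays(tpe[1], name + "-Ld", arrays, cursors)
                for _ in range(length)]
    elif kind == 'R':
        return {f: from_arrays(t, name + "-R_" + f, arrays, cursors)
                for f, t in tpe[1].items()}
```

For example, the List$\langle$List$\langle${\tt float}$\rangle\rangle$ value {\tt [[1.0, 2.0], [], [3.0]]} named ``x'' produces {\tt x-Lo = [0, 3]}, {\tt x-Ld-Lo = [0, 2, 2, 3]}, and {\tt x-Ld-Ld = [1.0, 2.0, 3.0]}, and {\tt from\_arrays} recovers the original value exactly.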
If indices go beyond the lengths of the arrays or do not perfectly end on the last element of each array, then the arrays are malformed. \subsection{Random Access and Redundancy} \label{random-access-and-redundancy} The reason that List structures are represented by data offsets (arrays whose names end in ``-Lo''), rather than List lengths, is to permit random access. For instance, if we had a List$\langle$List$\langle${\tt float}$\rangle\rangle$ named ``x'' and we wanted the {\tt float} at index $(i, j)$, we would compute \[ \mbox{x-Ld-Ld{\tt [}x-Ld-Lo{\tt [}x-Lo{\tt [}} 0 \mbox{\tt ]} + i \mbox{\tt ]} + j \mbox{\tt ]} \] At each level, the contents of the ``-Lo'' array are the starting indices of the next-deeper structure. In fact, if we want to reconstruct only one object in a List, we simply apply the arrays-to-objects algorithm for that List's ``-Ld'' arrays with all indices starting at $i_a = i$. The Union structure, as described so far, is not randomly accessible. Data arrays for each type possibility $T_t$ are only filled each time a value of that type is encountered, which cannot be every time for every type. Each type possibility must be indexed by a different offset, but these offsets can all be packed into the same array because exactly one type must be encountered per instance. The arrays-to-objects algorithm described in the previous subsection avoids this issue by walking over the data sequentially.
To make Unions randomly accessible, we need to add a union offset array ``-Uo'', which can be generated from the tag array ``-Ut'' by \begin{algorithmic} \vspace{0.15 cm} \STATE $i_t \coloneqq 0$ {\bf for all} $t \in [1, n]$ \vspace{0.15 cm} \FOR{$i \coloneqq 0$ {\bf until} length of x-Ut} \STATE $t \coloneqq$ \mbox{x-Ut}{\tt [}$i${\tt ]} \STATE \mbox{x-Uo}{\tt [}$i${\tt ]} $\coloneqq i_t$ \STATE $i_t \coloneqq i_t + 1$ \ENDFOR \end{algorithmic} Now we can access Union objects randomly: to reconstruct a Union at index $i$, we find the tag $t$ at x-Ut{\tt [}$i${\tt ]} and follow the arrays-to-objects algorithm on the set of arrays named with the corresponding ``-Ud$t$'', all starting at index $i_t$ given by $\mbox{x-Uo{\tt [}}i\mbox{\tt]}$. Arrow calls this a ``dense union.'' \subsection{Relationship to Apache Arrow and ROOT} We chose to implement a subset of the Arrow OAM to increase the usefulness of our tools beyond HEP. Arrow unifies in-memory data frames across a variety of Big Data platforms\cite{arrow}, so a code transformation tool that assumes this encoding may be directly applied to data from these platforms. The version of OAMap tested here lacks Arrow's nullable types, but it is being extended to cover these cases. OAMap generalizes beyond Arrow in that arrays may be provided on demand for large, out-of-memory datasets. ROOT's encoding differs from OAMap in that some List offsets are represented as byte offsets, which can be converted to object offsets by an affine transformation, and in some cases as List lengths, which must be summed. Also, OAM is optional in ROOT and is only carried out one List level deep, but most HEP data are presented in this form anyway. Different C++ types like STL vectors and arrays are encoded differently, so we normalize all list-like types to PLUR Lists in our on-the-fly conversion.
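Both random-access recipes in this subsection can be sketched in plain Python, assuming the array naming produced by the objects-to-arrays rules:

```python
def nested_get(arrays, i, j):
    # The float at index (i, j) of a List<List<float>> named "x":
    # three array lookups, no objects materialized.
    start_outer = arrays["x-Lo"][0]                  # usually 0
    start_inner = arrays["x-Ld-Lo"][start_outer + i]
    return arrays["x-Ld-Ld"][start_inner + j]

def union_offsets(tags):
    # Build the "-Uo" array from the "-Ut" array: each entry is the
    # running count of prior occurrences of its own tag.
    counters = {}
    offsets = []
    for t in tags:
        offsets.append(counters.get(t, 0))
        counters[t] = counters.get(t, 0) + 1
    return offsets
```

The one-counter-per-tag bookkeeping in {\tt union\_offsets} is what allows all type possibilities to share a single offset array: exactly one tag occurs per instance.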
\section{Code Transformation} OAMap's most important feature is that the structure of any type can be expressed by a single integer: the index that would be used to start the arrays-to-objects algorithm. A type and a name prefix uniquely specify a set of arrays, so the only values required at runtime are the locations of instance data within those arrays. Only one index is required because: \begin{itemize} \item a Primitive value is located at one index in its array; \item a List's length can be computed from one index \mbox{($a[i_a + 1] - a[i_a]$)} and all of its contents are derived from the value of its offset array at that index; \item a Union is specified by a tag from its tag array and an offset from its offset array, which occur at the same index. All of the contents are derived from the offset; \item a Record is just a bundle of fields with no structure array. However, the contents of all fields in a Record object start at the same index. \end{itemize} Therefore, any function that operates on Primitive, List, Union, and Record objects, no matter how complicated, can be replaced with a function that operates on integer indices. Replacing each instance of a PLUR-typed object with its index and all functions and code constructs operating on objects with the equivalent operations for indices transforms the code to match the data encoding, rather than shaping the data to match the object-oriented vocabulary of the code. To see this as an alternative to deserialization, consider an extreme scenario in which all arrays are in a binary file on disk that has been memory-mapped to look like an array. Wherever the source code would have required a structured object, the compiled code operates directly on the raw bytes on disk. No additional representation of the data is created. This technique can be applied to code written in any language, but all of the explicit examples are applied to Python code because this is our chosen query language.
\subsection{General Strategy} Unlike SQL, functional and procedural programming languages can assign and possibly reassign variables, extract substructure as new variables, loop over substructure, and pass objects to other functions, where they are identified with new names. It is not sufficient to only transform symbols that coincide with object names, since parts of the data structure may be spread among user-defined variables. To avoid missing code that needs to be transformed, we must track the PLUR type of all symbols in the function. We therefore need a typed AST, in which PLUR types and array names are associated with all expression nodes that hold non-Primitive PLUR type. (Knowledge of other types is not necessary but not harmful.) These types and names must be propagated from symbols to expressions through assignment operators so that the correct array references may be injected into the code. We performed code transformation and type propagation in a single sweep. There are several constraints on the code to be transformed. It cannot create or change PLUR-typed objects, which is reasonable for a query language in which the input dataset and auxiliary inputs are immutable. The analysis function may call external functions, but the AST of those functions must either be accessible so they can be transformed, too, or they must accept only Primitive or non-PLUR arguments. Functions cannot be passed as objects and then called on PLUR types, since an unknown function can't be statically transformed. A variable name can't have different PLUR types in the same scope, which is a normal constraint for statically typed code, but unusual for dynamically typed Python. All of these are restricted by the Numba compiler as well, and since we pass our transformed functions to Numba, they do not represent {\it additional} constraints. 
Our code transformation process may be viewed as extending Numba from arrays and simple Python types to include any immutable, PLUR-typed object. The following AST nodes must be transformed: \begin{itemize} \item {\bf symbol reference}, which might have PLUR type; \item {\bf assignment}, which can pass PLUR type from an expression to a new symbol; \item {\bf list subscript} (square brackets in most languages), which might slice or extract from a PLUR List; \item {\bf attribute subscript} (dot in most languages), which might extract a field from a PLUR Record; \item {\bf function calls}, which can pass PLUR types to a new function, widening the scope of the transformation sweep to include that function, and may return a new PLUR type/prefix that must be tracked; \item {\bf for loops} which might iterate over a PLUR List; \item {\bf special functions}, like {\tt len} (List length) and {\tt isinstance} (check type) in Python, which have to be handled in special ways when called on PLUR types. \end{itemize} Any use of a PLUR-typed object that isn't specially handled must be treated as an error to avoid incorrect code, since these objects are replaced by plain integers at runtime. These errors would appear to the user as compilation errors, with line numbers and meaningful error messages. \subsection{Required Transformations} {\bf Symbol references} are leaves of the AST and therefore first to be transformed in the recursive walk. In the original function, an identifier that refers to a PLUR-typed object becomes an identifier for its integer index, so a symbol reference becomes array extraction. For Primitive and List types: \begin{center} {\tt x} (referring to object) $\to$ {\tt array[x]} ({\tt x} is index) \end{center} where {\tt array} is the array associated with a Primitive or the offset array associated with a List. Union objects should be immediately replaced with an object of specific type. 
Since that type is not known during code transformation, every branch must be followed, and subsequent code transformations must be predicated on the runtime tag value. The symbol table must therefore be branchable, a list of possible symbol tables that multiplies as unions are encountered. Union types lengthen the compilation process, but have minimal impact on runtime (an extra integer check). Branching is tamped down by any {\tt isinstance} checks written by the user (see below). References to Record symbols do not require any transformation, though they are reinterpreted as indices that would be passed to their fields when subscripting (see below). {\bf Assignment} merely passes the PLUR type and associated array names to a new symbol. In the type-inference pass, this means adding the type information to a symbol table. Assignment is more complex if one tries to handle pattern matching, such as Python's tuple unpacking (see ``Flourishes'' below). {\bf List subscripts} replace a List index with a Primitive value if it is a List of Primitives or the index of the next substructure down if it is any other type. This can be performed in two steps: (1) transform \begin{center} {\tt x[i]} $\to$ {\tt offset[x] + i} \end{center} and then (2) transform the result of this as though it were a symbol reference (rule described above) using the List's contained type. An attempt to subscript any PLUR type other than a List is an error. The above only works if the index resolves to integer type, not (for example) a Python {\tt slice}. Handling slices would be considered a flourish. {\bf Attribute subscripts} extract a field from a Record by name. In most languages, the field names are syntactically required to be a string known at compile-time with constraints on the characters.
We do this transformation in two steps, like the list subscript above: (1) transform \begin{center} {\tt x.fieldname} $\to$ {\tt x} \end{center} and then (2) transform the result of this as though it were a symbol reference (rule described above) using the selected field type. An attempt to subscript any PLUR type other than a Record is an error. {\bf Function calls} may include non-Primitive PLUR types in their arguments or not. If a function does not take any non-Primitive PLUR types as arguments, it can be left as-is. If it does take non-Primitive PLUR arguments, then we must obtain the AST for that function and propagate PLUR types through it, starting from the argument types. If the function has previously been transformed with the same types, we may reuse the previously transformed function (as long as we pass array names as its new arguments). If it was transformed with different types, we must generate a new transformed copy of the function, propagating the new PLUR types through it. We are effectively treating the function as though it were type-parameterized in every argument, the way that Julia\cite{julia} does with functions that don't have type annotations. For example, a {\tt mass} function that takes two arguments, {\tt particle1} and {\tt particle2}, would be transformed twice if called with ({\tt Muon}, {\tt Muon}) types at one site and ({\tt Muon}, {\tt Jet}) at another. The second transformation verifies that the {\tt Jet} Record has all the required fields to calculate a mass. If a function is recursive, this process would not terminate without return-type hints (e.g.\ for any legal inputs, {\tt mass} returns a floating-point number). Refusing to transform recursive functions would not overly restrict HEP analysis functions, though Python 3's type annotations may be helpful for such cases.
A function may return PLUR type; the transformed function either returns a Primitive or an integer index that must be propagated from the call point in the original function. {\bf For loops} may iterate over a PLUR List. The transformed List is just an integer index for the offset array, so it must be replaced by an iterator over offsets: \begin{center} {\tt x} $\to$ {\tt range(array[x], array[x + 1])} \end{center} The loop variable is now an integer representing indices into the List's contents, as desired. {\bf Special functions} that return meaningful data about non-Primitive PLUR types must be handled on a case-by-case basis. As with any function call, if a non-Primitive PLUR type is an argument, the function's identity must be known during the transformation sweep. Below are two important cases. {\bf len}: (get length of List) must return the length when given a PLUR List, which is computed as \begin{center} {\tt len(x)} $\to$ {\tt off[x + 1] - off[x]} \end{center} {\it without} transforming the argument {\tt x} (unlike other function calls, which operate on transformed arguments). {\bf isinstance}: (check type) must return true if the argument has a given type, false otherwise. If the symbol table is branched, some symbol tables may be eliminated in scopes guarded by an {\tt isinstance} check (``{\tt and}'' and ``{\tt if}''). This is the primary way a user might benefit from a Union type. The type or types to check would have to be known as literal names, not computed references, during the transformation sweep. If the argument to check has List or Record type, the whole expression can be statically replaced with a literal {\tt True} or {\tt False}, depending on whether the desired types are in the set of types to check. If it is a Union, it must be replaced with a tag check. \subsection{Flourishes} The implementation of the code transformation is open-ended: one can provide more or less support for PLUR-typed objects, as well as optimizations. 
The only invariant is that the transformed code must either work exactly as object-oriented code would or fail to be transformed (compilation error). \subsubsection{List Overflows} One special case requires special attention: what to do about List index overflows? In the minimal transformation described above, PLUR List overflows behave like C array overflows, in that they return undefined results. The errors are less obvious and therefore more dangerous than typical C array overflows because PLUR values just beyond a List's boundary belong to the same attribute in the next List, so they probably have the right scale and distribution, subtly biasing analysis results. A simple way to eliminate mistakes like this is to add a range check to the transformed code. Indices that fail the range check should raise a runtime exception. It would be correct, but many of these range checks would be unnecessary and would slow down processing. For instance, the list subscript might be in the body of an {\tt if} statement where the user did range-checking manually, or it could be in a {\tt for} loop bounded by the list length (a common case). A conservative approach would be to apply range checking by default and remove it from unnecessary cases as they are identified. The performance tests in the last section have no range checks, but the OAMap library has range checking at the time of writing. \subsubsection{Pythonic Indices} Negative index handling in Python is a user-familiarity enhancement, in which negative values start counting from the end of the list. Without this enhancement, negative indices would be caught by a range check, so it is not strictly required. Without a range check, it is extremely dangerous, as users would get subtly wrong results by assuming normal Pythonic behavior. (Pythonic indices are currently implemented in OAMap.) 
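A transformed list subscript with both the range check and Pythonic negative indices might look like the following sketch (a hypothetical helper, not OAMap's generated code):

```python
def list_getitem(offsets, data, x, i):
    # Transformed x[i] for a List of Primitives: x indexes the offset
    # array; i is range-checked, and negative i counts from the end.
    length = offsets[x + 1] - offsets[x]
    if i < 0:
        i += length            # Pythonic: -1 is the last element
    if not 0 <= i < length:
        raise IndexError("list index out of range")
    return data[offsets[x] + i]
```

Without the check, an overflowing index silently reads a neighboring List's values, which is exactly the subtle bias described above.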
\subsubsection{Eliminating Zero-Lookups} As an optimization, one can statically identify array lookups that always return zero: the first element of every list offset array is zero, and the outermost list offset array is always evaluated at its first element. Without explicitly removing it, this unnecessary code would be executed at every step in an iteration over a List. (PLUR-unaware compilers cannot remove it.) \subsubsection{For loop flattening} Related to the above, nested {\tt for} loops that exhaustively walk over List contents, such as \begin{center} \begin{minipage}{0.8\linewidth} \input{minted-1-5.tex} \end{minipage} \end{center} can be collapsed into a single {\tt for} loop over the innermost contents ({\tt x}). The innermost data are stored contiguously, so a single loop would suffice, and it is much more likely to be vectorized by the compiler. (A PLUR-unaware compiler cannot make this optimization because it does not know that list offset arrays are monotonically increasing.) \subsubsection{Fixed-size Arrays and Matrices} PLUR Lists can have any length, which includes a constant length, at the expense of redundant offset arrays with linearly increasing contents. It would be possible to add the concept of a fixed-length List to the type system and propagate its implementation through OAMap and the code transformation rules. However, an important special case in which the fixed-length dimensions directly contain Primitives is available for free by accepting Numpy arrays with multidimensional {\tt shape} parameters as Primitives. \subsubsection{Type Constructors} As stated above, passing a non-Primitive PLUR object to any unrecognized function is a compilation error. This especially includes constructors for dynamic objects like Python lists and dictionaries, since we cannot statically track where they are used. 
However, one may wish to allow some immutable objects to contain PLUR objects, such as Python tuples to allow Python tuple-unpacking in assignments. We then become obliged, however, to track these objects as containing PLUR types. \subsubsection{Pattern Matching} Some languages make extensive use of typed pattern matching; PLUR types would need to be tracked through syntactical structures such as these. In Python, tuple-unpacking is the most obvious instance of pattern-matching, and type inference through it is possible (implemented in the version of OAMap used in these tests). \subsubsection{Equality and Order} Since PLUR types describe inert data (as opposed to functions or active elements like file handles), it would be reasonable to define value-based equality and possibly an ordering, effectively treating comparison operators like ``{\tt ==}'' and ``{\tt <}'' as special functions (as well as ``{\tt sorted}''). Reference-based equality, expressed in Python as an ``{\tt is}'' operator, is easiest to implement. For objects typed at identical nodes of the PLUR schema, references are the same if their indices are equal. At compile-time, this is a schema comparison, and at runtime, we replace ``{\tt is}'' with ``{\tt ==}''. \subsubsection{Fallback to Object Materialization} One way to support particularly difficult cases, such as external functions without code transformation, is by materializing objects using the arrays-to-objects algorithm. This gives up on zero-copy efficiency, but it may be worthwhile to mix fast code with expressive code. Then, rather than new features allowing specific cases of user code to run at all, new features would allow the user code to run faster. \subsubsection{Function dispatch} As described in Section~\ref{type-system}, the PLUR type system only distinguishes between storage types. 
If the same storage type with different names should be dispatched to different functions or different versions of a function, that would be handled in code transformation. As with special functions ({\tt len} and {\tt isinstance}), the identities of these functions must be known at compile-time. \section{Implementation} We are developing the OAMap toolkit\cite{oamap} as the execution engine of a HEP database/query system, but it is also usable on its own. This toolkit includes the code transformation routine outlined above, which is intended for high-throughput processing, and proxy classes for low-latency exploration on the Python commandline. The proxies are Python classes that yield data on demand, using Python's {\tt property}, {\tt \_\_getitem\_\_}, and {\tt \_\_getattr\_\_} to emulate static members by fetching data (from memory, disk, or network) as necessary. These proxies most closely resemble the work of Mattis {\it et al}.\ in PyPy\cite{columnarobjects}, except that PyPy is JIT compiled on the fly, making the proxy and code transformation approaches equivalent. However, Numpy and CPython provide access to much-needed scientific libraries. The use of proxies would also be equivalent to code transformation in a fully compiled language like C++. The propagation of PLUR types would be performed by the C++ compiler, rather than manually through a Python AST. However, C++ would not be a convenient query language, and type-level programming in C++ is not as powerful as direct AST manipulation. (For example, it might not be possible to collapse {\tt for} loops based on our knowledge that list offset arrays are monotonically increasing.) Julia would be an excellent compromise, as it is a high-level language without manual memory management that automatically JIT compiles and provides access to its AST, but C++ and Python are much more commonly used in HEP. Nearly all HEP data are stored in ROOT files, so we must be able to read this format efficiently.
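As an illustration of the proxy idea described above, here is a minimal sketch (invented names; OAMap's real proxies handle the full type system) of a list proxy that emulates a Python sequence while fetching elements from the underlying arrays only on demand:

```python
import numpy as np

class ListProxy:
    """A sketch of an on-demand list view over PLUR-style arrays."""
    def __init__(self, offsets, contents, index):
        self._offsets = offsets      # list offset array
        self._contents = contents    # flat contents array
        self._index = index          # which sublist this proxy represents

    def __len__(self):
        return int(self._offsets[self._index + 1] - self._offsets[self._index])

    def __getitem__(self, i):
        # Data are fetched only when subscripted; nothing is materialized
        # up front, emulating a static member of a deserialized object.
        if not 0 <= i < len(self):
            raise IndexError(i)
        return self._contents[self._offsets[self._index] + i]

offsets = np.array([0, 2, 2, 3])
contents = np.array([1.1, 2.2, 3.3])
first = ListProxy(offsets, contents, 0)   # proxies the sublist [1.1, 2.2]
assert len(first) == 2 and first[1] == 2.2
```

Because {\tt \_\_getitem\_\_} raises {\tt IndexError} at the sublist boundary, ordinary Python iteration over the proxy also works, without ever copying the sublist out of the flat arrays.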
We have implemented a modification to ROOT, called BulkIO, that allows us to skip ROOT's usual object materialization methods ({\tt TTree::GetEntry} and {\tt TBranch::GetEntry}). These updates are scheduled to become part of the base ROOT distribution in ROOT version 6.14. Although the performance studies in the next section use ROOT with the BulkIO enhancements, this access method is also available as a pure Python package called uproot\cite{uproot}. \subsection{Performance Studies} As an execution engine, the transformed code must be convenient and fast. We chose Python over C++ for convenience and must not pay for that choice in performance. For this reason, we have been performance-testing OAMap throughout its development, informally comparing against ``bare metal'' speeds of one-off C programs. We always use Numba with ``{\tt nopython=True},'' so the code it compiles with LLVM is almost perfectly equivalent to a C program compiled with Clang. Our informal tests bore out this equivalence, but were unrealistically sourced with random data. Real-world uses of OAMap would be sourced with HEP data in ROOT files (or a database equivalent). The most relevant comparison then is between a C++ analysis function in ROOT, from ROOT data through object materialization, and an OAMap-transformed Python function, from ROOT data but accessed as in-place arrays. We used the BulkIO enhancements to stream data into Numpy arrays and OAMap to transform and then compile the same analysis functions. C++ and Python versions of the analysis functions are listed in Figure~\ref{four-functions}.
\begin{figure} \scriptsize \noindent\begin{minipage}{\textwidth} \begin{minipage}[c][1.8cm][t]{0.22\linewidth} \underline{{\bf max p$_{\mbox{\tiny T}}$} in Python} \input{minted-2.tex} \end{minipage} \begin{minipage}[c][1.8cm][t]{0.25\linewidth} \underline{{\bf max p$_{\mbox{\tiny T}}$} in C++} \input{minted-3.tex} \end{minipage} \vspace{0.25 cm} \begin{minipage}[c][2.6cm][t]{0.22\linewidth} \underline{{\bf eta of best by p$_{\mbox{\tiny T}}$} in Python} \input{minted-4.tex} \end{minipage} \begin{minipage}[c][2.6cm][t]{0.25\linewidth} \underline{{\bf eta of best by p$_{\mbox{\tiny T}}$} in C++} \input{minted-5.tex} \end{minipage}% \vspace{0.25 cm} \begin{minipage}[c][3.2cm][t]{0.22\linewidth} \underline{{\bf mass of pairs} in Python} \input{minted-6.tex} \end{minipage} \begin{minipage}[c][3.2cm][t]{0.25\linewidth} \underline{{\bf mass of pairs} in C++} \input{minted-7.tex} \end{minipage}% \vspace{0.25 cm} \begin{minipage}[c][2.4cm][t]{0.22\linewidth} \underline{{\bf p$_{\mbox{\tiny T}}$ sum of pairs} in Python} \input{minted-8.tex} \end{minipage} \begin{minipage}[c][2.4cm][t]{0.25\linewidth} \underline{{\bf p$_{\mbox{\tiny T}}$ sum of pairs} in C++} \input{minted-9.tex} \end{minipage}% \end{minipage} \caption{\label{four-functions} Sample analysis functions in Python (before code transformation) and object-oriented C++, showing only the body of the loop over events.} \end{figure} The first, ``max p$_{\mbox{\scriptsize T}}$,'' is an example of a query that would be difficult but not impossible in SQL. Instead of exploding a muons table and grouping by a unique {\tt eventId}, we keep a running maximum that resets in each event. The second, ``eta of best by p$_{\mbox{\scriptsize T}}$,'' is an extension of that idea: we select a muon by maximizing {\tt pt} and then plot its {\tt eta}. This is even more awkward in SQL, but very common in HEP. In the listing, note that {\tt best}, which is a muon object, is initialized as $-1$ in Python and {\tt nullptr} in C++. 
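In plain Python, this sentinel pattern might look like the following sketch (muon records are faked here as named tuples; this is not the exact listing from the figure):

```python
from collections import namedtuple

# Hypothetical stand-in for a muon Record; the real data have 42 attributes.
Muon = namedtuple("Muon", ["pt", "eta"])

def eta_of_best(muons):
    best = -1                          # sentinel: no muon seen yet
    for muon in muons:
        if best == -1 or muon.pt > best.pt:
            best = muon                # keep the muon with maximum pt
    return best.eta if best != -1 else None

event = [Muon(pt=31.0, eta=0.5), Muon(pt=52.4, eta=-1.2)]
assert eta_of_best(event) == -1.2      # eta of the highest-pt muon
assert eta_of_best([]) is None         # empty event: sentinel survives
```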
At the time of testing, the code transformer had no concept of a nullable PLUR type, though OAMap has this feature now. Instead of initializing the object as $-1$ and checking for that value as a negative index, we can now write it more naturally as {\tt None}; assigning the same variable both a muon Record and a number would be a compile-time error. The third function, ``mass of pairs,'' would require a full outer join in SQL but is a nested {\tt for} loop in Python and C++, carefully indexed to avoid duplication. This kind of ``unique pairs'' loop is very common (often with a selection, e.g.\ requiring opposite-sign charges), and the {\tt mass} formula is one of the most frequently executed in HEP. The fourth, ``p$_{\mbox{\scriptsize T}}$ sum of pairs,'' is a diagnostic of the third. As we will show below, the mass calculation is the slowest of the sample functions, and it was unclear whether the nested indexing or the complex formula was responsible. This function has the same nesting structure but a simpler formula (one that is occasionally useful in HEP). We ran each analysis function on a 5.4-million-event dataset of simulated Drell-Yan collisions (a realistic physics sample, one of about a dozen that might be involved in a real HEP analysis). The tests were performed on an {\tt i2.xlarge} instance on Amazon Web Services, which features a large, fast SSD (though in the end, we opted for tests with a prewarmed cache). To avoid decompression and/or physical device access becoming the dominant contributor to these tests, we prepared an uncompressed ROOT file and loaded it into the Linux page cache with vmtouch\cite{vmtouch}. This lets us see the differences due to other factors more clearly. Both tests were single-threaded and had plenty of working memory. The benefits of parallelization are beyond the scope of this paper, and also factorize from our single-threaded tests.
If we double the single-threaded speed of this embarrassingly parallel problem, we double the speed of a parallelized version (unless that parallel processor is swamped with overhead). The results of the study are shown in Figure~\ref{root-and-plur}. The shorthand ``ROOT'' signifies a conventional ROOT workflow with object materialization and C++, and ``code transformation'' is our workflow with BulkIO, no object materialization, and transformed Python code. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{root-and-plur.pdf} \caption{Event processing rates, including read and execute times, for 5.4 million events in the test sample. Grouped bars indicate different analysis functions and colors indicate different workflows. See the text for details.} \label{root-and-plur} \end{figure} The four colors signify variants on these workflows. ``ROOT full dataset'' means we let ROOT fill all 42 attributes of the Muon objects, which is clearly unnecessary for our functions. The event processing rate for this case is 0.4~MHz, regardless of the content of the function. Reading and filling the objects overwhelms all other factors. (This case is only included for completeness.) ``ROOT selective on full'' uses ROOT's opt-in mechanism to avoid filling all attributes in the objects, but still uses the 42-field object definitions and the original data file. The event processing rate is 1.29~MHz, regardless of the function. Handling all of the attributes still dominates. ``ROOT slim dataset'' performs the same selective read on a specially prepared dataset and object definition that has only 3 fields: {\tt pt}, {\tt eta}, and {\tt phi}. We now see different rates for the four functions: 6.68, 5.96, 3.56, and 6.34~MHz. ``Code transformation on full ROOT dataset'' is the only test in this batch that uses transformed Python code. It accesses the full dataset; the slim dataset yields similar results because this method is unconcerned with unused columns.
The four functions are processed at 17.9, 12.1, 6.09, and 17.2~MHz, respectively, considerably faster than object materialization. The slowest of the four functions is ``mass of pairs.'' Unlike the first two functions, this involves a doubly nested loop over muons to find distinct pairs, as well as a much more complex formula. The fourth function has the same loop structure without the complex formula, and it is as fast as the single loops. Upon further investigation, we found that the trigonometric and hyperbolic cosines account for the majority of the time spent in this function. The factor that the ROOT tests and the OAMap test had in common is that they both extracted data from ROOT files and used part of the ROOT framework to read them. OAMap can operate on any arrays, regardless of whether they came from ROOT, so we performed another test in which we extracted all the data as Numpy arrays in memory. The ROOT files were in memory, too, because we prewarmed the Linux cache with vmtouch, but some processing is still required to seek to the relevant parts of the file to expose the arrays. We compare the OAMap result (copied from the previous plot on the new scale) from ROOT and from raw arrays in Figure~\ref{physical-media} for two reasons: (1) it shows how much room there is between these execution rates and the data-access rate, since somewhat more complex functions will be slower and we want to know when to expect data access to be a bottleneck, and (2) a HEP database system would conceivably cache frequently accessed arrays in memory, and this shows the difference between a cache hit and a cache miss. The effect of the slow mass calculation is dramatic: ``mass of pairs'' runs at 12.8~MHz while ``p$_{\mbox{\scriptsize T}}$ sum of pairs'' runs at 56.2~MHz. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{physical-media.pdf} \caption{Event processing rates for the same four analysis functions, but only for PLUR workflows.
Colors represent data sources: full ROOT dataset and Numpy arrays in memory.} \label{physical-media} \end{figure} \section{Conclusions} We have demonstrated that it is possible to transform code to meet a data format, rather than deserializing data to fit the code's expectations, as long as JIT compilation is available. We have also demonstrated that, once transformed and compiled, an idiomatic Python analysis function outperforms the same function written in idiomatic C++ with object materialization. This should not be taken as a claim that Python is faster than C++, just as SQL is not faster than C++, but that we have applied one of the tools used by databases to accelerate queries to a general-purpose language. The compilation of Python by Numba provides parity between a restricted subset of Python (statically typable, no first-class functions) and C++, and the code transformation avoids the cost of object materialization. In principle, the same speedup could be achieved in C++ or Julia with the appropriate proxy classes. Finally, the usefulness of this technique is not limited to HEP. The need for complex loop dependencies can hardly be HEP-specific\footnote{As an indication of this need, Netflix proposed changes to Spark to allow higher-order functions in SparkSQL, which would transform subcollections without a full explode-and-join: {\tt https://issues.apache.org/jira/browse/SPARK-22231}}. Although many industries and fields of academia are currently using SQL or languages similar to SQL for data analysis, how much is being left unexplored because the tools are not suited for the task? It is our hope that this technique finds application in many different fields, just as the techniques of database-style analysis have inspired our work in HEP. \section*{Acknowledgments} This work was supported by the National Science Foundation under Grants No.\ 1450377 and 1450323. 
The authors wish to thank Philippe Canal for his expert help in understanding the ROOT file format and ROOT I/O subsystem. \bibliographystyle{IEEEtran}
\section{Introduction} Playing computer games is a popular form of entertainment, driving sales in excess of \$64.4 billion within the United States (US) between 2010 and 2013 \cite{1}. Among the factors for this success is the broad appeal of computer games. Research shows that computer games are enjoyed by a diverse audience. Approximately 71\% of players are adults and 48\% of players are female \cite{1}. Even members of groups not commonly associated with games, such as the elderly, sometimes report that they regularly play games \cite{2, 3}. Thus, as Allaire writes, ``there is no longer a `stereotype game player', but instead a game player could be your grandparent, your boss, or even your professor'' \cite{4}. However, different players often have different needs. It is, therefore, important to consider how this apparent emergence of diversity can be accommodated in order to maximize the market \cite{5}. One such community to consider is those with sensory, motor or cognitive impairments. Contrary to popular belief, the proportion of such players is not trivially small. More than one fifth of PopCap's casual games audience identified as having some form of impairment \cite{6}. Furthermore, an estimate based on the 2002 US Census data suggests that 9\% of the population can encounter a reduced play experience as a result of an impairment and 2\% of the population might be unable to play games as a result of an impairment \cite{7}. Although the true figures are likely to be lower in practice, as not everyone plays games, the proportion of the population that does play is growing and this trend is likely to continue \cite{1,7}. Unfortunately, many computer games present unnecessary barriers to those with impairments \cite{8, 9}. Consequently, such games limit their potential market, as they are not accessible to those individuals who want to purchase and play them. The term ``digital outcasts'' \cite{10, 11} has appeared in order to describe these individuals.
That is, those being excluded by the gulf between the development of new innovations and their more inclusive variants. It is interesting to note, however, that many of the barriers these outcasts encounter can be addressed. Many of the recommendations listed by the International Game Developers Association (IGDA) \cite{12} and the authors of Game Accessibility Guidelines \cite{13} are far from insurmountable, often being small in scale and reasonably low-cost to implement, particularly when inclusivity is considered during the early stages of development. Given that such changes have the potential to significantly broaden a game's audience, investing in inclusive design practice could yield satisfactory returns. As such, many inclusive design practices are feasible for commercial organisations to engage with. Despite this, it is not clear whether many game developers have: an awareness of the impact of impairments; a knowledge of how to overcome pitfalls; or a willingness to even consider making inclusive games. With the increase of institutions offering educational programmes that focus on game design and development, students could be prepared with appropriate knowledge and skills prior to joining the industry. However, how can educators nurture the relevant attitudes in an effective manner? One approach, which has previously been piloted by the IGDA Special Interest Group on Game Accessibility \cite{14}, is to promote inclusive design at hackathons and other game making events such as the Global Game Jam. This is claimed to be an effective strategy \cite{15}; however, as the initiative was only launched in 2012, no formal studies have been conducted to verify its impact. This paper begins to address this gap. \section{Global Game Jam} The Global Game Jam (GGJ) is a two-day game making event which takes place simultaneously at multiple physical locations across the world.
It was founded in 2009 and, while many similar game festivals and hackathons existed prior to the first event, it was the first to organise multiple physical venues. It is also considered to be the largest event of its kind in the world. GGJ 2014 involved 488 locations across 72 countries, with 4,290 game projects submitted \cite{16}. Participants work in small groups to rapidly prototype video games based around a common theme and set of constraints, which are revealed at the beginning of the event. The brief time span aims to encourage creative thinking and experimentation. As such, a range of innovative and artistic games are produced each year. These are available to download from the official GGJ website and it is standard practice at many sites for groups to present their game designs and game prototypes to an audience. The event can be popular with undergraduate students, particularly those enrolled on game design and development programmes, as it presents a range of opportunities for self-development and learning \cite{17}. Participation is not exclusive to students; therefore, students have the opportunity to work alongside industry practitioners and educators in the field. Furthermore, the experience can highlight individual strengths and weaknesses, as well as provide opportunities to work collaboratively with people from other disciplines. A way in which the GGJ has distinguished itself from other similar events is the flexibility it welcomes. Often, local site-specific constraints are embraced in order to explore key social or design issues. This presents opportunities for students to work alongside advocates as they engage with such constraints, forming a type of induction. As such, this presents an exciting opportunity for students to become aware of key industry issues and experience common pitfalls.
Addressing the topic of game accessibility, an Accessibility Challenge has been offered at several GGJ venues since 2012. This has prompted participants to develop a range of accessible games, such as Super Space Snake shown below in Figure I. This initiative became better supported in 2014 when the GGJ offered optional design constraints focusing on game accessibility. \begin{figure}[h!] \caption{Accessible Game Made During GGJ 2012 Bristol \cite{18}} \centering \includegraphics[width=\textwidth]{snakes} \end{figure} \section{Method} The accessibility challenge was made available to students attending a GGJ 2014 location in London. This initiative took a minimalistic form, in which an advocate introduced the challenge and provided each group with handouts about visual, auditory, motor, and cognitive impairments as well as handouts on how to address each one. The advocate then periodically visited different groups throughout the event in order to encourage students to participate and to provide advice on their design ideas. In order to review the impact of the initiative, attitude questionnaires were distributed via SurveyMonkey\textsuperscript{TM} to everyone attending the event. This included both those who opted into the accessibility challenge and those who chose not to participate. No sampling was conducted as overall attendance was low.
The following questions were posed: \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Game Accessibility Attitudes Questions} \label{table:att-questions} \centering \begin{tabular}{p{2cm}p{10cm}} \hline \textbf{Ref} & \textbf{Item} \\ \hline ATT-1 & I am familiar with the challenges that those with disabilities encounter when playing games \\ ATT-2 & I consider making my game designs inclusive \\ ATT-3 & I am aware of key game design pitfalls that affect how individuals with impairments enjoy games \\ HARD-1 & I believe making an accessible game is too hard \\ HARD-2 & I believe making an accessible game is too time consuming \\ \hline \end{tabular} \end{table} These were presented as a 5-point Likert scale, from strongly disagree to strongly agree, and scores were computed by summation. Alongside demographic questions, several nominal questions were included in the post-event survey: \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Post-Event Impact Questions} \label{table:impt-questions} \centering \begin{tabular}{p{2cm}p{10cm}} \hline \textbf{Ref} & \textbf{Item} \\ \hline IMPT-1 & Compared to everyone else, what kind of experience should disabled gamers expect to have? \\ IMPT-2 & What level of priority should game developers adopt to avoid unnecessarily excluding gamers with disabilities? \\ IMPT-3 & Would you like to meet other people who are interested in accessible game design? \\ IMPT-4 & Would you be interested in taking part in a dedicated accessibility game jam?
\\ \hline \end{tabular} \end{table} In addition, several questions about participants' enjoyment of the event were posed: \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Post-Event Enjoyment Questions} \label{table:enjy-questions} \centering \begin{tabular}{p{2cm}p{10cm}} \hline \textbf{Ref} & \textbf{Item} \\ \hline ENJY-1 & The Global Game Jam is an enjoyable event \\ ENJY-2 & I believe attending the Global Game Jam is worthwhile \\ ENJY-3 & I would encourage others interested in games development to attend the Global Game Jam \\ \hline \end{tabular} \end{table} These were presented as a 6-point forced-choice Likert scale, from strongly disagree to strongly agree, where scores were computed by summation. \section{Results} There were 35 complete responses to the survey, representing a response rate of 80\%. There were 11 first-year students, 10 second-year undergraduates, 2 final-year undergraduates, and 12 postgraduate students; of whom 5 were female and 23 had not previously attended a game jam. All cases were included and there were no missing data. All reported p-values from null hypothesis significance tests are two-tailed. As participation in the accessibility challenge was optional, those who chose to participate (P; 10 respondents) and those who did not (NP; 25 respondents) have been analysed separately. Table IV below reveals a statistically significant improvement in ATT scores at post-test for both groups. Additionally, there was a statistically significant reduction in HARD scores for participants.
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Paired T-Test for Pre-Post Differences} \label{table:ttest} \centering \begin{tabular}{p{2cm}p{2cm}rrrrr} \hline \textbf{Items} & \textbf{Group} & \multicolumn{2}{l}{\textbf{Pre-Score}} & \multicolumn{2}{l}{\textbf{Post-Score}} & \textbf{$p$} \\ \textbf{} & \textbf{} & \textbf{$\mu$} & \textbf{$\sigma$} & \textbf{$\mu$} & \textbf{$\sigma$} & \textbf{} \\ \hline ATT & P & 9.7 & 3.3 & 11.8 & 3.01 & 0.003 \\ & NP & 9.76 & 2.9 & 11.1 & 3.23 & 0.013 \\ HARD & P & 8.9 & 1.52 & 7.7 & 2.11 & 0.037 \\ & NP & 7.84 & 2.21 & 7.4 & 2.25 & 0.204 \\ \hline \end{tabular} \end{table} Tables \ref{table:impt1}-\ref{table:impt4} below illustrate the IMPT responses for those attending the event: \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{IMPT-1 Responses} \label{table:impt1} \centering \begin{tabular}{p{8cm}rrr} \hline \textbf{Response} & \textbf{P} & \textbf{NP} & \textbf{Total} \\ \hline As equal as possible & 3 & 9 & 12 \\ Roughly equivalent & 1 & 3 & 4 \\ Access to some key areas of gameplay & 5 & 6 & 11 \\ Separate disability-specific games & 1 & 7 & 8 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{IMPT-2 Responses} \label{table:impt2} \centering \begin{tabular}{p{8cm}rrr} \hline \textbf{Response} & \textbf{P} & \textbf{NP} & \textbf{Total} \\ \hline Essential & 1 & 3 & 4 \\ Only if it fits within the budget and mechanic & 8 & 16 & 24 \\ Only after the game has been built & 1 & 4 & 5 \\ Not at all & 0 & 2 & 2 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{IMPT-3 Responses} \label{table:impt3} \centering \begin{tabular}{p{8cm}rrr} \hline \textbf{Response} & \textbf{P} & \textbf{NP} & \textbf{Total} \\ \hline Yes, I am interested in meeting other people & 9 & 10 & 19 \\ No & 1 & 15 & 16 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \renewcommand{\arraystretch}{1.3}
\caption{IMPT-4 Responses} \label{table:impt4} \centering \begin{tabular}{p{8cm}rrr} \hline \textbf{Response} & \textbf{P} & \textbf{NP} & \textbf{Total} \\ \hline Yes, I would be interested & 9 & 14 & 23 \\ No & 1 & 11 & 12 \\ \hline \end{tabular} \end{table} These responses demonstrate a variety of perspectives with respect to each item. A series of chi-square tests showed that there were no statistically significant differences in response distribution between those who did and did not participate in the accessibility challenge. There was no general consensus regarding the experience that those with an impairment should expect. However, most of the respondents believed that inclusive designs should only be considered where they ``fit within the budget and intended game mechanics''. Table IX below shows the ENJY responses for those attending the event: \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Independent T-Test for Differences in Enjoyment} \label{table:enjy-ttest} \centering \begin{tabular}{p{2cm}p{4cm}rrr} \hline \textbf{Items} & \textbf{Group} & \textbf{$\mu$} & \textbf{$\sigma$} & \textbf{$p$} \\ \hline ENJY & P & 16.22 & 1.39 & \\ & NP & 16.28 & 1.69 & 0.928 \\ \hline \end{tabular} \end{table} This shows that there was no statistically significant difference in enjoyment of the Global Game Jam experience between those participating in the accessibility challenge and those choosing not to. \section{Discussion} The low rate of participation in the accessibility challenge was disappointing, as other events have had higher rates of participation (e.g.\ \cite{19}). Further qualitative enquiry is needed to explore \textit{why} students did not want to engage with the challenge. Despite the low rate, however, the results suggest that the accessibility challenge encouraged all of the students to question their attitudes about game accessibility.
In particular, students noted increased awareness about the impact of impairments and clearly considered how they could overcome this impact. As such, many claimed that they would consider inclusive design practices in future projects. Therefore, the challenge appears to have had an impact. However, there was no consensus regarding the level of accessibility games should target: many believed that games should be ``as equal as possible'', while others believed that only ``access to key areas of gameplay'' is sufficient. It is important to note that only those who participated in the accessibility challenge began to dispel the belief that accessible games are difficult and time-consuming to create. As such, the game making experience does seem to have provided an attitude-changing experience (as claimed in \cite{15}). It should be noted that the sample size was small and all of the participants were drawn from a single venue. A larger trial conducted across multiple venues would permit a more representative and general conclusion. There was only one form of measurement: a self-report questionnaire. Triangulation, through exploring students' efforts and analysing their future games, would permit a sounder interpretation of whether students' behaviours had actually changed as a result of their involvement in the event. As such, acquiescence bias poses a potential threat to the validity of these initial findings. Finally, there was no formal assessment of whether the students were able to apply inclusive design practices effectively during such a short and intensive event. This was done in order to maintain a light-hearted and creative spirit. However, it makes it unclear whether the students learned how to apply inclusive design practice. It is proposed that formal observation at future events could be used to determine whether students adopted and applied appropriate practices.
Additionally, further analysis of the games developed by the students prior to and after the event is needed to determine whether or not their experience transfers to new contexts, as this is pertinent to the long-term impact of the initiative. Depending on the results, it may be necessary to consider the format of the event in greater detail; particularly, how best practices can be facilitated. The nature of the event aims to encourage innovation and creativity, so a prescriptive approach may conflict with its culture. On the other hand, a trade-off with encouraging best practices may exist. \section{Conclusion} This paper shows that the Global Game Jam can be an effective avenue to promote inclusive design practice to students. A statistically significant improvement between pre-event and post-event attitudes was found with respect to knowledge about game accessibility and beliefs that making a game accessible is difficult. As such, the initiative appears to have increased students' awareness of game accessibility in addition to their willingness to consider accessibility issues in their future games. These are, hopefully, attitudes that the students will take with them when they become members of the games industry. Interestingly, despite low participation, non-participants also showed some improvement in attitude. The reason for this change is not clear. Perhaps considering whether to participate, passing engagement with the material, or observing what other students achieved during the event were key factors. Nevertheless, the reasons for such low participation at this particular venue, and the impact the challenge has on design behaviour (and the inclusivity of future games), warrant further investigation. \section{Acknowledgement} The authors would like to thank Chris Cox, and the School of Arts, who secured funding from Brunel University as well as helped to organise and run the event. \section{References}
\section{Introduction} In Paper I (Battaner, Florido and Jimenez-Vicente 1997) and Paper II (Florido and Battaner 1997) we suggested that primordial magnetic flux tubes generated radiative energy density filaments throughout the radiation dominated era. As filaments are very frequently present in the observed large scale structure today, it is here considered that contemporary matter filaments could be identified with these early radiative filaments. Under this assumption, present large structures would have inherited the topological structure imposed by the properties of magnetic fields. This is the goal of this paper: by assuming that filaments are magnetically induced condensations of matter and that they are the pieces of which the Universe is made up, to identify what observational properties large scale structures would possess. Filaments are often found in many astrophysical systems, such as the Sun and the interstellar medium, and are often interpreted as being magnetically induced. We propose here a similar interpretation for galaxy cluster filaments. The model presented in Papers I and II considers a time period between Annihilation and Equality; therefore, the matter, radiation and magnetic field configuration which had been predicted cannot strictly be compared with present large scale structures. To make such a comparison valid, the model should be extended to the present, considering epochs in which the evolution of filamentary structures is much more complicated. But the development of a single model suitable for so many different epochs is not at present practical. It is, however, possible and tempting to qualitatively predict what kind of structures would now arise from those theoretically predicted pre-Equality structures.
There are two main arguments which make this prediction reasonable:
\begin{itemize}
\item As discussed in Paper II, the further evolution of the structures is subject to three effects which were negligible in earlier epochs: viscosity and heat conduction as the fluid becomes imperfect, the amplification of magnetic fields by dynamo effects or ejections from galaxies, and non-linear effects. However, all these processes only affect small scale structures. Large scale structures can therefore be considered unaltered in recent epochs, having evolved in a simple way, merely being diluted by expansion, as described, for example, in an Appendix to Paper I.
\item At Equality, filamentary distributions of radiation and matter had already formed. By Recombination, matter was free to concentrate further around the pre-existing filamentary potential wells. We must therefore consider the evolution of these early matter structures, even independently of the underlying magnetic fields which created them. The evolution of pre-recombination matter structures is a subject in which considerable experience has been gained in recent years; it is considered, for instance, in the interpretation of CBR anisotropies. Again, the evolution of very large scale structures is much simpler, as it is linear.
\end{itemize}
Magnetic fields could evolve in different ways and might even affect the evolution of the matter distribution, though probably not in a wholly unpredictable way. It should not be forgotten, on the other hand, that our analysis in Papers I and II was developed for a hot particle fluid, and in particular for photons before Equality. However, the equations could describe actual present structures if hot dark matter were dominant. We are therefore aware that our discussion is qualitative and speculative, but it is nevertheless interesting, as it could open new ways of interpreting large scale structures.
Let us therefore assume that the present large scale structures are larger than but essentially identical in shape to the parent pre-Equality filamentary structures. \section{Magnetic restrictions to the large scale structure} Independently of the epoch in which primordial magnetic fields were generated, they must fulfil a restriction: $\nabla \cdot \vec{B} =0$. Magnetic field lines are either straight lines when viewed at large scale or they form loops. The first possibility must be excluded if the Cosmological Principle is maintained. Loops can be made from filaments, with plane polygons being the simplest possibility. Three-dimensional structures made up of polygons are polyhedra. From the observational point of view, the polyhedric nature of the Universe is far from demonstrated, but this possibility is by no means in disagreement with observations (Broadhurst et al., 1990; Einasto et al 1997a,b). We should look for the simplest polyhedric structures compatible with the absence of sources and sinks of magnetic field lines. The Cosmological Principle would require either isolated polygons or polyhedra with random orientations, or periodic structures with the basic polyhedra in contact, forming a network. As isolated structures are not suggested by observations, we concentrate on the network structure. Assuming the structure is periodic and three-dimensional, a crystallographic approach seems reasonable. Of course, we are not proposing that the Universe is a pure crystal. More irregular and imperfect forms would actually be produced, reminding us more of a foam structure than a crystal, but a perfect network is an adequate zero-order theoretical description. A network is a description of the large scale structure commonly found in the literature. However, the edges of the network polyhedra now have a direction, that of the parent magnetic field.
Magnetic field lines connect filaments, and in principle they could follow a complicated contour travelling through different basic polyhedra. However, the simplest option is to close the loop within only one polyhedron, or even within a single face. The simplest structure is that in which all edges, all faces and all vertexes are equivalent. This implies that the net magnetic flux at any of the vertexes vanishes. Any magnetic field line reaching a vertex through one of the edges must leave the vertex through one of the other equivalent edges; any arrow of the magnetic field entering a vertex must exit it. This is a severe restriction, as it implies that the number of edges converging at a vertex must be ``even''. The simplest case is when the basic polyhedron has four edges converging at each vertex (2 is impossible and 3 is ``forbidden''). Among the five regular polyhedra, only the octahedron has this property: three edges converge at each vertex of the tetrahedron, cube and dodecahedron, and five at each vertex of the icosahedron. Octahedra do not fill up the whole space, i.e. they cannot produce a Bravais lattice on their own. Some minerals have lattices made up of a combination of octahedra and tetrahedra, but this possibility is ruled out here because only three edges converge at a tetrahedron vertex. When octahedra, or any other less simple polyhedra, are placed in contact, they may share a filament merged from two contacting edges. In this case, we assume that both edges carry the magnetic field in the same direction, because otherwise reconnection of magnetic field lines would take place. This greatly restricts the possible ``a priori'' ways of putting the basic polyhedra in contact. We therefore conclude that the simplest lattice is made up of regular octahedra. In the next section we will consider which octahedron lattice is compatible with all the magnetic and simplicity requirements.
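The parity restriction can be checked mechanically. The short sketch below (the edge counts per vertex are standard geometry, not taken from this paper) confirms that the octahedron is the only regular polyhedron with an even number of edges, at least four, converging at each vertex:

```python
# Number of edges converging at each vertex of the five regular polyhedra.
# Zero net magnetic flux at a vertex requires this number to be even:
# every field line entering along one edge must exit along another.
edges_per_vertex = {
    "tetrahedron": 3,
    "cube": 3,
    "octahedron": 4,
    "dodecahedron": 3,
    "icosahedron": 5,
}

# Two edges would force a trivial back-and-forth field line, so require >= 4.
allowed = sorted(name for name, q in edges_per_vertex.items()
                 if q % 2 == 0 and q >= 4)
print(allowed)  # ['octahedron']
```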
There are also some interesting possibilities which do not fulfil all of our requirements, in particular the requirement that all edges and vertexes should be equivalent. Let us describe one of the most interesting, even though it has been discarded as the simplest one. There are only fourteen kinds of simple space lattices (Bravais lattices). Among them, the body-centred structures are the best suited to accommodate the above restriction of four edges converging at a node of the unit cell (see figure 1). Note that the same is true for tetragonal and orthorhombic body-centred structures; that is, the directional configuration is preserved under a dilation along the orthogonal axis of the cell. \begin{figure} \caption[]{ Body-centred cube, showing convergent and divergent vertexes and the central point} \end{figure} In this particular body-centred cube all edges are equivalent, but not all vertexes are. Convergent vertexes are those at which the magnetic fluxes from the three convergent edges converge; divergent vertexes are those from which they diverge. To ensure $\nabla \cdot \vec{B}=0$, the straight line joining a vertex and a central point must support a magnetic flux three times that of one of the edges. When these individual body-centred cubes are assembled to make a complete tessellation, it is easy to calculate that not all vertexes and central points are equivalent. Taking the magnetic flux in each individual edge as unity, the magnetic flux entering (or exiting) a vertex is 24, whereas at the central point it is only 12. Vertexes and central points are not equivalent, nor are convergent and divergent vertexes equivalent. The lattice containing both types of vertexes is, however, simple, like a salt crystal. \section{The ``Egg-Carton'' Universe} Therefore, discarding the body-centred cubic solution, we have looked for octahedric structures with all vertexes and edges completely equivalent.
After some trials, the only possibility was found to be the one shown in fig. 2. This consists of a primitive cubic lattice in which an octahedron is located at each lattice node and then expanded until it connects with its six nearest neighbours through its vertexes. In this configuration eight edges converge at any one vertex. In addition, all the edges and vertexes are equivalent, supporting the same magnetic flux. Note again that this configuration maintains its topological properties when a dilation operation along the orthogonal axis converts the octahedron shape into a bipyramid. This lattice is built up from octahedra joining only at their vertexes. It reminds us of a structure of superimposed ``egg-cartons'', with the spaces for the eggs representing the voids of the large scale structure. \begin{figure} \caption[]{ Lattice of octahedra contacting at their vertexes} \end{figure} It is difficult to find observational evidence to check this prediction, due to the relative scarcity of data available today about the large scale structure of the Universe. Our edges are super-dense photon filaments which after recombination become matter filaments. At those places where edges or filaments converge we would have still larger concentrations of matter. Filaments are made up of superclusters (e.g. Coma) or simply connect superclusters (Haynes \& Giovanelly 1986; Tago, Einasto \& Saar 1984). It is an observational fact that filaments are predominant (Gregory \& Thomson 1978; Joeveer \& Einasto 1978; Tully \& Fisher 1978; Lapparent, Geller \& Huchra 1986), with voids in between, and most of them connect superclusters (Gott, Weinberg \& Melott 1987). But the prediction here would be that {\it eight} filaments join each other in a supercluster or in a non-filamentary region in which the matter concentration is especially high. Are there such superclusters, from which eight filaments diverge, in the actual structure? Einasto (1992) and Einasto et al.
(1984) compare the structure of the Local Supercluster with a spider. Spiders have eight legs, and Einasto's spider is no exception: examining figure 18 in Einasto's (1992) review, precisely eight diverging filaments can be observed. \begin{acknowledgements}We have held valuable discussions with the crystallographer M. Rodriguez-Gallego. \end{acknowledgements}
\section*{Data and methodology} We start by considering the six strong lensing time delay measurements, of which five were analyzed blindly, from the H0LiCOW collaboration in~\cite{Wong:2019kwg,Suyu:2009by,Suyu:2013kha,Wong:2016dpo,Birrer:2018vtm,Rusu:2019xrq,Chen:2019ejq}. The difference of excess time delays between two lensed images (at angular positions $\mathbf{\theta}_i$ and $\mathbf{\theta}_j$) of a source (at angular position $\mathbf{\beta}$) is given by: \begin{align} \label{Eq:TimeDelay} \Delta t_{ij} = \frac{D_{\Delta t}}{c} \Bigg[ \frac{(\mathbf{\theta}_i - \mathbf{\beta})^2}{2} - \psi (\mathbf{\theta}_i) - \frac{(\mathbf{\theta}_j - \mathbf{\beta})^2}{2} + \psi (\mathbf{\theta}_j) \Bigg] \,, \end{align} where $\psi(\mathbf{\theta})$ is the lens potential. By measuring time delays and modelling the gravitational potential of the lens one can infer the time delay distance: \begin{align} \label{Eq:TimeDelayDistance} D_{\Delta t} \equiv (1+z_l)\frac{D_l D_s}{D_{ls}} \,, \end{align} where, hereafter, the subscripts $l$ and $s$ indicate quantities referring to the lens and source respectively and $D$ is the angular diameter distance of the different objects. As we can see, the measured time delay distance is sensitive to the expansion history of the universe through its dependence on the angular diameter distance at two different redshifts. For the measured systems the spread in redshift is large, with the lens redshifts ranging in $z\in [0.3,0.7]$ and the source redshifts ranging in $z\in [0.6,1.7]$, thus making the time delay distance measurements sensitive to the shape of the distance-redshift relation. In addition to the time delay distance measurements, lens kinematic data can be used to estimate the angular diameter distance of the lens, $D_l$, by comparing the dynamical mass with the lensing mass (which depends on distances)~\cite{Paraficz:2009xj,Chen:2019ejq, Jee:2014uxa, Birrer:2015fsm, Jee:2019hah}.
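As an illustration of Eq.~\eqref{Eq:TimeDelayDistance}, the sketch below evaluates the time delay distance for a hypothetical lens system in a flat $\Lambda$CDM background; the cosmological parameters and redshifts are placeholders, not values used in the H0LiCOW analysis:

```python
import math

# Illustrative only: Eq. (2) in a flat LCDM background with placeholder values.
H0 = 70.0          # km/s/Mpc (assumed, not fitted)
Om = 0.3           # matter density parameter (assumed)
c = 299792.458     # km/s

def comoving_distance(z, n=2000):
    # Trapezoidal integration of c dz' / H(z') from 0 to z.
    zs = [i * z / n for i in range(n + 1)]
    f = [c / (H0 * math.sqrt(Om * (1 + x) ** 3 + 1 - Om)) for x in zs]
    return (z / n) * (sum(f) - 0.5 * (f[0] + f[-1]))

def time_delay_distance(zl, zs):
    chi_l, chi_s = comoving_distance(zl), comoving_distance(zs)
    Dl = chi_l / (1 + zl)              # angular diameter distance to the lens
    Ds = chi_s / (1 + zs)              # ... to the source
    Dls = (chi_s - chi_l) / (1 + zs)   # lens-source distance (flat universe)
    return (1 + zl) * Dl * Ds / Dls    # in Mpc

print(round(time_delay_distance(0.5, 1.5)))  # of order a few Gpc
```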
The measurements of $D_{\Delta t}$ and $D_l$ are in general correlated since they use different aspects of the same data \cite{Birrer:2015fsm, Birrer:2018vtm}. Since the H0LiCOW collaboration has not yet publicly released the full posterior of the two distance measurements for all but one of its observations, we separately consider the marginalized constraints on $D_{\Delta t}$ and $D_l$. We observe that, if we consider logarithmic distances, $\log_{10}D_{\Delta t}$ and $\log_{10}D_l$, then the posterior distribution of the H0LiCOW measurements becomes practically indistinguishable from Gaussian. The reason why logarithmic distances are likely to be Gaussian distributed, or very close to that, is that they are computed as the difference of well measured quantities rather than their ratios. We discuss further details on the treatment of the strong lensing measurements in Appendix~\ref{App:SLGaussianTest}. This allows us to convert the constraints on $D_{\Delta t}$ and $D_l$ from~\cite{Wong:2019kwg} into constraints on $\log_{10}D_{\Delta t}$ and $\log_{10}D_l$, properly accounting for the Jacobian of the transformation, and to consider the latter to be Gaussian distributed. We check that cosmological results obtained by fitting the logarithmic measurements reproduce the ones reported in~\cite{Wong:2019kwg}, as detailed in Appendix~\ref{App:SLGaussianTest}. We then consider the Pantheon SN compilation~\cite{Scolnic:2017caz} that provides accurate measurements of relative distances across the redshift range $z\in [0.01,2.26]$ with $1048$ SN measurements. We use the measurement of the Hubble constant of $H_0=74.03\pm 1.42$ from the SH0ES project~\cite{Riess:2019cxk}. Cepheid variables are used to calibrate the absolute magnitude of the SN so that the measured distance modulus can be used to directly estimate luminosity distances. Further details on the calibration of the SN distance modulus can be found in Appendix~\ref{App:SN_cal}. 
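The change of variables to logarithmic distances can be illustrated with a small Monte Carlo; the numbers below are invented for illustration and are not H0LiCOW values. For small fractional errors the Jacobian reduces to $\sigma_{\log_{10} D} \simeq \sigma_D / (D \ln 10)$:

```python
import math, random

random.seed(0)

# Hypothetical distance constraint (illustrative, not a real measurement):
mu_D, sigma_D = 5000.0, 250.0   # Mpc

# Monte Carlo: push Gaussian samples of D through log10.
samples = [math.log10(random.gauss(mu_D, sigma_D)) for _ in range(200_000)]
mean_log = sum(samples) / len(samples)
std_log = math.sqrt(sum((x - mean_log) ** 2 for x in samples) / len(samples))

# Linearized change of variables: the Jacobian dlog10(D)/dD = 1/(D ln 10),
# evaluated at the mean, maps the error on D to the error on log10 D.
approx = sigma_D / (mu_D * math.log(10))

print(round(std_log, 4), round(approx, 4))  # agree for small fractional errors
```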
While the SH0ES analysis is the most mature and precise of the local measurements of $H_0$, analyses with alternative approaches are underway. A recent analysis using the Tip of the Red Giant Branch method by the Carnegie-Chicago Hubble Program~\citep{Freedman:2019jwv} yields a lower value of $H_0$ with somewhat larger uncertainty than SH0ES (but there is some debate about their analysis, see~\cite{Yuan:2019npk}). We test this alternative result also in Appendix~\ref{App:SL_Bias}. Note that the SN sample cannot be calibrated using the $H_0$ determination from CMB measurements in a model independent way. Not surprisingly, when using the standard cosmological model determination of the sound horizon scale, ~\cite{Macaulay:2018fxi,Alam:2016hwk,Aubourg:2014yra} find $H_0$ consistent with the CMB value. Finally we note that bias corrections of SN luminosities assume the $\Lambda$CDM model and in principle this breaks model independence, but as discussed in~\cite{Kessler:2016uwi} it is a very small effect. Given that the redshift range spanned by SN observations is populated by a large number of measurements we can interpolate between them to obtain an estimate of distances as a function of redshift. This is achieved by Gaussian process regression of the measured distance modulus. The Gaussian process kernel and kernel parameters are chosen so that the Gaussian process inference is as close as possible to the binned Pantheon SN sample. We find that this procedure is flexible enough to fully capture all the features that are present in the binned SN sample, starting from the full sample, thus effectively obtaining a version of the binned SN sample that is continuous in redshift. We also test the GP covariance matrix, and compare it to the result from polynomial interpolation which tends to significantly underestimate the error bars at intermediate redshifts. Further details on the implementation of the Gaussian process regression are discussed in Appendix~\ref{App:SN_GP}. 
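A minimal, self-contained version of such a regression is sketched below; the kernel, its hyperparameters, and the data points are all illustrative stand-ins rather than the actual Pantheon setup:

```python
import math

# Toy Gaussian process regression: interpolate synthetic "distance modulus"
# values (NOT real Pantheon data) to an unobserved redshift.
def rbf(a, b, amp=25.0, scale=0.3):
    # Squared-exponential kernel; amplitude and length scale are illustrative.
    return amp * math.exp(-0.5 * ((a - b) / scale) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small system A x = b.
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

z_obs = [0.1, 0.3, 0.6, 1.0]
mu_obs = [38.3, 40.9, 42.6, 44.1]   # synthetic, roughly SN-like magnitudes
noise = 0.05
ybar = sum(mu_obs) / len(mu_obs)    # model residuals around the sample mean

K = [[rbf(a, b) + (noise ** 2 if i == j else 0.0)
      for j, b in enumerate(z_obs)] for i, a in enumerate(z_obs)]
alpha = solve(K, [y - ybar for y in mu_obs])

def predict(z):
    # GP posterior mean: ybar + k_*^T (K + sigma^2 I)^{-1} (y - ybar)
    return ybar + sum(rbf(z, zo) * a for zo, a in zip(z_obs, alpha))

print(round(predict(0.45), 2))  # lies between the z=0.3 and z=0.6 values
```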
\begin{figure*}[tbp] \centering \includegraphics[width=\textwidth]{figure_2} \caption{\label{Fig:DistanceRatioComparison} Comparison of the H0LiCOW measured time delay distance ratios (a) and lens distance ratios (b) with the Pantheon SN predicted values (shaded bands). The ratios of angular diameter distances predicted from BAO are also shown in Panel b). The time delay distances in Panel a) agree at the $78.2\%$ confidence level while the lens distances in Panel b) agree at the $54.5\%$ confidence level. The three BAO points in Panel b) and predictions from Pantheon SN agree at the $95.4\%$ confidence level. Note that since only distance ratios are considered in this test, the results are independent of the value of $H_0$. } \end{figure*} With the Gaussian process for the SN we can now predict, with Eq.~\eqref{Eq:TimeDelayDistance}, the time delay and lens distances that H0LiCOW should have observed, independently of the cosmological model. We note that the predicted distances from SN are correlated; we take this into account when computing the statistical significance of the reported results. \section*{Results} In Fig.~\ref{Fig:DistanceComparison} we show the results of the SN prediction for the time delay and lens distance measurements. In Panel a) we can see that the time delay distance measurement predicted by SN agrees with the direct measurements from H0LiCOW. The uncertainties in the two methods are comparable. In Panel b) the lens distance as directly measured by H0LiCOW and predicted by SN are compared. As in the previous case these are largely in agreement. The error bars of the SN predicted distances are significantly smaller than the H0LiCOW ones because this quantity is directly measured by a large number of SN at intermediate redshifts. To precisely quantify the agreement between the H0LiCOW measurements and the SN ones, we exploit the fact that both logarithmic distances are close to Gaussian distributed.
The difference between the two, weighted by the inverse of the sum of the two covariances, is then chi-squared distributed with the number of degrees of freedom equal to the number of data points. This results in a probability of agreement of $85.7\%$ for the time delay distances and $63.6\%$ for the lens distances. Both results indicate very good agreement. Further, in Fig.~\ref{Fig:DistanceComparison}, there are no outlier measurements. Since the two distributions are very close, possible non-Gaussianities in the tails of the distributions are not expected to change the results significantly. This also rules out the possibility of a systematic uncertainty in the strong lensing sample at a level larger than the reported error bars. Beyond the above tests, it is crucial to test whether the SN and strong lensing measurements agree on the amplitude and ``shape'' (the redshift dependence) of the distance-redshift relation. The amplitude test relates to possible discrepancies in the determination of the Hubble constant, while being fully independent of the expansion history. The shape test would tell us if there is agreement between the two measurements regardless of the overall calibration that measures the Hubble constant. The amplitude of the distance-redshift relation can be tested by looking at the average residual logarithmic distance. The H0LiCOW data measure the average amplitude to be $\langle \log_{10}D_{\Delta t} \rangle = 3.514 \pm 0.012$ while the SN predict the average amplitude to be $\langle \log_{10}D_{\Delta t}^{\rm SN} \rangle = 3.517 \pm 0.017$. These two determinations are in full agreement at the $89\%$ confidence level and provide a test at about $5\%$ precision. For the lens distances we similarly have $\langle \log_{10}D_{l} \rangle = 3.026 \pm 0.044$ and $\langle \log_{10}D_{l}^{\rm SN} \rangle = 3.0416 \pm 0.0088$, which are again in full agreement, at about $10\%$ precision.
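The quoted probabilities of agreement follow from the survival function of a chi-squared variable. The sketch below works through the procedure on two invented data points (not the H0LiCOW numbers), using the closed-form survival function available for an even number of degrees of freedom:

```python
import math

# Agreement test between two Gaussian measurement sets: the difference vector,
# weighted by the summed covariance, is chi-squared distributed.
# Numbers are made up for illustration (two independent systems assumed).
diff   = [0.010, -0.018]   # log10-distance differences (set 1 minus set 2)
sigma1 = [0.012, 0.015]    # errors of the first data set
sigma2 = [0.016, 0.014]    # errors of the second data set

chi2 = sum(d ** 2 / (s1 ** 2 + s2 ** 2)
           for d, s1, s2 in zip(diff, sigma1, sigma2))

# For an even number of degrees of freedom k, the survival function has the
# closed form  P(X > x) = exp(-x/2) * sum_{i < k/2} (x/2)^i / i!
def chi2_sf_even(x, k):
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(k // 2))

p = chi2_sf_even(chi2, k=2)
print(round(chi2, 3), round(p, 3))  # p close to 1 means good agreement
```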
To test the redshift dependence of the measurements we can consider distance ratios, or differences in logarithmic distances. Since the Hubble constant enters as an overall distance multiplier, its value cancels in the ratio. We then take both data sets and consider the ratio with respect to the system with the lowest lens redshift. Although arbitrary, this choice does not influence the outcome of the test. All possible choices would just be different linear combinations of the same data, thus leaving the statistical significance unchanged. We show in Fig.~\ref{Fig:DistanceRatioComparison} the comparison of distance ratios, as predicted by SN measurements and as estimated by H0LiCOW. As in Fig.~\ref{Fig:DistanceComparison}, the ratios also show agreement between the two measurements. Since logarithmic distance differences are also close to Gaussian distributed, we can easily compute the statistical significance of their agreement. The time delay distance ratios are in agreement at the $78.2\%$ level and the lens distance ratios are in agreement at the $54.5\%$ confidence level. We note that the H0LiCOW measurements are incompatible with no redshift dependence and hence detect the shape of the distance-redshift relation at $34\sigma$ in Panel a) and $2.6\sigma$ in Panel b). We can further compare these distance ratios with predictions from the BAO measurements. The BAO measurements are sensitive to the ratio of the angular diameter distance of the galaxies and the photon-baryon sound horizon scale at the epoch of decoupling. Therefore, we can take the ratio of BAO measurements at two different redshifts and remove any sensitivity to the overall calibration with the sound horizon. In this analysis we consider three BAO measurements from BOSS DR12 data~\cite{Alam:2016hwk} and one measurement from the eBOSS DR14 LRG data~\cite{Bautista:2017wwp}. We use the lowest redshift DR12 measurement as our reference BAO measurement in the distance ratio test.
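That distance ratios are insensitive to $H_0$ can be verified directly: $H_0$ enters every distance as an overall $1/H_0$ factor, which cancels in the ratio. A quick numerical check in a flat $\Lambda$CDM background (parameters illustrative):

```python
import math

# H0 enters every distance as an overall 1/H0 factor, so it cancels in
# distance ratios. Quick check in flat LCDM with illustrative parameters.
def ang_diam_distance(z, H0, Om=0.3, n=2000, c=299792.458):
    zs = [i * z / n for i in range(n + 1)]
    f = [c / (H0 * math.sqrt(Om * (1 + x) ** 3 + 1 - Om)) for x in zs]
    chi = (z / n) * (sum(f) - 0.5 * (f[0] + f[-1]))  # comoving distance, Mpc
    return chi / (1 + z)

ratios = []
for H0 in (67.0, 74.0):
    ratio = ang_diam_distance(1.0, H0) / ang_diam_distance(0.3, H0)
    ratios.append(ratio)
    print(H0, round(ratio, 6))  # identical ratio for both H0 values
```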
The comparison between the BAO and SN predicted distance ratios can be performed as before and results are in agreement at the 95.4\% confidence level. To qualitatively compare the results to the H0LiCOW measurements we shift the overall amplitude to match the H0LiCOW reference redshift. We compare the distance ratios of all three probes in Fig.~\ref{Fig:DistanceRatioComparison}. All measurements show remarkable consistency in this test, which is independent of $H_0$ and of the cosmological model. Moreover, if we use the SN Gaussian process to calibrate the BAO angular diameter distances, as discussed in~\cite{Aylor:2018drw}, we can directly measure the sound horizon scale independently of the cosmological model. Individually the four calibrated BAO measurements that we consider are in good agreement on the sound horizon determination, giving: $137.39 \pm 3.91$ Mpc, $136.05 \pm 3.83$ Mpc and $137.48 \pm 3.92$ Mpc for the three SDSS DR12 observations in increasing order of their effective redshift; $133.77 \pm 6.24$ Mpc for the SDSS DR14 LRG observation. Jointly the four measurements result in a sound horizon measurement of $135.92 \pm 3.26$ Mpc, which is in $3.3 \sigma$ tension with the {\it Planck} results of $147.09 \pm 0.26 $ Mpc~\cite{Aghanim:2018eyx}. This effectively accounts for a large portion of the $4.4 \sigma$ Hubble constant tension. Since the sound horizon is constant after recombination, this part of the Hubble constant tension cannot be resolved by introducing new physics between the redshift of the BAO measurements and recombination. \section*{Summary and discussion} We have presented a new way of testing the consistency of distance measurements from supernovae and strong lensing time delays that is independent of the cosmological model. Our method exploits the power of SN observations across a large range of redshifts to directly predict the outcome of strong lensing measurements. 
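The significance of the sound horizon discrepancy quoted above can be estimated with a simple Gaussian figure of merit; this naive symmetric-error estimate need not reproduce the exact value from the full analysis:

```python
import math

# Naive Gaussian estimate of the tension between the SN-calibrated BAO sound
# horizon and the Planck value (symmetric Gaussian errors assumed).
r_bao, s_bao = 135.92, 3.26   # Mpc, joint SN-calibrated BAO value
r_cmb, s_cmb = 147.09, 0.26   # Mpc, Planck value

n_sigma = abs(r_cmb - r_bao) / math.sqrt(s_bao ** 2 + s_cmb ** 2)
print(round(n_sigma, 1))  # close to the ~3 sigma level quoted in the text
```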
We use Gaussian process regression to interpolate across redshift and show that it provides robust results. Tests of the covariance matrix show that it is more reliable than polynomial interpolation, which tends to underestimate the error bars at intermediate redshifts. We devise three types of tests sensitive to the distance-redshift relation: one directly tests distances (and therefore biases in $H_0$), one tests their calibration and another tests distance ratios. While the first test is sensitive to both calibration and redshift dependent systematic effects, the other two single out and test these two aspects independently. We find that all tests report excellent agreement between the Pantheon supernovae, calibrated with the SH0ES distance ladder, and H0LiCOW time delay measurements. Given the model independence of our tests we conclude that, at present sensitivity, there is no indication of the presence of unaccounted systematic effects in either data set. In particular, if the distance-redshift relation inferred from SN is correct, there is no evidence of a residual bias due to mass modeling uncertainties in the strong lensing data. It is possible that both measurements have a bias of the same sign, magnitude and redshift dependence -- this unlikely scenario would evade our tests. The possibility that uncertainties in mass modeling, in particular the mass sheet degeneracy, have biased the strong lensing determination of $H_0$ has been discussed in the literature \citep{Schneider:2013wga,Xu:2015dra, Unruh:2016adf,Sonnenfeld:2017dca,Kochanek:2019ruu}. Recently~\cite{Kochanek:2019ruu} has suggested that this leads to at least a 10\% level of uncertainty on $H_0$ in present and near future measurements. We explicitly check the impact an unknown residual systematic would have on both $D_{\Delta t}$ and $D_{l}$.
We create fake strong lensing observations and test for several types of unaccounted for errors: \begin{itemize} \item Biases in the distance-redshift relation that directly impact $H_0$ or the amplitude of the distance-redshift relation. An 8\% bias that would fully reconcile the tension with {\it Planck} is disfavored at nearly 3$\sigma$. \item Redshift dependent biases. We test the ratio of distances as well as a bias that scales as $(1+z)$, motivated by uncertainties in lens mass modeling. These are also disfavored at over 2$\sigma$. \item Underestimated errors. We artificially increase the covariance in the lensing measurements to obtain 10\% uncertainty on $H_0$. We then find that the probability of disagreement with SN inferred distances drops to 0.05\%, i.e. over 3$\sigma$ evidence against inflated errors. \end{itemize} Unknown measurement systematic effects and/or incorrect lens modelling are likely to affect different systems differently, so they would show up as an unexpected redshift dependence of the measurements. On the other hand if they affected all lenses the same way, they would affect the amplitude. These problems are not guaranteed to show up as a discrepancy in the determination of some cosmological parameter, so our tests provide an independent check. We discuss the detailed results of these tests in Appendix~\ref{App:SL_Bias}. Strong lensing and SN determination of distances almost certainly do not share the same systematic uncertainties. The consistency we have found between the two sets of measurements implies that the discrepancy with the CMB is more likely due to new physical phenomena, or a potential systematic error in the CMB analysis (which is considered unlikely as it has been tested extensively~\cite{Addison:2015wyg,Aghanim:2016sns,Hou:2017smn,Aylor:2017haa,Aghanim:2019ame}). 
With reduced uncertainties, as expected by increases in the number of SN~\cite{Abbott:2018wog, Scolnic:2019apa, Hlozek:2019vjs,Shajib:2019toy} and new lens systems~\cite{Huber:2019ljb, Yildirim:2019vlv}, we expect our methodology to provide more stringent tests for systematic uncertainties. In the near term joint analysis of the full $D_{\Delta t}$ and $D_{l}$ posterior will also strengthen our tests. With a factor of two reduction in statistical uncertainties, we could rule out a 5 percent bias at the 3$\sigma$ level, independent of the cosmological model -- this would certainly strengthen the case for new physics. We find that measurements of BAO distances are also in agreement with predictions from SN. Once calibrated with SN, the BAO measurements are in tension with the CMB over the determination of the sound horizon. We find that this accounts for a large part of the statistical significance of the Hubble constant tension. Since the sound horizon is constant after recombination, this tension is largely independent of the expansion history between the redshift of the measured BAOs, of about $z\sim 0.7$, and recombination. The structure of the CMB peaks measures the angular size of the sound horizon very precisely. Thus an explanation of the Hubble constant tension that would significantly change the inference of the CMB parameters must rely on new physical phenomena before recombination (see~\cite{Knox:2019rjx} and references therein). We implicitly assumed in our analysis that the universe is spatially flat. It is possible to perform the same tests assuming a curved universe. Since this will introduce an extra free parameter our tests would show stronger consistency. We defer an analysis of time delay distances and their implications for curvature to a future study (see \cite{Collett:2019hrr, Arendse:2019hev} for related analyses). 
\acknowledgments We thank Eric Baxter, Gary Bernstein, Simon Birrer, Dillon Brout, Neal Dalal, Wayne Hu, Mike Jarvis and Sherry Suyu for helpful discussions and comments on the paper. SP is supported in part by the U.S. National Science Foundation award AST-1440226. MR and BJ are supported in part by NASA ATP Grant No. NNH17ZDA001N, and by funds provided by the Center for Particle Cosmology. BJ is supported in part by the US Department of Energy Grant No. DE-SC0007901. Computing resources were provided by the University of Chicago Research Computing Center through the Kavli Institute for Cosmological Physics at the University of Chicago.
\section{Introduction} The rapid development of artificial intelligence (AI) has greatly changed society. Given that a large amount of data is generated in our daily life, how to effectively and efficiently use these data to train machine learning models has become a challenge that needs to be addressed. Traditional machine learning frameworks require servers to collect data and perform training tasks in a centralized way, which causes many problems and hinders the development of AI: 1) it is expensive to collect enough data; 2) performing machine learning on servers consumes a lot of resources, such as computation, communication, and storage resources; 3) transferring original data from end devices to servers can lead to data privacy leakage. The emergence of federated learning (FL) has provided a promising solution to tackle the above problems. Google proposed the concept of FL in 2016 as a distributed machine learning framework\cite{mcmahan2017communication,kairouz2019advances}. The basic idea of FL is that multiple end devices, i.e., clients, collaboratively train a machine learning model. Unlike traditional machine learning frameworks, FL does not require clients to transmit raw data to a central server, but only the updates of the trained local models, thus protecting the data privacy of clients. FL is suitable for large-scale machine learning tasks because it distributes the training task to a large number of end devices, while the central server is only responsible for model aggregation, thus reducing the computational pressure on the server. Currently, FL has been applied in various fields, such as healthcare\cite{xu2020towards}, transportation\cite{zhang2021towards}, communications \cite{liu2020secure}, and the Internet of Things (IoT)\cite{lu2019blockchain}. In particular, FL has been used to support the development of 5G and 6G, enabling more secure and efficient schemes for future communications and networking.
However, FL has also encountered several challenges. Among them, security is one of the most important concerns for researchers. Although FL does not require clients to upload raw data, which protects data privacy, the distributed nature of FL means there is no guarantee that all devices involved in training are honest, i.e., they may upload malicious submissions. In addition, end devices can be vulnerable to external attacks, leading to erroneous local model updates. There are many attacks on FL, such as poisoning attacks\cite{tolpegin2020data,cao2019understanding}, backdoor attacks\cite{bagdasaryan2020backdoor,sun2019can}, and inference attacks\cite{nasr2019comprehensive,luo2021feature}. Poisoning attacks are divided into data poisoning attacks and model poisoning attacks, both of which are untargeted attacks. In other words, the aim is to degrade the model performance in general rather than to achieve some targeted misclassification. Data poisoning attacks manipulate the raw data on the clients\cite{sun2021data}, while model poisoning attacks manipulate local model updates\cite{zhou2021deep}. Some studies have shown that model poisoning attacks are more likely to cause damage to FL than data poisoning attacks\cite{cao2020fltrust}. Although many existing surveys focus on analyzing the security problems faced by FL, little attention has been paid to defense methods. For example, the work in \cite{mothukuri2021survey,jere2020taxonomy,lyu2020threats} details the possible security problems of FL, but there is little analysis of how to defend against model poisoning attacks. Considering the severity of model poisoning attacks on FL, it is necessary to survey their defense mechanisms so as to attract more attention and inspire future research.
In our survey, we first introduce the relevant background knowledge about FL and model poisoning attacks, then classify and detail the existing defense methods, and finally discuss the challenges and future research directions. To the best of our knowledge, this is the first survey on defense mechanisms against model poisoning attacks. Our main contributions are as follows. \begin{itemize} \item We investigate the existing defense methods for model poisoning attacks in FL and classify them into two categories: evaluation methods for local model updates and aggregation methods for the global model. \item We describe some of the defense methods in detail, analyzing their workflows and application scenarios. \item We summarize the challenges of defense methods against model poisoning attacks and discuss future research directions. \end{itemize} The remainder of this paper is organized as follows. We introduce the background knowledge of FL and model poisoning attacks in Section \ref{back}. The detailed analysis of defense strategies toward model poisoning attacks in FL is described in Section \ref{defense}. In Section \ref{challenge}, we discuss the challenges and some promising future research directions. In the end, we conclude the whole paper in Section \ref{concl}. \section{Background Knowledge}\label{back} In this section, we illustrate the background of FL and model poisoning attacks. \subsection{Federated Learning}\label{fl} We consider a conventional FL framework, which consists of a \textit{central server} and numerous local devices termed \textit{clients}. We let $i\in \left\{1,2,3,\cdots,N\right\}$ represent an individual client, where $N$ is the total number of clients. Each client holds a local dataset of a different size, denoted as $D_i$.
At the beginning of each round, the server first selects a certain number of clients to participate in training, and we use $m_k$ to represent the fraction of clients chosen in round $k$, where $k\in\left\{1,2,3,\cdots,K\right\}$ and $K$ is the total number of training rounds for a specific FL task. Once the clients are selected, the server sends the initial global model, denoted as $w_0$, to those clients. Then the clients train local models on their own raw data based on $w_0$ and send the trained local model updates $w_i^k$ to the central server, where $w_i^k$ denotes the updates submitted by client $i$ in round $k$. Next, the central server collects the local model updates and runs an aggregation algorithm to update the global model. This process terminates once the loss of the global model has converged. The most popular aggregation algorithm is \textit{federated averaging} (FedAvg)\cite{wang2020optimizing,mcmahan2017communication}, and it can be expressed as $\delta^k=\frac{\sum_{i=1}^N |D_{i}| \delta_i^k}{\sum_{i=1}^N |D_{i}|},$ where $|D_i|$ is the size of the local dataset of client $i$, $\delta^k$ is the aggregated weight difference over all clients, and $\delta_i^k$ is the weight difference of client $i$ in round $k$, calculated by $\delta_i^k = w_i^k-w_i^{k-1}$. \subsection{Model Poisoning Attacks} FL is a distributed learning framework that requires multiple devices to participate, but there is no guarantee that the selected devices will work honestly. In other words, the clients in FL are not trustworthy from the central server's perspective. This can lead to many potential security problems, such as poisoning attacks, backdoor attacks, and inference attacks. In this paper, we focus on the model poisoning attack, which is one of the most popular attacks against FL. \begin{figure}[htbp] \centering \includegraphics[width=0.30\textwidth]{f2.pdf} \caption{The topology of model poisoning attacks in FL.
} \label{fig:framework} \end{figure} Model poisoning attacks can be initiated by malicious clients or by an external attacker who controls some clients. In this paper, we do not distinguish the attack initiators and uniformly refer to them as malicious clients. The topology of model poisoning attacks in FL is shown in Fig. \ref{fig:framework}. Specifically, when the malicious clients finish training based on the initial global model $w_0$ in round $k$, they modify the local model updates $\delta_i^k$ and then submit them to the central server. Since the central server does not have access to the raw data of the local clients, and the data on the clients are usually non-independent and identically distributed (non-IID), it is difficult for the central server to identify the modified local updates, which leads to slow convergence or reduced accuracy of the global model. Generally speaking, model poisoning is an untargeted attack, whose purpose is not to target a specific label but to degrade the overall performance of the final model. Based on the number of malicious clients, we can classify model poisoning attacks into those initiated by a single malicious client and those initiated by multiple malicious clients. The effect of the former is more obvious when the number of clients is small, whereas when the number of participants is large, its impact is offset by weight-based algorithms such as FedAvg. The latter is more practical in mobile scenarios. The reasons why it is difficult to detect and defend against model poisoning attacks are as follows: \begin{itemize} \item The data of local clients are non-IID, which leads to significant differences among the obtained local model updates, causing difficulties for detection. \item The central server cannot obtain the raw data of local clients, and therefore cannot use these data for verification.
\item The current popular model aggregation method (i.e., FedAvg) assigns weights to the submitted updates based only on the clients' data volumes, applies no special treatment to contaminated updates, and thus cannot defend against model poisoning attacks. \end{itemize} \section{Defense Strategies Towards Model Poisoning Attacks}\label{defense} In this section, we analyze the existing defense strategies toward model poisoning attacks. Although attack detection and defense are two different phases, we treat them as one in this paper since they usually work together to protect FL. From the existing research, approaches to defend against model poisoning attacks can be divided into two categories: one is to identify malicious submissions by designing evaluation mechanisms for local model updates, and the other is to design novel Byzantine fault-tolerant aggregation algorithms based on mathematical statistics. These two approaches are usually used jointly. \subsection{Evaluation Methods for Local Model Updates} Intuitively, the most straightforward way to defend against model poisoning attacks is to examine the submissions of clients. However, since the central server does not have direct access to the raw data of the end devices, evaluating the model updates submitted by devices has become a challenge. In this part, we discuss some of the existing evaluation methods in detail. \subsubsection{Spectral Anomaly Detection Based Method} The basic idea of spectral anomaly detection is to embed benign data and malicious data into a low-dimensional space. In \cite{li2020learning}, a spectral anomaly detection based evaluation framework for FL is proposed. This framework first assumes that a public dataset is available on which the spectral anomaly detection model can be trained.
Then, the local model updates, including benign updates and malicious updates, are embedded into a low-dimensional latent space. In this way, the essential features of these updates are well maintained, and the two kinds of updates can be easily distinguished after removing the noisy and redundant features. After the detection process, the malicious updates are removed, and only the benign updates are taken into account during the global model aggregation process. According to the experimental results, the spectral anomaly detection based evaluation method performs well in eliminating abnormal updates while maintaining high model accuracy. \subsubsection{Truth Inference Based Evaluation Method} In \cite{tahmasebian2021robustfed}, the authors utilize an optimization based truth inference method to evaluate the reliability of the submitted updates in FL before the aggregation of the global model. The basic idea of truth inference in FL is to minimize the weighted deviation from the true aggregated parameters. First, they calculate the reliability score of each update. Then, they propose two methods to aggregate the global model: the first uses the reliability score as the weight of each update, and the other removes updates with low reliability scores. However, this work only focuses on IID data, making it impractical for non-IID cases. \subsubsection{Entropy Based Filtering Method} Park et al.\cite{park2021sageflow} design an entropy-based filtering scheme to detect outlier updates. At the beginning, the server collects some public data and then calculates the entropy of each update on the public data. Based on their experimental observations, they argue that updates with higher entropy will lead to lower accuracy during the testing stage. Thus, they set a threshold for the entropy and filter out updates with entropy higher than the threshold.
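To make the entropy criterion concrete, the following Python sketch computes the average prediction entropy of each submitted model on the server's public data and filters out high-entropy updates. The function names and the fixed threshold are illustrative rather than taken from \cite{park2021sageflow}, which derives its filtering rule empirically.

```python
import numpy as np

def prediction_entropy(probs):
    """Average Shannon entropy of a model's predicted class probabilities.

    probs: (num_samples, num_classes) array of softmax outputs on the
    server's public data.
    """
    eps = 1e-12  # avoid log(0)
    per_sample = -np.sum(probs * np.log(probs + eps), axis=1)
    return float(per_sample.mean())

def entropy_filter(client_probs, threshold):
    """Keep only clients whose models yield entropy below the threshold.

    client_probs: dict mapping client id -> that client model's predicted
    probabilities on the public data (illustrative interface).
    """
    return [cid for cid, p in client_probs.items()
            if prediction_entropy(p) <= threshold]
```

A confident (low-entropy) model is kept, while a model producing near-uniform predictions is filtered out.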
They further illustrate that the entropy-based filtering method can perform well even when the number of adversaries is large, overcoming the limitation of the attack ratio. \subsubsection{Cosine Similarity Based Evaluation Method} Cosine similarity evaluates the similarity of two vectors by the cosine of the angle between them. Cao et al. \cite{cao2020fltrust} utilize cosine similarity to assess the similarity between each update and the update obtained by training on the server's clean dataset. They argue that an attacker can manipulate the directions of updates to achieve the purpose of a model poisoning attack, and that the directions of the updates can, to a certain extent, indicate the honesty of the end devices. They first let the server collect a small dataset (e.g., 100 samples) as the clean dataset, on which the server trains the model. The cosine similarity calculation then yields a trust score for each update, which is used as its weight in the global model aggregation. In \cite{sattler2020byzantine}, the impact of the model poisoning attack is mitigated by dividing the updates into different groups according to the cosine similarity between updates submitted by clients; this is a new framework called Clustered Federated Learning. In \cite{xu2020towards}, a cosine similarity based evaluation method is applied to detect malicious updates. The central server keeps the reputation of each participant by checking the similarity of local model updates and removes non-contributing or malicious participants. Different from \cite{cao2020fltrust}, the schemes in \cite{sattler2020byzantine} and \cite{xu2020towards} do not require a collected clean dataset, and the cosine similarity is calculated between two different local model updates.
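As an illustration of the cosine-similarity step in \cite{cao2020fltrust}, the following Python sketch computes a trust score for each client update against the server's own update. FLTrust additionally rescales the magnitudes of client updates before aggregation, which is omitted here, and all names are illustrative.

```python
import numpy as np

def trust_scores(server_update, client_updates):
    """Cosine-similarity trust scores in the style of FLTrust.

    server_update: flattened update trained on the server's small clean
    dataset. client_updates: list of flattened client update vectors.
    Negative similarities are clipped to zero, so updates pointing away
    from the server's direction receive no weight.
    """
    s = server_update / (np.linalg.norm(server_update) + 1e-12)
    scores = []
    for u in client_updates:
        cos = float(np.dot(s, u) / (np.linalg.norm(u) + 1e-12))
        scores.append(max(cos, 0.0))  # ReLU clipping
    return scores
```

An update aligned with the server's direction scores close to 1, while opposite or orthogonal updates score 0.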
\subsubsection{Learned Lessons} Malicious nodes can be effectively identified by evaluating local model updates before model aggregation, thus reducing the negative impact of model poisoning attacks on FL. However, the evaluation methods mentioned above require examination of the data submitted by each client, which consumes considerable time and computational resources. In addition, some evaluation methods require the server to collect a portion of clean data as a basis for validating model updates, which may lead to new problems, such as energy consumption and privacy leakage. \subsection{Aggregation Methods for the Global Model} The aggregation of the global model is an important part of FL. Currently, conventional FL uses FedAvg as the aggregation method, which is unable to identify malicious submissions and thus leads to the success of model poisoning attacks. A number of studies have focused on designing novel aggregation algorithms to improve the robustness of FL. Based on the existing research, the aggregation methods used to defend against model poisoning attacks can be broadly classified into two categories: adjusting the weights of local model updates based on certain criteria, and designing aggregation algorithms using statistical methods. \subsubsection{Criteria-based Aggregation Methods} The criteria here refer to metrics used to evaluate local model updates (e.g., trust, reliability, similarity), and they are derived from the examination of the updates. For example, in \cite{cao2020fltrust}, the authors use trust scores as the weights for local model updates in the aggregation process, while in \cite{tahmasebian2021robustfed}, the authors use the reliability of local model updates as the weights. It should be noted that some aggregation methods directly discard updates that do not satisfy the criteria, which is also a form of weighting, i.e., setting their weights to 0.
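The criteria-based weighting described above can be sketched as follows (a minimal Python illustration; with the clients' data sizes as weights, the same function reduces to FedAvg, and a weight of 0 simply discards that update):

```python
import numpy as np

def weighted_aggregate(updates, weights):
    """Weighted global aggregation with per-update criteria as weights.

    updates: list of flattened local model updates (equal-length vectors).
    weights: one non-negative score per update (trust, reliability, data
    size, ...). Updates with weight 0 are effectively discarded.
    """
    w = np.asarray(weights, dtype=float)
    if w.sum() == 0:
        raise ValueError("all updates were discarded")
    u = np.stack(updates)
    # Weighted average across updates, normalized by the total weight.
    return (w[:, None] * u).sum(axis=0) / w.sum()
```

For example, equal weights yield the plain average, while zeroing a suspicious update removes it from the aggregate entirely.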
\subsubsection{Statistic-based Aggregation Methods} Different from the criteria-based aggregation methods mentioned above, statistic-based aggregation methods do not verify local model updates, but instead select data by statistical methods during the global model aggregation process. \textit{Trim-mean} \cite{yin2018byzantine} treats each model parameter independently: it sorts the values, removes the maximum and minimum ones, and takes the mean of the rest as the aggregated value of the parameter. Specifically, for each $j$-th model parameter, the server ranks the $j$-th parameter of $m$ local models, i.e., $w_{1j}$, $w_{2j}$, ..., $w_{mj}$, where $w_{ij}$ is the $j$-th parameter of the $i$-th local model, removes the largest $\beta$ and the smallest $\beta$ of them, and calculates the average of the remaining $m-2\beta$ parameters as the $j$-th parameter of the global model. Assume that at most $c$ clients are corrupted. This trimmed-mean aggregation rule achieves an order-optimal error rate of $\widetilde{O}(\frac{c}{m\sqrt{n}}+\frac{1}{\sqrt{mn}})$ when $c \leq \beta < \frac{m}{2}$ and the objective function to be minimized is strongly convex, where $n$ is the number of training data points on each client (with the assumption that each client has the same number of training data samples). \textit{Median} \cite{yin2018byzantine} is another aggregation method, which independently selects the median value of each parameter for the aggregated global model. In this Median aggregation rule, for each $j$-th model parameter, the server ranks the $j$-th parameter of $m$ local models and takes the median as the $j$-th parameter of the global model. Like the Trim-mean aggregation rule, the Median aggregation rule achieves the order-optimal error rate when the objective function is strongly convex. \textit{Krum} \cite{blanchard2017machine} selects, from the $m$ local models, the one closest to the others as the global model.
The advantage is that even if the selected model comes from a malicious attacker, its impact may be limited because it is similar to other local models that may come from benign clients. Assume that at most $c$ clients are compromised. For each local model $w_i$, the server computes the sum of the distances between $w_i$ and the $m-c-2$ local models closest to it in Euclidean distance. Krum selects the local model with the smallest sum of distances as the global model. When $c<\frac{m-2}{2}$, Krum has theoretical guarantees for the convergence of certain objective functions. \textit{Bulyan} \cite{guerraoui2018hidden} is a Byzantine fault-tolerant algorithm that first iteratively applies Krum to select candidate local models and then performs a Trim-mean on them; thus, Bulyan is a combination of Krum and Trim-mean. Specifically, Bulyan first iteratively uses Krum to select $\delta$ $(\delta\leq m-2c)$ local models. Then, Bulyan uses trimmed averaging to aggregate the $\delta$ local models. In particular, for each $j$-th model parameter, Bulyan ranks the $j$-th parameter of the $\delta$ local models, finds the $\gamma$ $(\gamma\leq\delta-2c)$ parameters that are closest to the median, and calculates their mean as the $j$-th parameter of the global model. When $c\leq\frac{m-3}{4}$, Bulyan has theoretical guarantees for the convergence of certain objective functions. \subsubsection{Learned Lessons} Existing aggregation methods, such as Trim-mean and Median, do not guarantee fidelity and robustness well\cite{cao2020fltrust}. In addition, Krum and Bulyan do not satisfy the efficiency goal because they require the server to compute the pairwise distances of local model updates, which is computationally expensive when the number of clients is large. Bulyan is not scalable because it performs Krum multiple times in each iteration to calculate the pairwise distances between local models.
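The Trim-mean, Median, and Krum rules described above can be sketched as follows (a minimal Python illustration over flattened model vectors; variable names are ours):

```python
import numpy as np

def trimmed_mean(models, beta):
    """Coordinate-wise Trim-mean: per parameter, drop the beta largest and
    beta smallest values among the m local models, then average the rest."""
    m = np.sort(np.stack(models), axis=0)  # sort each coordinate separately
    return m[beta:m.shape[0] - beta].mean(axis=0)

def coordinate_median(models):
    """Coordinate-wise Median aggregation."""
    return np.median(np.stack(models), axis=0)

def krum(models, c):
    """Krum: return the local model with the smallest sum of squared
    distances to its m - c - 2 nearest neighbours, where c is the assumed
    number of compromised clients."""
    x = np.stack(models)
    m = len(x)
    # Pairwise squared Euclidean distances between all local models.
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)
    scores = []
    for i in range(m):
        others = np.sort(np.delete(d[i], i))
        scores.append(others[: m - c - 2].sum())
    return x[int(np.argmin(scores))]
```

With one extreme outlier among four one-parameter models, all three rules suppress its influence.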
Since the Euclidean distance between two local models may be influenced by individual model parameters, Krum may be affected by anomalous model parameters\cite{guerraoui2018hidden}. \section{Challenges and Future Directions}\label{challenge} Although there are many ways to defend against model poisoning attacks, many problems still exist and need to be solved. Moreover, the existing methods are not effective against all model poisoning attacks. For instance, the attack strategy against robust FL discussed in \cite{cao2020fltrust} can pose a threat to most existing defense methods. We consider that a good defense mechanism should meet three requirements: 1) effective resistance to attacks; 2) resource conservation; and 3) data privacy protection. In this section, we discuss potential challenges and future research directions for defending against model poisoning attacks in FL. \subsection{Resistance Effectiveness against Model Poisoning Attacks} If an attacker makes obvious changes to updates, such as introducing extreme values, then such an attack is easily detected. However, few existing studies focus on how to resist well-designed malicious updates. For example, an attacker can design an attack based on a generative adversarial network (GAN)\cite{zhang2019poisoning,hardy2019md} that makes it difficult for modified updates to be detected by the server. In addition, an attacker can control multiple devices at the same time, or multiple malicious devices may conspire to launch an attack. In this case, the contamination of local model updates can be adjusted according to the aggregation method of the global model. In future research, we need to be aware of well-designed model poisoning attacks. On the one hand, most existing studies only consider general types of attacks, i.e., attacks that are stochastic and synchronous.
On the other hand, we need to study the impact of different attack strategies, such as the number of malicious clients, the number of rounds and the time to launch the attack, on the effectiveness of the attack, so that we can design effective defense mechanisms. \subsection{Computational Consumption of the Central Server} The deployment of defense mechanisms in FL requires a certain amount of computational resources, which should not exceed the capacity of the central server. The existing defense mechanisms generally fail to explicitly consider the limitation of computational resources. For example, some verification mechanisms verify all the updates, which will not only consume energy but also result in time delay, thus affecting the whole FL training process. Moreover, we also need to consider the energy consumption of the server if it is required to collect data and train the model. In future research, we need to consider how to reduce the resource consumption caused by deploying the defense mechanism. For FL with a small number of clients, the submitted local models can be verified one by one, but once the number of clients is huge, this can consume a lot of time and energy. One possible idea is to reduce resource consumption by designing FL with multiple servers, spreading the task of verifying updates to those servers. However, this design introduces new problems, such as communication cost and privacy leakage. The combination of blockchain and FL might be another promising solution \cite{wang2021blockchain,hu2021blockchain,chen2021robust}. In \cite{chen2021robust}, a blockchain-based FL is proposed to defend against malicious attacks. In this framework, clients upload updates to verifiers, who will select benign updates by voting, and then the selected updates will be aggregated and written to blocks through the blockchain network. 
\section{Conclusion}\label{concl} In this survey, we first investigate the existing defense methods against model poisoning attacks in FL, and then classify these methods into two main categories: evaluating local model updates and designing global model aggregation algorithms. We also analyze the challenges and future research directions regarding model poisoning attacks in FL. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} Over 5.8 million people in the USA and over 23 million worldwide are affected by cardiac diseases~\cite{heron2016changes,bui2011epidemiology}, where the inability to generate or conduct the electrical signals necessary to stimulate muscle contraction is the major cause of many heart failures~\cite{chen2018}. To treat these failures, artificial electronic pacemakers are usually implanted to stimulate the heart with electrical impulses to maintain or restore a normal rhythm. In particular, about 3 million people worldwide use pacemakers and about 600,000 pacemakers are implanted each year~\cite{wood2002cardiac}. Currently, cardiac pacemakers cannot sense or compute 12-lead ECGs from EGMs. Patients with pacemakers require regular, and often costly and time-consuming, hospital visits to ensure (1) the proper functioning of the pacemaker and (2) the timely adjustment of the pacing parameters to adapt to changes in the heart's condition over time. Thanks to recent advances in internet of things (IoT) technologies, remote monitoring of pacemakers will become more commonplace, allowing doctors to check pacemaker status and thus reducing the frequency of costly hospital visits~\cite{yeole2016use}. Despite the promising advantages of remotely monitoring pacemakers, there is still a gap between the signals that can be provided by pacemakers and the ones doctors need to diagnose abnormal rhythms and provide appropriate therapy. Specifically, cardiac pacemakers utilize continuously collected EGMs, which are electrical activities sensed locally via implanted electrodes. However, 12-lead ECGs obtained from skin electrodes contain significantly more information than EGMs, which, in certain cases, could be utilized to better diagnose abnormal rhythms and provide appropriate therapy.
To close this gap, the synthesis or reconstruction of ECG signals from a set of EGM signals is of great significance in enabling effective remote monitoring of pacemakers, providing necessary therapy, and making timely clinical intervention possible~\cite{van2014value}. As such, there has been a growing interest in developing techniques to reconstruct ECG signals from their corresponding EGM signals using linear filtering \cite{gentil2005surface, kachenoura2007surface, kachenoura2008using, mendenhall2010implantable}, fixed dipole modeling algorithms \cite{mendenhall201012, mendenhall2010implantable}, and nonlinear reconstruction via a time delay neural network \cite{kachenoura2009non, kachenoura2009comparison, poree2012surface}. While the aforementioned techniques were pioneering steps, there is still much room to improve their performance for practical and widespread adoption. In particular, most of the existing techniques adopt either linear approaches, which lack generalization capability for unseen symptoms and can fail in the presence of noise and artifacts, or a multivariate nonlinear approach, which requires the simultaneous recording of both EGM and 12-lead ECG signals for every single patient \cite{kachenoura2009non, kachenoura2009comparison, poree2012surface}. Motivated by the recent breakthroughs in deep neural networks (DNNs) and their demonstrated promise in medical applications \cite{eraslan2019single,erhan2010does,tran2017missing, cosentino2020provable}, it is natural to consider DNN based reconstruction techniques, aiming for much improved generalization capability and better efficacy towards more practical clinical uses. However, the excellent performance of DNN based solutions often comes at the cost of high complexity (e.g., millions of parameters and operations \cite{wu2018deep}), which stands at odds with the extremely constrained resources of implanted, battery-powered pacemakers.
Specifically, restricted by the pacemaker's limited hardware budget, often-complex DNN based solutions make it particularly challenging to perform real-time reconstruction on pacemakers, which could enable improved and possibly life-critical interventions for patients. Currently, EGMs stored in pacemakers are analyzed offline in an inpatient setting for improved diagnosis of the underlying condition, where therapeutic intervention might need to be changed over time and thus requires real-time adaptation. For example, monitoring ECG data in real time can allow for the determination of potentially deadly ventricular arrhythmias~\cite{nof2013catheter} and dictate pacing mediated therapies such as anti-tachycardia pacing. Online real-time reconstruction of EGMs to ECGs allows for immediate intervention and thus potentially paves the way for novel treatments, whereas offline reconstruction may not always be possible, and the potential latency involved could be life-threatening. Another example is utilizing ECGs in real time for optimizing parameters for cardiac resynchronization therapy to treat heart failure patients~\cite{antonini2012optimization}, where a real-time embedded accelerator allows for on-device reconstruction with low latency and is thus critical. Furthermore, with traditional pacemakers slowly being replaced by leadless pacemakers~\cite{tjong2017permanent}, such an accelerator would also pave the way for improved therapy with minimal sensing sites. To this end, we aim to develop an efficient DNN based reconstruction framework to push forward the efficacy and efficiency frontier towards practical and widespread adoption by leveraging recent advances in neural architecture search and DNN acceleration.
Specifically, we make the following contributions in this work: \begin{itemize} \vspace{-0.5em} \item We propose a new framework dubbed RT-RCG, which can automatically search for (1) efficient DNN structures and (2) corresponding accelerators to enable \textbf{R}eal-\textbf{T}ime and high-quality \textbf{R}econstruction of E\textbf{C}G signals from E\textbf{G}M signals. To the best of our knowledge, the proposed RT-RCG is the first to leverage neural architecture search (NAS) to simultaneously tackle both reconstruction efficacy and efficiency. \item Drawing inspiration from existing ECG reconstruction works, RT-RCG proposes a new DNN search space tailored for ECG reconstruction from EGM signals, enabling an automated search for DNNs that consistently outperform state-of-the-art (SOTA) reconstruction techniques in terms of both reconstruction correlation (between the reconstructed ECGs and the real-measured ECGs) and algorithmic generalization capability. \item Built upon recent advances in DNN acceleration, RT-RCG incorporates a differentiable acceleration search (DAS) engine which makes use of gradient-based optimization to efficiently navigate the large and discrete accelerator design space and automatically generate optimized accelerators that achieve real-time reconstruction. \item Extensive experiments and ablation studies under various settings consistently validate the effectiveness of our proposed RT-RCG in achieving higher reconstruction quality and better reconstruction efficiency as compared to SOTA reconstruction algorithms and DNN accelerators, respectively. We believe that RT-RCG makes a nontrivial step towards practical ECG reconstruction from EGM signals on pacemakers, enabling real-time critical intervention in instant response to irregular and infrequent ventricular rhythms that require timely treatment.
\end{itemize} \section{Related works} \textbf{ECG Reconstruction.} In response to the practical need of ECG reconstruction from EGM signals, various methods have been proposed~\cite{gentil2005surface,kachenoura2007surface,kachenoura2008using,kachenoura2009comparison,kachenoura2009non,mendenhall201012,mendenhall2010implantable,poree2012surface} using linear filtering~\cite{gentil2005surface,kachenoura2007surface,kachenoura2008using,mendenhall201012}, fixed dipole modeling algorithms~\cite{mendenhall2010implantable}, nonlinear filtering~\cite{kachenoura2009comparison,kachenoura2009non}, and time delay neural networks~\cite{poree2012surface}. In particular, a single EGM channel was used to synthesize a single ECG lead in \cite{gentil2005surface}, which can be highly dependent on the chosen EGM lead. Later, logical extensions of \cite{gentil2005surface} were developed that use all EGM leads for synthesis \cite{kachenoura2007surface, kachenoura2008using}, where both the EGMs and the ECGs were first projected onto a 3D space and then three linear filters were calculated between the signals, providing an indirect way to find the transfer functions between EGM signals and the 12-lead ECG. Similarly, \cite{mendenhall201012, mendenhall2010implantable} directly calculated a multivariate linear transfer matrix between the EGMs and the 12-lead ECGs via penalized linear regression. Despite their satisfactory performance, especially for patients with a surface ECG containing only a one-beat morphology, these linear methods can suffer from a degraded correlation between the EGMs and the ECGs in real applications due to the noise and artifacts present, and the natural evolution and diversity of the pathology.
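As an illustration of the penalized-linear-regression idea in \cite{mendenhall201012, mendenhall2010implantable}, the following Python sketch fits a linear EGM-to-ECG transfer matrix by ridge regression; the exact penalty and preprocessing used in those works may differ, and all function and variable names are ours.

```python
import numpy as np

def fit_transfer_matrix(egm, ecg, lam=1.0):
    """Ridge-regression estimate of a linear EGM -> ECG transfer matrix.

    egm: (num_samples, num_egm_channels) simultaneously recorded EGM samples.
    ecg: (num_samples, 12) corresponding 12-lead ECG samples.
    lam: L2 penalty strength (illustrative choice).
    Solves W = argmin ||egm @ W - ecg||^2 + lam * ||W||^2 in closed form.
    """
    k = egm.shape[1]
    gram = egm.T @ egm + lam * np.eye(k)
    return np.linalg.solve(gram, egm.T @ ecg)

def reconstruct_ecg(egm, W):
    """Apply the learned transfer matrix to new EGM recordings."""
    return egm @ W
```

On data generated by a true linear mapping, a small penalty recovers the transfer matrix almost exactly; on real recordings, such a linear fit is exactly where the correlation degradation discussed above appears.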
The limitation of linear reconstruction methods (e.g., an average correlation value of lower than 0.5 in \cite{mendenhall201012}) motivated the multivariate nonlinear approach presented in \cite{kachenoura2009non, kachenoura2009comparison, poree2012surface}, all of which require the simultaneous recording of the EGMs and 12-lead ECGs for every single patient to train a time-delay artificial neural network (TDNN). While this method provided the best average correlation results for sinus rhythm heartbeats, it is still limited for practical uses as it cannot effectively reconstruct diseased morphologies and $12$ different TDNN models must be trained to reconstruct each ECG lead. Built upon the above prior works, RT-RCG targets reconstruction algorithms that are generally applicable in the presence of noise, artifacts, and diverse pathologies. \textbf{DNNs in Cardiology Applications.} The recent breakthroughs of DNNs in various fields have sparked a growing interest in developing DNN based solutions for cardiologic problems spanning from ECG classification to sleep status monitoring~\cite{xiong2016ecg,li2018method,elola2019deep,xu2018towards,hannun2019cardiologist,hanbay2018deep}. In particular, \cite{xiong2016ecg} adopted a DNN to remove noise contaminating ECG signals; \cite{elola2019deep} used two DNNs together with short-duration (5-second) ECG segments to detect pulses during out-of-hospital cardiac arrest; \cite{xu2018towards} proposed to utilize DNNs for the classification of ECG signals into different heart rhythms (i.e., normal beats or different types of arrhythmias); and \cite{li2018method} made use of a DNN and a hidden Markov model to detect obstructive sleep apnea based on single-lead ECG signals. The readers are referred to~\cite{bizopoulos2018deep} for a detailed survey on applying DNNs to cardiology applications.
While these works demonstrate the great potential of DNN based solutions for cardiologic problems, DNN-powered ECG-EGM reconstruction algorithms are still under-explored, let alone real-time reconstruction implementations, motivating us to propose and develop our RT-RCG framework. \textbf{Neural Architecture Search.} Neural architecture search (NAS)~\cite{zoph2016neural} has emerged as one of the most significant sub-fields of AutoML \cite{hutter2019automated}, as it enables automatically searching for an optimal DNN structure from the given data and has outperformed manually designed DNNs on a range of tasks such as image classification~\cite{tan2019efficientnet, tan2019mnasnet, howard2019searching, liu2018darts} and segmentation~\cite{chen2018searching, liu2019auto, chen2019fasterseg}. Early NAS works achieved SOTA performance at the cost of enormous search time~\cite{zoph2016neural, zoph2018learning, real2019regularized}. Specifically, reinforcement learning (RL) based NAS~\cite{zoph2016neural, zoph2018learning, tan2019mnasnet, howard2019searching, tan2019efficientnet} and evolutionary algorithm based NAS~\cite{pham2018efficient, real2019regularized} explored the search space and trained each sampled network candidate from scratch, thus suffering from prohibitive search costs. Later, differentiable NAS (DNAS)~\cite{liu2018darts, wu2019fbnet, wan2020fbnetv2, cai2018proxylessnas, xie2018snas} was proposed to update the weights and architecture in a differentiable manner through supernet weight sharing, reducing the search time to several hours~\cite{stamoulis2019single}. Motivated by the promising performance achieved by those DNAS works, recent works have extended DNAS to more tasks such as segmentation~\cite{liu2019auto, chen2019fasterseg}, image enhancement~\cite{fu2020autogan,lee2020journey}, and language modeling~\cite{chen2020adabert}. Accordingly, we leverage the DNAS method, integrated with a new search space, to develop our proposed RT-RCG framework.
\textbf{DNN Accelerators.} \label{sec:related_dnn_accelerators} DNNs' powerful performance comes at the cost of prohibitive complexity, motivating extensive research in dedicated DNN accelerators, as specialized hardware has the potential to achieve orders-of-magnitude higher energy/time efficiency. Specifically, it has been shown that aggressive efficiency can be achieved by carefully designing the micro-architectures (e.g., the number of memory hierarchies or processing element (PE) units, the storage size of different memories, and the shape of the PE array) and algorithm-to-hardware mapping strategies (i.e., dataflows). For example, representative works, such as ShiDiannao \cite{du2015shidiannao} and Eyeriss \cite{chen2017eyeriss}, identified the performance bottleneck caused by the required massive data movements and proposed novel micro-architectures and dataflows that aim to maximize data reuse for reducing the energy/time cost of accessing higher-cost memories. Early works mostly rely on experts' manual design, which can be very time-consuming (months or even years) and requires cross-disciplinary knowledge of algorithms, micro-architectures, and circuit design. In response to the intense demands and challenges of manually designing DNN accelerators, we have seen rapid development of design flows~\cite{vivado_HLS,hls_chen2005xpilot,hls_chen2009lopass,hls_rupnow2011high} and DNN design automation frameworks~\cite{wang2016deepburning, zhang2018caffeine, guan2017fp, venkatesanmagnet,wang2018design} to standardize the design flow of DNN accelerators and to expedite the development process.
For example, the DNNBuilder accelerator~\cite{zhang2018dnnbuilder} applied an automated resource allocation strategy, a fine-grained layer-based pipeline, and a column-based cache to deliver high-quality FPGA-based DNN accelerators, and \cite{xu2020autodnnchip} made the first step towards automatically generating both FPGA- and ASIC-based DNN accelerators without humans in the loop, given the DNNs from machine learning frameworks (e.g., PyTorch) for a designated application and dataset. Leveraging the learnings from prior works, RT-RCG integrates a DAS engine to automatically generate micro-architectures and dataflows to achieve real-time reconstruction. \textbf{DNN Algorithm and Accelerator Co-exploration.} Exploring the networks and the corresponding accelerators in a joint manner~\cite{edge_fpga_co_design, abdelfattah2020best, yang2020co, jiang2020device, jiang2019hardware, li2020edd} has shown great potential towards maximizing both accuracy and efficiency. Recent works have extended NAS to jointly search DNN accelerators in addition to DNN structures. In particular,~\cite{edge_fpga_co_design, abdelfattah2020best,jiang2019hardware,yang2020co} conducted RL-based searches to co-explore the network structures and design parameters of an FPGA-/ASIC-based accelerator, but their RL-based methods can suffer from large search costs, limiting their scalability to handle large joint spaces. Recently, \cite{li2020edd,choi2020dance} extended differentiable NAS to network and accelerator co-search. However,~\cite{li2020edd} only considered one accelerator parameter (i.e., the parallel factor of an FPGA accelerator template), which is not always applicable to most naturally non-differentiable accelerator design parameters such as loop order and loop size, while~\cite{choi2020dance} adopted a DNN to generate accelerator designs with network structures as the DNN's inputs, which lacks interpretability.
In contrast, our work adopts differentiable joint search in a sequential manner to efficiently explore a generic network and accelerator design space. \vspace{-0.5em} \section{Preliminaries of Deep Neural Networks (DNNs) and the EGM/ECG Data Format} \label{sec:preliminary} \begin{figure} \vspace{-1.2em} \includegraphics[width=0.8\columnwidth]{Figs/conv_op.pdf} \vspace{-1.5em} \caption{An illustrative example of one CONV operation as formulated in Equation~(\ref{eq:CONV}), where $C$ / $M$ (the number of input / output channels), $E$ / $F$ (the output feature map height / width), $R$ / $S$ (kernel height / width) and $U$ (stride) are 3 / 3, 5 / 5, 3 / 3, and 1, respectively. This example assumes that ReLU is used as the activation function and the first output is 2. } \vspace{-1.6em} \label{fig:example_conv} \end{figure} \textbf{Deep Neural Networks (DNNs).} Modern DNNs usually consist of a cascade of multiple convolutional (CONV), pooling, and fully-connected (FC) layers through which the inputs are progressively processed. The CONV and FC layers can be described as: \begin{equation} \begin{split} \textit{\textbf{O}}[c_o][e][f]=\sigma((\sum_{c_i=0}^{C-1}&{\sum_{k_r=0}^{R-1}{\sum_{k_s=0}^{S-1}{\textit{\textbf{W}}[c_o][c_i][k_r][k_s]}\times \textit{\textbf{I}}[c_i][eU+k_r][fU+k_s]}})+\textbf{B}[c_o])\\ & 0\le c_o < M,~0\le e < E, 0\le f < F \end{split} \label{eq:CONV} \end{equation} where \textit{$\textbf{W}$}, \textit{$\textbf{I}$}, \textit{$\textbf{O}$}, and \textit{$\textbf{B}$} denote the weights, input activations, output activations, and biases, respectively.
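For concreteness, the computation in Equation~(\ref{eq:CONV}) can be sketched as a naive NumPy implementation (an illustrative sketch only, not an optimized kernel; the variable names mirror the equation's symbols):

```python
import numpy as np

def conv_layer(I, W, B, U=1):
    """Naive CONV layer following Equation (eq:CONV).

    I: input activations, shape (C, H, W_in)
    W: weights, shape (M, C, R, S)
    B: biases, shape (M,)
    U: stride
    Returns the ReLU-activated outputs O of shape (M, E, F).
    """
    M, C, R, S = W.shape
    _, H, W_in = I.shape
    E = (H - R) // U + 1      # output height
    F = (W_in - S) // U + 1   # output width
    O = np.zeros((M, E, F))
    for c_o in range(M):
        for e in range(E):
            for f in range(F):
                acc = B[c_o]
                for c_i in range(C):
                    for k_r in range(R):
                        for k_s in range(S):
                            acc += W[c_o, c_i, k_r, k_s] * I[c_i, e * U + k_r, f * U + k_s]
                O[c_o, e, f] = max(acc, 0.0)  # ReLU activation
    return O
```

An FC layer is the special case of the same description in which the spatial dimensions collapse to $1 \times 1$.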
In the CONV layers (see an example in Figure~\ref{fig:example_conv}), $C$ and $M$, $E$ and $F$, $R$ and $S$, and $U$ stand for the number of input and output channels, the output feature map height and width, the weight filter height and width, and the stride, respectively; in the FC layers, $C$ and $M$ represent the number of input and output neurons, respectively; $\sigma$ denotes the activation function, e.g., a $ReLU$ function ($ReLU(x)=max(x,0)$). The pooling layers reduce the dimension of feature maps via average or max pooling. The recently emerging compact DNNs (e.g., MobileNet~\cite{howard2017mobilenets} and EfficientNet~\cite{tan2019efficientnet}) introduce depth-wise CONV layers and squeeze-and-excite layers, which can be expressed in the above description as well~\cite{chen2019eyeriss}. \textbf{Pre-processing of the EGM/ECG Signals.} Here we describe the adopted pre-processing of the EGM and ECG signals, both of which were recorded simultaneously during the cardiac ablation procedure. In the first step, the signals were obtained at a sampling frequency of 1000 Hz and subsequently bandpass filtered using a 5-th order Butterworth filter with cutoff frequencies of $3$ Hz and $50$ Hz. The cutoff at 3 Hz helps to eliminate potential baseline wander and the cutoff at 50 Hz can eliminate powerline interference, electromyographic noise, and electrode motion artifact noise \cite{kher2019signal}. To compensate for the phase change caused by filtering in the forward direction, we adopt zero-phase filtering, i.e., the signal is also filtered backwards in time \cite{kormylo1974two}, to ensure that the pre-processing of the data does not introduce additional distortion.
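This first pre-processing step can be sketched with SciPy as follows (a sketch only; the 1000 Hz sampling rate, 5-th order Butterworth filter, and 3--50 Hz passband follow the description above, and \texttt{filtfilt} realizes the forward-backward, zero-phase filtering):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_zero_phase(x, fs=1000.0, low=3.0, high=50.0, order=5):
    """Zero-phase band-pass filtering of a 1D EGM/ECG trace.

    The 3 Hz cutoff suppresses baseline wander; the 50 Hz cutoff
    suppresses powerline, electromyographic, and motion-artifact noise.
    """
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    # filtfilt filters forward and then backward in time -> zero phase shift
    return filtfilt(b, a, x)
```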
In the second step of the pre-processing, the data from the previous step is segmented to extract the QRS portion of the ECG signals, which contains much information about the synchronization of the heart's ventricles and has been demonstrated to be a strong biomarker for overall cardiac health \cite{moulton1990premature}. \textbf{Time-frequency Representation of EGM/ECG Signals.} To make use of DNNs to reconstruct ECG signals from EGM signals, we first transfer the EGM signals' $2$-dimensional (2D) multi-channel time-series representation into a $3$-dimensional (3D) time-frequency representation with the help of the short-time Fourier transform (STFT), inspired by similar treatments in speech recognition and audio processing applications \cite{amodei2016deep, parveen2004speech, xu2014regression}. Assuming that the matrix $S_t \in \mathbb{R}^{M \times T}$ denotes the EGM time-series, where $M$ and $T$ correspond to the number of channels and the number of time samples for each channel, respectively, then $S_t$ can be re-formulated as $S_t = \left [\textbf{s}_t^{(1)}, \dots, \textbf{s}_t^{(m)}, \dots, \textbf{s}_t^{(M)} \right ]^{\top}$, with $\textbf{s}_t^{(m)} \in \mathbb{R}^{T \times 1}$ and ${\top}$ denoting the time-series for each of the $M$ channels and the transpose operator, respectively. As such, the corresponding 3D time-frequency signals, denoted as $S_{tf}$, can be represented as: \begin{align} \begin{split} S_{tf}=&\left [\textbf{s}_{tf}^{(1)},\dots, \textbf{s}_{tf}^{(M)} \right ]^{\top}\\ \textbf{s}_{tf}^{(m)} = & \left [ \textbf{s}_t^{(m)} \circ \textbf{h}_{1},\dots, \textbf{s}_t^{(m)} \circ \textbf{h}_{K} \right ]^{\top} \end{split} \label{eq:data_format} \end{align} where $\forall k \in \left \{1,\dots,K \right \}$, $\textbf{h}_k \in \mathbb{C}^{T}$ defines the $k$-th time-frequency filter in the complex space corresponding to $\textbf{s}_t^{(m)}$, and $\circ$ denotes the convolution operator.
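When the filters $\textbf{h}_k$ are windowed Fourier filters, computing Equation~(\ref{eq:data_format}) amounts to a per-channel short-time Fourier transform; a sketch with SciPy (the window size of 30 and overlap of 6 follow the STFT configuration used in this work):

```python
import numpy as np
from scipy.signal import stft

def egm_to_time_frequency(S_t, window=30, overlap=6):
    """Convert an (M, T) multi-channel time series into the complex
    (M, K, T_f) time-frequency tensor S_tf via a per-channel STFT.

    S_t: real array of shape (M, T) -- M channels, T samples each.
    """
    # SciPy applies the STFT along the last axis, i.e., per channel
    _, _, S_tf = stft(S_t, nperseg=window, noverlap=overlap, axis=-1)
    return S_tf  # complex-valued, with K = window // 2 + 1 frequency bins
```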
We set the length of each time-frequency filter (after filtering) as $T_f$, and thus $\textbf{s}_{tf}^{(m)} \in \mathbb{C}^{K \times T_f}$ represents a 2D complex matrix with each row denoting the time domain information and each column denoting the frequency domain information. Concatenating all channels' time-frequency representations, we then have a 3D complex matrix $S_{tf}\in \mathbb{C}^{M \times K \times T_f}$. In this work, we use windowed Fourier filters as the filters $\textbf{h}_k$; transferring the time-series representation into its time-frequency counterpart then amounts to applying a 3D short-time Fourier transform (STFT) operator to the time-series EGM signals. \vspace{-0.5em} \section{The Proposed RT-RCG Framework} \label{sec:method} \subsection{RT-RCG: Overview and Problem Formulation} \label{sec:overview_formulation} \vspace{-0.5em} \textbf{Framework Overview.} Figure~\ref{fig:co-search} shows an overview of the proposed RT-RCG framework. Given the recorded EGM signals, user-specified demands (e.g., accuracy and latency), and hardware resource budgets/specifications, our RT-RCG framework automatically searches for networks to maximize the reconstruction efficacy and then the corresponding accelerators to maximize the hardware acceleration efficiency, i.e., the outputs of RT-RCG include (1) the searched network to be used for reconstructing ECG signals from the input EGM signals and (2) the searched accelerator to process the searched network with optimized hardware efficiency. In particular, our RT-RCG framework consists of two components, i.e., a differentiable network search (DNS) engine and a differentiable accelerator search (DAS) engine, which will be described in Section \ref{sec:DNS} and Section \ref{sec:DAS}, respectively. \begin{figure}[t!]
\vspace{-2em} \centering \includegraphics[width=0.85\linewidth]{Figs/overview.pdf} \vspace{-1em} \caption{An overview of the proposed RT-RCG framework, which accepts the recorded EGM/ECG signals dataset and the target hardware specification as inputs to automatically generate reconstruction networks and their corresponding accelerators to maximize the reconstruction quality and acceleration efficiency. } \label{fig:co-search} \vspace{-2em} \end{figure} \textbf{The Optimization Formulation.} As stated in Section \ref{sec:intro}, RT-RCG is designed to reconstruct the full 12-lead ECG signals from the recorded (partial) EGM signals, with both originally being time-series signals. For notation, we denote the EGM and ECG samples using $\left \{ X_{n_t} \right \}_{n=1}^{N}$ and $\left \{ Y_{n_t} \right \}_{n=1}^{N}$, respectively, where $N$ denotes the total number of heartbeats in the dataset (see Table~\ref{tab1}). Meanwhile, the EGM and ECG signals can be represented using 2D matrices, i.e., $X_{n_t} \in \mathbb{R}^{M_{EGM} \times T}$ and $Y_{n_t} \in \mathbb{R}^{M_{ECG} \times T}$, where $M_{EGM}$ and $M_{ECG}$ denote the number of channels (leads) for the EGM and ECG signals, respectively, and $T$ denotes the number of time samples per heartbeat. As introduced in Section~\ref{sec:preliminary}, the ECG and EGM signals will first be transferred into a time-frequency format denoted as $X_{n_{tf}} \in \mathbb{R}^{M_{EGM} \times K\times T_f}$ and $Y_{n_{tf}} \in \mathbb{R}^{M_{ECG} \times K\times T_f}$, respectively. In this work, we have $M_{EGM}=5$ and $M_{ECG}=12$, and both $K$ and $T_f$ are empirically fixed to 16 with an STFT window size of 30 and an overlap of 6 during the filtering, based on the collected dataset (see Table~\ref{tab1}). Our empirical studies showed that this STFT configuration gives the best subsequent reconstruction accuracy with the least number of parameters.
As such, the problem of reconstructing ECG from EGM becomes how to map the signals in $\mathbb{R}^{M_{EGM} \times K \times T_f}$ to that in $\mathbb{R}^{M_{ECG}\times K \times T_f}$, which can be considered as a problem of multivariate regression and the corresponding optimization can be formulated as follows: \begin{equation} \min\limits_{f \in \mathcal{H}}\sum\limits_{n=1}^N{\mathcal{L}(f(X_{n_{tf}}),Y_{n_{tf}})} \label{eq:loss} \end{equation} \noindent where $\mathcal{H}$ denotes the function space, $f$ denotes the reconstruction function that aims to reconstruct $Y_{n_{tf}}$ given $X_{n_{tf}}$, $\mathcal{L}$ denotes the loss function of reconstruction capturing the total difference (e.g., the mean square error) between the reconstructed samples $f(X_{n_{tf}})$ and the real-measured samples $Y_{n_{tf}}$ for all the ${N}$ samples. The goal of the optimization is to find a reconstruction function $f$ that minimizes the reconstruction loss $\mathcal{L}$. In RT-RCG, we use a DNN to approximate and search for $f$ using RT-RCG's DNS engine, with the direct output of $f$ having a time-frequency format and then being transferred back into a time-series format for evaluating the reconstruction efficacy. During training, the negative Pearson correlation~\cite{benesty2009pearson} of the flattened time-frequency data between the reconstructed ECG and the corresponding real-measured ECG signals will be used as the loss for optimization. For evaluation, the Pearson correlation will be calculated between the reconstructed and corresponding real-measured ECG signals on a test set (excluded in training) after both of them are converted back to the time domain through the inverse STFT. Note that the (inverse) STFT process will neither be accelerated by the proposed RT-RCG's hardware nor be counted towards the final latency in our experiments. 
This is because for a single input, the combined operations of both the STFT and inverse STFT only take up about 1\% of the total operations in inference when the DNNs shown in Table~\ref{tab:15-layer} are considered, assuming a fast convolution algorithm is adopted. The (inverse) STFT operation can thus be easily conducted on the hardware accelerator's accompanying CPU, incurring a negligible latency overhead. \vspace{-0.5em} \subsection{RT-RCG: The DNS Engine} \vspace{-0.3em} \label{sec:DNS} \textbf{The Network Search Space.} \begin{table} [!b] \vspace{-1.8em} \centering \caption{Visualizing RT-RCG's network backbone with 14 searchable blocks, where TBS denotes ``To Be Searched''.}\label{tab:backbone}\vspace{-1em} \scriptsize \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccccccccc}\toprule Operation type & CONV & Maxpool & CONV & Maxpool& \textbf{Searchable blocks $\times$ 14} & DECONV & Upsample & DECONV & Upsample& CONV & CONV& CONV\\\midrule \midrule Output channels &48 & - &96 & - & TBS &48 & - & 96& - & 24&24 &24\\\midrule Kernel size &7 & 2 & 5 & 2 & TBS & 5& 2&7 & 2 &3 &3 &3 \\\midrule Stride &1 & 2 & 1 & 1 & TBS & 1& 2&1 & 2 &1 &1 &1 \\ \bottomrule \end{tabular} } \vspace{-0.5em} \end{table} Motivated by the success of the encoder-decoder structure~\cite{ronneberger2015u}, which has demonstrated its efficacy in learning compressed, interpretable, or structured representations of data for denoising, compression, and data completion \cite{eraslan2019single,erhan2010does,tran2017missing, cosentino2020provable}, RT-RCG's DNS engine adopts a search space based on an encoder-decoder network backbone with searchable blocks to extract and process diverse and patient-specific features from the complex EGM signals.
As shown in Table~\ref{tab:backbone} and visualized in Figure \ref{fig:net_struct}, our network starts from a fixed downsample branch and ends in a fixed upsample branch, with the intermediate blocks set to be searchable for better extraction and processing of the features hidden in the EGM signals. The hypothesis is that such an encoder-decoder structure, i.e., a cascade of convolutional transformations and nonlinearities with a bottleneck dimension, allows the approximation of the underlying data manifold, as discussed in \citep{cosentino2020provable}. For the searchable blocks, inspired by the SOTA hardware-friendly search space in~\cite{wu2019fbnet}, which searches the kernel size, channel expansion ratio, and group number for each building block, we propose a sequential search space with 14 searchable blocks and 9 candidate operations for each block, including standard convolutions with a kernel size of 3/5, inverted residual blocks with a kernel size of 3/5 and a channel expansion of 1/3/5, and skip connections, which leads to a search space with a total of $9^{14}$ network choices. \textbf{The Network Search Algorithm.} We adopt the differentiable NAS (DNAS) algorithm~\cite{liu2018darts} considering its excellent search efficiency.
In particular, we formulate the network search as a one-level optimization~\cite{xie2018snas, hu2020dsnas}, by making use of the unbiased gradient estimation~\cite{he2020milenas} to adapt to the complex EGM signals which are diverse for different patients: \begin{align} \vspace{-1.5em} \small \begin{split} & \min \limits_{\omega,\alpha} \,\, L_{rec}(\omega, \alpha)+\lambda L^{MAC}_{cost}(\alpha) \label{eq:update_alpha} \end{split} \end{align} \noindent where $\omega$ and $\alpha$ denote the supernet weights and the network architecture parameters, respectively, the latter of which contains the probability of selecting each candidate operation; $L_{rec}$ and $L^{MAC}_{cost}$ denote the ECG-EGM reconstruction loss and hardware-cost loss which is determined by the number of multiply-accumulate operations (MACs) in the given DNNs, respectively; and $\lambda$ is a trade-off parameter to balance the resulting reconstruction networks' accuracy and efficiency. In particular, the output of the $l$-th layer $A_l$ in our DNS engine is a weighted sum of all candidate operations: \begin{equation} \label{eqn:arch_loss} A_l = \sum_{k=1}^{K} GS(\alpha_{lk}) O_{lk}(A_{l-1}) \end{equation} \noindent where $K$ is the number of candidate operations, $O_{lk}$ is the $k$-th operation for the $l$-th layer, $\alpha_{lk}$ is the probability of $O_{lk}$, and $GS$ denotes the Gumbel Softmax function~\cite{jang2016categorical} which samples the operations based on the distribution parameterized by $\alpha$. In our DNS, we adopt a soft version of Gumbel Softmax, i.e., we use the output of Gumbel Softmax as the weighted coefficient of $O_{lk}$ with a continuous relaxation during backward pass~\cite{wu2019fbnet} for updating $\alpha$. At the end of the search, we derive the final/searched network by selecting the operation with the highest probability for each searchable block. 
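The relaxed layer output in Equation~(\ref{eqn:arch_loss}) can be sketched as follows (forward pass only, in NumPy; an actual DNAS implementation would run this inside an autodiff framework so that $\alpha$ receives gradients through the relaxation):

```python
import numpy as np

def gumbel_softmax(alpha_logits, tau=1.0, rng=None):
    """Soft Gumbel-Softmax sample over candidate-operation logits."""
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=alpha_logits.shape)))  # Gumbel(0, 1) noise
    y = (alpha_logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()  # a probability vector over the K candidate ops

def mixed_layer_output(A_prev, ops, alpha_logits, tau=1.0, rng=None):
    """A_l = sum_k GS(alpha_lk) * O_lk(A_{l-1}), per Equation (eqn:arch_loss)."""
    w = gumbel_softmax(np.asarray(alpha_logits, dtype=float), tau, rng)
    return sum(w_k * op(A_prev) for w_k, op in zip(w, ops))
```

At the end of the search, the soft mixture is replaced by the single operation with the highest probability for each block.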
\vspace{-0.5em} \subsection{RT-RCG: The DAS Engine} \label{sec:DAS} \vspace{-0.3em} In this subsection, we introduce the three key components in our proposed DAS engine, i.e., the accelerator template, the search space extracted from the accelerator template, and the search algorithm used to explore the search space. \vspace{-0.3em} \subsubsection{The Accelerator Template of Our DAS Engine} \ \\ \label{sec:template} \vspace{-1.2em} Our DAS engine leverages a parameterized accelerator template that features a total of $\sim 10^5$ choices for the micro-architecture and dataflow, the latter of which determines how the network is temporally and spatially scheduled to be executed on the micro-architecture, e.g., row stationary, output stationary, weight stationary, etc. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{Figs/hw_template.pdf}\vspace{-1.5em} \caption{An illustration of the parameterized micro-architecture adopted in the DAS engine of our RT-RCG framework.} \label{fig:hw_template} \vspace{-2em} \end{figure} \textbf{The Micro-architecture Overview.} Our DAS engine leverages an accelerator template inspired by a SOTA DNN accelerator~\cite{shen2017ISCA}, which adopts a multi-chunk micro-architecture for maintaining high resource utilization when accelerating DNN layers with different structures (e.g., different sizes of the input/output feature maps and kernel sizes), in order to balance the communication bandwidth and improve the acceleration throughput. Our accelerator template parameterizes the multi-chunk micro-architecture. 
As illustrated in Figure~\ref{fig:hw_template}, each chunk of the micro-architecture corresponds to a sub-accelerator, which has hierarchical memories (e.g., an on-chip buffer and local register files (RFs)) and processing elements (PEs) characterized by searchable design knobs such as the type of PE interconnections (i.e., the Network-on-Chip (NoC)), the allocated buffer sizes, and the compute scheduling and tiling (i.e., the dataflow) to facilitate data reuse and parallelism. Specifically, each sub-accelerator sequentially processes multiple but not necessarily consecutive layers with similar network structures, while different sub-accelerators can be pipelined. \textbf{The Sub-accelerator Design.} As shown in Figure~\ref{fig:hw_template}, each sub-accelerator consists of (1) a secondary buffer to facilitate more local data reuse and reduce the higher-cost DRAM accesses and (2) a PE array, where each PE includes a multiply-and-accumulate (MAC) unit and local RFs for the inputs, weights, and outputs, respectively. For each sub-accelerator, the dataflow determines the network's temporal and spatial mapping onto the PE array and thus the data movement patterns among the different memories/buffers/RFs, leading to orders-of-magnitude differences in the acceleration performance \cite{venkatesanmagnet}. As our accelerator template can parameterize both the micro-architecture and the dataflow (see Section \ref{sec:space}), it enables our DAS engine to search for a dedicated micro-architecture and dataflow to match the network's structure in order to maximize the target hardware performance. \textbf{Acceleration/Execution.} Here we describe the execution of the network within each sub-accelerator for better understanding.
In Figure~\ref{fig:hw_template}, if the PEs process different input channels along the H (horizontal) axis of the PE array and different output channels along the V (vertical) axis of the PE array, the weights with different input and output channels will be spatially mapped onto all PEs and then stay stationary until all corresponding computations are finished. Meanwhile, the inputs corresponding to the weights that have been loaded into the PE array will be streamed in via multicast along the H axis and broadcast along the V axis, facilitating weight reuse. The computed results along the H axis are accumulated while those along the V axis are moved to the output buffer via multicast. In general, the PEs along both axes can process different dimensions of the networks, including the input channels, output channels, feature map height, and feature map width, where the ordering of the subsequent operations and buffer reads/writes determines the dataflow and is searchable in RT-RCG. At any given time point, all sub-accelerators simultaneously process different clusters of the network layers, with each sub-accelerator processing data of different input frames and different layers within each sub-accelerator executed sequentially, to improve the throughput without the necessity of waiting. This is made possible because (1) sub-accelerators only communicate with the DRAM for fetching/storing the intermediate results and (2) an additional ping-pong buffer is introduced in the DRAM to accommodate simultaneous reads/writes. In this way, no communication is needed among the sub-accelerators, leading to a more flexible and modular design. It is then possible to tailor the design of each sub-accelerator to better match the network structure and thus favor the achievable acceleration efficiency.
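The weight-stationary mapping described above can be illustrated with a toy simulation (an illustrative sketch only; a real PE array performs these multiply-accumulates in parallel hardware rather than Python loops):

```python
import numpy as np

def weight_stationary_pass(W_tile, in_stream):
    """Toy simulation of one pass over a weight-stationary PE array.

    W_tile:    (V, H) weights; PE[v][h] holds W_tile[v, h] stationary,
               with the H axis indexing input channels and the V axis
               indexing output channels, as in the example above.
    in_stream: (H,) inputs multicast along H and broadcast along V.
    Returns the (V,) partial sums accumulated along the H axis.
    """
    V, H = W_tile.shape
    psum = np.zeros(V)
    for v in range(V):        # each PE row produces one output channel
        for h in range(H):    # accumulate partial sums along the H axis
            psum[v] += W_tile[v, h] * in_stream[h]
    return psum               # moved to the output buffer via multicast
```

Functionally this is just the matrix-vector product \texttt{W\_tile @ in\_stream}, but written to expose the per-PE MAC structure.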
\vspace{-0.8em} \subsubsection{The Accelerator Search Space of Our DAS Engine} \ \\ \label{sec:space} \vspace{-1.2em} Based on the above accelerator template, we extract the searchable parameters, different combinations of which lead to different accelerators (i.e., micro-architecture and dataflow pairs), to form a generic accelerator search space to be used by our DAS engine. The micro-architecture is characterized by the number of memory hierarchies and PEs, the size of each memory hierarchy, the shape and size of the PE array, and the NoC design~\cite{chen2017eyeriss}, and the dataflow is described by both the NoC design and the loop size/order. Specifically, we construct a generic accelerator search space as shown in Table~\ref{tab:hw_space} by leveraging the commonly used nested \textit{for-loop} accelerator description~\cite{chen2016eyeriss,parashar2019timeloop,blocking_cnn,zhang2015optimizing,zhao2020icassp}, which naturally bridges the accelerator's micro-architectures and dataflows with DNNs' network parameters.
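To make the nested \textit{for-loop} description concrete, the following sketch enumerates the iteration order implied by a loop nest over named data dimensions (illustrative only; different loop-orders visit the same iteration space but imply different data-reuse patterns):

```python
from itertools import product

def schedule(loop_order, loop_sizes):
    """Return the sequence of {dimension: index} visits of a loop nest.

    loop_order: dimension names, outermost first (the loop-order knob).
    loop_sizes: trip count per dimension       (the loop-size knob).
    """
    ranges = [range(loop_sizes[d]) for d in loop_order]
    return [dict(zip(loop_order, idx)) for idx in product(*ranges)]
```

For example, `schedule(["m", "c"], {"m": 2, "c": 3})` iterates the input-channel dimension `c` innermost (favoring reuse of one output channel's partial sum), while `schedule(["c", "m"], {"m": 2, "c": 3})` iterates the output-channel dimension `m` innermost instead.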
Next, we introduce each accelerator parameter listed in Table~\ref{tab:hw_space}: \begin{table}[!h] \centering \vspace{-1.3em} \caption{The constructed generic accelerator search space extracted from the accelerator template introduced in Section~\ref{sec:template}, where TBS means ``to be searched'' and the searchable parameters include (1) the NoC design, (2) Max \# of PEs, (3) layer assignment, (4) loop-order and (5) loop-size across different memory hierarchies, i.e., the DRAM, Global Buffer, and PE array.}\vspace{-1em} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{ccc} \toprule \textbf{Memory Hierarchy} & \textbf{Loop-order} & \textbf{Loop-size} \\ \midrule \textbf{DRAM} & TBS & \multicolumn{1}{c}{-} \\ \textbf{Global Buffer} & TBS & TBS \\ \textbf{PE array} & \multicolumn{1}{c}{-} & TBS \\ \midrule \midrule \textbf{NoC design} & \textbf{Max \# of PEs} & \textbf{Layer assignment} \\ \midrule TBS & TBS & TBS \\ \bottomrule \end{tabular}% } \label{tab:hw_space}% \vspace{-1.2em} \end{table} \textbf{Loop-order.} The order of the loops within each memory hierarchy. Each hierarchy involves a total of $n$ data dimensions, so its $n$ loops correspond to an $n$-item ordering problem. To be compatible with the proposed network search, where each accelerator parameter should have all possible choices parameterized by the corresponding $\gamma$ vector (see Equation~(\ref{eqn:obj_hw})), we formulate the \textit{loop-order} search as a problem of picking one choice from a total of $n$ options without replacement for $n$ times (e.g., $n=6$ considering the number of data dimensions in DNNs). \textbf{Loop-size.} The size of each loop in the \textit{for-loop} description. The product of all loop-sizes associated with each data dimension needs to be equal to the corresponding algorithmic dimension, because the nested loops' sizes as a whole dictate the total number of execution iterations.
Then, intuitively, the possible choices for a certain loop's size are all the factors that the corresponding data dimension can be factorized into. \textbf{The NoC Design.} The parallel execution patterns of the MAC operations when accelerating DNNs on an accelerator (e.g., those described in Section \ref{sec:template}), which are determined by the PE array style. In this work, we consider three NoC options following common practice, as inspired by SOTA accelerators~\cite{chen2016eyeriss,zhang2015optimizing,zhao2020icassp}: \begin{itemize} \setlength{\itemsep}{0pt} \item Parallelizing the computation over the output partial sums, where the dimensions of output channels, output rows, and output columns are executed in parallel; \item Parallelizing the computation over the kernels, where the dimensions of output channels and input channels are executed in parallel; \item Parallelizing the computation over both the kernel and output dimensions, where the dimensions of output channels, kernel rows, and output columns are executed in parallel. \end{itemize} \textbf{The Maximum Number of PEs.} The maximal number of PEs in the design, which can range from 1 to a specified value determined by the area constraint and the trade-off between the storage and computation partition. The PEs will be inter-connected with a pre-designed pattern according to the adopted NoC design; e.g., Figure~\ref{fig:hw_template} gives an example of parallelizing the kernels among the PEs in the NoC across the input and output channel dimensions. In this work, where the latency is the primary objective, the maximum number of PEs is set to the hardware platform limit, e.g., the available Digital Signal Processing units (DSPs) in the given FPGAs. If other metrics like energy consumption are prioritized, our proposed framework can automatically search for designs balancing the trade-off between power consumption and latency.
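As noted above for the \textit{loop-size} parameter, the legal sizes of a loop over a data dimension of size $d$ are the factors of $d$; a minimal sketch enumerating them and the corresponding two-level tilings:

```python
def factors(d):
    """All factors of dimension size d -- the legal sizes for one loop."""
    return [k for k in range(1, d + 1) if d % k == 0]

def two_level_tilings(d):
    """All (outer, inner) loop-size pairs whose product equals d, e.g.,
    splitting one dimension between the global buffer and the PE array."""
    return [(k, d // k) for k in factors(d)]
```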
\textbf{Layer assignment.} The assignment of all the layers to be executed on a fixed number of sub-accelerators, which is set to 10 for this work, unless specified otherwise.\\ \vspace{-1.3em} With the maximum number of PEs fixed and all other parameters above taken into consideration, the space size can explode up to $\sim10^7$. \\ \vspace{-1.3em} \subsubsection{The Search Algorithm of Our DAS Engine} \ \\ \vspace{-1em} To efficiently explore our constructed generic accelerator search space, our DAS engine iteratively updates the accelerator design choices in a differentiable manner. In particular, we parameterize the choice of each accelerator design factor with a vector $\gamma$ and learn to update $\gamma$ based on the objective formulated as: \begin{equation} \label{eqn:obj_hw} \gamma^* = \,\, \underset{\gamma}{\arg\min} \,\, \sum_{s=1}^{S} GS(\gamma^s) \, L^{HW}_{cost}(GS(\gamma^1), ..., GS(\gamma^S)) \end{equation} \noindent where $\gamma^s$ defines the probability distribution of the choices for the $s$-th accelerator design parameter, $GS(\gamma^s)$ denotes Gumbel-Softmax sampling~\cite{gumbel1948statistical} of the $s$-th accelerator parameter $\gamma^s$, and $L^{HW}_{cost}(GS(\gamma^1), ..., GS(\gamma^S))$ is the hardware cost of the target network on the sampled accelerator characterized by the $S$ sampled design factors $GS(\gamma^1), ..., GS(\gamma^S)$. To be more specific, we apply Gumbel Softmax sampling~\cite{gumbel1948statistical, maddison2014sampling} to sample only one choice $GS(\gamma^s)$ from all the options corresponding to the $s$-th accelerator parameter. Once all the accelerator parameters are sampled, the corresponding accelerator's acceleration cost is estimated using SOTA accelerator performance estimators, where in this work we adopt the performance estimator in~\cite{xu2020autodnnchip} for our prototyped FPGA-based accelerators.
After that, we multiply the resulting acceleration cost by the sampled $GS(\gamma^s)$ and update $\gamma$ based on the continuous relaxation of Gumbel-Softmax during the backward pass~\cite{wu2019fbnet} for gradient estimation. When the gradient-based optimization converges, we derive the final accelerator by selecting, for each accelerator parameter, the option with the highest probability (i.e., the largest entry of $\gamma^{s}$). Note that we use the number of MACs as the complexity cost during the network search stage (see $L^{MAC}_{cost}$ in Equation~(\ref{eq:update_alpha})) for better search efficiency, and adopt the estimated accelerator cost $L^{HW}_{cost}$ during the accelerator search stage to better align with the actual acceleration cost. \vspace{-0.5em} \subsection{RT-RCG: The Complexity and Time Cost of The DNS and DAS Engines} \vspace{0em} \subsubsection{The Complexity of the DNS and DAS Engines} \ \\ \vspace{-1em} The algorithm complexity of our DNS engine is tied to that of the supernet training because we adopt the DNAS algorithm as mentioned in Section~\ref{sec:DNS}, where the supernet weights and network architecture parameters are updated at the same time. Additionally, picking the final network structure with the highest probability requires an additional complexity of O(k), where k denotes the number of possible operations per block and equals 9 considering our search space defined in Section~\ref{sec:DNS}. On the other hand, the entire DNS process, including re-training the final picked network, can finish within 0.5 GPU hours, given the DNS search space size of $9^{14}$ in this work. The algorithm complexity of our DAS engine is proportional to that of the Gumbel-Softmax, which is O(n) with n denoting the number of choices for each hardware design parameter. Thanks to the efficient hardware cost estimator~\cite{xu2020autodnnchip}, the entire DAS process only takes about 10 minutes with our space size being $10^7$.
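The per-parameter sampling step can be sketched as follows. This is a NumPy toy version for illustration only, not the framework's implementation; the differentiable relaxation used for the actual backward pass~\cite{wu2019fbnet} is omitted here.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Relaxed (soft) one-hot sample over the options of one design parameter.

    logits: unnormalized scores gamma^s for the s-th accelerator parameter.
    tau:    temperature; a smaller tau yields a sample closer to hard one-hot.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Gumbel(0, 1) noise makes the argmax a sample from softmax(logits)
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=len(logits))))
    y = (np.asarray(logits, dtype=float) + g) / tau
    y = np.exp(y - y.max())  # numerically stable softmax
    return y / y.sum()

# one relaxed sample over, e.g., four candidate NoC/PE configurations
probs = gumbel_softmax([0.0, 0.0, 0.0, 0.0], tau=1.0)
```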
Note that our differentiable search method enables a much more directed and efficient search trajectory. Thus, there is no need to exhaustively evaluate every design choice within the search space, leading to a much shorter search time than that of an exhaustive search. Additionally, the search is terminated when the minimized objectives become stable. \vspace{-0.5em} \subsubsection{The Amortized One-Time Search Cost} \ \\ \vspace{-1em} For a given task, e.g., ECG reconstruction for a specific patient, merely a one-time effort is required to generate the network structure and its accelerator, and thus the search time cost is amortized throughout the implementation. Once the network structure and accelerator design are respectively generated by the DNS and DAS engines, they will be fixed throughout the task. If there are minor changes to the task settings, such as the patient's heart conditions, the network's parameters (weights) can be fine-tuned with the patient's newly generated heart samples, without the necessity of changing the network structure and accelerator design. The fine-tuning process can be conducted using a standard DNN training procedure on an external computer in a few minutes, considering the setup described in Section~\ref{sec:exp_setup}. The search only needs to be redone when drastic changes occur, e.g., a change of the entire training dataset. \vspace{-0.5em} \subsubsection{Generalization of the Searched Designs} \ \\ \vspace{-1em} The searched network structure and its accelerator, together with the final fully trained network weights, can be generalized to distinct patients' heart samples if the search and training are conducted on a diverse patient dataset; that is, the searched designs (i.e., the network structure, accelerator design, and trained parameters) are expected to be effective for new patients who are not present in the pre-trained dataset.
As such, no additional cost or complexity is incurred for this generalization, as the original search and training process can holistically take the diverse training dataset into consideration. This is validated in Section~\ref{sec:patient_Generalizability}, where the searched networks and accelerator designs consistently perform well on the newly included patients. This generalization capability is particularly meaningful for real-life applications, where collecting data samples for new patients may not always be possible, and the search cost can thus be amortized across different patients. \section{Experiment Results} \label{sec:exp} \vspace{0em} In this section, we present the evaluation results of our proposed RT-RCG framework. Starting with the introduction to our dataset and experiment setup, we evaluate the effectiveness of the RT-RCG searched networks under various settings, including (1) patient-specific reconstruction (see Section~\ref{sec:single_patient}), (2) reconstruction generalized to a new patient (see Section~\ref{sec:patient_Generalizability}), and (3) robust reconstruction with deficient EGM channels (see Section~\ref{sec:single_channel}). After that, we evaluate RT-RCG's hardware acceleration performance as compared to two SOTA DNN accelerators~\cite{zhang2018dnnbuilder,chaidnn}, one edge platform~\cite{edgegpu}, and a CPU platform, followed by the ablation studies on the initial latency and under a constrained search space. \vspace{-0.8em} \subsection{Experiment Setup} \label{sec:exp_setup} \vspace{-0.2em} \begin{table}[b!]
\vspace{-1.5em} \centering \caption{The number of heartbeat samples for each patient and the Patient ID in our clinically collected dataset.}\label{tab1}\vspace{-1em} \scriptsize \resizebox{0.95\textwidth}{!}{ \begin{tabular}{ccccccccccccccc}\toprule Patient ID & 1 & 2 & 3 & 4& 5& 6 & 7 & 8 & 9& 10 & 11& 12 &13 &14\\\midrule \midrule Number of heartbeats (N) &4765 &2309 &401 &1752 &3934 &3017 &2593 &6635 &3102 &2326 &5497&1591 &1827 &2917 \\ \bottomrule \end{tabular} } \vspace{-1em} \end{table} \indent\indent \textbf{Clinically Collected Dataset.} To evaluate the effectiveness of the RT-RCG framework, data was collected retrospectively from 14 patients undergoing cardiac ablation for premature ventricular contractions, where both the ECG and EGM signals were recorded simultaneously during the cardiac ablation procedure. Each record of the database is composed of: \begin{itemize} \item Twelve standard surface ECG channels, namely leads I, II, III, aVR, aVL, aVF, and V1--V6. \item Five EGM channels measured by electrodes on a catheter placed inside the Coronary Sinus. \end{itemize} Specifically, the data was obtained from patients undergoing cardiac ablation procedures and was retrospectively collected under a protocol approved by an institutional review board at Baylor St. Luke's Medical Center~\cite{BaylorSt65}. During these procedures, the routine is to record both the surface ECG and the EGM signals via the mapping catheter. For each patient, the EGM was obtained from the coronary sinus. By virtue of the procedure, the recordings for each patient are of different lengths and contain a mix of sinus rhythms and diseased heartbeats, providing a diverse dataset that better emulates real-world scenarios while also making it more challenging to achieve high-performance reconstruction. This also means that the number of heartbeats (i.e., $N$ in Table \ref{tab1}) is different for different patients.
In our experiment, the data for each patient was first randomly shuffled and then segmented into halves, with the first half of concurrent ECGs and EGMs being used during the search/training step and the second half for testing and performance evaluation. The patient number and corresponding number of heartbeats are summarized in Table \ref{tab1}. \indent\indent \textbf{Algorithm Experiment Setup.} \underline{Algorithm training setup:} All the DNN training is carried out on a machine with one NVIDIA 2080TI GPU and an AMD EPYC 7742 64-Core processor. Throughout the training, we use an Adam optimizer with a batch size of 16, a learning rate of 1E-3, and a weight decay factor of 1E-3. During the training, we incorporate the Pearson correlation coefficient between the network output and the ground truth (i.e., corresponding real-measured ECG signals) into the loss function (see Equation (\ref{eq:loss}) in Section~\ref{sec:overview_formulation}). \underline{Network search setup:} We adopt the one-level optimization as in~\cite{xie2018snas, hu2020dsnas} and a fixed temperature of 1 for the Gumbel-Softmax function. We reuse the above training setting for the supernet weights and adopt an Adam optimizer with a constant learning rate of 1E-3 for the architecture parameters. We then derive the operations with the highest probability for each searchable block at the end of the search. \underline{Algorithm evaluation setup:} To evaluate the reconstruction efficacy, we calculate the correlation between the reconstructed ECG signals and the real-measured ones on the held-out test half of the dataset. Specifically, we first convert the network output, which is in the time-frequency domain, to its time-domain counterpart using the inverse STFT, and then calculate the Pearson correlation coefficient between the reconstructed signals and the original ECG signals, which are time-series waveforms.
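As a concrete reference for the metric, a minimal NumPy sketch of the Pearson correlation computation (assuming the inverse STFT has already been applied to the network output):

```python
import numpy as np

def pearson_corr(reconstructed, reference):
    """Pearson correlation coefficient between a reconstructed ECG
    waveform and the corresponding real-measured one (1-D time series)."""
    x = np.asarray(reconstructed, dtype=float)
    y = np.asarray(reference, dtype=float)
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# a scaled-and-shifted copy of the waveform correlates perfectly
t = np.linspace(0.0, 1.0, 500)
ecg = np.sin(2 * np.pi * 5 * t)
r = pearson_corr(2.0 * ecg + 0.3, ecg)  # r == 1.0 up to rounding
```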
\indent\indent \textbf{Accelerator Experiment Setup.} \underline{Accelerator search setup:} Considering the real-time reconstruction goal, we adopt the commonly used Frames Per Second (FPS) metric. However, other metrics can be easily plugged into our RT-RCG framework depending on the specification of the target applications and the user-specified preference. During the accelerator search process, RT-RCG makes use of a SOTA accelerator performance predictor AutoDNNChip~\cite{xu2020autodnnchip} to obtain a fast and reliable estimation to guide the search towards the optimal solution. \underline{Accelerator evaluation setup:} For evaluating FPGA-based accelerators, we adopt a Xilinx ZC706 evaluation board~\cite{zc706} with the same DSP limit as the baselines~\cite{chaidnn,zhang2018dnnbuilder} for a fair comparison. Specifically, we adopt a standard Vivado HLS design flow~\cite{vivado_HLS}, where the FPS is obtained from the HLS synthesis results for our searched accelerators and the baseline ChaiDNN~\cite{chaidnn}. For DNNBuilder~\cite{zhang2018dnnbuilder}, we utilize their open source simulator to obtain its acceleration results. For the CPU baseline, we evaluate the achieved FPS of the networks being executed on an AMD EPYC 7742 64-Core CPU. For the edge platform baseline, we consider a commonly used edge device~\cite{li_2020_halo,siam2018comparative,wofk2019fastdepth}, i.e., the NVIDIA Edge GPU Jetson TX2~\cite{edgegpu}, where the networks are compiled using TensorRT~\cite{tensorrt}, a C++ library for high-performance inference on NVIDIA GPUs. Additionally, the device is configured to be in max-N mode to make full use of the available resources following~\cite{wofk2019fastdepth}. 
\begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{Figs/network_perf.pdf} \vspace{-2em} \caption{ The average Pearson correlation coefficient between RT-RCG's reconstructed and real-measured ECG time-series signals across all the 14 patients in our dataset, when considering (1) \textbf{Blue}: patient-specific reconstruction from five channels of EGM (see Section~\ref{sec:single_patient}), (2) \textbf{Grey}: reconstruction generalized to new patients (see Section~\ref{sec:patient_Generalizability}), and (3) \textbf{Orange}: patient-specific reconstruction with merely one EGM channel (see Section~\ref{sec:single_channel}).} \vspace{-1.5em} \label{fig:network_performance} \end{figure} \vspace{-1.2em} \subsection{RT-RCG's Searched Algorithms: Patient-specific Reconstruction} \label{sec:single_patient} \vspace{-0.2em} In this subsection, we evaluate RT-RCG's searched networks in a patient-specific setting, where all the search, training, and testing are based on the data collected from the same patient. This is to mimic the case where the pacemakers are customized to each patient. Specifically, for our clinical dataset, which contains sinus and diseased heartbeats of the 14 patients, we equally split it into two subsets for training and testing, respectively. To thoroughly evaluate RT-RCG's searched networks, we consider all of the 14 patients in a patient-specific manner, and plot the resulting correlation (between the reconstructed ECG and the real-measured ECG signals) in Figure~\ref{fig:network_performance} (the blue curve). We can see that the ECG signals reconstructed by RT-RCG's searched networks are highly correlated with the real-measured ones across all of the 14 patients, as evidenced by the resulting Pearson correlation coefficient value ranging from $0.952$ to $0.983$, which is much improved as compared to the correlation value of $0.84$ achieved with the SOTA method~\cite{poree2012surface} using time delay neural networks.
This improvement implies that RT-RCG's searched networks reconstruct ECGs that are much closer to the corresponding real-measured ones than those reconstructed by the SOTA method in~\cite{poree2012surface}. \vspace{-0.8em} \subsection{RT-RCG's Searched Algorithms: Reconstruction Generalized to New Patients} \vspace{-0.3em} \label{sec:patient_Generalizability} In this subsection, we evaluate the efficacy of our RT-RCG's searched networks when being generalized to new patients. Specifically, the networks are searched and trained based on the data of all patients with one of the patients excluded and then tested on the excluded patient. By doing so, this experiment can evaluate the searched networks' generalization capability to unseen new patients, i.e., how well the networks dedicated to a set of patients can perform when adapted to other patients. As shown in the grey curve in Figure~\ref{fig:network_performance}, the correlation between the reconstructed and real-measured ECG signals is consistently higher than $0.93$, except for Patients 1, 2, and 3, whose heartbeat samples are very distinct from the remaining ones, implying the importance of searching/training the algorithms on diverse patients before generalizing them to other patients to ensure the efficacy. Overall, the above experiments indicate the excellent generalization capability of RT-RCG's searched networks. We can expect improved performance if RT-RCG's searched networks are obtained based on more data with diverse ventricular conditions, paving the way for developing ``one-for-all'' reconstruction algorithms which can save a large amount of time and effort needed to collect data for each target patient; this is particularly useful when pre-collecting data for the target patient is not possible.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figs/pacemaker_vis.pdf} \vspace{-3em} \caption{ Visualizing the reconstructed ECG signals under different experiment settings together with the corresponding real-measured ones for Patient 4, where the x-axis is the time sample and the y-axis is the normalized voltage of the waveforms.} \label{fig:waveform}\vspace{-1.3em} \end{figure} \begin{figure}[t] \vspace{-0.5cm} \centering \includegraphics[width=\linewidth]{Figs/net_struct.pdf} \vspace{-2.5em} \caption{An illustration of the (a) reconstruction algorithm pipeline, consisting of the fixed earlier blocks, searchable blocks, and fixed later blocks, (b) choices for the searchable blocks following \cite{wu2019fbnet}, and the RT-RCG's searched network structures when given a constraint of (c) 28.87M MACs, (d) 31.38M MACs, and (e) no more than 15 layers. In (b), K\textit{a}E\textit{b} denotes a convolutional building block with a kernel size of \textit{a} and a channel expansion ratio of \textit{b}, and C\textit{a} denotes a standard convolution layer with a kernel size of \textit{a}.}\vspace{-1em} \label{fig:net_struct} \end{figure} \vspace{-0.5em} \subsection{RT-RCG's Searched Algorithms: Reconstruction Robustness Under EGM Deficiency} \vspace{-0.5em} \label{sec:single_channel} In practice, pacemakers only utilize 1--5 EGM channels, and it is an imperative function of pacemakers to work with only one channel of EGM. Aiming towards practical uses, we thus evaluate our RT-RCG's searched networks under such scenarios, considering the most extreme case where only one out of the five EGM channels is available. Specifically, we search and train the networks based on data with only one EGM channel, and evaluate the correlation between the reconstructed and real-measured ECG signals under the patient-specific setting (similar to Section \ref{sec:single_patient}).
While we observe consistent results when picking different EGM channels as the one to be used, we here show the observations when picking the first channel. As shown in the orange curve (i.e., ``1-channel'') in Figure~\ref{fig:network_performance}, the reconstruction quality under this extreme scenario is surprisingly close to that of the normal setting with all EGM channels on, achieving a correlation ranging from $0.942$ to $0.983$. Furthermore, Figure~\ref{fig:waveform} shows that the reconstructed ECG from only one channel of EGM does not have noticeable degradation when compared with the original ECG signals. This set of experiments demonstrates the excellent robustness of RT-RCG's searched networks in the presence of EGM channel deficiency. \vspace{-0.5em} \subsection{RT-RCG's Searched Algorithms: Visualizing the Searched Network and Reconstructed ECG Signals} \vspace{-0.5em} \label{sec:single_visual} To better understand RT-RCG's searched networks, we here visualize an RT-RCG searched network together with its reconstructed ECG signals. First, as an illustrative example, we visualize the searched network for Patient 4 under a constraint of 28.87M MACs, as illustrated in Figure~\ref{fig:net_struct}(c). In particular, this searched network contains 36 layers excluding the pooling and upsampling layers and a total of 28.87M MACs. In addition, the searched networks under different MAC constraints are similar in terms of the kernel size and expansion ratio choices, yet with different preferences in the networks' depth. As shown in Figure~\ref{fig:net_struct}(d), when the number of MACs is increased to 31.38M, the proposed DNS opts to reduce the frequency of skip connections, while the layer structures in terms of kernel sizes and expansion ratios are similar to those under a constraint of 28.87M MACs.
Second, Figure~\ref{fig:waveform} visualizes the reconstructed ECG signals of RT-RCG's searched networks under various settings, when the reconstruction is performed using (1) 5 EGM channels and (2) 1 EGM channel, or generalized to new patients. While we observe consistent results across different patients, here we only show the visualization for Patient 4 for a better illustration. We can see that the reconstructed ECG signals are close to the real-measured ones when the network structure in Figure~\ref{fig:net_struct}(c) is used, with the largest deviation happening when the algorithm is generalized to a new patient, as expected. \vspace{-0.5em} \subsection{RT-RCG's Searched Accelerators: Achieved FPS over SOTA DNN Accelerators/Platforms} \vspace{-0.5em} \label{sec:fps_hw} In this subsection, we evaluate RT-RCG's searched accelerators by comparing their achieved FPS with that of (1) two SOTA DNN accelerators (DNNBuilder~\cite{zhang2018dnnbuilder} and ChaiDNN~\cite{chaidnn}), (2) the edge GPU (Jetson TX2~\cite{edgegpu}), and (3) a general DNN deployment platform (an AMD EPYC 7742 64-Core CPU~\cite{2ndGenAM48}) under the same conditions. Specifically, we ensure that the reconstruction algorithm (i.e., the searched network for Patient 4 under 28.87M MACs as shown in Figure~\ref{fig:net_struct}(c)) and the network precision be the same as the baselines'. The comparison results are summarized in Table~\ref{tab:hw_fps}. We can see that RT-RCG's searched accelerator consistently achieves a better FPS than all of the four baselines, based on the same network structure and hardware constraints. Specifically, the RT-RCG searched accelerator improves the achieved FPS, which in turn can be translated to processed heartbeat samples per second, by $1.87\times$, $1.73\times$, $1.22\times$, and $70.90\times$, as compared to the DNNBuilder, ChaiDNN, the edge GPU, and the CPU, respectively. 
This set of experiments indicates that the integrated DAS engine of RT-RCG is effective, and that RT-RCG's automatically searched accelerator can even outperform expert-designed SOTA DNN accelerators, paving the way for the fast development of reconstruction accelerators. \begin{table}[!b] \vspace{-1em} \centering \caption{The achieved FPS of the RT-RCG's searched accelerator and the four SOTA DNN accelerators/platforms given the same network (see Figure~\ref{fig:net_struct}(c)), network bit precision, and clock frequency (except for the edge GPU and CPU cases), where the number of PEs indicates the peak usage of the processing elements, corresponding to the number of used DSPs for FPGA-based accelerators. }\label{tab:hw_fps}\vspace{-1em} \scriptsize \resizebox{0.7\textwidth}{!}{ \begin{tabular}{cccccc}\toprule Platform &Clock frequency &\# of PEs &Bit precision &FPS \\\midrule \midrule DNNBuilder~\cite{zhang2018dnnbuilder} &200 MHz &435 &16 & 228\\ \textbf{RT-RCG} &\textbf{200 MHz} &\textbf{428} &\textbf{16} &\textbf{427} (\boldmath{$1.87\times$}) \\ \midrule ChaiDNN~\cite{chaidnn} &200 MHz &212 &8 & 401\\ \textbf{RT-RCG} &\textbf{200 MHz} &\textbf{185} &\textbf{8} &\textbf{696} (\boldmath{$1.73\times$}) \\\midrule Jetson-TX2~\cite{edgegpu} & 1.3 GHz &/ & 32 & 1190 (183.07 @ 200 MHz) \\ \textbf{RT-RCG} &\textbf{200 MHz} &\textbf{870} &\textbf{32} &\textbf{229} ($0.19\times$ w/ 1.3 GHz; \boldmath{$1.22\times$} w/ 200 MHz) \\\midrule CPU~\cite{2ndGenAM48} & 2.25 GHz & / & 32 & 21 (3.23 @ 200 MHz)\\ \textbf{RT-RCG} &\textbf{200 MHz} &\textbf{870} &\textbf{32} &\textbf{229} ($11.39\times$ w/ 2.25 GHz; \boldmath{$70.90\times$} w/ 200 MHz) \\ \bottomrule \end{tabular} } \vspace{0em} \end{table} \begin{table}[b!] \vspace{-1.5em} \centering \caption{The resulting subgroups for the DNNBuilder implementation of the searched network shown in Figure~\ref{fig:net_struct}(c), which enables DNNBuilder's feasible implementation of DNNs with over 15 layers.
Note that each subgroup forms one pipeline stage, i.e., layers within each subgroup share the same pipeline stage.}\label{tab:net_group}\vspace{-1em} \scriptsize \resizebox{0.7\textwidth}{!}{ \begin{tabular}{cccccccccc}\toprule Group ID & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\midrule \midrule Layer ID &(1) &(2, 3) &(4 $\sim$ 13) &(15 $\sim$ 18) & (19 $\sim$ 26) &(27 $\sim$ 33)& (34) & (35) &(36) \\ \bottomrule \end{tabular} } \end{table} More details regarding the experiment settings for each baseline are described below: \textbf{DNNBuilder.} For the comparison with the SOTA DNN accelerator named DNNBuilder, we adopt a DSP limit of 450, a 16-bit precision, and a frequency of 200 MHz to be the same as the original setting in DNNBuilder~\cite{zhang2018dnnbuilder}. As the reported DNNBuilder design uses a layer-wise pipeline micro-architecture, it is required to constrain the maximum number of DNN layers to be smaller than 15, for meeting the DRAM access bandwidth constraint, as shown in their open source codes~\cite{zhang2018dnnbuilder}. To support RT-RCG's searched networks, which have more than 15 layers, we first divide the network into 9 subgroups, with each containing some consecutive layers from the original network processed sequentially, and then execute these subgroups in a pipeline fashion based on the open source design of DNNBuilder. The subgroups are formed to balance the latency among them and thus maximize the achieved throughput of DNNBuilder given the specific DNN structure. Specifically, the 9 subgroups for the network (see Figure~\ref{fig:net_struct}(c)) are shown in Table~\ref{tab:net_group}. Note that we also evaluate RT-RCG's searched accelerators over DNNBuilder when constraining RT-RCG's network search space to have networks with fewer than 15 layers, as discussed in Section~\ref{sec:15-layer}.
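The latency-balancing grouping above can be sketched with a simple greedy pass. The function below is our own illustration (not DNNBuilder's code), assuming per-layer latency estimates are available:

```python
def balance_subgroups(layer_latencies, n_groups):
    """Greedily split consecutive layer indices into n_groups contiguous
    subgroups (pipeline stages), closing a stage once its accumulated
    latency reaches the per-stage average, so that the slowest stage --
    the pipeline's throughput bottleneck -- stays small."""
    target = sum(layer_latencies) / n_groups
    groups, current, acc = [], [], 0.0
    for i, lat in enumerate(layer_latencies):
        remaining = len(layer_latencies) - i
        # forced close: every remaining layer must open its own stage
        must_close = bool(current) and remaining == n_groups - len(groups) - 1
        # greedy close: the running stage latency would exceed the target
        want_close = bool(current) and acc + lat > target and len(groups) < n_groups - 1
        if must_close or want_close:
            groups.append(current)
            current, acc = [], 0.0
        current.append(i)
        acc += lat
    groups.append(current)
    return groups

# e.g., a dominant first layer gets its own stage
stages = balance_subgroups([5.0, 1.0, 1.0, 1.0], 3)  # -> [[0], [1, 2], [3]]
```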
\textbf{ChaiDNN.} We also benchmark RT-RCG's searched accelerator with another SOTA FPGA DNN accelerator named ChaiDNN~\cite{chaidnn}, with its DietChai\_z variant enabled to optimize its performance under more resource constrained scenarios. Specifically, we select its 128-compute-DSP mode which results in a DSP limit of 212 when accelerating the given searched network. \textbf{Jetson TX2.} When comparing with the edge GPU Jetson TX2 which is a commonly used IoT device, we set the DSP limit to be 900 (the maximum amount available), so that our implementations have roughly the same power consumption as the edge GPU Jetson TX2. Note that the operating clock frequency of Jetson TX2 is 1.3 GHz, which is far higher than the maximum supported stable frequency of our platform ZC706. We thus scale the Jetson TX2's throughput to that corresponding to a frequency of 200 MHz for a fair comparison as shown in Figure~\ref{fig:net_struct}(c), under which the achieved FPS of the RT-RCG's searched accelerator outperforms the edge GPU by $1.22\times$. \textbf{CPUs.} Considering that CPUs are currently the mainstream computing platforms, we also evaluate RT-RCG's searched accelerator over an AMD EPYC 7742 64-Core processor given the same network. For a fair comparison, we adopt a DSP limit of 900, which is the maximum available DSP resource on our adopted ZC706 board. Note that the power consumption of the CPU is $\sim225$W, which significantly dwarfs that of the ZC706 board which is $\sim 10$W. \textbf{Discussion and Implication.} There are several levels of implication from our experiments (including the latency evaluation in Table~\ref{tab:hw_latency}). 
First, we can see that our proposed RT-RCG indeed can automatically generate (1) reconstruction networks that provide high-quality reconstruction, outperforming SOTA techniques with excellent generalization capability, and (2) accelerators to run the reconstruction networks that achieve a better acceleration efficiency than diverse SOTA accelerators/platforms under the same conditions. Second, the performance achieved by RT-RCG shows that it is indeed possible for doctors to remotely monitor the status of pacemakers and patients via reconstructed ECG signals, given the achieved FPS. Specifically, in our case, such real-time monitoring is possible as the achieved FPS (229 $\sim$ 606 FPS in our proposed RT-RCG) is much higher than the required 2 FPS (the highest input rate in our dataset is 2 Hz as it requires at least 0.5s to collect each piece of input). More importantly, the high achieved FPS implies that real-time intervention is possible, which is especially valuable for certain cardiac patients, particularly those diagnosed with lethal ventricular arrhythmias, for whom the higher the FPS, the sooner doctors can respond to provide the necessary intervention in life-critical situations. Despite the promising reconstruction efficacy and efficiency achieved by RT-RCG, our effort in this paper is merely a heuristic step towards next-generation pacemakers equipped with real-time monitoring and intervention. In particular, the energy cost of the RT-RCG framework currently implemented on FPGA is still significantly higher than the stringent energy consumption required by pacemakers. We recognize that applying RT-RCG searched networks and accelerators to real-world pacemakers would require ultra-energy-efficient ASIC implementation, which we leave as one of our most exciting future works.
\vspace{-0.5em} \vspace{-0.3em} \subsection{RT-RCG's Searched Accelerators: Achieved Latency over SOTA DNN Accelerators/Platforms} \vspace{-0.5em} \label{sec:latency_hw} ECG signals can be used to detect irregular ventricular rhythms and to trigger a corresponding alert mechanism~\cite{sukanesh2010gsm}. The latency from the occurrence of the rhythms to the mechanism being triggered, denoted as the start-up latency, can be of great significance to the patients' health and life, and the EGM-ECG conversion contributes a considerable portion of this whole pipeline; this latency is thus important to evaluate. Therefore, we also evaluate RT-RCG's searched accelerators over SOTA DNN accelerators/platforms in terms of this latency. Note that the achieved start-up latency and FPS have a \begin{wraptable}{r}{0.5\textwidth} \centering \vspace{-1em} \caption{The start-up latency and FPS of the RT-RCG generated accelerator given the network generated for Patient 4 (see Section~\ref{sec:single_patient}) under different platforms. }\label{tab:hw_latency}\vspace{-1em} \scriptsize \resizebox{0.5\textwidth}{!}{ \begin{tabular}{cccc}\toprule Platform & \# of PEs & Start-up latency (ms) &FPS \\\midrule\midrule ChaiDNN~\cite{chaidnn} & 212 & 3.01 & 401 \\ \midrule RT-RCG & 185 & 3.29 (+9.3\%) & 696 (+73.6\%) \\\midrule \textbf{RT-RCG-latency} &\textbf{171} &\textbf{2.39 (+20.6\%)} & \textbf{419 (+4.5\%)}\\ \bottomrule \end{tabular} } \vspace{-1.5em} \end{wraptable} trade-off relationship. An advantage of our RT-RCG framework is that users can customize their own desired trade-off given their priority and conditions. As shown in Table~\ref{tab:hw_latency}, we provide two searched accelerators of RT-RCG which favor the achieved start-up latency and FPS, respectively, with the former achieving a 27.36\% better start-up latency at the cost of a 38.8\% lower FPS.
We can see that RT-RCG's automatically searched accelerators achieve a smaller start-up latency as compared to the baseline under the same hardware constraint, i.e., 20.60\% over the expert-designed accelerator ChaiDNN~\cite{chaidnn}. This set of experiments again validates the effectiveness of our RT-RCG framework's DAS engine. \vspace{-0.5em} \subsection{RT-RCG's Searched Accelerators: Constrained Networks with $<$15 Layers} \vspace{-0.2em} \label{sec:15-layer} \begin{table}[t!] \centering \caption{The reconstruction accuracy of the searched network under the constraint of $<$ 15 layers, compared with the network in Figure~\ref{fig:net_struct}(c), which was searched without the layer-number constraint. }\label{tab:15-net_acc }\vspace{-1em} \scriptsize \resizebox{0.95\textwidth}{!}{ \begin{tabular}{ccccccccccccccc}\toprule Patient ID & 1 & 2 & 3 & 4& 5& 6 & 7 &8 & 9& 10 & 11& 12 &13&14\\\midrule \midrule 36-layer-net &0.9613 &0.9524 &0.9678 &0.9634 &0.9668 &0.9730 &0.9622 &0.9784 &0.9830 &0.9727 &0.9610 &0.9735 &0.9589 &0.9821 \\\midrule \textbf{15-layer-net} & \textbf{0.9601} & \textbf{0.9463} & \textbf{0.9641} & \textbf{0.9640} & \textbf{0.9674} & \textbf{0.9714} & \textbf{0.9626} & \textbf{0.9762} & \textbf{0.9829} & \textbf{0.9656} & \textbf{0.9571} & \textbf{0.9620} & \textbf{0.9580} & \textbf{0.9824}\\ \midrule \textbf{Improvements from 36-layer} & \textbf{-0.0012}& \textbf{-0.0061}& \textbf{-0.0037}& \textbf{0.00057}& \textbf{0.00058}& \textbf{-0.0016}& \textbf{0.00041}& \textbf{-0.0022}& \textbf{-0.00011}& \textbf{-0.0071}& \textbf{-0.0039}& \textbf{-0.011}& \textbf{-0.00088}& \textbf{0.00026}\\ \bottomrule \end{tabular} } \vspace{-2.5em} \end{table} As mentioned in Section \ref{sec:fps_hw}, the baseline DNNBuilder~\cite{zhang2018dnnbuilder} adopts a layer-wise acceleration micro-architecture, which favors networks with fewer than 15 layers.
To validate the general efficacy of our RT-RCG framework, we here present experiments where we constrain the network search space to \begin{wraptable}{r}{0.5\textwidth} \centering \vspace{-1.5em} \caption{RT-RCG's searched accelerators vs. DNNBuilder, when constraining the networks to have fewer than 15 layers.}\label{tab:15-layer}\vspace{-1em} \scriptsize \resizebox{0.5\textwidth}{!}{ \begin{tabular}{cccc}\toprule Platform & \# of PEs &Network MACs (M) & FPS \\\midrule\midrule DNNBuilder-36-layer & 435 &28.87&228 \\\midrule DNNBuilder-15-layer & 441 &24.23&340 (+49.1\%) \\\midrule \textbf{RT-RCG-36-layer} &\textbf{ 428} & \textbf{28.87}&\textbf{ 427 (+87.3\%)}\\\midrule \textbf{RT-RCG-15-layer} &\textbf{ 433} & \textbf{24.23}&\textbf{ 447 (+96.1\%)}\\ \bottomrule \end{tabular} } \vspace{-1.5em} \end{wraptable}ensure that the searched networks have fewer than 15 layers and then compare the acceleration performance of RT-RCG's searched accelerator with that of the DNNBuilder baseline under the patient-specific setting. Specifically, we adaptively adjust $\lambda$ in Equation~(\ref{eq:update_alpha}) by doubling it whenever the depth of the derived network surpasses 15 layers. As shown in Figure~\ref{fig:net_struct}(e), with the number of layers being constrained to 15, the searched network contains only 16.07\% fewer MACs as compared to the unconstrained case (see Figure~\ref{fig:net_struct}(c)), while Table~\ref{tab:15-net_acc } indicates that our RT-RCG framework's DNS engine is able to adapt to different constraints while maintaining the networks' performance (i.e., reconstruction quality in terms of the correlation): 0.9601 vs. 0.9613 for Patient 4. In particular, RT-RCG results in a wider network under this depth constraint in order to maintain the network capacity and thus the reconstruction efficacy.
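The adaptive $\lambda$ schedule can be sketched as follows; this is a minimal illustration with a hypothetical function name, standing in for the update of the complexity-loss weight in Equation~(\ref{eq:update_alpha}):

```python
def adjust_depth_penalty(lam, derived_depth, max_depth=15):
    """Double the complexity-loss weight whenever the currently derived
    network exceeds the layer budget, steering the search toward
    shallower architectures; otherwise leave the weight unchanged."""
    return 2.0 * lam if derived_depth > max_depth else lam

lam = adjust_depth_penalty(0.1, derived_depth=36)  # -> 0.2
```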
Meanwhile, as shown in Table~\ref{tab:15-layer} we can see that (1) DNNBuilder's achieved FPS is improved by 49.1\% as compared to the unconstrained case presented in Section~\ref{sec:fps_hw}, which has a 36-layer network, under the same DSP constraint, and (2) RT-RCG's automatically searched accelerator again outperforms the expert-designed accelerator DNNBuilder with a 23.94\% higher FPS. This set of experiments, together with the ones in Section~\ref{sec:fps_hw}, validates the general effectiveness of our RT-RCG framework across different network search spaces and accelerated networks. \section{Conclusion} The costly and time-consuming hospital visits required for patients with implanted pacemakers, together with the recent advances in IoT technologies, have motivated an increasing need for remote monitoring of pacemakers to reduce hospital visit costs and to provide continuous monitoring and potential real-time intervention, which can be life-critical under some irregular and infrequent ventricular rhythms. However, the signals provided by pacemakers and the ones doctors use for diagnosis during in-person clinical visits are different, with the former being EGM signals and the latter being ECG signals, calling for high-quality and real-time ECG reconstruction from the recorded EGM signals. To this end, we propose, design, and validate a first-of-its-kind framework dubbed RT-RCG, which can automatically search for (1) efficient DNN structures and then (2) corresponding hardware accelerators to implement the ECG-EGM reconstruction process, respectively, tackling both the reconstruction efficacy and efficiency.
Specifically, RT-RCG integrates a new DNN search space tailored for the required ECG-EGM reconstruction to enable automated search for DNNs that conduct ECG reconstruction with much improved quality over SOTA solutions, and incorporates a differentiable acceleration search engine that can automatically generate optimal accelerators to accelerate the resulting DNNs from the previous step. Extensive experiments and ablation studies under various settings consistently validate the effectiveness and advantages of the proposed RT-RCG in delivering higher reconstruction quality and better reconstruction efficiency as compared to SOTA reconstruction algorithms and DNN accelerators. Our RT-RCG makes a first step towards the automated generation of ECG-EGM reconstruction DNNs along with matched accelerators, enabling real-time critical intervention in instant response to irregular and infrequent ventricular rhythms that require timely treatment, and paving the way for more pervasive remote monitoring of pacemakers via ECG-EGM reconstruction.
\section{The coupled thermo-hydraulic problem and its solution \label{Appendix A}} \subsection{Problem description} \noindent We have already discussed that we are interested in the limiting case where the position and profile of the PSZ can be prescribed inside the fault gouge. Knowing the form of the profile of the shear plastic strain rate $\dot{\gamma}^p(y,t)$ (equation \eqref{ch: 6 plastic_strain_rate_prf}), the two-way coupled problem in the form of temperature and pressure diffusion equations is given by: \begin{itemize} \item[•] Heat diffusion BVP: \begin{align} &\frac{\partial \Delta T}{\partial t}=c_{th}\frac{\partial^2 \Delta T}{\partial y^2}+\frac{1}{\rho C}\tau(t)V\delta_{\text{Dirac}}(y-u(t)),\nonumber\\ &\Delta T\big{\|}_{y=0}=\Delta T\big{\|}_{y=\text{H}}=0,\nonumber\\ &\Delta T(y,0)=0, \label{ch: 6 Temp_known_gamma} \end{align} \noindent where $\Delta T(y,t)$ is the unknown change in temperature in the fault gouge layer of height $\text{H}$. The fault gouge is considered to be under isothermal boundary conditions during shearing. The coupled pressure problem is given by: \item[•] Pressure diffusion BVP, in its homogeneous form: \begin{align} &\frac{\partial \Delta P}{\partial t}=c_{hy}\frac{\partial^2 \Delta P}{\partial y^2}+\Lambda\frac{\partial \Delta T}{\partial t},\nonumber\\ &\Delta P\big{\|}_{y=0}=\Delta P\big{\|}_{y=\text{H}}=0,\nonumber\\ &\Delta P(y,0)=0, \label{ch: 6 Press_known_gamma} \end{align} \noindent where $\Delta P(y,t)=P(y,t)-P_0$ is the unknown pressure difference between the fault gouge layer and the boundaries, while $P_0=P(y,0)$ is the initial pore fluid pressure, which is kept constant at the boundaries of the fault gouge (drained boundary conditions). \end{itemize} We note that the above formulations are also valid in the case of an unbounded domain, considering $\text{H}\to\pm\infty$. The pressure problem also affects the temperature BVP through the value of the shear stress (fault friction), $\tau(t)$, in the yielding region.
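As a numerical illustration of the coupled BVPs \eqref{ch: 6 Temp_known_gamma} and \eqref{ch: 6 Press_known_gamma}, the sketch below integrates both equations with an explicit finite-difference scheme for a stationary PSZ at mid-layer. All parameter values are hypothetical, and the feedback of the pressure on the source strength $\tau(t)$ is deliberately ignored here:

```python
import numpy as np

# Explicit finite-difference sketch of the coupled diffusion problem:
#   dT/dt = c_th * d2T/dy2 + (source at a stationary shear zone)
#   dP/dt = c_hy * d2P/dy2 + Lam * dT/dt
# with homogeneous Dirichlet (isothermal / drained) boundaries.
# All parameter values below are hypothetical.
H, N = 1.0, 101                       # layer height, grid points
y = np.linspace(0.0, H, N)
dy = y[1] - y[0]
c_th, c_hy, Lam = 1.0, 4.0, 0.5       # diffusivities and Lambda
dt = 0.2 * dy**2 / max(c_th, c_hy)    # explicit-stability margin
src = np.zeros(N)
src[N // 2] = 1.0 / dy                # discrete Dirac at mid-layer

T, P = np.zeros(N), np.zeros(N)
for _ in range(2000):
    lapT = np.zeros(N)
    lapP = np.zeros(N)
    lapT[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dy**2
    lapP[1:-1] = (P[2:] - 2.0 * P[1:-1] + P[:-2]) / dy**2
    dTdt = c_th * lapT + src          # heat equation with frictional source
    T = T + dt * dTdt
    P = P + dt * (c_hy * lapP + Lam * dTdt)  # thermal pressurization term
    T[0] = T[-1] = 0.0                # isothermal boundaries
    P[0] = P[-1] = 0.0                # drained boundaries
```

Under isothermal, drained boundaries the temperature and pressure rises remain localized around the heat source, which is the regime the analytical treatment that follows exploits.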
According to the Mohr-Coulomb yield criterion, subtracting the initial ambient pore fluid pressure $P_0$ we get: \begin{align} \tau(t)=f(\sigma_n-P_0)-f\Delta P_{max}(t). \label{ch: 6 material_eq_known_gamma} \end{align} \noindent We note here that once the form of the plastic strain-rate profile $\dot{\gamma}^p(y,t)$ is known, as in equation \eqref{ch: 6 Temp_known_gamma}, the only unknown is the fault friction $\tau(t)$. We can find the solution $\Delta T(y,t)$ of the temperature equation in terms of the unknown fault friction $\tau(t)$ and replace it into the pressure equation, which can then also be described as an unknown function of friction. Finally, we can determine the fault friction $\tau(t)$ by inserting the pressure increase solution $\Delta P(y,t)$ into the material equation \eqref{ch: 6 material_eq_known_gamma} and solving for $\tau(t)$. The above equations have constant coefficients and, since the loading is prescribed (up to the unknown $\tau(t)$), the system has been transformed into a one-way coupled set of linear 1D diffusion equations of the form: \begin{align} &\frac{\partial q(y,t)}{\partial t} = c \frac{\partial^2 q(y,t)}{\partial y^2}+\frac{1}{k}g(y,t),\nonumber\\ &a_1\frac{\partial q}{\partial n_1}\big{\|}_{y=0}+b_{1}q\big{\|}_{y=0}=f_1(t),\;{t>0},\nonumber\\ &a_2\frac{\partial q}{\partial n_2}\big{\|}_{y=\text{H}}+b_{2}q\big{\|}_{y=\text{H}}=f_2(t),\;{t>0},\nonumber\\ &q(y,0) = I(y), \end{align} \noindent where $q(y,t)$ is the unknown function (e.g. the temperature $\Delta T(y,t)$), $f_i,\;i=1,2$ are the values of the general Robin boundary conditions with coefficients $a_i,\;b_i,\;{i=1,2}$, $I(y)$ is the initial condition and $g(y,t)$ is the loading function (here related to frictional dissipation). We denote by $c$ the diffusivity and by $k$ the conductivity of the material.
\subsection{Fundamental solution \label{ch: 6 subsec_2.2}} \noindent We can find the solution of the above BVP by application of Green's theorem, which for the general diffusion case in 1D reads (see \cite{cole2010heat}): \begin{align} \begin{aligned} q(y,t)=&\int_{0}^{\text{H}}G(y,t;y^\prime,0,c)I(y^\prime)dy^\prime+\frac{c}{k}\int_0^t\int_{0}^{\text{H}}g({y^\prime,t^\prime})G(y,t;y^\prime,t^\prime,c)dy^\prime dt^\prime\\ &+c\int_0^t\sum_{i=1}^2\left[\frac{f_i(t^\prime)}{a_i}G(y,t;y^\prime_i,t^\prime,c)\right]dt^\prime-c\int_0^t\sum_{i=1}^2\left[f_i(t^\prime)\frac{\partial G(y,t;y^\prime,t^\prime,c)}{\partial n_i^\prime}\bigg{\|}_{y^\prime=y_i}\right]dt^\prime,\label{ch: 6 eq: Green's solution } \end{aligned} \end{align} \noindent where $G(y,t;y^\prime,t^\prime,c)$ is the appropriate Green's function. The first two terms correspond to the initial condition $I(y)$ and the loading term $g(y,t)$ respectively, and $c,k$ denote the diffusivity and the conductivity associated with the unknown quantity $q(y,t)$. The third term is important for non-homogeneous Neumann and Robin boundary conditions, while the fourth term refers to non-homogeneous Dirichlet boundary conditions. In what follows, the last two terms in equation \eqref{ch: 6 eq: Green's solution } are omitted due to the homogeneous Dirichlet boundary conditions in the problems of temperature and pressure-difference diffusion at hand.\\ \newline \noindent Applying the solution in terms of the Green's function \eqref{ch: 6 eq: Green's solution } to problems \eqref{ch: 6 Temp_known_gamma} and \eqref{ch: 6 Press_known_gamma}, we obtain the solution in terms of the Green's function specific to each diffusion problem.
\begin{align} &\begin{aligned} \Delta T(y,t)&=\frac{c_{th}}{k_T}\int_0^t\int_{-\infty}^{\infty}g_T(y^\prime,t^\prime)G(y,t;y^\prime,t^\prime,c_{th})dy^\prime dt^\prime,\\ \end{aligned}\\ &\begin{aligned} \Delta P(y,t)&= \frac{c_{hy}}{k_H }\int_0^t\int_{-\infty}^{\infty}g_H(y^\prime,t^\prime)G(y,t;y^\prime,t^\prime,c_{hy})dy^\prime dt^\prime,\\ \end{aligned} \end{align} \noindent where $c_{th},k_T$ are the thermal diffusivity-conductivity pair and $c_{hy},k_H$ are their hydraulic counterparts. Similarly, $(g_{T},g_{H})$ are the loading functions, while $G(y,t;y^\prime,t^\prime,c)$ is the Green's function kernel for the thermal ($c=c_{th}$) and pressure ($c=c_{hy}$) diffusion problems respectively.\\ \newline \noindent In the case of the coupled pressure problem \eqref{ch: 6 Press_known_gamma}, with the temperature acting as a loading function, we are interested in rewriting the system's response in terms of the dissipative loading $\left(\frac{1}{\rho C}\tau(t)\dot{\gamma}^p\right)$ of the temperature equation \eqref{ch: 6 Temp_known_gamma}. This way we can connect the pressure response $\Delta P(y,t)$ to the fault friction $\tau(t)$, which is the main unknown. We can do this by replacing the temperature $\Delta T(y,t)$ in the pressure diffusion equation \eqref{ch: 6 Press_known_gamma} with the impulse response of equation \eqref{ch: 6 Temp_known_gamma} due to an impulsive (Dirac) thermal load. This way the response obtained from the pressure diffusion equation is a Green's function kernel that contains the influence of an impulsive thermal load (see Appendix \ref{Appendix E} for the detailed derivation in the cases of 1) a bounded domain for a stationary impulsive thermal load and 2) an unbounded domain subjected to a moving impulsive thermal load).
The pressure solution can then be written as: \begin{align} \Delta P(y,t)=P(y,t)-P_0&= \frac{\Lambda\dot{\delta}}{\rho C(c_{hy}-c_{th})}\int_0^t\int_{-\infty}^{\infty}g_T(y^\prime,t^\prime)G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})dy^\prime dt^\prime,\label{ch: 6 eq: pressure_sol_mod} \end{align} \noindent where $G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})$ is the Green's function kernel of the pressure equation \eqref{ch: 6 Press_known_gamma} containing the influence of an impulsive thermal load from the temperature equation \eqref{ch: 6 Temp_known_gamma}.\\ \newline \noindent Having found the pressure solution $\Delta P(y,t)$ as a function of $g_{T}$, we can then replace \eqref{ch: 6 eq: pressure_sol_mod} into the material description equation \eqref{ch: 6 material_eq_known_gamma}. For the case of 1D shear $\tau$ under constant normal load $\sigma_n$, which we consider throughout this paper, the material law is transformed into the integral equation: \begin{align} \tau(t) = f(\sigma_n-P_0)-C\int_0^t\int_{-\infty}^{\infty}g_T(y^\prime,t^\prime)G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})dy^\prime dt^\prime, \label{ch: 6 eq: integral equation} \end{align} \noindent where $C=\frac{f\Lambda\dot{\delta}}{\rho C(c_{hy}-c_{th})}$. Due to the concentrated nature of the thermal load (Dirac distribution), the integral equation \eqref{ch: 6 eq: integral equation} can be brought to its final form: \begin{align} \tau(t) = f(\sigma_n-P_0)-C\int_0^t\tau(t^\prime)G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})\big{\|}_{y=y^\prime(t^\prime)} dt^\prime. \label{ch: 6 eq: integral equation1} \end{align} \noindent The above integral equation is a linear Volterra integral equation of the second kind \cite{wazwaz2011linear}. We note here that this equation is valid only at the position of the yielding plane, which has to coincide with the position of the maximum pressure inside the layer ($y=y^\prime(t^\prime)$).
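Numerically, such a Volterra equation of the second kind can be solved by product integration, marching forward in time. A minimal trapezoidal sketch follows (the constant $C$ is absorbed into the kernel; the test kernel $K\equiv 1$ with $f\equiv 1$, whose exact solution is $\tau(t)=e^{-t}$, is an illustration only, not the actual kernel $G^\star$):

```python
import numpy as np

def solve_volterra(kernel, f, T_end, n):
    """Trapezoidal product-integration solver for the linear Volterra
    equation of the second kind:
        tau(t) = f(t) - int_0^t kernel(t, s) * tau(s) ds.
    """
    h = T_end / n
    t = np.linspace(0.0, T_end, n + 1)
    tau = np.zeros(n + 1)
    tau[0] = f(t[0])
    for i in range(1, n + 1):
        # trapezoid weights: 1/2 at the endpoints, 1 in between;
        # the diagonal term kernel(t_i, t_i) * tau_i goes to the left side
        acc = 0.5 * kernel(t[i], t[0]) * tau[0]
        acc += sum(kernel(t[i], t[j]) * tau[j] for j in range(1, i))
        tau[i] = (f(t[i]) - h * acc) / (1.0 + 0.5 * h * kernel(t[i], t[i]))
    return t, tau
```

In equation \eqref{ch: 6 eq: integral equation1} the kernel would instead be $G^\star$ evaluated at the yielding plane, where it coincides with the position of the pressure maximum.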
This has been proven to hold true for the cases presented on the unbounded domain (see Appendix \ref{Appendix I}). In the case of a bounded fault gouge under the influence of a traveling PSZ (thermal load) this holds true only in the regions away from the boundary. Nevertheless, the difference between the position of the traveling thermal load and that of $P_{max}$ is small (see also Figure \ref{ch: 6 fig: fields_moving_bounded}). \chapter{Appendix E \label{Appendix E}} \section{Derivation of the coupled pore fluid pressure diffusion kernel.} \noindent In this appendix we derive the coupled pore fluid pressure diffusion kernel for the cases of a bounded domain subjected to a stationary Dirac load and an unbounded domain under a moving Dirac load. Our procedure follows the discussion in \cite{Lee1987}, where the same problem was solved for a stationary Dirac thermal load on an unbounded domain. \subsection{Stationary thermal load, coupled pore fluid pressure Green's kernel for a bounded domain.} \noindent In the case of the bounded domain we proceed by applying the method of separation of variables and then expanding the solution in a Fourier series. We note here that the coupled pressure and temperature diffusion equations have the same form of linear partial differential operators and boundary conditions, and therefore their solutions belong to the same space of Sturm-Liouville problems. In essence, the two solutions have the same eigenfunctions. In the case of the bounded domain the temperature diffusion equation has the solution given in \cite{cole2010heat}: \begin{align} \Delta T(y,t) = \sum_{n=1}^\infty\frac{2}{\text{H} \rho C}\int_0^{t}\int_{-\infty}^\infty g(y^\prime, t^\prime)\exp{\left[-\lambda_n^2 c_{th}(t-t^\prime)\right]}\sin{\left(\lambda_n y\right)}\sin{\left(\lambda_n y^\prime\right)}dy^\prime dt^\prime, \end{align} \noindent where $\lambda_n=\frac{n\pi}{\text{H}}$ is the Sturm-Liouville eigenvalue and $\text{H}$ is the length of the bounded domain.
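A quick numerical sanity check of this series solution (hypothetical values; `nmax` is the truncation order): for $c_{th}(t-t^\prime)\ll\text{H}^2$ and interior points, the Dirichlet series kernel must coincide with the free-space heat kernel, since the boundary-image contributions are exponentially small:

```python
import numpy as np

def series_kernel(y, yp, t, c, H=1.0, nmax=400):
    # Truncated Dirichlet series Green's function on [0, H]:
    # (2/H) * sum_n exp(-lam_n^2 c t) sin(lam_n y) sin(lam_n y')
    n = np.arange(1, nmax + 1)
    lam = n * np.pi / H
    return (2.0 / H) * np.sum(np.exp(-lam**2 * c * t)
                              * np.sin(lam * y) * np.sin(lam * yp))

def free_space_kernel(y, yp, t, c):
    # Unbounded-domain (free-space) heat kernel, for comparison.
    return np.exp(-(y - yp)**2 / (4.0 * c * t)) / np.sqrt(4.0 * np.pi * c * t)
```

With $c=1$, $t-t^\prime=10^{-3}$ and $\text{H}=1$ the two kernels agree to better than $10^{-8}$ at interior points, while the series vanishes identically at the boundaries.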
The eigencondition for the homogeneous Dirichlet boundary conditions is given by: \begin{align} \sin\left({\frac{n\pi}{\text{H}}\text{H}}\right)=0,\; \lambda_n=\frac{n\pi}{\text{H}},\; n=1,2,... \end{align} \noindent We note here that the homogeneous pressure diffusion partial differential equation on the above bounded domain has the same boundary conditions. Therefore, the pore fluid pressure solution can be written with the same eigenfunctions as above. Replacing the pore fluid pressure eigenfunction expansion $\Delta P(y,t)=P(y,t)-P_0=\sum_{n=1}^\infty \tilde{p}_n(t)\sin{\frac{n\pi y}{\text{H}}}$ into the coupled pressure diffusion partial differential equation, \begin{align} &\frac{\partial \Delta P(y,t)}{\partial t}-c_{hy}\frac{\partial^2 \Delta P(y,t)}{\partial y^2}=\Lambda\frac{\partial \Delta T(y,t)}{\partial t},\nonumber\\ &\Delta P(y,0)=0, \nonumber\\ &\Delta P(0,t)=\Delta P(\text{H},t)=0, \end{align} \noindent we obtain: \begin{align} \sum_{n=1}^\infty\frac{\partial \tilde{p}_n(t)}{\partial t}\sin{\lambda_n y}+c_{hy}\sum_{n=1}^\infty \lambda^2_n\tilde{p}_n(t)\sin{\lambda_n y}= \frac{2 \Lambda}{\text{H} \rho C}\sum_{n=1}^\infty\sin{\lambda_n y} \frac{\partial T_n(t)}{\partial t}&,\\ \intertext{where $T_n(t)$ is given as:} T_n(t)=\int_0^t\int_{-\infty}^\infty g(y^\prime, t^\prime)\exp{\left[-\lambda_n^2 c_{th}(t-t^\prime)\right]}\sin{\lambda_n y^\prime}dy^\prime dt^\prime. \nonumber \end{align} \noindent Isolating each eigenfunction $\sin{\lambda_n y}$ we arrive at the following first-order linear differential equations involving the unknown coefficient $\tilde{p}_n(t)$ and the loading coefficient $T_n(t)$ for each particular component of the solution series expansion. \begin{align} \frac{\partial \tilde{p}_n(t)}{\partial t}+c_{hy} \lambda^2_n\tilde{p}_n(t)=& \frac{2 \Lambda}{\text{H} \rho C}\frac{\partial T_n(t)}{\partial t},\;t\geq 0.
\end{align} \noindent Applying the Laplace transformation in time: \begin{align} &s\tilde{P}_n(s)+c_{hy} \lambda^2_n\tilde{P}_n(s)=\frac{2 \Lambda}{\text{H} \rho C}\frac{s}{s+\lambda^2_n c_{th}}\int_{-\infty}^\infty G(y^\prime, s)\sin{\lambda_n y^\prime}dy^\prime ,\\ &\tilde{P}_n(s)=\frac{2 \Lambda}{\text{H} \rho C}\frac{s}{(s+\lambda^2_n c_{th})(s+\lambda^2_n c_{hy})}\int_{-\infty}^\infty G(y^\prime, s)\sin{\lambda_n y^\prime}dy^\prime. \end{align} \noindent Applying the inverse Laplace transform gives: \begin{align} \tilde{p}_n(y,t) = \frac{2 \Lambda}{\text{H} \rho C }\int_0^t\int_{-\infty}^{\infty}g(y^\prime,t^\prime)\frac{c_{hy} \exp{\left[-\lambda^2_nc_{hy}(t-t^\prime)\right]}-c_{th} \exp{\left[-\lambda^2_nc_{th}(t-t^\prime)\right]}}{c_{hy}-c_{th}}\sin{\lambda_n y^\prime}dy^\prime dt^\prime. \end{align} \noindent Finally, in the series expansion $\Delta P(y,t)=\sum_{n=1}^\infty \tilde{p}_n(t)\sin{\lambda_n y}$ we move the summation under the integral sign and obtain: \begin{align} \begin{aligned} \Delta P(y,t) = \frac{2 \Lambda}{\text{H} \rho C }\int_0^t\int_{-\infty}^{\infty}&g(y^\prime,t^\prime)\\ &\sum_{n=1}^\infty\frac{c_{hy} \exp{\left[-\lambda^2_nc_{hy}(t-t^\prime)\right]}-c_{th} \exp{\left[-\lambda^2_nc_{th}(t-t^\prime)\right]}}{c_{hy}-c_{th}}\sin{\lambda_n y}\sin{\lambda_n y^\prime}dy^\prime dt^\prime \end{aligned} \label{app E: eq: bounded Green's function Kernel press}. \end{align} \noindent We recognize the term in the second line of equation \eqref{app E: eq: bounded Green's function Kernel press} as the Green's function kernel of the coupled pressure diffusion partial differential equation. This expression has the added advantage that the influence of the thermal load on the pressure solution $\Delta P(y,t)=P(y,t)-P_0$ is straightforward to read off.
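The Laplace inversion above can be verified mode by mode. For a single eigenmode forced by $T_n(t)=e^{-at}$ with $a=\lambda_n^2 c_{th}$ and $b=\lambda_n^2 c_{hy}$, Duhamel's principle for $\tilde{p}_n^\prime+b\,\tilde{p}_n=T_n^\prime$ (including the jump at $t=0$) must reproduce the bracketed combination of exponentials. A short check with hypothetical parameter values:

```python
import numpy as np

# Mode-wise check of the Laplace inversion (hypothetical values):
# for T_n(t) = exp(-a t), the solution of p' + b p = dT_n/dt with the
# impulsive start p(0+) = T_n(0+) = 1 must equal
#   p(t) = (c_hy exp(-b t) - c_th exp(-a t)) / (c_hy - c_th).
c_th, c_hy, lam, t_obs = 1.0, 4.0, np.pi, 0.05
a, b = lam**2 * c_th, lam**2 * c_hy

# Duhamel form: p(t) = exp(-b t) * T_n(0) + int_0^t exp(-b (t-s)) T_n'(s) ds
s = np.linspace(0.0, t_obs, 20001)
integrand = np.exp(-b * (t_obs - s)) * (-a) * np.exp(-a * s)
h = s[1] - s[0]
quad = h * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])
p_num = np.exp(-b * t_obs) + quad

p_exact = (c_hy * np.exp(-b * t_obs) - c_th * np.exp(-a * t_obs)) / (c_hy - c_th)
```

The trapezoidal quadrature of the Duhamel convolution agrees with the closed-form combination $(c_{hy}e^{-bt}-c_{th}e^{-at})/(c_{hy}-c_{th})$ to well below $10^{-6}$.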
Noticing that for a general diffusion problem on a bounded domain under homogeneous Dirichlet boundary conditions the Green's function kernel is given by: \begin{align} G_{X11}(y,t;y^\prime,t^\prime,c) = \frac{2}{\text{H}}\sum_{n=1}^{\infty}\exp{\left[-\lambda^2_nc(t-t^\prime)\right]}\sin{\lambda_n y}\sin{\lambda_n y^\prime}, \end{align} the Green's function kernel of the coupled pressure differential equation on the bounded domain is then given as: \begin{align} G_{X11}(y,t;y^\prime,t^\prime,c_{th},c_{hy}) =\frac{ c_{hy}G_{X11}(y,t;y^\prime,t^\prime,c_{hy})-c_{th}G_{X11}(y,t;y^\prime,t^\prime,c_{th})}{c_{hy}-c_{th}}. \end{align} \noindent Finally, the pressure solution can be given as: \begin{align} \Delta P(y,t)=P(y,t)-P_0=\frac{\Lambda}{\rho C}\int_0^t\int_{-\infty}^\infty g(y^\prime,t^\prime)G_{X11}(y,t;y^\prime,t^\prime,c_{th},c_{hy})dy^\prime dt^\prime. \end{align} This result agrees with the formula provided in \cite{Lee1987,Rice2006} for the unbounded domain. \subsection{Moving thermal load, coupled pore fluid pressure Green's kernel for an unbounded domain.} \noindent Here we present the derivation of the Green's function kernel of the coupled pressure diffusion equation for an unbounded domain under a moving thermal load. Note that the Green's function kernel is independent of the type of loading (stationary or moving); it depends only on the form of the differential operator and the boundary conditions. What differs here is the velocity dependence, since we want to connect the pressure evolution not with the stationary Green's function but with the moving Dirac thermal load, which can be written as $g(y,t)=\frac{\dot{\delta}}{\rho C}\tau(t)\delta(y-vt)$. In essence we only need to prescribe the velocity dependence $y^\prime=f(v,t^\prime)$ in the Green's function kernel for the unbounded domain under Dirichlet conditions, $G_{X00}(y,t;y^\prime,t^\prime,c_{hy},c_{th})$.
We provide a full description and then compare the results. The coupled system of temperature and pore fluid pressure diffusion equations in the unbounded domain is given by: \begin{align} &\frac{\partial \Delta T(y,t)}{\partial t}-c_{th}\frac{\partial^2 \Delta T(y,t)}{\partial y^2}=\frac{\dot{\delta}}{\rho C}\tau(t)\delta(y-vt),\;-\infty<y<\infty,\;0<t<\infty,\nonumber\\ &\frac{\partial \Delta P(y,t)}{\partial t}-c_{hy}\frac{\partial^2 \Delta P(y,t)}{\partial y^2}=\Lambda\frac{\partial \Delta T(y,t)}{\partial t},\;-\infty<y<\infty,\;0<t<\infty,\nonumber\\ &\Delta T(y,0)=0,\nonumber\\ &\lim{\Delta T(y,t)}\|_{y=-\infty,y=\infty}=0,\nonumber\\ &\Delta P(y,0)=0,\nonumber\\ &\lim{\Delta P(y,t)}\|_{y=-\infty,y=\infty}=0. \label{app E: eq: system_of_pdes} \end{align} \noindent To account for the moving load we perform a change of variables on the original system \eqref{app E: eq: system_of_pdes}, setting $\xi=y-vt,\;\eta=t$, so that we attach a frame of reference to the moving load. In this case, by suitable application of the chain rule, we can write: \begin{align} &\frac{\partial \Delta T(\xi,\eta)}{\partial \eta}-v\frac{\partial \Delta T}{\partial \xi}-c_{th}\frac{\partial^2 \Delta T(\xi,\eta)}{\partial \xi^2}=\frac{\dot{\delta}}{\rho C}\tau(\eta)\delta(\xi),\;-\infty<\xi<\infty,\;0<\eta<\infty,\nonumber\\ &\frac{\partial \Delta P(\xi,\eta)}{\partial \eta}-v\frac{\partial \Delta P(\xi,\eta)}{\partial \xi}-c_{hy}\frac{\partial^2 \Delta P(\xi,\eta)}{\partial \xi^2}=\Lambda\frac{\partial \Delta T(\xi,\eta)}{\partial \eta},\;-\infty<\xi<\infty,\;0<\eta<\infty,\nonumber\\ &\Delta T(\xi,0)=0,\nonumber \end{align} \begin{align} &\lim{\Delta T(\xi,\eta)}\|_{\xi=-\infty,\xi=\infty}=0,\nonumber\\ &\Delta P(\xi,0)=0,\nonumber\\ &\lim{\Delta P(\xi,\eta)}\|_{\xi=-\infty,\xi=\infty}=0. \label{app E: eq: system_of_pdes_mov} \end{align} \noindent Applying a Fourier transform in space and a Laplace transform in time on the system of partial differential equations \eqref{app E: eq: system_of_pdes_mov}
we obtain: \begin{align} &s \tilde{T}(k,s)-v(ik)\tilde{T}(k,s)-c_{th}(ik)^2\tilde{T}(k,s)=\frac{\dot{\delta}}{\rho C}\tau(s),\nonumber\\ &s \tilde{P}(k,s)-v(ik)\tilde{P}(k,s)-c_{hy}(ik)^2\tilde{P}(k,s)=\Lambda s \tilde{T}(k,s).\label{app E: eq: system_of_trans} \end{align} \noindent Solving the above algebraic system \eqref{app E: eq: system_of_trans} we obtain: \begin{align} &\tilde{T}(k,s) = \frac{\dot{\delta}}{\rho C}\frac{\tau(s)}{s-v(ik)+c_{th}k^2},\\ &\tilde{P}(k,s) = \frac{\Lambda\dot{\delta}\tau(s)}{\rho C}\frac{s}{(s-v(ik)+c_{th}k^2)(s-v(ik)+c_{hy}k^2)}. \end{align} \noindent Inverting the Laplace and then the Fourier transform yields: \begin{align} &\Delta T(y,t)= \frac{\dot{\delta}}{\rho C }\int_0^t\frac{\tau(t^\prime)}{2\sqrt{\pi c_{th}(t-t^\prime)}}\exp{\left[-\frac{(y-vt^\prime)^2}{4 c_{th}(t-t^\prime)}\right]} dt^\prime,\\ &\begin{aligned} \Delta P(y,t)= \frac{\Lambda\dot{\delta}}{\rho C (c_{hy}-c_{th})}\int_0^t&\frac{\tau(t^\prime)}{2\sqrt{\pi(t-t^\prime)}}\left(\sqrt{c_{hy}}\exp{\left[-\frac{(y-vt^\prime)^2}{4 c_{hy}(t-t^\prime)}\right]}\right.\\ &\left.-\sqrt{c_{th}}\exp{\left[-\frac{(y-vt^\prime)^2}{4 c_{th}(t-t^\prime)}\right]} \right)dt^\prime. \end{aligned} \end{align} \noindent By inspection we note that these are the same expressions as the ones presented in \eqref{ch: 6 eq: G_X00_kernel}, where $y^\prime$ is replaced by $y^\prime=v t^\prime$ and $c=c_{th}$ or $c=c_{hy}$, respectively. \let\clearpage\relax \section{Collocation Methodology \label{Appendix H}} \subsection{Regular kernels} \noindent In order to apply the collocation methodology to the linear Volterra integral equation of the second kind, \eqref{ch 6: eq: normalized_integral_equation}, we follow the approach described in \cite{tang2008spectral}.
The integral equation is given as: \begin{align} \bar{\tau}(\bar{t}) = 1-C\int_0^{\bar{t}}\bar{\tau}(\bar{t}^\prime)\bar{G}^\star(\bar{y},\bar{t};\bar{y}^\prime,\bar{t}^\prime)\|_{\bar{y}=\bar{y}^\prime} d\bar{t}^\prime,\; \bar{t}\in [0,\text{T}\frac{c_{th}}{\text{H}^2}], \label{app: H eq: integral equation1} \end{align} \noindent where $\bar{\text{T}}=\text{T}\frac{c_{th}}{\text{H}^2}$ is the final normalized time, and $\bar{\tau}(\bar{t})$ is the unknown function. We begin by performing a change of variables from $\bar{t}\in [0,\bar{\text{T}}]$ to $\bar{z}\in [-1,1]$: \begin{align*} \bar{t}=\bar{\text{T}}\frac{1+\bar{z}}{2},\;\bar{z}=\frac{2\bar{t}}{\bar{\text{T}}}-1. \end{align*} \noindent The Volterra integral equation can then be written: \begin{align} U(\bar{z}) = 1-C\int_0^{\bar{\text{T}}\frac{1+\bar{z}}{2}}\bar{\tau}(\bar{t}^\prime)\bar{G}^\star(\bar{y},\bar{\text{T}}\frac{1+\bar{z}}{2};\bar{y}^\prime,\bar{t}^\prime)\|_{\bar{y}=\bar{y}^\prime}d\bar{t}^\prime,\; \bar{z}\in [-1,1], \label{app: H eq: integral equation2} \end{align} \noindent where $U(\bar{z})=\bar{\tau}(\bar{\text{T}}\frac{1+\bar{z}}{2})$. In order for the collocation solution to converge exponentially, both the integral equation \eqref{app: H eq: integral equation2} and the integral inside it must be expressed over the same interval $[-1,1]$. To this end, we first change the integration bounds from $\bar{t}^\prime\in[0,\bar{\text{T}}\frac{1+\bar{z}}{2}]$ to $s\in [-1,\bar{z}]$: \begin{align} U(\bar{z}) = 1-C\int_{-1}^{\bar{z}}K(\bar{y},\bar{z};\bar{y}^\prime,s)\|_{\bar{y}=\bar{y}^\prime}U(s)ds,\; \bar{z}\in [-1,1], \label{app: H eq: integral equation3} \end{align} \noindent where $K(\bar{y},\bar{z};\bar{y}^\prime,s)=\frac{\bar{\text{T}}}{2}\bar{G}^\star(\bar{y},\frac{\bar{\text{T}}}{2}(\bar{z}+1);\bar{y}^\prime,\frac{\bar{\text{T}}}{2}(s+1))$.
Next, we set the $N+1$ collocation points $\bar{z}_i\in [-1,1]$ and corresponding weights $\omega_i$ according to the Clenshaw-Curtis quadrature formula. The integral equation \eqref{app: H eq: integral equation3} must hold at each $\bar{z}_i$: \begin{align} U(\bar{z}_i) = 1-C\int_{-1}^{\bar{z}_i}K(\bar{y},\bar{z}_i;\bar{y}^\prime,s)\|_{\bar{y}=\bar{y}^\prime}U(s)ds,\; i\in [0,N]. \label{app: H eq: integral equation4} \end{align} \noindent The main hindrance in solving equation \eqref{app: H eq: integral equation4} accurately is the calculation of the integral with variable integration bounds. For small values of $\bar{z}_i$, the quadrature provides little information about $U(s)$. We handle this difficulty with yet another change of variables, mapping the integration variable $s\in [-1,\bar{z}_i]$ to $\theta\in [-1,1]$ via the transformation: \begin{align} s(\bar{z},\theta)=\frac{1+\bar{z}}{2}\theta+\frac{\bar{z}-1}{2},\;\theta \in [-1,1]. \end{align} Thus, equation \eqref{app: H eq: integral equation4} is transformed into: \begin{align} U_i+C\frac{1+\bar{z}_i}{2}\sum\limits_{p=0}^N K(\bar{y},\bar{z}_i;\bar{y}^\prime,s(\bar{z}_i,\theta_p))\|_{\bar{y}=\bar{y}^\prime}U(s(\bar{z}_i,\theta_p))\omega_p=1,\; i\in [0,N]. \end{align} \noindent In order to apply the collocation method according to the Clenshaw-Curtis quadrature, we express the solution $U(s(\bar{z}_i,\theta_p))$ with the help of the Lagrange interpolation polynomials $P_j$ as a series, $U(s(\bar{z}_i,\theta_p)) \sim \sum \limits_{j=0}^N{}^\prime U_jP_j(s(\bar{z}_i,\theta_p))$, so that: \begin{align} U_i+C\frac{1+\bar{z}_i}{2}\sum\limits_{j=0}^N{}^\prime U_j\left(\sum\limits_{p=0}^N K(\bar{y},\bar{z}_i;\bar{y}^\prime,s(\bar{z}_i,\theta_p))\|_{\bar{y}=\bar{y}^\prime}P_j(s(\bar{z}_i,\theta_p))\omega_p\right)=1,\; i\in [0,N], \end{align} \noindent where $P_j(s(\bar{z}_i,\theta_p))$ and $\sum\limits_{j=0}^N{}^\prime$ have been defined in the main text (see equations \eqref{ch: 6 eq: barycentric formula} and \eqref{ch: 6 eq: mod sum}). In order to ensure an exponential degree of convergence, we choose the set of Gauss-Chebyshev quadrature points $\{\theta_j\}_{j=0}^N$, used for the numerical evaluation of the integral, to coincide with the set of collocation points $\{\bar{z}_j\}_{j=0}^N$, where the integral equation is evaluated. Rearranging the terms and applying Einstein's summation over repeated indices yields the system of algebraic equations: \begin{align} (\delta_{ij}+A_{ij})U_j=g(\bar{z}_i), \end{align} \noindent where $A_{ij} =C\frac{1+\bar{z}_i}{2}\sum\limits_{p=0}^N K(\bar{y},\bar{z}_i;\bar{y}^\prime,s(\bar{z}_i,\theta_p))\|_{\bar{y}=\bar{y}^\prime}P_j(s(\bar{z}_i,\theta_p))\omega_p$, $g(\bar{z}_i)=1$, and the $U_j$ are the unknown quantities. Because Lagrange interpolation was assumed, the interpolation coefficients $U_j$ are also the values of the interpolant at the nodes $\bar{z}_j$. \subsection{Singular kernels} \noindent When the kernel of the integral equation \eqref{ch 6: eq: normalized_integral_equation} involves a singularity (see equation \eqref{ch 6: eq: normalized_unbounded_kernel}), we cannot use the Clenshaw-Curtis quadrature rule in its original form, because the quadrature requires the value of the function at a position where the kernel evaluates to infinity. For this reason a different quadrature strategy needs to be implemented. Here, based on the work of \cite{tang2008spectral}, we apply the Gauss-Chebyshev quadrature rule. This quadrature rule involves the values of the function at the zeros $\{\bar{z}^\prime_i\}$ of the $N$-th degree Chebyshev polynomial of the first kind. The quadrature can then be calculated successfully, because the new set of integration points, $\{\bar{z}^\prime_i\}$, does not include the ends of the interval $[-1,1]$.
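Before detailing the singular-kernel treatment, the regular-kernel scheme above can be summarized in a short sketch. It is illustrative only: Gauss-Legendre nodes and weights (an assumption, standing in for the Clenshaw-Curtis rule) are used for the inner quadrature, and the hypothetical test kernel $K\equiv 1$ on $[0,T]$ is chosen so that the exact solution $e^{-t}$ of $u(t)=1-\int_0^t u(t^\prime)\,dt^\prime$ is available for comparison:

```python
import numpy as np

# Illustrative sketch of the regular-kernel collocation scheme above.
# Assumptions: Gauss-Legendre nodes/weights stand in for the Clenshaw-Curtis
# rule, and the hypothetical test kernel K = 1 on [0, T] is used, for which
# u(t) = 1 - int_0^t u(t') dt' has the exact solution u(t) = exp(-t).
N = 16
z = -np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev-Lobatto collocation points
theta, w = np.polynomial.legendre.leggauss(N + 1)  # inner quadrature nodes and weights

def lagrange_basis(nodes, x):
    """P_j(x) for every node j, evaluated at the points x (naive product form)."""
    L = np.ones((len(x), len(nodes)))
    for j in range(len(nodes)):
        for k in range(len(nodes)):
            if k != j:
                L[:, j] *= (x - nodes[k]) / (nodes[j] - nodes[k])
    return L

T = 2.0
A = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    s = 0.5 * (1 + z[i]) * theta + 0.5 * (z[i] - 1)  # s(z_i, theta) in [-1, z_i]
    jac = 0.5 * (1 + z[i]) * (T / 2)                 # d(theta) -> dt' Jacobian
    A[i, :] = jac * (w @ lagrange_basis(z, s))       # kernel K = 1 absorbed here

U = np.linalg.solve(np.eye(N + 1) + A, np.ones(N + 1))
t_nodes = T * (1 + z) / 2
err = np.max(np.abs(U - np.exp(-t_nodes)))           # spectral accuracy expected
```

For the actual equation only the function implementing the kernel $K$ changes; the accuracy observed here mirrors the exponential convergence discussed above.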
However, since the Chebyshev polynomials of the first kind are used, we need to take into account the weight function $w(\bar{z})=\frac{1}{\sqrt{1-\bar{z}^2}}$ with respect to which they are orthogonal on the interval $[-1,1]$, namely: \begin{align} \label{orthogonality_condition} \int_{-1}^1\frac{T_{i}(\bar{z})T_{j}(\bar{z})}{\sqrt{1-\bar{z}^2}}d\bar{z}=\begin{cases}&\pi,\;i=j=0\\ &\frac{\pi}{2},\;i=j\neq 0\\ &0,\;i\neq j\end{cases}. \end{align} \noindent Moreover, due to the change in the evaluation set, $\{\bar{z}^\prime_i\}$, the formula for the Lagrange interpolation becomes: \begin{align} U(s(\bar{z},\theta_p))=\sum\limits_{j=0}^NU(\bar{z}^\prime_j)F_j(s(\bar{z},\theta_p)), \end{align} \noindent where $F_j(s(\bar{z},\theta_p))$ are the Lagrange cardinal polynomials. We note that the formula for the Lagrange cardinal polynomials changes due to the change of the interpolation nodes $\{\bar{z}^\prime_j\}^N_{j=0}$. Taking advantage of the orthogonality condition, the cardinal polynomials $F_j$ are given by: \begin{align} F_j(s(\bar{z}^\prime_i,\theta_p))=\sum_{n=0}^N\alpha_{n,j}T_{n}(s(\bar{z}^\prime_i,\theta_p)), \end{align} \noindent where, again due to orthogonality, $\alpha_{n,j}$ is given by: \begin{align} \alpha_{n,j}=T_{n}(\bar{z}^\prime_j)\omega_j/\gamma_n,\\ \gamma_n=\begin{cases}&\pi,\;n=0\\ &\frac{\pi}{2},\;n\neq 0 \end{cases} \end{align} \noindent The final discretized form of the integral equation \eqref{ch 6: eq: normalized_integral_equation} is then given by: \begin{align} U_i+C\frac{1+\bar{z}_i}{2}\sum\limits_{j=0}^NU_j\left(\sum\limits_{p=0}^N K(\bar{y},\bar{z}_i;\bar{y}^\prime,s(\bar{z}_i,\theta_p))\|_{\bar{y}=\bar{y}^\prime}F_j(s(\bar{z}_i,\theta_p))\sqrt{1-\theta^2_p}\omega_p\right)=1,\; i\in [0,N].
\end{align} \noindent We note here that the term $\sqrt{1-\theta^2_p}$ compensates for the weight function present in the orthogonality condition \eqref{orthogonality_condition}. \section{Proof that the maximum of pressure and the position of the yielding plane (thermal load) coincide}\label{Appendix I} \noindent In this appendix we present the central part of our argument concerning the applicability of a traveling strain localization mode. \subsection{Proof of pressure maxima for the traveling strain localization on a bounded domain.\label{app I: sec: 1}} \noindent In \ref{ch: 6 sec: BD_loading_presentation} we argued that, in order for the strain localization mode to be applicable to the problem at hand, it needs to satisfy the equations of equilibrium, and the prescribed localization motion, i.e. the position of the yielding plane inside the layer, must coincide with the maximum of the pressure profile inside the layer at all times \cite[see][]{Rice2006}. To this end, provided that the function in question is sufficiently smooth except possibly at a finite number of points, we use the first and second order derivatives to determine the position and the kind of the extrema of the unknown function. We start our discussion with the case of a traveling localization on a bounded mathematical domain. In this case the derived kernel of the coupled pressure temperature diffusion equation is bounded and smooth at all times $t\in[0,\infty)$.
The pressure profile at all times $t$, for a traveling strain localization mode $\dot{\gamma}(x,t)=\dot{\delta}\delta_{Dirac}(x-vt)$ (and thermal load $\frac{1}{\rho C}\tau(t)\dot{\delta}\delta_{Dirac}(x-vt)$) at position $x=vt$, is given by the relation: \begin{align} p(x,t)-p_0 =& \frac{2\Lambda \dot{\delta}}{\text{H}\rho C(c_{hy}-c_{th})}\int_0^t\tau(t^\prime)\sum\limits_{m=0}^{\infty} K(x,t,t^\prime,m)dt^\prime \label{app I: eq: pressure} \end{align} \noindent where $K(x,t,t^\prime,m)= \left(c_{hy}\exp{\left(-m^2\pi^2c_{hy}\frac{t-t^\prime}{\text{H}^2}\right)}-c_{th} \exp{\left(-m^2\pi^2c_{th}\frac{t-t^\prime}{\text{H}^2}\right)}\right)\sin{\left(m\pi\frac{x}{\text{H}}\right)}\sin{\left(m\pi\frac{vt^\prime}{\text{H}}\right)}$. The first derivative of equation \eqref{app I: eq: pressure} with respect to $x$ is given by: \begin{align} \frac{\partial p(x,t)}{\partial x} =\frac{2\Lambda \dot{\delta}}{\text{H}\rho C(c_{hy}-c_{th})}\int_0^t\tau(t^\prime)\sum\limits_{m=0}^\infty\frac{\partial K(x,t,t^\prime,m)}{\partial x}dt^\prime \end{align} \noindent where $\frac{\partial K(x,t,t^\prime,m)}{\partial x}=\frac{m\pi}{\text{H}}\left(c_{hy}\exp{\left(-m^2\pi^2c_{hy}\frac{t-t^\prime}{\text{H}^2}\right)} -c_{th}\exp{\left(-m^2\pi^2c_{th}\frac{t-t^\prime}{\text{H}^2}\right)}\right)\cos{\left(m\pi\frac{x}{\text{H}}\right)}\sin{\left(m\pi\frac{vt^\prime}{\text{H}}\right)}$. We note that for $\frac{\partial p}{\partial x}=0$ we need to find $x$ such that: \begin{align} \Pi(x,t^\prime,m) = \text{Re}\left[\cos{\left(m\pi\frac{x}{\text{H}}\right)}\sin{\left(m\pi\frac{vt^\prime}{\text{H}}\right)}\right]=0. \end{align} \noindent We write the above product with the help of the exponential function.
\begin{align} \Pi(x,t^\prime,m) &= \text{Re}\left[\frac{\exp{\left(im\pi\frac{x}{\text{H}}\right)}+\exp{\left(-im\pi\frac{x}{\text{H}}\right)}}{2}\,\frac{\exp{\left(im\pi\frac{vt^\prime}{\text{H}}\right)}-\exp{\left(-im\pi\frac{vt^\prime}{\text{H}}\right)}}{2i}\right]=0,\\ \Pi(x,t^\prime,m) &=\text{Re}\left[\frac{1}{4i}\left(\exp{\left(im\pi\frac{x+vt^\prime}{\text{H}}\right)}+\right. \left.\exp{\left(-im\pi\frac{x-vt^\prime}{\text{H}}\right)}\right.\right.\nonumber\\ &\left.\left.-\exp{\left(im\pi\frac{x-vt^\prime}{\text{H}}\right)}-\exp{\left(-im\pi\frac{x+vt^\prime}{\text{H}}\right)}\right)\right]=0. \end{align} \noindent Transforming the above sum back into trigonometric form with the help of Euler's relations and using the trigonometric identities between arguments of opposite sign, we arrive at: \begin{align} \cos{\left(m\pi\frac{x-vt^\prime}{\text{H}}\right)}-\cos{\left(m\pi\frac{-x+vt^\prime}{\text{H}}\right)}=0, \end{align} \noindent which is satisfied $\forall t^\prime\in[0,\infty)$ when $x=vt^\prime$. Thus $p(x,t)-p_0$ presents an extremum at the position of the traveling yielding plane. We still need to prove that this extremum is a maximum for every $t>0$. To this end we take the second derivative of equation \eqref{app I: eq: pressure}: \begin{align} \frac{\partial^2 p(x,t)}{\partial x^2} =\frac{2\Lambda \dot{\delta}}{\text{H}\rho C(c_{hy}-c_{th})}\int_0^t\tau(t^\prime)\sum\limits_{m=0}^\infty\frac{\partial^2 K(x,t,t^\prime,m)}{\partial x^2}dt^\prime,\label{app I: eq: pressure sec_der} \end{align} \noindent where $\frac{\partial^2 K(x,t,t^\prime,m)}{\partial x^2}=-\frac{m^2\pi^2}{\text{H}^2}\left(c_{hy}\exp{\left(-m^2\pi^2c_{hy}\frac{t-t^\prime}{\text{H}^2}\right)} -c_{th}\exp{\left(-m^2\pi^2c_{th}\frac{t-t^\prime}{\text{H}^2}\right)}\right)\sin{\left(m\pi\frac{x}{\text{H}}\right)}\sin{\left(m\pi\frac{vt^\prime}{\text{H}}\right)}$.
Evaluating the second derivative \eqref{app I: eq: pressure sec_der} at the position $x=vt^\prime$, $\frac{\partial^2 p(x,t) }{\partial x^2}\big|_{x=vt^\prime}$, we see that for $t>0$ equation \eqref{app I: eq: pressure sec_der} is always negative. This means that the extremum at $x=vt^\prime$ is also a maximum. \subsection{Proof of pressure maxima for the traveling strain localization on an unbounded domain. \label{app I: sec: 2}} \noindent In the case of the unbounded domain, such a derivation is not trivial, due to the form of the coupled temperature and pressure kernel, $G_{T,P}(x,t-t^\prime,c_{th},c_{hy})$ \cite[see][]{Lee1987}, which results in a weakly singular convolution integral. While we can ensure the convergence of the original integral in a Riemann sense and verify the position of its extrema, we cannot directly calculate its second derivative, since the resulting integral corresponds to a hypersingular divergent integral. To answer questions about its convergence we need to apply the notion of Hadamard regularization.\\ \newline \noindent We notice, however, that this question is a special case of the pressure maxima of a traveling strain localization on a bounded domain of height $\text{H}$, when $\text{H}$ tends to $\infty$, see section \ref{app I: sec: 1}. \section{Introduction \label{PartII}} \noindent The results of Part I \citep[see][]{Alexstathas2022a}, concerning the influence of the weakening mechanism of thermal pressurization, diverge spectacularly from the expected behavior based on the model of \cite{Mase1987,Rice2006}. Furthermore, these results indicate that the divergence takes place long before the completion of the seismic slip $\delta$. This holds true for the range of commonly observed seismic slip velocities $\dot{\delta}\in[0.1,1]$ m/s and seismic slip displacements $\delta\in[0.1,1]$ m (see \cite{Harbord2021,Rempe2020}).
In this follow-up paper, Part II, we investigate the reasons for this divergence between the theoretical results and their implications for the appreciation of thermal pressurization as one of the main weakening mechanisms during coseismic slip. Our investigation leads us to extend the existing model of slip on a mathematical plane by relaxing its key assumptions.\\ \newline \noindent In Figure \ref{ch: 6 fig: classical_vs_micromorphic_compare} we compare the frictional response of the micromorphic model used in Part I \citep[see][]{Alexstathas2022a} with the response of the established model for the limiting cases of uniform shear \cite{lachenbruch1980frictional} and shear on a mathematical plane \cite{Mase1984,Rice2006}. In particular, the two limiting responses of the established model depend on the width of the region of accumulating strain localization inside the fault gouge, which we call the Principal Slip Zone (PSZ). They are characterized respectively by: a) uniform slip across the fault gouge (see \cite{lachenbruch1980frictional}), and b) localization of slip on a mathematical plane (see \cite{Lee1987,Mase1987,rempel2006effects,Rice2006}). We note that while at the initial stages of slip (see inset of Figure \ref{ch: 6 fig: classical_vs_micromorphic_compare}) the response of the micromorphic model lies inside the envelope of the limiting cases, at larger values of slip it diverges, presenting frictional regain and the initiation of frictional oscillations. These results contrast with the strictly monotonic behavior predicted by the limiting cases of uniform slip and slip on a mathematical plane. \begin{figure}[h!] \centering \includegraphics[width=0.45\linewidth]{Solution_Comparison_numerical_detail_expanded_full_annotations_corected_nfit_simple.pdf} \caption{Comparative normalized friction $\bar{\tau}$ versus displacement $\delta$ results.
The purple-square curve presents the frictional response of the established thermal pressurization model in the case of slip on a mathematical plane under isothermal drained boundary conditions lying at infinity \protect{\cite{Mase1987,Rice2006}}. The yellow-circle curve presents the frictional response of the established model when uniform slip occurs under adiabatic undrained boundary conditions for a fault gouge of height H=1 mm under shear velocity V=1 m/s \cite{lachenbruch1980frictional}. The black-triangle line corresponds to the frictional response of the micromorphic model of Part I \cite{Alexstathas2022a} for the same fault gouge under isothermal drained boundary conditions. For small values of slip, $\delta\leq 10$ mm, the response of the micromorphic model lies between the two limiting cases; however, it diverges as seismic slip $\delta$ accumulates.} \label{ch: 6 fig: classical_vs_micromorphic_compare} \end{figure} \newline \noindent We note here that the limiting cases are predicted by the established model of thermal pressurization under three important assumptions (see \cite{lachenbruch1980frictional,Mase1987,Rice2006}): First, the thickness of the yielding region, which corresponds to the PSZ, coincides with the thickness of the fault gouge. Prescribing the thickness, and therefore the shape, of the plastic strain profile essentially decouples the mechanical and thermo-hydraulic components of the coupled THM problem (see \cite{Mase1984}). Secondly, the variability between the thermal and hydraulic parameters of the gouge and the surrounding rock is assumed to be small, and thus the thermo-hydraulic boundaries for the coupled THM problem lie at infinity. In essence, the change of hydrothermal parameters between the fault gouge and the surrounding rock is neglected.
Lastly, the position of the heat source due to the dissipation inside the PSZ remains stationary inside the fault and coincides with the position of the fault gouge.\\ \newline \noindent These assumptions, however, are not representative of observations. We know from laboratory experiments and in situ observations that the fault gouge has a finite thickness of the order of some millimeters, and that it does not deform in a uniform manner (see \cite{myers2004evolution,Brantut2008}). In fact, inside the fault gouge, the principal slip zone (PSZ) is a region of finite thickness of the order of some micrometers, depending on the geomaterial's microstructure (see \cite{Muhihaus1988,sibson1977fault}). In this configuration the fault gouge and the region that accumulates the majority of the plastic deformation inside it, the PSZ, do not coincide. Furthermore, one needs to acknowledge that the frictional response inside the fault gouge depends on the ratio of the thermal to hydraulic diffusivity of the fault gouge and the surrounding rock. In particular, we know from the works of \cite{aydin2000fractures,passelegue2014influence,yao2016crucial} that the hydraulic and thermal diffusivities of the gouge are smaller than those of the surrounding rock by 1 to 2 orders of magnitude. This large difference between the parameters of the fault system needs to be accounted for. Finally, there is experimental evidence of fault gouges that are thicker than expected according to the existing models of \cite{Platt2014} and \cite{Rice2006}, and of closely adjacent fault gouges, see \cite{nicchio2018development}, whose existence can be linked to the possibility that the position of the PSZ is not stationary, but rather travels inside the fault gouge, possibly expanding the latter in the process. \\ \newline \noindent There is also theoretical evidence considering the possibility of a traveling PSZ inside the fault gouge.
In this case the preferred mode of strain localization might not be of the divergence kind described in \cite{Rice1975}; rather, it can be a ``flutter'' type instability, corresponding to a traveling strain localization profile (PSZ). According to the Lyapunov theory of stability (see \cite{lyapunov1893problem,Brauer1969}), a traveling strain localization (PSZ) is manifested by the appearance of a Lyapunov exponent with a nonzero imaginary part. The transition from a stationary instability of the divergence type to a flutter (traveling) instability is called a Hopf bifurcation. For more details we refer to section \ref{Part I-ch: 5 Traveling_Instabilities} of Part I, \cite{Alexstathas2022a}, where we have shown numerically that, for stress states common in faults, traveling instabilities are present in the linear stability analysis for Cosserat continua under strain softening and apparent softening due to multiphysical couplings. In the broader context of a classical continuum under hydraulic couplings, \cite{Benallal2003} have shown that traveling instabilities are also present. It is not yet clear whether this is the case for a classical continuum under THM couplings (see \cite{benallal2005localization}).\\ \newline \noindent In this paper we provide an explanation concerning the differences between the numerical results of Part I (see \cite{Alexstathas2022a}) and the frictional response predicted by the limiting cases of the classical model \cite{lachenbruch1980frictional,Mase1987,Rice2006}. To this end we expand the classical model of thermal pressurization described in \cite{Rice2006}, and extend its applicability to cases of bounded fault gouges and traveling strain localization modes of the PSZ. We use the same thermal, hydraulic and geometric parameters for the fault gouge as in the model of Part I, \cite{Alexstathas2022a}.
Next, we collapse the PSZ, where yielding and frictional heating take place, onto a mathematical plane by employing the same formalism used in \cite{Lee1987,Mase1987,Rice2006}. We further assume that the yield (dissipation) obeys a Coulomb friction law with the Terzaghi normal effective stress. The mechanical behavior of the layer outside the yielding plane is ignored, and for the purposes of this model it can be considered as rigid. This allows us to avoid solving a BVP for the mechanical part, which significantly simplifies the problem (cf. \cite{Alexstathas2022a}).\\ \newline \noindent The decision to collapse the PSZ onto a mathematical plane can be justified based on the results of Part I (see sections \ref{Part I-ch: 5 sec: Two_step_procedure}, \ref{Part I-ch: 5 viscosity_reference} and Figures \ref{Part I-ch: 5 fig: tau_u_velocity_compare-fit}, \ref{Part I-ch: 5 fig: R_bstar_compare}). These lead us to the observation that it is the hydraulic and thermal parameters of the fault that mainly affect thermal pressurization. We note, however, that the Cosserat radius, which is a parameter connected with the grain size and the material properties of the granular medium, is still an indispensable internal length for the numerical analyses of Part I because: a) it assures the mesh independence of the numerical results, and b) it provides a finite localization width over which frictional heating takes place. However, for the analyses performed in this part (Part II), the introduction of the Dirac delta distribution prescribing the profile of the plastic strain rate, and thus decoupling the mechanical and thermo-hydraulic components of the problem of thermal pressurization, allows us to overcome the problem of incorporating the microstructure into the model, considerably simplifying the analysis. This allows us to elaborate on the effect of the boundary conditions on the frictional response.
Furthermore, this simplification allows us to gain further insight into the problem, because the mechanisms responsible for the principal characteristics of the response of the micromorphic model described in Part I (restrengthening, frictional oscillations) can be isolated and investigated separately, corroborating the numerical results of Part I.\\ \newline \noindent This paper is structured as follows: In section \ref{ch: 6 sec: section 2} we present the basic equations of the classical model of thermal pressurization (see \cite{Mase1987,Mase1984,Rice2006}) and our proposed expansion to the cases of bounded fault gouges and a traveling PSZ, elaborating further on their differences. Our extended model leads to a Volterra integral equation of the second kind, which cannot be solved analytically as in the case of \cite{Rice2006}. In section \ref{ch: 6 sec: section 3}, we solve the Volterra integral equation of the second kind by applying a Spectral Collocation Method with Lagrange basis functions (SCML), based on the work of \cite{Evans1981,Elnagar1996,tang2008spectral}. This is a general spectral method and can handle the challenging task of integrating the Volterra equation under different assumptions of boundary conditions and traveling strain localization modes, when other analytical approaches such as the Laplace transform, the Adomian decomposition method and the Taylor series expansion fail \cite[see][]{wazwaz2011linear,boyd20062000}. However, care is needed in the case of weakly singular kernels.\\ \newline \noindent Having described our model and the solution procedure, we present in section \ref{ch: 6 sec: section 4} a series of applications showcasing the differences with the analyses in \cite{Rice2006}. The applications include the frictional responses of: (a) a stationary PSZ on a bounded isothermal drained domain, (b) a moving PSZ on an unbounded isothermal drained domain, and (c) a moving PSZ on a bounded isothermal drained domain.
The original solution in \cite{Rice2006} is obtained as a special case of the more general solutions presented here and is taken as reference (see Figure \ref{ch: 6 fig: frictional_behavior_stationary_unbounded}).\\ \newline \noindent Finally, in the conclusions we discuss the implications of our results concerning the introduction of a traveling PSZ inside the fault gouge. Our results are important as they describe better the underlying physical process of seismic slip. Moreover, a traveling PSZ naturally enhances the frictional response with oscillations, which in turn can enhance the ground acceleration spectra with higher frequencies, as observed in nature \cite{Aki1967a,BRUNEJN1970}. Furthermore, our results are valuable in the context of experiments for the description of the weakening behavior due to thermal pressurization (see \cite{Badt2020}), for controlling the transition from steady to unsteady slip and for the nucleation of an earthquake (see \cite{Rice1973a,viesca2015ubiquitous}). They are also central in earthquake control, as they provide bounds for the apparent friction coefficient with slip and slip-velocity, enabling modern control strategies (see \cite{Stefanou2019,Stefanou2020,tzortzopoulos2021Thesis}). \section{Thermal pressurization model of slip on a plane \label{ch: 6 sec: section 2}} \subsection{Problem statement} \noindent We already discussed in the introduction that the current model of thermal pressurization, shown in Figure \ref{ch: 6 fig: classical_model_fault_gouge}, assumes that yielding is constrained on a mathematical plane inside the domain, which is modeled based on the Coulomb friction criterion (see equation \eqref{ch: 6 eq: Coulomb friction} below). This plane will also be called the yielding plane in the following. Contrary to \cite{Mase1987,Rice2006}, the yielding plane is not considered stationary inside the domain. Instead, its position $u(t)$ is allowed to change with a velocity $\dot{u}(t)=v(t)$.
Furthermore, we will not only consider isothermal drained boundary conditions lying at infinity. In particular, we will also take into account the case where the fault gouge is bounded, with isothermal drained boundary conditions lying at $y=0$ and $y=\text{H}$. \begin{figure}[h!] \centering \includegraphics[width=0.45\linewidth]{Cauchy_localized_deformation_rigid.pdf} \caption{The established model of thermal pressurization: The values of pressure and temperature are prescribed at infinity. The bodies outside the fault gouge (red color) are considered rigid. Deformation localizes on a mathematical plane and the PSZ coincides with the fault gouge.} \label{ch: 6 fig: classical_model_fault_gouge} \end{figure}\\ \noindent At the yielding region inside the layer, heat is produced due to dissipation, $D$. The thermal source is then described by the plastic work rate, $\dot{D}$, at the position of the failure plane, namely: \begin{align} \dot{D}=\tau(t)\dot{\gamma}^p(y,t). \end{align} \noindent In the above formula, the friction $\tau(t)$, which is the main unknown of the problem, is independent of the position $y$, due to equilibrium considerations along the height of the fault gouge ($\frac{\partial \tau}{\partial y}=0$ in the absence of inertia, see also \cite{Rice2006}). The term $\dot{\gamma}^p(y,t)$ is the plastic strain rate inside the fault gouge. In the established model of thermal pressurization this term is prescribed with the help of a Dirac distribution stationed at the plane of symmetry, $y=0$ (see \cite{Mase1984,Rice2006}). Here we expand the term $\dot{\gamma}^p(y,t)$ to account for a traveling PSZ at position $y=u(t)$ as follows: \begin{align} \dot{\gamma}^p(y,t)=V(t)\delta_{\text{Dirac}}(y-u(t)). \label{ch: 6 plastic_strain_rate_prf} \end{align} \noindent In the case of $u(t)=0$, no traveling takes place and the stationary condition of \cite{Mase1987,Rice2006} is recovered.
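In numerical work the Dirac distribution in \eqref{ch: 6 plastic_strain_rate_prf} is commonly regularized by a narrow mollifier. A minimal sketch (Gaussian regularization with illustrative values of the height, slip rate, mollifier width and PSZ position, none of which are the paper's parameters) checks that the regularized profile still carries the slip rate $V$ across the gouge:

```python
import numpy as np

# Gaussian regularization of gamma_dot^p(y, t) = V * delta_Dirac(y - u(t)).
# H, V, the mollifier width w and the instantaneous position u are all
# illustrative values, not the paper's.
H, V, w = 1.0, 1.0, 0.01
u = 0.4 * H

y = np.linspace(0.0, H, 4001)
gamma_dot = V * np.exp(-(y - u) ** 2 / (2 * w ** 2)) / (w * np.sqrt(2 * np.pi))

# the regularized profile must integrate to the same total slip rate as the Dirac one
total = np.sum((gamma_dot[1:] + gamma_dot[:-1]) * np.diff(y)) / 2.0
```

As $w\to 0$ (with the grid refined accordingly) the profile converges to the distributional form used in the text.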
In the model of \cite{Rice2006} the author considers that the shear rate $V(t)$ applied at the boundaries of the fault gouge is constant, $V(t)=V$. We adopt this assumption, although the seismic slip rate during coseismic slip may vary significantly (see \cite{Rempe2020}). The equations of the established model \cite{Mase1987,Rice2006} are then written as follows: \begin{align} &\tau(t)=f(\sigma_n-P_{max}(t)), \text{ on the yielding plane,} \label{ch: 6 eq: Coulomb friction}\\ &\frac{\partial \Delta T}{\partial t}=c_{th}\frac{\partial^2 \Delta T}{\partial y^2}+\frac{1}{\rho C}\tau(t)V\delta_{\text{Dirac}}(y-u(t)),\\ &\frac{\partial \Delta P}{\partial t}=c_{hy}\frac{\partial^2 \Delta P}{\partial y^2}+\Lambda\frac{\partial \Delta T}{\partial t},\\ &\Delta T\|_{y=0,H}=\Delta P\|_{y=0,H}=0,\\ &\Delta T(y,0)=\Delta P(y,0)=0,\;P(y,t)=\Delta P(y,t)+P_0, \end{align} \noindent where $f$ is the friction coefficient, $c_{th},\;c_{hy}$ are the thermal and hydraulic diffusivities of the layer (with the same values for the fault gouge and the fault walls), respectively, $\rho C$ is the specific heat density of the layer, $V$ is the shearing rate of the layer, assumed here to be constant, and $\Lambda=\frac{\lambda^\star}{\beta^\star}$ is the thermal pressurization coefficient (see Table \ref{ch: 6 table:material_properties}). The symbol $(\|_\alpha)$ indicates the value of the temperature and pressure fields at position $\alpha$ of the model, while $P_0$ is the ambient value of the pore fluid pressure at the boundaries of the fault gouge. We note that if we set the boundary conditions at infinity (i.e. $\|_{\pm\infty}$) the boundary assumptions of \cite{Mase1987} and \cite{Rice2006} are recovered.\\ \newline \noindent We note here that prescribing the position of the yielding plane $y=u(t)$ implies that the position of $P_{max}$ is known and coincides with the position of the thermal load. Thus, the above model is valid only if the position of the maximum pressure $P_{max}(t)$ and the yielding plane coincide.
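Since the coupled system above is linear once the load path $u(t)$ is prescribed, it can also be integrated directly in time. The following sketch (explicit finite differences with a Gaussian-regularized heat source; all normalized parameter values are illustrative assumptions, not the paper's) checks numerically that the temperature maximum tracks a PSZ traveling at constant velocity:

```python
import numpy as np

# Explicit finite-difference sketch of the coupled T/P diffusion system with a
# traveling, Gaussian-regularized heat source. All values are illustrative.
H, c_th, c_hy, Lam = 1.0, 0.1, 0.05, 0.3
q = 1.0                         # tau(t) * V / (rho * C), taken constant
v, y0, w = 0.5, 0.25, 0.02      # PSZ velocity, initial position, source width

ny = 201
y = np.linspace(0.0, H, ny)
dy = y[1] - y[0]
dt = 0.2 * dy ** 2 / max(c_th, c_hy)   # explicit stability limit, with margin
T = np.zeros(ny)
P = np.zeros(ny)

def lap(f):
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dy ** 2
    return out

t = 0.0
while t < 0.5:
    u = y0 + v * t                                        # current PSZ position
    src = q * np.exp(-(y - u) ** 2 / (2 * w ** 2)) / (w * np.sqrt(2 * np.pi))
    dTdt = c_th * lap(T) + src
    T += dt * dTdt
    P += dt * (c_hy * lap(P) + Lam * dTdt)                # dP source is Lam * dT/dt
    T[0] = T[-1] = P[0] = P[-1] = 0.0                     # isothermal, drained walls
    t += dt

y_peak = y[np.argmax(T)]        # temperature maximum should track the source
```

This kind of direct check is what is meant below by verifying numerically that the pressure and temperature maxima follow the prescribed yielding plane.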
In this case, because the yielding position is prescribed and the plastic strain profile is known, the mechanical behavior is decoupled and the resulting coupled thermo-hydraulic problem described above is linear.\\ \newline \noindent Applying the pore fluid pressure solution (see also equation \eqref{ch: 6 eq: pressure_sol_mod} in \ref{Appendix A}) to the failure criterion finally results in the following Volterra integral equation of the second kind for the determination of the layer's frictional response under constant shearing rate (see \cite{Rice2006,wazwaz2011linear}): \begin{align} &\tau(t)=f(\sigma_n-P_0)-\frac{f\Lambda V}{\rho C(c_{hy}-c_{th})}\int_{0}^t\tau(t^\prime)G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})\|_{y=y^\prime}dt^\prime,\label{Volterra_integral_equation_mod} \end{align} \newline \noindent where $G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})$ is the kernel of the integral equation, which we present further in section \ref{ch: 6 sec: BD_loading_presentation} (see also \cite{cole2010heat}). The kernel indicates the influence of the thermal load applied at position $y^\prime$ and time $t^\prime$ on the pore fluid pressure observed at position $y$ and time $t$. Throughout our analysis we make the assumption that the maximum value of the pore fluid pressure, $P_{max}(t)$, at observation time $t$ lies at the point of application of the thermal load $y^\prime$. This assumption is then verified numerically. Hence, the position of observation of $P_{max}(t)$, $y$, is equal to $y=y^\prime$, and the kernel $G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})$ needs to be calculated at $y=y^\prime$.\\ \newline \noindent We note that the frictional response depends on the strain localization mode and on the boundary conditions applied at the fault gouge. The first influences the form of the thermal load as a function of time and position, while the latter influences the form of the kernel of the coupled linear thermo-hydraulic problem at hand.
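The structure of equation \eqref{Volterra_integral_equation_mod} can be illustrated with a minimal solver sketch for a generic linear Volterra equation of the second kind, $u(t)=g(t)-\int_0^t K(t,s)u(s)\,ds$. The sketch below uses the trapezoidal rule on a regular textbook kernel; it is not the method used in this paper, since the thermal-pressurization kernels are weakly singular at $s=t$ and require the spectral treatment of section \ref{ch: 6 sec: section 3}.

```python
import numpy as np

# Trapezoidal-rule sketch for u(t) = g(t) - int_0^t K(t,s) u(s) ds.
# Illustration only: K and g below are a textbook test case, not the
# thermal-pressurization kernel (which is weakly singular at s = t).
def volterra2_trapezoid(g, K, t):
    n = len(t)
    h = t[1] - t[0]                     # uniform grid assumed
    u = np.empty(n)
    u[0] = g(t[0])
    for i in range(1, n):
        w = np.full(i + 1, h)           # trapezoid weights on t[0..i]
        w[0] = w[-1] = h / 2.0
        known = np.dot(w[:-1], K(t[i], t[:i]) * u[:i])
        u[i] = (g(t[i]) - known) / (1.0 + w[-1] * K(t[i], t[i]))
    return u

# Test case: g = 1, K = 1 gives u' = -u, u(0) = 1, i.e. u(t) = exp(-t).
t = np.linspace(0.0, 1.0, 201)
u = volterra2_trapezoid(lambda t: 1.0, lambda t, s: np.ones_like(s), t)
```

At each step the unknown value appears inside its own quadrature term, which is why it is divided out through the factor $(1+w_iK(t_i,t_i))$; this is the generic "second kind" structure that the collocation method later exploits.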
For the purposes of our analyses we will consider the cases of: 1) an unbounded fault gouge under a) a stationary PSZ, described in \cite{Rice2006}, and b) a PSZ traveling at a constant velocity $v$, and 2) a bounded fault gouge under a) a stationary PSZ, and b) a traveling PSZ whose position is a periodic function of time, i.e. $y^\prime=u(t^\prime )$ (see equation \eqref{ch: 6 plastic_strain_rate_prf} and section \ref{ch: 6 sec: section 4}). The periodic movement of the PSZ is justified on the basis of the numerical analyses presented previously in Part I (see \cite{Alexstathas2022a}, Figures \ref{Part I-ch: 5 fig: tau_u_velocity_compare_1},\ref{Part I-ch: 5 fig: l_dot_gamma_final_1000-T_p_final_1000}). We present the relevant Green's function kernels in section \ref{ch: 6 sec: BD_loading_presentation}. In order to solve the resulting modified Volterra integral equation \eqref{Volterra_integral_equation_mod}, we have employed the collocation quadrature method described in \cite{tang2008spectral}, as explained in section \ref{ch: 6 sec: section 3}. \\ \newline \noindent Having defined the differences between the classical and the extended model of thermal pressurization described in this section, we comment further on the differences between our linear extended model and the one used in the fully nonlinear analyses of Part I, \cite{Alexstathas2022a}. In particular, in Part I a micromorphic model together with THM couplings was used for the determination of the PSZ thickness during coseismic slip. The application of a micromorphic continuum leads to a finite thickness for the PSZ, which guarantees mesh objectivity of the numerical results. Because the thickness of the PSZ is finite, the thermal load applied inside the PSZ is distributed over the PSZ thickness. Furthermore, the finite thickness of the PSZ is a crucial part of the mechanism explaining the appearance of a traveling PSZ inside the fault gouge, as we have argued in Part I.
We further note that the yield criterion employed in the analyses of Part I was a Drucker-Prager yield criterion, while here we make use of a Mohr-Coulomb yield criterion. The use of the Mohr-Coulomb criterion allows us to describe the friction $\tau(t)$ with the help of the normal stress $\sigma_n$ to the yielding plane, instead of the combination of normal stresses required in the case of the Drucker-Prager criterion. \subsection{Cases of Interest \label{ch: 6 sec: BD_loading_presentation}} \noindent We consider four cases for the loading and boundary conditions concerning the evaluation of the fault friction during coseismic slip. We first distinguish between stationary and traveling modes of strain localization, and then we further discriminate between unbounded and bounded domains in order to cover all possible cases. The separation of the fault's frictional response into these categories leads to four different expressions for the Green's function kernel $G^\star(y,t;y^\prime,t^\prime,c_{hy},c_{th})$ in equation \eqref{Volterra_integral_equation_mod}.\\ \newline \noindent Here we will provide the analytical expressions for the kernels to be substituted into equation \eqref{Volterra_integral_equation_mod}. In naming the Green's function kernels we use the subscript naming conventions of \cite{cole2010heat}. Namely, for diffusion in 1D line segment domains the label $X\alpha\beta$ is adopted, where $\alpha,\;\beta$ refer to the left $(y=0)$ and right $(y=\text{H})$ boundaries of the domain respectively.
They can take the values $0$ or $1$, indicating an unbounded domain or a bounded domain under homogeneous Dirichlet boundary conditions respectively.\\ \newline \noindent We begin by introducing the Green's function kernels of the unbounded $X00$ and the bounded $X11$ cases for a 1D diffusion equation under homogeneous Dirichlet boundary conditions.\\ \newline For the unbounded case we use: \begin{align} G_{X00}(y,t;y^\prime,t^\prime,c) = \frac{1}{2\sqrt{\pi c(t-t^\prime)}}\exp{\left[-\frac{(y-y^\prime)^2}{4 c(t-t^\prime)}\right]}. \label{ch: 6 eq: G_X00_kernel} \end{align} \noindent Similarly for the bounded case we use: \begin{align} G_{X11}(y,t;y^\prime,t^\prime,c) = \frac{2}{\text{H}}\sum_{m=1}^{\infty}\exp{\left[-m^2\pi^2c\frac{t-t^\prime}{\text{H}^2}\right]}\sin{\left(m\pi\frac{y}{\text{H}}\right)}\sin{\left(m\pi\frac{y^\prime}{\text{H}}\right)}.\label{ch: 6 Green's_long_co-time} \end{align} \noindent We note here that $c$ can be either $c_{th}$ or $c_{hy}$, depending on the diffusion problem in question. The kernels $G^\star_{X\alpha\beta}(y,y^\prime,t-t^\prime,c_{hy},c_{th})$ of the pressure diffusion problem, for the given strain localization modes and boundary conditions, are given by: \begin{itemize} \item[•] Stationary mode of strain localization \begin{itemize} \item[•]Unbounded domain, $\alpha=0,\;\beta=0,\;y^\prime=0$, \cite[see][]{Rice2006} \begin{align} G^\star_{X00}(y,t;0,t^\prime,c_{hy},c_{th})=c_{hy}G_{X00}(y,t;0,t^\prime,c_{hy})-c_{th}G_{X00}(y,t;0,t^\prime,c_{th}).\label{ch:6 Green's_unbounded_stationary} \end{align} \item[•]Bounded domain, $\alpha=1,\;\beta=1,\;y^\prime=0$ \begin{align} G^\star_{X11}(y,t;0,t^\prime,c_{hy},c_{th}) =\ c_{hy}G_{X11}(y,t;0,t^\prime,c_{hy})-c_{th}G_{X11}(y,t;0,t^\prime,c_{th}).
\end{align} \end{itemize} \item[•] Traveling mode of strain localization \begin{itemize} \item[•]Unbounded domain, $\alpha=0,\;\beta=0,\;y^\prime=u(t^\prime)$: \begin{align} \begin{aligned} G^\star_{X00}(y,t;y^\prime,t^\prime,c_{hy},c_{th})=c_{hy}G_{X00}(y,t;u(t^\prime),t^\prime,c_{hy})-c_{th}G_{X00}(y,t;u(t^\prime),t^\prime,c_{th}). \end{aligned} \end{align} \item[•]Bounded domain, periodic trajectory in time, $\alpha=1,\;\beta=1,\;y^\prime=u(t^\prime)$: \begin{align} \begin{aligned} G^\star_{X11}(y,t;y^\prime,t^\prime,c_{hy},c_{th}) =\ c_{hy}G_{X11}(y,t;u(t^\prime),t^\prime,c_{hy})-c_{th}G_{X11}(y,t;u(t^\prime),t^\prime,c_{th}). \end{aligned}\label{ch:6 Green's bounded periodic trajectory} \end{align} \end{itemize} \end{itemize} \section{Methods for the numerical solution of linear Volterra integral equations of the second kind \label{ch: 6 sec: section 3}} \noindent The solution of linear integral equations of the second kind can be sought with a variety of analytical and numerical methods. From an analytical standpoint, these include methods from operational calculus, namely the Laplace, Fourier or $\mathcal{Z}$-transform \cite[see][]{churchill1972operational,brown2009complex,Mavaleix-Marchessoux2020}, the use of Taylor expansions for the integrand inside the integral, and the method of Adomian decomposition \cite[see][]{wazwaz2011linear,Evans1981}. The case of a stationary yielding mathematical plane described in \cite{Rice2006} can be, and has been, solved analytically by making use of the Laplace transform. These methods rely on the convolution property of the integral in the integral equation to transform it into a simpler algebraic equation. The challenge then lies in the inversion of the relation obtained in the auxiliary (frequency) domain back to the time domain.
However, as the complexity of the Green's function kernels and of the loading function increases due to the introduction of boundary conditions and of different assumptions concerning the trajectory of the shear band along the fault gouge, such an inversion is not always possible analytically. We are then forced to use numerical methods for the solution of the above Volterra integral equation.\\ \newline \noindent The above analytical methods also have their numerical counterparts, with the Discrete Fourier Transform (DFT) being a central part of most numerical solution procedures. However, use of the DFT is most efficient when the integral equation to be solved has the form of a convolution. This is not always the case in our problem. For instance, the kernel described in equation \eqref{ch:6 Green's bounded periodic trajectory} has terms in $(t,\;t^\prime)$ that do not involve their difference $(t-t^\prime)$, and therefore its use in equation \eqref{Volterra_integral_equation_mod} results in the integral term not being a convolution. To handle this difficulty, we will make use of another class of numerical methods, called spectral collocation methods, which solve the integral equation \eqref{Volterra_integral_equation_mod} directly in the time domain. These methods are conceptually easy to use and, since no inversion is required, they are able to handle very general cases of Green's function kernels and loading functions.\\ \newline \noindent In what follows, we will make use of the Spectral Collocation Method with Lagrange basis functions (SCML) for the numerical solution of the integral equation \eqref{Volterra_integral_equation_mod} \cite[see][and section \ref{ch: 6 sec: section 3.2}]{tang2008spectral,Elnagar1996}. The SCML method will be shown to handle both the bounded and unbounded domains, and the cases of stationary versus traveling strain localization.
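For reference, the kernels of equations \eqref{ch: 6 eq: G_X00_kernel} and \eqref{ch: 6 Green's_long_co-time}, together with the pressure kernel $G^\star$, can be sketched numerically as follows. This is a sketch only; the truncation level $M$ of the Fourier series and the default $\text{H}=1$ are assumptions of the snippet.

```python
import numpy as np

# Sketch of the Green's function kernels of section 2; the Fourier
# truncation M in the bounded case is an assumption.
def G_X00(y, t, yp, tp, c):
    """Free-space 1D heat kernel (homogeneous conditions at infinity)."""
    dt = t - tp
    return np.exp(-(y - yp)**2 / (4.0 * c * dt)) / (2.0 * np.sqrt(np.pi * c * dt))

def G_X11(y, t, yp, tp, c, H=1.0, M=200):
    """Bounded kernel, Dirichlet at y = 0 and y = H (long co-time form)."""
    dt = t - tp
    m = np.arange(1, M + 1)
    return (2.0 / H) * np.sum(np.exp(-(m * np.pi)**2 * c * dt / H**2)
                              * np.sin(m * np.pi * y / H)
                              * np.sin(m * np.pi * yp / H))

def G_star(G, y, t, yp, tp, c_hy, c_th, **kw):
    """Pressure kernel  G* = c_hy G(c_hy) - c_th G(c_th)."""
    return c_hy * G(y, t, yp, tp, c_hy, **kw) - c_th * G(y, t, yp, tp, c_th, **kw)
```

For short elapsed times the bounded kernel is numerically indistinguishable from the free-space one away from the boundaries, which is the intuition behind the unbounded approximation questioned later in section \ref{ch: 6 sec: Stationary_bounded}.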
\subsection{Collocation method\label{ch: 6 sec: section 3.2}} \noindent We begin by normalizing equation \eqref{Volterra_integral_equation_mod}. We choose the following normalization parameters: $t_0=\frac{\text{H}^2}{c_{th}},\;\tau_0=f(\sigma_n-P_0),\;y_0=\text{H},\;r_c=\frac{c_{hy}}{c_{th}}$. The normalized equation is then given by: \begin{align} \bar{\tau}(\bar{t})=1-\frac{f\Lambda V}{\rho C}\frac{ \text{H}}{c_{th}(r_c-1)}\int^{\bar{t}}_0\bar{\tau}(\bar{t}^\prime)\bar{G}^{\star}(\bar{y},\bar{t};\bar{y}^\prime,\bar{t}^\prime)\|_{y=y^\prime}d\bar{t}^\prime\label{ch 6: eq: normalized_integral_equation} \end{align} \noindent where $\bar{\tau}=\frac{\tau}{\tau_0},\;\bar{t}=\frac{t}{t_0},\;\bar{t}^\prime=\frac{t^\prime}{t_0},\;\bar{y}=\frac{y}{y_0},\;\bar{y}^\prime=\frac{y^\prime}{y_0}$ and $\bar{G}^\star(\bar{y},\bar{t};\bar{y}^\prime,\bar{t}^\prime)$ is the normalized Green's function kernel given by: \begin{itemize} \item[•] In the unbounded case: \begin{align} \bar{G}_{X00}^\star(\bar{y},\bar{t};\bar{y}^\prime,\bar{t}^\prime)=\frac{1}{2}\left[\frac{r_c}{\sqrt{\pi r_c(\bar{t}-\bar{t}^\prime)}}\exp\left[-\frac{(\bar{y}-\bar{y}^\prime)^2}{4r_c(\bar{t}-\bar{t}^\prime)}\right]-\frac{1}{\sqrt{\pi(\bar{t}-\bar{t}^\prime)}}\exp\left[-\frac{(\bar{y}-\bar{y}^\prime)^2}{4(\bar{t}-\bar{t}^\prime)}\right]\right]\label{ch 6: eq: normalized_unbounded_kernel} \end{align} \item[•] In the bounded case: \begin{align} \bar{G}_{X11}^\star(\bar{y},\bar{t};\bar{y}^\prime,\bar{t}^\prime)=2\sum_{m=1}^\infty\left[r_c\exp\left[-(m\pi)^2r_c(\bar{t}-\bar{t}^\prime)\right]-\exp\left[-(m\pi)^2(\bar{t}-\bar{t}^\prime)\right]\right]\sin{\left(m\pi\bar{y}\right)}\sin{\left(m\pi\bar{y}^\prime\right)}\label{ch 6: eq: normalized_bounded_kernel} \end{align} \end{itemize} \noindent Based on the work of \cite{tang2008spectral}, we apply a spectral collocation method for the calculation of the frictional response described by equation \eqref{ch 6: eq: normalized_integral_equation}.
Spectral methods allow for the evaluation of the solution over the whole domain of the problem, with an exponential rate of convergence (see \cite{tang2008spectral}). The principle of the method is the substitution of the unknown function $\bar{\tau}(\bar{t})$ inside the integral equation by a series of polynomials that constitute a polynomial basis. We then opt for the minimization of the residual between the exact and the approximate solution at specific collocation points inside the problem's domain. Here we use the Chebyshev orthogonal polynomials of the first kind (see \cite{trefethen2019approximation}). Because the Chebyshev polynomials of the first kind constitute a basis in the interval $[-1,1]$, we transform the integral equation \eqref{ch 6: eq: normalized_integral_equation} to lie in this interval (see Appendix \ref{Appendix H}). The integral equation then reads: \begin{align} U(\bar{z})=1-\frac{f\Lambda V}{\rho C}\frac{ \text{H}}{c_{th}(r_c-1)}\frac{\bar{\text{T}}}{2}\int^{\bar{z}}_{-1}U(s)G^{\star}\left(\bar{y},\frac{\bar{\text{T}}}{2}(\bar{z}+1);\bar{y}^\prime,\frac{\bar{\text{T}}}{2}(s+1)\right)ds,\label{ch 6: eq: normalized_integral_equation_mew_interval} \end{align} \noindent where we note that $U(\bar{z})=\bar{\tau}(\frac{\bar{\text{T}}}{2}(\bar{z}+1))$. In the previous equation we performed a change of the integration variable from $\bar{t}\in [0,\frac{\bar{\text{T}}}{2}(\bar{z}+1)]$ to $s\in[-1,\bar{z}]$, so that the unknown function $U(s)$ inside the integral remains in the same form as $U(\bar{z})$ outside the integral. \noindent Next, we choose to approximate the unknown function in equation \eqref{ch 6: eq: normalized_integral_equation_mew_interval} (i.e.
frictional response) by its Lagrange interpolation, i.e.: \begin{align} U(\sigma)\approx \sum_{j=0}^{N} U(\bar{z}_j)F_j(\sigma) \end{align} \noindent The Lagrange interpolation allows a function to be approximated as a linear combination of the Lagrange cardinal polynomials $F_j(\sigma)$, with weights $U(\bar{z}_j)$ corresponding to the values of the function at specific points $\bar{z}_{j}$. The Lagrange cardinal polynomials have the property that $F_m(\bar{z}_n)=\delta_{mn}$, where $\delta_{mn}$ is the Kronecker symbol. We choose to express the Lagrange polynomials with the help of the Chebyshev polynomials of the first kind, and we choose the set of approximation nodes $\bar{z}_j$ to correspond to the extrema of the Chebyshev polynomial of the first kind of degree $N$ (see \cite{trefethen2019approximation}). In this case the interpolating polynomial is written as follows: \begin{align} &U(\sigma)\approx \sum_{j=0}^{N}{}^\prime U(\bar{z}_j)P_j(\sigma),\\ &P_j(\sigma)=\begin{cases} \frac{(-1)^{j}}{\sigma-\bar{z}_j}/\sum\limits_{k=0}^{N}{}^\prime\frac{(-1)^k}{\sigma-\bar{z}_k}&\sigma\neq \bar{z}_j\text{ or } \sigma\neq \bar{z}_k\label{ch: 6 eq: barycentric formula}\\ 2&\sigma=\bar{z}_0\text{ or }\sigma=\bar{z}_N\\ 1&\sigma=\bar{z}_j\text{ and }j\neq 0\text{ or }j\neq N\\ 0&\sigma=\bar{z}_k\\ \end{cases}\\ &\sum_{j=0}^{N}{}^\prime(\cdot)_j=\sum_{j=0}^N(\cdot)_j-\frac{(\cdot)_0+(\cdot)_N}{2}\label{ch: 6 eq: mod sum} \end{align} \noindent where the barycentric formula involving the modified sum $\sum\limits_{j=0}^{N}{}^\prime(\cdot)$ is used for the cardinal polynomials and the interpolation (see \cite{trefethen2019approximation}). By making use of the barycentric formula in equation \eqref{ch: 6 eq: barycentric formula} we are able to evaluate the cardinal polynomials fast and with a smaller error than other conventional approaches (see \cite{trefethen2019approximation,tang2008spectral}).
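The barycentric evaluation above can be sketched in a few lines. The sketch follows the barycentric formulas of \cite{trefethen2019approximation}; the test function $e^x$ is an assumption of the example.

```python
import numpy as np

# Barycentric Lagrange interpolation at the Chebyshev extreme points;
# the halved end weights play the role of the prime-sum convention.
def cheb_points(N):
    """Chebyshev extreme points z_j = cos(j pi / N), j = 0..N."""
    return np.cos(np.pi * np.arange(N + 1) / N)

def bary_interp(zj, fj, x):
    """Evaluate the interpolant of data (zj, fj) at the points x."""
    w = (-1.0) ** np.arange(len(zj))    # barycentric weights (-1)^j
    w[0] /= 2.0
    w[-1] /= 2.0
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        d = xi - zj
        hit = np.isclose(d, 0.0)
        if hit.any():
            out[i] = fj[np.argmax(hit)]             # exact at the nodes
        else:
            out[i] = np.sum(w * fj / d) / np.sum(w / d)
    return out

# Example: interpolate exp(x) on [-1, 1] with N = 16.
zj = cheb_points(16)
fj = np.exp(zj)
```

The two sums in the generic branch are the numerator and denominator of the barycentric formula; their ratio reproduces each cardinal polynomial $P_j$ weighted by the data.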
We note that the Lagrange interpolation polynomial at the chosen set of Chebyshev points $\{\bar{z}_i\}$ is unaffected by Runge's phenomenon. Runge's phenomenon is the observation that high-degree Lagrange interpolation on equidistant grids leads to a large approximation error at points that do not belong to the set of interpolation nodes. The effect is more pronounced near the boundaries of the interpolation domain. \\ \newline \noindent For the numerical evaluation of the integral in equation \eqref{ch 6: eq: normalized_integral_equation_mew_interval} the Clenshaw-Curtis quadrature will be used, since it is compatible with the interpolation nodes used. We note here that the choice of the interpolation nodes $\bar{z}_j$ (the extrema of the Chebyshev polynomial of the first kind of degree $N$) leads to quadrature weights of positive sign, which reduces the error of the summation. If equidistant points were used in a quadrature rule of high order $(N>7)$, this would lead to quadrature weights of alternating sign, increasing the integration error \cite{Quarteroni2007}. We transform once again the integral of equation \eqref{ch 6: eq: normalized_integral_equation_mew_interval} from $s\in[-1,\bar{z}]$ to $\theta\in[-1,1]$ in order to apply the appropriate quadrature rule for the integration.
The new integral equation reads: \begin{align} U(\bar{z})=1-\frac{f\Lambda V}{\rho C}\frac{ \text{H}}{c_{th}(r_c-1)}\frac{\bar{\text{T}}}{2}\frac{\bar{z}+1}{2}\int^{1}_{-1}U(s(\bar{z},\theta))G^{\star}\left(\bar{y},\frac{\bar{\text{T}}}{2}(\bar{z}+1);\bar{y}^\prime,\frac{\bar{\text{T}}}{2}(s(\bar{z},\theta)+1)\right)d\theta,\label{ch 6: eq: normalized_integral_equation_mew_interval1} \end{align} \noindent The discretized form of equation \eqref{ch 6: eq: normalized_integral_equation_mew_interval1} for the Clenshaw-Curtis quadrature scheme is given by: \begin{align} &U(\bar{z}_i)=1-a\bar{t}_i\sum_{j=0}^{N}{}^\prime U(\bar{z}_j)\sum_{p=0}^{N}P_j(s_{ip})G^{\star}\left(\bar{y},\bar{t}_i;\bar{y}^\prime,\bar{t}^\prime_{ip}\right)w_p,\label{ch 6: eq: normalized_integral_equation_mew_interval_discretized_full} \end{align} \noindent where $s_{ip}=s(\bar{z}_i,\theta_p),\;\bar{t}_i=\frac{\bar{z}_i+1}{2}\bar{\text{T}},\;\bar{t}^\prime_{ip}=\frac{\bar{\text{T}}}{2}\left(s_{ip}+1\right),\;a=\frac{f\Lambda V}{2\rho C}\frac{ \text{H}}{c_{th}(r_c-1)}$.\\ \newline \noindent Finally, by adopting indicial notation with summation over repeated indices, our system is written as: \begin{align} &(\delta_{i,j}+A^\star_{i,j})U_j=g_i, \end{align} \noindent where $U_j=U(\bar{z}_j)$, $g_i=1$ and $A^\star_{i,j}$ is given by: \begin{align} &A^\star_{i,j}=A_{ij}-B_{ij}\\ &A_{ij}=a\bar{t}_i \sum_{p=0}^{N}P_j(s_{ip})G^{\star}\left(\bar{y},\bar{t}_i;\bar{y}^\prime,\bar{t}^\prime_{ip}\right)w_p\\ &B_{ij}=\begin{cases} \frac{A_{i0}}{2}&,\;j=0\\ \frac{A_{iN}}{2}&,\;j=N\\ 0&,\;\text{otherwise}\\ \end{cases} \label{ch: 6 eq: algebraic_system} \end{align} \noindent or, in matrix form: \begin{align} \left(I+A^\star\right)U=G. \end{align} \noindent We can then solve this algebraic system to find the interpolation coefficients $U_j$ of the numerical solution.
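As a self-contained check of this construction, the sketch below assembles and solves a system of the form $(I+A^\star)U=G$ for the toy equation $u(t)=1-\int_0^t u(s)\,ds$, whose exact solution is $e^{-t}$. The choices $N=16$ and $\bar{\text{T}}=1$ are assumptions of the test, and the prime-sum correction is folded into the halved barycentric end weights.

```python
import numpy as np

# Collocation sketch for u(t) = 1 - int_0^t u ds (exact: exp(-t)):
# Chebyshev extreme points, barycentric cardinals, Clenshaw-Curtis.
def clenshaw_curtis(N):
    """Chebyshev extreme points and Clenshaw-Curtis weights (N even)."""
    th = np.pi * np.arange(N + 1) / N
    w = np.empty(N + 1)
    for k in range(N + 1):
        s = sum((1.0 if 2 * j == N else 2.0) * np.cos(2 * j * th[k]) / (4 * j * j - 1)
                for j in range(1, N // 2 + 1))
        w[k] = (2.0 / N) * (1.0 - s)
    w[0] /= 2.0
    w[-1] /= 2.0
    return np.cos(th), w

def cardinals(zj, x):
    """All Lagrange cardinal polynomials P_j(x), barycentric form."""
    wb = (-1.0) ** np.arange(len(zj))
    wb[0] /= 2.0
    wb[-1] /= 2.0
    d = x - zj
    if np.any(np.isclose(d, 0.0)):
        return np.isclose(d, 0.0).astype(float)     # x is a node
    return (wb / d) / np.sum(wb / d)

N, T = 16, 1.0
z, w = clenshaw_curtis(N)
t = T * (z + 1.0) / 2.0                             # collocation times
A = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for p in range(N + 1):
        s_ip = (z[i] + 1.0) / 2.0 * (z[p] + 1.0) - 1.0   # theta -> s in [-1, z_i]
        A[i, :] += (T / 2.0) * (z[i] + 1.0) / 2.0 * w[p] * cardinals(z, s_ip)
U = np.linalg.solve(np.eye(N + 1) + A, np.ones(N + 1))
```

With a smooth solution the error decays spectrally: already at $N=16$ the computed $U$ matches $e^{-t}$ to roughly machine precision, which is the behavior exploited for the frictional response below.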
Due to the properties of the Lagrange polynomials, the coefficients $U_i$ are also the values of the numerical solution at the specific times $t_i$. \section{Applications \label{ch: 6 sec: section 4}} \noindent In this section we will present the evolution of the frictional strength $\tau(t)$ for the different cases of loading and boundary conditions described in section \ref{ch: 6 sec: BD_loading_presentation}. The available values for the fault gouge properties, considered homogeneous along its height, are given in Table \ref{ch: 6 table:material_properties}. \begin{table}[h!] \begin{center} \begin{tabular}[]{l l l l l l} \hline Parameters & Values & Properties & Parameters & Values & Properties \\ \hline \hline $f$ &$0.5$ &- &$\Lambda$ & $2.216$ & MPa/$^o$C \\ $\sigma_n$ & $200$ &MPa &$\rho C$ & $2.8$ &MPa$/^o$C \\ $P_0$ & $66.67$ &MPa &$c_{hy}$ & $10 $ &mm$^2$/s \\ $\text{H}$ & $1$ &mm &$c_{th}$ & $1 $ &mm$^2$/s \\ \hline \end{tabular} \caption{Material parameters of a mature fault at the seismogenic depth \protect\cite[see][]{rice2006heating,Rattez2018b}.} \label{ch: 6 table:material_properties} \end{center} \end{table} \subsection{Stationary strain localization mode} \subsubsection{Stationary strain localization on an unbounded domain \label{ch: 6 sec: Rice_proc_inf_layer}} \noindent The solutions for the temperature field of an infinite layer under a stationary point-source thermal load were first derived in \cite{carslaw1959conduction}. \cite{Mase1987} and \cite{Andrews2005} present temperature field solutions for stationary distributed thermal loads. Later, in \cite{Lee1987}, the authors used the above temperature solutions to derive the pressure solution fields $\Delta P(y,t)$ of the coupled pore fluid pressure equation.\\ \newline \noindent In the work of \cite{Rice2006,Rempel2006} the authors introduce a methodology for the determination of the coupled frictional response of a fault gouge under constant shear rate.
The results for the stationary instability on an infinite domain have already been derived in \cite{rice2006heating} for yielding on a mathematical plane, and were further expanded to the case of distributed yielding in \cite{Rempel2006}. In this case, a closed-form analytical solution is possible: $\tau(\delta)=f(\sigma_n-P_0)\exp(\frac{\delta}{L^\star})\erfc(\sqrt{\frac{\delta}{L^\star}})$, where $L^\star=\frac{4}{f^2}\left(\frac{\rho C}{\Lambda}\right)^2\frac{\left(\sqrt{c_{hy}}+\sqrt{c_{th}}\right)^2}{\dot{\delta}}$. The derived solution is recognized as the Hermite polynomial of degree $-1$.\\ \newline \noindent We note that this solution is dependent on the seismic slip rate $\dot{\delta}$ (see the dimensions of $L^\star$). The dependence of the fault friction on the seismic slip rate $\dot{\delta}$ (velocity weakening) has been shown in experiments \cite[see][among many others]{Badt2020,Harbord2021,Rempe2020}. In order to demonstrate the efficiency of the SCML method, we use the above analytical solution as a benchmark for comparison. In Figure \ref{ch: 6 fig: frictional_behavior_stationary_unbounded} we present the numerical results for slip on a stationary mathematical plane. \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{Rice_Stathas_Analytical_comparison.pdf} \caption{Left: $\tau-\delta$ response of the layer for different applied slip velocities $\dot{\delta}$. Due to the constant isothermal drained conditions at the boundary near infinity, the solution tends asymptotically to the zero steady state solution. For different values of the velocity $\dot{\delta}$, the analytical solution is presented by a continuous line and the numerical solution is presented by the triangle markers.
The numerical solution obtained by the SCML method coincides with the analytical curves.} \label{ch: 6 fig: frictional_behavior_stationary_unbounded} \end{figure}\\ \newline \noindent To further showcase the accuracy of our results, we present the calculated temperature $\Delta T(y,t)$ and pressure $\Delta P(y,t)$ fields, computed with the method of Gaussian quadrature at the already computed Chebyshev nodes of the time domain, on a uniform spatial grid around the position of strain localization. The results of Figure \ref{ch: 6 fig: Temp and press fields} indicate that at all times the pressure maximum coincides with the position of the strain localization, as expected from the analytical solution. This corroborates the accuracy and precision of our results. \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{Temp_Press_field_us_mod.pdf} \caption{Temperature $\Delta T$ and pore fluid pressure $\Delta P$ fields along the height of the layer for a shearing velocity $\dot{\delta}=1$ m/s, at different times during the analysis. The numerical solution is consistent with the analytical observation that the position of $\Delta P_{max}$ coincides with the position of the stationary strain localization.} \label{ch: 6 fig: Temp and press fields} \end{figure}\\ \subsubsection{Stationary strain localization on a bounded domain\label{ch: 6 sec: Stationary_bounded}} \noindent When the yielding region (PSZ) is wholly contained on a mathematical plane, one might assume that the true boundaries of the fault gouge play little role in the evolution of the phenomenon, and simulate the fault gouge region as an infinite layer. However, the validity of this model depends heavily on the pressure and temperature diffusion characteristic times in comparison to the total evolution time of the seismic slip.
In essence, the question is: does the phenomenon evolve so fast that the boundaries do not play a role in the overall frictional response?\\ \newline \noindent This is a valid question, considering that in experiments, and in the majority of the numerical simulations, we need to assign some kind of boundary conditions to the problem in question. We address this question by investigating the case of a stationary strain localization (point thermal source) in the middle of a bounded domain representing the fault gouge, with the linear Volterra integral equation of the second kind \eqref{ch 6: eq: normalized_integral_equation}. We do so by applying the new form of the kernel $G^\star_{X11}(y,y^\prime,t-t^\prime,c_{hy},c_{th})$, which takes into account the boundary conditions of coseismic slip, pressure and temperature discussed in Part I, \cite{Alexstathas2022a}. Namely, the domain of the fault gouge was assumed to have a height of $\text{H}=1$ mm. We also recall that the boundary conditions correspond to an isothermal ($\Delta T(0,t)= \Delta T(\text{H},t)=0$) drained ($\Delta P(0,t)=\Delta P(\text{H},t)=0$) case. \\ \newline \noindent In order to solve equation \eqref{ch 6: eq: normalized_integral_equation} for the new kind of boundary conditions, we need to derive the new expressions for the Green's function kernels of the thermal diffusion and coupled pore fluid pressure diffusion equations on the bounded domain. The expression for the bounded Green's function kernel of the heat diffusion equation under Dirichlet boundary conditions, equation \eqref{ch 6: eq: normalized_bounded_kernel}, can be found by applying the method of separation of variables according to \cite{cole2010heat}.\\ \newline \noindent Equation \eqref{ch 6: eq: normalized_bounded_kernel} is termed the long co-time Green's function kernel.
A mathematically equivalent short co-time solution can be constructed making use of the Green's function kernel defined for the infinite domain via the method of images; however, its form is significantly more complicated than equation \eqref{ch 6: eq: normalized_bounded_kernel} and is not convenient for the numerical procedures used in this paper. Namely, the short co-time solution is best suited for studying transient diffusion at the very start of the phenomenon. For fast timescales we do not need many terms for the short co-time series to converge to the expected degree of accuracy. However, for large timescales after the initiation of the phenomenon the long co-time solution converges faster, i.e. using fewer terms in the sum. Furthermore, the long co-time solution has a simpler form and can be integrated numerically faster, i.e. with fewer machine operations, than the short co-time one.\\ \newline \noindent Next, we need to obtain the Green's function for the coupled pore fluid pressure diffusion equation. This is done by solving the coupled pressure differential equation on the bounded domain, using the method of separation of variables. We note that the two diffusion problems (thermal and coupled pore fluid pressure) are subject to Dirichlet boundary conditions on the same domain and therefore their Fourier expansions belong to the same Sturm-Liouville problem. This allows us to express, for the first time in the literature, the Green's function kernel of the coupled pore fluid pressure diffusion equation on a bounded domain due to an impulsive thermal load. Full derivation details are given in Appendix \ref{Appendix E}, where we prove that the kernel in question can be given in a manner similar to the original expression for the infinite domain case found in \cite{Lee1987}.\\ \newline \noindent Next, we apply the kernel of equation \eqref{ch 6: eq: normalized_bounded_kernel} in equation \eqref{ch 6: eq: normalized_integral_equation}.
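The equivalence of the two representations can be verified numerically. The sketch below compares the method-of-images (short co-time) and eigenfunction-series (long co-time) forms of the bounded Dirichlet heat kernel; the truncation levels are assumptions of the snippet.

```python
import numpy as np

# Check that the short co-time (method of images) and long co-time
# (sine eigenfunction series) forms of the bounded Dirichlet heat
# kernel coincide; truncations n_img and M are assumptions.
def G_free(x, t, c):
    """Free-space 1D heat kernel."""
    return np.exp(-x**2 / (4.0 * c * t)) / (2.0 * np.sqrt(np.pi * c * t))

def G_short(y, yp, t, c, H=1.0, n_img=20):
    """Short co-time form: alternating images of the free-space kernel."""
    n = np.arange(-n_img, n_img + 1)
    return np.sum(G_free(y - yp - 2.0 * n * H, t, c)
                  - G_free(y + yp - 2.0 * n * H, t, c))

def G_long(y, yp, t, c, H=1.0, M=400):
    """Long co-time form: sine eigenfunction series."""
    m = np.arange(1, M + 1)
    return (2.0 / H) * np.sum(np.exp(-(m * np.pi / H)**2 * c * t)
                              * np.sin(m * np.pi * y / H)
                              * np.sin(m * np.pi * yp / H))
```

For early times only a few images are needed while the series requires many terms; at late times the situation reverses, which is the practical reason the long co-time form is preferred here.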
Using the SCML method, the values of friction at specific values of time ($t$) and seismic slip displacement (${\delta}$) can be derived for different seismic slip velocities ($\dot{\delta}$). The results of such an analysis are presented in Figure \ref{ch: 6 fig: frictional_behavior_stationary_bounded}. \begin{figure}[h!] \centering \includegraphics[width=0.9\linewidth]{Collocation_tau_delta_bs_500_40_total.pdf} \caption{$\tau-\delta$ response of the layer for different applied slip velocities $\dot{\delta}$. We observe that as the shearing rate increases, the softening behavior becomes more pronounced. For typical values of the seismic slip displacement we note that the effect of the boundaries becomes important. Due to the existence of a steady state, the fault recovers all of its strength lost due to thermal pressurization at the beginning of the coseismic slip.} \label{ch: 6 fig: frictional_behavior_stationary_bounded} \end{figure}\\ \newline \noindent We note here that, contrary to the results obtained in the case of the infinite layer in \cite{Rice2006,Rempel2006}, where the frictional response decreases monotonically (see also Figures \ref{ch: 6 fig: frictional_behavior_stationary_unbounded},\ref{ch: 6 fig: Temp and press fields}), in the case of the stationary thermal load on a bounded layer the frictional response is eventually influenced by the boundaries of the domain (see Figures \ref{ch: 6 fig: comparison_stationary_bounded_unbounded},\ref{ch: 6 fig: Temp and press fields bs}). Since the conditions on the boundaries are constant in time and the frictional source provides heat to the layer at a rate that is bounded by a constant $(\frac{1}{\rho C}\tau(t)\dot{\delta}\leq\frac{1}{\rho C}\tau_0\dot{\delta}=M)$, the temperature field will eventually reach a steady state.
This in turn means that at the later stages of the phenomenon the temperature profile will remain constant in time, and therefore its rate of change $\frac{\partial \Delta T}{\partial t}$ will become zero. Consequently, the phenomenon of thermal pressurization will cease, leading to a rapid pore fluid pressure decrease due to the diffusion at the boundaries. As a result, the pore fluid pressure will return to its ambient value and, therefore, friction will regain its initial value too. \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{V1000_b_ub_sta_compare1.pdf} \caption{ Comparison of the $\tau-\delta$ response of the layer for an applied slip velocity $\dot{\delta}=1000$ mm/s. We observe that the influence of the boundaries becomes important from the early stages of coseismic slip ($\delta\approx10$ mm). In the bounded case, due to the existence of a steady state, the fault tends to recover all of its strength lost to thermal pressurization at the beginning of the phenomenon. Namely, for a typical value of coseismic slip, $\delta=1000$ mm, the fault has recovered more than half of its initial frictional strength.} \label{ch: 6 fig: comparison_stationary_bounded_unbounded} \end{figure}\\ \newline \noindent It is important to note here that, as we show in Figure \ref{ch: 6 fig: comparison_stationary_bounded_unbounded}, frictional regain happens well within the time and coseismic slip ranges observed in nature during the evolution of the earthquake phenomenon. Of course, frictional regain depends on the height of the layer. Namely, as the height of the layer increases, the stress drop due to thermal pressurization at the initial stages becomes larger and the fault gouge recovers its frictional strength more slowly and at later stages of slip. However, the height of the fault gouge $\text{H}=1$ mm corresponds to typical values from fault observations around the globe \cite[see][among others]{myers2004evolution,Rice2006,Sibson2003,sulem2004experimental}.
Furthermore, based on the significantly higher hydraulic, and to a lesser extent thermal, diffusivities of the surrounding damaged zone \cite[see Part I][]{aydin2000fractures,tanaka2007thermal}, we conclude that the assumption of isothermal drained conditions at the boundaries of the fault gouge is, as a first approximation, also justified. We note in particular that for a mature fault gouge, the ratio of the hydraulic permeability and thermal conductivity of the fault gouge $(^{f})$ to the surrounding damaged zone $(^d)$ lies between $r_{hy}=\frac{k^{f}_{hy}}{k^d_{hy}}=10^2\sim 10^6,\;r_{th}=\frac{c^f_{th}}{c^d_{th}}=1\sim 10$. Therefore, the a priori assumption that an infinite layer describes the fault gouge adequately well during seismic slip should, in our opinion, be revised. \\ \newline \noindent Next, we provide in Figure \ref{ch: 6 fig: Temp and press fields bs} the field numerical solutions for the change in temperature and pressure in a bounded domain of height $\text{H}=1$ mm, under constant seismic slip rate $\dot{\delta}=1$ m/s. In the bounded domain, the fields of temperature and pressure will reach the steady state, while the maximum pore fluid pressure coincides with the position of the stationary strain localization. However, the steady state reached now is one where full frictional regain takes place. Therefore, the predicted temperature field at the steady state is not applicable, since other weakening mechanisms will take place (e.g. thermal decomposition of minerals will start at 900 $^o$C, see \cite{Sulem2009,Sulem2016}). The role of the boundary conditions at the fault gouge level thus becomes very important. \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{Temp_presfield_bs1.pdf} \caption{Temperature $\Delta T$ and pore fluid pressure $\Delta P$ fields along the height of the layer for shearing velocity $\dot{\delta}=1$ m/s, at different times during the analysis.
The numerical solution is consistent with the analytical observation that the position of $\Delta P_{max}$ coincides with the position of the stationary strain localization. The arrows indicate the evolution course of the maxima of each field. The pressure field initially increases before subsiding when the temperature field progressively reaches steady state and thermal pressurization ceases. The shaded area indicates a range of temperatures ($\Delta T\geq 800^o$C) that is prohibitively large inside the fault gouge, since it corresponds to melting of the gouge material. Moreover, at $\Delta T\geq 600^o$C chemical decomposition of minerals will start to take place inside the gouge, antagonizing the weakening mechanism of thermal pressurization.} \label{ch: 6 fig: Temp and press fields bs} \end{figure} \subsection{Traveling mode of strain localization} \noindent In the available literature \cite{Rice2006,rice2006heating} and the subsequent works \cite{Rempel2006,platt2014stability,rice2014stability}, one of the main assumptions is that the principal slip zone (PSZ), which is described by the profile of the plastic strain rate (localized on a mathematical plane or distributed over a wider zone), remains fixed in place during shearing of the infinite layer. In this work we depart from this assumption, by assuming that the principal slip zone is traveling inside the fault gouge.\\ \newline \noindent Two cases will be discussed: the first concerns the implications of a traveling shear band inside the infinite layer, while the second focuses on a moving shear band inside the bounded layer.
The difference between a stationary and a moving shear band is that in the second case a steady state for the temperature $\Delta T(y,t)$ and pressure $\Delta P(y,t)$ fields is not possible (i.e. their rates of change cannot become zero, $\frac{\partial \Delta T}{\partial t}\neq0,\;\frac{\partial \Delta P}{\partial t}\neq0$), since the profile of temperature constantly changes due to the thermal load moving inside the domain. This ensures that thermal pressurization never ceases. Thus, the value of the residual friction $\tau_{res}$ depends on the fault gouge's thermal and hydraulic properties $(c_{th}, c_{hy})$, the coseismic slip velocity $\dot{\delta}$, and the traveling velocity of the strain localization mode $(v)$. This has serious implications for the frictional response of the layer during shearing. More specifically, as the load does not stay stationary, thermal pressurization does not have enough time to act by increasing the pore fluid pressure. Therefore, according to the Mohr-Coulomb yield criterion, friction does not vanish as in the case of \cite{Rice2006}. Instead, friction reaches a non-zero residual value $\tau_{res}$. This is central for the dissipated energy \cite[see][among others]{Andrews2005,Kanamori2004,Kanamori2006} and the control of the fault transition from steady to unsteady seismic slip. \subsubsection{Traveling mode of strain localization in the unbounded domain.\label{ch: 6 sec: Traveling_unbounded}} \noindent Here we consider the shearing of a fault gouge, whose boundaries are taken at infinity. In what follows, we distinguish between the seismic slip velocity $\dot{\delta}$ and the velocity of the traveling shear band $v(t)$. In Figure \ref{ch: 6 fig: frictional_behavior_moving_unbounded}, we consider the PSZ (moving point heat source) to travel inside the fault gouge with a velocity $v$=50 mm/s. Different values for the rate of coseismic slip parameter $\dot{\delta}$ are taken into account.
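The physical argument above can be illustrated with a toy diffusion computation (hypothetical non-dimensional parameters, not the paper's solver): the same unit-strength plane heat source is propagated through a wide layer either at a fixed position or traveling at constant speed, and the traveling source produces a markedly lower peak temperature because the deposited heat is spread over the swept region:

```python
import numpy as np

def peak_temperature(v, n=401, length=2.0, c_th=1.0, t_end=0.06):
    """Explicit FD for T_t = c_th * T_yy + source; returns the peak temperature.

    A unit-strength plane source starts at y = 0.3 and moves with speed v.
    The wide domain with T = 0 at the far ends mimics a quasi-unbounded layer.
    """
    dy = length / (n - 1)
    dt = 0.2 * dy**2 / c_th              # explicit stability limit
    T = np.zeros(n)
    t = 0.0
    while t < t_end:
        i = int(round((0.3 + v * t) / dy))   # current source node
        lap = np.zeros(n)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dy**2
        rate = c_th * lap
        rate[i] += 1.0 / dy                   # discrete plane (Dirac) source
        rate[0] = rate[-1] = 0.0
        T += dt * rate
        t += dt
    return T.max()

peak_fixed = peak_temperature(v=0.0)     # stationary heat source
peak_moving = peak_temperature(v=25.0)   # traveling heat source
print(peak_fixed, peak_moving)           # the moving source stays cooler
```

The stationary peak keeps growing like $\sqrt{t}$, whereas classical moving-source theory bounds the comoving peak by a value of order $Q/v$; this is the diffusive mechanism behind the non-zero residual friction $\tau_{res}$.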
The shear band velocity $v$ is in agreement with observations from the numerical results of Part I \cite{Alexstathas2022a}. Contrary to the results obtained in the case of a stationary strain localization studied in \cite{Rice2006}, our results indicate the existence of a lower bound in the frictional strength $\tau_{res}$, dependent on the rate of seismic slip $\dot{\delta}$ (see Figure \ref{ch: 6 fig: friction_compare_mov_sta_unbounded}). \begin{figure}[h] \centering \includegraphics[width=0.5\linewidth]{tau_d_mov_unbounded_no_markers1.pdf} \caption{$\tau-\delta$ response of the layer for different slip velocities $\dot{\delta}$ applied. We observe that as the shearing rate increases, the softening behavior becomes more pronounced. Higher seismic slip rates correspond to lower residual values for friction.} \label{ch: 6 fig: frictional_behavior_moving_unbounded} \end{figure}\\ \newline \noindent In Figure \ref{ch: 6 fig: frictional_behavior_moving_unbounded}, we observe that an increase in seismic slip velocity $\dot{\delta}$ leads to a decrease of the frictional plateau. Since the plateau reached in these cases is other than the initial friction value corresponding to the ambient pore fluid pressure, we conclude that thermal pressurization is still present in the model's response. This is true since the profile of temperature changes continuously due to the yielding plane moving at a constant velocity $v$. This forces the maximum temperature, $T_{max}$, to move in the same way. Thus, the rate of change of the temperature field $\frac{\partial \Delta T}{\partial t}$, which is the cause of thermal pressurization, does not vanish.\\ \begin{figure}[h] \centering \includegraphics[width=0.75\linewidth]{V1000_ub_ub_mov_compare.pdf} \caption{Comparison of the $\tau-\delta$ frictional response between a moving and a stationary strain localization (PSZ) in an unbounded domain.
The assumption of a traveling strain localization leads to a plateau of non-zero residual friction $\tau_{res}$, contrary to the solution of \protect{\cite{Rice2006}}, which is based on a stationary PSZ.} \label{ch: 6 fig: friction_compare_mov_sta_unbounded} \end{figure}\\ \newline \noindent In Figure \ref{ch: 6 fig: frictional_behavior_moving_unbounded_vel}, we plot the frictional response of the fault for a given seismic slip velocity $\dot{\delta}=1$ m/s, treating the shear band velocity $v$ as a parameter. We notice that slower moving shear bands force the fault to faster and larger frictional strength drops, before the response eventually reaches a plateau. This is consistent with the observations made in \cite{Rice2006}, where the stationary shear band, whose response presents an infinite negative slope at the start of the slip $\delta$ and tends asymptotically to zero as $\delta$ increases, can be treated as a special case of the traveling localization mode as the shear band velocity tends to zero ($v=0$). \begin{figure}[h!] \centering \includegraphics[width=0.5\linewidth]{tau_d_mov_unbounded_no_markers3.pdf} \caption{Frictional response $\tau-\delta$ of the layer for different velocities $v$ of the traveling PSZ. For low traveling velocities the response tends to the behavior of stationary slip on a mathematical plane. As the traveling velocity increases, the drop in friction becomes smaller.} \label{ch: 6 fig: frictional_behavior_moving_unbounded_vel} \end{figure}\\ \newline \noindent In Figure \ref{ch: 6 fig: Temp and press fields unbounded moving}, we present the evolution with time of the temperature $\Delta T(y,t)$ and pressure increase $\Delta P(y,t)$ fields, in the region of the unbounded domain covered by the traveling strain localization mode.
We note that in this case the traveling localization mode leads to a distribution of the thermal load inside the domain, which, while thermal pressurization remains active, leads to significantly lower values of temperature inside the domain. We note that the frictional response shown in Figure \ref{ch: 6 fig: frictional_behavior_moving_unbounded} is consistent with the pressure increase $\Delta P(y,t)$ inside the domain, while the temperature and pressure fronts coincide with the prescribed position of the traveling strain localization (thermal load). \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{Temp_Press_field_um.pdf} \caption{Temperature $\Delta T$ and pore fluid pressure $\Delta P$ fields along the height of the layer for shearing velocity $\dot{\delta}=1$ m/s, at different times during the analysis. The numerical solution is consistent with the analytical observation that the position of $\Delta P_{max}$ coincides with the position of the traveling strain localization.} \label{ch: 6 fig: Temp and press fields unbounded moving} \end{figure}\\ \subsubsection{Traveling mode of strain localization in the bounded domain.\label{ch: 6 sec: Traveling_bounded}} \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{schema_fault_gouge.pdf} \caption{Schematic representation of a fault gouge of height $H=1$ mm, under seismic slip $\delta$. The PSZ (red line) is allowed to travel in a region of thickness h=0.6 mm, according to the numerical results of Part I, \protect{\cite{Alexstathas2022a}}. The PSZ is moving periodically inside the region $h$ with velocity $v$.} \label{ch: 6 fig: explanatory_figure_moving_unbounded} \end{figure} \noindent In this section we investigate the frictional response of the layer of height $\text{H}=1$ mm, when the plastic strain localization (PSZ) travels inside a predefined region with a width $\text{h}=0.6$ mm, as shown in Figure \ref{ch: 6 fig: explanatory_figure_moving_unbounded}.
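The periodic shuttling of the PSZ inside the band of width h can be sketched directly. The snippet below is our own minimal implementation, using the values H = 1 mm, h = 0.6 mm and v = 30 mm/s quoted in the text, with the triangle-wave amplitude written as h/2 so that the position u(t) is in millimetres; it generates the yielding-plane trajectory with period T = 2h/v and checks that it stays inside [H/2 - h/2, H/2 + h/2]:

```python
import numpy as np

# Triangle pulse train shuttling the PSZ back and forth at constant speed v
# inside a band of width h centred at mid-height H/2.
H, h, v = 1.0, 0.6, 30.0          # mm, mm, mm/s (values quoted in the text)
T_per = 2.0 * h / v               # period of one back-and-forth sweep, s

def u(t):
    """Position of the PSZ at time t: triangle wave of amplitude h/2 about H/2."""
    phase = (t / T_per) % 1.0
    tri = 4.0 * abs(phase - 0.5) - 1.0   # triangle wave ranging over [-1, 1]
    return H / 2.0 + (h / 2.0) * tri

t = np.linspace(0.0, 3 * T_per, 1001)
pos = np.array([u(ti) for ti in t])
print(pos.min(), pos.max())       # stays within [0.2, 0.8] mm for these values
```

Note that the sweep speed of this trajectory is exactly v, since the plane traverses the full width h in half a period h/v.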
This region has the same width as the plastified region predicted by our numerical model in Part I \cite{Alexstathas2022a} (see Figure \ref{Part I-ch: 5 fig: l_dot_gamma_final_1000-T_p_final_1000}). Based on the numerical results of Part I, we apply a periodic mode of traveling strain localization, with a constant velocity $v=30$ mm/s. We prescribe the trajectory of the yielding plane, whose position $u(t)$ is given by a triangle pulse train: \begin{align} u(t)=\frac{\text{H}}{2}+\frac{\text{h}}{2\text{H}}Tr(v t), \end{align} \noindent where $\text{H}$ is the height of the layer, $\text{h}$ is the width of the plastified region, $v$ is the velocity of the strain localization and $Tr(\cdot)$ is the triangle wave periodic function. The period is given by $T=\frac{2\text{h}}{v}$. The resulting linear Volterra integral equation of the second kind is solved numerically by making use of the spectral collocation method of section \ref{ch: 6 sec: section 3}. \\ \newline \noindent We observe in Figure \ref{ch: 6 fig: frictional_behavior_moving_bounded} that as the shearing rate increases, the softening behavior becomes more pronounced. For typical values of the seismic slip displacement we note that the effect of the boundaries becomes important. The frictional response presents oscillations due to the periodic movement of the strain localization. Since the strain localization is constantly moving, a steady state is not possible for the fields of temperature and pressure ($\frac{\partial \Delta T}{\partial t}\neq 0\to\frac{\partial \Delta P}{\partial t}\neq 0$). This means that the friction presents a residual value, $\tau_{res}$, which is lower than the fully recovered value of the stationary bounded case. \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{tau_d_bounded_traveling_d_1_annotations_mod2.pdf} \caption{$\tau-\delta$ response of the bounded layer for different slip velocities $\dot{\delta}$ applied.
A periodic traveling localization mode is applied. We observe that as the shearing rate increases, the softening behavior becomes more pronounced. For typical values of the seismic slip displacement we note that the effect of the boundaries becomes important. As the periodic traveling localization mode is constantly moving, a steady state is not possible. This means that the friction presents an oscillating residual value lower than the fully recovered value of the stationary bounded case.} \label{ch: 6 fig: frictional_behavior_moving_bounded} \end{figure}\\ \noindent Assuming the material parameters $c_{th},c_{hy}$ and the height of the layer $\text{H}$ constant, characteristics such as the oscillation amplitude $A$, the circular frequency $\omega$ and the residual value of friction $\tau_{res}$ are controlled by three parameters: the thickness $h$ of the prescribed region inside which the PSZ is allowed to travel, the velocity $v$ of the traveling PSZ, and the seismic slip rate $\dot{\delta}$ applied at the fault gouge. \\ \newline \noindent In Figure \ref{ch: 6 fig: frictional_behavior_moving_bounded_vd_compare}, we investigate the influence of the shearing velocity $\dot{\delta}$ and the velocity of the traveling shear band $v$ on the frictional response of a fault gouge with height $\text{H}=1$ mm. We note that the period of oscillations in the frictional response depends on the velocity with which the shear band travels inside the fault gouge. For the range of applied traveling shear band velocities $30-50$ mm/s the minima and maxima of the frictional response $\tau-\delta$ are not affected. \begin{figure}[h!] \centering \includegraphics[width=0.9\linewidth]{tau_d_bm_vd0503_compare.pdf} \caption{$\tau-\delta$ response of the bounded layer for different ratios of strain localization velocities $v$ to coseismic slip rates $\dot{\delta}$ applied $(\frac{v}{\dot{\delta}})$. We note that for the same rate the period of oscillation remains the same.
The period of oscillations depends on the height of the layer $H$ and the velocity of the strain localization.} \label{ch: 6 fig: frictional_behavior_moving_bounded_vd_compare} \end{figure} \newline \noindent In Figure \ref{ch: 6 fig: frictional_behavior_moving_per_bounded_sta_unbounded}, we present a comparison between the friction developed during shearing of a bounded fault gouge and the model of slip on a stationary mathematical plane presented in section \ref{ch: 6 sec: Rice_proc_inf_layer} and in \cite{Rice2006}. In the bounded fault gouge, the seismic slip velocity is given by $\dot{\delta}=1000$ mm/s. We further consider the shear band to travel with a velocity $v=30$ mm/s inside a predefined region of height $\text{h}=0.6$ mm. We note that the two responses differ. The periodic movement of the yielding plane (thermal load) inside the layer leads to frictional oscillations. This happens because the yielding plane moves towards the isothermal drained boundaries that function as heat and pressure sinks. Namely, the crests of the oscillations correspond to the time instants when the load approaches the fault gouge boundaries, while the troughs correspond to the times when the PSZ is closest to the middle of the layer. We note here that the average friction inside the layer, $\tau_{ave}$, is increasing due to the diffusion of pressure and temperature at the boundaries of the fault gouge. We note also that the oscillatory movement of the PSZ moves excess heat and pressure towards the boundaries of the fault gouge, leading to a ventilation phenomenon that further enhances the recovery of frictional strength. It is likely that removing the invariance along the slip direction would lead to vortices and other convective phenomena inside the layer \cite[see][]{griffani2013rotational,miller2013eddy,rognon2015circulation}. However, 2D and 3D phenomena inside the fault gouge are not explored here. \begin{figure}[h!]
\centering \includegraphics[width=0.75\linewidth]{Tau_d_per_Vel_Compare_annotations2.pdf} \caption{Comparison of the $\tau-\delta$ frictional response between a moving periodic strain localization on a bounded domain and a stationary strain localization (PSZ) on an unbounded domain. The influence of the boundary conditions is noticeable from the initial stages of the coseismic slip ($\delta\approx 10$ mm).} \label{ch: 6 fig: frictional_behavior_moving_per_bounded_sta_unbounded} \end{figure}\\ \newline \noindent The results obtained in Figures \ref{ch: 6 fig: frictional_behavior_moving_bounded},\ref{ch: 6 fig: frictional_behavior_moving_bounded_vd_compare}, \ref{ch: 6 fig: frictional_behavior_moving_per_bounded_sta_unbounded} are in qualitative agreement with those of Part I \cite{Alexstathas2022a}. The difference in the values is due to the assumption of a Dirac load in this paper, in order to preserve the equilibrium inside the band. Assuming a distribution of the yielding rate $\dot{\gamma}^p$ that is not singular while respecting the equilibrium conditions along the layer, as is the case for the Cosserat continuum, would allow for higher minima in the frictional response, because of the distributed thermal load over the finite thickness of the yielding region. This leads to more efficient diffusion at the initial stages of thermal pressurization. \\ \newline \noindent In Figure \ref{ch: 6 fig: fields_moving_bounded} we present the fields of temperature $\Delta T(y,t)$ and pore fluid pressure increase $\Delta P(y,t)$ during shearing of the bounded fault gouge, with coseismic slip rate $\dot{\delta}=1$ m/s, assuming a strain localization mode traveling with a velocity of $v=30$ mm/s. We note that along the bounded fault gouge, the pore fluid pressure increase might become negative. This is acceptable as long as the total pore fluid pressure does not become negative ($\Delta P(y,t)>-P_0$).
This is a characteristic that also exists in our fully nonlinear numerical analyses on the bounded domain (see \cite{Alexstathas2022a}, Figure \ref{Part I-ch: 5 fig: l_dot_gamma_final_1000-T_p_final_1000}). \begin{figure}[h!] \centering \includegraphics[width=0.75\linewidth]{Temp_Press_field_bm_01_neg_p.pdf} \caption{Temperature $\Delta T$ and pore fluid pressure $\Delta P$ fields along the height of the layer for shearing velocity $\dot{\delta}=1$ m/s, at different times during the analysis. Because of the thermal load moving inside the domain closer to the sinks at the boundaries, temperature reaches markedly smaller values than in the stationary case. We note that the change in the pressure field presents negative values, leading to regions of smaller pressure than the initial $P_0$ ($P(y,t)=P_0+\Delta P(y,t)$). This coincides with the numerical analyses presented in Part I, \cite{Alexstathas2022a}.} \label{ch: 6 fig: fields_moving_bounded} \end{figure}\\ \section{Conclusions} \noindent In this paper a series of numerical results has been obtained for the coupled thermal and pore fluid pressure diffusion equations. We follow the methodology developed in \cite{Rice2006,Rempel2006}, and we expand it to the cases of bounded domains and moving thermal loads resulting from traveling (flutter) instabilities on a Cauchy continuum \cite[see][]{Rice2006,Benallal2003,benallal2005localization,Rice2014,Platt2014}.\\ \newline \noindent To handle the integro-differential equations, the SCLM method was applied \cite[see][]{Elnagar1996,tang2008spectral}. The method can handle the weakly singular kernels that appear in the unbounded case and in the case of a stationary thermal load on the bounded domain.
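As an illustration of how such weakly singular kernels can be handled numerically, the sketch below solves a standard Abel-type Volterra equation of the second kind with a known closed-form solution, using simple product integration in which the singular factor is integrated exactly over each sub-interval. This is a minimal stand-in for the idea, not the SCLM implementation used in the paper:

```python
import numpy as np
from math import erfc, exp, pi, sqrt

# Standard test problem with a weakly singular (Abel) kernel:
#     u(t) = 1 - \int_0^t u(s) / sqrt(t - s) ds,
# whose closed-form solution is u(t) = exp(pi t) * erfc(sqrt(pi t)).
N = 400
t = np.linspace(0.0, 1.0, N + 1)
u = np.zeros(N + 1)
u[0] = 1.0

for i in range(1, N + 1):
    acc = 0.0
    w_last = 0.0
    for j in range(i):
        # exact integral of (t_i - s)^(-1/2) over [t_j, t_{j+1}]
        w = 2.0 * (sqrt(t[i] - t[j]) - sqrt(t[i] - t[j + 1]))
        if j < i - 1:
            acc += w * 0.5 * (u[j] + u[j + 1])   # trapezoidal product rule
        else:
            acc += w * 0.5 * u[j]                # u_i is still unknown
            w_last = w
    u[i] = (1.0 - acc) / (1.0 + 0.5 * w_last)    # solve implicitly for u_i

exact = exp(pi * t[-1]) * erfc(sqrt(pi * t[-1]))
print(u[-1], exact)
```

Because the square-root singularity is integrated analytically, the quadrature weights stay bounded and the scheme converges even though the kernel blows up at $s = t$.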
The method also generalizes to the case of a periodic traveling strain localization inside the bounded domain, which is in accordance with the numerical results of Part I, \cite{Alexstathas2022a}.\\ \newline \noindent It is found that, contrary to the case of a stationary thermal load on an unbounded domain described in \cite{Rice2006}, taking into account the existence of the boundary conditions at the edges of the fault gouge plays an important role in the frictional evolution of the fault for a range of values of the seismic slip velocities commonly observed during earthquake events. Namely, for a seismic slip $\delta$ of 1 m under a seismic slip velocity $\dot{\delta}$=1 m/s, the influence of the boundaries becomes important after the first 0.1 m of slip. It is shown that under the influence of homogeneous Dirichlet conditions on the bounded domain, a steady state is reached for the temperature field, which in turn implies that the effects of thermal pressurization progressively attenuate until it completely ceases. In this case the temperature rise inside the fault gouge is well above the melting point of the fault gouge material. The apparent scarcity of pseudotachylytes and the absence of widespread melting observations in faults (see \cite{Brantut2008,kanamori2004physics,Rice2006}), however, indicate that other possible frictional weakening mechanisms will become prevalent, such as chemical decomposition of minerals \cite[see][]{Sulem2009}.\\ \newline \noindent Furthermore, the effects of a moving thermal load corresponding to a traveling strain localization (flutter instability) inside the fault gouge were examined under both unbounded and bounded boundary conditions.
In both cases, the traveling strain localization mode showed the existence of a plateau in the frictional strength of the fault, $\tau_{res}$ (see Figures \ref{ch: 6 fig: frictional_behavior_moving_unbounded}, \ref{ch: 6 fig: frictional_behavior_moving_bounded}).\\ \newline \noindent In the case of the traveling load on the unbounded domain, the fact that the load changes its position constantly leads to a non-zero rate of change of the temperature field ($\frac{\partial \Delta T(x,t)}{\partial t}\neq 0$) and a persistent influence of the thermal pressurization term on the pore fluid pressure profile. Moreover, because the thermal load changes its position, heat does not have time to accumulate at one point and provoke a pressure increase that eliminates fault friction. Instead, fault friction reaches a non-zero plateau (see Figure \ref{ch: 6 fig: frictional_behavior_moving_unbounded}). This is an important result since it directly influences the energy dissipated during seismic slip.\\ \newline \noindent Moreover, we examined the influence of the velocity of the strain localization (moving thermal load) on the frictional evolution. Based on our analyses, we established that faster traveling shear bands exhibit a smoother stress drop at the first stages of the analysis and reach a higher plateau of frictional strength, see Figure \ref{ch: 6 fig: frictional_behavior_moving_unbounded_vel}. When the velocity of the shear band tends to zero, we retrieve the solution described in \cite{Rice2006}, as expected.\\ \newline \noindent Next, a traveling instability was applied to a bounded domain with homogeneous Dirichlet boundary conditions. Again, the results show that the frictional strength of the fault reaches a plateau and is not fully recovered as in the case of a stationary instability (see Figure \ref{ch: 6 fig: frictional_behavior_moving_bounded}).
The reason is the change of the position of the thermal load during the analysis and the subsequent change of the temperature profile, leading to a non-attenuating thermal pressurization phenomenon. Again, the plateau reached differs based on the traveling velocity of the shear band $v$, which ranges in the order of $20\sim 50$ mm/s according to the numerical analyses of Part I, \cite{Alexstathas2022a}. In this case, it is shown that, in contrast to the case of a stationary thermal load on the bounded domain, the fault never entirely recovers its frictional strength since the effects of thermal pressurization never cease.\\ \newline \noindent The results presented above clearly show a strong dependence of the fault's frictional behavior on both the fault gouge boundary conditions and the strain localization mode (traveling or stationary PSZ) introduced into the medium. These results can be used as a preliminary model in order to evaluate qualitatively the results obtained by numerical analyses taking into account the microstructure of the fault gouge material, where discerning between the effects of the different mechanisms affecting the frictional response of a fault undergoing thermal pressurization is more involved. The results of the fully non-linear numerical analyses with the Cosserat micromorphic continuum of Part I agree qualitatively with the results from the linear model of this paper. This indicates that the driving cause behind the obtained results is the diffusion from the thermal and hydraulic couplings. The microstructure contributes to a lesser extent. Its use in the solution of the BVP presented in Part I (see \cite{Alexstathas2022a}) is required in order for the dissipation and the meta-stable frictional response of the fault gouge to be calculated correctly, excluding mesh dependency from the numerical results.
\\ \newline \noindent In conclusion, our results show that for typical values of seismic slip $\delta$ and seismic slip velocity $\dot{\delta}$, the effects of the boundaries of the fault gouge cannot be ignored. This means that those effects need to be accounted for in both numerical analyses and laboratory experiments. The influence of different kinds of boundary conditions needs to be studied. The introduction of a traveling (flutter-type) strain localization mode is an important aspect of our model. Its presence increases the frequency content of the earthquake and prevents the bounded fault gouge from fully recovering its frictional shear strength due to the diffusion at the boundaries. Furthermore, it contributes to keeping the temperatures inside the fault gouge smaller than in the stationary cases. The existence of oscillations and the reduction of the peak residual frictional strength are also important in understanding the transition from stable to unstable seismic slip and subsequent fault nucleation \cite[see][among others]{Rempel2006,Rice1973a,Rice2006,viesca2015ubiquitous}. Furthermore, the existence of non-zero upper and lower bounds in the fault's frictional behavior ($\tau_{min},\tau_{res}$) has serious implications for any attempt at controlling the transition from stable (aseismic) to unstable (coseismic) slip \cite[see][]{Stefanou2019,Stefanou2020,tzortzopoulos2021Thesis}. \section*{Acknowledgments} \noindent The authors would like to acknowledge the support of the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant agreement no. 757848 CoQuake). \let\clearpage\relax
\section*{Introduction} One of the main achievements in high energy physics in the last decade is the observation of a new boson at the Large Hadron Collider (LHC) by the two collaborations ATLAS and CMS \cite{HiggsATLAS,HiggsCMS}. The observed particle is the candidate for the missing key element of the standard model, i.e., the Higgs boson, $\mathit{h_{SM}}$ \cite{Higgs1,Higgs2,Higgs3,Englert1,Kibble1,Kibble2}, and its properties are in reasonable agreement with SM predictions, as verified by various analyses at the LHC \cite{LHC1,LHC2,LHC3,LHC4,LHC5,LHC6,LHC7,LHC8}. Within the uncertainty of these measurements, there is still the possibility to consider physics beyond the Standard Model (BSM), such as the two Higgs doublet model (2HDM) \cite{2hdm1,2hdm2,2hdm3}, which introduces the SM-like Higgs boson candidate together with extra neutral and charged Higgs bosons. Although the 2HDM provides the Higgs sector for supersymmetry in the minimal form (MSSM) \cite{MSSM1,MSSM2,MSSM3}, it is still attractive as a standalone model due to the possibility of better agreement with experimental data \cite{fmahmoudi}. The structure of the 2HDM and its parameters make it possible for the lightest Higgs boson properties to coincide with those of the SM Higgs boson \cite{2hdm_theorypheno}. The heavy neutral CP-even (CP-odd) Higgs bosons $\mathit{H}$ ($\mathit{A}$) and the two charged Higgs bosons $\mathit{H^{\pm}}$ are considered as extra Higgs bosons to be observed or excluded in the current or future experiments. After the discovery of the light Higgs boson candidate, one of the main goals of the ATLAS and CMS collaborations has been the search for the extra Higgs bosons. The ATLAS collaboration has reported an analysis of $pp \rightarrow A\rightarrow Zh$ \cite{atlas3}, where the CP-odd Higgs boson, $A$, decays to a $Z$ boson and the 125 GeV Higgs boson. They cover four types of 2HDM based on the Higgs-fermion couplings and results are presented in terms of exclusion contours in the parameter space.
These results are confirmed by the CMS collaboration \cite{cms2}. The heavy Higgs conversion, i.e., $A \to ZH$, has been analyzed by the two collaborations CMS \cite{cms1} and ATLAS \cite{atlas1,atlas2}. We will discuss these results in the next sections. While collision data is being taken by the two LHC collaborations CMS and ATLAS, there are ongoing analyses focusing on the possibility of observing extra Higgs bosons at the LHC luminosity upgrade \cite{hllhc1,hllhc2} and also at future lepton colliders such as CLIC \cite{clichiggs1,clichiggs2}, ILC \cite{ilchiggs}, FCC \cite{fcchiggs} and CEPC \cite{cepchiggs}. In a number of recent works, we analyzed charged \cite{hashemimuon1,hashemimuon2,hashemichlc} and neutral \cite{htype1,htype3,htype4} Higgs boson production and decay at lepton colliders and provided prospects for their observation in different scenarios using benchmark points in the parameter space. The above analyses were based on the alignment limit \cite{align1,align2,align3}, which is defined as the scenario in which the properties of one of the neutral CP-even Higgs mass eigenstates coincide with those of the SM Higgs boson. The alignment limit is naturally achieved in the so-called decoupling limit, where the masses of the other scalar states are large and decouple from the SM-like Higgs boson \cite{decoupling}. However, it is possible to achieve the alignment limit even without decoupling \cite{align1,align2,align3}, which has been the case in our previous studies. In the current work, we consider the possibility of departing from the alignment limit and we perform a general scan of the parameter space to analyze the neutral Higgs boson branching ratios. The analysis is not limited to a specific type of the 2HDM; all types are analyzed and compared to reach a conclusion on the choice of the most relevant production process and decay channel for the neutral Higgs bosons in each part of the parameter space.
In what follows, a brief theoretical description of the 2HDM and the software setup used for the analysis are presented. Next, we discuss the signal processes adopted by the LHC collaborations and then present our detailed study of the neutral Higgs boson decay channels in different mass scenarios, the theoretical constraints, and their relevance in each type of the model. The final conclusion for each type of the 2HDM is presented based on a kinematic analysis of events. \section{The two Higgs doublet model} The SM Higgs Lagrangian is written in the form \begin{equation} \mathcal{L}=(\partial_{\mu}\Phi)^\dagger (\partial^{\mu}\Phi)-\mathcal{V}(\Phi) \end{equation} where $\mathcal{V}$ is the Higgs potential based on a single Higgs doublet $\Phi$: \begin{equation} \mathcal{V} = \mu^2\Phi^\dagger\Phi+\lambda(\Phi^\dagger\Phi)^2 \end{equation} With this form of the potential, the condition for a non-zero vacuum expectation value of the Higgs field is $\mu^2<0$. The two Higgs doublet model is constructed as an extension of the SM Higgs sector by introducing two Higgs doublets $\Phi_1$ and $\Phi_2$. Writing all possible Lagrangian terms introduces additional degrees of freedom: the SM Higgs potential $\mu^2$ term is extended to the three parameters $m_{11}^2$, $m_{22}^2$ and $m_{12}^2$, and the $\lambda$ term is extended to seven terms containing $\lambda_1$ to $\lambda_7$ \cite{decoupling,2hdm_higgssector1,tanbsignificance}. In such a general Higgs potential, flavor-changing neutral current (FCNC) interactions mediated by the Higgs bosons arise. It has been shown that such FCNC terms are avoided at tree level by imposing a discrete $Z_2$ symmetry ($\Phi_1 \to \Phi_1$ and $\Phi_2 \to -\Phi_2$) \cite{2hdm2}.
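Before moving to the 2HDM potential, note that the $\mu^2<0$ condition above fixes the SM vacuum explicitly; minimizing $\mathcal{V}$ with respect to $\Phi^\dagger\Phi$ gives the standard result:

```latex
% Stationarity of the SM potential:
%   d\mathcal{V}/d(\Phi^\dagger\Phi) = \mu^2 + 2\lambda\,(\Phi^\dagger\Phi) = 0
\begin{equation*}
\langle \Phi^\dagger\Phi\rangle \;=\; \frac{v^2}{2} \;=\; -\frac{\mu^2}{2\lambda},
\qquad v=\sqrt{-\mu^2/\lambda}\simeq 246~\textnormal{GeV}.
\end{equation*}
```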
The 2HDM Higgs potential under softly broken $Z_2$ symmetry (allowing $m_{12}^2\neq 0$) reduces to the following form \cite{2hdm_higgssector2}: \begin{align} \mathcal{V} &= m_{11}^2\Phi_1^\dagger\Phi_1+m_{22}^2\Phi_2^\dagger\Phi_2-m_{12}^2\left(\Phi_1^\dagger\Phi_2+\Phi_2^\dagger\Phi_1\right) \nonumber\\ &+\frac{1}{2}\lambda_1\left(\Phi_1^\dagger\Phi_1\right)^2+\frac{1}{2}\lambda_2\left(\Phi_2^\dagger\Phi_2\right)^2 \nonumber\\ &+\lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right)+\lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right) \nonumber\\ &+\frac{1}{2}\lambda_5\left[\left(\Phi_1^\dagger\Phi_2\right)^2+\left(\Phi_2^\dagger\Phi_1\right)^2\right] \label{lag} \end{align} The condition corresponding to the SM requirement $\mu^2<0$ is that the Higgs mass matrix built from the $m^2_{ij}$ has at least one negative eigenvalue. If this is the case, the two doublets can be written in terms of their vacuum expectation values: \begin{equation} \langle \Phi_1 \rangle=\frac{1}{\sqrt{2}} \left( \begin{array}{c} 0\\ v_1\end{array}\right), \qquad \langle \Phi_2\rangle= \frac{1}{\sqrt{2}} \left(\begin{array}{c}0\\ v_2 \end{array}\right)\,\label{vevs} \end{equation} where the ratio of the two vevs is a free parameter of the model, denoted by $\tan\beta=v_2/v_1$, with $v^2=v_1^2+v_2^2=(246~ \textnormal{GeV})^2$. The other parameter is the mixing angle $\alpha$ used to diagonalize the CP-even Higgs mass-squared matrix. The two angles $\alpha$ and $\beta$ appear in the Higgs-fermion and Higgs-gauge couplings \cite{2hdm_higgssector2}.
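As a small numerical illustration (a standalone check, not part of the analysis chain), the individual vevs follow from $\tan\beta$ and $v$ via $v_1=v\cos\beta$ and $v_2=v\sin\beta$:

```python
import math

# Recover the individual vevs from tan(beta), using v = 246 GeV:
#   v1 = v*cos(beta),  v2 = v*sin(beta),  tan(beta) = v2/v1.
V = 246.0  # GeV

def vevs(tan_beta):
    beta = math.atan(tan_beta)
    return V * math.cos(beta), V * math.sin(beta)

v1, v2 = vevs(10.0)
assert abs(v1**2 + v2**2 - V**2) < 1e-6   # v^2 = v1^2 + v2^2 by construction
assert abs(v2 / v1 - 10.0) < 1e-9         # ratio reproduces tan(beta)
```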
The Higgs-fermion Yukawa interactions, keeping only the neutral Higgs terms, take the following form \begin{equation} \mathcal{L}_{Y}=\sum_{f=u,d,\ell}\frac{m_f}{v}\Big(\xi_{h}^{f}\bar{f}fh + \xi_{H}^{f}\bar{f}fH - i\xi_{A}^{f}\bar{f}\gamma_5fA \Big) \label{yukawa1} \end{equation} where the couplings are expressed as the corresponding SM value, $m_f/v$, times the type-dependent factors $\xi_{h/H/A}^{u,d,\ell}$ presented in Tab. \ref{couplings}. The CP-even Higgs coupling factors are sometimes written in terms of $\sin(\beta-\alpha)$ or $\cos(\beta-\alpha)$ using trigonometric relations \cite{decoupling}: \begin{align} \sin\alpha/\sin\beta &= \cos(\beta-\alpha)-\cot\beta\,\sin(\beta-\alpha) \nonumber\\ \cos\alpha/\cos\beta &= \cos(\beta-\alpha) + \tan\beta\,\sin(\beta-\alpha) \nonumber\\ -\sin\alpha/\cos\beta &= \sin(\beta-\alpha)-\tan\beta\,\cos(\beta-\alpha) \nonumber\\ \cos\alpha/\sin\beta &= \sin(\beta-\alpha) + \cot\beta\,\cos(\beta-\alpha) \label{trig} \end{align} The Higgs boson couplings to gauge bosons are type independent and, normalized to their corresponding SM values, read \begin{equation} g_{hVV}=\sin(\beta-\alpha),~~g_{HVV}=\cos(\beta-\alpha). \label{hgauge} \end{equation} There is no tree-level coupling of the CP-odd Higgs boson $A$ to vector bosons. Since $\xi^{u,d,\ell}_h$ is either $\cos\alpha/\sin\beta$ or $-\sin\alpha/\cos\beta$, it follows from Eqs. \ref{trig} and \ref{hgauge} that both the $h$-fermion and $h$-gauge couplings align with their SM values when $\sin(\beta-\alpha)=1$. One consequence of alignment is that the heavier CP-even Higgs coupling to gauge bosons vanishes, while its couplings to fermions are expressed in terms of $\tan\beta$ or $\cot\beta$. This simplified scheme of Higgs-fermion/gauge couplings has been studied in various analyses.
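The four relations in Eq. \ref{trig} are elementary angle-difference identities and can be verified numerically, e.g. in Python (a quick sanity check, not part of the analysis chain; the angles are arbitrary illustrative values):

```python
import math

# Numerical check of the four relations in Eq. (trig): returns the largest
# |LHS - RHS| over the four identities for given mixing angles alpha, beta.
def max_identity_violation(alpha, beta):
    d = beta - alpha
    cot_b = math.cos(beta) / math.sin(beta)
    pairs = [
        (math.sin(alpha) / math.sin(beta),
         math.cos(d) - cot_b * math.sin(d)),
        (math.cos(alpha) / math.cos(beta),
         math.cos(d) + math.tan(beta) * math.sin(d)),
        (-math.sin(alpha) / math.cos(beta),
         math.sin(d) - math.tan(beta) * math.cos(d)),
        (math.cos(alpha) / math.sin(beta),
         math.sin(d) + cot_b * math.cos(d)),
    ]
    return max(abs(lhs - rhs) for lhs, rhs in pairs)

assert max_identity_violation(0.3, 1.1) < 1e-12
```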
The extra Higgs bosons ($H$ and $A$) are then gaugeophobic, and their couplings to fermions, normalized to the corresponding SM couplings, depend only on $\beta$ once $\alpha$ is fixed through $\sin(\beta-\alpha)=1$. In this work, we do not restrict ourselves to this requirement; instead, we take $\sin(\beta-\alpha)$ and $\tan\beta$ as input to evaluate the couplings of Tab. \ref{couplings} with the use of \texttt{2HDMC 1.8} \cite{2hdmc1,2hdmc2,2hdmc3}. The full combination of experimental limits from the LHC 13 TeV run analyses is also applied using \texttt{HiggsBounds 5.10.2} \cite{hb1,hb2,hb3,hb4,hb5} and \texttt{HiggsSignals 2.6.2} \cite{hs1,hs2,hs3} to make sure that the heavy neutral Higgs boson masses and the parameters used for the event analysis are allowed. Constraints from the SM Higgs boson measurements are shown in all scenarios, based on the results reported in \cite{atlas:2020}. In addition to the experimental limits, the theoretical constraints of potential stability (positivity) \cite{pos1,pos2,pos3,pos4,pos5}, unitarity and perturbativity \cite{uni1,uni2,uni3} are also verified.
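A small numerical illustration of the alignment behavior encoded in Tab. \ref{couplings} (a standalone check, independent of the \texttt{2HDMC} chain): at $\sin(\beta-\alpha)=1$, i.e. $\alpha=\beta-\pi/2$, both candidate light-Higgs Yukawa factors reduce to unity for any $\tan\beta$, so all $h$-fermion couplings become SM-like in every type.

```python
import math

# At the alignment limit sin(beta - alpha) = 1 (alpha = beta - pi/2), the two
# possible light-Higgs Yukawa modifiers of Tab. (couplings),
#   cos(alpha)/sin(beta)  and  -sin(alpha)/cos(beta),
# both reduce to 1.
def xi_h_factors(tan_beta):
    beta = math.atan(tan_beta)
    alpha = beta - math.pi / 2          # alignment: sin(beta - alpha) = 1
    return (math.cos(alpha) / math.sin(beta),
            -math.sin(alpha) / math.cos(beta))

for tb in (1.0, 5.0, 40.0):
    xu, xd = xi_h_factors(tb)
    assert abs(xu - 1.0) < 1e-12 and abs(xd - 1.0) < 1e-12
```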
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline {} & Type 1 & Type 2 & Type 3 & Type 4 \\ \hline $\xi^u_h $ & $\cos\alpha/\sin\beta$ & $\cos\alpha/\sin\beta$ & $\cos\alpha/\sin\beta$ & $\cos\alpha/\sin\beta$ \\ \hline $\xi^d_h $ & $\cos\alpha/\sin\beta$ & $-\sin\alpha/\cos\beta$ & $-\sin\alpha/\cos\beta$ & $\cos\alpha/\sin\beta$ \\ \hline $\xi^\ell_h $ & $\cos\alpha/\sin\beta$ & $-\sin\alpha/\cos\beta$ & $\cos\alpha/\sin\beta$ & $-\sin\alpha/\cos\beta$ \\ \hline $\xi^u_H $ & $\sin\alpha/\sin\beta$ & $\sin\alpha/\sin\beta$ & $\sin\alpha/\sin\beta$ &$\sin\alpha/\sin\beta$ \\ \hline $\xi^d_H $ & $\sin\alpha/\sin\beta$ & $\cos\alpha/\cos\beta$ & $\cos\alpha/\cos\beta$ &$\sin\alpha/\sin\beta$ \\ \hline $\xi^\ell_H $ & $\sin\alpha/\sin\beta$ & $\cos\alpha/\cos\beta$ & $\sin\alpha/\sin\beta$ &$\cos\alpha/\cos\beta$ \\ \hline $\xi^u_A $ & $\cot\beta$ & $\cot\beta$ & $\cot\beta$ &$\cot\beta$ \\ \hline $\xi^d_A $ & $- \cot\beta$ & $\tan\beta$ & $\tan\beta$ &$-\cot\beta$ \\ \hline $\xi^\ell_A $ & $- \cot\beta$ & $\tan\beta$ & $-\cot\beta$ &$\tan\beta$ \\ \hline \end{tabular} \end{center} \caption{Yukawa couplings of the up-type ($u$) and down-type ($d$) quarks and leptons ($\ell$) to the neutral Higgs bosons $h/H/A$ in the four types of the 2HDM. Types 3 and 4 are also known as the flipped and lepton-specific models, respectively \cite{2hdm_theorypheno}.} \label{couplings} \end{table} \section{The LHC search channels for 2HDM neutral Higgs bosons} Before proceeding to our detailed analysis, we discuss the LHC (ATLAS and CMS) search channels for the 2HDM neutral Higgs bosons presented in \cite{atlas3,cms2,cms1,atlas1,atlas2}. These analyses are based on single CP-odd Higgs boson production followed by the decay $A \to ZH$ or $A \to Zh_{\mathrm{SM}}$. The CMS collaboration also considers the $m_H>m_A$ possibility through the $H \to ZA$ decay \cite{cms1}.
In our study, we assume $m_A>m_H$; the analysis of the opposite case can be performed in a similar way. The LHC searches for the neutral Higgs bosons fall into two categories of Higgs boson conversion, i.e., $A\to Zh_{SM}$ and $A \to ZH$, where the final state contains the SM-like Higgs boson or the heavy CP-even Higgs boson. The decay chains for the two processes are slightly different. The $A\to Zh$ case involves $\cos(\beta-\alpha)$ as the coupling factor and vanishes at the alignment limit, defined as $\cos(\beta-\alpha)=0$. While the $A\to Zh$ coupling is type independent, $h_{SM} \to b\bar{b}$ depends on the type of the 2HDM, which results in different patterns for the four types in the two-dimensional $\tan\beta$ vs $\cos(\beta-\alpha)$ space. Figure \ref{AZh} shows $\mathrm{BR}(A\to Zh)\times \mathrm{BR}(h\to b\bar{b})$ for the four types as a function of $\tan\beta$ and $\cos(\beta-\alpha)$, assuming $m_{H/A}=300$ GeV. Except for the lepton-specific type, which essentially favors $h\to \ell\ell$, any suppression of the product of branching ratios in $A\to Zh\to Zbb$ at $\cos(\beta-\alpha)\neq 0$ is due to the suppression of $h\to bb$; otherwise a pattern symmetric around the vertical line $\cos(\beta-\alpha)=0$ would be obtained. The area inside the red line is allowed and consistent with the LHC light Higgs boson observation. The ATLAS and CMS analyses based on $A\to Zh$ followed by $h\to b\bar{b}$ \cite{atlas3,cms2} left the low-BR (blue) regions, in particular the central vertical line $\cos(\beta-\alpha)=0$, untouched and excluded the rest. Although Fig. \ref{AZh} is for $m_{H/A}=300$ GeV, the alignment limit is always out of reach as long as the signal is $pp \to A\to Zh\to Zbb$, because it vanishes at $\cos(\beta-\alpha)=0$ independently of the Higgs boson mass.
\begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{AZhhbb300} \caption{The product of the branching ratios of the two decay channels adopted by the LHC analyses \cite{atlas3,cms2}. Here $m_{A/H}~=~300$ GeV. The area outside the red line is excluded by the LHC SM Higgs boson measurements. \label{AZh}} \end{figure} The heavy Higgs boson conversion through $A\to ZH$, analyzed in \cite{cms1,atlas1,atlas2}, is suitable at the alignment limit, as the coupling is $\sin(\beta-\alpha)$. Therefore, for a given $\tan\beta$, the two processes $A\to ZH$ and $A\to Zh$ are complementary along the $\cos(\beta-\alpha)$ axis. In \cite{cms1}, results are presented only for type 2 and for a specific choice of the Higgs boson masses, i.e., $m_H=379$ GeV, $m_A=172$ GeV. The alignment limit is not reachable by the analysis presented in \cite{atlas1} due to the choice of $H \to WW$, which vanishes at $\cos(\beta-\alpha)=0$. The analysis reported in \cite{atlas2} is performed at the alignment limit but is limited to $m_A-m_H\geq m_Z$; therefore, scenarios with degenerate Higgs boson masses ($m_H\simeq m_A$) are out of reach in \cite{atlas2} due to the choice of the signal. One of the reasons for using $A\to Zh$ at the LHC is the smaller number of free parameters in the signal, thanks to the fixed value of the $h_{\mathrm{SM}}$ mass. Moreover, $A\to Zh$ can be tested for $A$ masses as low as $m_h+m_Z$, while such masses are not allowed in $A\to ZH$ due to the $m_H>m_h$ assumption. However, as far as the alignment limit and its neighborhood are concerned, $A\to ZH$ provides a higher sensitivity near $\cos(\beta-\alpha)=0$. There are also differences in the CP-even Higgs boson decays to fermions as well as to gauge bosons. Since $h\to bb$ has been analyzed by the LHC collaborations, we will discuss $H\to bb$ in the next sections.
Concerning the heavy Higgs boson decay to gauge bosons, it was mentioned that the coupling, normalized to the corresponding SM value, is $\cos(\beta-\alpha)$ (Eq. \ref{hgauge}). Therefore, combining $A\to ZH$ with $H\to VV$ ($V ~=~ W~ \mathrm{or}~ Z$) may not be a reasonable choice, as the two coupling factors work against each other: the higher the production cross section, the lower the $H\to VV$ decay rate. The tension is milder for $A\to Zh$ followed by $h\to VV$, since the $h\to VV$ coupling, $\sin(\beta-\alpha)$ (Eq. \ref{hgauge}), remains sizable where the $A\to Zh$ coupling, $\cos(\beta-\alpha)$, is non-zero; this chain is, however, most suitable far from the alignment area. On the other hand, the higher final-state particle multiplicity due to the gauge boson decays means no superiority over fermionic final states. Therefore, the conclusion for both processes ($A\to ZH$ and $A\to Zh$) is to preferably use $H/h\to f\bar{f}$. Here, we denote the final state as $f\bar{f}$ to remember that in the lepton-specific type the $\tau\tau$ final state has to be used, while in the other types $b\bar{b}$ is the most suitable final state of the light or heavy CP-even Higgs boson. Another possibility, currently missing from the list of LHC analyses, is to use single neutral gauge boson production leading to Higgs boson pair production, i.e., $pp~ \mathrm{or}~e^+e^- \to Z^* \to AH$; here we also include lepton collisions at future colliders. The final state can be set by $A \to bb,~H\to bb$ or $A\to bb, ~H\to VV$, with $b$ replaced by $\tau$ in the lepton-specific type. The $A\to VV$ channel cannot be considered due to the vanishing tree-level CP-odd Higgs-gauge coupling. Figures \ref{AHffff} and \ref{AHWWff} show the Feynman diagrams for Higgs boson pair production in the two final states discussed above. These example diagrams are drawn for lepton collisions and $V~=~W$; at the LHC, the same signal is initiated through quark-antiquark annihilation.
Since the signal is proposed to be analyzed in the four $b$- or $\tau$-jet final state, reasonable control of the QCD background in the LHC event environment is crucial. However, we have shown in a number of analyses that the signal of this process can well be observed at future lepton colliders (the most recent results are found in \cite{prd2021}). In the following sections, taking Higgs boson pair production as the golden channel for extra Higgs boson studies, we discuss the branching ratios of the CP-even and CP-odd Higgs boson decays and reach the final conclusion by analyzing all main combinations of decays. \section{CP-even heavy Higgs boson decay} The Higgs boson pair production has to be analyzed in a specific final state. The CP-even Higgs boson decays dominantly to fermions if its mass is below the threshold for the lightest gauge boson pair production, i.e., if $m_H<2m_W$. With $m_H>2m_W$, decays to $WW$ and, if $m_H>2m_Z$, to $ZZ$ are kinematically allowed. One should, however, also be aware of the Higgs boson conversion $H\to hh$, which turns on if $m_H>2m_h$. We therefore discuss the following three mass regions. \subsection{$m_H<2m_W$} In this region the Higgs boson decays to fermions, i.e., $b$ quarks in types 1 to 3 and $\tau$ leptons in type 4 (lepton-specific), unless the decay to gauge bosons is enhanced by departing from the alignment limit. As seen in Fig. \ref{Hff150}, in type 1, $H\to bb$ is dominant near the alignment limit where $H\to VV$ is suppressed. Increasing $|\cos(\beta-\alpha)|$ enhances $H\to VV$ in off-shell mode, reducing BR($H\to bb$) down to 0.2 or lower; note, however, that the lower the Higgs boson mass, the stronger the suppression of $H\to VV$. The two similar types 2 and 3 allow $H\to bb$ to be dominant in a wider area of the parameter space due to the $\tan\beta$ factor in the $H\to bb$ coupling (Eq. \ref{trig}). Type 4 behaves similarly to type 1, with $b$ replaced by $\tau$.
However, contrary to type 1, in which $H\to bb$ is almost $\tan\beta$ independent, in type 4 $H\to \tau\tau$ is enhanced at high $\tan\beta$. The parameter space of type 2 is very limited, as it constitutes the Higgs sector of the MSSM and is constrained by those searches. The Higgs boson masses of 150 and 200 GeV are fully excluded in this type, but at higher masses the parameter space opens up, as verified by \texttt{HiggsBounds/HiggsSignals}. In order to compare the fermionic and bosonic decay modes, we plot $H\to WW$ in the same parameter space, as shown in Fig. \ref{HWW150}. The two complementary plots in Figs. \ref{Hff150} and \ref{HWW150} show how the two decay modes $H\to ff$ and $H\to WW$ compete; both assume $m_H~=~150$ GeV. Here we verify that the region of parameter space shown in Figs. \ref{Hff150} and \ref{HWW150} is theoretically accessible within the requirements of unitarity, stability and perturbativity. To do so, for each point in the parameter space, we search for a range of $m_{12}^2$ values which satisfy the theoretical constraints. The results are type independent and are shown for the four chosen values $\tan\beta=5,~10,~15$ and 20 in Fig. \ref{m12150}. As can be seen, increasing $\tan\beta$ shrinks the available $m_{12}^2$ range for a given point defined by $\cos(\beta-\alpha)$ and $\tan\beta$. For the mass scenario adopted in this section, BR$(H\to ff)$ and BR$(H\to VV)$ are independent of $m_{12}^2$, and any value of $m_{12}^2$ can be picked from the range shown in Fig. \ref{m12150}. The plots in Fig. \ref{m12150} confirm that there is at least one such $m_{12}^2$ value for each point in the range of $\tan\beta$ and $\cos(\beta-\alpha)$ under study. We also verify that the allowed ranges of $m^2_{12}$ are consistent with the $h \to \gamma\gamma$ measurements at the LHC; BR($h \to \gamma\gamma$) depends only slightly on $m^2_{12}$.
For example, with $\cos(\beta-\alpha)=0$ and $\tan\beta=5$, the minimum and maximum theoretically allowed values of $m^2_{12}$ are 2338 and 4447 GeV$^2$ (shown in Fig. \ref{m12150}) for $m_{H/A}=150$ GeV. The corresponding values of BR($h \to \gamma\gamma$) are $2.41\times10^{-3}$ and $2.55\times10^{-3}$, both allowed by the LHC observations, as verified by \texttt{HiggsBounds/HiggsSignals}. \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.25\textwidth,width=0.35\textwidth]{AHffff_full} \caption{Higgs boson pair production in the four-fermion final state. \label{AHffff}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.25\textwidth,width=0.35\textwidth]{AHWWff_full} \caption{Higgs boson pair production in the $WWff$ channel. Due to the large particle multiplicity in the final state, this channel may not provide a better signal significance than the four-fermion final state. \label{AHWWff}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{Hff150} \caption{Branching ratio of the Higgs boson decay to fermions ($bb$ in types 1 to 3 and $\tau\tau$ in type 4). The Higgs boson mass is set to 150 GeV. The area outside the red line is excluded by the LHC SM Higgs boson measurements. In type 2, the whole region is excluded by direct searches for heavy neutral Higgs bosons. \label{Hff150}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{HWW150} \caption{Branching ratio of the Higgs boson decay to a $W$ boson pair assuming $m_H~=~150$ GeV. The area outside the red line is excluded. \label{HWW150}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{m12150} \caption{The range of $m_{12}^2$ satisfying the theoretical constraints as a function of $\cos(\beta-\alpha)$ for the four values of $\tan\beta=5,~10,~15,~20$.
The Higgs boson mass is set to $m_{H/A/H^{\pm}}~=~150$ GeV. These results are independent of the type of the model. \label{m12150}} \end{figure} \subsection{$2m_W<m_H<2m_h$} In this region, $H\to WW$ starts to occur in on-shell mode and, if $m_H>2m_Z$, $H\to ZZ$ is also present. The relevant domain of $H\to VV$ is limited to Higgs boson masses below the threshold for SM-like Higgs boson pair production, i.e., $2m_W \lesssim m_H \lesssim 2m_h$. As an illustration, we show BR($H\to ff$) and BR($H\to WW$) in Figs. \ref{Hff200} and \ref{HWW200}, respectively, with the Higgs boson mass set to $m_H~=~200$ GeV. The Higgs boson decay to a gauge boson pair is dominant when $|\cos(\beta-\alpha)|$ approaches unity, unless $H\to ff$ is enhanced at high $\tan\beta$ values in types 2 to 4. The inverted colors of the two plots in Figs. \ref{Hff200} and \ref{HWW200} show that there is no other relevant decay mode for Higgs boson masses in the range $2m_W \lesssim m_H \lesssim 2m_h$. The hashed regions in the top-left parts of Figs. \ref{Hff200} and \ref{HWW200} are excluded by the theoretical constraints. The approach is the same as discussed in the previous section: for a given $\tan\beta$ value, a range of $m_{12}^2$ is obtained under the theoretical constraints. Fig. \ref{m12200} shows the results for the four values of $\tan\beta$. In this case, at high $\tan\beta$, for some negative values of $\cos(\beta-\alpha)$ there is no $m_{12}^2$ value respecting the theoretical constraints. \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{Hff200} \caption{Branching ratio of the Higgs boson decay to a fermion pair assuming $m_H~=~200$ GeV. The hashed region on the top left is theoretically inaccessible and the area outside the red line is excluded by the LHC SM Higgs boson measurements. In type 2, the whole region is excluded by direct searches for heavy neutral Higgs bosons.
\label{Hff200}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{HWW200} \caption{Branching ratio of the Higgs boson decay to a $W$ boson pair assuming $m_H~=~200$ GeV. The hashed region on the top left is theoretically inaccessible and the area outside the red line is experimentally excluded. \label{HWW200}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{m12200} \caption{The range of $m_{12}^2$ satisfying the theoretical constraints as a function of $\cos(\beta-\alpha)$ for the four values of $\tan\beta=5,~10,~15,~20$. The Higgs boson mass is set to $m_{H/A/H^{\pm}}~=~200$ GeV. These results are independent of the type of the model. \label{m12200}} \end{figure} \subsection{$m_H>2m_h$} If $m_H>2m_h$, i.e., for a Higgs boson mass above 250 GeV, the decay $H\to hh$ opens up, with a type-independent coupling that depends on $m_{12}^2$. The presence of this decay mode suppresses BR($H\to VV$). One again needs to search for a range of $m_{12}^2$ that satisfies the theoretical constraints. Moreover, the $m_{12}^2$ dependence of the $H\to hh$ coupling makes the branching ratios of all other decay modes, especially $H\to VV$, depend on $m_{12}^2$ as well, since the sum of all BRs has to be unity. The only safe area in this mass region is the vertical line $\cos(\beta-\alpha)=0$ and its neighborhood, where both $H\to VV$ and $H\to hh$ are suppressed and there is always a range of $m_{12}^2$ respecting the theoretical constraints. Furthermore, due to the smallness of the above decay modes in the central region, BR$(H\to ff)$ is effectively independent of $m_{12}^2$ there. We show BR($H\to ff$) for $m_H=260$ GeV in Fig. \ref{Hff260}, which features dominance over the other decay modes as well as $m_{12}^2$ independence at the alignment limit. The $m_{12}^2$ value has been set to 1000 GeV$^2$ in Fig. \ref{Hff260}.
However, $m_{12}^2$-related concerns arise when departing from the alignment limit, where BR($H\to ff$) is essentially small. BR($H\to hh$) is shown in Fig. \ref{Hhh260}, again with the fixed value $m_{12}^2~=~$ 1000 GeV$^2$, and shows that the $|\cos(\beta-\alpha)|>0$ area is dominated by this decay mode, except at very low $\tan\beta$ values. Notably, the theoretically excluded area at this mass is larger than at the lower mass of 200 GeV: the excluded area at negative $\cos(\beta-\alpha)$ grows and also extends to positive $\cos(\beta-\alpha)$ values at high $\tan\beta$. The dominant decay mode at $|\cos(\beta-\alpha)|>0.3$ is $H\to hh$, with BR$(H\to hh)>0.5$. In the central region $\cos(\beta-\alpha)\simeq 0$, $H\to bb$ is still dominant, as both $H\to hh$ and $H\to VV$ are suppressed when approaching this area. One point about Fig. \ref{Hhh260} requires caution: we plotted BR($H\to hh$) for a fixed value of $m_{12}^2$ to show the relevant domain of this decay mode in the parameter space, but the theoretical constraints rule out some parts of the plot, because there the chosen $m_{12}^2$ may not lie in the allowed range. Therefore, for each point in the $\tan\beta$ vs $\cos(\beta-\alpha)$ parameter space, a value of $m_{12}^2$ should be picked from the specified range in order to respect the theoretical constraints. The complexity is thus due to the $m_{12}^2$ dependence of BR($H\to hh$), which yields a range of theoretically allowed branching ratios (not a single value) for each point in Fig. \ref{Hhh260}. To conclude, the study of $H\to VV$ at high masses faces difficulties due to theoretical considerations as well as the presence of other decay modes; the alignment limit, however, can still be probed through $H\to ff$ at high masses. For completeness, we note that if the Higgs boson mass is above the kinematic threshold for decay to a top quark pair, $H/A \to t\bar{t}$ also opens up.
However, the Higgs-top quark coupling is proportional to $\cot\beta$ in all types of the model, so this channel is best studied in type 1, where all Higgs-fermion couplings are proportional to $\cot\beta$ and this factor cancels out in the branching ratios of the Higgs boson decays. In that case, $H/A \to t\bar{t}$ is dominant when $m_{H/A}>2m_{t}$. In the same Higgs boson mass region, $H/A \to b\bar{b}$ is dominant at high $\tan\beta$ values in types 2 and 3, while in type 4, $H/A \to \tau\tau$ is the most promising decay mode. \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{Hff260} \caption{Branching ratio of the Higgs boson decay to a fermion pair assuming $m_H~=~260$ GeV. The hashed regions on the top left and right are theoretically inaccessible and the area outside the red line is experimentally excluded. \label{Hff260}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.24\textwidth,width=0.35\textwidth]{Hhh260} \caption{Branching ratio of the Higgs boson decay to a SM-like Higgs boson pair assuming $m_H~=~260$ GeV. This decay is type independent; however, the allowed regions differ for each type of the model and are shown by the red lines in Fig. \ref{Hff260}. \label{Hhh260}} \end{figure} \section{CP-odd heavy Higgs boson decay} The situation with the CP-odd Higgs boson decays is simpler, as $A\to VV$ vanishes and $A\to ff$ depends only on $\beta$ through $\tan\beta$ or $\cot\beta$. Therefore, BR($A\to ff$) can be plotted as a function of $\tan\beta$ alone, as shown in Figs. \ref{Aff150} and \ref{Aff200} for the two masses $m_A~=~150$ and 200 GeV. The main difference between the two masses is observed in type 1, where $A\to bb$ is more strongly suppressed by $A\to gg$ at $m_A~=~200$ GeV. The other types essentially prefer $A\to ff$ at $\tan\beta>5$.
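Because the $\xi^f_A$ factors of Tab. \ref{couplings} carry the entire angular dependence, the leading-order fermionic partial widths scale as $\Gamma(A\to f\bar f)\propto N_c\,(\xi^f_A\,m_f)^2$. A small sketch of this scaling (phase-space and QCD corrections dropped; the quark and lepton masses below are illustrative values in GeV, not the running masses used by \texttt{2HDMC}):

```python
# Leading-order scaling of CP-odd Higgs fermionic partial widths:
#   Gamma(A -> f fbar)  ~  N_c * (xi_A^f * m_f)^2
# Illustrative masses (GeV); phase space and QCD corrections are dropped.
M_B, M_TAU = 4.18, 1.777

def rel_width(m_f, xi, n_c=1):
    return n_c * (xi * m_f) ** 2

def bb_over_tautau_type2(tan_beta):
    # Type 2: xi_A^d = xi_A^ell = tan(beta), so tan(beta) cancels in the ratio
    return rel_width(M_B, tan_beta, n_c=3) / rel_width(M_TAU, tan_beta)

# The ratio is tan(beta) independent, as stated in the text:
assert abs(bb_over_tautau_type2(5.0) - bb_over_tautau_type2(50.0)) < 1e-9
```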
\begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.24\textwidth,width=0.35\textwidth]{BRAff150} \caption{Branching ratio of the CP-odd Higgs boson decay to $bb$ in types 1 to 3 and $\tau\tau$ in type 4. The Higgs boson mass is set to 150 GeV. \label{Aff150}} \end{figure} \begin{figure}[h] \hspace*{-0.1in} \includegraphics[height=0.24\textwidth,width=0.35\textwidth]{BRAff200} \caption{Branching ratio of the CP-odd Higgs boson decay to $bb$ in types 1 to 3 and $\tau\tau$ in type 4. The Higgs boson mass is set to 200 GeV. \label{Aff200}} \end{figure} \section{Cross section of the Higgs boson pair production} The only missing element for analyzing the Higgs boson pair production in the final states shown in Figs. \ref{AHffff} and \ref{AHWWff} is the total cross section of $HA$ production, which should then be multiplied by the branching ratios of the Higgs boson decays. The cross section depends on $\sin(\beta-\alpha)$ through the $ZHA$ vertex and, for a fixed value of $\sin(\beta-\alpha)$, is independent of $\tan\beta$. The cross section for the first mass scenario ($m_H=m_A=150$ GeV) can be calculated at the FCC-ee center-of-mass energy of 365 GeV \cite{FCCEnergy} as well as at the ILC \cite{ILCEnergy,ILCEnergy2} and CLIC stage 1 \cite{cliccdr} operating at a center-of-mass energy of 500 GeV. The second mass scenario, above the vector boson pair production threshold, can be realized in the same operation scenarios of the ILC and CLIC. As previously mentioned, the signal cross section prefers the central alignment region due to the $\sin(\beta-\alpha)$ factor, thus favoring $H\to ff$ over $H\to VV$, because at the alignment limit $H\to ff$ is dominant in all mass scenarios discussed above. The products of cross sections and branching ratios of the CP-even and CP-odd Higgs boson decays are presented in Figs. \ref{sigma1} and \ref{sigma2} for the first mass scenario and Figs.
\ref{sigma3} and \ref{sigma4} for the second mass scenario at the CLIC/ILC center-of-mass energy $\sqrt{s}=500$ GeV, for the two final states $H/A\to ff$ and $H\to WW,~A\to ff$. The color palette clearly shows that the relevant final state for the central region is $H/A\to ff$, while regions far from the alignment limit can be probed by $H\to WW,~A\to ff$, with its own difficulties. \section{Event analysis at a 500 GeV lepton collider} We proceed to perform an event analysis at a lepton collider operating at $\sqrt{s}~=~500$ GeV. We do not consider the FCC-ee center-of-mass energy due to the missing beam spectrum file for the $t\bar{t}$ operation scenario at $\sqrt{s}~=~365$ GeV.\\ The analysis is limited to the four-jet final state, where the jets are $b$-jets in types 1 to 3 and $\tau$-jets in type 4. The two scenarios $m_{H}~=~m_{A}~=~150$ GeV and $m_{H}~=~m_{A}~=~200$ GeV are considered, with an event selection algorithm similar to the one presented in our previous work \cite{prd2021}. The analysis is performed for a single point in the 2HDM parameter space defined by $\sin(\beta-\alpha)=1$ and $\tan\beta=10$. These parameters are used for the signal cross section and decay rate calculations, but since the event kinematics is independent of them, the results can easily be scaled to other points in the parameter space. The event generation is performed by \texttt{WHIZARD 3.0.0} \cite{whizard1,whizard2}, and the beam spectrum of the 500 GeV ILC is used to account for ISR and beamstrahlung. The FSR and multi-particle showering are performed by \texttt{PYTHIA 8.3.03} \cite{pythia}, followed by the detector simulation with \texttt{DELPHES 3.4.2} \cite{DELPHES} using the \texttt{ILCGen} detector card. The hadronic background from photon interactions is taken into account by adding a jet momentum smearing of $0.3\%$ and $1.5\%$ for $|\eta|<0.76$ and $|\eta| \ge 0.76$, respectively, following the approach proposed by the CLIC collaboration \cite{overlay}.
The pseudorapidity is defined as $\eta=-\ln{\tan({\theta/2})}$, where $\theta$ is the polar angle with respect to the beam axis. The jet reconstruction is performed by \texttt{FASTJET 3.3.4} \cite{fastjet1,fastjet2} with the anti-$k_{t}$ algorithm \cite{antikt} and a jet cone size $\Delta{R}=\sqrt{(\Delta{\eta})^{2}+(\Delta{\phi})^{2}}=0.5$. The jet tagging algorithms in \texttt{DELPHES} are based on MC truth matching, with efficiencies depending on the jet energy and pseudorapidity for both the $\tau$- and $b$-tagging scenarios. For $b$-tagging, we use an average efficiency of 50$\%$, which was shown to work better against the $t\bar{t}$ background in \cite{prd2021}. The event selection starts from choosing events with exactly four $b/\tau$-jets with $p_{T}>10$ GeV and $|\eta|<2$. We also perform a kinematic correction of the jet four-momenta so that the four linear equations of momentum and energy conservation are satisfied. In order for the set of four equations, with the correction factors as unknowns, to have a solution, we assign a single correction factor to every jet and apply it to all of its four-momentum components; the correction therefore does not change the jet direction and only scales the four-momentum vector. After the kinematic correction, the jets are sorted in energy, and the invariant mass of the second and third jets is calculated to obtain the signal distributions on top of the corresponding invariant mass distribution of the background events, as shown in Figs. \ref{BP1_4types} and \ref{BP2_4types}. The motivation for choosing the second and third jets is as follows. The two Higgs particles in a signal event produce their decay products back-to-back in their rest frames; however, each pair of jets flies at a specific angle which differs from event to event.
The two jets with the smaller angle with respect to the Higgs boson trajectory appear as the jets with the maximum and minimum energies in the laboratory frame due to the Lorentz boost they receive. The other two jets, from the other Higgs boson, take the second and third positions in the energy-sorted list of jets. Here, results are shown based on the second and third jets, although similar results are obtained using the first and fourth jets. Figures \ref{BP1_4types} and \ref{BP2_4types} show the distributions of the jet pair invariant masses in the two mass scenarios for the four types of the 2HDM, normalized to an integrated luminosity of $1~fb^{-1}$. Only the relevant background distributions are shown for each type. Other backgrounds such as $WW$ and $t\bar{t}$ are negligible once the $b/\tau$-tagging is applied. The $hZ$ background has also been shown assuming $m_h=125$ GeV. This background varies across the parameter space because BR($h \to bb/\tau\tau$) depends on $\tan\beta$ and $\cos(\beta-\alpha)$. Its contribution in the heavy Higgs boson mass window is very small. In type 4, the SM background (which is mainly $ZZ$) is highly suppressed due to the low branching ratio of the $Z$ boson decay to $\tau\tau$, which is $\sim3\%$. This value has to be squared, as there are two $Z$ bosons in such events. By contrast, in signal events BR$(H/A \to \tau\tau)\sim1$. In the other types of the model, the final state consists of $b$-jets with BR$(Z \to b\bar{b})\sim15\%$. There are also kinematic differences between the signal and the $ZZ$ background, related to the pseudorapidity distributions of the final-state particles, which were discussed in \cite{prd2021}. The lower efficiency and the fake rate of $\tau$-tagging, compared to the corresponding values for $b$-tagging, also result in a stronger suppression of the $\tau$-jets final state.
The signal significance is obtained using the formula suitable for low background statistics, i.e., $\sqrt{2\left[(S+B)\ln(1+S/B)-S\right]}$, where $S$ and $B$ are the numbers of signal and background events inside the mass window at a given integrated luminosity. The mass window position and width are determined by maximizing the signal significance. Results for the two Higgs boson mass scenarios are shown in Figures \ref{signif1} and \ref{signif2}. The regions shown in yellow and green are the 5$\sigma$ and 2$\sigma$ contours respectively, and integrated luminosities of 10 $fb^{-1}$ and 100 $fb^{-1}$ have been assumed for the first and second mass scenarios. These amounts of data correspond to one or a few weeks of operation of the collider. \pagebreak \section{Conclusions} The two Higgs doublet model was adopted as the theoretical framework for the study of extra neutral Higgs bosons. The analysis was performed for all four types of the CP-conserving 2HDM, and parameter space scans were presented including the relevant parameters which determine the production cross sections and the branching ratios of the Higgs boson decays. The results were divided into two domains of the Higgs boson mass, i.e., below and above the threshold of decay to a gauge boson pair. Including experimental limits from the latest LHC results, it was shown that the $\cos(\beta-\alpha)=0$ limit, known as the alignment limit, is not yet excluded by the LHC and can be probed well at lepton colliders if the $e^+e^- \to HA\to 4$ fermion final state is analyzed. This is due to the dominance of the cross section as well as of BR($H/A\to ff$) in this region. We also included an event analysis for the two mass scenarios and obtained the invariant mass distributions of signal and background events. Final results were presented as contours of 95$\%$ CL exclusion and 5$\sigma$ discovery for the four types.
It is concluded that unexplored regions of the 2HDM can be excluded at 95$\%$ CL with $\mathcal{L}=$ 10 and 100 $fb^{-1}$ for the two mass scenarios respectively, corresponding to roughly a week of collider operation. \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.24\textwidth,width=0.35\textwidth]{sigma} \caption{The signal cross section as a function of $\sin(\beta-\alpha)$ for the two center of mass energies $\sqrt{s}~=~365$ GeV (FCC-ee) and 500 GeV (ILC or CLIC). The two scenarios of $m_{H}=m_{A}=150$ GeV and 200 GeV are shown. The signal cross section has a quadratic dependence on $\sin(\beta-\alpha)$ and reaches its maximum at $\sin(\beta-\alpha)=1$ for each mass scenario. \label{sigma}} \end{figure} \begin{figure}[t] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{SHffAff150150500} \caption{The signal cross section in the four fermion final state for the mass scenario $m_H=m_A=150$ GeV at $\sqrt{s}=500$ GeV. \label{sigma1}} \end{figure} \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{SHWWAff150150500} \caption{The signal cross section in the $WWff$ final state for the mass scenario $m_H=m_A=150$ GeV at $\sqrt{s}=500$ GeV. \label{sigma2}} \end{figure} \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{SHffAff200200500} \caption{The signal cross section in the four fermion final state for the mass scenario $m_H=m_A=200$ GeV at $\sqrt{s}=500$ GeV.
\label{sigma3}} \end{figure} \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{SHWWAff200200500} \caption{The signal cross section in the $WWff$ final state for the mass scenario $m_H=m_A=200$ GeV at $\sqrt{s}=500$ GeV.\label{sigma4}} \end{figure} \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{BP1_4types} \caption{The invariant mass of the four jet final state in signal ($m_H=m_A=150$ GeV) and background events at $\sqrt{s}=500$ GeV normalized to 1 $fb^{-1}$. The model parameters are $\tan\beta=10$ and $\cos(\beta-\alpha)=0$.\label{BP1_4types}} \end{figure} \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{BP2_4types} \caption{The invariant mass of the four jet final state in signal ($m_H=m_A=200$ GeV) and background events at $\sqrt{s}=500$ GeV normalized to 1 $fb^{-1}$. The model parameters are $\tan\beta=10$ and $\cos(\beta-\alpha)=0$.\label{BP2_4types}} \end{figure} \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{Signif150150500} \caption{The 2$\sigma$ (95$\%$ C.L.) and 5$\sigma$ contours in green and yellow colors respectively for the mass scenario $m_H=m_A=150$ GeV ($\mathcal{L}= 10~ fb^{-1}$ and $\sqrt{s}=500$ GeV).\label{signif1}} \end{figure} \begin{figure}[] \hspace*{-0.1in} \includegraphics[height=0.4\textwidth,width=0.5\textwidth]{Signif200200500} \caption{The 2$\sigma$ (95$\%$ C.L.) and 5$\sigma$ contours in green and yellow colors respectively for the mass scenario $m_H=m_A=200$ GeV ($\mathcal{L}= 100~ fb^{-1}$ and $\sqrt{s}=500$ GeV).\label{signif2}} \end{figure}
\section{INTRODUCTION} The first stars, so-called Population~III (Pop~III) stars, form in the absence of heavy elements in the early Universe. Due to the lack of metal cooling, they are expected to be drastically different from the stars found in our vicinity at the present day \citep{BrommReview, GloverReview, GreifReview}. Initially, Pop~III stars were thought to be very massive \citep[e.g.,][]{Bromm99, Omukai01}, but later it was found that their protostellar disks may fragment, leading to the formation of clusters of low-mass metal-free stars \citep{Greif11b, Clark11}. Whereas it is clear that Pop~III stars may form over a wide range of masses, simulations have so far been unable to place tight constraints on the metal-free initial mass function (IMF). The results depend significantly on the physics employed, the choice of numerical method, and the resolution \citep[see e.g.,][]{Hosokawa16, Stacy16, Susa19}. There are so far no direct detections of metal-free stars. Pop~III stars are expected to form in high-redshift, relatively low-mass mini- and atomic-cooling haloes. Therefore, ``Pop~III galaxies'' are most likely not bright enough to be detected today \citep{Xu16, Hartwig16b, Visbal17}. In the absence of direct observations, several indirect methods allow us to obtain observational constraints on the IMF of Pop~III stars. Direct detections of supernovae (SNe) \citep{Hummel12, Hartwig18b, Rydberg20} or gravitational waves \citep{Kinugawa14, Kinugawa16, Hartwig16a} from the first stars are challenging, but may provide constraints on the high-mass end of the Pop~III IMF in the coming decade. The 21\,cm absorption feature, as reported by the EDGES experiment \citep{EDGES18}, can constrain the timing of the first star formation and the star formation efficiency, but it is not very sensitive to the assumed IMF \citep{Schauer19}. There are two remaining methods to constrain the pristine IMF that are feasible at present.
Both are related to the observations of ancient metal-poor stars in the Milky Way and its satellites. The first one is constraining the low-mass end of the IMF with the current non-detection of metal-free stars \citep{Salvadori07, Hartwig15b, Ishiyama16, Magg18, Magg19}. The second method, which is our focus here, is comparing the abundance patterns observed in metal-poor stars to simulated yields of Pop~III SNe. It was found that the most metal-poor stars, often called extremely metal-poor (EMP) stars and defined by an iron abundance\footnote{For elemental abundances, we use the notation \\$\mbox{[X/H]} = \log_{10}(N_\mathrm{X}/N_\mathrm{H})-\log_{10} (N_{\mathrm{X},\sun} /N_{\mathrm{H},\sun})$ where $N_{\mathrm{X}}$ and $N_{\mathrm{H}}$ are the fractional abundances of any element X and hydrogen, and $N_{\mathrm{X},\sun}$ and $N_{\mathrm{H},\sun}$ are the corresponding solar abundances.} of less than $\mbox{[Fe/H]} =-3$, are surprisingly rich in carbon \citep{Beers2005, Frebel15}. A particularly interesting subgroup is the carbon-enhanced extremely metal-poor (CEMP) stars, and among these the CEMP-no stars: those with iron abundances below $\mbox{[Fe/H]}=-3$, an excess of carbon relative to iron of more than $\mbox{[C/Fe]}=1$, and no enhancement in neutron-capture elements, i.e., $\mbox{[Ba/Fe]}<0$\ \citep[e.g.,][]{Frebel05, Keller14, Aguado18, Nordlander19}. The origin of the elemental abundance patterns in these stars is one of the key questions of early chemical enrichment. \citet{Umeda2003} proposed that the abundance pattern of CEMP-no stars is the fingerprint of so-called faint SNe.
These are Pop~III SNe with relatively large mixing-and-fallback in their core-collapse explosions\footnote{The explosion energy and progenitor star mass are not necessarily larger than those for Pop~II SNe \citep{kob14}.}: they eject much of their outer layers, containing carbon and other light elements, whereas most of the inner shells, containing in particular iron, fall back onto the compact remnant. Thus, when the first metal-enriched stars form from gas enriched by one of these SNe, they form with a very small iron abundance but a much higher carbon abundance. Notably, these SNe do not produce particularly large absolute amounts of carbon compared to more conventional core-collapse SNe; rather, they yield high [C/Fe] ratios because they produce unusually small amounts of iron. Subsequently, it has become common practice to use SN models to infer the properties of the primordial progenitors of the most metal-poor observed stars, both for individual stars \citep[e.g.,][]{HegerWoosley2010, Hansen11, Nomoto13, Ishigaki14, Bessel15, Placco16} and for large samples \citep{Cayrel04, Fraser17, Ishigaki18}. Additionally, constraints on the primordial IMF can be inferred from bulk properties of metal-poor stars with semi-analytical models \citep{deBennassuti17, Hartwig18b, Tarumi20b}. For these purposes, libraries of SN yields have been computed \citep{HegerWoosley2010, Nomoto13, Ishigaki18}. These yields typically depend on the stellar mass of the exploding star, the explosion energy, and one or a few parameters that quantify the mixing-and-fallback process, which cannot be simulated self-consistently in the one-dimensional SN simulations \citep[e.g.,][]{Chen17, Chan20}. The dilution required to relate the resultant Fe mass to the observed [Fe/H] has been discussed in \citet{tom07} and \citet{kob11}.
In order to compare modelled and observed abundance patterns, one further step is required: the SN yields need to be physically diluted with metal-free gas to match the absolute metallicity of the observed star. Usually this dilution is treated as a free parameter and chosen to optimize the quality of the fit. Freely adjusting the dilution factor essentially makes the fit independent of the absolute abundances and only considers the ratios of the abundances to each other. Observed and modelled abundance patterns are then compared, and the well-fitting models are interpreted as likely progenitors. For example, the \textsc{starfit}\footnote{\url{http://starfit.org}} \citep{HegerWoosley2010, Fraser17} pipeline can be used for such an analysis. In this study, we argue that this approach has to be amended because the amount of ambient gas into which the metals from a Pop~III supernova are mixed cannot be assumed to be arbitrarily small. We derive a simple analytical model for the lower limit of the mass a SN remnant has to mix with before it can recollapse. We find that this limit is consistent with the results from 3D hydrodynamical simulations. In many cases, there are large differences between the halo-scale mixing found in hydrodynamical simulations and the mixing implicitly assumed by fitting abundance ratios with arbitrary dilution. We show how the dilution limit can be applied in abundance fitting methods. Finally, we investigate examples of the impact this dilution limit has on the conclusions drawn from fitting observed abundances. \section{The minimum mixing mass} \subsection{Analytic estimate} As outlined before, abundance fitting usually employs the observed ratios of abundances of certain metals and compares those to the ratios found in theoretical SN models. Of particular importance is, e.g., the $[\mathrm{C}/\mathrm{Fe}]$ ratio.
This method, however, typically neglects the absolute abundances (i.e., [Fe/H] or [C/H]) and treats them as an arbitrary normalization factor. Conceptually, this normalization can be achieved by diluting the SN yields with the correct amount of metal-free gas. As published work usually fits only single SNe to observed abundance patterns, we only consider single, isolated SNe in this work. We consider SN explosions as well as their subsequent expansion into the ambient medium and the corresponding mixing processes in spherical symmetry. Simulations carried out in two \citep{Tominaga09} and three \citep{Chan20} dimensions, however, show that Pop~III SNe can be strongly aspherical. In this context, we note that even when considering anisotropic SNe, the observed abundances in most published studies are compared to angle-integrated yields. This means that the problem considered is effectively spherically symmetric, as the angular average implies that different elements ejected in different directions become well mixed before the second generation stars form. An exception may arise if the abundances are distributed more spherically than the energy input, as seen in some of the models in \citet{Tominaga09}. A critical analysis of the validity of this approximation is one of the primary motivations for the study presented here. We argue that properly accounting for the asymmetries expected in Pop~III SNe requires both detailed three-dimensional explosion models and high-resolution simulations of the expansion of the resulting anisotropic shock wave into an inhomogeneous ambient medium that are able to adequately follow the chemical mixing process. Since there is no analytic model for such small-scale inhomogeneous mixing, however, we follow the bulk of the existing literature and approximate the SN as a spherical explosion inside a homogeneous ambient medium.
As SNe are very energetic events, a large amount of gas is required to confine the metals, and thus not all dilution masses are physically plausible. The lower limit for this mass is the mass enclosed in the final radius of the SN remnant. Analytical solutions for spherical SN blast waves can be derived under a variety of assumptions \citep[e.g.,][]{Ostriker88}, with the expansion of the remnant stalling at the end of the momentum-driven snowplough phase. At the end of this phase, the expansion velocity drops to the speed of sound in the ambient medium. As shocks cannot be subsonic, the shock wave transforms into a sound wave and dissipates. This occurs at the fade-away radius $R_\mathrm{fade}$, which is \begin{equation} R_\mathrm{fade} \approx 2.07\times 10^{20}\,\mathrm{cm}\ E_{51}^{0.32} n_0^{-0.37} \left(\frac{c_\mathrm{s}}{10\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^{-2/5}, \label{eq:R_fade} \end{equation} where $n_0$ is the nucleon number density of the ambient medium in units of cm$^{-3}$, $E_{51}$ is the explosion energy in units of $10^{51}\,\mathrm{erg}$ and $c_\mathrm{s}$ is the speed of sound of the ambient medium \citep[e.g.,][]{Draine}. We assume the ambient medium is ionized, i.e., that it has a speed of sound of $c_\mathrm{s}=18\,\mathrm{km}\,\mathrm{s}^{-1}$, appropriate for a metal-free H\textsc{ii} region \citep[see e.g.,][]{abel07}. In case the medium is actually neutral, the speed of sound would be lower and the stalling radius larger. Thus, this is a conservative assumption. In the homogeneous mixing case, the minimum mass with which the ejecta are mixed is the mass that is enclosed in the stalling radius, i.e., \begin{equation} M_\mathrm{dil, min} = \frac{4}{3}\pi n_0 \mu m_{\rm H} R_\mathrm{fade}^3 = 1.9\times 10^4\,{\ensuremath{\mathrm{M}_{\sun}}}\,E_{51}^{0.96}\,n_0^{-0.11}, \label{eq:M_dil} \end{equation} where $m_{\rm H}$ is the mass of a hydrogen nucleus and where we assumed a mean molecular weight of $\mu=1.22$.
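As a quick numerical cross-check, Eqs. \eqref{eq:R_fade} and \eqref{eq:M_dil} can be evaluated directly. The following minimal Python sketch (the function names are ours; the constants are standard values) reproduces the quoted coefficient:

```python
import math

M_SUN = 1.989e33   # solar mass [g]
M_H = 1.6726e-24   # mass of a hydrogen nucleus [g]
MU = 1.22          # mean molecular weight assumed in the text

def fade_radius(E51=1.0, n0=1.0, cs_kms=18.0):
    """Fade-away radius R_fade of the SN remnant in cm."""
    return 2.07e20 * E51 ** 0.32 * n0 ** (-0.37) * (cs_kms / 10.0) ** (-0.4)

def min_dilution_mass(E51=1.0, n0=1.0, cs_kms=18.0):
    """Minimum dilution mass in M_sun: ambient gas enclosed within R_fade."""
    r = fade_radius(E51, n0, cs_kms)
    return (4.0 / 3.0) * math.pi * n0 * MU * M_H * r ** 3 / M_SUN
```

For the fiducial values $E_{51}=1$, $n_0=1\,\mathrm{cm}^{-3}$ and $c_\mathrm{s}=18\,\mathrm{km}\,\mathrm{s}^{-1}$ this yields $\approx 1.9\times10^{4}\,{\ensuremath{\mathrm{M}_{\sun}}}$, in agreement with Eq. \eqref{eq:M_dil}.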
The fade-away radius used here was derived for the cooling rates of solar-metallicity gas. However, the smaller amount of metals in the case considered here would only decrease the cooling rates and therefore increase the total mixing mass. As we aim at computing a lower limit for the mixing mass, the reduced cooling can be neglected. By definition, the SN remnant expands faster than the speed of sound in the ionized medium. As we consider haloes below the atomic cooling limit, the escape velocity from the haloes is much smaller than this speed of sound. Therefore, SN remnants expand much faster than the escape velocity and the effect of gravity can be neglected. This result is very similar to the one obtained through numerical simulations by \citet{Thornton98}. While it has been widely used in the discussion of stellar feedback, it is often neglected when fitting abundance patterns of individual stars. For example, \citet{Tominaga14} note that the minimum dilution mass obtained by \citet{Thornton98} is not a binding limit, as metal mixing is highly inhomogeneous \citep{Ritter12}. We will later see that our derived limit holds even in cases of inhomogeneous mixing. We assume an ambient density of $n_0 = 1\,\mathrm{cm}^{-3}$, which should be the typical case for the ionized regions around massive Pop~III stars \citep{Whalen04}. We note that the density dependence of the minimum mixing mass (Eq. \ref{eq:M_dil}) is very weak, so the density would need to be higher by several orders of magnitude to affect our conclusions. If the density were this much higher than the assumed value, the free-fall time of the ambient gas would be smaller than the lifetime of the star, and thus the gas should form stars already before the SN explodes or while the remnant expands. Furthermore, simulations show that it is difficult to mix metals into gas that is already very dense when the SN explodes \citep{Ritter16, Chiaki18}.
Under the assumptions outlined above, the dilution mass is a lower limit for two main reasons: \begin{enumerate} \item We assume a homogeneous medium. If the medium is not homogeneous, the denser gas will be less enriched but will form stars first. This effect is discussed further below. \item We assume no further mixing. Realistically, further mixing with additional pristine gas should occur during recollapse, rather than the stalled SN remnant monolithically collapsing back on itself. This effect would further increase the dilution mass. \end{enumerate} We note that we assume all SNe are able to produce second generation stars. Very energetic explosions may actually disrupt their host haloes, which suppresses or delays second generation star formation \citep{Whalen08b}. This effect is difficult to quantify without hydrodynamical simulations in a cosmological context, and is therefore neglected here. \subsection{Consistency with simulations} To see whether sub-galactic-scale inhomogeneous mixing can lead to higher metallicities than predicted by the minimum dilution, we compare it to the dilution found in all suitable published simulations of inhomogeneous mixing and the formation of second generation stars of which we are aware. For comparison with our limit, we use an ambient density of $n_0 = 1\,\mathrm{cm}^{-3}$ in all cases but take the explosion energies used in the simulations to compute the minimum mixing mass. Simulations are included provided that they \begin{itemize} \item are three-dimensional hydrodynamical simulations of the expansion of Pop~III SN remnants into their ambient medium, \item are set up to and have sufficient resolution to model individual, isolated, Pop~III SNe, not combined populations, \item follow the enriched gas until it re-collapses, and \item provide the output needed for our comparison.
\end{itemize} This implies that we do not discuss the results from \citet{Greif07} and \citet{Chen15} because the re-collapse of the enriched gas is not modelled. Larger-scale simulations, such as the ones from \citet{Wise12}, \citet{FiBY1}, or \citet{Tarumi20a}, are not considered, because they do not follow individual isolated SNe. The simulations of \citet{Whalen08b} are not included here because they are one-dimensional. Nevertheless, we note that their metal-enriched gas masses are consistent with our upper limit in most cases. Only in one of their models is the enriched gas mass they find smaller than our prediction in Eq. \eqref{eq:M_dil}. In this case, the star completely fails to create an ionized region, and the ability to model an off-centre re-collapse would be crucial to make accurate predictions for the metallicity of the second generation stars. We begin with the dilution found in \citet{Ritter12, Ritter15, Ritter16}.\footnote{We only consider the 1\textsc{sn} model from \citet{Ritter15} as the 7\textsc{sn} model deals with enrichment by multiple SNe, which is not the topic of our analysis.} In all three simulations the SNe considered are core collapse (CC) SNe with $E_{51}=1$. They eject $M_\mathrm{met}=4\,{\ensuremath{\mathrm{M}_{\sun}}}$ of metals in \citet{Ritter12, Ritter15} and $M_\mathrm{met}=6\,{\ensuremath{\mathrm{M}_{\sun}}}$ in \citet{Ritter16}. Thus, according to Eq. \eqref{eq:M_dil}, the maximum final metallicity we should expect is \begin{equation} Z_\mathrm{max} = \frac{M_\mathrm{met}}{M_\mathrm{dil, min}} \approx 10^{-3.6} \approx 10^{-1.7}\,{\ensuremath{\mathrm{Z}_{\sun}}} \label{eq:ZMass} \end{equation} where ${\ensuremath{\mathrm{Z}_{\sun}}}=0.0142$ is the solar metallicity \citep{Asplund09}. While the mixing is highly inhomogeneous and a spread of several orders of magnitude in metallicity can be seen, the newly collapsing cores always show metallicities below this value.
All simulations also contain gas at higher metallicities than predicted by the minimum dilution. While from \citet{Ritter12} it is unclear in which phase this gas is contained, in \citet{Ritter15, Ritter16} only some of the very diffuse gas has metallicities above the dilution limit. \citet{Chiaki18} and \citet{Chiaki19} model the inhomogeneous mixing occurring after the SNe of 7 different stars with masses between 13\,{\ensuremath{\mathrm{M}_{\sun}}}\ and 200\,{\ensuremath{\mathrm{M}_{\sun}}}. Some of these SNe are simulated in several different haloes. The simulations cover a wide range of different environments in which SNe can explode. For massive stars, haloes are often completely photo-evaporated, whereas, for the smallest stars, the gas in the stellar birth-cloud remains dense throughout the lifetime of the star. The results show large variations in the mixing behaviour and in the metallicities of the second generation stars. \citet{Chiaki18} distinguish between three separate enrichment channels: \begin{enumerate} \item Internal enrichment: in this case, the SN expands efficiently and the metals mix well with the surrounding gas before the halo collapses back on itself. \item External enrichment: the metals escape from the halo in which the SN explodes and mix with the gas in a different halo that has not formed stars yet. This type of enrichment is also found in \citet{bsmith15}. \item Inefficient internal enrichment: dense structures remain in the halo. When the SN explodes, these structures are only enriched to very low metallicities and proceed to form stars with metallicities much lower than the average gas metallicity in the halo. \end{enumerate} None of these simulations, however, show the formation of second generation stars that violate our dilution limit. According to Eq.~\eqref{eq:M_dil}, the predicted maximum metallicity ranges between $10^{-2.6}\, {\ensuremath{\mathrm{Z}_{\sun}}} < Z_\mathrm{max} < 10^{-1.6}\,{\ensuremath{\mathrm{Z}_{\sun}}}$.
All second generation stars in their simulations have metallicities in the range $10^{-6.3}\,{\ensuremath{\mathrm{Z}_{\sun}}}<Z< 10^{-2.2}\,{\ensuremath{\mathrm{Z}_{\sun}}}$; none of them violates our derived limit. The simulated second generation star that is closest to our computed upper limit is enriched by a 25\,{\ensuremath{\mathrm{M}_{\sun}}}\ CCSN that explodes in their halo ``MH1'', which is their smallest halo with a mass $M_\mathrm{vir}=3\times10^5\,{\ensuremath{\mathrm{M}_{\sun}}}$. The re-collapsing region has a metallicity of 40 per cent of our computed upper limit. In these simulations, there are several cases of stars with much lower metallicities than predicted by the minimum dilution model. These are the cases in which the surroundings of the SNe are the most dense and the mixing is the most inhomogeneous. The second generation stars form in clumps that already exist when the SNe explode, and only the outer layers of these clumps are enriched with metals. Thus, the enrichment proceeds in what \citet{Chiaki18} label the ``inefficient internal enrichment'' channel. \citet{Greif10} simulate the explosion of a single pair-instability SN (PISN) with $E_{51}=10$ and $100\,{\ensuremath{\mathrm{M}_{\sun}}}$ of metal ejecta. According to our model, the maximum metallicity in this extreme case should be below $Z=10^{-1.4}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. They find metallicities in the recollapsing galaxy that are around $Z=10^{-3}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. As \citet{Greif10} note, the average metallicities are initially much higher, but they decrease to this low value during the recollapse of the halo, which takes around 300\,Myr. The simulations by \citet{Jeon14} include several SNe exploding in three different haloes. The authors provide information on the metallicity of recollapsing regions in three cases: a 15, 25 and 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ star exploding in their ``halo1''.
They all explode as $E_{51}=1$ CCSNe and eject 5 per cent of their stellar mass as metals. According to our model, this should lead to metallicities of $Z<10^{-2.1}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. Their reported metallicities are all below $Z=10^{-3.5}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. \citet{bsmith15} highlight the external enrichment channel. Their SN is an $E_{51}=1$ CCSN which ejects $11.19\,{\ensuremath{\mathrm{M}_{\sun}}}$ of metals, leading us to predict a maximum metallicity of $Z_\mathrm{max} = 10^{-1.4}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. Only a very small fraction of the gas is found at such high metallicities, and none of it is in the re-collapsing region. The metal-enriched star-forming gas in this case has a metallicity of $Z=10^{-4.7}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. \begin{figure} \includegraphics[width=\linewidth]{img1.pdf} \caption{\label{fig:sim_comp} Comparison of the minimum dilution model to simulations of inhomogeneous metal mixing. We show the ratio of the effective dilution mass of the simulations and our estimate of the minimum dilution mass as a function of the stellar mass of the exploding star. The effective dilution mass is derived from the metallicity in the second generation stars or the likely sites of second generation star formation in the simulations. All simulations show a ratio above one, i.e., they are consistent with our predicted minimum.} \end{figure} We convert the metallicities found in the simulations back to an ``effective dilution mass'' with Eq. \eqref{eq:ZMass} and summarize the simulations in Fig. \ref{fig:sim_comp}. None of the simulations of inhomogeneous mixing show inconsistencies with the minimum dilution mass derived from the spherically symmetric case. In some of the simulations, there is gas above the derived upper limit for the metallicity, but it tends to be diffuse and hot. This can be understood intuitively: as thermal energy and metals are ejected together, more metal-rich gas tends to be hotter.
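The conversion of a simulated metallicity back into an effective dilution mass, i.e., the inversion of Eq. \eqref{eq:ZMass} used for Fig. \ref{fig:sim_comp}, can be sketched as follows (an illustrative snippet; the example input are the \citet{bsmith15} numbers quoted above):

```python
Z_SUN = 0.0142   # solar metallicity (Asplund et al. 2009)

def effective_dilution_mass(metal_mass_msun, z_over_zsun):
    """Gas mass (in M_sun) implied by diluting `metal_mass_msun` of SN
    metals down to a metallicity of `z_over_zsun` solar units."""
    return metal_mass_msun / (z_over_zsun * Z_SUN)

# External-enrichment example: 11.19 M_sun of metals ending up at
# Z = 10^-4.7 Z_sun in the star-forming gas
m_eff = effective_dilution_mass(11.19, 10.0 ** -4.7)   # a few 10^7 M_sun
```

Dividing this effective mass by the minimum dilution mass of Eq. \eqref{eq:M_dil} gives the ratios plotted in Fig. \ref{fig:sim_comp}.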
It is important to note that there is significant scatter in the simulation results: even for similar exploding stars, the effective dilution mass can vary by many orders of magnitude. The cases with the largest effective dilution masses are usually external or inefficient internal enrichment. We conclude that, to the best of our knowledge and the current state of modelling, our estimate provides a useful limit on the mixing and dilution of metals even in the presence of inhomogeneous mixing. \subsection{Bayesian fitting} Here we briefly discuss how the derived limit on mixing can be implemented in abundance fitting codes. For this purpose, we create an algorithm that fits observed abundances by comparing them to the modelled SN yields from \citet{HegerWoosley2010}. The yields of the SNe generally depend on the progenitor mass ($M_\mathrm{prog}$), the explosion energy ($E_{51}$ in units of $10^{51}\,\mathrm{erg}$), and a mixing factor ($f_\mathrm{mix}$). For matching observed and modelled abundances, we use the SN yields and analysis tools provided with \textsc{starfit} and supplement them with a generic Bayesian fitting approach. A general description of Bayesian parameter estimation can be found in \citet{BailerJones2017}. We first compute the likelihoods $L_i(x_i|M)$ that a model $M$, which predicts the abundances $y_i$, results in the observed abundances $x_i$, \begin{equation} L_i(x_i|M) = \exp\left(-\frac{(x_i-y_i)^2}{2\sigma_i^2}\right), \end{equation} where $i$ is any of the observed elements and $\sigma_i$ is the uncertainty of the observation. The normalization is left arbitrary for now. This likelihood calculation implicitly assumes that the errors follow a Gaussian distribution. While it is not clear whether this assumption is valid, it is commonly made when fitting SN models to observed abundances \citep[e.g.][]{HegerWoosley2010, Ishigaki18, Ezzeddine19}.
Computing the modelled abundances $y_i$ requires, as discussed above, a usually arbitrary dilution mass $M_\mathrm{dil}$. If $M_i$ is the mass of element $i$ (with mass number $\mu_i$) ejected by a SN, the model abundance is \begin{equation} y_i = \log_{10} \left(\frac{M_i}{\mu_i\,X_\mathrm{H} M_\mathrm{dil}}\right) - \log_{10} \left(\frac{N_{i,\odot}}{N_{\mathrm{H},\odot}}\right), \end{equation} where $X_\mathrm{H}=0.754$ is the hydrogen abundance of primordial gas \citep{Planck2015}. The respective solar fractions of the element $i$ and of hydrogen are $N_{i,\odot}$ and $N_{\mathrm{H},\odot}$. We iteratively adjust the dilution mass for each model until we find the dilution that gives the maximum final likelihood according to Eq. \eqref{eq:comb_L}. The dilution mass is picked individually for each model, but within each model the same dilution mass is used for every element. This choice reflects our assumption that all elements mix in the same way. This assumption is commonly made for SN fitting, as without it the SN yields would not be representative of the elements found in the second generation stars. However, in many cases elements are not detected and only upper limits on their abundance can be derived. These upper limits need to be treated simultaneously with the detections. For this, we assume that the upper limits are strict (i.e.\ the likelihoods are Heaviside step-functions $\Theta$) combined with a Gaussian error on the exact location of this limit.
These assumptions lead to a likelihood $L_i(x_i|M)$ for an upper limit of $x_i$ in element $i$ of: \begin{equation} \begin{split} L_i(x_i|M) &= \int_{-\infty}^{\infty} \Theta(x_i-z_i) \exp\left(-\frac{(z_i-y_i)^2}{2\sigma_i^2}\right) \mathrm{d} z_i\\ &=\int_{-\infty}^{x_i} \exp\left(-\frac{(z_i-y_i)^2}{2\sigma_i^2}\right) \mathrm{d} z_i\\ &= \sqrt{\frac{\pi}{2}}\sigma_i \left[1+\mathrm{erf} \left(\frac{x_i-y_i}{\sqrt{2}\sigma_i} \right)\right],\\ \end{split} \label{eq:L_lim} \end{equation} where $\mathrm{erf}$ is the Gaussian error function. The likelihoods of the individual elements can then be combined by multiplication: \begin{equation} L(x|M) = \prod_{i} L_i(x_i|M). \label{eq:comb_L} \end{equation} The same approach to compute fit likelihoods was also used in, e.g., \citet{Fraser17}. In cases where there are only detections and no upper limits, maximizing this likelihood is equivalent to minimizing $\chi^2$. This way of combining likelihoods implicitly assumes that the errors of all abundance determinations are uncorrelated. Especially for errors from uncertainties in the determination of stellar parameters, this may not be true \citep{McWilliam95}. This is because all low-excitation lines arising from neutral minority species tend to have similar sensitivity to the effective temperature, which typically dominates the error budget. However, we aim only to show the importance of constraining the dilution of SN ejecta; a complete treatment of the error distributions and dependencies of abundance determinations exceeds the scope of the current investigation. If we assign each model in the SN library the same prior probability, we can further compute the probability of each model $M$ given the observations $x$ by \begin{equation} P(M|x) = N \,L(x|M), \end{equation} where $N$ is a normalization constant chosen such that \begin{equation} \sum_M P(M|x) = 1.
\end{equation} \section{Application to observations} In this section we demonstrate in three cases why it is important to consider the dilution when fitting abundances of metal-poor stars. Firstly, we will show that it can help to break degeneracies in a fit; secondly, that it may systematically change properties of large fitted samples of stars; and thirdly, that for some stars there may not be a viable single-progenitor scenario to explain the observed abundance patterns. \subsection{Example 1: The progenitor of HE~0020-1741} \label{sect:deg} To investigate the impact of the minimum dilution mass on abundance fitting, we first fit the CEMP-no star HE~0020-1741 ($\mbox{[Fe/H]} = -3.6$). \citet{Hansen19} have determined abundances for 13 elements (C, N, O, Mg, Ca, Sc, Ti, Cr, Mn, Ni, Fe, Sr, Ba). As the yields from \citet{HegerWoosley2010} do not include \textit{r}- and \textit{s}-process elements, Sr and Ba are excluded from the fits. Because it is generally underpredicted in the models, Sc is treated as an upper limit. \begin{figure} \includegraphics[width=\linewidth]{img2.pdf} \caption{\label{fig:post_M} Prior (blue) and posterior distribution of the progenitor mass of HE~0020-1741. We show the posterior for unconstrained (orange) and constrained (green) dilution. With unconstrained dilution there is a bimodal distribution of progenitor masses, whereas with constrained dilution only high-mass stars match the observed abundances.} \end{figure} We show the prior and the posterior distribution of stellar masses in Fig. \ref{fig:post_M}. The prior is bottom-heavy, as there are many more models of low-mass SNe in the libraries than there are models of high-mass SNe. This could potentially bias fitting results towards lower masses. We perform the fits with unconstrained and with constrained dilution factors. In the former case, we choose the dilution factors to maximize the combined likelihoods as defined in Eq. \eqref{eq:comb_L}.
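The likelihood machinery of Eqs. \eqref{eq:L_lim} and \eqref{eq:comb_L} and the equal-prior posterior can be sketched as follows (function names are our own; the overall normalization of the likelihoods is arbitrary, as it cancels when the posterior is normalized over the model library):

```python
import math

def likelihood_upper_limit(x_i, y_i, sigma_i):
    # Gaussian detection likelihood integrated up to the limit x_i.
    # It tends to sqrt(2*pi)*sigma_i when the model prediction y_i lies
    # far below the limit, and to zero when it lies far above it.
    arg = (x_i - y_i) / (math.sqrt(2.0) * sigma_i)
    return math.sqrt(math.pi / 2.0) * sigma_i * (1.0 + math.erf(arg))

def combined_likelihood(per_element_likelihoods):
    # Product over elements (assumes uncorrelated errors).
    L = 1.0
    for L_i in per_element_likelihoods:
        L *= L_i
    return L

def posterior(model_likelihoods):
    # Equal-prior posterior over the model library: P(M|x) = N * L(x|M),
    # with N chosen such that the probabilities sum to one.
    total = sum(model_likelihoods)
    return [L / total for L in model_likelihoods]
```

In a fitting loop, one would evaluate the combined likelihood for every model and dilution mass on the grid and normalize over all models at the end.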
In the latter case, we only allow dilution factors above our analytical limit. For the unconstrained dilution we find a bimodal posterior: there are solutions with progenitor masses either around $M_*\approx25\,{\ensuremath{\mathrm{M}_{\sun}}}$ or at $M_*\approx80\,{\ensuremath{\mathrm{M}_{\sun}}}$, which we will refer to as ``low-mass'' and ``high-mass'' here. In Fig.~\ref{fig:HE-bestfit} we show the abundance pattern produced by the best-fitting model from each of these branches. Both models seem to fit approximately equally well. However, if we consider only the constrained dilution case, only the high-mass progenitors still fit. The low-mass progenitors do not produce enough metals to explain the metal abundances of HE~0020-1741 with only a single SN. Thus, if HE~0020-1741 is to be explained with a single progenitor SN, it should be a massive star with $70\,{\ensuremath{\mathrm{M}_{\sun}}}<M_*<80\,{\ensuremath{\mathrm{M}_{\sun}}}$ for the \citet{HegerWoosley2010} yields. Note, however, that it may be possible to find additional or better fits with different yield sets \citep[e.g.][]{Limongi12, Ishigaki18, Grimmett18}. As the aim here is to show the usefulness of the dilution limit in constraining fits, a comparison of these different yield sets exceeds the scope of this study. \begin{figure} \includegraphics[width=\linewidth]{img3.pdf} \caption{\label{fig:HE-bestfit} Best-fitting abundance patterns for HE~0020-1741. We show the observed pattern and the patterns of the best-fitting models with low-mass and high-mass progenitors. If only the abundance ratios are considered, both give an equally plausible fit, yet constraining the dilution rules out a single low-mass star as a progenitor.} \end{figure} \subsection{Example 2: Large sample fitting} \label{sect:ex2} \citet{Ishigaki18} fitted the abundances of 201 EMP stars by picking the best-fitting SN model for each of these stars.
The compiled sample of stars has been selected to consist only of stars with determined abundances for the elements C, N, O, Na, Mg, Al, Si, Ca, Sc, Ti, Cr, Mn, Fe, Co, Ni, and Zn based on spectroscopic data with a resolution of at least $R=28000$. These observed abundances were compared to SN models, which were computed over a grid of stellar masses and explosion energies, as well as three parameters that quantify the properties of the mixing-and-fallback process. Details on the sample selection and the SN modelling can be found in \citet{Ishigaki18}. Each star in the sample was associated with a best-fitting model, based on $\chi^2$ minimization. In this paper, we use the explosion energies of their best-fit models to compute minimum dilution masses for each of the SN models. In Fig. \ref{fig:comparison} we show the ratio between these minimum dilution masses and the dilution masses from the fits for all 201 stars. Of these 201 best-fitting models, 128 violate our derived limit and 43 do so by more than a factor of four. Notably, there is no apparent correlation between $\chi^2$ and the dilution mass. Thus, whether the dilution factor found by fitting is physical is unrelated to the goodness of fit. \begin{figure} \includegraphics[width=\linewidth]{img4.pdf} \caption{\label{fig:comparison} Ratio of the minimum dilution mass and the dilution mass derived in \citet{Ishigaki18} as a function of the reduced $\chi^2$, as well as histograms of both values. We show the original fits from \citet{Ishigaki18} (orange) as well as re-fits in which the minimum dilution limit is enforced (green). In some of the original fits, the dilution ratio is very low (down to $\sim 10^{-7}$) and therefore outside the boundaries of this figure. These stars are included in the lowest bin of the histogram.} \end{figure} For comparison, we have repeated the fitting procedure from \citet{Ishigaki18} while at the same time enforcing the dilution limit.
This means that all fits with too small dilution masses are rejected and instead the best-fitting model that fulfils the dilution limit is picked. This leads to a significant increase in the mean (median) $\chi^2$ from 16 (13) in the unconstrained case to 24 (15) in the constrained case. For many stars, the best fit with the dilution constraint becomes worse than that without it. In Fig. \ref{fig:prog_mass_hist} we show that this re-fitting leads to significant changes in the distribution of best-fitting progenitor masses. The most notable difference is that progenitors with stellar masses of 25 and 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ are now much rarer, and progenitors with 15\,{\ensuremath{\mathrm{M}_{\sun}}}\ are more common. The reason for this is that many of the previously common 25 and 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ models were hypernovae (HNe) with a high explosion energy and a large fallback fraction. These models have relatively low absolute yields but, due to their large explosion energies, large predicted dilution masses in spherical symmetry. Therefore, such models are not able to reproduce the relatively large carbon abundances of many CEMP-no stars when taking the dilution constraint into account. \begin{figure} \includegraphics[width=\linewidth]{img5.pdf} \caption{\label{fig:prog_mass_hist} Distribution of progenitor masses from \citet{Ishigaki18}, as well as from our repetition of the fits, which takes the minimum dilution criterion into account. There are only bins at 13, 15, 25, 40 and 100\,{\ensuremath{\mathrm{M}_{\sun}}}, as these are the only progenitor masses of the SN models. We separate SNe by explosion energy into low-energy SNe ($E_{51}<1$), CCSNe ($E_{51}=1$) and hypernovae (HNe, $E_{51} \ge 10$).} \end{figure} We note that the prescription of faint SNe used in \citet{Ishigaki18} is chosen to reproduce the angle-averaged yields of aspherical jet SNe \citep{Tominaga09}.
Our dilution model, however, does not apply to such SNe if their asphericity is preserved. In the prescription used, only the total yields are considered. Even if the abundance distribution in the ejecta is strongly aspherical, this approximation assumes that the SN yields are mixed and the angular variations are washed out during later phases of the expansion of the SN. In principle, the mixing behaviour in aspherical SNe can be very different from our approximation if the metal yield per unit energy shows strong angular variations. Additionally, the aspherical SNe from \citet{Ishigaki18} have systematically, and in some cases much, larger explosion energies (which enter Equation \eqref{eq:M_dil}) than their 2D counterparts with similar yields \citep{Tominaga09}. This further limits the applicability of our model to these SNe. Our results here suggest that developing realistic models for the dilution of heavy elements produced in aspherical SNe is of vital importance for fitting large samples of stars, not just individual cases. \subsection{Example 3: No spherical progenitor for SMSS0313-6708} As we have seen that stars with high carbon and low iron abundances are particularly strongly affected by the dilution criterion, we will look in more detail at a pathological example of such a star, i.e., SMSS0313-6708 \citep{Keller14}. The star is known for having no detected iron lines, with an upper limit on its abundance of $\mbox{[Fe/H]}<-7.1$. We here use abundances that are based on 3D atmospheric models that do not assume local thermodynamical equilibrium (3D NLTE) for Na, Mg, Al, Ca, and Fe from \citet{Nordlander17}. For these elements, statistical and systematic errors are provided, which we add in quadrature. The systematic errors are typically at the level of 0.1~dex. The remaining abundances are taken from \citet{Bessel15} and are based on 3D LTE models for C, N, and O and on 1D LTE models for Si, Sc, Ti, V, Cr, Mn, Co, Ni, and Cu.
Notably, most of these elements are not detected and only upper limits on their abundance have been derived. \citet{Bessel15} only give statistical but no systematic errors for their abundance determinations. Because we want to avoid biasing our results towards abundances with an unaccounted-for source of error, we add a systematic error of 0.1~dex to the abundance determinations from \citet{Bessel15}. \begin{figure} \includegraphics[width=\linewidth]{img6.pdf} \caption{\label{fig:Keller_fit}Best-fitting models for SMSS0313-6708. We show the unconstrained (orange) and the constrained (green) best-fitting model. We find the same best-fitting model as \citet{Bessel15}. For reference we also show the best-fitting model from \citet{Keller14}. We mark with different symbols whether abundances have been derived in 1D LTE (diamonds), 3D LTE (squares), or in 3D NLTE (circles). For better representation we shifted the upper limits by one standard deviation, such that the upper end of the error bar corresponds to a discrepancy at an 84 per cent confidence level.} \end{figure} We fit the abundances with the same procedure as described in Sect. \ref{sect:deg}. The best constrained and unconstrained models are shown in Fig. \ref{fig:Keller_fit}. For better graphical representation we shift the upper limits by one standard deviation. Thus, according to Eq.~\eqref{eq:L_lim} the upper end of the error bar represents a discrepancy at an 84 per cent confidence level, and a value $1\sigma$ above the upper limit corresponds to a 98 per cent significant discrepancy. Even with unconstrained dilution, we find no model that produces a convincing fit of the abundance patterns. The best-fitting model overproduces Na, C and Si.
There are three features in the abundances that are difficult to fit simultaneously: \begin{enumerate} \item the CNO pattern with high C and O but very low N, \item the low upper limit on Na with the detection of a large amount of Mg, and \item the detection of Ca in conjunction with the low upper limits on Al and Si. \end{enumerate} The difficulty involved in reproducing all three of these features may partially be related to the grid of models not containing a sufficiently large variety of SN explosion energies. We note that none of the elemental abundances that have been derived in 1D LTE play a critical role in constraining the models. All 1D LTE abundances are only upper limits that lie well above the best-fitting models. It is still unclear how different the C and O abundances would be in a 3D NLTE analysis, but they would need to differ from 3D LTE by approximately 1\,dex in order for us to find SN models with matching abundances. \citet{Nordlander17} were able to fit the abundance patterns by interpolating the modelled abundance patterns as a function of the explosion energy. However, the result of such a procedure is potentially sensitive to the way the interpolation is done. We therefore decided against interpolating to a finer grid here. Both the unconstrained best-fitting model and the best-fitting model from \citet{Keller14} violate the dilution limit by around two orders of magnitude. They would require the SN ejecta to be diluted with less than 500\,{\ensuremath{\mathrm{M}_{\sun}}}\ of pristine material. The best-fitting model that fulfils the dilution limit is clearly inconsistent with the observed upper limits of N and Na. In \citet{Ishigaki14} this star was best fitted with 25 and 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ SNe or HNe (jet-induced, aspherical, and energetic SNe), where Ca is produced by static/explosive O burning and incomplete Si burning, in contrast to the explanation in \citet{Keller14}.
Of these models, the SNe are consistent with our dilution limit and the HNe are neither compatible with the dilution criterion nor with the updated upper limit on Si that we use. The fits presented in \citet{Ishigaki18} are compatible both with the abundance pattern we use and with our dilution limit. \citet{Chen17} model potential progenitor SNe for SMSS0313-6708 in one and two dimensions. Of their models, only the two-dimensional model of the SN of a 60\,{\ensuremath{\mathrm{M}_{\sun}}}\ Pop III star is consistent with our dilution limit. As Na and Al are not modelled, however, it is unclear whether this model is able to reproduce their observed upper limits. \citet{Chan20} performed full 3D SNe simulations of the progenitor of SMSS0313-6708 using a 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ star, as suggested by \citet{Bessel15}, with asymmetric explosions of low and high energy. Their nucleosynthesis has the same constraints as that of \citet{Chen17}, and the explosion was not followed beyond shock breakout. The low-energy model does not produce any significant metals, while the high-energy model produces too much iron -- if spherically averaged. \section{DISCUSSION AND SUMMARY} We have introduced an analytical limit for the dilution of metals produced by a single SN with the following three assumptions: \begin{itemize} \item the SN being alone and isolated, \item the explosions being spherical and well-mixed, and \item the surrounding medium being homogeneous. \end{itemize} The first two assumptions are commonly made when comparing observed abundance patterns to SN yields in previous works, because, if they are not fulfilled, the total elemental yields from a single SN cannot be representative of the stellar abundance pattern. For the last assumption we compared this limit to all hydrodynamical simulations of metal enrichment in high-redshift minihaloes of which we are aware and which include the details and resolution needed for a comparison.
We found that, despite assuming homogeneity, the limit is consistent with all of these simulations. We have demonstrated that previous fits were often inconsistent with our understanding of metal dilution and mixing on the scale of minihaloes. Including our dilution criterion in fitting procedures for abundance patterns can have important consequences for the conclusions drawn: \begin{enumerate} \item Considering the dilution can help to break degeneracies in progenitor models of individual stars. \item The limit does not just affect individual stars but it can also change the properties of large samples of progenitor models. In particular, low-yield SNe are disfavoured if constraints on the dilution are taken into account. \item It may be difficult to explain certain stars, such as SMSS0313-6708, by enrichment from a single, spherical SN if the dilution is taken into account. The best-fitting models that have been put forward by \citet{Keller14} and \citet{Bessel15} explain the rough shape of the observed abundance ratios, but with an implicit dilution mass that is too small by approximately two orders of magnitude; the yields from these SNe are too small to explain the absolute metal abundances. \citet{Ishigaki14, Ishigaki18} find fits to the abundance pattern that are consistent with our dilution criterion. \end{enumerate} During the preparation of this manuscript, \citet{Komiya20} derived a similar estimate for the minimum dilution and implemented it into a semi-analytical model of the formation of the Milky Way. While we apply this estimate to the exploration of progenitor scenarios of individual stars, \citet{Komiya20} focus on the chemical evolution of the Milky Way and in particular on whether the overall population of CEMP-no stars can be reproduced. They find it difficult to reproduce the prevalence of large carbon abundances in the lowest-metallicity stars with faint SNe.
This tension between the mixing-and-fallback SN model and the large observed carbon abundances is consistent with our findings. The minimum dilution estimate can serve to evaluate whether a single, spherical SN is a viable progenitor scenario for a certain star. This test may be less reliable, if applicable at all, for asymmetric SNe or cases with several SNe in one halo. In asymmetric SNe, a large fraction of the metals can be ejected along jets \citep{Tominaga09}. Evidence for such SNe has recently been found by \citet{Ezzeddine19}. The dilution and recollapse occurring after such SNe are yet to be explored by numerical simulations. Altogether, we conclude that for the adequate astrophysical interpretation of the observed elemental abundances in extremely metal-poor stars, both the relative abundance patterns and the absolute abundance values need to be taken into account. Only then can reliable and well-founded constraints on the properties of the preceding generation of stars be derived. Given the fact that simple spherically symmetric models often fail to match the dilution mass constraint introduced here, we furthermore conclude that the effects of aspherical supernovae, the impact of inhomogeneous mixing in a highly structured interstellar medium, and the combined yields of multiple supernovae require further investigation. \section*{Acknowledgements} The authors would like to thank Nozomu Tominaga for very productive discussions and comments. In preparation of this manuscript, the software packages \textsc{F2PY} \citep{f2py}, \textsc{NumPy} \citep{numpy}, \textsc{matplotlib} \citep{matplotlib} and \textsc{SciPy} \citep{SciPy} were used. MM was supported by the Max-Planck-Gesellschaft via the fellowship of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD).
SCOG and RSK acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG) -- Project-ID 138713538 -- SFB 881 (``The Milky Way System'', sub-projects A1, B1, B2 and B8). Further financial support was provided by the DFG via the Heidelberg Cluster of Excellence {\em STRUCTURES} in the framework of Germany’s Excellence Strategy (grant EXC-2181/1 - 390900948). AH was supported in part by the National Science Foundation under Grant No.\ PHY-1430152 (JINA Center for the Evolution of the Elements), by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), through project number CE170100004, by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, and by a grant from the Science and Technology Commission of Shanghai Municipality (Grants No.16DZ2260200) and the National Natural Science Foundation of China (Grants No.11655002). C.K. acknowledges funding from the UK Science and Technology Facilities Council (STFC) through grants ST/M000958/1 and ST/R000905/1. KN has been supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and JSPS KAKENHI Grant Numbers JP17K05382 and JP20K04024. \section*{Data Availability} No new data were generated in support of this research. The SN models from \citet{HegerWoosley2010} are available on \url{http://2sn.org}. For the availability of the SN models or the sample of stars used in Section~\ref{sect:ex2}, please inquire with the authors of \citet{Ishigaki18}. \bibliographystyle{mnras} \section{INTRODUCTION} The first stars, so-called Population~III (Pop~III) stars, form in the absence of heavy elements in the early Universe. Due to the lack of metal cooling, they are expected to be drastically different from the stars found in our vicinity at the present day \citep{BrommReview, GloverReview, GreifReview}.
Initially, Pop~III stars were thought to be very massive \citep[e.g.,][]{Bromm99, Omukai01}, but later it was found that their protostellar disks may fragment, leading to the formation of clusters of low-mass metal-free stars \citep{Greif11b, Clark11}. Whereas it is clear that Pop~III stars may form over a wide range of masses, simulations have to date been unable to constrain the metal-free initial mass function (IMF) well. The results depend significantly on the physics employed, the choice of numerical method, and the resolution \citep[see e.g.,][]{Hosokawa16, Stacy16, Susa19}. There are so far no direct detections of metal-free stars. Pop~III stars are expected to form in high-redshift, relatively low-mass mini- and atomic-cooling haloes. Therefore, ``Pop~III galaxies'' are most likely not bright enough to be detected today \citep{Xu16, Hartwig16b, Visbal17}. In the absence of direct observations, there are several indirect methods that allow us to gain observational constraints on the IMF of Pop~III stars. Direct detection of supernovae (SNe) \citep{Hummel12, Hartwig18b, Rydberg20} or gravitational waves \citep{Kinugawa14, Kinugawa16, Hartwig16a} from the first stars is challenging, but may provide constraints on the high-mass end of the Pop~III IMF in the coming decade. The 21\,cm absorption feature, as reported by the EDGES experiment \citep{EDGES18}, can constrain the timing of the first star formation and the star formation efficiency, but it is not very sensitive to the assumed IMF \citep{Schauer19}. There are two remaining methods to constrain the pristine IMF that are feasible at present. Both are related to observations of ancient metal-poor stars in the Milky Way and its satellites. The first one is constraining the low-mass end of the IMF with the current non-detection of metal-free stars \citep{Salvadori07, Hartwig15b, Ishiyama16, Magg18, Magg19}.
The second method, which is our focus here, is comparing the abundance patterns observed in metal-poor stars to simulated SN yields in Pop~III stars. It was found that the most metal-poor stars, often called extremely metal-poor (EMP) stars with an iron abundance\footnote{For elemental abundances, we use the notation $\mbox{[X/H]} = \log_{10}(N_\mathrm{X}/N_\mathrm{H})-\log_{10} (N_{\mathrm{X},\sun} /N_{\mathrm{H},\sun})$, where $N_{\mathrm{X}}$ and $N_{\mathrm{H}}$ are the fractional abundances of any element X and hydrogen, and $N_{\mathrm{X},\sun}$ and $N_{\mathrm{H},\sun}$ are the corresponding solar abundances.} of less than $\mbox{[Fe/H]} =-3$, are surprisingly rich in carbon \citep{Beers2005, Frebel15}. A particularly interesting subgroup of stars comprises the carbon-enhanced extremely metal-poor (CEMP) stars and, among these, the CEMP-no stars: those with iron abundances below $\mbox{[Fe/H]}=-3$, an excess of carbon relative to iron of more than $\mbox{[C/Fe]}=1$ and no enhancement in neutron-capture elements, i.e., $\mbox{[Ba/Fe]}<0$\ \citep[e.g.,][]{Frebel05, Keller14, Aguado18, Nordlander19}. The origin of the elemental abundance patterns in these stars is one of the key questions of early chemical enrichment. \citet{Umeda2003} proposed that the abundance pattern of CEMP-no stars is the fingerprint of so-called faint SNe. These are Pop~III SNe with relatively large mixing-and-fallback in their core-collapse explosions\footnote{The explosion energy and progenitor star mass are not necessarily larger than those for Pop~II SNe \citep{kob14}.}: they eject much of their outer layers, containing carbon and other light elements, whereas most of the inner shells, containing in particular iron, fall back onto the compact remnant. Thus, when the first metal-enriched stars form from gas enriched by one of these SNe, they form with a very small iron abundance but a much higher carbon abundance.
Notably, these SNe do not produce particularly large absolute amounts of carbon compared to more conventional core-collapse SNe; rather, they yield high $\mbox{[C/Fe]}$ ratios because they produce unusually small amounts of iron. Subsequently, it has become common practice to use SN models to infer the properties of the primordial progenitors of the most metal-poor observed stars, both for individual stars \citep[e.g.,][]{HegerWoosley2010, Hansen11, Nomoto13, Ishigaki14, Bessel15, Placco16} and for large samples \citep{Cayrel04, Fraser17, Ishigaki18}. Additionally, constraints on the primordial IMF can be inferred from bulk properties of metal-poor stars with semi-analytical models \citep{deBennassuti17, Hartwig18b, Tarumi20b}. For these purposes, libraries of SN yields have been computed \citep{HegerWoosley2010, Nomoto13, Ishigaki18}. These yields typically depend on the stellar mass of the exploding star, the explosion energy and one or a few parameters that quantify the mixing-and-fallback process, which cannot be simulated self-consistently in the one-dimensional SN simulations \citep[e.g.,][]{Chen17, Chan20}. Dilution, in the context of comparing the resultant Fe mass to the observed $\mbox{[Fe/H]}$, has been discussed in \citet{tom07} and \citet{kob11}. In order to compare modelled and observed abundance patterns, one further step is required: the SN yields need to be physically diluted with metal-free gas to match the absolute metallicity of the observed star. Usually this dilution is treated as a free parameter and chosen to optimize the quality of fit. Freely adjusting the dilution factor essentially makes the fit independent of the absolute abundances and only considers the ratios of the abundances to each other. Then observed and modelled abundance patterns are compared, and the well-fitting models are interpreted as likely progenitors. For example, the \textsc{starfit}\footnote{\url{http://starfit.org}} \citep{HegerWoosley2010, Fraser17} pipeline can be used for such an analysis.
In this study, we argue that this approach has to be amended because the amount of ambient gas into which the metals from a Pop~III supernova are mixed cannot be assumed to be arbitrarily small. We derive a simple analytical model for the lower limit of the mass a SN remnant has to mix with before it can recollapse. We find that this limit is consistent with the results from 3D hydrodynamical simulations. In many cases, there are large differences between the halo-scale mixing found in hydrodynamical simulations and the mixing implicitly assumed by fitting abundance ratios with arbitrary dilution. We show how the dilution limit can be applied in abundance fitting methods. Finally, we investigate examples of the impact this dilution limit has on the conclusions drawn from fitting observed abundances. \section{The minimum mixing mass} \subsection{Analytic estimate} As outlined before, abundance fitting usually employs the observed ratios of abundances of certain metals and compares those to the ratios found in theoretical SN models. Of particular importance is, e.g., the $[\mathrm{C}/\mathrm{Fe}]$ ratio. This method, however, typically neglects the absolute abundances (i.e., [Fe/H] or [C/H]) and treats them as an arbitrary normalization factor. Conceptually, this normalization can be achieved by diluting the SN yields with the correct amount of metal-free gas. As published work usually fits only single SNe to observed abundance patterns, we consider only single, isolated SNe in this work. We consider SN explosions as well as their subsequent expansion into the ambient medium and the corresponding mixing processes in spherical symmetry. Simulations carried out in two \citep{Tominaga09} and three \citep{Chan20} dimensions, however, show that Pop~III SNe can be strongly aspherical. In this context, we note that even when considering anisotropic SNe, the observed abundances in most published studies are compared to angle-integrated yields.
This means that the problem considered is effectively spherically symmetric, as the angular average implies that different elements ejected in different directions become well mixed before the second-generation stars form. An exception to this may occur if the abundances are distributed more spherically than the energy input, such as seen in some of the models in \citet{Tominaga09}. A critical analysis of the validity of this approximation is one of the primary motives for the study presented here. We argue that properly accounting for the asymmetries expected in Pop~III SNe requires both detailed three-dimensional explosion models and high-resolution simulations of the expansion of the resulting anisotropic shock wave into an inhomogeneous ambient medium that are able to adequately follow the chemical mixing process. Since there is no analytic model for such small-scale inhomogeneous mixing, however, we follow the bulk of the existing literature and approximate the SN as a spherical explosion inside a homogeneous ambient medium. As SNe are very energetic events, a large amount of gas is required to confine the metals and thus not all dilution masses are physically plausible. A lower limit for this mass is the mass enclosed within the final radius of the SN remnant. Analytical solutions to spherical blast waves of SNe can be derived under a variety of assumptions \citep[e.g.,][]{Ostriker88}, with the expansion of the remnant stalling at the end of the momentum-driven snowplough phase. In this phase, the expansion velocity reaches the speed of sound in the ambient medium. As shocks cannot be subsonic, the shock wave transforms into a sound wave and dissipates.
This occurs at the fade-away radius $R_\mathrm{fade}$ which is \begin{equation} R_\mathrm{fade} \approx 2.07\times 10^{20}\,\mathrm{cm}\ E_{51}^{0.32} n_0^{-0.37} \left(\frac{c_\mathrm{s}}{10\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^{-2/5}, \label{eq:R_fade} \end{equation} where $n_0$ is the nucleon number density of the ambient medium in units of cm$^{-3}$, $E_{51}$ is the explosion energy in units of $10^{51}\,\mathrm{erg}$ and $c_\mathrm{s}$ is the ambient medium speed of sound \citep[e.g.,][]{Draine}. We assume the ambient medium is ionized, i.e., that it has a speed of sound of $c_\mathrm{s}=18\,\mathrm{km}\,\mathrm{s}^{-1}$, for a metal-free H\textsc{ii} region \citep[see e.g.,][]{abel07}. In case the medium is actually neutral, the speed of sound would be lower and the stalling radius larger. Thus, this is a conservative assumption. In the homogeneous mixing case, the minimum mass with which the ejecta are mixed is the mass that is enclosed in the stalling radius, i.e., \begin{equation} M_\mathrm{dil, min} = \frac{4}{3}\pi n_0 \mu m_{\rm H} R_\mathrm{fade}^3 = 1.9\times 10^4\,{\ensuremath{\mathrm{M}_{\sun}}}\,E_{51}^{0.96}\,n_0^{-0.11}, \label{eq:M_dil} \end{equation} where $m_{\rm H}$ is the mass of a hydrogen nucleus and where we assumed a mean molecular weight of $\mu=1.22$. The fade-away radius used here is for gas cooling rates of solar metallicity gas. However, the smaller amount of metals in the case considered here would only decrease the cooling rates and therefore increase the total mixing mass. As we aim at computing a lower limit for the mixing mass, the reduced cooling can be neglected. By definition the SN remnant expands faster than the speed of sound in the ionized medium. As we consider haloes below the atomic cooling limit the escape velocity from the haloes is much smaller than this speed of sound. Therefore SN remnants expand much faster than the escape velocity and the effect of gravity can be neglected. 
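As a cross-check, Eqs. \eqref{eq:R_fade} and \eqref{eq:M_dil} can be evaluated directly. The following Python sketch (the constants and function names are ours, not part of any published code) reproduces the $\approx 1.9\times 10^4\,{\ensuremath{\mathrm{M}_{\sun}}}$ value and the $E_{51}^{0.96}\,n_0^{-0.11}$ scaling:

```python
import math

# Constants in cgs (standard values; this sketch is ours, not published code)
M_SUN = 1.989e33   # solar mass [g]
M_H = 1.6726e-24   # hydrogen nucleus mass [g]
MU = 1.22          # mean molecular weight assumed in the text

def r_fade(e51=1.0, n0=1.0, cs_kms=18.0):
    """Fade-away radius in cm, Eq. (R_fade)."""
    return 2.07e20 * e51 ** 0.32 * n0 ** (-0.37) * (cs_kms / 10.0) ** (-0.4)

def m_dil_min(e51=1.0, n0=1.0, cs_kms=18.0):
    """Minimum dilution mass in solar masses, Eq. (M_dil)."""
    r = r_fade(e51, n0, cs_kms)
    return (4.0 / 3.0) * math.pi * n0 * MU * M_H * r ** 3 / M_SUN

print(f"{m_dil_min():.2e}")  # approx 1.9e4 Msun for E51 = 1, n0 = 1 cm^-3
```

Cubing the $E_{51}^{0.32}$ and $n_0^{-0.37}$ dependencies of the radius, and multiplying by one more power of $n_0$ for the enclosed mass, recovers the exponents $0.96$ and $-0.11$ quoted in Eq. \eqref{eq:M_dil}.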
This result is very similar to the one obtained through numerical simulations by \citet{Thornton98}. While it has been widely used in the discussion of stellar feedback, it is often neglected when fitting abundance patterns of individual stars. For example, \citet{Tominaga14} note that the minimum dilution mass obtained by \citet{Thornton98} is not a binding limit, as metal mixing is highly inhomogeneous \citep{Ritter12}. We will later see that our derived limit holds even in cases of inhomogeneous mixing. We assume an ambient density of $n_0 = 1\,\mathrm{cm}^{-3}$, which should be the typical case for the ionized regions around massive Pop~III stars \citep{Whalen04}. We note that the density dependence of the minimum mixing mass (Eq. \ref{eq:M_dil}) is very weak, so the density would need to be higher by several orders of magnitude to affect our conclusions. If the density is this much higher than the assumed value, the free-fall time of the ambient gas is shorter than the lifetime of the star, and thus the gas should already form stars before the SN explodes or while the remnant expands. Furthermore, simulations show that it is difficult to mix metals into gas that is already very dense when the SN explodes \citep{Ritter16, Chiaki18}. Under the assumptions outlined above, the dilution mass is a lower limit for two main reasons: \begin{enumerate} \item We assume a homogeneous medium. If the medium is not homogeneous, the denser gas will be less enriched but will form stars first. This effect is discussed further below. \item We assume no further mixing. Realistically, further mixing with additional pristine gas should occur during recollapse, rather than the stalled SN remnant monolithically collapsing back on itself. This effect would further increase the dilution mass. \end{enumerate} We note that we assume all SNe are able to produce second generation stars.
Very energetic explosions may actually disrupt their host haloes, which suppresses or delays second generation star formation \citep{Whalen08b}. This effect is difficult to quantify without hydrodynamical simulations in a cosmological context, and is therefore neglected here. \subsection{Consistency with simulations} To see whether sub-galactic-scale inhomogeneous mixing can lead to higher metallicities than predicted by the minimum dilution, we compare it to the dilution found in all suitable published simulations of inhomogeneous mixing and the formation of second generation stars that we are aware of. For comparison with our limit, we use an ambient density of $n_0 = 1\,\mathrm{cm}^{-3}$ in all cases but take the explosion energies used in the simulations to compute the minimum mixing mass. Simulations are included provided that they \begin{itemize} \item are three dimensional hydrodynamical simulations of the expansion of Pop~III SN remnants into their ambient medium, \item are set up to model, with sufficient resolution, individual, isolated Pop~III SNe, not combined populations, \item follow the enriched gas until it re-collapses, and \item provide the output needed for our comparison. \end{itemize} This implies that we do not discuss the results from \citet{Greif07} and \citet{Chen15} because the re-collapse of the enriched gas is not modelled. Larger-scale simulations, such as the ones from \citet{Wise12}, \citet{FiBY1}, or \citet{Tarumi20a}, are not considered, because they do not follow individual isolated SNe. The simulations of \citet{Whalen08b} are not included here because they are one-dimensional. Nevertheless, we note that their metal-enriched gas masses are consistent with our limit in most cases. Only in one of their models is the enriched gas mass they find smaller than our prediction in Eq. \eqref{eq:M_dil}.
In this case, the star completely fails to create an ionized region, and the ability to model an off-centre re-collapse would be crucial to make accurate predictions for the metallicity of the second generation stars. We begin with the dilution found in \citet{Ritter12, Ritter15, Ritter16}.\footnote{We only consider the 1\textsc{sn} model from \citet{Ritter15} as the 7\textsc{sn} model deals with enrichment by multiple SNe, which is not the topic of our analysis.} In all three simulations the SNe considered are core collapse (CC) SNe with $E_{51}=1$. They eject $M_\mathrm{met}=4\,{\ensuremath{\mathrm{M}_{\sun}}}$ of metals in \citet{Ritter12, Ritter15} and $M_\mathrm{met}=6\,{\ensuremath{\mathrm{M}_{\sun}}}$ in \citet{Ritter16}. Thus, according to Eq. \eqref{eq:M_dil}, the maximum final metallicity we should expect (for $M_\mathrm{met}=4\,{\ensuremath{\mathrm{M}_{\sun}}}$) is \begin{equation} Z_\mathrm{max} = \frac{M_\mathrm{met}}{M_\mathrm{dil, min}} \approx 10^{-3.6} \approx 10^{-1.7}\,{\ensuremath{\mathrm{Z}_{\sun}}} \label{eq:ZMass} \end{equation} where ${\ensuremath{\mathrm{Z}_{\sun}}}=0.0142$ is the solar metallicity \citep{Asplund09}. While the mixing is highly inhomogeneous, and orders of magnitude of spread in metallicity can be seen, the newly collapsing cores always show metallicities below this value. All simulations also contain gas at higher metallicities than predicted by the minimum dilution. While from \citet{Ritter12} it is unclear in which phase this gas is contained, in \citet{Ritter15, Ritter16} only some of the very diffuse gas has metallicities above the dilution limit. \citet{Chiaki18} and \citet{Chiaki19} model the inhomogeneous mixing occurring after the SNe of seven different stars with masses between 13\,{\ensuremath{\mathrm{M}_{\sun}}}\ and 200\,{\ensuremath{\mathrm{M}_{\sun}}}. Some of these SNe are simulated in several different haloes. The simulations cover a wide range of different environments in which SNe can explode.
For massive stars, haloes are often completely photo-evaporated, whereas, for the smallest stars, the gas in the stellar birth-cloud remains dense throughout the lifetime of the star. The results show large variations in the mixing behaviour and the metallicities of the second generation stars. \citet{Chiaki18} distinguish between three separate enrichment channels: \begin{enumerate} \item Internal enrichment: in this case, the SN expands efficiently and the metals mix well with the surrounding gas before the halo collapses back on itself. \item External enrichment: the metals escape from the halo in which the SN explodes and mix with the gas in a different halo that has not formed stars yet. This type of enrichment is also found in \citet{bsmith15}. \item Inefficient internal enrichment: dense structures remain in the halo. When the SN explodes these structures are only enriched to very low metallicities and proceed to form stars with metallicities much lower than the average gas metallicity in the halo. \end{enumerate} None of these simulations, however, show the formation of second generation stars that violate our dilution limit. According to Eq.~\eqref{eq:M_dil} the predicted maximum metallicity ranges between $10^{-2.6}\, {\ensuremath{\mathrm{Z}_{\sun}}} < Z_\mathrm{max} < 10^{-1.6}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. All second generation stars in their simulations have metallicities in the range $10^{-6.3}\,{\ensuremath{\mathrm{Z}_{\sun}}}<Z< 10^{-2.2}\,{\ensuremath{\mathrm{Z}_{\sun}}}$, i.e., none of them violates our derived limit. The simulated second generation star that is closest to our computed upper limit is enriched by a 25\,{\ensuremath{\mathrm{M}_{\sun}}}\ CCSN that explodes in their halo ``MH1'', which is their smallest halo with a mass $M_\mathrm{vir}=3\times10^5\,{\ensuremath{\mathrm{M}_{\sun}}}$. The re-collapsing region has a metallicity of 40 per cent of our computed upper limit.
In these simulations, there are several cases of stars with much lower metallicities than predicted by the minimum dilution model. These are the cases in which the surroundings of the SNe are the most dense and the mixing is the most inhomogeneous. The second generation stars form in clumps that already exist when the SNe explode and only the outer layers of these clumps are enriched with metals. Thus, the enrichment proceeds in what \citet{Chiaki18} label the ``inefficient internal enrichment'' channel. \citet{Greif10} simulate the explosion of a single PISN with $E_{51}=10$ and $100\,{\ensuremath{\mathrm{M}_{\sun}}}$ of metal ejecta. According to our model the maximum metallicity in this extreme case should be below $Z=10^{-1.4}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. They find metallicities in the recollapsing galaxy that are around $Z=10^{-3}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. As \citet{Greif10} note, the average metallicities are initially much higher but they decrease to this low value during the recollapse of the halo, which takes around 300\,Myr. The simulations by \citet{Jeon14} include several SNe exploding in three different haloes. The authors provide information on the metallicity of recollapsing regions in three cases: 15, 25 and 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ stars exploding in their ``halo1''. They all explode as $E_{51}=1$ CCSNe and eject 5 per cent of their stellar mass as metals. According to our model, this should lead to metallicities of $Z<10^{-2.1}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. Their reported metallicities are all below $Z=10^{-3.5}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. \citet{bsmith15} highlight the external enrichment channel. Their SN is an $E_{51}=1$ CCSN which ejects $11.19\,{\ensuremath{\mathrm{M}_{\sun}}}$ of metals, leading us to predict a maximum metallicity of $Z_\mathrm{max} = 10^{-1.4}\,{\ensuremath{\mathrm{Z}_{\sun}}}$.
Only a very small fraction of gas is found at such high metallicities, and none of it is in the re-collapsing region. The metal-enriched star forming gas in this case has a metallicity of $Z=10^{-4.7}\,{\ensuremath{\mathrm{Z}_{\sun}}}$. \begin{figure} \includegraphics[width=\linewidth]{img1.pdf} \caption{\label{fig:sim_comp} Comparison of the minimum dilution model to simulations of inhomogeneous metal mixing. We show the ratio of the effective dilution mass of the simulations and our estimate of the minimum dilution mass as a function of the stellar mass of the exploding star. The effective dilution mass is derived from the metallicity in the second generation stars or the likely sites of second generation star formation in the simulations. All simulations show a ratio above one, i.e., they are consistent with our predicted minimum.} \end{figure} We convert the metallicities found in the simulations back to an ``effective dilution mass'' with Eq. \eqref{eq:ZMass} and summarize the simulations in Fig. \ref{fig:sim_comp}. None of the simulations of inhomogeneous mixing show inconsistencies with the minimum dilution mass derived from the spherically symmetric case. In some of the simulations, there is gas above the derived upper limit for the metallicity, but it tends to be diffuse and hot. This can be understood intuitively: as thermal energy and metals are ejected together, more metal-rich gas tends to be hotter. It is important to note that there is significant scatter in the simulation results: even for similar exploding stars, the effective dilution mass can vary by many orders of magnitude. The cases with the largest effective dilution masses are usually cases of external or inefficient internal enrichment. We conclude that, to the best of our knowledge and the current state of modelling, our estimate provides a useful limit on the mixing and dilution of metals even in the presence of inhomogeneous mixing.
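Eq. \eqref{eq:ZMass} can also be inverted to obtain the effective dilution mass implied by a simulated (or observed) metallicity, as done for Fig. \ref{fig:sim_comp}. A minimal Python sketch (the helper names are ours; the numbers are those quoted above for the Ritter et al. and \citet{Greif10} simulations):

```python
import math

Z_SUN = 0.0142  # solar metallicity (Asplund et al. 2009)

def z_max(m_met, m_dil_min):
    """Maximum expected metallicity (mass fraction), Eq. (Z_max)."""
    return m_met / m_dil_min

def effective_dilution_mass(m_met, z_over_zsun):
    """Invert Eq. (Z_max): dilution mass implied by a given metallicity."""
    return m_met / (z_over_zsun * Z_SUN)

# Ritter et al.: E51 = 1 CCSN with 4 Msun of metals, M_dil,min ~ 1.9e4 Msun
print(math.log10(z_max(4.0, 1.9e4) / Z_SUN))  # roughly -1.8 dex

# Greif et al. (2010): PISN with 100 Msun of metals, recollapse at ~1e-3 Zsun
m_eff = effective_dilution_mass(100.0, 1e-3)
m_min = 1.9e4 * 10.0 ** 0.96  # minimum dilution scaled to E51 = 10
print(m_eff / m_min)  # ratio well above one, consistent with the limit
```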
\subsection{Bayesian fitting} Here we briefly discuss how the derived limit on mixing can be implemented in abundance fitting codes. For this purpose we create an algorithm that fits observed abundances by comparing them to the modelled SN yields from \citet{HegerWoosley2010}. The yields of the SNe generally depend on the progenitor mass ($M_\mathrm{prog}$), the explosion energy ($E_{51}$ in units of $10^{51}\,\mathrm{erg}$) as well as a mixing factor ($f_\mathrm{mix}$). For matching observed and modelled abundances, we use the SN yields and analysis tools provided with \textsc{starfit} and supplement them with a generic Bayesian fitting approach. A general description of Bayesian parameter estimation can be found in \citet{BailerJones2017}. We first compute the likelihoods $L_i (x_i|M)$ that a model $M$, which predicts the abundances $y_i$, results in the observed abundances $x_i$, \begin{equation} L_i(x_i|M) = \exp\left(-\frac{(x_i-y_i)^2}{2\sigma_i^2}\right), \end{equation} where $i$ is any of the observed elements and $\sigma_i$ is the error of the observations. The normalization is left arbitrary for now. This likelihood calculation implicitly assumes that the errors follow a Gaussian distribution. While it is not clear whether this assumption is valid, it is commonly made when fitting SN models to observed abundances \citep[e.g.][]{HegerWoosley2010, Ishigaki18, Ezzeddine19}. Computing the modelled abundances $y_i$ requires, as discussed above, a usually arbitrary dilution mass $M_\mathrm{dil}$. If $M_i$ is the ejected mass of element $i$, which has a mass number of $\mu_i$, the model abundance is \begin{equation} y_i = \log_{10} \left(\frac{M_i}{\mu_i\,X_\mathrm{H} M_\mathrm{dil}}\right) - \log_{10} \left(\frac{N_{i,\odot}}{N_{\mathrm{H},\odot}}\right), \end{equation} where $X_\mathrm{H}=0.754$ is the hydrogen abundance of primordial gas \citep{Planck2015}.
The respective solar fractions of the element $i$ and of hydrogen are $N_{i,\odot}$ and $N_{\mathrm{H},\odot}$. We iteratively adjust the dilution mass for each model until we find the dilution that gives the maximum final likelihood according to Eq. \eqref{eq:comb_L}. The dilution mass is picked individually for each model, but within each model the same dilution mass is used for every element. This choice comes from our assumption that each element mixes in the same way. This assumption is commonly made for SN fitting, as without it the SN yields would not be representative of the elements found in the second generation stars. However, in many cases elements are not detected and only upper limits on their abundance can be derived. These upper limits need to be treated simultaneously with the detections. For this, we assume that the upper limits are strict (i.e.\ the likelihoods are Heaviside step-functions $\Theta$) combined with a Gaussian error on where exactly this limit is. These assumptions lead to a likelihood $L_i(x_i|M)$ for an upper limit of $x_i$ in element $i$ of: \begin{equation} \begin{split} L_i(x_i|M) &= \int_{-\infty}^{\infty} \Theta(x_i-z_i) \exp\left(-\frac{(z_i-y_i)^2}{2\sigma_i^2}\right) \mathrm{d} z_i\\ &=\int_{-\infty}^{x_i} \exp\left(-\frac{(z_i-y_i)^2}{2\sigma_i^2}\right) \mathrm{d} z_i\\ &= \sqrt{\frac{\pi}{2}}\sigma_i \left[1 + \mathrm{erf} \left(\frac{x_i-y_i}{\sqrt{2}\sigma_i} \right)\right],\\ \end{split} \label{eq:L_lim} \end{equation} where $\mathrm{erf}$ is the Gaussian error function. The likelihoods of the individual elements can then be combined by multiplication: \begin{equation} L(x|M) = \prod_{i} L_i(x_i|M). \label{eq:comb_L} \end{equation} The same approach to computing fit likelihoods was used in, e.g., \citet{Fraser17}. In cases where there are only detections and no upper limits, maximizing this likelihood is equivalent to minimizing $\chi^2$.
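The likelihood construction above can be summarized in a short Python sketch (an illustrative reimplementation of ours, not the \textsc{starfit}-based code used for the actual fits; note that the upper-limit integral is the cumulative Gaussian up to the limit):

```python
import math

def model_abundance(m_i, mu_i, m_dil, log_solar_ratio, x_h=0.754):
    """Model abundance y_i for element i: ejected mass m_i (Msun) of an
    element with mass number mu_i, diluted with m_dil (Msun) of gas."""
    return math.log10(m_i / (mu_i * x_h * m_dil)) - log_solar_ratio

def likelihood_detection(x_i, y_i, sigma_i):
    """Unnormalized Gaussian likelihood for a detected abundance x_i."""
    return math.exp(-(x_i - y_i) ** 2 / (2.0 * sigma_i ** 2))

def likelihood_upper_limit(x_i, y_i, sigma_i):
    """Gaussian integrated up to the limit x_i, i.e.
    sqrt(pi/2) * sigma * (1 + erf((x_i - y_i) / (sqrt(2) * sigma)))."""
    return math.sqrt(math.pi / 2.0) * sigma_i * (
        1.0 + math.erf((x_i - y_i) / (math.sqrt(2.0) * sigma_i)))

def combined_likelihood(obs, model):
    """Product over elements; obs[i] = (value, sigma, is_upper_limit),
    model[i] = predicted abundance y_i."""
    like = 1.0
    for (x, sig, is_lim), y in zip(obs, model):
        like *= likelihood_upper_limit(x, y, sig) if is_lim \
            else likelihood_detection(x, y, sig)
    return like
```

In the limit of the model lying far below an observed upper limit, the upper-limit likelihood saturates at the full Gaussian integral $\sqrt{2\pi}\,\sigma_i$, and far above it the likelihood vanishes, as expected.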
This way of combining likelihoods implicitly assumes that the errors of all abundance determinations are uncorrelated. Especially for errors from uncertainties in the determination of stellar parameters, this may not be true \citep{McWilliam95}. This is because all low-excitation lines arising from neutral minority species tend to have similar sensitivity to the effective temperature, which typically dominates the error budget. However, we aim only to show the importance of constraining the dilution of SN ejecta, and a complete treatment of the error distributions and dependencies of abundance determinations exceeds the scope of the current investigation. If we assign each model in the SN library the same prior probability, we can further compute the probability of each model $M$ given the observations $x$ by \begin{equation} P(M|x) = N \,L(x|M), \end{equation} where $N$ is a normalization constant chosen such that \begin{equation} \sum_M P(M|x) = 1. \end{equation} \section{Application to observations} In this section we demonstrate in three cases why it is important to consider the dilution when fitting abundances of metal-poor stars. Firstly, we will show that it can help to break degeneracies in a fit; secondly, that it may systematically change properties of large fitted samples of stars; and thirdly, that for some stars there may not be a viable single-progenitor scenario to explain the observed abundance patterns. \subsection{Example 1: The progenitor of HE~0020-1741} \label{sect:deg} To investigate the impact of the minimum dilution mass on abundance fitting, we first fit the CEMP-no star HE~0020-1741 ($[\mathrm{Fe}/\mathrm{H}] = -3.6$). \citet{Hansen19} have determined abundances for 13 elements (C, N, O, Mg, Ca, Sc, Ti, Cr, Mn, Ni, Fe, Sr, Ba). As the yields from \citet{HegerWoosley2010} do not include \textit{r}- and \textit{s}-process elements, Sr and Ba are excluded from the fits. Because Sc is generally underpredicted in the models, it is treated as an upper limit.
\begin{figure} \includegraphics[width=\linewidth]{img2.pdf} \caption{\label{fig:post_M} Prior (blue) and posterior distribution of the progenitor mass of HE~0020-1741. We show the posterior for unconstrained (orange) and constrained (green) dilution. With unconstrained dilution there is a bimodal distribution of progenitor masses, whereas with constrained dilution only high-mass stars match the observed abundances.} \end{figure} We show the prior and the posterior distribution of stellar masses in Fig. \ref{fig:post_M}. The prior is bottom heavy, as there are many more models of low-mass SNe in the libraries than there are models of high-mass SNe. This could potentially bias fitting results towards lower masses. We perform the fits with unconstrained and with constrained dilution factors. In the former case, we choose the dilution factors to maximize the combined likelihoods as defined in Eq. \eqref{eq:comb_L}. In the latter case, we only allow dilution factors above our analytical limit. For the unconstrained dilution we find a bimodal posterior: there are solutions with progenitor masses either around $M_*\approx25\,{\ensuremath{\mathrm{M}_{\sun}}}$ or at $M_*\approx80\,{\ensuremath{\mathrm{M}_{\sun}}}$, which we will refer to as ``low-mass'' and ``high-mass'' here. In Fig.~\ref{fig:HE-bestfit} we show the abundance pattern produced by the best-fitting model from each of these branches. Both models seem to fit approximately equally well. However, if we consider only the constrained dilution case, only the high-mass progenitors still fit. The low-mass progenitors do not produce enough metals to explain the metal abundances of HE~0020-1741 with only a single SN. Thus, if HE~0020-1741 is to be explained with a single progenitor SN, it should be a massive star with $70\,{\ensuremath{\mathrm{M}_{\sun}}}<M_* < 80\,{\ensuremath{\mathrm{M}_{\sun}}}$ for the \citet{HegerWoosley2010} yields.
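The difference between the unconstrained and the constrained fit amounts to restricting the search range of the dilution mass. A schematic Python sketch of this step (the logarithmic grid search and toy inputs are our simplification; the real fits use the \citet{HegerWoosley2010} yield grids):

```python
import math

def best_dilution(obs, yields, m_dil_min=None, grid=None):
    """Pick the dilution mass (in Msun) that maximizes the combined
    likelihood for one SN model.
    obs:    list of (x_i, sigma_i) observed abundances and errors
    yields: list of (M_i, mu_i, log_solar_ratio) per element
    If m_dil_min is given, only dilution masses above the limit are allowed."""
    if grid is None:
        # logarithmic grid from 1e2 to 1e8 Msun in 0.01 dex steps
        grid = [10.0 ** (2.0 + 0.01 * k) for k in range(600)]
    best_m, best_logl = None, -math.inf
    for m_dil in grid:
        if m_dil_min is not None and m_dil < m_dil_min:
            continue  # enforce the minimum dilution limit
        logl = 0.0
        for (x, sig), (m_i, mu_i, sol) in zip(obs, yields):
            # model abundance y_i for this dilution mass (X_H = 0.754)
            y = math.log10(m_i / (mu_i * 0.754 * m_dil)) - sol
            logl += -(x - y) ** 2 / (2.0 * sig ** 2)
        if logl > best_logl:
            best_m, best_logl = m_dil, logl
    return best_m, best_logl
```

With detections only, the likelihood decreases monotonically once the dilution mass moves past its unconstrained optimum, so enforcing the limit pins a low-yield model to the smallest allowed dilution mass, typically worsening its fit.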
Note, however, that it may be possible to find additional or better fits with different yield sets \citep[e.g.][]{Limongi12, Ishigaki18, Grimmett18}. As the aim here is to show the usefulness of the dilution limit in constraining fits, a comparison of these different yield sets exceeds the scope of this study. \begin{figure} \includegraphics[width=\linewidth]{img3.pdf} \caption{\label{fig:HE-bestfit} Best-fitting abundance patterns for HE~0020-1741. We show the observed pattern and the patterns of the best-fitting models with low-mass and high-mass progenitors. If only the abundance ratios are considered, both give an equally plausible fit, yet constraining the dilution rules out a single low-mass star as a progenitor.} \end{figure} \subsection{Example 2: Large sample fitting} \label{sect:ex2} \citet{Ishigaki18} fitted the abundances of 201 EMP stars by picking the best-fitting SN model for each of these stars. The compiled sample of stars has been selected to consist only of stars with determined abundances for the elements C, N, O, Na, Mg, Al, Si, Ca, Sc, Ti, Cr, Mn, Fe, Co, Ni, and Zn based on spectroscopic data with a resolution of at least $R=28000$. These observed abundances were compared to SN models which were computed over a grid of stellar masses and explosion energies, as well as three parameters that quantify the properties of the mixing-and-fallback process. Details on the sample selection and the SN modelling can be found in \citet{Ishigaki18}. Each star in the sample was associated with a best-fitting model, based on $\chi^2$ minimization. In this paper, we use the explosion energies of their best-fit models to compute minimum dilution masses for each of the SN models. In Fig. \ref{fig:comparison} we show the ratio between these minimum dilution masses and the dilution masses from the fits for all 201 stars. Of these 201 best-fitting models, 128 violate our derived limit and 43 do so by more than a factor of four.
Notably, there is no apparent correlation between $\chi^2$ and the dilution mass. Thus, whether the dilution factor found by fitting is physical is unrelated to the goodness of fit. \begin{figure} \includegraphics[width=\linewidth]{img4.pdf} \caption{\label{fig:comparison} Ratio of the minimum dilution mass and the dilution mass derived in \citet{Ishigaki18} as a function of the reduced $\chi^2$, as well as histograms of both values. We show the original fits from \citet{Ishigaki18} (orange) as well as re-fits in which the minimum dilution limit is enforced (green). In some of the original fits, the dilution ratio is very low (down to $\sim 10^{-7}$) and therefore outside of the boundaries of this figure. These stars are included in the lowest bin of the histogram.} \end{figure} For comparison, we have repeated the fitting procedure from \citet{Ishigaki18} while at the same time enforcing the dilution limit. This means that all fits with too small dilution masses are rejected and instead the best-fitting model that fulfils the dilution limit is picked. This leads to a significant increase in the mean (median) $\chi^2$ from 16 (13) in the unconstrained case to 24 (15) in the constrained case. For many stars, the best fit with the dilution constraint becomes worse than that without it. In Fig. \ref{fig:prog_mass_hist} we show that this re-fitting leads to significant changes in the distribution of best-fitting progenitor masses. The most notable difference is that progenitors with stellar masses of 25 and 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ are now much rarer and progenitors with 15\,{\ensuremath{\mathrm{M}_{\sun}}}\ more common. The reason for this is that many of the previously common 25 and 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ models were hypernovae (HNe) with a high explosion energy and a large fallback fraction. These models have relatively low absolute yields but, due to their large explosion energies, large predicted minimum dilution masses in spherical symmetry.
Therefore, such models are not able to reproduce the relatively large carbon abundances of many CEMP-no stars when taking the dilution constraint into account. \begin{figure} \includegraphics[width=\linewidth]{img5.pdf} \caption{\label{fig:prog_mass_hist} Distribution of progenitor masses from \citet{Ishigaki18} as well as our repetition of the fits which takes the minimum dilution criterion into account. There are only bins at 13, 15, 25, 40 and 100\,{\ensuremath{\mathrm{M}_{\sun}}}\ as these are the only progenitor masses of the SN models. We separate SNe by explosion energy into low-energy SNe ($E_{51}<1$), CCSNe ($E_{51}=1$) and hypernovae (HNe, $E_{51} \ge 10$).} \end{figure} We note that the prescription of faint SNe used in \citet{Ishigaki18} is chosen to reproduce the angle-averaged yields of aspherical jet SNe \citep{Tominaga09}. Our dilution model, however, does not apply to such SNe if their asphericity is preserved. In the prescription used, only the total yields are considered. Even if the abundance distribution in the ejecta is strongly aspherical, this approximation assumes that the SN yields are mixed and the angular variations are washed out during later phases of the expansion of the SN. In principle, the mixing behaviour in aspherical SNe can be very different from our approximation if the metal yield per unit energy shows strong angular variations. Additionally, the aspherical SNe from \citet{Ishigaki18} have systematically larger, and can have much larger, explosion energies (which enter Eq. \eqref{eq:M_dil}) than their 2D counterparts with similar yields \citep{Tominaga09}. This further limits the applicability of our model to these SNe. Our results here suggest that developing realistic models for the dilution of heavy elements produced in aspherical SNe is of vital importance for fitting large samples of stars, not just individual cases.
\subsection{Example 3: No spherical progenitor for SMSS0313-6708} As we saw above that stars with high carbon and low iron abundances are particularly strongly affected by applying the dilution criterion, we look in more detail at a pathological example of such a star, i.e., SMSS0313-6708 \citep{Keller14}. The star is known for having no detected iron, with an upper limit of $\mbox{[Fe/H]}<-7.1$. Here we use abundances that are based on 3D atmospheric models that do not assume local thermodynamical equilibrium (3D NLTE) for Na, Mg, Al, Ca, and Fe from \citet{Nordlander17}. For these elements, statistical and systematic errors are provided, which we add in quadrature. The systematic errors are typically on a level of 0.1~dex. The remaining abundances are taken from \citet{Bessel15} and are based on 3D LTE models for C, N, and O and on 1D LTE models for Si, Sc, Ti, V, Cr, Mn, Co, Ni, and Cu. Notably, most of these elements are not detected and only upper limits on their abundance have been derived. \citet{Bessel15} only give statistical but no systematic errors for their abundance determinations. Because we want to avoid biasing our results towards abundances with an unaccounted-for source of error, we add a systematic error of 0.1~dex to the abundance determinations from \citet{Bessel15}. \begin{figure} \includegraphics[width=\linewidth]{img6.pdf} \caption{\label{fig:Keller_fit}Best-fitting models for SMSS0313-6708. We show the unconstrained (orange) and the constrained (green) best-fitting model. We find the same best-fitting model as \citet{Bessel15}. For reference we also show the best-fitting model from \citet{Keller14}. We mark with different symbols whether abundances have been derived in 1D LTE (diamonds), 3D LTE (squares), or in 3D NLTE (circles). For better representation, we shift the upper limits by one standard deviation, such that the upper end of the error bar corresponds to a discrepancy at the 84 per cent confidence level.}
\end{figure} We fit the abundances with the same procedure as described in Sect. \ref{sect:deg}. The best constrained and unconstrained models are shown in Fig. \ref{fig:Keller_fit}. For better graphical representation we shift the upper limits by one standard deviation. Thus, according to Eq.~\eqref{eq:L_lim} the upper end of the error bar represents a discrepancy at the 84 per cent confidence level, and a value $1\sigma$ above the upper limit corresponds to a discrepancy at the 98 per cent confidence level. Even with unconstrained dilution, we find no model that produces a convincing fit of the abundance patterns. The best-fitting model overproduces Na, C and Si. There are three features in the abundances that are difficult to fit simultaneously: \begin{enumerate} \item the CNO pattern with high C and O but very low N, \item the low upper limit on Na with the detection of a large amount of Mg, and \item the detection of Ca in conjunction with the low upper limits on Al and Si. \end{enumerate} The difficulty involved in reproducing all three of these features may partially be related to the grid of models not containing a sufficiently large variety of SN explosion energies. We note that none of the elemental abundances that have been derived in 1D LTE play a critical role in constraining the models. All 1D LTE abundances are only upper limits that lie well above the best-fitting models. It is still unclear how different the C and O abundances would be in a 3D NLTE analysis, but they would need to differ from 3D LTE by approximately 1 dex in order for us to find SN models with matching abundances. \citet{Nordlander17} were able to fit the observed pattern by interpolating the model abundance patterns as a function of the explosion energy. However, the result of such a procedure is potentially sensitive to the way the interpolation is done. We therefore decided against interpolating to a finer grid here.
Both the unconstrained best-fitting model and the best-fitting model from \citet{Keller14} violate the dilution limit by around two orders of magnitude. They would require the SN ejecta to be diluted with less than 500\,{\ensuremath{\mathrm{M}_{\sun}}}\ of pristine material. The best-fitting model that fulfils the dilution limit is clearly inconsistent with the observed upper limits of N and Na. In \citet{Ishigaki14}, this star was best fitted with $25\,{\ensuremath{\mathrm{M}_{\sun}}}$ and $40\,{\ensuremath{\mathrm{M}_{\sun}}}$ SNe or HNe (jet-induced, aspherical, and energetic SNe), where Ca is produced by static/explosive O burning and incomplete Si burning, in contrast to the explanation in \citet{Keller14}. Of these models, the SNe are consistent with our dilution limit, whereas the HNe are neither compatible with the dilution criterion nor with the updated upper limit on Si that we use. The fits presented in \citet{Ishigaki18} are compatible both with the abundance pattern we use and with our dilution limit. \citet{Chen17} model potential progenitor SNe for SMSS0313-6708 in one and two dimensions. Of their models, only the two dimensional model of the SN of a 60\,{\ensuremath{\mathrm{M}_{\sun}}}\ Pop~III star is consistent with our dilution limit. As Na and Al are not modelled, however, it is unclear whether this model is able to reproduce their observed upper limits. \citet{Chan20} performed full 3D SN simulations of the progenitor of SMSS0313-6708 using a 40\,{\ensuremath{\mathrm{M}_{\sun}}}\ star, as suggested by \citet{Bessel15}, with asymmetric explosions of low and high energy. Their nucleosynthesis has the same constraints as that of \citet{Chen17}, and the explosion was not followed beyond shock breakout. The low-energy model does not produce any significant amount of metals, while the high-energy model produces too much iron if spherically averaged.
\section{DISCUSSION AND SUMMARY} We have introduced an analytical limit for the dilution of metals produced by a single SN under the following three assumptions: \begin{itemize} \item the SN being alone and isolated, \item the explosion being spherical and well-mixed, and \item the surrounding medium being homogeneous. \end{itemize} The first two assumptions are commonly made in previous works when comparing observed abundance patterns to SN yields, because if they are not fulfilled the total elemental yields from a single SN cannot be representative of the stellar abundance pattern. For the last assumption we compared this limit to all hydrodynamical simulations of metal enrichment in high-redshift minihaloes that we are aware of and that include the details and resolution needed for a comparison. We found that, despite assuming homogeneity, the limit is consistent with all of these simulations. We demonstrated that previous fits were often inconsistent with our understanding of metal dilution and mixing on the scale of minihaloes. Including our dilution criterion in fitting procedures for abundance patterns can have important consequences for the conclusions drawn: \begin{enumerate} \item Considering the dilution can help to break degeneracies in progenitor models of individual stars. \item The limit does not just affect individual stars but can also change the properties of large samples of progenitor models. In particular, low-yield SNe are disfavoured if constraints on the dilution are taken into account. \item It may be difficult to explain certain stars, such as SMSS0313-6708, by enrichment from a single, spherical SN if the dilution is taken into account.
The best-fitting models that have been put forward by \citet{Keller14} and \citet{Bessel15} explain the rough shape of the observed abundance ratios, but with an implicit dilution mass that is too small by approximately two orders of magnitude; the yields from these SNe are too small to explain the absolute metal abundances. \citet{Ishigaki14, Ishigaki18} find fits to the abundance pattern that are consistent with our dilution criterion. \end{enumerate} During the preparation of this manuscript, \citet{Komiya20} derived a similar estimate for the minimal dilution and implemented it into a semi-analytical model of the formation of the Milky Way. While we apply this estimate to the exploration of progenitor scenarios of individual stars, \citet{Komiya20} focus on the chemical evolution of the Milky Way and in particular on whether the overall population of CEMP-no stars can be reproduced. They find it difficult to reproduce the prevalence of large carbon abundances in the lowest metallicity stars with faint SNe. This tension between the mixing-and-fallback SN model and the large observed carbon abundances is consistent with our findings. The minimum dilution estimate can serve to evaluate whether a single, spherical SN is a viable progenitor scenario for a given star. This test may be less reliable, if applicable at all, for asymmetric SNe or cases with several SNe in one halo. In asymmetric SNe, a large fraction of the metals can be ejected along jets \citep{Tominaga09}. Evidence for such SNe has recently been found by \citet{Ezzeddine19}. The dilution and recollapse occurring after such SNe are yet to be explored by numerical simulations. Altogether, we conclude that for the adequate astrophysical interpretation of the observed elemental abundances in extremely metal-poor stars, both the relative abundance patterns and the absolute abundance values need to be taken into account.
Only then can reliable and well-founded constraints on the properties of the preceding generation of stars be derived. Given that simple spherically symmetric models often fail to match the dilution mass constraint introduced here, we furthermore conclude that the effects of aspherical supernovae, the impact of inhomogeneous mixing in a highly structured interstellar medium, and the combined yields of multiple supernovae require further investigation. \section*{Acknowledgements} The authors would like to thank Nozomu Tominaga for very productive discussions and comments. In preparation of this manuscript, the software packages \textsc{F2PY} \citep{f2py}, \textsc{NumPy} \citep{numpy}, \textsc{matplotlib} \citep{matplotlib} and \textsc{SciPy} \citep{SciPy} were used. MM was supported by the Max-Planck-Gesellschaft via the fellowship of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). SCOG and RSK acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG) -- Project-ID 138713538 -- SFB 881 (``The Milky Way System'', sub-projects A1, B1, B2 and B8). Further financial support was provided by the DFG via the Heidelberg Cluster of Excellence {\em STRUCTURES} in the framework of Germany’s Excellence Strategy (grant EXC-2181/1 - 390900948). AH was supported in part by the National Science Foundation under Grant No.\ PHY-1430152 (JINA Center for the Evolution of the Elements), by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), through project number CE170100004, by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, and by a grant from the Science and Technology Commission of Shanghai Municipality (Grant No.\ 16DZ2260200) and the National Natural Science Foundation of China (Grant No.\ 11655002). C.K.
acknowledges funding from the UK Science and Technology Facilities Council (STFC) through grants ST/M000958/1 and ST/R000905/1. KN has been supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and JSPS KAKENHI Grant Numbers JP17K05382 and JP20K04024. \section*{Data Availability} No new data were generated in support of this research. The SN models from \citet{HegerWoosley2010} are available at \url{http://2sn.org}. For the availability of the SN models or the sample of stars used in Section~\ref{sect:ex2}, please inquire with the authors of \citet{Ishigaki18}. \bibliographystyle{mnras}
\section{Introduction} \label{sec:introduction} The ultra-low-luminosity source at the center of the Milky Way, Sagittarius A$^{*}$ (Sgr A$^{*}$), is thought to be powered by accretion onto a supermassive black hole. Sgr A$^{*}$ radiates well below the Eddington limit and there is strong evidence that the accreting gas can be described as an advection-dominated accretion flow (ADAF, also referred to as a radiatively inefficient accretion flow, RIAF) \citep{Narayan1994,Narayan1995,Narayan1995a,Abramowicz1995,Narayan2008,Yuan2014}. In ADAFs, the disk is geometrically thick and optically thin. Additionally, the plasma is predicted to be two-temperature for several reasons: first, in the ADAF configuration, the density of accreting gas is low enough that Coulomb collisions between electrons and protons are extremely rare on accretion timescales, so that the species become thermally decoupled. Second, electrons radiate more efficiently than protons. Lastly, relativistic electrons are heated less than non-relativistic protons when subjected to the same adiabatic compression. For all these reasons, the plasma is expected to be two-temperature, with protons significantly hotter than electrons \citep{Narayan1995,Yuan2003}. Despite the above arguments, the two-temperature gas may be driven to a single-temperature state by kinetic processes, such as reconnection and instabilities \citep{Quataert2002,Riquelme2012,Riquelme2015,Sironi2015,Sironi2015a,Werner2016}. To capture the effects of these plasma processes, one requires a fully-kinetic description, which can be achieved via numerical techniques such as particle-in-cell (PIC) simulations. In principle, such \textit{ab initio} simulations can be used to provide the necessary sub-grid physics that, to date, cannot be captured in magnetohydrodynamic (MHD) simulations \citep[e.g.,][]{Ressler2015,Ressler2017,Ball2016,Ball2017, Chael2017, Sadowski2017}. 
In supermassive black hole accretion flows, the ratio of ion thermal pressure to magnetic pressure, \begin{align} \beta_{\rm i} = \frac{8 \pi n_0 k_{\rm B} T_{\rm i} }{B_{0}^{2}}, \end{align} (where $n_0$ is the ion number density, $k_{\rm B}$ is Boltzmann's constant, $T_{\rm i}$ is the ion temperature, and $B_{0}$ is the magnitude of the magnetic field) is expected to vary in the disk midplane in the range $\beta_{\rm i} \sim 10$ -- $30$ \citep[See Fig. 1 of ][]{Sadowski2013}. However, in plasma far above and below the midplane, the ``corona,'' the system is expected to be magnetically dominated, such that $\beta_{\rm i}\lesssim 1$. Here, the dissipation of magnetic energy via reconnection can result in particle heating, acceleration, and bulk motion. Even in the magnetized corona, the magnetization, \begin{align} \label{eq:sigmai} \sigma_{\rm i} = \frac{B_{0} ^{2}}{4 \pi n_0 m_{\rm i} c^{2}}, \end{align} is generally small, i.e.,~$\sigma_{\rm i} \lesssim~1$. Electron heating by reconnection in the non-relativistic limit ($\sigma_{\rm i} \ll 1$) has been studied extensively, both theoretically and by means of PIC simulations, in the context of the solar wind, Earth's magnetotail, and laboratory plasmas \citep{Hoshino2001,Jaroschek2004,Loureiro,Schoeffler2011,Schoeffler2013,Shay2014,Dahlin2014,Daughton2014,Li2015b, Haggerty2015, Numata2015, Le2016, Li2017}. Though less commonly studied, relativistic reconnection (i.e., ~$\sigma_{\rm i} \gg~1$) in electron-proton plasmas has also received some attention in recent years \citep{Sironi2015c,Guo2016a}. The collisionless plasma in hot accretion flows around black holes provides a peculiar environment for reconnection, since $\sigma_{\rm i} \lesssim 1$, a regime that falls between the well-studied non-relativistic and ultra-relativistic regimes. For $\beta_{\rm i}\sim 1$ and $\sigma_{\rm i} \lesssim 1$, protons are generally non-relativistic, yet electrons can be ultra-relativistic. 
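Because $\beta_{\rm i}$ and $\sigma_{\rm i}$ are built from the same $n_0$ and $B_0$, they are linked through the dimensionless proton temperature $\theta_{\rm i}=k_{\rm B}T_{\rm i}/m_{\rm i}c^2$ by $\beta_{\rm i}=2\theta_{\rm i}/\sigma_{\rm i}$, which follows directly from the two definitions above. A one-line sketch of this identity (the numerical values are illustrative only):

```python
def beta_i_from(theta_i, sigma_i):
    """Combining beta_i = 8*pi*n0*k_B*T_i / B0^2 with
    sigma_i = B0^2 / (4*pi*n0*m_i*c^2) gives beta_i = 2*theta_i/sigma_i."""
    return 2.0 * theta_i / sigma_i

# e.g. a corona-like case: sigma_i ~ 0.1 and mildly warm protons
print(round(beta_i_from(5e-4, 0.1), 6))  # 0.01
```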
This territory remains largely unexplored, in terms of both simulation and theory, and studies have only recently begun to probe reconnection in this parameter regime \citep{Melzani2014,Werner2016}. The aim of this work is to explore particle heating via magnetic reconnection in the trans-relativistic regime $\sigma_{\rm i} \lesssim 1.$ We study heating in the outflows of anti-parallel reconnection (i.e., in the absence of a guide field perpendicular to the alternating fields) by means of fully-kinetic PIC simulations, choosing inflow parameters appropriate for the coronae of collisionless accretion flows. We present the electron and proton heating as a function of mass ratio (up to the physical value), inflow magnetization, ion plasma $\beta_{\rm i}$ and temperature ratio $T_{\rm e}/T_{\rm i}$. We show that heating in the high-$\beta_{\rm i}$ regime is primarily dominated by adiabatic compression (we shall call this contribution ``adiabatic heating''), while for low $\beta_{\rm i}$ the heating is genuine, in the sense that it is associated with an increase in entropy (``irreversible heating''). At our fiducial $\sigma_{\rm i}\sim 0.1$, we find that for $\beta_{\rm i}\lesssim 1$ the irreversible heating efficiency is independent of $T_{\rm e}/T_{\rm i}$ (which we vary from $0.1$ up to $1$). For equal electron and proton temperatures, the fraction of inflowing magnetic energy converted to electron irreversible heating at realistic mass ratios decreases from $\sim 1.6\%$ down to $\sim 0.2\%$ as $\beta_{\rm i}$ ranges from $\beta_{\rm i}\sim 10^{-2}$ up to $\beta_{\rm i}\sim 0.5$, but then it increases up to $\sim 3\%$ as $\beta_{\rm i}$ approaches $\sim2$. Protons are heated much more efficiently than electrons at low and moderate $\beta_{\rm i}$ (by a factor of $\sim7$), whereas the electron and proton heating efficiencies become comparable at $\beta_{\rm i}\sim 2$ if $T_{\rm e}/T_{\rm i}=1$, when both species start already relativistically hot. 
We find comparable heating efficiencies between the two species also in the limit of relativistic reconnection, when the magnetization exceeds unity. The unifying feature of these two cases (i.e., high magnetization, and high $\beta_{\rm i}$ at low magnetization) is that the scale separation between electrons and protons in the reconnection outflows approaches unity, so the two species behave nearly the same. Motivated by our findings, we propose an empirical formula (Eq.~\ref{eq:fit}) that captures the magnetization and plasma-$\beta_{\rm i}$ dependence of the electron heating efficiency (normalized to the overall electron + proton heating efficiency) over the whole range of magnetization and $\beta_{\rm i}$ that we explore. We also measure the inflow speed (i.e., the reconnection rate) as a function of the flow conditions, finding that for our fiducial magnetization $\sigma_w=0.1$ it decreases from $v_{\rm in}/v_{\rm A} \approx 0.08$ down to $0.04$ as $\beta_{\rm i}$ ranges from $\beta_{\rm i}\sim 10^{-2}$ up to $\beta_{\rm i}\sim 2$ (here, $v_{\rm A}$ is the Alfv\'en speed). Similarly, the outflow speed saturates at the Alfv\'{e}n velocity for low $\beta_{\rm i}$, but it decreases with increasing $\beta_{\rm i}$ down to $v_{\rm out}/v_{\rm A}\approx 0.7$ at $\beta_{\rm i}\sim2.$ The inflow (outflow, respectively) speed is independent of $T_{\rm e}/T_{\rm i}$ at low $\beta_{\rm i}$, with only a minor tendency for lower (higher, respectively) speeds at larger $T_{\rm e}/T_{\rm i}$ in the high-$\beta_{\rm i}$ regime. The organization of the paper is as follows. In Section \ref{sec:setup}, we provide details about the simulation setup and parameters. In Section \ref{sec:technique}, we discuss our technique for extracting from PIC simulations the heating efficiencies. In Section \ref{sec:results}, we discuss the dependence of the reconnection rate, the outflow speed and the electron and proton heating efficiencies on the flow conditions. 
We conclude in Section \ref{sec:conclusion}, with a summary and discussion. \section{Simulation setup} \label{sec:setup} We use the electromagnetic PIC code \texttt{TRISTAN-MP} to perform fully-kinetic simulations of reconnection \citep{Buneman1993,Spitkovsky2005}. We employ two-dimensional (2D) simulations, but all three components of velocity and electromagnetic fields are tracked. Our setup is similar to that described in \citet{Sironi2014}. The initial field configuration is illustrated in Fig.~\ref{fig:2drecbox}. From the red to the blue region, the polarity of the inflow magnetic field reverses, as shown by the white arrows. An out-of-plane current, in the green region, satisfies Ampere's law for the curl of the magnetic field. The reconnection layer is initialized in Harris equilibrium, with a magnetic field profile $\mathbf{B}=B_{0} \tanh(2 \pi y/\Delta)\, \mathbf{\hat{x}}$. We focus on anti-parallel reconnection, postponing the study of guide field effects to a future work. The field strength is parameterized via the magnetization, \begin{align} \sigma_{w} &= \frac{B_{0}^{2}}{4 \pi w}, \label{eq:sigmaw} \end{align} where $B_{0}$ is the magnitude of the magnetic field in the inflow region, $w=(\rho_{\rm e} + \rho_{\rm i}) c^{2} + \hat{\gamma}_{\rm e} u_{\rm e} + \hat{\gamma}_{\rm i} u_{\rm i}$ is the enthalpy density per unit volume, and $\rho_{\rm e}=m_{\rm e}n_0$, $\rho_{\rm i}=m_{\rm i}n_0$, $\hat{\gamma}_{\rm e}$, $\hat{\gamma}_{\rm i}$, and $u_{\rm e}$, $u_{\rm i}$ are the rest mass densities, adiabatic indices, and internal energy densities, respectively, of electrons and protons. Here, $n_{0}$ is the electron number density in the inflow region, $m_{\rm e}$ and $m_{\rm i}$ are the electron and proton masses. The definition of magnetization in Eq. \ref{eq:sigmaw} reduces to Eq. \ref{eq:sigmai} in the limit of non-relativistic temperatures, but for relativistic particles the enthalpy in $\sigma_{w}$ properly accounts for the relativistic inertia. 
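As a sketch of Eq.~\ref{eq:sigmaw} and its cold limit: setting the internal energies to zero reduces the enthalpy to the rest-mass term, so $\sigma_w \to \sigma_{\rm i}\,m_{\rm i}/(m_{\rm i}+m_{\rm e}) \approx \sigma_{\rm i}$. The numbers below (mass ratio 25, $\sigma_{\rm i}=0.1$) mirror the fiducial runs; $n_0$, $c$ and the masses are in arbitrary consistent units:

```python
import math

def sigma_w(B0, n0, m_e, m_i, u_e, u_i, ghat_e, ghat_i, c=1.0):
    """Eq. (sigma_w): magnetization defined with the full enthalpy density.
    ghat_e, ghat_i are the adiabatic indices (e.g. 5/3 non-relativistic,
    4/3 ultra-relativistic); u_e, u_i are internal energy densities."""
    w = (m_e + m_i) * n0 * c**2 + ghat_e * u_e + ghat_i * u_i
    return B0**2 / (4.0 * math.pi * w)

# Cold limit check (u_e = u_i = 0): sigma_w = sigma_i * m_i / (m_i + m_e).
m_e, m_i, n0 = 1.0, 25.0, 1.0                    # mass ratio 25
B0 = math.sqrt(4.0 * math.pi * n0 * m_i * 0.1)   # chosen so sigma_i = 0.1
print(round(sigma_w(B0, n0, m_e, m_i, 0.0, 0.0, 5/3, 5/3), 4))  # 0.0962
```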
\begin{figure} \centering \includegraphics[width=0.5\textwidth,clip,trim=0cm 0cm 0cm 0.6cm]{2drecbox.pdf} \\ \caption{Schematic depiction of the reconnection layer initial configuration. Red and blue regions show magnetic field lines of opposite polarity. A hot, over-dense component of plasma (green region) balances the magnetic pressure outside the current sheet. \label{fig:2drecbox} \\} \end{figure} In all runs, we set the current sheet thickness to be $\Delta=40\,c/\omega_{\rm pe}$, where $c/\omega_{\rm pe}$ is the electron skin depth and \begin{align} \omega_{\rm pe}=\sqrt{\frac{4 \pi n_{0} e^{2}}{m_{\rm e}}}\left( 1 + \frac{\theta_{\rm e}}{\hat{\gamma}_{\rm e} - 1}\right)^{-1/2} \end{align} is the electron plasma frequency. Here, $\theta_{\rm e}=k_{\rm B} T_{\rm e}/m_{\rm e}c^2$ is the dimensionless electron temperature, whereas $e$ is the electric charge. The size of the computational domain in the $x$ direction is $L_{x}=4318\,c/\omega_{\rm pe},$ which is large enough to resolve both electron and proton heating physics (see Appendix \ref{sec:lxconvergence}, where we study the convergence of our results with respect to the domain size). While $L_{x}$ in units of $c/\omega_{\rm pe}$ remains fixed across our simulations, the domain size in units of the proton skin depth \begin{align}\label{eq:skindepth} \frac{c}{\omega_{\rm pi}}\! \approx \!\frac{c}{\omega_{\rm pe}} \!\sqrt{\frac{m_{\rm i}}{m_{\rm e}}}\! \left( \!1\!+ \!\frac{\theta_{\rm e}}{\hat{\gamma}_{\rm e} - 1}\right)^{-1/2}\! \left( \!1 \!+ \!\frac{\theta_{\rm i}}{\hat{\gamma}_{\rm i} - 1}\right)^{1/2}, \end{align} increases as electrons become more relativistic (see Tab.~\ref{tab:params}). Here, $\theta_{\rm i}=k_{\rm B} T_{\rm i}/m_{\rm i}c^2$ is the dimensionless proton temperature. We typically employ periodic boundary conditions along the $x$ direction, but we have tested that our main results do not change when using outflow boundary conditions, similar to those described in \citet{Sironi2016}.
With the latter, it is possible to study the dynamical evolution of the reconnection system over multiple Alfv\'enic crossing times, whereas the evolution of a periodic simulation is limited to a few Alfv\'enic crossing times, before the periodic boundaries start affecting the reconnection physics. We compare the results of simulations with outflow and periodic boundaries in Appendix \ref{sec:outvper}. Fresh plasma, described by a Maxwell-J\"{u}ttner distribution, is introduced at two moving injectors. Each injector recedes from $y=0$ at the speed of light, and the simulation domain is enlarged when the injectors reach the boundaries, so that the injectors may continue receding in the $\pm \mathbf{\hat{y}}$ directions. This strategy --- described in more detail in \citet{Sironi2011} --- ensures that the domain includes all causally connected regions throughout the evolution of the system, while making efficient use of the available memory and computing time. Additional computational optimization is achieved by allowing the injectors to periodically ``jump'' backwards (toward $y=0$), removing all particles beyond the injectors and resetting the electromagnetic fields to their initial values \citep{Sironi2011}. A hot, over-dense population of particles is initialized in the current sheet to balance the magnetic pressure from outside. These particles have temperature $k_{\rm B}T_{cs}/m_{\rm i} c^2=\sigma_{\rm i}/(2 \eta),$ where $\eta$ is the over-density relative to the inflowing plasma; we use $\eta = 3$. Reconnection is triggered at the initial time by cooling by hand the over-dense population in the middle of the current sheet $(x,y) \approx (0,0)$. This causes a local collapse of the layer, leading to the formation of an X-point, after which the system evolves self-consistently \citep{Sironi2016}. Adequate resolution of the electron skin depth $c/\omega_{\rm pe}$ is required for accuracy and stability of PIC codes.
We use 4 cells per electron skin depth, and fix $c=0.45$ cells/timestep, which is below the maximum allowed by the Courant-Friedrichs-Lewy condition in 2D. The time resolution of our simulations is then ${\Delta t \approx 0.1\,\omega_{\rm pe}^{-1}}$, which properly captures the physics at electron scales. For two cases ($\beta_{\rm i} = 0.0078$ and $\beta_{\rm i}=2$, with the same $\sigma_{w}=0.1$ and $T_{\rm e}/T_{\rm i}=1$), we have tested for convergence by varying the spatial resolution (we have tested with $c/\omega_{\rm pe} = 2$ or $8$ cells), which also changes the temporal resolution (we still fix $c=0.45$ cells/timestep). For both choices of $\beta_{\rm i}$, our results are essentially the same (see Appendix \ref{sec:compconvergence}, where we study the convergence of our results with respect to the spatial resolution of the electron skin depth). For simulations with $\beta_{\rm i} = 2,$ we use $64$ particles per cell ($N_{\rm ppc}$), whereas $N_{\rm ppc}=16$ at lower $\beta_{\rm i}$. We have found that these values of $N_{\rm ppc}$ are sufficient to keep numerical heating under control, even for $T_{\rm e}/T_{\rm i}\ll1$. We have extensively tested the impact of numerical heating in simulations with $\beta_{\rm i}=2$ for several values of $N_{\rm ppc},$ in some cases up to $N_{\rm ppc}=256$; see Appendix \ref{sec:ppc} for some discussion.
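The quoted time resolution follows from the grid parameters alone: light crosses one electron skin depth (4 cells) in a time $1/\omega_{\rm pe}$, i.e., in $4/0.45$ timesteps, so $\omega_{\rm pe}\Delta t = 0.45/4$:

```python
# Time resolution implied by the grid parameters stated above:
cells_per_skin_depth = 4.0    # c/omega_pe resolved with 4 cells
c_in_cells_per_step = 0.45    # numerical speed of light

# omega_pe * dt = (cells per step) / (cells per skin depth)
dt_in_inverse_wpe = c_in_cells_per_step / cells_per_skin_depth
print(round(dt_in_inverse_wpe, 4))  # 0.1125, i.e. dt ~ 0.1 / omega_pe
```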
\begin{table}\centering \ra{1.3} \begin{tabular}{@{}crrrrrr@{}}\toprule \multicolumn{1}{c}{\textbf{ID}} & \multicolumn{1}{c}{\textbf{A[0]}} & \multicolumn{1}{c}{\textbf{A[1]}} & \multicolumn{1}{c}{\textbf{A[2]}} & \multicolumn{1}{c}{\textbf{A[3]}} & \multicolumn{1}{c}{\textbf{A[4]}} & \\ \midrule $\beta_{\rm i}$ &0.0078 &0.031 &0.13 &0.50 & 2.0 &\\ $\beta_{\rm e}$ &0.00078 &0.0031 &0.013 &0.050 & 0.20 &\\ $\theta_{\rm i}$ &0.00041 &0.0016 &0.0066 & 0.028 & 0.16 &\\ $\theta_{\rm e}$ &0.0010 &0.0041 &0.017 & 0.070 & 0.39 &\\ $\upsilon_{\rm i}$ &0.00061 &0.0024 & 0.010 & 0.043 & 0.27 &\\ $\upsilon_{\rm e}$ &0.0015 & 0.0062 & 0.025 & 0.11 & 0.78 &\\ $\sigma_{\rm i}$ & 0.10 & 0.10 & 0.10 & 0.11 & 0.15 &\\ $T_{\rm e}/T_{\rm i}$ & 0.10 & 0.10 & 0.10 & 0.10 & 0.10 &\\ $N_{\rm ppc}$ & 16 &16 &16 &16 & 64 &\\ $c/ \omega_{\rm pi}$ & 20 & 20 & 20 & 19 & 16 &\\ $L_{x} [c/ \omega_{\rm pi}]$ &860 & 870 & 870 & 890 & 1100 &\\ \midrule \multicolumn{1}{c}{\textbf{ID}} & \multicolumn{1}{c}{\textbf{B[0]}} & \multicolumn{1}{c}{\textbf{B[1]}} & \multicolumn{1}{c}{\textbf{B[2]}} & \multicolumn{1}{c}{\textbf{B[3]}} & \multicolumn{1}{c}{\textbf{B[4]}} & \\ \midrule $\beta_{\rm i}$ &0.0078 &0.031 &0.13 &0.50 & 2.0 &\\ $\beta_{\rm e}$ &0.0023 &0.0094 &0.038 &0.15 & 0.60 &\\ $\theta_{\rm i}$ &0.00041 &0.0016 &0.0066 &0.029 & 0.18 &\\ $\theta_{\rm e}$ &0.0031 &0.012 &0.050 &0.21 & 1.3 &\\ $\upsilon_{\rm i}$ & 0.00061 &0.0025 &0.010 &0.044 & 0.32 &\\ $\upsilon_{\rm e}$ & 0.0046 &0.019 &0.079 &0.39 & 3.3 &\\ $\sigma_{\rm i}$ &0.10 &0.10 &0.10 &0.11 & 0.17 &\\ $T_{\rm e}/T_{\rm i}$ &0.30 &0.30 &0.30 &0.30 & 0.30 &\\ $N_{\rm ppc}$ &16 &16 &16 &16 & 64 &\\ $c/ \omega_{\rm pi}$ &20 & 20 & 19 & 17 & 11 &\\ $L_{x} [c/ \omega_{\rm pi}]$ & 870 & 870 & 890 & 1000 & 1600 &\\ \midrule \multicolumn{1}{c}{\textbf{ID}} & \multicolumn{1}{c}{\textbf{C[0]}} & \multicolumn{1}{c}{\textbf{C[1]}} & \multicolumn{1}{c}{\textbf{C[2]}} & \multicolumn{1}{c}{\textbf{C[3]}} & \multicolumn{1}{c}{\textbf{C[4]}} & \\
\midrule $\beta_{\rm i}$ &0.0078 &0.031 &0.13 &0.50 & 2.0 &\\ $\beta_{\rm e}$ &0.0078 &0.031 &0.13 &0.50 & 2.0 &\\ $\theta_{\rm i}$ &0.00041 &0.0016 &0.0067 &0.031 & 0.39 &\\ $\theta_{\rm e}$ &0.010 &0.041 &0.17 &0.77 & 9.9 &\\ $\upsilon_{\rm i}$ &0.00061 &0.0024 &0.010 &0.048 & 0.79 &\\ $\upsilon_{\rm e}$ & 0.015 &0.064 &0.30 &1.8 & 29 &\\ $\sigma_{\rm i}$ &0.10 &0.10 &0.10 &0.12 & 0.38 &\\ $T_{\rm e}/T_{\rm i}$ &1.0 &1.0 &1.0 &1.0 & 1.0 &\\ $N_{\rm ppc}$ &16 &16 &16 &16 & 64 &\\ $c/\omega_{\rm pi}$ &20 & 19 & 17 & 12 & 5.0 &\\ $L_{x} [c/ \omega_{\rm pi}]$ & 870 & 890 & 990 & 1500 & 3400 &\\ \bottomrule \end{tabular} \caption{\raggedright Initial parameters for the $m_{\rm i}/m_{\rm e}=25$ simulations with our fiducial $\sigma_w=0.1$. The proton skin depth $c/\omega_{\rm pi}$, calculated according to Eq. \ref{eq:skindepth}, is expressed in number of cells. The definitions of the various quantities are given in Section \ref{sec:setup}. Simulation sets \textbf{A}, \textbf{B}, and \textbf{C} differ by the initial temperature ratio, with $T_{\rm e}/T_{\rm i}=0.1,0.3,$ and $1$, respectively. From left to right, $\beta_{\rm i}$ increases. We fix the mass ratio $m_{\rm i}/m_{\rm e}=25,$ magnetization $\sigma_{w}=0.1,$ electron skin depth $c/\omega_{\rm pe}=4$ cells, and domain size $L_{x} = 4318\,c/\omega_{\rm pe}.$ We also perform a number of additional simulations, up to the realistic mass ratio $m_{\rm i}/m_{\rm e}=1836$ and with higher magnetizations ($\sigma_w=0.3$, 1, 3, 10), as described in Section \ref{sec:setup}.} \label{tab:params} \end{table} In our parameter scan (Tab.~\ref{tab:params}), we fix $\sigma_{w}$ and study the reconnection physics as a function of $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}$. We choose to fix $\sigma_{w}$ rather than $\sigma_{\rm i},$ given that the parameter space we probe involves relativistic particles whose thermal contribution to the inertia is non-negligible (see Eq. \ref{eq:sigmaw}).
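The $c/\omega_{\rm pi}$ entries in the table can be reproduced from Eq.~\ref{eq:skindepth}. A sketch for run \textbf{A[0]}, assuming non-relativistic adiabatic indices $\hat{\gamma}_{\rm e}=\hat{\gamma}_{\rm i}=5/3$ (the indices themselves are not tabulated, so this choice is an assumption):

```python
import math

def proton_skin_depth(m_ratio, theta_e, theta_i, ghat_e=5/3, ghat_i=5/3):
    """c/omega_pi in units of c/omega_pe (Eq. skindepth)."""
    return (math.sqrt(m_ratio)
            * (1.0 + theta_e / (ghat_e - 1.0)) ** -0.5
            * (1.0 + theta_i / (ghat_i - 1.0)) ** 0.5)

# Run A[0]: m_i/m_e = 25, theta_e = 0.0010, theta_i = 0.00041.
ratio = proton_skin_depth(25, 1.0e-3, 4.1e-4)
print(round(4.0 * ratio, 1))  # 20.0 cells, matching the tabulated entry
```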
For a constant $\sigma_w$, the Alfv\'{e}n velocity \begin{align} \frac{v_{\rm A}}{c} &= \sqrt{\frac{\sigma_{w}}{1+\sigma_{w}}}, \end{align} remains fixed across our simulations. The reconnection layer is evolved for $\sim 1$ Alfv\'{e}nic crossing time $(t_{\rm A}=L_x/ v_{\rm A})$, which for our reference magnetization of $\sigma_w=0.1$ and $L_x=4318\,c/\omega_{\rm pe}$ corresponds to $ t \approx 14000\,\omega_{\rm pe}^{-1}.$ The focus of our investigation is the so-called \textit{trans-relativistic} regime of reconnection, hence we select $\sigma_w = 0.1$ as our fiducial magnetization, and we vary $\beta_{\rm i}$ from $0.0078$ to $2.$ Additionally, we study the effect of the initial electron-to-proton temperature ratio $T_{\rm e}/T_{\rm i}$ on the reconnection physics. For each value of $\beta_{\rm i},$ we run three simulations with $T_{\rm e}/T_{\rm i} = 0.1, 0.3,$ and $1.$ Our choice of initial parameters, both physical ($\sigma_{w}, \beta_{\rm i}$, and $T_{\rm e}/T_{\rm i}$) and computational ($N_{\rm ppc}$, $c/\omega_{\rm pe}$), is summarized in Tab.~\ref{tab:params}. Other derived physical parameters in the inflow region, namely the electron plasma $\beta_{\rm e}=\beta_{\rm i}T_{\rm e}/T_{\rm i},$ the dimensionless proton and electron temperatures $\theta_{\rm i}=k_{\rm B}T_{\rm i}/m_{\rm i} c^2$ and $\theta_{\rm e}=k_{\rm B}T_{\rm e}/m_{\rm e} c^2,$ the dimensionless internal energy per particle for protons and electrons $\upsilon_{\rm i}\equiv u_{\rm i}/n_{0} m_{\rm i} c^2$ and $\upsilon_{\rm e} \equiv u_{\rm e}/n_{0} m_{\rm e} c^2$, and the ratio $\sigma_{\rm i}$ of magnetic pressure to rest mass energy density, are also included. In addition to the simulations listed in the table, which employ mass ratio $m_{\rm i}/m_{\rm e}=25$, we also investigate mass ratios $m_{\rm i}/m_{\rm e}=10, 50,$ and $1836$ for $\beta_{\rm i}$ in the range $5 \times 10^{-4}-2$ (with fixed $\sigma_w=0.1$ and a fixed electron-to-proton temperature ratio $T_{\rm e}/T_{\rm i}=1$). 
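As a numerical check of the quoted run time: for $\sigma_w=0.1$ the Alfv\'{e}n speed is $v_{\rm A}/c=\sqrt{0.1/1.1}\approx0.30$, so one Alfv\'{e}nic crossing of $L_x=4318\,c/\omega_{\rm pe}$ takes $\approx1.4\times10^{4}\,\omega_{\rm pe}^{-1}$, consistent with the value stated above:

```python
import math

sigma_w = 0.1
L_x = 4318.0                                 # in units of c/omega_pe
v_A = math.sqrt(sigma_w / (1.0 + sigma_w))   # in units of c
t_A = L_x / v_A                              # in units of 1/omega_pe
print(round(v_A, 2), round(t_A, -3))  # 0.3 14000.0
```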
With realistic mass ratios and $T_{\rm e}/T_{\rm i}=1$, we also explore the $\beta_{\rm i}$-dependence of the heating efficiency at higher values of the magnetization: $\sigma_w=0.3$, 1, 3 and 10. \begin{figure*}[!tbh] \centering \includegraphics[width=\textwidth,trim={0 8.5cm 1.5cm 7.5cm},clip]{plot_2.pdf} \\ \caption{Time evolution of a representative low-$\beta_{\rm i}$ simulation (\textbf{A[0]} in Tab.~\ref{tab:params}), with $\beta_{\rm i}=0.0078$ and $T_{\rm e}/T_{\rm i}=0.1$. The snapshots show number density of electrons in units of the initial density at (a): $ t = 3713\,\omega_{\rm pe}^{-1} \approx 0.25\,t_{\rm A}$; (b): $ t = 7200\,\omega_{\rm pe}^{-1} \approx 0.50\,t_{\rm A}$; (c): $ t = 10688\,\omega_{\rm pe}^{-1}\approx 0.75\,t_{\rm A}$. We show the whole dimension of the box in $x$, and only a small portion close to the center in $y$. A characteristic feature of this and other low-$\beta_{\rm i}$ simulations is the presence of {\it secondary} magnetic islands, i.e., structures like those at $x \approx 300\,c/\omega_{\rm pe}$ and $x \approx -900\,c/\omega_{\rm pe}$ (panel (c)). These are to be distinguished from the large \textit{primary} island at $x\approx \pm 2200\,c/\omega_{\rm pe},$ whose properties depend on choices at initialization. As the primary island grows, it will eventually inhibit further accretion of magnetic flux and the reconnection process will terminate. \label{fig:lowbeta}} \end{figure*} \begin{figure*} \centering \includegraphics[clip, trim=0.5cm 11cm 1cm 11cm,width=1\textwidth]{plot_3.pdf} \\ \caption{$\,$2D plot of the ratio of top-to-total particle density, $n_{\rm top}/n_{\rm tot},$ for a representative simulation with $\beta_{\rm i} = 0.0078$ and $T_{\rm e}/T_{\rm i}=0.1$ (\textbf{A[0]} in Tab.~\ref{tab:params}) at time $ t \approx 11000\, \omega_{\rm pe}^{-1}\approx0.8\,t_{\rm A}$. The green and black contours show the boundaries of the regions we use to calculate the downstream and upstream temperatures, respectively.
The box edges at the interface between upstream and downstream change as the system evolves, and are calculated according to Eqs. \ref{eq:criterion1} and \ref{eq:criterion2}. Particle mixing serves as a tracer for the downstream region. Particles from the top ($y > 0$) of the domain are tagged; as they enter the reconnection layer, they mix with particles from the bottom ($y<0$) of the domain. The reconnection downstream is identified via the mixing fraction $n_{\rm top}/n_{\rm tot}$, and a choice of the threshold $r_{\rm down},$ as in Eq. \ref{eq:criterion1}. \label{fig:mixing2d}} \end{figure*} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{test_plot.pdf} \\ \caption{(a):$\,$ 1D profile along the $y$ direction of top-to-total particle density ratio (solid line) and bottom-to-total ratio (dashed line) in a slice at $x\approx1000\,c/\omega_{\rm pe}$, at time $t \approx 8400\,\omega_{\rm pe}^{-1} \approx0.60\,t_{\rm A}.$ The profiles are from the same simulation we show in Fig.~\ref{fig:mixing2d} (with $\beta_{\rm i} = 0.0078$ and $T_{\rm e}/T_{\rm i}=0.1$). Vertical dotted lines indicate the locations in $y$ where the top-to-total density ratio is between 0.3 and 0.7 (at $y\approx-25$ and $25\,c/\omega_{\rm pe},$ respectively). Between the vertical dotted lines (i.e., in the region we define as the reconnection downstream), mixing has efficiently occurred. (b): Proton and electron temperature profiles in the same region. In between the vertical dotted lines, the temperature profiles are nearly flat. \label{fig:mixing}} \end{figure} \section{Technique for extracting the heating efficiency}\label{sec:technique} In this section, we discuss our method of extracting the heating efficiency from PIC simulations. First, in Section \ref{ssec:timeevol}, we discuss the time evolution of the reconnection layer for two representative cases at low and high $\beta_{\rm i}$.
Then, in Section \ref{ssec:tagged}, we describe the identification of inflow (upstream) and outflow (downstream) regions. Lastly, in Section \ref{ssec:characterization}, we isolate the irreversible heating, i.e., the part associated with a genuine increase in entropy, from the reversible heating induced by adiabatic compression. \subsection{Time evolution of the reconnection layer}\label{ssec:timeevol} To illustrate the time evolution of the reconnection layer, we show in Fig.~\ref{fig:lowbeta} a few snapshots of density from a representative simulation (\textbf{A[0]} in Tab.~\ref{tab:params}) with $\beta_{\rm i}=0.0078$ and $T_{\rm e}/T_{\rm i}=0.1$. We plot the 2D profile of the number density in units of the initial value, $n/n_{0}$. In each panel, we show only a small fraction of the domain in the $y$ direction (we present only the region closest to the current sheet), and the full extent of the domain in $x$. White lines with arrows show magnetic field lines. Panels (a)--(c) show the time evolution of the system over $\sim1$ Alfv\'{e}nic crossing time. By removing by hand the plasma pressure at the center of the current sheet ($x\sim 0$), we trigger a local collapse of the layer, forming an X-point. After the formation of the central X-point, two reconnection ``wavefronts'' are pulled outwards in the $\pm \mathbf{\hat{x}}$ directions by the magnetic tension of the field lines, and the fronts recede from the center at close to the Alfv\'{e}n speed. In panels (a), (b), and (c), the wavefronts are located at $x\approx\pm 400,1100,$ and $1800\,c/\omega_{\rm pe},$ respectively, corresponding to the innermost (i.e., closest to $x=0$) locations of the large semi-circular red/yellow density blobs. The fronts carry away the hot particles initialized in the current sheet. 
With periodic boundary conditions, this leads to the formation of a \textit{primary} island at the boundary of the simulation domain (in Fig.~\ref{fig:lowbeta}(c), located at $x\approx\pm 2200\,c/\omega_{\rm pe}).$ The primary island continues to accrete plasma as the system evolves, but eventually it grows so large that further accretion of magnetic flux into the layer is inhibited, and reconnection stops. The primary island shows the hottest electron temperatures. Here, electron heating might be due in part to reconnection, but also in part to weak shocks at the interface between the reconnection outflow and the island. In addition, the plasma conditions in the island are sensitive to our arbitrary choice for the current sheet initialization. For these reasons, we choose not to focus on the heating physics in the primary island. In this paper, we focus exclusively on the outflow (i.e., before the plasma reaches the primary island; see also \citet{Shay2014}, in the context of non-relativistic reconnection), shown by the green region between the two wavefronts in Fig. \ref{fig:lowbeta}. In Section \ref{ssec:tagged}, we detail the steps we take to avoid contamination of our temperature measurements by the primary island. As the two reconnection fronts recede from the center, plasma flows into the reconnection layer and particles are heated and accelerated in bulk, flowing along $\pm \mathbf{\hat{x}}$ toward the domain boundaries. The dense (green) region in between the two wavefronts is the reconnection outflow. A key feature of low-$\beta_{\rm i}$ simulations is the formation of \textit{secondary} islands in the reconnection exhausts due to the secondary tearing instability, e.g., Fig.~\ref{fig:lowbeta}(c) at $x\approx 300\,c/\omega_{\rm pe}$ and $x\approx -900\,c/\omega_{\rm pe}$ \citep{Daughton2007,Uzdensky2010}. Between each pair of secondary islands, there is a secondary X-point, e.g., at $x\approx-1000\,c/\omega_{\rm pe}$.
We discuss the structure of the reconnection layer as a function of $\beta_{\rm i}$ in Section \ref{ssec:examples}. \subsection{Upstream and downstream identification} \label{ssec:tagged} \label{sec:updownid} We now describe how we determine which computational cells in the simulation domain belong to the upstream (or, inflow) and downstream (or, outflow) regions. We identify downstream cells by using a particle mixing criterion between the two sides of the current sheet. Particles that originate above $y=0$ (top of the domain) are tagged, to distinguish them from particles originating below $y=0$ (bottom of the domain). In Fig.~\ref{fig:mixing2d}, we show the ratio of top-to-total number density. Away from the current sheet, i.e., in the blue and red regions, there is no mixing between the two populations. Particles from the two sides of the current sheet get mixed as they enter the reconnection layer; the region with the greatest amount of mixing is shown in white/light-yellow. We compute the ratio of top-particle density $n_{\rm top}$ to total-particle density $n_{\rm tot}=n$ (including particles from both top and bottom) in each cell. If this ratio in a given cell exceeds a chosen threshold $r_{\rm down}$ and is below the complementary threshold, i.e., \begin{align} \label{eq:criterion1} r_{\rm down} < \frac{n_{\rm top}}{n_{\rm tot}} < 1-r_{\rm down}, \end{align} then the cell is counted as one where plasma has reconnected (i.e., the cell belongs to the reconnection downstream). This technique is similar to that used in \citet{Daughton2014}. In our analysis, we choose $r_{\rm down}=0.3,$ but we have verified that the identification of the reconnection region, and therefore the heating efficiencies that we extract, do not significantly depend on this choice. 
For $r_{\rm down}$ in the range $0.1$ -- $0.3$, the heating efficiencies typically differ only by $\sim 15\%.$ The choice $r_{\rm down}=0.3$ is restrictive enough to exclude contamination by the upstream region. This is especially important at high $\beta_{\rm i}$: even if an electron's gyrocenter is located in a cell that is safely part of the downstream, when that cell lies close to the interface between downstream and upstream, the gyro-motion may temporarily carry this ``downstream'' electron to the upstream side. If $r_{\rm down}$ were too small, the region where the electron motion extends into the upstream might be incorrectly counted as part of the downstream, biasing our temperature estimates toward lower values. Our choice of $r_{\rm down}$ is to some extent arbitrary, but we have found that a relatively large value like $r_{\rm down}=0.3$ is suitable for identifying the genuine reconnection downstream. In Fig.~\ref{fig:mixing}, we show 1D plots of the density fraction of tagged particles and the temperature profiles along the $y$ direction, in a slice located at $x\approx1000\,c/\omega_{\rm pe}.$ In panel (a), we show the profiles of the ratio of top- and bottom-density to total density, denoted by solid and dashed lines, respectively, at time $t \approx 8400\, \omega_{\rm pe}^{-1} \approx 0.60\,t_{\rm A}$. Between the two vertical dotted lines, the ratio of top-to-total density ranges between 0.3 and 0.7, as required to satisfy our mixing criterion. As shown in panel (b), both the electron (blue) and the proton (red) temperature in the region between the vertical lines are remarkably uniform, demonstrating that our mixing criterion can confidently capture the reconnection downstream. The upstream region is identified via \begin{align} \label{eq:criterion2} \left( \frac{n_{\rm top}}{n_{\rm tot}}< r_{\rm up} \right) \;{\rm or}\; \left( \frac{n_{\rm top}}{n_{\rm tot}} > 1-r_{\rm up} \right), \end{align} and we choose $r_{\rm up}=3\times 10^{-5}$.
As before, this definition avoids contamination of the upstream region by any ``downstream'' particles that leak out of the current sheet. In practice, a buffer zone with a width on the order of a few tens of $c/\omega_{\rm pe}$ is established between the regions we identify as upstream and downstream. While Eq. \ref{eq:criterion1} (Eq. \ref{eq:criterion2}, respectively) identifies the whole reconnection outflow (inflow, respectively), we enforce an additional constraint on the downstream and upstream regions that we employ to extract our heating efficiencies. We select downstream regions far enough from the central X-point that the electron and proton outflow bulk velocities have saturated, and also that the electron and proton temperatures have reached their asymptotic values. At the same time, we select these regions to be far enough from the boundaries to avoid contamination from the material inside the primary island, and only capture the genuine reconnection outflow. The downstream region that satisfies these constraints (identified by the green contours in Fig.~\ref{fig:mixing2d}) varies for different simulations: for $\beta_{\rm i}<2$ it is located at a distance of $\sim 630\,c/\omega_{\rm pe}$ from the center, whereas for $\beta_{\rm i}=2$ it is at $\sim 350\,c/\omega_{\rm pe}$ from the center (as we show below, the primary island tends to be larger at higher $\beta_{\rm i}$). The extent of the downstream region across the layer (i.e., along $y$) is determined by the mixing criterion in Eq. \ref{eq:criterion1}, while the length along the layer is fixed at $\sim170\,c/\omega_{\rm pe}$ (see the green contours in Fig.~\ref{fig:mixing2d}). The corresponding upstream values are measured at the same distance from the center of the layer, within the black contours in Fig.~\ref{fig:mixing2d}. Their extent along the $y$ direction does not significantly affect our results.
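The region identification described above reduces to two threshold masks on the mixing fraction. The following is a minimal sketch of this bookkeeping (our own illustration, not the actual analysis pipeline; the function and array names are hypothetical):

```python
import numpy as np

def identify_regions(n_top, n_tot, r_down=0.3, r_up=3e-5):
    """Classify cells from the top-to-total particle density ratio.

    Downstream: r_down < n_top/n_tot < 1 - r_down  (mixed plasma).
    Upstream:   n_top/n_tot < r_up  or  > 1 - r_up (unmixed plasma).
    Cells in neither mask belong to the buffer zone in between.
    """
    ratio = n_top / n_tot
    downstream = (ratio > r_down) & (ratio < 1.0 - r_down)
    upstream = (ratio < r_up) | (ratio > 1.0 - r_up)
    return downstream, upstream
```

In this sketch, the boxes actually used for the heating measurements would further restrict these masks to fixed windows in $x$ (e.g., slabs of length $\sim170\,c/\omega_{\rm pe}$ at the appropriate distance from the central X-point).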
\subsection{Characterization of heating} \label{ssec:characterization} In this section, we describe our assessment of particle heating. First, in Section \ref{sssec:tcalc}, we describe our calculation of rest-frame internal energy and temperature. Next, in Section \ref{sssec:efficiency}, we define ratios that characterize the total amount of heating. Finally, in Section \ref{sssec:compnoncomp}, we provide a more detailed analysis of the heating physics by isolating the effect of a genuine entropy increase (which we call ``irreversible heating'') from the contribution of adiabatic compression (giving ``adiabatic heating''). \subsubsection{Temperature calculation}\label{sssec:tcalc} We measure the total particle energy density in the simulation frame, then extract the corresponding fluid-frame internal energy and temperature, by employing the perfect, isotropic fluid approximation, i.e. \begin{align}\label{eq:pf} T^{\mu \nu} = \left( {e} + {p} \right) U^{\mu} U^{\nu} - {p} g^{\mu \nu}, \end{align} where $T^{\mu \nu}$ is the stress-energy tensor of the fluid, ${e}$ is the rest-frame energy density, ${p}$ is the pressure, $U^{\mu}$ is the fluid dimensionless four-velocity, and $g^{\mu \nu}$ is the flat-space Minkowski metric. The rest-frame energy density is the sum of rest-mass and internal energy densities, i.e. \begin{align} {e} &= \xoverline{n} m c^{2} + {u} \\ &= \xoverline{n} m c^{2} + \frac{{p}}{{\hat{\gamma}} - 1}, \end{align} where $\xoverline{n}$ is the rest-frame particle number density, ${u}$ is the internal energy density, and ${\hat{\gamma}}$ is the adiabatic index. 
The dimensionless internal energy per particle in the fluid rest frame ${\upsilon}_{s}$ may be expressed as \begin{align} \label{eq:approxequation} {\upsilon}_{s} &= \frac{(T^{00}_{s}/n_{s} m_{s} c^2 - \Gamma_{s}) \Gamma_{s}}{1 + {\hat{\gamma}}_{s} (\Gamma_{s}^2 - 1)}, \end{align} where $T^{00}_{s}$ is the total energy density in the simulation frame, $n_{s}$ is the lab-frame particle number density, $\Gamma_{s}$ is the Lorentz factor corresponding to the local fluid velocity, ${\hat{\gamma}}_{s}$ is the adiabatic index, and the subscript $s=\rm e, i$ refers to the particle species. To make use of Eq. \ref{eq:approxequation}, we need to express the adiabatic index $\hat{\gamma}_{s}$ as a function of the internal energy per particle, so that the equation may be solved iteratively. For a plasma described by a Maxwell-J\"{u}ttner distribution with dimensionless temperature $\theta_s$, \begin{align} f_{\rm MJ}(\gamma, \theta_{s}) \propto \gamma \sqrt{\gamma^{2} - 1} \exp \left( -\gamma / \theta_{s} \right), \end{align} where $\gamma$ denotes the particle Lorentz factor, the dimensionless internal energy is given by \begin{align} \upsilon_{s} = \frac{\int_{1}^{\infty} \gamma f_{\rm MJ}(\gamma, \theta_{s}) d \gamma}{\int_{1}^{\infty} f_{\rm MJ}(\gamma, \theta_{s}) d \gamma} - 1. \end{align} We have numerically evaluated the integral on the right hand side for a range of temperatures and thereby produced interpolating tables for $\hat{\gamma}_{s}(\upsilon_{s})$ and $\theta_{s}(\upsilon_{s})$, to be used for finding $\upsilon_{s}$ in Eq. \ref{eq:approxequation}. Eqs. \ref{eq:pf} and \ref{eq:approxequation} assume that the stress-energy tensor is diagonal and isotropic in the fluid frame. We have explicitly tested this assumption by measuring all the components of the stress-energy tensor in our computational domain. By boosting into the local fluid frame, we can calculate all the components of the pressure tensor. 
We find that the off-diagonal components are generally negligible. As regards the diagonal components, we quantify the degree of anisotropy with the temperature ratios $T_{xx}/T_{\rm tot}, \,T_{yy}/T_{\rm tot},$ and $T_{zz}/T_{\rm tot},$ where $T_{\rm tot}=(T_{xx}+T_{yy}+T_{zz})/3$. For an isotropic fluid, $T_{xx}/T_{\rm tot}=T_{yy}/T_{\rm tot}=T_{zz}/T_{\rm tot}=1$. For electrons in the reconnection downstream, we find that these ratios typically lie in the range $T_{yy}/T_{\rm tot} \approx T_{zz}/T_{\rm tot} \approx 0.9$ -- $0.95$ and $T_{xx}/T_{\rm tot} \approx 1.2$ -- $1.1$ (see Appendix \ref{sec:aniso} for further discussion, including the dependence of the anisotropy on $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}$). We find greater anisotropy along the outflow direction $\mathbf{\hat{x}}$ than along either $\mathbf{\hat{y}}$ or $\mathbf{\hat{z}}$. This is in qualitative agreement with the findings of \citet{Shay2014}, who demonstrated that the electron pressure tensor in the immediate reconnection exhausts is anisotropic, with the component parallel to the local magnetic field larger than the perpendicular component. As an additional test, we have also measured the temperature and internal energy via an explicit boost of the stress-energy tensor into the fluid rest frame, and compared the results to those computed by employing the perfect-fluid approximation as described above. We find that the disagreement between the two methods is only of order $\sim 1\%,$ providing an \textit{a posteriori} justification for the perfect-fluid assumption. \subsubsection{Total heating} \label{sssec:efficiency} The main focus of our investigation is particle heating by reconnection, and how the heating efficiency depends on the upstream parameters. From each simulation, we extract a dimensionless ratio $M_{u\rm e,tot},$ which we define as \begin{align} M_{u\rm e,tot} &\equiv \frac{\upsilon_{\rm e,down}-\upsilon_{\rm e,up}}{\sigma_{\rm i} m_{\rm i} / m_{\rm e}} \label{eq:mue}.
\end{align} The numerator is the difference in dimensionless internal energy per electron between downstream and upstream, while the denominator represents (apart from a factor of two) the available magnetic energy per electron in the upstream, in units of the electron rest mass energy ($=B_0^2/4 \pi n_0 m_{\rm e} c^2$). The ratio $M_{u\rm e,tot}$ is then a measure of the efficiency of reconnection in converting available magnetic energy to electron heating. Alternatively, the efficiency parameter may be phrased in terms of the dimensionless temperature, \begin{align} M_{T\rm e,tot} &\equiv \frac{\theta_{\rm e,down}-\theta_{\rm e,up}}{\sigma_{\rm i} m_{\rm i}/m_{\rm e}} \label{eq:mte}, \end{align} as in \citet{Shay2014}. We define analogous ratios for protons as \begin{align} M_{u\rm i,tot} &\equiv \frac{\upsilon_{\rm i,down}-\upsilon_{\rm i,up}}{\sigma_{\rm i}} \label{eq:mui}, \end{align} and \begin{align} M_{T\rm i,tot} &\equiv \frac{\theta_{\rm i,down}-\theta_{\rm i,up}}{\sigma_{\rm i}} \label{eq:mti}. \end{align} For the results presented below, we average the dimensionless internal energy and temperature appearing in the above equations over time, starting at $\sim0.3$ Alfv\'{e}nic crossing times (or equivalently, $ \sim 4500\;\omega_{\rm pe}^{-1}$), when the two reconnection wavefronts --- and with them, the particles initialized in the current sheet --- have moved beyond the region that we use for our computations (green and black boxes in Fig.~\ref{fig:mixing2d}). We typically time-average our results over an interval of $\sim0.3$ Alfv\'{e}nic crossing times. \subsubsection{Adiabatic and irreversible heating} \label{sssec:compnoncomp} When gas is adiabatically compressed, its internal energy increases while its entropy remains constant. The reconnecting plasma may experience such adiabatic heating, since the downstream region is denser than the upstream (see Fig.~\ref{fig:lowbeta}). 
However, adiabatic heating is not a genuine signature of the conversion of field energy into particle energy. We isolate the irreversible heating generated by magnetic field dissipation by subtracting out the adiabatic heating from the total particle heating. The predicted internal energy per particle in the downstream resulting from adiabatic compression alone (which we shall call $\upsilon^{\rm ad}_{s, \rm down}$ for species $s$) is calculated from the upstream internal energy per particle $\upsilon_{s, \rm up}$, the upstream rest-frame number density $\bar{n}_{s, \rm up}$ and the downstream rest-frame number density $\bar{n}_{s, \rm down}$ using the first law of thermodynamics at constant entropy, \begin{align} \label{eq:igl} dU_s &= -p_s dV. \end{align} From the ideal gas equation of state, the pressure is $p_{s}=\bar{n}_{s} k_{\rm B} T_{s}= (\hat{\gamma}_{s}-1)u_{s}$. Using the relation $U_s/V=u_s=\upsilon_s \bar{n}_s m_s c^2$, we can integrate Eq.~\ref{eq:igl} to obtain \begin{align} \int_{\upsilon_{s,\rm up}}^{\upsilon^{\rm ad}_{s,\rm down}} \frac{1}{(\hat{\gamma}(\upsilon_{s}) - 1) \upsilon_{s}} d\upsilon_{s} - \log \left(\frac{\bar{n}_{s,\rm down}}{\bar{n}_{s,\rm up}} \right) &= 0 \label{eq:log}. \end{align} We compute the argument of the logarithm in Eq. \ref{eq:log} as the ratio of downstream to upstream rest-frame density, spatially averaged over the regions marked in Fig.~\ref{fig:mixing2d}. The lower bound of the integral $\upsilon_{s,\rm up}$ is computed as a density-weighted spatial average in the selected upstream region. The adiabatic index $\hat{\gamma}_s(\upsilon_s)$ is tabulated as discussed above. We numerically solve Eq.~\ref{eq:log} for the predicted downstream dimensionless internal energy per particle $\upsilon^{\rm ad}_{s,\rm down}$ resulting from adiabatic compression. We refer to the corresponding dimensionless temperature as $\theta^{\rm ad}_{s,\rm down}$.
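The tabulation of $\hat{\gamma}_s(\upsilon_s)$ and the numerical solution of Eq. \ref{eq:log} can be sketched as follows (a simplified, self-contained illustration rather than the actual analysis code; the grid resolutions and integration cutoffs are our own choices):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept explicit for portability)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def mj_internal_energy(theta, ng=20000):
    """Dimensionless internal energy per particle, upsilon(theta),
    for a Maxwell-Juttner distribution, by direct numerical integration."""
    gamma = np.linspace(1.0, 1.0 + 50.0 * theta, ng)
    # Using exp(-(gamma-1)/theta) instead of exp(-gamma/theta) changes the
    # distribution by a constant factor, which cancels in the ratio below,
    # and avoids numerical underflow at small theta.
    f = gamma * np.sqrt(np.maximum(gamma**2 - 1.0, 0.0)) * np.exp(-(gamma - 1.0) / theta)
    return _trapz(gamma * f, gamma) / _trapz(f, gamma) - 1.0

def adiabatic_downstream_energy(ups_up, compression, npts=200):
    """Solve the adiabatic-compression relation for the predicted
    downstream internal energy per particle."""
    thetas = np.logspace(-4, 2, npts)
    ups = np.array([mj_internal_energy(t) for t in thetas])
    # Ideal gas: (gamma_hat - 1) * upsilon = p / (nbar m c^2) = theta,
    # so the integrand 1 / ((gamma_hat - 1) * upsilon) reduces to 1 / theta.
    integrand = 1.0 / thetas
    # Cumulative trapezoidal integral F(upsilon) along the table.
    F = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ups))))
    target = np.interp(ups_up, ups, F) + np.log(compression)
    return np.interp(target, F, ups)
```

As a consistency check, in the non-relativistic limit ($\hat{\gamma}=5/3$, $\upsilon_{s} = 3\theta_{s}/2$) the solution reduces to $\upsilon^{\rm ad}_{s,\rm down} = \upsilon_{s,\rm up} \, (\bar{n}_{s,\rm down}/\bar{n}_{s,\rm up})^{2/3}$, the familiar adiabatic scaling $T \propto n^{2/3}$.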
We define the ``adiabatic'' component of the heating as the difference between the predicted adiabatic downstream value and the upstream value of the dimensionless temperature or internal energy per particle, i.e., $\Delta \theta_{s,\rm ad} \equiv \theta_{s,\rm down}^{\rm ad}-\theta_{s,\rm up}$ and $\Delta \upsilon_{s,\rm ad} \equiv \upsilon_{s,\rm down}^{\rm ad}-\upsilon_{s,\rm up}$. The irreversible heating, which is associated with a genuine increase in entropy, is the residual between the total heating and the adiabatic heating: \begin{align} \Delta \theta_{s,\rm irr} &= (\theta_{s,\rm down} - \theta_{s,\rm up}) - \Delta \theta_{s,\rm ad}, \\ \Delta \upsilon_{s,\rm irr} &= (\upsilon_{s,\rm down} - \upsilon_{s,\rm up}) - \Delta \upsilon_{s,\rm ad}. \end{align} As in Section \ref{sssec:efficiency}, we introduce efficiency ratios to characterize the irreversible and adiabatic heating of electrons, \begin{alignat}{3} \label{eq:mtencmtec} M_{T\rm e,\rm irr} &\equiv \frac{\Delta \theta_{\rm e,irr}}{\sigma_{\rm i} m_{\rm i}/m_{\rm e}}, \qquad M_{T\rm e,ad} && \equiv \frac{\Delta \theta_{\rm e,ad}}{\sigma_{\rm i} m_{\rm i}/m_{\rm e}}, \\ \label{eq:muencmuec} M_{u\rm e,irr} &\equiv \frac{\Delta \upsilon_{\rm e,irr}}{\sigma_{\rm i} m_{\rm i}/m_{\rm e}}, \,\,\,\qquad M_{u\rm e,ad} &&\equiv \frac{\Delta \upsilon_{\rm e,ad}}{\sigma_{\rm i} m_{\rm i}/m_{\rm e}}, \end{alignat} and define analogous ratios for protons \begin{alignat}{3} \label{eq:mtincmtic} M_{T\rm i,irr} &\equiv \frac{\Delta \theta_{\rm i,irr}}{\sigma_{\rm i}}, \qquad M_{T\rm i,ad} && \equiv \frac{\Delta \theta_{\rm i,ad}}{\sigma_{\rm i}}, \\ M_{u\rm i,irr} &\equiv \frac{\Delta \upsilon_{\rm i,irr}}{\sigma_{\rm i} }, \qquad M_{u\rm i,ad} && \equiv \frac{\Delta \upsilon_{\rm i,ad}}{\sigma_{\rm i}}.
\end{alignat} \begin{figure*}[!h] \centering \includegraphics[width=\textwidth,trim={0 7.5cm 0 5cm},clip]{plot_5.pdf} \\ \caption{$\,$2D structure at $t = 11250\,\omega_{\rm pe}^{-1} \approx 0.75 \,t_{\rm A}$ from a representative low-$\beta_{\rm i}$ simulation (\textbf{A[0]} in Tab. \ref{tab:params}) with $\beta_{\rm i}=0.0078$, $\sigma_w=0.1$, $T_{\rm e}/T_{\rm i}=0.1$ and $m_{\rm i}/m_{\rm e}=25$. We present 2D plots of (a): particle density in units of the upstream initial value, $n/n_{0}$, with overplotted magnetic field lines; (b): dimensionless electron temperature, $\theta_{\rm e}$; (c): logarithm of the magnetic energy fraction, $\varepsilon_{B}=B^{2}/8 \pi n_0 m_{\rm i} c^{2}$; (d): inflow velocity, in units of Alfv\'{e}n speed $v_{\rm in}/v_{\rm A}=\mathbf{v} \cdot \mathbf{\hat{y}}/v_{\rm A};$ (e): outflow velocity, in units of Alfv\'{e}n speed $v_{\rm out}/v_{\rm A}=\mathbf{v} \cdot \mathbf{\hat{x}}/v_{\rm A}$. We show the full extent of the domain in the $x$ direction ($L_{x}=4318\,c/\omega_{\rm pe}$), and only a small fraction of the box close to the current sheet in the $y$ direction. The primary island, which contains the particles initialized in the current sheet, can be seen at the boundaries ($x=\pm2200\,c/\omega_{\rm pe}$). As shown in panel (a), the density reaches $n/n_{0}\approx 2.3$ in the bulk of the outflow, with sharp increases up to $n/n_{0}\approx 5$ in the core of secondary islands (e.g., at $x=-1000\,c/\omega_{\rm pe}$ and $x=300\,c/\omega_{\rm pe}$). The primary island has a high density throughout its interior, $n/n_{0}\approx 5.$ Similarly, the temperature (panel (b)) is nearly uniform at $\theta_{\rm e} \approx 0.1$ in the bulk of the outflow, with spikes up to $\theta_{\rm e} \approx 0.25$ at the center of secondary islands. The primary island has a temperature $\theta_{\rm e} \approx 0.15$ throughout its interior.
In panel (c), we show that the magnetic energy fraction $\varepsilon_{B}$ is extremely small in the outflow, $\varepsilon_{B} \lesssim 0.01$. The inflow velocity in panel (d) is a small fraction of the Alfv\'{e}n speed, $|v_{\rm in}|/v_{\rm A}\approx0.08$, and the outflow velocity in panel (e) approaches the Alfv\'{e}n limit, $|v_{\rm out}|/v_{\rm A} \approx 1.$ \label{fig:lowbeta2d}} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth,trim={0 8cm 0 5cm},clip]{plot_6.pdf} \\ \caption{2D structure at $t = 11250\,\omega_{\rm pe}^{-1} \approx 0.75 \,t_{\rm A}$ from a representative high-$\beta_{\rm i}$ simulation (\textbf{A[4]} in Tab. \ref{tab:params}) with $\beta_{\rm i}=2$, $\sigma_w=0.1$, $T_{\rm e}/T_{\rm i}=0.1$ and $m_{\rm i}/m_{\rm e}=25$ (i.e., apart from $\beta_{\rm i}$, with the same parameters as in Fig.~\ref{fig:lowbeta2d}). The panels show the same quantities as in Fig.~\ref{fig:lowbeta2d}. As shown in panel (a), the density is roughly $n/n_{0}\approx 1.2$ in the bulk of the outflow, which is only slightly larger than the upstream density. In the primary island, the density reaches $n/n_{0}\approx 4.$ The electron temperature (panel (b)) is nearly uniform in the reconnection exhausts (i.e., within a distance of $\approx700\,c/\omega_{\rm pe}$ from the central X-point), with $\theta_{\rm e} \approx 0.6$. Within the primary island, the temperature reaches $\theta_{\rm e} \approx 0.8.$ In panel (c), we present the logarithm of the magnetic energy fraction $\varepsilon_{B}$, showing that the reconnection layer is weakly magnetized ($\varepsilon_{B} \lesssim 0.01$). Panel (d) shows that the inflow velocity is nearly uniform in the upstream, with a typical value $|v_{\rm in}|/v_{\rm A}\approx0.04.$ Panel (e) shows that the outflow velocity in the reconnection exhausts is $|v_{\rm out}|/v_{\rm A}\approx 0.6$.
At the center of the primary island, $x\approx\pm2200\,c/\omega_{\rm pe},$ the plasma from the reconnection outflows comes to rest, $|v_{\rm out}|/v_{\rm A}\approx 0.$ \label{fig:highbeta2d}} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth,trim={0 7.5cm 0 7.5cm},clip]{highlow_compare_ffffff.pdf} \\ \caption{Comparison between a low-$\beta_{\rm i}$ (left column, with $\beta_{\rm i}=0.0078$, \textbf{A[0]} in Tab. \ref{tab:params}) and a high-$\beta_{\rm i}$ (right column, with $\beta_{\rm i}=0.5$, \textbf{A[3]} in Tab.~\ref{tab:params}) simulation, at time $t = 9225\,\omega_{\rm pe}^{-1} \approx 0.65\,t_{\rm A}$. In both cases, $\sigma_w=0.1$, $T_{\rm e}/T_{\rm i}=0.1$ and $m_{\rm i}/m_{\rm e}=25$. (a),(d): 1D profiles along $x$ (averaged along $y$ within the reconnection downstream, as identified by Eq. \ref{eq:criterion1}) of proton (red) and electron (blue) outflow velocity in units of the Alfv\'{e}n speed, $v_{\rm out}/v_{\rm A}$; (b),(e): 1D profiles along $x$ of the upstream (magenta) and downstream (green) dimensionless electron temperature, $\theta_{\rm e}$ (the two slabs in between the vertical dotted lines indicate the regions we use to calculate the downstream and upstream temperatures); (c),(f): 2D plots of $\log(\theta_{\rm e})$. In both the low- and high-$\beta_{\rm i}$ cases, the spatial profiles of outflow velocity and electron temperature show that the downstream region reaches a quasi-steady state. \label{fig:saturation}} \end{figure*} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{plot_8.pdf} \caption{Time evolution of total ($M_{T\rm e,tot}$; black solid), irreversible ($M_{T\rm e,irr}$; red dashed), and adiabatic ($M_{T\rm e,ad}$; blue dashed) heating efficiency, for a low-$\beta_{\rm i}$ simulation (top panel, with $\beta_{\rm i}=0.0078$) and a high-$\beta_{\rm i}$ case (bottom panel, with $\beta_{\rm i}=2$). In both cases, $\sigma_w=0.1$, $T_{\rm e}/T_{\rm i}=0.1$ and $m_{\rm i}/m_{\rm e}=25$.
The heating efficiencies are measured starting at $t \approx 5000\,\omega_{\rm pe}^{-1},$ at which point the two reconnection wavefronts recede past the location of the downstream region used for our computations (shown in Fig.~\ref{fig:mixing2d} with the green contours). For the low-$\beta_{\rm i}$ case, the total heating efficiency oscillates around $M_{T\rm e} \approx 0.04,$ and it is dominated by genuine/irreversible heating (panel (a)). For high $\beta_{\rm i},$ the total heating efficiency saturates at a smaller value, $M_{T\rm e}\approx0.016.$ Here, adiabatic and irreversible heating equally contribute (panel (b)). \label{fig:MTetimeevol}} \end{figure} \section{Results} \label{sec:results} In this section, we describe our main results, focusing on the dependence of the heating efficiency on the plasma conditions. First, in Section \ref{ssec:examples}, we present the dynamics of the reconnection layer, and describe the main differences between low-$\beta_{\rm i}$ and high-$\beta_{\rm i}$ cases, for our fiducial magnetization $\sigma_w=0.1$ and mass ratio $m_{\rm i}/m_{\rm e}=25$. Next, in Section \ref{ssec:inflowoutflow}, we discuss the inflow and outflow rates as a function of $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}.$ Then, in Section \ref{ssec:moneyplots}, we show the dependence of electron and proton heating on $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}, $ still for our fiducial magnetization $\sigma_w=0.1$ and mass ratio $m_{\rm i}/m_{\rm e}=25$. In Section \ref{ssec:massratio}, we extend our results for $T_{\rm e}/T_{\rm i}=1$ and $\sigma_w=0.1$ up to the physical mass ratio $m_{\rm i}/m_{\rm e}=1836,$ emphasizing the $\beta_{\rm i}$-dependence of the particle heating efficiencies. 
Finally, in Section \ref{ssec:sigdep}, we show how the heating physics changes when the magnetization $\sigma_{w}$ extends above unity (i.e., in the regime of ultra-relativistic reconnection), for mass ratio $m_{\rm i}/m_{\rm e}=1836$ and temperature ratio $T_{\rm e}/T_{\rm i}=1$. \subsection{Reconnection physics as a function of \texorpdfstring{$\beta_{\MakeLowercase{i}}$}{betai}} \label{ssec:examples} The physics of reconnection shows a marked difference between low- and high-$\beta_{\rm i}$ regimes. In Figs. \ref{fig:lowbeta2d} and \ref{fig:highbeta2d}, we present various fluid quantities for representative low- and high-$\beta_{\rm i}$ simulations, respectively ($\beta_{\rm i}=0.0078$ in Fig.~\ref{fig:lowbeta2d} and $\beta_{\rm i}=2$ in Fig.~\ref{fig:highbeta2d}). In both cases, $\sigma_w=0.1$, $T_{\rm e}/T_{\rm i}=0.1$ and $m_{\rm i}/m_{\rm e}=25$. At $ t = 11250\,\omega_{\rm pe}^{-1} \approx 0.75\,t_{\rm A}$, we show 2D plots of: (a) the total density in the simulation frame in units of the initial density, $n/n_{0}$; (b) the dimensionless electron temperature $\theta_{\rm e}$; (c) the magnetic energy fraction $\varepsilon_{B}=B^{2}/8 \pi n_0 m_{\rm i} c^{2}$; (d) the inflow velocity $v_{\rm in}/v_{\rm A}=\mathbf{v} \cdot \mathbf{\hat{y}} / v_{\rm A}$ ($v_{\rm A}$ is the Alfv\'{e}n speed), and (e) the outflow velocity $v_{\rm out}/v_{\rm A}=\mathbf{v} \cdot \mathbf{\hat{x}} / v_{\rm A}$. A striking difference between the simulations shown in Figs.~\ref{fig:lowbeta2d} and \ref{fig:highbeta2d} is that, while the reconnection outflow at high $\beta_{\rm i}$ is nearly homogeneous, a number of secondary magnetic islands appear at low $\beta_{\rm i}$ (see Fig.~\ref{fig:lowbeta2d}(a)). The secondary islands are over-dense, and at their center they can reach temperatures a few times larger than the bulk of the outflow (Fig.~\ref{fig:lowbeta2d}(b)). They also correspond to peaks in magnetic energy (Fig.~\ref{fig:lowbeta2d}(c)). 
The difference in electron temperature between inflow and outflow regions is more pronounced in the low- than in the high-$\beta_{\rm i}$ case (compare Figs. \ref{fig:lowbeta2d}(b) and \ref{fig:highbeta2d}(b)). However, as we demonstrate in Section \ref{ssec:moneyplots}, the fraction of available magnetic energy converted into total electron heating is roughly comparable between the two cases. The inflow velocity $v_{\rm in}/v_{\rm A}=\mathbf{v} \cdot \mathbf{\hat{y}}/v_{\rm A}$ is shown in panel (d). For low-$\beta_{\rm i}$, the inflow velocity is $|v_{\rm in}|/v_{\rm A} \approx 0.08.$ It is nearly uniform in the upstream, with the exception of the regions ahead of the secondary islands, where the velocity reverses its sign relative to the ambient inflow (see, e.g., Fig.~\ref{fig:lowbeta2d}(d) at $x\approx-1100\,c/\omega_{\rm pe}).$ This reversal occurs as the secondary island moves along the outflow direction, pushing aside the inflowing plasma. For high-$\beta_{\rm i},$ the plasma inflow is remarkably uniform, with $|v_{\rm in}|/v_{\rm A}\approx0.04,$ which is half the value of the low-$\beta_{\rm i}$ case. The inflow velocity at high $\beta_{\rm i}$ shows no reversals near the reconnection exhausts, as there are no secondary islands. The outflow velocity $v_{\rm out}/v_{\rm A}=\mathbf{v} \cdot \mathbf{\hat{x}}/v_{\rm A}$ is shown in panel (e). For low-$\beta_{\rm i},$ the outflow speed nearly reaches the Alfv\'{e}n limit, $|v_{\rm out}|/v_{\rm A} \approx 1$, whereas for high-$\beta_{\rm i}$ it approaches a smaller value, $|v_{\rm out}|/v_{\rm A} \approx 0.6.$ For both low and high $\beta_{\rm i},$ the outflow velocity is nearly uniform in the reconnection exhausts, but it drops close to the periodic boundaries at $x\approx\pm2200\,c/\omega_{\rm pe},$ as the outflowing plasma accretes onto the primary island. We show in Fig.~\ref{fig:saturation} a direct comparison between one low-$\beta_{\rm i}$ and one high-$\beta_{\rm i}$ simulation.
The left column in Fig.~\ref{fig:saturation} refers to $\beta_{\rm i}=0.0078$ (the same as in Fig. \ref{fig:lowbeta2d}), whereas $\beta_{\rm i}=0.5$ for the right column. In both cases, $\sigma_w=0.1$, $T_{\rm e}/T_{\rm i}=0.1$ and $m_{\rm i}/m_{\rm e}=25$. In the top row, we show the profile along $x$ of the outflow velocity, for protons (red) and electrons (blue). We find that electrons move slightly faster than protons in the vicinity of the central X-point, but at larger distances the speeds of the two species are the same, and they saturate at a fixed fraction of the Alfv\'{e}n limit. We show in the middle row of panels the $x$-profile of the dimensionless electron temperature $\theta_{\rm e}$, in the upstream (magenta) and downstream (green). The secondary magnetic islands present in the low-$\beta_{\rm i}$ simulation (panel (c)) are correlated with spikes in the downstream electron temperature (see the peak at $x\approx -500\,c/\omega_{\rm pe}$ in Fig.~\ref{fig:saturation}(b)). Aside from the temperature spikes at low $\beta_{\rm i}$, the two panels in the middle row of Fig.~\ref{fig:saturation} demonstrate that, far enough from the central X-point, the electron temperature is nearly uniform. To estimate the reconnection heating efficiency, we measure the downstream temperature in the two slabs delimited by the vertical dotted lines in Fig.~\ref{fig:saturation}(b) and (e) (more precisely, within the green contours in Fig.~\ref{fig:mixing2d}). The time evolution of the total electron heating efficiency $M_{T\rm e, tot}$, of the adiabatic contribution $M_{T\rm e,ad}$, and of the irreversible component $M_{T\rm e,irr}$ is shown in Fig.~\ref{fig:MTetimeevol} with black, dashed blue and dashed red lines, respectively. The top panel refers to a low-$\beta_{\rm i}$ simulation with $\beta_{\rm i}=0.0078$, whereas the bottom panel refers to the high-$\beta_{\rm i}$ case $\beta_{\rm i}=2$.
In both cases, $\sigma_w=0.1$, $T_{\rm e}/T_{\rm i}=0.1$ and $m_{\rm i}/m_{\rm e}=25$. The horizontal axis in the figure starts from $t = 5000\,\omega_{\rm pe}^{-1}$, which corresponds to the time when the two reconnection wavefronts pass beyond the region that we employ for calculating the downstream quantities (as discussed above, after this time the measurements are no longer affected by our choice of initialization of the current sheet).\footnote{This time is typically in the range $t \approx 4000-5000\,\omega_{\rm pe}^{-1}$, with marginal dependence on $\beta_{\rm i}$ and on the initial sheet thickness $\Delta$.} While the heating efficiencies are nearly constant in time for high $\beta_{\rm i}$ (bottom panel), the temporal profiles at low $\beta_{\rm i}$ (top panel) present quasi-periodic modulations. They mark the passage of secondary islands --- which are typically hotter than the bulk outflow --- through the region used for our computations. To minimize the temperature variations associated with secondary islands, we average the heating efficiencies over time, as described in Section \ref{sssec:efficiency}. In doing so, we obtain a reliable assessment of the steady-state heating physics of reconnection. Panels (a) and (b) in Fig.~\ref{fig:MTetimeevol} also demonstrate that the fractional contributions of adiabatic and irreversible heating to the total electron heating significantly depend on $\beta_{\rm i}$, as we further discuss in Section \ref{ssec:moneyplots}. In the low-$\beta_{\rm i}$ regime, adiabatic heating is unimportant as compared to the irreversible part, whereas the two components are comparable at high $\beta_{\rm i}$.
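The averaging procedure can be illustrated with a short sketch. The time series below is synthetic (a steady level plus a quasi-periodic modulation standing in for the passage of secondary islands), so the numbers are purely illustrative and not simulation data:

```python
import numpy as np

# Synthetic stand-in for M_Te,tot(t): a steady level plus a quasi-periodic
# modulation mimicking the passage of secondary islands at low beta_i.
# (Illustrative numbers only -- this is not simulation data.)
t = np.linspace(5000.0, 20000.0, 600)            # time, in units of 1/omega_pe
m_te = 0.05 + 0.01 * np.sin(2.0 * np.pi * t / 2000.0)

# Time average and one-standard-deviation error bar, in the same spirit
# as the points and error bars shown in the figures that follow.
m_mean = float(m_te.mean())
m_std = float(m_te.std())
print(f"M_Te,tot = {m_mean:.3f} +/- {m_std:.3f}")
```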
\begin{figure} \centering \includegraphics[width=0.43\textwidth,trim={0.75cm 3cm 0 0cm},clip]{plot_9f.pdf} \caption{For temperature ratios $T_{\rm e}/T_{\rm i} = 0.1$ (blue), $0.3$ (green), and 1 (red), $\beta_{\rm i}$-dependence of (a): inflow velocity $|v_{\rm in}|/v_{\rm A}$; (b): outflow velocity $|v_{\rm out}|/v_{\rm A};$ (c): reconnection rate $|v_{\rm in}|/|v_{\rm out}|$; (d): downstream density in units of initial density in the upstream $\xoverline{n}_{\rm down}/n_{0}$; (e): width of reconnection layer $\delta_{\rm rec}$. Error bars represent one standard deviation from the mean. The inflow velocity is averaged over a region of length $L_{x}/3\approx 1440\,c/\omega_{\rm pe}$ in $x$ and width $20\,c/\omega_{\rm pe}$ in $y$, located $|y|\sim 100\,c/\omega_{\rm pe}$ upstream of the central X-point. We have checked that the saturation value is insensitive to the choice of averaging region. The outflow velocity is computed as an average over the 20 cells with the largest $|\mathbf{v} \cdot \mathbf{\hat{x}}|$ located along the central region of the outflow ($|y|\lesssim4\,c/\omega_{\rm pe}$). We have tested that the resulting outflow velocity is nearly insensitive to our averaging procedure. The regions used for measuring density in the upstream and downstream are described in Section \ref{sec:updownid}. The width of the reconnection layer is measured at a distance $\sim430\,c/\omega_{\rm pe}$ downstream of the central X-point. All quantities are time-averaged over $\sim0.3\,t_{A} \approx 4500\,\omega_{\rm pe}^{-1}$. Both inflow and outflow velocities tend to decrease with $\beta_{\rm i}$, with weak dependence on $T_{\rm e}/T_{\rm i}$ (noticeable only at high $\beta_{\rm i}$).
The density compression decreases with $\beta_{\rm i}.$ The width $\delta_{\rm rec}$ of the layer increases with $\beta_{\rm i}$, yet with large error bars.} \label{fig:betainout} \end{figure} \vspace{0.1in} \subsection{Dependence of inflow and outflow velocity on \texorpdfstring{$\beta_{\MakeLowercase{i}}$}{betai} and \texorpdfstring{$T_{\MakeLowercase{e}}/T_{\MakeLowercase{i}}$}{teti}} \label{ssec:inflowoutflow} In Fig.~\ref{fig:betainout}, we show the dependence on $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}$ of various fluid quantities, from a suite of simulations with fixed $\sigma_w=0.1$ and $m_{\rm i}/m_{\rm e}=25$. We present the (a) inflow velocity normalized to the Alfv\'en speed $|v_{\rm in}|/v_{\rm A}$; (b) outflow velocity normalized to the Alfv\'en speed $|v_{\rm out}|/v_{\rm A}$; (c) ratio of inflow to outflow velocity $|v_{\rm in}|/|v_{\rm out}|$; (d) downstream rest-frame density in units of the initial density in the upstream $\xoverline{n}_{\rm down}/n_{0}$; (e) width of the reconnection region at a distance of $430\,c/\omega_{\rm pe}$ from the center of the layer. Blue, green, and red points denote simulations with upstream temperature ratios $T_{\rm e}/T_{\rm i} = 0.1, 0.3,$ and $1,$ respectively. As described in Section~\ref{sssec:efficiency}, the quantities we extract are time-averaged, typically over 0.3 Alfv\'{e}nic crossing times, corresponding to $\sim 4500 \,\omega_{\rm pe}^{-1}.$ The points in Fig.~\ref{fig:betainout} represent the time averages, with vertical error bars indicating one standard deviation. At low $\beta_{\rm i},$ the inflow velocity is $|v_{\rm in}|/v_{\rm A} \approx 0.08,$ independent of the upstream temperature ratio (panel (a)). In the high-$\beta_{\rm i}$ case, the inflow speed is smaller, $|v_{\rm in}|/v_{\rm A}\approx0.04$, and shows a weak dependence on the temperature ratio, with higher temperature ratios attaining lower values of $|v_{\rm in}|/v_{\rm A}$.
The outflow velocity (panel (b)) nearly saturates the Alfv\'{e}n limit at low $\beta_{\rm i}$ (the Alfv\'{e}n limit is indicated with the horizontal dashed black line), whereas for high $\beta_{\rm i}$ it is sub-Alfv\'{e}nic, $|v_{\rm out}|/v_{\rm A} \approx 0.75$. For low values of $\beta_{\rm i}$, i.e., $\beta_{\rm i} \lesssim 0.1,$ the outflow velocity is nearly independent of the temperature ratio, whereas at high $\beta_{\rm i}$ it shows a weak dependence on $T_{\rm e}/T_{\rm i}$, with higher temperature ratios corresponding to greater outflow speeds. The dependence of the reconnection rate $|v_{\rm in}|/|v_{\rm out}|$ on $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}$ (panel (c)) follows from the variations in inflow speed and outflow velocity that we have just discussed. At low $\beta_{\rm i},$ the reconnection rate is $|v_{\rm in}|/|v_{\rm out}| \approx 0.08$ regardless of the temperature ratio, whereas at high $\beta_{\rm i}$, and specifically for $\beta_{\rm i} = 2,$ the reconnection rate at $T_{\rm e}/T_{\rm i} =1$ is $|v_{\rm in}|/|v_{\rm out}| \approx 0.04,$ only half that of the $T_{\rm e}/T_{\rm i} =0.1$ case. So, in the high-$\beta_{\rm i}$ regime reconnection proceeds more slowly for hotter upstream electrons. As $\beta_{\rm i}$ increases, the plasma is less prone to be compressed during the reconnection process. As shown in Fig.~\ref{fig:betainout} (d), the downstream to upstream density ratio decreases as $\beta_{\rm i}$ increases. The value of $\xoverline{n}_{\rm down}/n_{0}$ is nearly independent of the upstream temperature ratio. Though the ratio $\xoverline{n}_{\rm down}/n_{0}$ approaches unity at high $\beta_{\rm i},$ this does not necessarily imply that the fractional contribution of adiabatic heating to total heating is negligible at high $\beta_{\rm i}$ (we demonstrate this in Section~\ref{ssec:moneyplots}).
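The reconnection rates quoted above follow directly from the measured inflow and outflow speeds. As a minimal sketch (the input values are approximate readings from panels (a) and (b), not new measurements):

```python
def reconnection_rate(v_in: float, v_out: float) -> float:
    """Reconnection rate, defined as the ratio of inflow to outflow speed."""
    return v_in / v_out

# Low beta_i: |v_in|/v_A ~ 0.08 with a nearly Alfvenic outflow, |v_out|/v_A ~ 1.
rate_low = reconnection_rate(0.08, 1.0)    # ~ 0.08

# beta_i = 2, Te/Ti = 1: an assumed reading of |v_in|/v_A ~ 0.03 together with
# |v_out|/v_A ~ 0.75 reproduces the quoted rate of ~0.04.
rate_high = reconnection_rate(0.03, 0.75)
```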
Lastly, in panel (e) we show the $\beta_{\rm i}$-dependence of the reconnection layer width $\delta_{\rm rec},$ in units of the electron skin depth $c/\omega_{\rm pe}$. We measure the width across the reconnection layer, as identified by Eq.~\ref{eq:criterion1}, at a distance $\sim430\,c/\omega_{\rm pe}$ downstream of the central X-point. The width shows strong variability in time at low $\beta_{\rm i}$, as secondary islands pass through the region employed for our measurements (note the large error bars). Despite the uncertainty in the measurement, panel (e) shows a consistent trend of increasing reconnection layer width $\delta_{\rm rec}$ with $\beta_{\rm i}$. The measured values of $\delta_{\rm rec}$ lie in the range $25$ -- $50\,c/\omega_{\rm pe},$ which is admittedly close to the chosen current sheet thickness at initialization, $\Delta = 40\,c/\omega_{\rm pe}.$ However, we demonstrate in Appendix \ref{sec:recl_conv} that the measured reconnection layer width is independent of our choice of the initial sheet thickness. It follows that our measurement leads to a reliable assessment of the opening angle of the reconnection outflow, $\sim \delta_{\rm rec}/(430\,c/\omega_{\rm pe})\sim 0.1$. \begin{figure} \centering \includegraphics[width=0.45\textwidth,trim={0.2cm 1cm 0 1cm},clip]{plot_10.pdf} \\ \caption{For upstream temperature ratios $T_{\rm e}/T_{\rm i} =0.1$ (blue), 0.3 (green), and 1 (red), we present the $\beta_{\rm i}$-dependence of various upstream (dashed) and downstream (solid) quantities; (a): electron dimensionless temperature, $\theta_{\rm e}$; (b): proton dimensionless temperature, $\theta_{\rm i}$; (c): proton-to-electron skin depth ratio, $(c/\omega_{\rm pi})/(c/\omega_{\rm pe}).$ The simulations shown here use a mass ratio $m_{\rm i}/m_{\rm e}=25$ and magnetization $\sigma_{w}=0.1$.
\label{fig:theta_updown}} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{plot_11.pdf} \\ \caption{For upstream temperature ratios $T_{\rm e}/T_{\rm i} =0.1$ (blue), 0.3 (green), and 1 (red), $\beta_{\rm i}$-dependence of heating efficiencies; (a): electron total, $M_{T\rm e, tot}$; (b): electron adiabatic, $M_{T\rm e,ad}$; (c): electron irreversible, $M_{T\rm e,irr}$; (d): proton total, $M_{T\rm i, tot}$; (e): proton adiabatic, $M_{T\rm i,ad}$; (f): proton irreversible, $M_{T\rm i,irr}$; (g): electron and proton total, $M_{T\rm e, tot}+M_{T\rm i, tot}$; (h): electron and proton adiabatic, $M_{T\rm e,ad}+M_{T\rm i,ad}$; (i): electron and proton irreversible, $M_{T\rm e,irr}+M_{T\rm i,irr}$. The simulations shown here use a mass ratio $m_{\rm i}/m_{\rm e}=25$ and magnetization $\sigma_{w}=0.1$. Error bars, mostly smaller than the plotted symbols, represent one standard deviation from the mean. The decomposition of total heating into irreversible and adiabatic components shows that electron and proton heating at low $\beta_{\rm i}$ is accompanied by an increase in entropy, while heating in the high-$\beta_{\rm i}$ regime tends to be dominated by adiabatic compression. \label{fig:mttt}} \end{figure*} \subsection{Dependence of particle heating on \texorpdfstring{$\beta_{\MakeLowercase{i}}$}{betai} and \texorpdfstring{$T_{\MakeLowercase{e}}/T_{\MakeLowercase{i}}$}{teti}} \label{ssec:moneyplots} In Fig.~\ref{fig:theta_updown}, we show the $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}$ dependence of electron (panel (a)) and proton (panel (b)) dimensionless temperature, and the ratio of proton-to-electron skin depth (panel (c); see Eq.~\ref{eq:skindepth}). In each panel, solid and dashed lines indicate downstream and upstream quantities, respectively. As in Fig.~\ref{fig:betainout}, blue, green, and red points refer to electron-to-proton temperature ratios $T_{\rm e}/T_{\rm i}=0.1, 0.3$, and $1$, respectively.
The upstream electron dimensionless temperatures lie in the range $10^{-3}$ to $10$, as in Table~\ref{tab:params}; for protons, the dimensionless temperature in the upstream ranges from $4\times10^{-4}$ to $0.4$. The range of temperatures in the downstream is smaller than in the upstream (compare the solid and dashed lines in Fig.~\ref{fig:theta_updown}(a) and (b)). At low $\beta_{\rm i}$, the available magnetic energy is large compared to the particle thermal content in the upstream, so dissipation of the magnetic field leads to electron and proton temperatures in the downstream that are nearly independent of $\beta_{\rm i}$. At high $\beta_{\rm i}$, the energy transferred from the fields to the particles is much smaller than the initial particle thermal content, giving a minor increase in temperature from upstream to downstream. Even if the fractional increase in temperature is extremely small at high $\beta_{\rm i}$, the fraction of available magnetic energy being converted into particle heating might still be as large as at low $\beta_{\rm i}$. The rest of the section addresses this question. We show the plasma-$\beta_{\rm i}$ and temperature ratio $T_{\rm e}/T_{\rm i}$ dependence of electron and proton heating in Fig.~\ref{fig:mttt}. The simulations presented here are those referenced in Table~\ref{tab:params}, for which $m_{\rm i}/m_{\rm e}=25$ and $\sigma_{w}=0.1$. We indicate the total, adiabatic, and irreversible heating by $M_{T\rm e,tot}$ (Eq. \ref{eq:mte}), $M_{T\rm e,ad}$ and $M_{T\rm e,irr}$ (Eqs. \ref{eq:mtencmtec}) for electrons, and by $M_{T\rm i,tot}$ (Eq. \ref{eq:mti}), $M_{T\rm i,ad}$ and $M_{T\rm i,irr}$ (Eqs. \ref{eq:mtincmtic}) for protons. Blue, green, and red points indicate simulations with upstream electron-to-proton temperature ratios of $0.1, 0.3,$ and $1$, respectively.
As in Section \ref{ssec:inflowoutflow}, filled points are the time-averaged results of our simulations, and vertical error bars indicate one standard deviation from the mean. The top, middle, and bottom rows show heating fractions of electrons, protons, and of the overall fluid, respectively, which we now discuss in turn. In Fig.~\ref{fig:mttt}(a), we show the dependence of the total electron heating efficiency $M_{T\rm e,tot}$ on $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}$. Although the initial plasma $\beta_{\rm i}$ spans more than two orders of magnitude, and the initial temperature ratio an order of magnitude, even the most extreme values of $M_{T\rm e,tot}$ differ by no more than a factor of $\sim4$. The value of $M_{T\rm e,tot}$ in our $\beta_{\rm i} \lesssim 0.5$ simulations, for which electrons stay non-relativistic both in the upstream and in the downstream, is $\sim 0.04,$ which is consistent with the results of non-relativistic reconnection by \citet{Shay2014} for mass ratio $m_{\rm i}/m_{\rm e}=25$.\footnote{In \citet{Shay2014}, the magnetization was $\sigma_w\approx0.004-0.1$. However, as long as $\sigma_w\ll1$ and all the species stay at non-relativistic temperatures, we expect the reconnection physics to be independent of the flow magnetization.} As shown by \citet{Shay2014}, the electron heating efficiency in non-relativistic reconnection is expected to decrease with increasing mass ratio; in Sections \ref{ssec:massratio} and \ref{ssec:tratio1836}, we present the dependence of the electron and proton heating fractions in trans-relativistic reconnection on $m_{\rm i}/m_{\rm e}$, up to the physical value. The total electron heating fraction $M_{T\rm e,tot}$ is decomposed into adiabatic and irreversible components in panels (b) and (c). 
By comparing the two panels, we see that most of the heating in the low-$\beta_{\rm i}$ regime comes from irreversible processes, i.e., it is accompanied by a genuine increase in entropy, while heating at high $\beta_{\rm i}$ mostly results from adiabatic compression. The electron adiabatic heating efficiency increases with the inflow temperature ratio $T_{\rm e}/T_{\rm i}$ (Fig.~\ref{fig:mttt}(b)). The dependence is most apparent at high $\beta_{\rm i},$ where adiabatic heating represents a significant contribution to the total heating. The dependence of adiabatic heating on temperature ratio can be simply understood through the adiabatic law $T/\xoverline{n}^{\hat{\gamma}-1}=\text{const.}$ As electrons get compressed from upstream to downstream, the adiabatic heating fraction can be written as\footnote{Eq. \ref{eq:compapprox} assumes that the adiabatic index is constant as electrons pass from upstream to downstream, which is a good approximation when electrons are ultra-relativistic in both regions (so, for high $\beta_{\rm i}$ and large $T_{\rm e}/T_{\rm i}$); still, in all the simulations used in Fig.~\ref{fig:mttt}, we find that the electron adiabatic index changes by no more than $\hat{\gamma}_{\rm e,up} - \hat{\gamma}_{\rm e,down} \approx0.1$. In any case, Eq. \ref{eq:compapprox} is presented only for illustrative purposes, and we properly account for the effect of a variable adiabatic index in our calculation of the heating fractions.} \begin{align} \label{eq:compapprox} M_{T\rm e,ad} &= \frac{1}{2}\beta_{\rm i} \frac{T_{\rm e}}{T_{\rm i}}\left[ \left(\frac{\xoverline{n}_{\rm down}}{n_{0}} \right)^{\hat{\gamma}_{\rm e}-1} - 1\right]. \end{align} As shown in Fig.~\ref{fig:betainout}(d), the ratio of downstream to initial upstream density $\xoverline{n}_{\rm down}/n_{0}$ is nearly independent of the upstream temperature ratio, so that $M_{T\rm e,ad}\propto T_{\rm e}/T_{\rm i}$ at fixed $\beta_{\rm i}$. Eq.
\ref{eq:compapprox} also provides insight into the $\beta_{\rm i}$-dependence of adiabatic heating. It shows that, for a given temperature ratio, the adiabatic heating efficiency would scale linearly with $\beta_{\rm i}$, if the compression ratio $\xoverline{n}_{\rm down} /n_{0}$ were fixed. As shown in Fig. \ref{fig:betainout}(d), the downstream to upstream density ratio decreases with $\beta_{\rm i}$, approaching unity at high $\beta_{\rm i}$. However, the decrease of $\xoverline{n}_{\rm down} /n_{0}$ with $\beta_{\rm i}$ is quite shallow, and insufficient to counteract the linear dependence on $\beta_{\rm i}$ in Eq. \ref{eq:compapprox}. It follows that at low $\beta_{\rm i}$ the effect of adiabatic heating is negligible, while at high $\beta_{\rm i}$ the role of adiabatic heating can be more important than that of irreversible heating. This statement can be further justified by considering electron energization in the diffusion region as the main source of irreversible electron heating, following \citet{Le2016}. In the diffusion region, the electron energy increases by $e E_{\rm rec} \ell_{\rm e}$, where $E_{\rm rec}\sim 0.1 (v_{\rm A}/c) B_0$ is the reconnection electric field (assuming a reconnection inflow speed of $\sim 0.1 \,v_{\rm A}$, see Fig. \ref{fig:betainout}(a)) and $\ell_{\rm e}$ is the distance traveled by electrons along the electric field (along $z$, in our geometry). For the sake of simplicity, let us now assume that $\beta_{\rm i}$ is sufficiently small that $w\sim n_0 m_{\rm i} c^2$ and so $\sigma_w\sim \sigma_{\rm i}$ (this is the case for $\beta_{\rm i}\lesssim0.1$, at our reference magnetization $\sigma_w=0.1$). The corresponding irreversible heating efficiency can be written in the case $\sigma_w\sim \sigma_{\rm i}\lesssim1$ as \begin{equation}\label{eq:dwetot} M_{T\rm e,irr}\sim 0.1\frac{\ell_{\rm e}}{c/\omega_{\rm pi}} \end{equation} which does not depend explicitly on $\beta_{\rm i}$.
It follows that, as long as $\ell_{\rm e}$ is a weak function of $\beta_{\rm i}$, the adiabatic heating efficiency in Equation \ref{eq:compapprox}, which scales linearly with $\beta_{\rm i}$, will be unimportant at low $\beta_{\rm i}$, whereas it will dominate over irreversible heating at high $\beta_{\rm i}$. We remark that Equation \ref{eq:dwetot} does not capture a number of important effects. First, by tracking individual particle orbits, we have found that the in-plane components of the electric field, which we have neglected above, can provide a significant contribution to the total electron energization (a comprehensive discussion of the physics of electron heating will be presented elsewhere). Second, we have neglected the $\beta_{\rm i}$-dependence of the reconnection rate. Third, we have assumed $w\sim n_0 m_{\rm i} c^2$, which is incorrect at high $\beta_{\rm i}$. Fourth, we do not have a direct measure of $\ell_{\rm e}$, which would assess its dependence on the flow conditions. For these reasons, it is likely that the electron irreversible heating will be dependent on $\beta_{\rm i}$. In fact, the irreversible electron heating efficiency (shown in Fig.~\ref{fig:mttt}(c)) systematically decreases with $\beta_{\rm i}$, and the trend is largely independent of the initial temperature ratio, apart from the case with $\beta_{\rm i}=2$ (rightmost points in Fig.~\ref{fig:mttt}(c)).
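The competition between the two heating channels can be made concrete with a short numerical sketch of Eqs. \ref{eq:compapprox} and \ref{eq:dwetot}. The choices $\hat{\gamma}_{\rm e}=4/3$, a fixed compression ratio of 1.2, $T_{\rm e}/T_{\rm i}=1$, and $\ell_{\rm e} \sim c/\omega_{\rm pi}$ (so $M_{T\rm e,irr}\sim 0.1$) are all illustrative assumptions, so the resulting crossover value should not be read as a prediction:

```python
def m_te_adiabatic(beta_i, te_ti, compression, gamma_hat=4.0 / 3.0):
    """Adiabatic electron heating efficiency, Eq. (compapprox):
    M_Te,ad = (1/2) beta_i (Te/Ti) [ (n_down/n_0)^(gamma_hat - 1) - 1 ].
    gamma_hat = 4/3 assumes ultra-relativistic electrons (an assumption)."""
    return 0.5 * beta_i * te_ti * (compression ** (gamma_hat - 1.0) - 1.0)

# Linear scaling with Te/Ti at fixed beta_i and compression:
low = m_te_adiabatic(beta_i=2.0, te_ti=0.1, compression=1.2)
high = m_te_adiabatic(beta_i=2.0, te_ti=1.0, compression=1.2)   # high/low ~ 10

# Against a beta_i-independent irreversible estimate, Eq. (dwetot) with
# l_e ~ c/omega_pi (assumed), the linear beta_i-dependence of the adiabatic
# term implies a crossover at beta_i of order unity:
m_irr = 0.1
beta_cross = m_irr / m_te_adiabatic(1.0, 1.0, 1.2)   # ~ 3 for these choices
```

With these (assumed) inputs, adiabatic heating is negligible at $\beta_{\rm i}\ll1$ and overtakes the irreversible estimate toward $\beta_{\rm i}$ of order unity, in line with the trend described in the text.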
Here, the irreversible heating fraction reaches $M_{T\rm e,irr} \approx0.03$ for $T_{\rm e}/T_{\rm i}=1$, a factor of $\sim3$ larger than for the $\beta_{\rm i}=2$ cases with lower temperature ratios, $T_{\rm e}/T_{\rm i}=0.1$ and 0.3.\footnote{We have extensively checked this result, finding that it holds regardless of the simulation boundary conditions (periodic or outflow in the $x$ direction, or double periodic; see Appendix~\ref{sec:outvper}) and the number of computational particles per cell.} We attribute the peculiar behavior of this case to the fact that, among the $m_{\rm i}/m_{\rm e}=25$ simulations presented in Fig.~\ref{fig:mttt}, the $\beta_{\rm i}=2,\, T_{\rm e}/T_{\rm i}=1$ case is the only one for which the scale separation $(c/\omega_{\rm pi})/(c/\omega_{\rm pe})$ between protons and electrons approaches unity (see Fig.~\ref{fig:theta_updown}(c)). For the case $\beta_{\rm i}=2, T_{\rm e}/T_{\rm i}=1$ in Fig.~\ref{fig:mttt}, this statement holds true for both the {\it upstream} and the {\it downstream} scale separation, since the reconnection process at high $\beta_{\rm i}$ does not appreciably change the plasma thermal content. However, as we further discuss in the next two subsections, where we investigate the dependence of our results on the mass ratio and the magnetization, we find that the necessary and sufficient condition for the electron and proton heating efficiencies to be comparable is that the {\it downstream} scale separation approaches unity. In retrospect, this is not surprising, since if $(c/\omega_{\rm pi})/(c/\omega_{\rm pe})\rightarrow 1$ in the downstream, the fluid effectively behaves like an electron-positron plasma. In Fig.~\ref{fig:mttt} (second row of panels), we also explore the $\beta_{\rm i}$-dependence of (d) total, (e) adiabatic, and (f) irreversible proton heating. 
As before, blue, green, and red points correspond to simulations with upstream $T_{\rm e}/T_{\rm i}$ of $0.1, 0.3,$ and $1,$ respectively (we change the temperature ratio by varying the electron temperature, while the proton temperature at a given $\beta_{\rm i}$ is kept fixed). While the initial dimensionless electron temperature in our simulations ranges from non-relativistic to ultra-relativistic values, protons always stay at non-relativistic or trans-relativistic energies, $\theta_{\rm i} \approx 0.0004$ -- $0.5$ (this is true in both upstream and downstream). At low $\beta_{\rm i}$, protons are heated more efficiently than electrons, typically by a factor of 2 -- 3 at mass ratio $m_{\rm i}/m_{\rm e}=25$ (compare panels (a) and (d), $M_{T\rm e, tot} \approx 0.05$ while $M_{T\rm i, tot} \approx 0.13$). At larger values of $m_{\rm i}/m_{\rm e},$ the ratio of proton to electron heating is even larger, as we discuss in Sections~\ref{ssec:massratio} and \ref{ssec:tratio1836}. Once again, the notable exception is the high-$\beta_{\rm i}$ case with $\beta_{\rm i}=2 $ and $T_{\rm e}/T_{\rm i}=1,$ where the electron and proton heating fractions are comparable, $M_{T\rm e, tot} \approx 0.06$ and $M_{T\rm i, tot} \approx 0.08$. Similar to electrons, the irreversible component of proton heating decreases with $\beta_{\rm i},$ and shows only weak dependence on the upstream temperature ratio $T_{\rm e}/T_{\rm i}$ (panel (f)). As shown in panel (e), the fractional contribution of adiabatic heating to the total proton heating increases with $\beta_{\rm i}$, as for electrons. Finally, we show the total particle (i.e., sum of electron and proton) heating, as well as the corresponding adiabatic and irreversible components, in Fig.~\ref{fig:mttt}(g)-(i). 
Given that protons are heated more efficiently than electrons, the trends in the bottom row of Fig.~\ref{fig:mttt} are primarily controlled by protons (again, with the exception of the case $\beta_{\rm i}=2,$ $T_{\rm e}/T_{\rm i}=1$). Panel (g) shows that the total particle heating efficiency is $\sim0.15$ across all simulations, with a weakly declining trend with increasing $\beta_{\rm i}$. Panels (h) and (i) show that, as discussed for electrons and protons individually, heating in the low-$\beta_{\rm i}$ regime is associated with an increase in entropy, while at high $\beta_{\rm i}$ it is dominated by adiabatic compression. While we cast the heating fractions in Fig.~\ref{fig:mttt} in terms of temperature differences between upstream and downstream, they may be expressed, alternatively, via differences in internal energy per particle; see Appendix \ref{sec:appmu}. \begin{figure*} \centering \includegraphics[width=\textwidth,trim={0cm 0cm 0 0cm},clip]{plot_12.pdf} \\ \caption{Mass ratio $m_{\rm i}/m_{\rm e}$ dependence of heating efficiencies; (a): electron total, $M_{T\rm e,tot}$; (b): electron adiabatic, $M_{T\rm e,ad}$; (c): electron irreversible, $M_{T\rm e,irr}$; (d): proton total, $M_{T\rm i,tot}$; (e): proton adiabatic, $M_{T\rm i,ad}$; (f): proton irreversible, $M_{T\rm i,irr}$, for $T_{\rm e}/T_{\rm i}=1$ simulations with mass ratios $m_{\rm i}/m_{\rm e}=10$ (dotted), $25$ (dashed), and $1836$ (solid); the legend is located in the upper part of panel (b). Points in panels (a)--(c) are colored according to electron dimensionless temperature in the upstream (color bar is to the right of (c)), and points in (d)--(f) according to proton dimensionless temperature in the upstream (color bar is to the right of (f)). The irreversible heating is remarkably independent of mass ratio at high $\beta_{\rm i}(= 2),$ while at low $\beta_{\rm i}$, the irreversible electron heating efficiency decreases with increasing mass ratio.
\label{fig:weirdpoint}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\textwidth,trim={0.5cm 0cm 0cm 0cm},clip]{qtefff.pdf} \\ \caption{(a): $\beta_{\rm i}$-dependence of electron total heating $M_{T\rm e,tot}$ (solid blue), electron irreversible heating $M_{T\rm e,irr}$ (dashed blue), proton total heating $M_{T\rm i,tot}$ (solid red), and proton irreversible heating $M_{T\rm i,irr}$ (dashed red); (b): $\beta_{\rm i}$-dependence of electron-to-overall total heating ratio $q_{T\rm e,tot}$ (solid blue) and electron-to-overall irreversible heating ratio $q_{T\rm e,irr}$ (dashed blue), as defined in Eqs. \ref{eq:qte1}, \ref{eq:qte2}. Here, $\sigma_{w} = 0.1,$ $T_{\rm e}/T_{\rm i}=1,$ and $m_{\rm i}/m_{\rm e}=1836.$ \label{fig:qte}} \end{figure*} \subsection{Dependence of particle heating on \texorpdfstring{$m_{\MakeLowercase{i}}/m_{\MakeLowercase{e}}$}{mime}} \label{ssec:massratio} We have extended our results up to the physical mass ratio $m_{\rm i}/m_{\rm e}=1836$, and in this section we focus on the case with $T_{\rm e}/T_{\rm i} = 1$ (runs with $\sigma_w=0.1$ and unequal temperature ratios are presented in Section \ref{ssec:tratio1836}). The separation between the electron scale $c/\omega_{\rm pe}$ and the proton scale $c/\omega_{\rm pi}$ is regulated by Eq. \ref{eq:skindepth}. For non-relativistic particles, the ratio of proton to electron skin depth is $\sqrt{m_{\rm i}/m_{\rm e}}\sim 40$, so that a large simulation domain is required to properly capture the proton physics. However, in the trans-relativistic regime of our simulations, the particles can approach (or exceed, in the case of electrons) relativistic temperatures. Here, the effective increase in electron inertia can bring the ratio of proton to electron skin depth close to unity (see Eq. \ref{eq:skindepth}).
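The role of relativistic electron inertia can be sketched numerically. Since Eq. \ref{eq:skindepth} is given elsewhere in the paper, the sketch below simply assumes that the skin depth of species $s$ scales as $\sqrt{m_s \langle\gamma\rangle_s}$, and uses the crude interpolation $\langle\gamma\rangle \approx 1 + 3\theta$ between the non-relativistic and ultra-relativistic limits of the mean thermal Lorentz factor; both choices are assumptions made for illustration only:

```python
import math

def mean_lorentz(theta):
    """Crude interpolation for the mean thermal Lorentz factor:
    <gamma> -> 1 for theta << 1 and -> 3*theta for theta >> 1 (an assumption)."""
    return 1.0 + 3.0 * theta

def skin_depth_ratio(mass_ratio, theta_e, te_ti):
    """Proton-to-electron skin depth ratio, assuming (in place of the paper's
    Eq. skindepth) that c/omega_ps scales as sqrt(m_s <gamma>_s)."""
    theta_i = theta_e / (mass_ratio * te_ti)   # theta_i = T_i / (m_i c^2)
    return math.sqrt(mass_ratio * mean_lorentz(theta_i) / mean_lorentz(theta_e))

# Non-relativistic electrons: the ratio approaches sqrt(m_i/m_e) ~ 43.
r_nonrel = skin_depth_ratio(1836.0, theta_e=1e-3, te_ti=1.0)

# Ultra-relativistic electrons (theta_e ~ 1e3): the extra electron inertia
# drives the scale separation toward unity.
r_rel = skin_depth_ratio(1836.0, theta_e=1e3, te_ti=1.0)
```

With these assumptions, $\theta_{\rm e}\sim10^3$ and $T_{\rm e}/T_{\rm i}=1$ at $m_{\rm i}/m_{\rm e}=1836$ give a scale separation of order unity, in line with the behavior discussed in the text.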
This condition holds, for example, in simulations \textbf{C[3], C[4],} and \textbf{B[4]}, when the mass ratio is increased to $m_{\rm i}/m_{\rm e}=1836$ at fixed $\sigma_w$ and $\beta_{\rm i}$. We show in Fig.~\ref{fig:weirdpoint} the dependence of total (a), adiabatic (b), and irreversible (c) electron heating on $\beta_{\rm i},$ for mass ratios $m_{\rm i}/m_{\rm e}=10,25,$ and $1836.$ We fix the magnetization $\sigma_{w}=0.1,$ and the temperature ratio $T_{\rm e}/T_{\rm i}=1;$ the legend is shown in the upper part of panel (b). The points are colored according to the dimensionless temperature of upstream electrons (the corresponding colorbar is to the right of panel (c)), ranging from non-relativistic ($\theta_{\rm e} \sim 10^{-4}$) to ultra-relativistic ($\theta_{\rm e} \sim 10^{3}$) values. In agreement with earlier studies of non-relativistic reconnection by \citet{Dahlin2014} and \citet{Le2016}, we find that the total electron heating efficiency at low $\beta_{\rm i}$ is a decreasing function of mass ratio. For the realistic mass ratio, at low $\beta_{\rm i}$ the total heating fraction $M_{T\rm e,tot} \approx 0.016$ is in good agreement with the observed value in the magnetopause, $M_{T\rm e,tot}=0.017$ \citep{Phan2013}. At $\beta_{\rm i} = 2,$ the electron heating efficiency is remarkably insensitive to the mass ratio, with $M_{T\rm e,tot} \approx 0.06$. As we have anticipated above, in this case the upstream and downstream skin depths of protons and electrons are comparable (once we account for the effects of relativistic inertia), so the physics should resemble that of an electron-positron plasma, regardless of the mass ratio. The adiabatic heating efficiency (panel (b)) shows only a weak dependence on mass ratio, in agreement with Eq. \ref{eq:compapprox}. 
For realistic mass ratios, electron heating is governed by irreversible processes at low $\beta_{\rm i},$ adiabatic heating dominates at intermediate $\beta_{\rm i}\sim 0.1-1$, while the two components equally contribute at high $\beta_{\rm i}\sim 2$. We show the $\beta_{\rm i}$-dependence of the proton heating fractions $M_{T\rm i,tot}, M_{T\rm i,ad},$ and $M_{T\rm i,irr}$ in panels (d), (e), and (f). The points are colored according to the upstream dimensionless proton temperature, $\theta_{\rm i}$ (the scale is to the right of panel (f)). The upstream proton temperatures are non-relativistic or trans-relativistic, with $\theta_{\rm i} \lesssim 0.5$. At fixed $\sigma_w$ and $\beta_{\rm i}$, the initial proton temperature stays the same, when we vary the mass ratio (as opposed to the electron temperature, which increases with mass ratio). So, the proton heating efficiencies are expected to remain unchanged, as long as the box size $L_{x}$ is sufficiently large (in units of the proton skin depth $c/\omega_{\rm pi}$) to capture the physics of proton heating. In the bottom row of Fig.~\ref{fig:weirdpoint}, the proton heating fractions $M_{T\rm i,tot}, M_{T\rm i,ad},$ and $M_{T\rm i,irr}$ are nearly independent of the mass ratio, which demonstrates that even for the realistic mass ratio, the box used here is sufficiently large to capture the physics of proton heating (and even more so, of electron heating). 
The results discussed in Section \ref{ssec:moneyplots} for $m_{\rm i}/m_{\rm e}=25$ and $T_{\rm e}/T_{\rm i}=1$ are therefore still valid here: proton heating is dominated by irreversible processes at low $\beta_{\rm i},$ whereas irreversible and adiabatic components equally contribute at high $\beta_{\rm i};$ the irreversible heating efficiency of protons is a decreasing function of $\beta_{\rm i}$; protons are heated more efficiently than electrons (although the total proton-to-electron heating ratio for $m_{\rm i}/m_{\rm e} =1836$ is $\sim 7$ at low $\beta_{\rm i}$, larger than the value measured for $m_{\rm i} / m_{\rm e}=25$, since the electron heating efficiency decreases with mass ratio); the heating fractions of the two species approach comparable values at $\beta_{\rm i} = 2,$ with $M_{T\rm i, tot} \approx 0.08$ and $M_{T\rm e, tot} \approx 0.06$. In Fig.~\ref{fig:qte}(a), we directly compare the $\beta_{\rm i}$-dependence of electron and proton heating fractions $M_{T\rm e,tot}$ (solid blue), $M_{T\rm e,irr}$ (dashed blue), $M_{T\rm i,tot}$ (solid red), and $M_{T\rm i,irr}$ (dashed red) for $m_{\rm i}/m_{\rm e}=1836, \sigma_{w}=0.1,$ and $T_{\rm e}/T_{\rm i}=1$.\footnote{The error bars in Fig.~\ref{fig:qte}(a) are larger for protons than electrons (for electrons, they are smaller than the size of the plot symbols), but the fractional error is the same. Additionally, the error bars are larger at low $\beta_{\rm i}$. As described in Section \ref{ssec:inflowoutflow}, this results from the frequent formation of secondary islands at low $\beta_{\rm i}.$} As anticipated above, the proton and electron total and irreversible heating fractions differ roughly by a factor of $\sim 7$ at low $\beta_{\rm i},$ but they approach a similar value at $\beta_{\rm i}=2$ ($\approx0.03$ for the irreversible component and $\approx0.06$ for the total).
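These comparisons reduce to simple ratios, using the electron-to-overall heating fraction defined below (Eq. \ref{eq:qte1}). The inputs in this sketch are the approximate values quoted in the text, not independent measurements:

```python
def q_electron(m_te: float, m_ti: float) -> float:
    """Electron-to-overall heating ratio, q = M_Te / (M_Te + M_Ti)."""
    return m_te / (m_te + m_ti)

# Low beta_i, m_i/m_e = 1836: M_Te,tot ~ 0.016 with a proton-to-electron
# heating ratio of ~7 implies M_Ti,tot ~ 0.11.
m_te_low = 0.016
m_ti_low = 7.0 * m_te_low
q_low = q_electron(m_te_low, m_ti_low)    # = 1/8, close to the quoted ~0.14

# beta_i = 2: comparable efficiencies, M_Te,tot ~ 0.06 and M_Ti,tot ~ 0.08.
q_high = q_electron(0.06, 0.08)           # ~ 0.43, close to the quoted ~0.45
```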
In Fig.~\ref{fig:qte}(b), we show the $\beta_{\rm i}$-dependence of the electron-to-overall total heating ratio (solid blue), \begin{align} \label{eq:qte1} q_{T\rm e,tot} = \frac{M_{T\rm e,tot}}{M_{T\rm e,tot} + M_{T\rm i,tot}}, \end{align} and, similarly, of the electron-to-overall irreversible heating ratio (dashed blue), \begin{align} \label{eq:qte2} q_{T\rm e,irr} = \frac{M_{T\rm e,irr}}{M_{T\rm e,irr} + M_{T\rm i,irr}}. \end{align} At low $\beta_{\rm i},$ the electron-to-overall total heating ratio is $q_{T\rm e,tot} \approx 0.14$, and it increases with $\beta_{\rm i}$ up to $q_{T\rm e,tot} \approx 0.45$ at $\beta_{\rm i}=2.$ The corresponding ratio of the irreversible components $q_{T\rm e, irr}$ is comparable to $q_{T\rm e, tot}$ at both low $\beta_{\rm i}$ (where adiabatic heating is negligible) and $\beta_{\rm i}=2$ (where adiabatic and irreversible contributions are similar), but for intermediate $\beta_{\rm i}$ we find that $q_{T\rm e, irr}$ can be as low as $0.07$, smaller than $q_{T\rm e, tot}$ by up to a factor of $\approx 3$. \begin{figure*} \centering \includegraphics[width=\textwidth,trim={0cm 0cm 0cm 0cm},clip]{plot_14.pdf} \\ \caption{Dependence of the heating efficiencies on the magnetization $\sigma_w$ (normalized to the enthalpy density), with a layout similar to that of Fig.~\ref{fig:weirdpoint}; (a): electron total, $M_{T\rm e,tot}$; (b): electron adiabatic, $M_{T\rm e,ad}$; (c): electron irreversible, $M_{T\rm e,irr}$; (d): proton total, $M_{T\rm i,tot}$; (e): proton adiabatic, $M_{T\rm i,ad}$; (f): proton irreversible, $M_{T\rm i,irr}$. We fix $T_{\rm e}/T_{\rm i}=1$ and $m_{\rm i}/m_{\rm e}=1836$, and vary the magnetization $\sigma_{w}=0.1$ (green), $0.3$ (purple), $1$ (brown), $3$ (magenta), $10$ (black); the legend is located in the upper part of panel (b).
Points in panels (a)--(c) are colored according to $\theta_{\rm e, up}$ (color bar is to the right of (c)), and points in (d)--(f) according to $\theta_{\rm i, up}$ (color bar is to the right of (f)). \label{fig:mime1836_sig}} \end{figure*} \begin{figure} \centering \includegraphics[width=0.45\textwidth,trim={0 0 0 0},clip]{scalesep_sig.pdf} \\ \caption{$\beta_{\rm i}$-dependence of downstream proton-to-electron skin depth ratio, $(c/\omega_{\rm pi})/(c/\omega_{\rm pe})$ (see Eq.~\ref{eq:skindepth}), for magnetizations $\sigma_{w}=0.1$ (green), 0.3 (purple), 1 (brown), 3 (magenta), and 10 (black). For these simulations, the upstream electron-to-proton temperature ratio is $T_{\rm e}/T_{\rm i}=1,$ and $m_{\rm i}/m_{\rm e}=1836.$ \label{fig:scalesep_sig}} \end{figure} \begin{figure*} \centering \includegraphics[width=0.9\textwidth,trim={0.5cm 0cm 0cm 0cm},clip]{quefff.pdf} \\ \caption{(a): Dependence on the magnetization $\sigma_{\rm i}$ (normalized to rest-mass energy density) of electron total heating efficiency $M_{u\rm e,tot}$ (solid blue), irreversible heating efficiency $M_{u\rm e,irr}$ (dashed blue), proton total heating efficiency $M_{u\rm i,tot}$ (solid red), and proton irreversible heating efficiency $M_{u\rm i,irr}$ (dashed red). (b): Dependence on the magnetization $\sigma_{\rm i}$ of the electron-to-overall heating ratio $q_{u\rm e,tot}$ (solid blue) as in Eq. \ref{eq:quetot}, electron-to-overall irreversible heating ratio $q_{u\rm e,irr}$ (dashed blue) as in Eq. \ref{eq:que}, and empirical formula Eq. \ref{eq:que_emp} (dotted black) obtained by \citet{Werner2016} in the case $\beta_{\rm i}=0.01$.
Here, $\beta_{\rm i} \approx 0.03,$ $T_{\rm e}/T_{\rm i}=1,$ and $m_{\rm i}/m_{\rm e}=1836.$ \label{fig:que}} \end{figure*} \begin{figure} \centering \includegraphics[width=0.45\textwidth,trim={0cm 0cm 0cm 0cm},clip]{fitfunc3.pdf} \\ \caption{Comparison of the electron-to-overall heating ratio $q_{u\rm e,tot}$ between our simulations with $m_{\rm i}/m_{\rm e}=1836$ and $T_{\rm e}/T_{\rm i}=1$ (filled circles with error bars) and the best fitting formula in Eq.~\ref{eq:fit} (solid curves). We show the dependence on (a) plasma-$\beta_{\rm i}$ and (b) magnetization $\sigma_w$. In panel (a), the different colors represent magnetizations $\sigma_{w}=0.1$ (green), $0.3$ (purple), $1$ (brown), $3$ (magenta), and $10$ (black). In panel (b), the color coding of the curves is indicated in the legend (from cyan to red for increasing $\beta_{\rm i}$), while the color of the filled points refers to the colorbar on the right. In both panels, the black dotted line at $q_{u\rm e,tot}=0.5$ shows the limit of comparable heating efficiencies between electrons and protons, expected when $\beta_{\rm i} \rightarrow \beta_{\rm i, max}$ (regardless of $\sigma_{w}$) or $\sigma_{w} \gg1$ (independently of $\beta_{\rm i}$). \label{fig:fitfunc}} \end{figure} \subsection{Dependence of particle heating on magnetization} \label{ssec:sigdep} In the previous sections, we have focused on the case $\sigma_{w} = 0.1;$ in Fig.~\ref{fig:mime1836_sig}, we show the $\beta_{\rm i}$-dependence of the heating efficiencies for a suite of simulations with $\sigma_{w} =0.1, 0.3, 1, 3,$ and $10$.\footnote{At high $\sigma_{w}$, the rate of secondary island production is enhanced \citep{Sironi2016}. In the simulations with $\sigma_{w} = 1, 3, 10,$ we employ outflow boundary conditions in order to evolve the system to longer times.
This allows us to average the downstream quantities in the reconnection exhausts over a longer timespan, and obtain more reliable estimates.} We fix the temperature ratio $T_{\rm e}/T_{\rm i}=1$ and the mass ratio $m_{\rm i}/m_{\rm e} = 1836.$ The panels are similar to those in Fig.~\ref{fig:weirdpoint}: (a), (b), and (c) show the electron heating fractions $M_{T\rm e, tot}, M_{T\rm e,ad},$ and $M_{T\rm e,irr}$; (d), (e), and (f) show the proton heating fractions $M_{T\rm i, tot}, M_{T\rm i,ad},$ and $M_{T\rm i,irr}.$ The legend is in panel (b): green, purple, brown, magenta, and black curves connect the points having $\sigma_{w} = 0.1, 0.3, 1, 3,$ and $10$, respectively, to guide the eye. The points of panels (a)--(c) are colored according to the upstream dimensionless electron temperature $\theta_{\rm e}$, as indicated by the color bar to the right of panel (c). Similarly, in panels (d)--(f) the points are colored according to the upstream dimensionless proton temperature $\theta_{\rm i}$, as indicated by the color bar to the right of panel (f). For fixed $\beta_{\rm i}, T_{\rm e}/T_{\rm i},$ and $m_{\rm i}/m_{\rm e},$ an increase in magnetization leads to an increase in the upstream dimensionless temperature of both electrons and protons, which can be seen by comparing the colors of data points in panel (a) or (d) at fixed $\beta_{\rm i}$. We note that the data points in Fig.~\ref{fig:mime1836_sig} extend up to a maximum value of $\beta_{\rm i}$ that depends on $\sigma_w$. Because we define the magnetization using the enthalpy density, rather than the rest-mass energy density, the proton $\beta_{\rm i}$ cannot exceed $\beta_{\rm i,max}\sim 1/(4\sigma_w)$. For each value of $\sigma_{w},$ the points with the highest value of $\beta_{\rm i}$ are also those for which the proton-to-electron scale separation ratio $(c/\omega_{\rm pi})/(c/\omega_{\rm pe})$ is the smallest {(see Fig.~\ref{fig:scalesep_sig})}.
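This bound is easy to tabulate; the short sketch below simply evaluates the order-of-magnitude estimate $\beta_{\rm i,max}\sim 1/(4\sigma_w)$ quoted above (the $\sim$ sign indicates it is approximate, not exact) for the magnetizations used in this section:

```python
# Approximate maximum proton beta at fixed magnetization, following the
# order-of-magnitude bound beta_i,max ~ 1/(4 sigma_w) quoted in the text.
def beta_i_max(sigma_w):
    return 1.0 / (4.0 * sigma_w)

for sigma_w in (0.1, 0.3, 1.0, 3.0, 10.0):
    print(f"sigma_w = {sigma_w:4.1f}  ->  beta_i_max ~ {beta_i_max(sigma_w):.3f}")
```

For example, this gives $\beta_{\rm i,max}\sim 2.5$ at $\sigma_w=0.1$ and $\beta_{\rm i,max}\sim 0.025$ at $\sigma_w=10$, consistent with the progressively smaller $\beta_{\rm i}$ ranges covered at higher magnetization.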
We find that in the limit $\beta_{\rm i}\rightarrow \beta_{\rm i,max}$, the total electron heating efficiency shows a characteristic upturn (panel (a)), with a typical value $M_{T\rm e,tot}\approx 0.05$ that is nearly independent of $\sigma_w$. In the low-$\beta_{\rm i}$ regime, the electron total heating efficiency approaches a $\sigma_w$-dependent plateau, with higher $\sigma_w$ yielding larger electron efficiencies (panel (a)). The opposite holds for protons: higher magnetizations give smaller proton heating efficiencies (panel (d)). Indeed, for $\sigma_w=10$ the electron and proton efficiencies are comparable in the whole range of $\beta_{\rm i}$ we have explored, in agreement with the results by \citet{Sironi2015c}. As anticipated in Section \ref{ssec:moneyplots}, we find that the necessary and sufficient condition for having comparable electron and proton heating efficiencies is that the separation between the electron and proton scales in the downstream be of order unity (or equivalently, that the two species be relativistically hot, with comparable temperatures). As shown in {Fig.~\ref{fig:scalesep_sig}}, this can be achieved in two ways: ({\it i}) at high $\sigma_w$, regardless of $\beta_{\rm i}$, the reconnection process transfers so much magnetic energy to the particles that both species become relativistically hot, with comparable temperatures; ({\it ii}) at low $\sigma_w$ and in the limit $\beta_{\rm i}\rightarrow \beta_{\rm i,max}$, both electrons and protons already start relativistically hot in the upstream region (and more so, will be relativistically hot in the downstream). Most of the $\sigma_w$-dependences that we have now presented for the total heating efficiencies $M_{T\rm e, tot}$ and $M_{T\rm i, tot}$ also apply to the irreversible components $M_{T\rm e, irr}$ and $M_{T\rm i, irr}$, since the adiabatic contribution is independent of the magnetization, at fixed $\beta_{\rm i}$ (see Eq.~\ref{eq:compapprox}). 
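The scale-separation argument can be made quantitative with a rough estimate. The sketch below is not the expression used for Fig.~\ref{fig:scalesep_sig} (Eq.~\ref{eq:skindepth}); purely for illustration, it assumes that each species' inertia is enhanced by a relativistic-enthalpy factor, $m_s\rightarrow m_s(1+4\theta_s)$, where the coefficient 4 corresponds to $\hat\gamma/(\hat\gamma-1)$ with $\hat\gamma=4/3$:

```python
from math import sqrt

def skin_depth_ratio(mass_ratio, theta_e, theta_i):
    """Rough proton-to-electron skin depth ratio (c/w_pi)/(c/w_pe),
    with each species' inertia boosted by a relativistic-enthalpy
    factor (1 + 4*theta_s); an illustrative simplification only."""
    return sqrt(mass_ratio * (1.0 + 4.0 * theta_i) / (1.0 + 4.0 * theta_e))

# Cold plasma: the classical separation sqrt(m_i/m_e) ~ 43.
cold = skin_depth_ratio(1836.0, theta_e=0.0, theta_i=0.0)

# Both species relativistically hot at equal temperature
# (equal T means theta_e = (m_i/m_e) * theta_i): separation -> ~1.
hot = skin_depth_ratio(1836.0, theta_e=1836.0 * 10.0, theta_i=10.0)
print(cold, hot)
```

In this toy estimate the cold-plasma separation is $\sqrt{1836}\approx 43$, while for two relativistically hot species at equal temperature it collapses to order unity, illustrating why the plasma then behaves like a pair fluid.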
However, since the magnetization affects the efficiency of irreversible heating at fixed $\beta_{\rm i}$, while the adiabatic component remains the same, this can lead to a significant change in the relative contributions of irreversible and adiabatic heating. This can be seen, for example, at $\beta_{\rm i}\approx0.5$. For $\sigma_{w}=0.1$, $M_{T\rm e,irr}/M_{T\rm e,tot}\approx 0.1,$ whereas at $\sigma_{w}=0.3$, we find $M_{T\rm e,irr}/M_{T\rm e,tot} \approx 0.5.$ To connect with the recent work of \citet{Werner2016}, we show in Fig.~\ref{fig:que} the dependence of electron and proton heating on the magnetization $\sigma_{\rm i}$, defined with the rest-mass energy density (see Eq. \ref{eq:sigmai}). We fix the temperature ratio $T_{\rm e}/T_{\rm i}=1$, the mass ratio $m_{\rm i}/m_{\rm e}=1836,$ and $\beta_{\rm i} \approx 0.03$ (which is close to the upstream plasma $\beta_{\rm i}$ employed in \citet{Werner2016}, $\beta_{\rm i}=0.01$). In panel (a), we show the $\sigma_{\rm i}$-dependence of the electron total (solid blue), electron irreversible (dashed blue), proton total (solid red), and proton irreversible (dashed red) heating fractions, phrased in terms of internal energy as in \citet{Werner2016}: $M_{u\rm e,tot}$, $M_{u\rm e,irr}$, $M_{u\rm i,tot}$, and $M_{u\rm i,irr}$ (see Eqs.~\ref{eq:mue},~\ref{eq:mui}). As $\sigma_{\rm i}$ increases, the downstream scale separation between protons and electrons is reduced {(see Fig.~\ref{fig:scalesep_sig})}, and the two species approach comparable heating efficiencies (whereas the two differ by a factor of $\sim3$ at low magnetization). This holds for both the total efficiencies $M_{u\rm e,tot}$ and $M_{u\rm i,tot}$ and the irreversible components $M_{u\rm e,irr}$ and $M_{u\rm i,irr}$, since the amount of adiabatic heating at fixed $\beta_{\rm i}$ does not depend on $\sigma_w$.
This is further illustrated in Fig.~\ref{fig:que}(b), where we show the $\sigma_{\rm i}$-dependence of the electron-to-overall total heating fraction, phrased in terms of internal energy (solid blue), \begin{align} \label{eq:quetot} q_{u\rm e,tot} = \frac{M_{u\rm e,tot}}{M_{u\rm e,tot}+ M_{u\rm i,tot}}, \end{align} and the electron-to-overall irreversible heating ratio (dashed blue), \begin{align} \label{eq:que} q_{u\rm e,irr} = \frac{M_{u\rm e,irr}}{M_{u\rm e,irr}+ M_{u\rm i,irr}}. \end{align} Blue circles show the results of our simulations, and the black dotted line indicates the empirical formula suggested by \citet{Werner2016}, \begin{align} \label{eq:que_emp} q_{u\rm e,emp} = \frac{1}{4} \left( 1 + \sqrt{\frac{\sigma_{\rm i}/5}{2 + \sigma_{\rm i}/5}} \right). \end{align} We find reasonable agreement between this empirical formula and our simulations, for $\beta_{\rm i}\approx0.03$. For low values of the magnetization, $q_{u\rm e,tot} \approx q_{u\rm e,irr}\approx0.25$, but as $\sigma_{\rm i}$ increases toward the ultra-relativistic limit, $q_{u\rm e,tot}$ and $q_{u\rm e,irr}$ approach $\approx 0.5$, i.e., electrons and protons are heated with comparable efficiencies. However, Fig.~\ref{fig:mime1836_sig} shows that, at fixed magnetization, the heating efficiencies depend on $\beta_{\rm i}$, a trend which cannot be properly captured by the empirical formula of \citet{Werner2016}. We therefore propose the following formula, which captures the dependence of the electron-to-overall heating ratio $q_{u\rm e,tot}$ on both the magnetization $\sigma_w$ and the proton $\beta_{\rm i}$: \begin{align} q_{u\rm e,fit}=\frac{1}{2} \exp \left[ \frac{-(1-\beta_{\rm i}/\beta_{\rm i,max})^{3.3}}{1 + 1.2\, \sigma_{w}^{0.7}} \right], \label{eq:fit} \end{align} where $\beta_{\rm i}\leq\beta_{\rm i,max}=1/(4\sigma_{w}).$ The formula in Eq.~\ref{eq:fit} has two desirable, and physically motivated, features.
First, for $\beta_{\rm i} \rightarrow \beta_{\rm i,max},$ the electron-to-overall heating ratio approaches $0.5,$ independently of the magnetization. Second, for $\sigma_{w} \gg1,\;q_{u\rm e, tot}=0.5$, regardless of $\beta_{\rm i}$. In both these limits, the scale separation between electrons and protons in the downstream will be of order unity (as we have discussed above), which we have demonstrated is a necessary and sufficient condition for comparable heating efficiencies between electrons and protons. In Fig.~\ref{fig:fitfunc}, we compare Eq.~\ref{eq:fit} to the results of simulations with $m_{\rm i}/m_{\rm e}=1836$ and $T_{\rm e}/T_{\rm i}=1$ (this is the same set of simulations presented earlier in this section, as well as in Section~\ref{ssec:massratio}). In Fig.~\ref{fig:fitfunc}(a), we show the $\beta_{\rm i}$-dependence of the electron-to-overall heating ratio $q_{u\rm e,tot}$ for a range of $\sigma_{w}$ (see the legend). The simulation results are shown by solid filled circles, while solid lines are based on Eq.~\ref{eq:fit}. The curves are plotted up to the maximum allowed value of $\beta_{\rm i}$, namely $\beta_{\rm i,max}=1/(4\sigma_w)$. The black dotted line at $q_{u\rm e, tot}=0.5$ shows the limit of comparable heating efficiencies for electrons and protons, which will be reached as $\beta_{\rm i} \rightarrow \beta_{\rm i,max}$, independently of $\sigma_w$. We find that both the simulation data and the fitting formula in Eq.~\ref{eq:fit} asymptote to a constant value for $\beta_{\rm i}\ll\beta_{\rm i,max}$, with smaller heating ratios at lower $\sigma_w$. In the non-relativistic limit $\sigma_w\ll1$, our formula predicts that $q_{u\rm e,fit}\rightarrow0.18$, not very different from the value $q_{u\rm e,fit}\approx 0.22$ obtained for $\sigma_w=0.1$. This is consistent with the expectation that in the non-relativistic regime $\sigma_w\ll1$, the heating efficiencies will be independent of the magnetization.
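Both Eq.~\ref{eq:que_emp} and Eq.~\ref{eq:fit} are straightforward to evaluate numerically; the sketch below checks the limiting values discussed above (a direct transcription of the two formulas, nothing more):

```python
from math import exp, sqrt

def q_ue_emp(sigma_i):
    """Empirical electron heating fraction of Werner et al. (2016),
    appropriate for low beta_i (they used beta_i = 0.01)."""
    return 0.25 * (1.0 + sqrt((sigma_i / 5.0) / (2.0 + sigma_i / 5.0)))

def q_ue_fit(beta_i, sigma_w):
    """Our proposed fit, valid for beta_i <= beta_max = 1/(4 sigma_w)."""
    beta_max = 1.0 / (4.0 * sigma_w)
    return 0.5 * exp(-(1.0 - beta_i / beta_max) ** 3.3
                     / (1.0 + 1.2 * sigma_w ** 0.7))

print(q_ue_emp(0.0))                  # 0.25 in the low-magnetization limit
beta_max = 1.0 / (4.0 * 0.1)
print(q_ue_fit(beta_max, 0.1))        # 0.5 at beta_i = beta_max
print(q_ue_fit(0.0, 0.1))             # ~0.22 for sigma_w = 0.1, beta_i << beta_max
print(q_ue_fit(0.0, 1e-6))            # ~0.18 = 0.5/e in the limit sigma_w -> 0
```

The printed values reproduce the quoted limits: $q_{u\rm e,emp}\rightarrow0.25$ at low $\sigma_{\rm i}$, $q_{u\rm e,fit}=0.5$ at $\beta_{\rm i}=\beta_{\rm i,max}$, and $q_{u\rm e,fit}\rightarrow0.5\,e^{-1}\approx0.18$ for $\sigma_w\ll1$ and $\beta_{\rm i}\ll\beta_{\rm i,max}$.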
In Fig.~\ref{fig:fitfunc}(b), we show the dependence of the electron-to-overall heating ratio on the magnetization $\sigma_w$, for a range of $\beta_{\rm i}$. The simulation results are shown by filled solid circles, which are colored according to the value of $\beta_{\rm i}$ in the upstream (the color scale is located to the right of Fig.~\ref{fig:fitfunc}(b)). We select a few representative values of $\beta_{\rm i}$ and plot the corresponding predictions based on Eq.~\ref{eq:fit} with the solid curves (see the legend in the plot). The curves are plotted up to $\sigma_{w,\rm max},$ which for a fixed $\beta_{\rm i}$ is given by $\sigma_{w,\rm max} \sim 1/(4\beta_{\rm i})$. In summary, Figs.~\ref{fig:fitfunc}(a) and (b) show that our proposed formula (Eq.~\ref{eq:fit}) properly captures the magnetization and plasma-$\beta_{\rm i}$ dependence of the electron-to-overall heating ratio over the whole range of $\sigma_{w}$ and $\beta_{\rm i}$ explored in this work. \begin{figure*} \centering \includegraphics[width=\textwidth]{mt1836_pts-eps-converted-to.pdf} \\ \caption{For mass ratio $m_{\rm i}/m_{\rm e} = 1836$, magnetization $\sigma_w=0.1$ and upstream temperature ratios $T_{\rm e}/T_{\rm i} =0.1$ (blue), 0.3 (green), and 1 (red), we present the $\beta_{\rm i}$-dependence of heating efficiencies; (a): electron total, $M_{T\rm e, tot}$; (b): electron adiabatic, $M_{T\rm e,ad}$; (c): electron irreversible, $M_{T\rm e,irr}$; (d): proton total, $M_{T\rm i, tot}$; (e): proton adiabatic, $M_{T\rm i,ad}$; (f): proton irreversible, $M_{T\rm i,irr}$; (g): electron and proton total, $M_{T\rm e, tot}+M_{T\rm i, tot}$; (h): electron and proton adiabatic, $M_{T\rm e,ad}+M_{T\rm i,ad}$; (i): electron and proton irreversible, $M_{T\rm e,irr}+M_{T\rm i,irr}$.} \label{fig:mttt1836} \end{figure*} \subsection{Dependence of particle heating on \texorpdfstring{$T_{\MakeLowercase{e}}/T_{\MakeLowercase{i}}$}{teti} for \texorpdfstring{$m_{\MakeLowercase{i}}/m_{\MakeLowercase{e}} =
1836$}{mime}} \label{ssec:tratio1836} In Fig. \ref{fig:mttt1836}, we present the dependence of electron and proton heating efficiencies on the proton beta $\beta_{\rm i}$ and the temperature ratio $T_{\rm e}/T_{\rm i}$ for the realistic mass ratio $m_{\rm i}/m_{\rm e} = 1836$ (the figure layout is the same as in Fig. \ref{fig:mttt}, where we had employed a reduced mass ratio $m_{\rm i}/m_{\rm e}=25$). We fix $\sigma_w=0.1$. Even at the realistic mass ratio, the conclusions drawn in the reduced mass ratio case $m_{\rm i}/m_{\rm e}=25$ (see Section \ref{ssec:moneyplots}) still hold: electron and proton heating at low $\beta_{\rm i}$ is dominated by irreversible processes, while heating in the high-$\beta_{\rm i}$ regime is mostly a result of adiabatic compression; the irreversible component of electron heating is independent of $T_{\rm e}/T_{\rm i}$ at $\beta_{\rm i}\lesssim 1$ (Fig. \ref{fig:mttt1836} (c)); the proton irreversible heating shows only a weak dependence on temperature ratio (Fig. \ref{fig:mttt1836} (f)); protons are heated more efficiently than electrons (compare the top and middle rows). For both electrons and protons, the adiabatic heating efficiencies for $m_{\rm i}/m_{\rm e}=1836$ (Figs. \ref{fig:mttt1836}(b) and (e)) are similar to those of the reduced mass ratio case. In fact, according to Eq. \ref{eq:compapprox}, the adiabatic heating efficiency is independent of mass ratio.\footnote{While Eq. \ref{eq:compapprox} is written for electrons, an analogous equation holds for the adiabatic heating of protons, if we replace $\beta_{\rm i}T_{\rm e}/T_{\rm i}\rightarrow \beta_{\rm i}$ and $\hat{\gamma}_{\rm e}\rightarrow\hat{\gamma}_{\rm i}$.} For protons, the adiabatic heating efficiency decreases at $\beta_{\rm i} \gtrsim 2$; this is largely an effect of the decrease in the adiabatic index, as the protons transition from non-relativistic to relativistic temperatures. 
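The role of the adiabatic index in the last point is easy to quantify: for adiabatic compression of the density by a factor $r$, the temperature increases as $T\propto r^{\hat\gamma-1}$. The sketch below uses a purely illustrative compression ratio $r=4$ (an assumption for the example, not a value measured in our simulations):

```python
def adiabatic_temperature_gain(r, gamma_hat):
    """T_final / T_initial for adiabatic compression by a density
    factor r, using T ~ n**(gamma_hat - 1)."""
    return r ** (gamma_hat - 1.0)

r = 4.0  # illustrative compression ratio (an assumption)
print(adiabatic_temperature_gain(r, 5.0 / 3.0))  # non-relativistic gas: ~2.5
print(adiabatic_temperature_gain(r, 4.0 / 3.0))  # relativistic gas:     ~1.6
```

The same compression therefore heats a relativistic gas ($\hat\gamma=4/3$) noticeably less than a non-relativistic one ($\hat\gamma=5/3$), consistent with the drop in the proton adiabatic heating efficiency at $\beta_{\rm i}\gtrsim2$.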
For $m_{\rm i}/m_{\rm e}=1836$, the irreversible heating of protons at low $\beta_{\rm i}$ is a factor of $\sim 5-7$ greater than that of electrons; in the $m_{\rm i}/m_{\rm e}=25$ case, the ratio of proton-to-electron irreversible heating was smaller, $\sim 2 - 3$. As in the reduced mass ratio case, the simulation with $\beta_{\rm i}=2$ and $T_{\rm e}/T_{\rm i}=1$ shows a sharp increase in irreversible electron heating as compared to the decreasing trend observed at lower $\beta_{\rm i}$ (Fig. \ref{fig:mttt1836} (c)), and the heating efficiencies of the two species become comparable. As we argued in Section \ref{ssec:sigdep}, the electron and proton heating efficiencies are about equal if and only if the downstream scale separation is of order unity. Even for the highest values of $\beta_{\rm i}$ that we can explore ($\approx 3.9$ for $T_{\rm e}/T_{\rm i}=0.1,$ and $\approx 4.6$ for $T_{\rm e}/T_{\rm i}=0.3$), this condition is not realized for smaller temperature ratios ($(c/\omega_{\rm pi})/(c/\omega_{\rm pe}) \gtrsim 3.2$ for $T_{\rm e}/T_{\rm i}=0.1,$ and $(c/\omega_{\rm pi})/(c/\omega_{\rm pe}) \gtrsim 1.8$ for $T_{\rm e}/T_{\rm i}=0.3$), which explains why, despite the upturn in electron heating efficiency at high $\beta_{\rm i}$ (Fig. \ref{fig:mttt1836} (c)), the ratio of irreversible proton to electron heating for $T_{\rm e}/T_{\rm i}=0.1$ and 0.3 remains larger than unity. \section{Summary and discussion} \label{sec:conclusion} In this work, we have presented the results of a series of 2D fully-kinetic PIC simulations to explore electron and proton heating by magnetic reconnection in the trans-relativistic regime. Here, protons are typically non-relativistic, yet electrons can be moderately relativistic or even ultra-relativistic. We vary the flow magnetization $\sigma_w$, the proton $\beta_{\rm i}$ and the electron-to-proton temperature ratio $T_{\rm e}/T_{\rm i}$, extending our results up to the physical mass ratio $m_{\rm i}/m_{\rm e}=1836$.
We show that heating in the high-$\beta_{\rm i}$ regime is primarily dominated by adiabatic compression, while for low $\beta_{\rm i}$ the heating is genuine, in the sense that it is associated with an increase in entropy. At our fiducial $\sigma_{ w}= 0.1$, we find that for $\beta_{\rm i}\lesssim 1$ the irreversible heating efficiency is independent of $T_{\rm e}/T_{\rm i}$ (which we vary from $0.1$ up to $1$), for both electrons and protons. For $T_{\rm e}/T_{\rm i}=1$, the fraction of inflowing magnetic energy converted to electron irreversible heating at realistic mass ratios decreases from $\sim 1.6\%$ down to $\sim 0.2\%$ as $\beta_{\rm i}$ ranges from $\beta_{\rm i}\sim 10^{-2}$ up to $\beta_{\rm i}\sim 0.5$, but then it increases up to $\sim 3\%$ as $\beta_{\rm i}$ approaches $\sim2$. Protons are heated much more efficiently than electrons at low and moderate $\beta_{\rm i}$ (by a factor of $\sim7$), whereas the electron and proton heating efficiencies become comparable at $\beta_{\rm i}\sim 2$ if $T_{\rm e}/T_{\rm i}=1$. We find that comparable heating efficiencies between electrons and protons are achieved when the scale separation between the two species in the reconnection exhaust approaches unity, so that the electron-proton plasma effectively resembles an electron-positron fluid. This occurs at high $\beta_{\rm i}$ for low magnetizations, or regardless of $\beta_{\rm i}$ at high magnetizations (i.e., in the regime $\sigma_w\gg1$ of ultra-relativistic reconnection). We propose a fitting formula (Eq.~\ref{eq:fit}) that captures the magnetization and plasma-$\beta_{\rm i}$ dependence of the electron-to-overall heating ratio over the whole range of $\sigma_{w}$ and $\beta_{\rm i}$ explored in this work. The low- and high-$\beta_{\rm i}$ cases differ with respect to secondary island formation. The formation of secondary islands is suppressed at high $\beta_{\rm i}$, which leads to a homogeneous reconnection outflow. 
Secondary islands occur frequently at low $\beta_{\rm i}$ and high magnetizations. We also measure the inflow speed for our fiducial magnetization $\sigma_w=0.1$, finding that it decreases from $v_{\rm in}/v_{\rm A} \approx 0.08$ down to $0.04$ as $\beta_{\rm i}$ ranges from $\beta_{\rm i}\sim 10^{-2}$ up to $\beta_{\rm i}\sim 2$ (here, $v_{\rm A}$ is the Alfv\'en speed). Similarly, the outflow speed saturates at the Alfv\'{e}n velocity for low $\beta_{\rm i}$, but it decreases with increasing $\beta_{\rm i}$ down to $v_{\rm out}/v_{\rm A}\approx 0.7$ at $\beta_{\rm i}\sim2.$ The inflow (outflow, respectively) speed is independent of $T_{\rm e}/T_{\rm i}$ at low $\beta_{\rm i}$, with only a minor tendency for lower (higher, respectively) speeds at larger $T_{\rm e}/T_{\rm i}$ in the high-$\beta_{\rm i}$ regime. This investigation provides important insights into the physics of low-luminosity accretion flows, such as the accretion disk of Sgr A$^{*}.$ Collisionless accretion flows are often assumed to be two-temperature, and our results indeed show that in the trans-relativistic regime relevant to hot accretion flows and accretion disk coronae, magnetic reconnection preferentially heats protons more than electrons. Our results --- and in particular, our fitting formula in Eq.~\ref{eq:fit} --- can be used to provide general relativistic MHD simulations of accretion flows with the sub-grid physics of energy partition between electrons and protons \citep{Ressler2015,Ressler2017,Sadowski2017}. This ingredient is of fundamental importance in producing emission models that can be compared with the forthcoming observations by the Event Horizon Telescope \citep{Doeleman2008}. To conclude, we note a few lines of investigation that have not been considered in the current work. First, we limited our focus to the case of symmetric, anti-parallel reconnection. The more general case of guide-field reconnection will be a topic of future investigation. 
Second, while we have provided a quantitative description of energy partition between electrons and protons, we have not addressed the question of the underlying heating mechanism. A detailed study of the heating mechanism is deferred to future work. Lastly, we have focused on thermal heating, as opposed to nonthermal acceleration. The dependence of nonthermal acceleration efficiency on magnetization is the focus of \citet{Werner2016}, though the dependence on $\beta_{\rm i}$ and $T_{\rm e}/T_{\rm i}$ remains unexplored. \section*{Acknowledgements} This work is supported in part by NASA via the TCAN award grant NNX14AB47G and by the Black Hole Initiative at Harvard University, which is supported by a grant from the Templeton Foundation. LS acknowledges support from DoE DE-SC0016542, NASA Fermi NNX16AR75G, NSF ACI-1657507, and NSF AST-1716567. The simulations were performed on Habanero at Columbia, on the BHI cluster at the Black Hole Initiative, and on NASA High-End Computing (HEC) resources. The authors acknowledge computational support from NSF via XSEDE resources (grants TG-AST80026N and TG-AST120010). \clearpage
\section{Introduction} \label{intro} Total solar irradiance changes by about $0.1\%$ between solar activity maximum and minimum. Accurate measurements of this quantity are only available since 1978 \citep{froehlich06} and do not provide information on longer-term secular trends. In order to reliably evaluate the Sun's role in recent global climate change, longer time series are, however, needed. They can only be assessed with the help of suitable models. The most successful models are those attributing irradiance variations on timescales longer than a day to the evolution of the Sun's surface magnetic field \citep{solanki05,krivova05}. Such models explain more than 90$\%$ of all observed changes in the total and spectral irradiance at these timescales \citep{krivova03,wenzler04,wenzler05,wenzler06,krivova06}. The continuously evolving distribution of the solar magnetic field on the surface is described in these models by recourse to magnetograms, which are only available since 1974. For a longer term reconstruction, another proxy of solar magnetic activity has to be employed. The available historical proxies of solar activity, such as the Group and Zurich sunspot numbers, sunspot, facular or Ca II plage areas mainly describe the evolution of the larger magnetic features, such as sunspots or faculae, but do not provide any direct information about the weaker features. Therefore, whereas the reconstruction of the cyclic component of the irradiance variation is typically not a problem, evaluation of the amount of the secular change is not straightforward \citep[see][and references therein]{solanki04a}. \citet{solanki00,solanki02} proposed a simple physical mechanism which can lead to such a secular trend in the magnetic flux based on the overlap of consecutive activity cycles. 
Here, we use their model to reconstruct the magnetic flux of the Sun back to 1610 from the Group sunspot number \citep{Hoyt98} which is employed to reconstruct total solar irradiance for the same period. \section{Approach} \label{approach} \subsection{Photospheric Magnetic Flux} \label{magflux} The basic assumption of our model is that the irradiance variations are caused entirely by the evolution of the magnetic features on the solar surface. As in the model of \citet{solanki02}, magnetic features on the Sun's surface are divided into active regions (AR) and ephemeral regions (ER). The flux emergence rate in AR, $\phi_{act}$, can be estimated from the Group sunspot number since it serves as a good proxy for the fresh flux threading the solar surface. The time evolution of the flux emerging in ephemeral regions, $\phi_{eph}$, is more uncertain, however. Observations suggest that the emergence rate in ER is related to that of AR, but the exact shape of this relationship is not yet well established. They show that ER associated with the new cycle start emerging at the solar surface before the corresponding AR cycle begins and while magnetic features from the previous cycle are still appearing \citep{harvey92,harvey93}. Thus, the ER cycle length is extended with respect to that of AR. Therefore we prescribe a sine approximation for the shape of the ER cycle, with its length being somewhat stretched in time with respect to that of the corresponding AR cycle and with its amplitude being proportional to the amplitude of the AR cycle \citep[see][for details]{krivova07}. Both active and ephemeral regions contribute to the open flux ($\phi_{open}$), which is dragged by the coronal gas and reaches far into the heliosphere. Since this flux is mainly unipolar, it decays slowly and lives much longer on the solar surface (up to several years) than the AR and ER flux. 
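The prescription described above can be sketched as follows. The stretch factor and the proportionality constant below are illustrative placeholders only; the values actually adopted are those of \citet{krivova07}:

```python
import math

def ar_cycle(t, t_start, length, amplitude):
    """Sine-shaped active-region (AR) flux emergence cycle,
    zero outside [t_start, t_start + length]."""
    x = (t - t_start) / length
    return amplitude * math.sin(math.pi * x) if 0.0 <= x <= 1.0 else 0.0

def er_cycle(t, t_start, length, amplitude, stretch=2.0, scale=1.0):
    """Ephemeral-region (ER) cycle: same sine shape, stretched in time
    by `stretch` and centred on the AR cycle, with amplitude
    proportional to the AR amplitude.  `stretch` and `scale` are
    placeholders, not the calibrated values."""
    length_er = stretch * length
    t_start_er = t_start - 0.5 * (length_er - length)
    return ar_cycle(t, t_start_er, length_er, scale * amplitude)

# ER flux of a given cycle starts emerging before its AR cycle begins:
print(ar_cycle(-1.0, 0.0, 11.0, 1.0))      # 0.0 (AR cycle not yet started)
print(er_cycle(-1.0, 0.0, 11.0, 1.0) > 0)  # True (ER flux already emerging)
```

With these definitions, consecutive ER cycles overlap in time even when the AR cycles do not, which is the ingredient that produces a non-zero background flux at activity minima.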
The extended length of the ER cycles and the long lifetime of the open flux lead to an overlap between consecutive cycles such that some background magnetic flux is present on the solar surface even at activity minima. The amount of this flux changes with time due to variations in the length and the amplitude of the magnetic activity cycle. This mechanism thus provides a physical explanation for a secular change in the total photospheric magnetic flux. \subsection{Variations of the total solar irradiance} \label{irra} Following \citet{krivova03,wenzler05}, the solar photosphere is divided into 5 atmospheric components: the quiet Sun, sunspot umbrae and penumbrae, faculae and the network, denoted with subindices \textit{q, u, p, f} and \textit{n}, respectively. The model consists of two main ingredients: one which is temporally invariant and another that introduces a variation with time. The time-independent brightness of each component $F_{q,u,p,f,n}(\lambda)$, with $\lambda$ being the wavelength, is calculated using the ATLAS9 code of Kurucz from plane-parallel model atmospheres \citep[see][for a description of the models]{unruh99}. Faculae and the network are described by the same model atmosphere. All fluxes obtained in this way depend only on the wavelength. On the other hand, variability in time is due to the changing surface distribution of the magnetic components. To describe this, we need to determine which part of the solar surface is covered by each component at a given time, i.e. the corresponding filling factors, $\alpha$. In the case of sunspots, they are extracted directly from the sunspot area time series available since 1874 \citep[see][]{balmaceda05}. Before that time, sunspot areas are extrapolated by comparing them with the sunspot number. To estimate the filling factors for umbrae and penumbrae separately, we use the umbral-to-total spot area ratio $\alpha_u/(\alpha_u+\alpha_p)=0.2$ as found by \citet{wenzler06}.
The filling factors of the other components are obtained from the reconstructed solar magnetic flux. The magnetic flux in faculae, $\phi_f$, can be obtained from $\phi_f=\phi_{act}-\phi_u - \phi_p$, where $\phi_{act}$ is the flux in AR and $\phi_{u},\phi_p$ represent the flux in sunspot umbrae and penumbrae, respectively. Finally, the evolution of the network magnetic flux, $\phi_{n}$, which is responsible for the secular change, is given by the sum of the flux from ER, $\phi_{eph}$, and the open flux, $\phi_{open}$: $\phi_{n}=\phi_{eph}+\phi_{open}$. The final step is the conversion of the magnetic flux in faculae and the network into the corresponding filling factors. For this, we follow the conversion scheme described by \citet{fligge00} and \citet{krivova03}. The facular and network filling factors increase linearly from 0 at $\phi=0$ to 1 at $\phi_{sat}$. For magnetic flux larger than $\phi_{sat}$, the filling factor remains unity. Following \citet{krivova03,wenzler04,wenzler05,wenzler06} we use the value $\phi_{sat,f} = 300~\rm{G}$ for faculae while for the network we employ $\phi_{sat,n}= 500~\rm{G}$. This somewhat enhanced saturation level for the network takes into account the fact that a significant amount of weak magnetic flux in the network is lost due to the insufficient spatial resolution and relatively high noise level of the magnetograms employed. Finally, the solar radiative flux at a given wavelength, $\lambda$, can be obtained by combining the fluxes from the 5 components: \begin{eqnarray*} \nonumber F(\lambda,t) & = & \alpha_q(t)F_q(\lambda) +\\ & + & \alpha_u(t)F_u(\lambda) +\\ & + & \alpha_p(t)F_p(\lambda) +\\ & + & (\alpha_f(t) + \alpha_n(t)) \cdot F_f(\lambda). \end{eqnarray*} Here, $\alpha_q(t)=1-\alpha_u(t)-\alpha_p(t)-\alpha_n(t)-\alpha_f(t)$. The total solar irradiance is then obtained by integrating $F(\lambda,t)$ over all wavelengths.
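The conversion from magnetic flux to filling factors and the subsequent flux summation can be sketched as follows. The saturation values are those quoted above; the component brightnesses $F_{q,u,p,f}$, computed from ATLAS9 model atmospheres in the real model, are replaced here by placeholder scalars:

```python
def filling_factor(phi, phi_sat):
    """Linear ramp: 0 at phi = 0, saturating at 1 for phi >= phi_sat."""
    return min(phi / phi_sat, 1.0)

PHI_SAT_F = 300.0  # facular saturation flux [G] (value quoted in the text)
PHI_SAT_N = 500.0  # network saturation flux [G] (value quoted in the text)

def radiative_flux(alpha_u, alpha_p, phi_f, phi_n, F_q, F_u, F_p, F_f):
    """Disc-integrated flux at one wavelength from the 5-component model;
    F_* are placeholder brightnesses (ATLAS9 fluxes in the real model).
    Faculae and network share the same brightness F_f, as in the text."""
    alpha_f = filling_factor(phi_f, PHI_SAT_F)
    alpha_n = filling_factor(phi_n, PHI_SAT_N)
    alpha_q = 1.0 - alpha_u - alpha_p - alpha_f - alpha_n
    return (alpha_q * F_q + alpha_u * F_u + alpha_p * F_p
            + (alpha_f + alpha_n) * F_f)
```

For instance, `filling_factor(150.0, 300.0)` gives a facular filling factor of 0.5, and with all magnetic filling factors equal to zero the returned flux reduces to the quiet-Sun value, as expected.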
\section{Results} \label{results} The reconstructed total magnetic flux for individual Carrington rotations is compared in Fig.~1a to data obtained at different observatories (WSO, NSO KP, and MWO). The model reproduces both the amplitude and the length of the solar cycle in the observed magnetic flux. The evolution of the magnetic flux in active and ephemeral regions, as well as of the open and total magnetic flux, is shown in Fig.~1b. The effect of the spatial resolution on the detection of small ER noted by \citet{krivova04} is taken into account by considering the quantity $\phi_{tot}=\phi_{act}+0.4 \cdot \phi_{eph}+\phi_{open}$. Since ER cycles overlap, the flux in these regions varies only by a factor of 2 over a cycle, which is much smaller than the variation in the AR cycles. The ER flux is comparable to that of AR during activity maxima, while it clearly dominates during minima, in agreement with \citet{harvey93} and \citet{krivova04}. Our model also reproduces the secular increase in the open flux during the last century found by \citet{lockwood99}. This is illustrated in Fig.~2, where the modelled open flux is compared to the reconstruction based on the aa-index by these authors. Figure 3a shows the comparison between the PMOD composite of TSI \citep[][grey solid line]{froehlich06} and the solar irradiance reconstructed from the Group sunspot number (dotted line). In order to facilitate the comparison, we plot 3-month running means. Note, however, that individual dips due to sunspots are also reproduced if the daily time series is considered. Both the amplitude and the phase of the variations are reproduced reasonably well. Of course, the magnetic flux model we use is too coarse to reproduce all details of the irradiance variations. For example, the sine approximation of the cycle shape does not reproduce the observed double-peak structure of cycle 23. The daily values of the reconstructed solar irradiance since 1610 are shown in Fig. 3b.
The 11-yr running mean is indicated by the grey solid line. As mentioned above, for the period prior to 1874, when no sunspot area measurements are available, sunspot areas are estimated by extrapolating the sunspot number, which extends back to 1611. For the period prior to 1753, only monthly values of the Group sunspot number can be used. After 1753 daily values are available, although prior to approximately 1850 there are many gaps in the data; sampling becomes more regular after 1818. Our model predicts an increase in the solar irradiance since the end of the Maunder Minimum (i.e., 1700) until the present (averaged over about 30 years) of about $0.095\%$, or 1.3~\rm{Wm$^{-2}$}. \section{Conclusion} We have reconstructed the total solar irradiance back to 1610. The cyclic variation of ER was assumed to be related to the properties of the corresponding AR cycle, whose variation can be estimated from the Group sunspot number \citep{solanki02}. The secular change in the total magnetic flux of the Sun and, therefore, in the irradiance is caused by the overlap of consecutive ER cycles. The predicted secular change since 1700 is about 1.3~\rm{Wm$^{-2}$}. This value lies within the range suggested by other recent reconstructions of solar irradiance \citep{foster04,wang05}, but is significantly lower than the values of 2 to 16~\rm{Wm$^{-2}$} obtained in earlier investigations based on stellar data \citep[e.g.,][]{mendoza97,lean00a}. However, the stellar evidence for such a change has recently been criticized \citep{hall04,wright04,giampapa05}, and the magnitude of the increase in TSI obtained using these results may have been overestimated. \section{Acknowledgments} This work was supported by the \textit{Deutsche Forschungsgemeinschaft, DFG} project number SO 711/1-1. \bibliographystyle{elsart-harv}
\section{Introduction} For correctly modeling galaxy evolution, the availability of accurate redshifts for both normal galaxies and active galactic nuclei (AGN) is crucial. Although redshifts measured via spectroscopic observations are very reliable, they are time-consuming. Long exposure times are required for the faint sources typically found in deep field observations, and the relatively low sky density of AGN means that it is difficult to obtain large samples. Furthermore, spectroscopic observations have observational limits such as the redshift range available to optical spectrographs and the telluric OH lines for observations with near-infrared (NIR) spectrographs from the ground. This restricts the availability of spectroscopic redshifts (spec-z), in particular for deep pencil-beam surveys. About 65\% of sources in the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS;][]{gro11,koe11} in the GOODS-S region are fainter than $H = 25$, beyond any reasonable spectroscopic limit. Similarly, only about 60\% of the X-ray sources in the 4~Ms Chandra Deep Field-South (4Ms-CDFS) survey have reliable spec-z \citep{xue11}. Therefore, a large number of accurate photometric redshifts (photo-z) are needed, particularly at the faint and high-redshift ends of the source distribution. For normal galaxies, previous work has achieved photo-z accuracy (defined as $1.48\times \mathrm{median}(|\Delta z|/(1+z_{s}))$) of $\sim 0.01$ by using well-verified spectral energy distribution (SED) templates for galaxies in many fields \citep{ilb09,car10}. Within the Extended Chandra Deep Field-South (ECDFS), photo-z for many samples are available in the literature (e.g., \citealt{zhe04,gra06,wuy08,car10,luo10,dah13}). Although the accuracy reported in each paper is similar, discrepancies emerge when comparing photo-z for objects without spectroscopic information, especially at high redshift and for faint sources.
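The accuracy statistic quoted above, the normalized median absolute deviation of $\Delta z = z_{\rm phot}-z_{\rm spec}$, can be written as a short sketch (illustrative only; the function name is ours):

```python
# Sketch of the photo-z accuracy statistic:
# sigma = 1.48 * median(|z_phot - z_spec| / (1 + z_spec)).
from statistics import median

def photoz_accuracy(z_phot, z_spec):
    """Normalized median absolute deviation over paired redshift lists."""
    scatter = [abs(zp - zs) / (1.0 + zs) for zp, zs in zip(z_phot, z_spec)]
    return 1.48 * median(scatter)
```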
Deep NIR observations are necessary to obtain reliable redshifts at $z> 1.5$, where the prominent 4000~{\AA} break shifts to NIR wavelengths. Photo-z accuracy also depends on the number and resolution of the wavelength bands available, as shown by \citet{ben09}. One of the fields with the greatest number of photometric bands is the Great Observatories Origins Deep Survey-South \citep[GOODS-S;][]{gia04}, which has been observed repeatedly as new facilities have become available. The GOODS-S region currently occupies a unique niche, as homogeneous and deep data (including the exquisite X-ray coverage with {\it Chandra}) are available. In addition to intermediate-band photometry from {\it Subaru} \citep{car10} and deep {\it Spitzer}/IRAC data \citep{dam11,ash13}, {\it HST}/WFC3 NIR data from the CANDELS survey and $J$ and $K_{S}$ bands from the Taiwan ECDFS Near-Infrared Survey \citep[TENIS;][]{hsi12} are now available. The availability of these new data will improve the already high accuracy of photo-z for galaxies. Even with the best data, photometric redshifts for AGNs remain challenging \citep{sal09, sal11}. Photo-z errors for AGNs can have a significant impact on galaxy/AGN coevolution studies. For example, \citet{ros13a} found that at high redshifts the AGNs tend to have bluer colors than inactive galaxies, implying younger stellar populations and higher specific star formation rates in the AGN hosts. This result, as \citeauthor{ros13a} mentioned, may be biased by spectroscopic selection effects and by photo-z errors leading to a bluer host color. Low accuracy of AGN photo-z also affects studies of the evolution of the X-ray luminosity function (XLF) of AGNs. \citet{air10} argued that luminosity-dependent density evolution with a flattening faint-end slope of the XLF at $z \geq 1.2$ may result from catastrophic photo-z failures caused by observational limitations and improper templates used for photo-z computation.
For all these reasons, it is important to understand how to improve AGN photo-z accuracy, especially for the faintest and highest-redshift AGNs. The situation for AGNs is further complicated by the need for an association with multi-wavelength data before photo-z can be calculated. This makes the accuracy of the positions of the X-ray sources and the method and data used for the associations of crucial importance. Uncertain positions or different depths and wavelengths covered by the data may yield different counterparts, and often multiple potential counterparts exist. A simple match in coordinates often fails to yield a reliable counterpart to any given X-ray source. Several works have instead used the likelihood ratio method (e.g., \citealt{sut92, bru07, lai09, luo10, xue11, civ12}), which relies on homogeneous coverage in a given visible or infrared band. Most works repeat the association for several different reference bands and finally choose a counterpart by comparing the results. The past decade has witnessed important developments in normal galaxy photo-z, both by SED fitting and by machine-learning techniques, and some of these improvements can be directly used for AGNs. For example, improvement of template-fitting photo-z by adding emission lines to the templates has been demonstrated by \citet{gab04,gab06} and \citet{ilb06}. Intermediate- and narrow-band (IB, NB) photometry is valuable to pinpoint emission lines in the SEDs \citep{ilb09,sal09,car10,mat12}, as simulations \citep[e.g.,][]{ben09} have predicted. Additional improvements for AGNs are to take into account variability and the relation of X-ray intensity to optical/infrared emission \citep{sal09, sal11}. The main goal of this paper is to release homogeneously computed photo-z for both normal galaxies and X-ray-detected AGNs in the GOODS-S, CDFS, and ECDFS and to provide a new X-ray source list compiled from the literature along with new optical/NIR/MIR associations.
The paper is organized as follows: Section~\ref{datasets} introduces the photometric and spectroscopic data sets used for photo-z computation and analysis. Section~\ref{match} associates X-ray sources with optical/NIR/MIR counterparts using a new Bayesian method. Two different X-ray catalogs for the CDFS field are available, and we discuss the differences and the implications for the association of the counterparts. Section~\ref{galphz} presents the photo-z results for normal galaxies, showing the improvement by using CANDELS photometry and visible-wavelength IB filters. Section~\ref{x-phz} presents the photo-z results for X-ray sources, and Section~\ref{discussion} discusses key factors affecting the photo-z results. Section~\ref{catalog} gives details of the released catalogs, which include redshift probability distribution functions. Finally, Section~\ref{summary} summarizes the work. Throughout this paper, we adopt the AB magnitude system and assume a cosmology with $H_{0}=70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\Lambda}=0.7$, and $\Omega _{M}=0.3$ \citep{spe03}. \setcounter{footnote}{0} \section{The Data Sets } \label{datasets} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{zone.png} \caption{Major areas defined in ECDFS. Background is the negative $J+K_s$ image from TENIS. The inner dashed line encloses the CANDELS/GOODS-S area (``Area~1''), the solid line encloses the deep X-ray coverage (CDFS, ``Area~2''), and the outer dashed line (ECDFS) shows the MUSYC \citep{car10} coverage (``Area~3'') that defines the full area used in this paper. \label{zone} } \end{figure} The area centered on the GOODS-S field has been observed repeatedly with a large variety of facilities and instruments. As a result, numerous datasets with different bands and depths are available depending on the exact location. 
Reliable X-ray-to-optical associations and photometric redshifts can be obtained only when the data are homogeneous, and for this reason we split the area into subregions where the data are uniform. Three main regions share the same sets of data: Area~1 ($\sim$176~arcmin$^2$) is the region covered by CANDELS and GOODS-S, Area~2 ($\sim$ 290~arcmin$^2$) is the outer CDFS region surrounding CANDELS/GOODS-S, and Area~3 ($\sim$ 435~arcmin$^2$) is the ECDFS region outside the CDFS. Figure~\ref{zone} shows the three regions. \subsection{Photometric data from UV to MIR} \label{optdata} Altogether the ECDFS has been covered by 50 bands from ultraviolet (UV) to mid-infrared (MIR) as listed in Table~\ref{photometry}. Table~\ref{sets} summarizes the catalogs used in each area. \begin{table*}\footnotesize \begin{center} \caption{Photometric Data\label{photometry}} \begin{tabular}{lllllc} \toprule[1.5pt] Filter & $\lambda_{\mathrm{eff}}$ & FWHM& $5\sigma$ Limiting Depth& Instrument &Area\\ &(\AA{})&(\AA{}) &(AB mag) &Telescope& \\ \midrule $U$-CTIO\tablenotemark{a} &3734 &387 &26.63 &Blanco/Mosaic-II &1 \\ $U$-VIMOS\tablenotemark{a} &3722 &297&27.97& VLT/VIMOS &1\\ F435W\tablenotemark{a} &4317 &920 &28.95/30.55\tablenotemark{f}&HST/ACS &1\\ F606W\tablenotemark{a} &5918 &2324&29.35/31.05\tablenotemark{f}&HST/ACS &1\\ F775W\tablenotemark{a} &7693&1511&28.55/30.85\tablenotemark{f}&HST/ACS &1\\ F814W\tablenotemark{a} &8047&1826&28.84&HST/ACS &1\\ F850LP\tablenotemark{a}&9055&1236&28.55/30.25\tablenotemark{f}&HST/ACS &1\\ F098M\tablenotemark{a} &9851&1696&28.77&HST/WFC3 &1\\ F105W\tablenotemark{a} &10550&2916&27.45/28.45/29.45\tablenotemark{g}&HST/WFC3 &1\\ F125W\tablenotemark{a} &12486&3005&27.66/28.34/29.78\tablenotemark{g}&HST/WFC3 &1\\ F140W\tablenotemark{a} &13635&3947&26.89/29.84\tablenotemark{h}&HST/WFC3 &1\\ F160W\tablenotemark{a} &15370&2874&27.36/28.16/29.74\tablenotemark{g}&HST/WFC3 &1\\ $Ks$-ISAAC\tablenotemark{a} &21605&2746&25.09& VLT/ISAAC &1\\ 
$Ks$-HAWKI\tablenotemark{a} &21463&3250&26.45& VLT/HAWK-I &1\\ $3.6 \mu \mathrm{m}$-SEDS\tablenotemark{a} &35508&7432&26.52& Spitzer/IRAC &1\\ $4.5 \mu \mathrm{m}$-SEDS\tablenotemark{a} &44960&10097&26.25& Spitzer/IRAC&1\\ $5.8 \mu \mathrm{m}$-GOODS\tablenotemark{a} &57245&13912&23.75& Spitzer/IRAC&1\\ $8.0 \mu \mathrm{m}$-GOODS\tablenotemark{a} &78840&28312&23.72& Spitzer/IRAC&1\\ $3.6 \mu \mathrm{m}$-SIMPLE\tablenotemark{b} &35508&7432&23.89& Spitzer/IRAC &2, 3\\ $4.5 \mu \mathrm{m}$-SIMPLE\tablenotemark{b} &44960&10097&23.75& Spitzer/IRAC& 2, 3\\ $5.8 \mu \mathrm{m}$-SIMPLE\tablenotemark{b} &57245&13912&22.42& Spitzer/IRAC& 2, 3\\ $8.0 \mu \mathrm{m}$-SIMPLE\tablenotemark{b} &78840&28312&22.50& Spitzer/IRAC& 2, 3\\ $U38$\tablenotemark{b} &3706 &357&25.33&ESO MPG/WFI &2, 3\\ $U$\tablenotemark{b} &3528 &625&25.86& ESO MPG/WFI&2, 3\\ $B$\tablenotemark{b} &4554 &915&26.45& ESO MPG/WFI&2, 3\\ $V$\tablenotemark{b} &5343 &900&26.27& ESO MPG/WFI&2, 3\\ $R$\tablenotemark{b} &6411 &1602&26.37& ESO MPG/WFI&2, 3\\ $I$\tablenotemark{b} &8554 &1504&24.30& ESO MPG/WFI&2, 3\\ $z$\tablenotemark{b} &8989 &1285&23.69& Blanco/Mosaic-II&2, 3\\ $J$\tablenotemark{b} &12395 &1620&22.44& Blanco/ISPI&2, 3\\ $H$\tablenotemark{b} &16154 &2950&22.46& ESO NTT/SofI&2, 3\\ $K$\tablenotemark{b} &21142 &3312&21.98& Blanco/ISPI&2, 3\\ $J$\tablenotemark{c} &12481 &1588&24.50&CFHT/WIRCam&2, 3 \\ $Ks$\tablenotemark{c} &21338 &3270&23.90&CFHT/WIRCam&2, 3\\ FUV\tablenotemark{e} &1543 &228 &25.69 &GALEX &1, 2, 3\\ NUV\tablenotemark{e} &2278 &796 &25.99 &GALEX &1, 2, 3\\ IA427\tablenotemark{b,d} &4253 &210&25.01&Subaru&1, 2, 3\\ IA445\tablenotemark{b,d} &4445 &204&25.18&Subaru&1, 2, 3\\ IA464\tablenotemark{b,d} &4631 &216&24.38&Subaru&1, 2, 3\\ IA484\tablenotemark{b,d} &4843 &230&26.22&Subaru&1, 2, 3\\ IA505\tablenotemark{b,d} &5059 &234&25.29&Subaru&1, 2, 3\\ IA527\tablenotemark{b,d} &5256 &243&26.18&Subaru&1, 2, 3\\ IA550\tablenotemark{b,d} &5492 &276&25.45&Subaru&1, 2, 3\\ IA574\tablenotemark{b,d}
&5760 &276&25.16&Subaru&1, 2, 3\\ IA598\tablenotemark{b,d} &6003 &297&26.05&Subaru&1, 2, 3\\ IA624\tablenotemark{b,d} &6227 &300&25.91&Subaru&1, 2, 3\\ IA651\tablenotemark{b,d} &6491 &324&26.14&Subaru&1, 2, 3\\ IA679\tablenotemark{b,d} &6778 &339&26.02&Subaru&1, 2, 3\\ IA709\tablenotemark{b,d} &7070 &321&24.52&Subaru&1, 2, 3\\ IA738\tablenotemark{b,d} &7356 &324&25.93&Subaru&1, 2, 3\\ IA768\tablenotemark{b,d} &7676 &366&24.92&Subaru&1, 2, 3\\ IA797\tablenotemark{b,d} &7962 &354&24.69&Subaru&1, 2, 3\\ IA827\tablenotemark{b,d} &8243 &339&23.60&Subaru&1, 2, 3\\ IA856\tablenotemark{b,d} &8562 &324&24.41&Subaru&1, 2, 3\\ \bottomrule[1.5pt] \end{tabular} \tablenotetext{1}{CANDELS-TFIT catalog \citep{guo13}} \tablenotetext{2}{MUSYC catalog \citep{car10}} \tablenotetext{3}{TENIS catalog \citep{hsi12}} \tablenotetext{4}{IB-TFIT catalog (Donley et~al.\ in preparation)} \tablenotetext{5}{GALEX DR6/7} \tablenotetext{6}{Measurements from two regions: GOODS-S and HUDF09. See \citet{guo13} for details.} \tablenotetext{7}{Measurements from three regions: CANDELS wide, deep, and HUDF09. See \citet{guo13} for details.} \tablenotetext{8}{Measurements from two regions: 3D-HST and HUDF12. This is an updated version of \citet{guo13}.} \end{center} \end{table*} {\bf 1. Area 1}: In this region, we primarily used the CANDELS-TFIT multi-wavelength catalog of \citet[][hereafter G13]{guo13}, which covers the CANDELS GOODS-S area with 18 broad-band filters mostly from space observatories. The photometry was based on template-fitting \citep[TFIT;][]{lai07}, using the high-resolution WFC3/$H$-band image to detect sources and define apertures, which were then used for photometry in lower-resolution images. TFIT was also applied to the MIR data from the Spitzer Extended Deep Survey \citep[SEDS;][]{ash13}. This deblending yields more accurate photo-z and also increases the probability of making correct X-ray to IR associations (see Section~\ref{match}).
In addition, the Area~1 data include 18 IBs at optical wavelengths provided by the MUSYC team\footnote{Multi-wavelength Survey by Yale-Chile. The reduced images are available at \url{http://www.astro.yale.edu/MUSYC/} } \citep{car10}. CANDELS collaborators (Donley et~al.\ in preparation) have produced an IB-TFIT catalog with the same parameters used by G13. Despite being up to 2 magnitudes shallower than the rest of the optical data, the IB data are useful to identify emission lines, which can modify the choice of the template best fitting the data and thus the photo-z (see Section~\ref{impact_em}). To these 36 bands we also added the near-UV (NUV) and far-UV (FUV) data from GALEX Data Release 6 and 7. The association between the GALEX data and the WFC3/$H$-band catalog was done via positional matching within a radius of 1\arcsec. About 5\% of all sources and $\sim$25\% of X-ray-detected sources have UV counterparts. The combined data, which we refer to as ``$\rm{TFIT}_{\rm{CANDELS+IB}}$'', have 34930 sources with up to 38 bands for computing photo-z.\\ {\bf 2. Areas 2+3}: These areas differ in the depth of the X-ray coverage (Section~\ref{xdata}) but otherwise share the same data sets. For the CDFS and ECDFS surrounding Area~1, we merged the following photometric catalogs via coordinate cross-matching, allowing a maximum separation of 1\arcsec: (1) the GALEX catalog (as above), (2) the original MUSYC catalog \citep{car10}, and (3) the $J$- and $K_s$-band data from the Taiwan ECDFS Near-Infrared Survey\footnote{The TENIS data are available at \url{http://www.asiaa.sinica.edu.tw/~bchsieh/TENIS/About.html} } \citep[TENIS;][]{hsi12}. Although TENIS is no deeper than the existing NIR data, the TENIS data are more homogeneous over the entire field and have slightly different transmission curves, increasing the wavelength coverage. The MIR data for Areas~2 and~3 came from the Spitzer IRAC/MUSYC Public Legacy in ECDFS \citep[SIMPLE;][]{dam11}.
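A positional match of the kind used here (nearest neighbour within a 1\arcsec\ radius) can be sketched as follows; this is a naive illustration with our own function names, whereas production pipelines use spatially indexed matching:

```python
# Illustrative sketch of a nearest-neighbour coordinate match within a
# fixed radius (arcsec). Coordinates are in decimal degrees.
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation via the haversine formula, in arcsec."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h))) * 3600.0

def match_within(ra, dec, catalog, radius=1.0):
    """Return (index, separation) of the closest catalog entry within
    `radius` arcsec, or None. `catalog` is a list of (ra, dec) tuples."""
    best = None
    for i, (cra, cdec) in enumerate(catalog):
        sep = angular_sep_arcsec(ra, dec, cra, cdec)
        if sep <= radius and (best is None or sep < best[1]):
            best = (i, sep)
    return best
```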
The SIMPLE data are shallower than the SEDS data available in Area~1. Table~\ref{photometry} lists the data sets used; we refer to this dataset as ``MUSYC+TENIS'', and it contains 70049 sources. \subsection{X-ray data}\label{xdata} The X-ray catalogs to cross-match were obtained from the {\it Chandra} 4Ms-CDFS observations covering Areas~1+2 and from the 250ks-ECDFS observations covering Area~3. Two independent groups \citep{xue11, ran13} have provided source catalogs for the 4Ms-CDFS using different methods for data reduction and source detection. Similarly for Area~3, both \citet{leh05} and \citet{vir06} have released X-ray source catalogs for the 250ks-ECDFS survey. We have cross-matched the X-ray sources from both catalogs in each area.\\ \noindent {\bf For Areas 1+2 we used}: \hangindent=0.9cm a. The 4Ms-CDFS source catalog of \cite{xue11} (hereafter X11) with 740 point-like X-ray sources. The sensitivity limits of the X-ray data are $3.2\times 10^{-17}$, $9.1\times 10^{-18}$, and $5.5\times 10^{-17}$~erg~cm$^{-2}$~s$^{-1}$ for the full (0.5--8~keV), soft (0.5--2~keV), and hard (2--8~keV) bands, respectively. \hangindent=0.9cm b. The 4Ms-CDFS source catalog of \cite{ran13} (hereafter R13)\footnote{ The 4Ms-CDFS X-ray catalog of R13 is available under [Surveys] $>$ [CDFS] through the portal \url{http://www.mpe.mpg.de/XraySurveys}.} produced using the analysis methodology of \citet{lai09}. The catalog contains 569 point-like X-ray sources and has sensitivity limits of $4.2\times 10^{-17}$, $1.2\times 10^{-17}$, and $8.8\times 10^{-17}$~erg~cm$^{-2}$~s$^{-1}$ in the full, soft, and hard bands, respectively.\\ \noindent {\bf For Area 3 we used}: \hangindent=0.9cm c. The 250ks-ECDFS X-ray catalog from \citet{leh05} (hereafter L05) with 762 sources in the entire ECDFS, of which 457 are in Area~3 (i.e., outside the 4Ms-CDFS area).
Catalog sensitivity limits are $1.1\times 10^{-16}$~erg~cm$^{-2}$~s$^{-1}$ in the soft (0.5--2~keV) band and $6.7\times 10^{-16}$ in the hard (2--8~keV) band. \hangindent=0.9cm d. The 250ks-ECDFS X-ray catalog from \citet{vir06} (hereafter V06) with 651 sources in the entire ECDFS of which 404 are in Area~3. Sensitivity limits are $1.7\times 10^{-16}$ and $3.9\times 10^{-16}$~erg~cm$^{-2}$~s$^{-1}$ in the soft and hard bands, respectively. \subsection{Spectroscopic Data} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{hmag_spz.png} \caption{$H$-band magnitude as a function of spec-z for all objects with spectroscopic redshifts. Black dots in the top panel represent normal galaxies in Area~1, where $\rm{TFIT}_{\rm{CANDELS+IB}}$\ data are available. The middle panel shows normal galaxies identified from the MUSYC catalog in Areas~2 and~3. Black dots in the bottom panel represent X-ray-detected sources in Areas~1 and~2, and magenta triangles denote sources detected in the shallower X-ray data in Area~3. Open blue circles in all three panels indicate objects used for training. \label{hmag_spz} } \end{figure} The availability of spec-z for a subgroup of sources is essential for computing reliable photo-z via SED fitting \citep{dah13}. A subset of spec-z can first be used for training under the assumption that they are representative of the entire population. A different subset can then be used for testing photo-z quality. For this work we cross matched the photometric catalogs to a compilation of spec-z (N.~Hathi, private communication), allowing a maximum separation of 1\arcsec. There are 2314 ($\sim$7\%) Area~1 sources that have reliable spec-z and 3880 ($\sim6\%$) such sources in Areas~2 and~3 (2016 in Area~2, 1864 in Area~3). As discussed by \citet{dah13}, optimal results are obtained when the templates used for the photo-z computation are calibrated on the photometry available for the spectroscopic training samples. 
For this reason the training samples should fully span the entire magnitude--redshift parameter space. Figure~\ref{hmag_spz} shows that the 1000 sources randomly selected as our training samples are indeed spread over all redshift and magnitude ranges in the respective Areas. Because the photometry available in Area~1 differs from that in Area~2+3, two sets of training samples and computations of the zero-point offsets\footnote{Zero-point offset is the average for each photometric band of the difference between the photometry of training set objects and photometry predicted by the best-fit template at the object's redshift. The offset in each band depends on the set of templates used and the number of bands available.} were used. For the X-ray sources, we forgo using the training sample for computing zero-point offsets and instead use it to sample the AGN population and help build the AGN-galaxy hybrid templates needed for proper SED fitting and photo-z measurement \citep{sal09}. For this purpose, we randomly chose $\sim$25\% of the 4Ms-CDFS detections with available spec-z over the entire range of redshift and magnitude that have CANDELS data and used them as the training set to build hybrid templates. The remaining $\sim$75\% were used for unbiased testing of the results. Details are given in the Appendix. 
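The zero-point offsets defined in the footnote can be sketched as a per-band average of the observed-minus-predicted magnitudes over the training set (an illustrative snippet; names and interface are ours):

```python
# Sketch of the zero-point offset computation: for each band, average
# (observed magnitude - magnitude predicted by the best-fit template)
# over the training-set objects.
def zeropoint_offsets(observed_mags, predicted_mags):
    """Both arguments: list over objects of lists over bands;
    returns one offset per band."""
    n_bands = len(observed_mags[0])
    offsets = []
    for b in range(n_bands):
        diffs = [obs[b] - pred[b]
                 for obs, pred in zip(observed_mags, predicted_mags)]
        offsets.append(sum(diffs) / len(diffs))
    return offsets
```

In practice the offsets depend on the template set and on the number of bands available, which is why two separate training samples are needed for Area~1 and Areas~2+3.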
\begin{table*}\footnotesize \centering \caption{Catalogs used for redshift estimation and counterpart identification.\label{sets}} \begin{tabular}{llll} \toprule[1.5pt] &Area 1 &Area 2 &Area 3 \\ \midrule &4Ms-CDFS-X11\tablenotemark{e} &4Ms-CDFS-X11 &250ks-ECDFS-L05\tablenotemark{i} \\ &4Ms-CDFS-R13\tablenotemark{f} &4Ms-CDFS-R13 &250ks-ECDFS-V06\tablenotemark{j} \\ Cross &CANDELS-TFIT &MUSYC &MUSYC \\ matching &MUSYC &TENIS &TENIS \\ &TENIS &SIMPLE-IRAC\tablenotemark{h} & SIMPLE-IRAC\\ &SEDS-IRAC\tablenotemark{g} & &\\ \midrule \multirow{3}{*}{Photo-z} &CANDELS-TFIT\tablenotemark{a} &MUSYC\tablenotemark{c} &MUSYC \\ &IB-TFIT\tablenotemark{b} &TENIS\tablenotemark{d} &TENIS \\ &GALEX-UV &GALEX-UV &GALEX-UV \\ \midrule $N_\mathrm{spz}$ & 2314&2016&1864\\ \bottomrule[1.5pt] \end{tabular} \tablecomments{$N_\mathrm{spz}$ is the number of spec-z used in each Area (N.~Hathi, private communication)} \footnotetext[1]{\citet{guo13}} \footnotetext[2]{Donley et~al.\ in preparation} \footnotetext[3]{\citet{car10}} \footnotetext[4]{\citet{hsi12}} \footnotetext[5]{\citet{xue11}} \footnotetext[6]{\citet{ran13}} \footnotetext[7]{\citet{ash13}} \footnotetext[8]{\citet{dam11}} \footnotetext[9]{\citet{leh05}} \footnotetext[10]{\citet{vir06}} \end{table*} \vspace{10mm} \section{X-ray to optical/NIR/MIR Associations}\label{match} X-ray source positions can differ between catalogs because of the different methods adopted for data reduction and source detection. The goal of this paper is not to judge which method of X-ray source detection is superior, but rather to provide accurate photo-z for the optical/NIR/MIR sources associated with X-ray sources. Associations between X-ray sources and possible counterparts were therefore done independently for each of the four X-ray catalogs (Sec.~\ref{xdata}), and duplicate sources were removed only at the end of the process, as described below. \subsection{Comparing X-ray Catalogs} \label{x11r13} \noindent {\bf 1.
In Areas 1+2:} The major difference between the R13 and X11 catalogs is that R13 adopted a higher threshold for source detection. Despite that, there are some sources in the R13 catalog but not in X11. There are also astrometric differences, which can affect the association to an optical/NIR/MIR counterpart. Thus the redshift assigned to an X-ray source can differ from that of its supposed counterpart, because different template libraries and priors are used for X-ray sources than for normal galaxies. In order to match the X-ray catalogs, we shifted the X11 positions by $-0\farcs175$ in R.A.\ and $0\farcs284$ in Dec.\footnote{The original X11 positions are on the radio astrometric frame. The shifts needed to bring them to the optical frame are in Sec.~3.1 of the X11 paper.} to register them to the optical frame \citep{gia04}. The R13 catalog is already on the MUSYC optical frame and was not shifted. After astrometric shifting, we matched the X11 and R13 catalog coordinates, allowing a maximum distance of 10\arcsec. There are 545 sources in common with a maximum offset $<6\arcsec$, as shown in Figure~\ref{offset1}. For these 545 sources, neither catalog has any additional X-ray source within 10\arcsec. As expected, all of the large offsets are for sources at large off-axis angles. For off-axis angles $<6\arcmin$, the median coordinate offset is 0\farcs13, and except for one source, the maximum offset at any off-axis angle is $<3\farcs5$. We treat each of the 545 matched sources as a single X-ray detection. For $54\%$ of these sources, the separation between the X11 and R13 positions is larger than the positional error claimed by either catalog. In addition, there are 195 sources detected by X11 but not R13 and 24 sources detected by R13 but not X11, for a total of 764 X-ray sources in Areas 1+2. As R13 mentioned, the sources unique to either catalog are mostly low-significance detections and therefore of lower reliability.
In the following discussions, ``X-'' sources indicate those from X11 and ``R-'' those from R13. \\ \noindent {\bf 2. In Area 3: } We adopted the \citet{car08} astrometric calibration to align the V06 positions to the MUSYC and L05 catalogs, which were already in agreement. After the shift, the two catalogs have 366 source matches with offsets $<6\arcsec$ and a median separation of 0\farcs16 (Fig.~\ref{offset2}). We treat each of these 366 matched sources as a single X-ray detection. For $12\%$ of these sources, the separation is larger than the positional error associated with either catalog. In addition, there are 91 sources in the L05 catalog but not V06 and 38 sources in the V06 catalog but not L05, for a total of 495 X-ray sources in Area~3. A compilation of the four X-ray catalogs with their original positions and fluxes is available under [Surveys] $>$ [CDFS] through the portal \url{http://www.mpe.mpg.de/XraySurveys}. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{xue_imp2_offset_6arcsec.png} \caption{Coordinate differences between the X11 and R13 X-ray catalogs. The lower panel shows a histogram of the offsets for the 545 sources in Areas~1 and~2 common to the two catalogs. The upper panel shows the off-axis angle from the {\it Chandra} aim point as a function of the angular offset. \label{offset1}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{L05_V06_offset_6arcsec.png} \caption{Coordinate differences between the L05 and V06 X-ray catalogs. The lower panel shows a histogram of the offsets for the 366 sources in Area~3 common to the two catalogs. The upper panel shows the off-axis angle from the {\it Chandra} aim point as a function of the angular offset. \label{offset2}} \end{figure} \subsection{Matching method}\label{sec:matching_method} We used a new association method based on Bayesian statistics, which allows pairing of sources from more than two catalogs at once while also making use of priors.
Salvato et~al.\ (2014, in preparation) will give details, but in brief the code finds matches based on the equations developed by \citet{bud08}. Then additional probability terms based on the magnitude and color distributions are applied (see \citealt{nay13} for a similar approach). The code was developed in view of the launch of eROSITA \citep{mer12}, where an expected million sources will be scattered over the entire sky and will have non-negligible positional errors and/or non-homogeneous multi-wavelength coverage, conditions not optimal for association methods like Maximum Likelihood (e.g., \citealt{bru07, luo10, civ12}). The new method provides the same quality of results as Maximum Likelihood in a much shorter time because matches are done simultaneously across all bands. Thus, for example, sources that are extremely faint or undetected in optical bands but brighter in the IRAC 3.6~$\mu \mathrm{m}$ band can be identified as counterparts in a single iteration. For the 4Ms-CDFS sources (X11 and R13) located in Area 1, we used the CANDELS/$H$-selected catalog, TENIS/\linebreak[0]$J\&K_s$-selected catalog, MUSYC/\linebreak[0]$BVR$-selected catalog, and the deblended SEDS/IRAC 3.6~\micron\ catalog. For the 4Ms-CDFS sources located in Area~2, we matched the X-ray sources to the TENIS/$J\&K_s$-selected catalog, MUSYC/$BVR$-selected catalog, and SIMPLE/IRAC 3.6$\mu \mathrm{m}$ catalog. The same set of three catalogs was also used in Area~3 for finding the associations for the 250ks-ECDFS sources (L05 and V06). Table~\ref{sets} summarizes the catalogs matched in each Area. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{post.png} \caption{Cumulative fraction of the posterior $p$ for the possible counterparts to the X-ray sources in Areas 1, 2, and 3, as indicated in the legend.
\label{post}} \end{figure} For each X-ray source (740 from X11, 569 from R13, 440 from L05, and 374 from V06), we considered all catalog objects lying within 4\arcsec\ of the X-ray position and computed the posterior probability $p$ that the given object is the correct counterpart. Figure~\ref{post} shows the distribution of the posteriors for all the possible associations in the three areas. In Area 1, where the data are deeper and better resolved, more than $98\%$ of the X-ray sources have at least one association with $p>0.7$, and we adopt this $p$ value as the threshold for defining an association in all three areas. Area 3 has a distribution of $p$ that reaches lower values, but because of the shallowness and lower resolution of the data, we consider associations with $p<0.7$ unreliable. Our catalogs (see Sec.~\ref{catalog}) include the $p$ value to allow users to define a stricter threshold, depending on the intended scientific use. Figure~\ref{close} shows examples of ambiguous identifications. In all three cases shown, a single X-ray source has two possible $H$-band associations with $p>0.99$. Even the simultaneous use of deblended IRAC photometry from SEDS does not identify a unique counterpart. The example in the middle shows that despite the high resolution of the CANDELS images, the upper source is still blended, and a third component is probably present. If further deblending were applied, the $H$ flux would be split among multiple components, thus reducing the probability of the upper source being the right association. In practice, we attempted no further deblending and simply flagged these kinds of objects as sources with multiple counterparts. For these cases, in addition to the photo-z computed using normal galaxy templates, we also provide the values obtained by assuming that they are AGNs.
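The positional ingredient of such an association posterior can be sketched as follows. This is a minimal illustration of the two-catalog Bayes factor of \citet{bud08} only; the magnitude and color priors used by the actual code are omitted, and the positional errors and prior probability below are purely illustrative:

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)  # radians per arcsecond

def bayes_factor(psi, sigma1, sigma2):
    """Two-catalog positional Bayes factor of Budavari & Szalay (2008)
    for Gaussian astrometric errors (small-angle approximation).
    psi: angular separation; sigma1, sigma2: 1-sigma positional errors.
    All angles in radians."""
    s2 = sigma1 ** 2 + sigma2 ** 2
    return (2.0 / s2) * math.exp(-psi ** 2 / (2.0 * s2))

def match_posterior(psi, sigma1, sigma2, prior):
    """Posterior probability that the two detections are the same source,
    given a prior probability that a given catalog pair is a true match."""
    b = bayes_factor(psi, sigma1, sigma2)
    return 1.0 / (1.0 + (1.0 - prior) / (prior * b))

# Illustrative: 0.5" X-ray and 0.1" optical positional errors, with a
# hypothetical prior of 1e-5 for any given pair being a true match
p_close = match_posterior(0.3 * ARCSEC, 0.5 * ARCSEC, 0.1 * ARCSEC, 1e-5)
p_far = match_posterior(3.0 * ARCSEC, 0.5 * ARCSEC, 0.1 * ARCSEC, 1e-5)
```

A counterpart offset by a fraction of the combined positional error yields a posterior near unity, while one a few arcseconds away falls well below the $p>0.7$ threshold adopted above.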
The photo-z results reveal that $\sim 20\%$ of these close pairs have similar redshifts and may be associated with galaxy mergers or galaxy groups. However, the majority of apparent pairs are projections of unrelated objects. \begin{figure}[h] \epsscale{1.15} \plotone{close_pair} \caption{Three examples of multiple $H$-band associations (from left to right X-115, X-517, and X-224) in $H$-band (upper) and IRAC-3.6$\mu \mathrm{m}$ (lower). The size of each cutout is $5'' \times 5''$. The red circles are centered at the X-ray position with radius corresponding to the positional error. The cyan crosses indicate the positions of $H$-band detected sources from G13. These three cases have two $H$-band associations, both with probabilities greater than 0.99. The use of deblended IRAC photometry does not yield a unique secure association.\label{close}} \end{figure} \subsection{Matching results} \label{match_result} \begin{figure} \centering \includegraphics [width=0.47\textwidth]{flowchart_v3.png} \caption{Flow chart of the process for four cases of X-ray to optical/NIR/MIR associations. $H$-band negative images ($5'' \times 5''$) are provided as examples for each case. Dashed-line circles show the X11 (red) and R13 (cyan) X-ray positions and positional uncertainties. Red and cyan crosses show the corresponding $H$-band counterparts. \label{flowchart}} \end{figure} Figure~\ref{flowchart} shows the decision tree for X-ray source associations and photo-z computation, and Table~\ref{xcase} gives the numbers for each case in each Area. There are four cases:\\ \hangindent=0.9cm {\bf 1. Case~1:} An X-ray source in both catalogs with one optical/NIR/MIR association. Case~1 means the same unique association was chosen even though the X-ray catalog positions may differ between X11 and R13 or between L05 and V06. There are 714 of these sources in Areas~1+2+3. \hangindent=0.9cm {\bf 2. Case~2:} An X-ray source in both catalogs with differing optical/NIR/MIR associations.
Case~2 can arise from two causes: (1) position differences in the X-ray catalogs may point to different counterparts, or (2) there may be more than one potential counterpart near the X-ray position(s), and we cannot tell which is the right one. Some of the latter may be blended sources with more than one galaxy contributing to the X-ray flux. In total, there are 181 case~2 sources in Areas~1+2+3. These sources are identified in the final catalogs, and counterpart photo-z are calculated using both AGN and normal galaxy SED templates (see Section~\ref{catalog}). \hangindent=0.9cm {\bf 3. Case~3:} X-ray sources found in one catalog but not the other, having a unique counterpart. There are 235 of these sources in Areas~1+2+3. \hangindent=0.9cm {\bf 4. Case~4:} X-ray sources found in only one catalog and having multiple possible counterparts. There are 77 such sources in Areas~1+2+3. As for Case~2, the catalogs identify all the possible counterparts and provide both AGN and normal galaxy photo-z results for each.\\ In summary, 1207 out of 1259 ($\sim$96\%) of the X-ray sources are associated with multi-wavelength counterparts, and 258 of them ($\sim$21\%) have multiple possible counterparts. There are 26 sources for which the counterpart is detected only in the IRAC bands, and no photo-z computation is possible for these. All the other sources have at least six photometric points, and a photo-z is provided. The photo-z catalog (see Sec.~\ref{catalog}) entry for each source indicates the number of photometric points used for the photo-z computation. The remaining 52 sources ($\sim 4\%$) either have no identifications in any of the optical/NIR/MIR catalogs ($\sim 1\%$) or have possible counterparts identified with $p<0.7$ ($\sim 3\%$). Photo-z are likewise unavailable for these sources. \begin{table*} \centering \caption{Results of X-ray to optical/NIR/MIR associations in the ECDFS.
\label{xcase}} \begin{tabular}{cccccccccccc} \toprule[1.5pt] &$N_{x}$ & Case1& Case2& Case3& Case4&$N_{ctp}^{single}$&$N_{ctp}^{multi}$ &$N_{ctp}$& $N_{ctp}^{multi}/N_{ctp} $ & $N_{ctp}/N_{x} $ \\ \midrule Area 1 &509 &272 &67 &130 &29 &402 &96 &498 &19\% &$98\%$ \\ \midrule Area 2 &255 &170 &29 &35 &12 &205 &41 &246 &17\% &$96\%$ \\ \midrule Area 3 &495 &272 &85 &70 &36 &342 &121 &463 &26\% &$94\%$ \\ \midrule TOTAL &1259 &714 &181 &235 &77 &949 &258 &1207 &21\% &$96\%$ \\ \bottomrule[1.5pt] \end{tabular} \tablecomments{$N_{x}$: Number of X-ray sources; $N_{ctp}^{single}$: Number of sources that have only one possible counterpart; $N_{ctp}^{multi}$: Number of sources that have more than one possible counterpart; $N_{ctp}$: Total number of sources for which at least one counterpart was found. } \end{table*} \begin{figure*} \centering \includegraphics[width=0.5\textwidth]{not_onir.png} \caption{Multi-wavelength images of the seven sources from X11 for which we found new, secure ($p>0.9$) counterparts. Wavelengths are indicated above each set of panels. The four sources in the upper group are in Area~1 and have CANDELS $H$-band images. The three sources in the lower group have no WFC3-$H$, and TENIS-$K_s$ is shown instead. X-ray images are full-band from X11. The red dashed circles are centered at the X11 positions with their radii showing the corresponding positional uncertainty. Cyan crosses in the upper panels show all $H$-band detections, and the solid red circles show the catalog position of the chosen counterpart. All cutouts are $5'' \times 5''$ except for X-736, which is $10'' \times 10''$. \label{not_onir}} \end{figure*} \subsection{Comparison to previous results in Area 1+2}\label{sec:comp_match} X11 used likelihood ratio matching to assign counterparts to 716 out of 740 X-ray sources in Areas~1+2. Our code and the newly available ancillary data give secure counterparts (with $p>0.9$) for seven additional sources, shown in Figure~\ref{not_onir}.
Most of the new counterparts are offset from the X-ray position by 1--2 times the X-ray position uncertainty. The most likely reason for finding new identifications is having better imaging data available, but there remains a chance that some of the X-ray sources are not real. Figure~\ref{234} shows an example of a revised X-ray association. In this case, low-resolution catalogs give a single counterpart for the source (R-57=X-234) for either X-ray position. However, the high-resolution WFC3/$H$-band image reveals at least four sources close together, and the slightly different coordinates provided by X11 and R13 point to different but equally likely counterparts. This difference is mainly due to the catalogs chosen for cross-matching rather than the matching method. The Bayesian method should in principle give the same result as the maximum likelihood method, but the ability to match several catalogs simultaneously greatly improves the efficiency of the matching. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{xue_234a.png} \caption{ Negative images of the source R-57 (=X-234). Image wavelengths are indicated at the top, and each image is $5'' \times 5''$. Red dashed-line circles are centered at the position provided by X11 and cyan dashed-line circles at the position given by R13. Circle sizes indicate the respective X-ray position uncertainties. Red and cyan solid-line circles are the counterparts we assign to the two X-ray positions, and the blue circle indicates the counterpart assigned by X11. \label{234} } \end{figure} \section{Photo-z for Non X-ray detected Galaxies}\label{galphz} This section focuses on the X-ray-undetected sources, which we refer to as ``normal galaxies'' even though some will in fact be AGNs.\footnote{A large fraction of galaxies that host AGNs in their central regions do not emit detectable X-rays but are identified at infrared and/or radio wavelengths or by emission line ratios \citep[e.g.,][]{don12}.
} The derived photo-z will be reliable to the extent that these ``normal galaxies'' indeed have normal galaxy SEDs at observed visible/infrared wavelengths. X-ray sources require additional tuning for accurate photo-z and are discussed in Section~\ref{x-phz}. The photo-z were computed using {\it LePhare} \citep{arn99,ilb06}, which is based on a $\chi^{2}$ template-fitting method. For the normal galaxies, we adopted the same templates, extinction laws, and absolute magnitude priors as \citet{ilb09}. In short, 31 stellar population templates were corrected for theoretical emission lines by modeling the fluxes with line ratios of [\ion{O}{3}]/[\ion{O}{2}], H$\beta$/[\ion{O}{2}], H$\alpha$/[\ion{O}{2}], and Ly$\alpha$/[\ion{O}{2}]. In addition to the galaxy templates, we also included a complete library of star templates, as did \citet{ilb09} and \citet{sal09}. Four extinction laws (those of \citealt{pre84}, \citealt{cal00}, and two modifications of the latter, depending on the kind of templates) were used with $E(B-V)$ values of 0.00 to 0.50 in steps of 0.05~mag. Photo-z values were allowed to reach $z=7$ (in steps of 0.01) because the deeper photometry allows us to reach higher redshifts (see details given by \citealt{ilb09}). The fitting procedure included a magnitude prior, forcing sources to have a rest-frame $B$-band absolute magnitude between $-24$ and $-8$. Photometric zero-point corrections were incorporated but never exceeded 0.1~mag; we advise against applying such corrections to AGNs, because their optical variability is intrinsic to the source and not accounted for in the photometry. The final best-fit parameters were those that minimized $\chi^2$. All the normal galaxies were selected from either the CANDELS-TFIT catalog or the MUSYC catalog (Sec.~\ref{optdata}). Photo-z are based on $\rm{TFIT}_{\rm{CANDELS+IB}}$\ photometry for sources detected in the CANDELS-TFIT catalog and otherwise on MUSYC+TENIS photometry.
The majority of normal galaxies have $\rm{TFIT}_{\rm{CANDELS+IB}}$\ photometry in Area~1 but only MUSYC+TENIS photometry in Areas~2+3. \\ The photo-z accuracy ($\sigma_{\mathrm{NMAD}}$)\footnote{Our measure of photo-z accuracy is the normalized median absolute deviation (NMAD): $\sigma_{\mathrm{NMAD}} \equiv 1.48\times \mathrm{median}\left(\frac{|\Delta z|}{1+z_{s}}\right)$, where $z_{s}$ is spec-z, $z_{p}$ is photo-z, and $\Delta z\equiv z_{p}-z_{s}$. Outliers were not removed before computing $\sigma_{\mathrm{NMAD}}$.}, the percentage of outliers ($\eta$)\footnote{Outliers are defined as $\frac{|\Delta z|}{1+z_{s}} > 0.15$.}, and the mean offset between photo-z and spec-z ($\mathrm{bias}_{z}$)\footnote{$\mathrm{bias}_{z} \equiv \mathrm{mean}\left(\frac{\Delta z}{1+z_{s}}\right)$ after excluding outliers.} were quantified using the spectroscopic samples. Table~\ref{gal_phz_tab} gives these measures of photo-z quality for the global samples and for subsamples split into magnitude and redshift bins. \subsection{Area 1}\label{galphz_zone1} The overall outlier fraction of $\sim$3.8\% in this region is comparable to the most recent work by the CANDELS team \citep{dah13}. However, the deblended IB photometry from MUSYC improves the accuracy to $\sigma_{\mathrm{NMAD}} =0.012$ (from $\sigma_{\mathrm{NMAD}} =0.026$ in \citealt{dah13}) and the bias to $\mathrm{bias}_{z} =-0.001$ (from $\mathrm{bias}_{z} =-0.005$). Figure~\ref{gal_phz1} illustrates the results. Outlier fractions and scatter are larger for the fainter galaxies (Table~\ref{gal_phz_tab}), but bias is only a weak function of source magnitude. Bias is, however, larger for $z>1.5$ galaxies than for those at lower redshifts. Scatter and outlier fraction are also larger at $z>1.5$, but this mostly reflects the typically fainter magnitudes of the more distant sources. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{gs38_gal_SEP24.png} \caption{Upper panel: Photo-z vs.\ spec-z.
Dots represent all normal galaxies with spec-z in Area~1. The solid line represents $z_{p}= z_{s}$; the dotted lines represent $z_{p}= z_{s} \pm 0.15(1+ z_{s})$. Lower panel: Same but plotted as the difference $\Delta z\equiv(z_{p}-z_{s})$. \label{gal_phz1}} \end{figure} The decreased outlier fraction in the present survey requires {\it both} the deeper CANDELS-TFIT data and the deblended IB photometry. Table~\ref{CvsM} gives data quality measures for 1541 sources in common using various data sets. Using only MUSYC+TENIS, but not the deep $\rm{TFIT}_{\rm{CANDELS+IB}}$\ data, produces, as expected, the same data quality as \citet{car10}. However, using the $\rm{TFIT}_{\rm{CANDELS+IB}}$\ photometry decreases the outlier fraction from $\sim$4\% to $\sim$2\%, and the decrease is most substantial (more than a factor of two) for the faint and distant sources. (See Table~\ref{CvsM}.) Figure~\ref{gs_gal_car} illustrates the comparison and in particular the decrease in outliers at $z_s>2$. The difference comes from the use of deep space-based data (i.e., CANDELS) and the TFIT technique for deblending the lower-resolution bands. However, the IB data are also important. \citet{dah13} used the CANDELS-TFIT data; while their results (included in Table~\ref{CvsM}) are better than those obtained with the ground-based data alone, they are not as good as those from the combined data sets (i.e., $\rm{TFIT}_{\rm{CANDELS+IB}}$). Adding the IB data improves results, mainly in accuracy but also in outlier fraction, even for the fainter subset of the sample. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{gs38_gal_SEP24_car.png} \caption{Photo-z vs.\ spec-z for 1541 normal galaxies in Area~1. Black dots are results from this work, and grey open circles are results from \citet{car10}. Blue dots and blue open circles indicate objects that are outliers both in our work and in that of \citet{car10}.
\label{gs_gal_car}} \end{figure} \begin{table*} \begin{center} \caption{Photo-z Quality for Normal Galaxies\label{gal_phz_tab}} \begin{tabular}{ccccccccccccc} \toprule[1.5pt] \multicolumn{1}{c}{}&\multicolumn{4}{c}{Area 1}&\multicolumn{4}{c}{Area 2+3} &\multicolumn{4}{c}{Area 1+2+3} \\ \cmidrule(r){2-5} \cmidrule(r){6-9} \cmidrule(r){10-13} & $N$ &$\mathrm{bias}_{z}$ & $\sigma_{\mathrm{NMAD}}$ & $\eta(\%)$ & $N$&$\mathrm{bias}_{z}$& $\sigma_{\mathrm{NMAD}}$& $\eta(\%)$ & $N$&$\mathrm{bias}_{z}$& $\sigma_{\mathrm{NMAD}}$& $\eta(\%)$ \\ \midrule Total &1979 &-0.001 &0.012 &3.79 &3444 &0.001 &0.009 &4.21 &5423 &0.001 &0.010 &4.06\\ \midrule $R < 23$ &576 &0.003 &0.008 &1.04 &2414 &0.001 &0.009 &2.20 &2990 &0.001 &0.009 &1.97\\ $R > 23$ &1403 &-0.002 &0.015 &4.92 &1030 &0.002 &0.012 &8.93 &2433 &-0.001 &0.013 &6.62\\ \midrule $H < 23$ &1323 &-0.000 &0.011 &2.87 &2428 &0.002 &0.009 &2.72 &3751 &0.001 &0.009 &2.77\\ $H > 23$ &656 &-0.002 &0.016 &5.64 &1016 &-0.001 &0.011 &7.78 &1672 &-0.001 &0.012 &6.90\\ \midrule $z < 1.5$ &1652 & 0.002 &0.011 &3.51 &3316 &0.002 &0.009 &3.89 &4968 &0.002 &0.009 &3.76\\ $z > 1.5$ &327 &-0.013 &0.021 &5.20 & 128 &-0.008 &0.031 &12.50 &455 &-0.011 &0.024 &7.25\\ \bottomrule[1.5pt] \end{tabular} \end{center} \end{table*} \begin{table*} \begin{center} \caption{Comparison of Photo-z Results for Normal Galaxies in Area~1\label{CvsM}} \begin{tabular}{cccccccccccccc} \toprule[1.5pt] \multicolumn{2}{c}{}&\multicolumn{3}{c}{$\rm{TFIT}_{\rm{CANDELS+IB}}$}&\multicolumn{3}{c}{MUSYC+TENIS}&\multicolumn{3}{c}{Cardamone+2010} &\multicolumn{3}{c}{Dahlen+2013} \\ \cmidrule(r){3-5} \cmidrule(r){6-8} \cmidrule(r){9-11} \cmidrule(r){12-14} & $N$ &$\mathrm{bias}_{z}$ & $\sigma_{\mathrm{NMAD}}$ & $\eta(\%)$ &$\mathrm{bias}_{z}$& $\sigma_{\mathrm{NMAD}}$& $\eta(\%)$&$\mathrm{bias}_{z}$& $\sigma_{\mathrm{NMAD}}$& $\eta(\%)$&$\mathrm{bias}_{z}$& $\sigma_{\mathrm{NMAD}}$& $\eta(\%)$ \\ \midrule Total &1541 &0.000 &0.011 & 2.14 &0.003 &0.012 &3.96 & 0.000 &0.011 & 3.96
&-0.005 &0.026 &2.47 \\ \midrule $R < 23$ &506 &0.003 &0.009 &0.79 &0.002 &0.008 &0.79 &0.002 &0.008 &0.99 &-0.002 &0.026 &0.99\\ $R > 23$ &1035 &-0.002 &0.013 &2.80 &0.003 &0.016 &5.51 &-0.001 &0.016 &5.41 &-0.006 &0.026 &3.19\\ $H < 23$ &1064 & 0.001 &0.010 & 1.60 &0.003 &0.010 &2.07 & 0.000 &0.010 & 2.07 &-0.006 &0.027 &1.97\\ $H > 23$ & 477 &-0.002 &0.014 & 3.35 &0.002 &0.021 &8.18 & 0.000 &0.022 & 8.18 &-0.001 &0.024 &3.56\\ $z < 1.5$ &1308 &0.002 &0.010 &2.14 &0.004 &0.011 &3.13 &0.002 &0.010 &2.98 &-0.005 &0.026 &2.45\\ $z > 1.5$ & 233 &-0.014 &0.019 &2.15 &-0.002 &0.030 &8.58 &-0.008 &0.045 &9.44 &-0.002 &0.023 &2.58\\ \bottomrule[1.5pt] \end{tabular} \end{center} \end{table*} \subsection{Areas 2 and 3} \label{galphz_zone23} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{MUSYC_gal_outgs_NOV19.png} \caption{Photo-z vs.\ spec-z of normal galaxies in Areas~2+3.\label{gal_phz23}} \end{figure} Outside the CANDELS area, photo-z quality using MUSYC+TENIS photometry is similar to that of \citet{car10}. Figure~\ref{gal_phz23} illustrates the results. The brighter and lower-redshift subsets have photo-z quality almost as good as in Area~1 (see Table~\ref{gal_phz_tab}), but fainter galaxies have a higher outlier fraction. This is just as expected from the tests in Section~\ref{galphz_zone1}.\\ \begin{figure} \centering \includegraphics[width=0.47\textwidth]{gal_phz_hist2.png} \caption{Upper panel: normalized photo-z distribution for normal galaxies. Grey hatched area shows results of this work, and blue shaded area shows results of \citet{car10}. Lower panel: cumulative number of normal-galaxy photo-z for this work and for \citet{car10}, as labeled. \label{gal_phz_hist}} \end{figure} The entire ECDFS (Areas~1+2+3) contains $\sim$104,000 normal galaxies that have photo-z up to $z\sim7$. Figure~\ref{gal_phz_hist} shows the advantage of using WFC3 NIR to detect more sources in total and especially at $z\ga2$.
An interesting paradox is that we actually have a slightly lower fraction of sources at $z>1.5$ than \citet{car10}. This probably arises because their higher outlier fraction, due to the lack of deep NIR data, scatters more sources to apparent $z>1.5$. \section{Photo-z for X-ray sources }\label{x-phz} \begin{figure*} \centering \includegraphics [width=1.0\textwidth]{flux.png} \caption{Soft X-ray flux distributions in numbers (left) and source densities (right). Histograms show distributions for the 4Ms-CDFS (Areas~1 and~2), 250ks-ECDFS (Area~3), and the comparison {\it Chandra}-COSMOS \citep{elv09} and {\it XMM}-COSMOS \citep{cap09} surveys, as indicated in the legend. \label{flux}} \end{figure*} AGNs require special treatment when calculating photo-z. This paper uses deep X-ray data to identify candidate AGNs. However, X-ray surveys as deep as the 4Ms-CDFS also detect significant numbers of star-forming galaxies. The template library must therefore include normal galaxies, AGNs, and hybrids. \subsection{Template Library Methods}\label{tlm} \citet{luo10} computed photo-z for sources in the 2Ms-CDFS by using the spectra of known sources as templates for SED fitting. Using the entire spectroscopic sample for template training gave an apparent accuracy of $\sim$0.01 with almost no outliers. However, unbiased testing suggested a true accuracy of 0.059 and $\sim$9\% outliers. This demonstrates how important the training sample is. For the same field, \citet{car10} created hybrid templates by combining normal galaxy templates with the SED of a type~1 AGN. The method gave accurate results ($\sigma_{\rm NMAD}\sim 0.01$) but a large outlier fraction ($\sim 12\%$). \citet[][hereafter S09, S11]{sal09,sal11} pursued a different approach for X-ray sources in the COSMOS field \citep{sco07} detected by {\it XMM} \citep{cap09} and by {\it Chandra} \citep{elv09}.
This involved (1) correcting the photometry for variability when applicable; (2) separating the optical counterparts to the X-ray sources into two subgroups: point-like and/or variable sources in one and extended, constant sources in the other; (3) applying absolute magnitude priors to these two subgroups, assuming that the former are AGN-dominated while the latter are galaxy-dominated; and (4) creating AGN-galaxy hybrids, using different libraries for the two subgroups. This same procedure substantially reduced the fraction of outliers and gave higher accuracy than standard photo-z techniques when applied to X-ray sources in COSMOS. The procedure has also yielded reliable results for the Lockman Hole \citep{fot12} and AEGIS fields (Nandra et al.\ 2014, submitted). S11 also verified the need for depth-dependent template libraries by showing that hybrids defined for {\it XMM}-COSMOS are not optimal for the deeper {\it Chandra}-COSMOS. Even though the X-ray-faint {\it Chandra} sources are AGNs (i.e., $L_{\rm x} >10^{42}$~erg~s$^{-1}$), normal galaxy templates gave better results for them than AGN-dominated templates. \subsection{Constructing Population-dependent SED Libraries} \label{xpop} For this work, we constructed new hybrid templates following the procedure of S09 and S11 (Sec.~\ref{tlm}). First, we point out that the difference between the X-ray flux distributions of the two surveys used in this work (4Ms-CDFS and 250ks-ECDFS) is even more extreme than that found in S11 between XMM-COSMOS and Chandra-COSMOS. Fig.~\ref{flux} shows the soft X-ray flux distributions of the 4Ms and 250ks sources, together with the distributions from the Chandra-COSMOS \citep{elv09} and XMM-COSMOS \citep{cap09} surveys. The left panel shows the distribution in numbers for each survey.
Because of the sky coverage and the depth of the observations, most of the 4Ms-CDFS sources are located in the faint part of the flux distribution, opposite to the locus occupied by the shallower surveys (e.g., XMM-/Chandra-COSMOS and 250ks-ECDFS). After normalizing by the total surveyed area\footnote{This is not the $\log N$--$\log S$ distribution.} (see the right panel), it becomes clear that X-ray-bright sources similar to the XMM-COSMOS sources are very rare in the 4Ms survey. This implies that the library of hybrids used in XMM-COSMOS is probably not representative of the 4Ms population. Based on these considerations, we built a new library for the fainter 4Ms-CDFS population. The Appendix gives details, but in short, AGN SEDs were combined in various proportions with semi-empirical galaxy SEDs from the FORS Deep Field \citep{ben01} (the same ones already used successfully by \citealt{gab04}, \citealt{dro05}, and \citealt{feu05}) to make hybrid templates. The AGN SEDs were modified QSO1 and QSO2 templates originally from \citet{pol07}. Separate libraries were used for (a) point-like sources in Areas~1+2, (b) point-like sources in Area~3, and (c) extended sources in all Areas. Because the flux distribution of point-like sources in Area~3 is similar to that of the {\it XMM}-COSMOS field, library (b) for point-like sources in that area was the same as that used by \citet{sal09}. As a first step, we split the sources into extended and point-like subgroups depending on the observed source FWHM in the WFC3/$H$-band images for Area~1 and the MUSYC/$BVR$ images for Areas~2 and~3. The extended sources were assumed to be host-dominated, and being resolved also implies that they are likely at low redshift. For these sources, we applied an absolute magnitude prior $-24<M_{B}<-8$ and used templates with at most a small AGN fraction. Point-like sources are usually AGN-dominated and can be at any redshift.
We therefore applied a prior $-30<M_{B}<-20$ to these and used hybrid AGN-galaxy templates. The library of stellar templates was the same as that used by \citet{ilb09} and \citet{sal09}. For the {\it XMM}-COSMOS field, \citet{sal09} had multi-wavelength, multi-epoch observations spanning several years, in which about a quarter of the sources were variable. The lack of multi-epoch data for the CDFS/ECDFS means that we cannot detect the variable objects and correct their photometry. However, such objects are a minor contributor to the X-ray population in the much smaller CDFS area (1/15 of the XMM-COSMOS area), so only a minor fraction of the Area~1 and~2 sources are likely to be variable. The major effect of being unable to correct for variability will be an increased outlier fraction rather than decreased photo-z accuracy \citep{sal09}. Area~3, covered at 250~ks depth, is an intermediate case, and part of the photo-z inaccuracy there could be due to the lack of a variability correction. The spectroscopic testing in the respective Areas (Table~\ref{xmix_phz_tab}) quantifies the outlier fractions and the inaccuracies resulting from all causes.
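For reference, the three quality metrics reported throughout (defined in the footnotes of Sec.~\ref{galphz}) can be computed as in this minimal sketch; the sample redshift arrays are invented for illustration only:

```python
import statistics

def photoz_quality(z_phot, z_spec):
    """sigma_NMAD (computed with outliers kept), outlier percentage eta,
    and bias_z (computed after excluding outliers), following the
    definitions used in the text."""
    dz = [(zp - zs) / (1.0 + zs) for zp, zs in zip(z_phot, z_spec)]
    sigma_nmad = 1.48 * statistics.median(abs(d) for d in dz)
    n_out = sum(1 for d in dz if abs(d) > 0.15)
    eta = 100.0 * n_out / len(dz)
    good = [d for d in dz if abs(d) <= 0.15]
    bias_z = statistics.mean(good) if good else float("nan")
    return sigma_nmad, eta, bias_z

# Invented example: one catastrophic outlier among four sources
z_spec = [0.50, 1.00, 2.00, 0.80]
z_phot = [0.52, 1.01, 2.60, 0.79]
sigma, eta, bias = photoz_quality(z_phot, z_spec)
```

Note that, as in the text, the third source ($|\Delta z|/(1+z_s)=0.2$) counts as an outlier and is excluded from the bias but not from $\sigma_{\mathrm{NMAD}}$.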
\begin{table*}\footnotesize \begin{center} \caption{Photo-z Quality for X-ray Sources\label{xmix_phz_tab}} \begin{tabular}{ccccccccccccccccc} \toprule[1.5pt] \multicolumn{1}{c}{}&\multicolumn{4}{c}{Area 1}&\multicolumn{4}{c}{Area 2}&\multicolumn{4}{c}{Area 3}&\multicolumn{4}{c}{Area 1+2+3} \\ \cmidrule(r){2-5} \cmidrule(r){6-9} \cmidrule(r){10-13}\cmidrule(r){14-17} & $N$ &$\mathrm{bias}_{z}$&$\sigma_{\mathrm{NMAD}}$ & $\eta(\%)$ & $N$ &$\mathrm{bias}_{z}$& $\sigma_{\mathrm{NMAD}}$& $\eta(\%)$ & $N$ &$\mathrm{bias}_{z}$&$\sigma_{\mathrm{NMAD}}$ & $\eta(\%)$&$N$ &$\mathrm{bias}_{z}$&$\sigma_{\mathrm{NMAD}}$ & $\eta(\%)$ \\ \midrule Total & 300 &-0.002 &0.012 & 2.67 & 104 &-0.002 &0.014 & 6.73 & 148 &-0.004 &0.016 &10.14 & 552 &-0.002 &0.014 & 5.43 \\ \midrule $R < 23$ & 171 &-0.004 &0.010 & 1.17 & 80 &-0.000 &0.014 & 5.00 & 109 & 0.001 &0.013 & 8.26 & 360 &-0.002 &0.011 & 4.17\\ $R > 23$ & 129 & 0.001 &0.024 & 4.65 & 24 &-0.008 &0.016 &12.50 & 39 &-0.018 &0.023 &15.38 & 192 &-0.004 &0.023 & 7.81\\ $H < 23$ & 278 &-0.003 &0.012 & 1.80 & 102 &-0.002 &0.014 & 6.86 & 69 &-0.004 &0.016 &10.14 & 449 &-0.003 &0.013 & 4.23\\ $H > 23$ & 22 & 0.012 &0.014 &13.64 & 2 &-0.010 &0.026 & 0.00 & 79 &-0.004 &0.016 &10.13 & 103 &-0.001 &0.014 &10.68\\ $z < 1.5$ & 240 &-0.001 &0.012 & 2.50 & 86 &-0.001 &0.015 & 4.65 & 112 &-0.003 &0.020 & 8.04 & 438 &-0.001 &0.014 & 4.34\\ $z > 1.5$ & 60 &-0.004 &0.014 & 3.33 & 18 &-0.009 &0.012 &16.67 & 36 &-0.009 &0.010 &16.67 & 114 &-0.006 &0.012 & 9.65\\ \bottomrule[1.5pt] \end{tabular} \end{center} \end{table*} \subsection{Results}\label{xresult} \begin{figure} \includegraphics [width=0.45\textwidth]{TFIT_vs_MUSYC_xmix.png} \caption{Comparison of photo-z to spec-z with and without $\rm{TFIT}_{\rm{CANDELS+IB}}$\ photometry. Filled points show results for 242 X-ray sources from the \citet{car10} catalog using the full $\rm{TFIT}_{\rm{CANDELS+IB}}$\ dataset. Open circles show results for the same sources using only the MUSYC+TENIS data.
\label{TFIT_vs_MUSYC_xmix}} \end{figure} \begin{figure}[htb] \includegraphics [width=0.45\textwidth]{Area123_phz_xmix.png} \caption{Comparison of photo-z to spec-z for all X-ray sources in Areas 1+2+3.\label{zone123_phz_fig}} \end{figure} In Area~1, where $\rm{TFIT}_{\rm{CANDELS+IB}}$\ photometry and high-resolution space-based images are available, the photo-z for X-ray sources (Table~\ref{xmix_phz_tab}) are as accurate as those for normal galaxies (Table~\ref{gal_phz_tab}). Remarkably, the outlier fraction is actually lower for the X-ray sources than for normal galaxies. Excellent photo-z quality is maintained even for $z>1.5$. Figure~\ref{TFIT_vs_MUSYC_xmix} shows that the results are largely attributable to the WFC3 data with their high angular resolution. Using the $\rm{TFIT}_{\rm{CANDELS+IB}}$\ catalog instead of ground-based data alone (i.e., MUSYC+TENIS) reduces the outlier fraction by a factor of three. The improvement is especially great for the $R>23$ and $z>1.5$ sources: the outlier fraction decreases from $6.3\%$ to $2.1\%$ for faint sources and from $12.8\%$ to $2.6\%$ for high-redshift sources. Comparison with Area~2 also confirms the importance of the WFC3 data. Without these data, photo-z accuracy deteriorates only slightly (Table~\ref{xmix_phz_tab}), but the outlier fraction triples. Most of the outlier increase comes from the $R>23$ and $z>1.5$ subsets. (There are only two sources with $H>23$, and numerical results for that bin are meaningless.) Area~3 has a larger fraction of outliers than either of the other two Areas, though accuracy for the non-outliers is only slightly worse than in Areas~1 and~2 (Table~\ref{xmix_phz_tab}). Three effects probably contribute to the larger fraction of outliers. One is the shallower photometry at the border of the field (Fig.~\ref{zone}), which leads to larger photometric errors. Second, the X-ray coverage of the larger Area~3 is shallower, and thus the fraction of variable Type~1 AGNs is presumably higher.
The lack of variability correction will therefore have a larger effect. This is likely exacerbated by the third effect: having to use ground-based images rather than higher-resolution images for classifying sources as point-like or extended. In Area~1, about 30\% of sources are classified point-like using WFC3 but extended on a ground-based image, owing to the lower resolution of the ground-based images and their sensitivity to the presence of nearby sources. Using the extended-source template library rather than the point-like one for such sources would have doubled the outlier fraction. Furthermore, in order to identify possible outliers among the sources without spec-z, we examined the distribution of observed-frame X-ray luminosity as a function of redshift. In Figure~\ref{lx_z}, three sources with extreme apparent redshifts are probably outliers. They are located at the edge of the optical images and have unreliable or non-existent MUSYC photometry, leaving only six photometric data points (from the TENIS catalog). Photo-z with so few data points cannot be trusted. \begin{figure} \includegraphics [width=0.45\textwidth]{observed_Lx_z.png} \caption{Distribution of 0.5--8~keV observed-frame X-ray luminosity as a function of redshift for all X-ray sources. Redshifts are spec-z if available and otherwise photo-z. Red open circles indicate the three anomalous sources that have unreliable photo-z. }\label{lx_z} \end{figure} \section{Discussion}\label{discussion} \subsection{Photometric Redshift Accuracy Beyond the Spectroscopic Limit} \begin{figure*} \centering \includegraphics[width=\textwidth]{pair_stat.png} \caption{Distributions of photo-z differences for pairs of galaxies. In the upper panels, red dotted lines represent differences for random pairs and black solid lines represent differences for pairs having angular separation $<$15\arcsec. Only galaxies with $J<28$ are included. The lower panels show results for close pairs after subtracting the distributions for random pairs.
Black lines show the observed $\Delta z_p$, and red lines show a Gaussian fit with standard deviation $\sigma$ as indicated in each panel. The left two panels are for Area~1 and the right two for Areas~2+3. \label{pair_stat}} \end{figure*} Using spec-z to estimate photo-z accuracy (as in Tables~\ref{gal_phz_tab} and~\ref{xmix_phz_tab}) is not representative of sources fainter than the spectroscopic limit. \citet{qua10} introduced a method for estimating photo-z accuracy based on the tendency of galaxies to cluster in space. Because of clustering, galaxies seen close to each other on the sky have a significant probability of being physically associated and having the same redshift. Therefore the distribution of photo-z differences\footnote{The photo-z difference is defined as $\Delta z_p\equiv(z_{{p,1}}-z_{{p,2}})/(1+z_{\rm{mean}})$.} ($\Delta z_{{p}}$) of close pairs will show an excess at small redshift differences over the distribution for random pairs. This is seen in Figure~\ref{pair_stat}.\footnote{Random pairs also show a noticeable peak at small $\Delta z_{{p}}$. This is not due to any systematic we can identify and may be due to the known large scale structure \citep{cast07,sali09,deh14} in the field.} The excess for close pairs with magnitude $J< 28$ fits a Gaussian with standard deviation $\sigma =0.012$ in Area~1 and $0.010$ in Areas 2+3. Because the width includes the scatter from both paired galaxies, the photo-z uncertainty for an individual object should be $\sigma / \sqrt{2}$. These values are given in Table~\ref{pair_tab}.\footnote{ The close pair excess includes only objects with similar photo-z, so outliers are excluded in calculating $\sigma$ here.} The pair test reveals that the faint sources without spec-z have photo-z accuracy similar to that of sources bright enough to have spectroscopic data.
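The pair-based scatter estimate just described can be sketched in a few lines of Python (a minimal illustration with hypothetical function names; for simplicity it uses the sample standard deviation of $\Delta z_{p}$ in place of the Gaussian fit to the close-pair excess):

```python
import numpy as np

def pair_dz(z1, z2):
    """Photo-z difference for a pair: (z1 - z2) / (1 + z_mean)."""
    zmean = 0.5 * (z1 + z2)
    return (z1 - z2) / (1.0 + zmean)

def single_object_scatter(z1, z2):
    """Photo-z scatter of a single object estimated from physically
    associated pairs: the width of the pair-dz distribution divided by
    sqrt(2), since both members contribute scatter."""
    dz = pair_dz(np.asarray(z1, float), np.asarray(z2, float))
    return float(np.std(dz) / np.sqrt(2.0))

# Synthetic check: pairs at a common redshift 1.0 whose members each
# carry a per-object scatter of 0.01 in dz/(1+z) units
rng = np.random.default_rng(0)
z1 = 1.0 + 0.01 * 2.0 * rng.standard_normal(200_000)
z2 = 1.0 + 0.01 * 2.0 * rng.standard_normal(200_000)
print(round(single_object_scatter(z1, z2), 3))  # close to 0.01
```

In the actual analysis the width is obtained from a Gaussian fit to the close-pair excess after subtracting the random-pair distribution, which additionally suppresses outliers.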
\begin{table}[!h] \centering \caption{Photo-z Scatter from Pair Statistics\label{pair_tab}} \begin{tabular}{ccc} \toprule[1.5pt] &Area 1 &Areas 2+3 \\ \midrule $J < 25$ &0.008 & 0.007 \\ $J < 26$ &0.009 &0.007 \\ $J < 27$ &0.009 &0.007 \\ $J < 28$ &0.008 &0.007 \\ \bottomrule[1.5pt] \end{tabular} \tablecomments{\protect\parbox[t]{2.4in}{Table values $\sigma / \sqrt{2}$ are the estimated standard deviation of a single galaxy photo-z as derived from galaxy pairs in each magnitude range.}} \end{table} \subsection {The Impact of Intermediate-band Photometry} \label{impact_ib} \begin{figure} \includegraphics[width=0.45\textwidth]{IB_all.png} \caption{Distribution of photo-z minus spec-z. Histograms show $z_{p}-z_{s}$ distribution for all galaxies with spec-z in Area~1. Black line shows results for photo-z with IB photometry included, and hatched area shows results for the same galaxies with IB data omitted. \label{IB-all}} \end{figure} \begin{figure*} \includegraphics[width=1\textwidth]{IB_sb.png} \caption{Distribution of photo-z minus spec-z for star-forming galaxies selected by rest $BzK$ colors. Histograms show ($z_{\rm p}-z_{\rm s}$) distributions in various redshift bins as indicated above each panel. Black lines show the distributions for photo-z with IB photometry included, and blue areas show distributions for the same galaxies with IB photometry omitted. All data are from Area~1. \label{IB-sb}} \end{figure*} Previous work has shown the importance of IB photometry for photo-z, especially because IB data can show the presence or absence of emission lines in galaxy SEDs. For example \citet{ilb09} showed that including IBs improved photo-z accuracy from 0.03 to 0.007 for normal galaxies with $i^{+}<22.5$ in the COSMOS field. \citet{car10} found the same in the ECDFS. For AGNs, \citet{sal09} showed that for both extended and point-like X-ray sources in COSMOS, accuracies and outlier fractions were substantially better when IBs were included. 
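For reference, the accuracy, outlier, and bias statistics quoted throughout ($\sigma_{\rm NMAD}$, $\eta$, ${\rm bias}_z$) can be sketched as follows, assuming the conventional definitions (1.48 times the median absolute deviation of $\Delta z/(1+z_s)$ and an assumed outlier cut of $|\Delta z|/(1+z_s)>0.15$; the exact conventions used in this work are defined earlier in the text):

```python
import numpy as np

def photoz_metrics(zp, zs, outlier_cut=0.15):
    """Conventional photo-z quality statistics (assumed definitions):
    sigma_NMAD, outlier fraction eta, and median bias."""
    zp, zs = np.asarray(zp, float), np.asarray(zs, float)
    d = (zp - zs) / (1.0 + zs)                     # normalized residuals
    sigma_nmad = float(1.48 * np.median(np.abs(d - np.median(d))))
    eta = float(np.mean(np.abs(d) > outlier_cut))  # outlier fraction
    bias = float(np.median(d))
    return sigma_nmad, eta, bias

# Example: perfect photo-z except for one catastrophic outlier
zs = np.full(100, 1.0)
zp = zs.copy()
zp[0] = 2.0
print(photoz_metrics(zp, zs))  # (0.0, 0.01, 0.0)
```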
In the current data, the IB photometry is much shallower than the NIR data from CANDELS (Table~\ref{photometry}). To examine whether the shallow IB data are helpful or not in this case, we recomputed the Area~1 photo-z with exactly the same CANDELS-TFIT dataset \citep{guo13} used by \citet{dah13}, i.e., without IBs. Results are given in Table~\ref{CvsM}, and Figure~\ref{IB-all} compares the results with and without IBs. Without the IBs, the outlier fraction is 5\%, accuracy is 0.037, and $\mathrm{bias}_{z} = -0.010$. These are similar to the results of \citet{dah13} ``method 11H,'' which used the same code as this work. The negative value of $\mathrm{bias}_{z}$ indicates underestimation of photo-z on average. That results in lower galaxy luminosities and incorrect rest-frame colors. As discussed by \citet{ros13a}, these may lead to incorrect measurements of galaxy ages and stellar populations. The IB data improve the accuracy and mean offset substantially, creating a narrower and more symmetric peak of photo-z values around the spec-z (Fig.~\ref{IB-all}). Intermediate bands should be most important for objects that have strong emission lines in their spectra. Strong emission lines can arise either from vigorous star formation or an AGN. To quantify the effect, we applied (inverse) $BzK$ selection \citep{dad04} to define a sample of star-forming objects among those with reliable spec-z. In order to extend the selection to higher redshift, we applied the revised $BzK$ criterion as defined by \citet{guo13}.\footnote{The exact criteria were (1) $(z-K_{s}) > (B-z)-0.2$ in the redshift range $1.4<z<2.6$; (2) $(J-L) > 1.2\times(V-J)$ in the redshift range $2.4<z<3.6$; (3) $(H-M) > 1.375\times(i-H)$ in the redshift range $3.4<z<4.6$.
Symbols $B,\, V,\, i,\, z,\, J,\, H,\, K_{s},\, L,\, M$ refer to F435W, F606W, F775W, F850LP, F125W, F160W, ISAAC $K_{s}$, IRAC 3.6~$\mu \mathrm{m}$, IRAC 4.5~$\mu \mathrm{m}$, respectively.} Fig.~\ref{IB-sb} shows the resulting distributions of photo-z minus spec-z. At all redshifts, the distribution is more peaked and more symmetric around zero when IBs are included. \subsection {Impact of Emission Lines in the Templates} \label{impact_em} \begin{figure*} \includegraphics[width=1\textwidth]{EM_z.png} \caption{($z_{\rm p}-z_{\rm s}$) distribution in various redshift bins. The photo-z are computed using the $\rm{TFIT}_{\rm{CANDELS+IB}}$ photometric catalog and using the templates with (black solid lines) and without (red solid lines) emission line contributions. The upper panel shows that the emission lines are particularly useful at $z<1.5$. \label{deltaz}} \end{figure*} \citet{ilb09} demonstrated the importance of taking emission lines into account for photo-z. Including lines in the templates improved photo-z accuracy by a factor of 2.5 for bright ($i^{+}<22.5$) galaxies in the COSMOS field. The same effect is seen in the deeper $\rm{TFIT}_{\rm{CANDELS+IB}}$\ data, as shown in Figure~\ref{deltaz}. Although the outlier numbers remain similar ($\sim$4\%) whether or not emission lines are included in the templates, the distributions of ($z_{\rm p}-z_{\rm s}$) change. At $z<1.5$, including emission lines gives much narrower peaks and lower bias. At $z>1.5$, the improvement is less than at lower redshifts. Possible reasons are: (a) the contribution of the emission lines is diluted when observed in the NIR bands, which have broader bandwidths than optical bands; (b) the recipes for adding emission lines to the templates may be wrong for high-redshift galaxies; and/or (c) the IB data may be too shallow to affect the high-redshift (and therefore faint) sources.
However, even at $z>1.5$, the photo-z accuracy still shows a factor of 1.5 improvement ($\sigma_{\mathrm{NMAD}}$ decreasing from 0.032 to 0.021) when emission lines are included in the templates. \subsection{Testing Libraries for the X-ray Population} Because of the different X-ray populations in the 4Ms-CDFS and 250ks-ECDFS surveys, we adopted different libraries for point-like sources in Areas 1+2 and Area~3. For the sake of template comparison, we tried using the Area~3 (``S09'') library to calculate photo-z for point-like sources in Areas 1+2. The fraction of outliers increased from $5.3\%$ to $15\%$, and the accuracy became two times worse than that achieved with the preferred library. Even for $R<23$ sources, $\sigma_{\rm NMAD}$ went from 0.011 with the proper templates to 0.016 with the old ones. For $R>23$ galaxies, the deterioration was from 0.027 to 0.059. In Area~3, on the other hand, using the new templates instead of the S09 ones made the photo-z slightly worse: for $R<23$, $\sigma_{\rm NMAD}$ was 0.009 for the new and 0.008 for the S09 libraries. For $R>23$, accuracies were 0.025 and 0.017, respectively. Moreover, $\mathrm{bias}_{z}$ was $-0.014$ using the S09 library but worsened to $-0.031$ with the new library. The better performance of the S09 library in Area~3 can be understood because the population of point-like X-ray sources in the 250ks-ECDFS is similar to the {\it XMM}-COSMOS population, and the S09 library is more suitable for counterparts of such bright X-ray sources. \subsection{Impact of UV data} UV emission from accretion disks around supermassive black holes makes type~1 AGNs distinguishable from normal galaxies. Therefore including UV data in the photometry is crucial for SED fitting to obtain accurate photo-z and to decrease outliers for AGNs. To demonstrate this, we compared photo-z for AGNs obtained with and without photometry in the UV bands.
About $25\%$ of all the X-ray detected sources in Areas~1+2+3 have UV data available from GALEX. Among these, 221 sources have spectroscopy available and were used as our test sample. As expected, for the optically extended sources, where the host dominates the emission, there is very little difference in accuracy and fraction of outliers whether UV data are included or not. For 170 extended sources with spectroscopy, including UV data decreases $\sigma_{\mathrm{NMAD}}$ from 0.013 to 0.012 and $\eta$ from $5.9\%$ to $5.3\%$. In contrast, for the 51 point-like (i.e., AGN-dominated) sources, adding the UV data halves the number of outliers (from $23.5\%$ to $11.8\%$) though with only modest improvement in accuracy (from 0.013 to 0.011). Among the five remaining outliers (see Fig.~\ref{ptuv}), two are faint ($\mathrm{mag}> 23$) in the UV, and three are close to other sources with the UV flux blended in the 10\arcsec\ GALEX aperture. Deblending the GALEX photometry with TFIT as in the other bands could perhaps improve these cases. \begin{figure} \includegraphics[width=0.45\textwidth]{ECDFS_pt_UV.png} \caption{Comparison of photo-z with spec-z for X-ray sources with point-like counterparts. All 51 available sources in Areas 1+2+3 are plotted. Black dots indicate photo-z computed with UV data, and red squares indicate photo-z computed without UV data. \label{ptuv}} \end{figure} \section{Released Catalogs}\label{catalog} Tables~8 through 11 give homogeneously computed photo-z and related data for all sources detected in the area covered by the CANDELS/GOODS-S, CDFS, and ECDFS surveys.
For each source we also make available the redshift probability distribution function $P(z)$\footnote{The redshift probability distribution function is derived directly from the $\chi^{2}$: $P(z)\propto \rm{exp}\left ( -\frac{\chi ^{2}(z)-\chi ^{2}_{\rm{min}}}{2} \right )$; $1\sigma$ is estimated from $\chi^2(z)-\chi^{2}_{\rm{min}}= 1\,(68\%)$, and $2.3\sigma$ is estimated from $\chi^2(z)-\chi^{2}_{\rm{min}}= 6.63\,(99\%)$.}. With these data it is possible to construct figures like the inserts in Figure~\ref{SED_fitting}. Because of the large size of the $P(z)$ files, we provide them at the link \url{http://www.mpe.mpg.de/XraySurveys}. In lieu of the full $P(z)$, the catalogs provide a proxy in the form of the normalized integral of the main probability distribution $P(z_{{p}})\equiv 100\times \int P(z)dz$ with the integral over the range $z_{{p}} \pm 0.1(1+z_{{p}})$. A value close to 100 indicates that the photo-z value is uniquely defined. Smaller values imply that a wide range or multiple photo-z values are possible. Updated versions of the catalogs and templates will be available under [Surveys] $>$ [CDFS] through the portal \url{http://www.mpe.mpg.de/XraySurveys}. For the {\it Chandra} X-ray detections, the catalogs also provide a new compilation of X-ray source lists from the literature, the new optical/NIR/MIR associations, and the corresponding photometry. Catalog descriptions and excerpts are below. An entry of -99 indicates no data for that quantity. All coordinates are J2000. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{SED_fitting.png} \caption{ Two examples of SED fitting for source 797 (a normal galaxy) and source 16150 (an X-ray-detected AGN). The photometric points are shown in black. The red lines show the best-fitting template, and grey lines the best-fitting star (the latter a poor fit for both objects shown). In the right panel, the black line shows the second-best template.
Information about the templates---type, photo-z, extinction law, extinction value, number of bands, model identification, and $\chi^2$ of the fit---is given in the legends. Inserts show $P(z)$ for the sources. \label{SED_fitting}} \end{figure} \subsection{Cross ID reference table} Table~\ref{crossid} gives cross-IDs and positions for all sources within each Area as identified in Table~\ref{sets}. The table also indicates whether a source is a possible counterpart to an X-ray detection. \begin{table*}\footnotesize \begin{center} \caption{Column description of the cross ID reference catalog} \begin{tabular}{lll} \toprule[1.5pt] Column & Title & Description \\ \midrule 1 & [HSN2014] & Sequential number adopted in this work.\\ 2-4 & $\rm{ID_{C}}$ , $\rm{R.A._{C}}$, $\rm{Dec._{C}}$ & ID, right ascension and declination from the CANDELS-TFIT catalog (G13). \\ 5-7 & $\rm{ID_{M}}$ , $\rm{R.A._{M}}$, $\rm{Dec._{M}}$& ID, right ascension and declination from the MUSYC catalog \citep{car10}.\\ 8-10 & $\rm{ID_{T}}$, $\rm{R.A._{T}}$, $\rm{Dec._{T}}$ & ID, right ascension and declination from the TENIS catalog \citep{hsi12}.\\ 11-13 & $\rm{ID_{S}}$, $\rm{R.A._{S}}$, $\rm{Dec._{S}}$ & ID, right ascension and declination from the SIMPLE catalog \citep{dam11}.\\ 14-17 & $\rm{ID_{R13}}$, $\rm{R.A._{R13}}$, $\rm{Dec._{R13}}$, $\rm{PosErr_{R13}}$& ID, right ascension, declination and positional error from the R13 4Ms-CDFS catalog.\\ 18-21&$\rm{ID_{X11}}$, $\rm{R.A._{X11}}$, $\rm{Dec._{X11}}$, $\rm{PosErr_{X11}}$ & ID, right ascension, declination and positional error from the X11 4Ms-CDFS catalog.\\ 22-25 & $\rm{ID_{L05}}$, $\rm{R.A._{L05}}$, $\rm{Dec._{L05}}$, $\rm{PosErr_{L05}}$ & ID, right ascension, declination and positional error from the L05 250ks-ECDFS catalog. \\ 26-29 & $\rm{ID_{V06}}$, $\rm{R.A._{V06}}$, $\rm{Dec._{V06}}$, $\rm{PosErr_{V06}}$ & ID, right ascension, declination and positional error from the V06 250ks-ECDFS catalog. 
\\ 30 & Xflag & ``1" indicates that the source is the only possible counterpart to an X-ray source. \\ & & ``n" (2 or more) indicates that the source is one of the ``n" possible counterparts \\ & & for a given X-ray source. ``-99" indicates that no X-ray counterpart is found. \\ 31 & $p$ & Posterior value which indicates the reliability of the X-ray to optical/NIR/MIR association \\ && (as defined in Section~\ref{sec:matching_method}). \\ \bottomrule[1.5pt] \end{tabular} \label{crossid} \end{center} \end{table*} \subsection{X-ray source list in ECDFS} \begin{table*} \begin{center} \caption{X-ray source list} \begin{tabular}{ccccccccccccc} \toprule[1.5pt] [HSN2014] &$\rm{ID_{R13}}$ &$\rm{ID_{X11}}$ &$\rm{ID_{L05}}$& $\rm{ID_{V06}}$ & $\rm{R.A._{x}}$ &$\rm{DEC._{x}}$ &$\rm{Flux_{s}}$ &$\rm{Flux_{h}}$ &$\rm{Flux_{f}}$ & log $L_{\rm{s}}$ & log $L_{\rm{h}}$ & log $L_{\rm{f}}$\\ (1) & (2) & (3) & (4)& (5)& (6) & (7) & (8) & (9) &(10)&(11)&(12)&(13)\\ \midrule 125 & 343 &266 &-99 &-99 &53.079439 &-27.949429 &2.05E-16 &7.62E-16 &9.86E-16 &41.28 &41.85 &41.969 \\ 482 &6 &336 &-99 &-99 &53.103424 &-27.933357 &8.84E-16 &3.67E-15 &4.59E-15 &42.37 &42.99 &43.085\\ 47821 &-99 &-99 &527 &445 &53.251375 &-27.980556 &1.06E-15 &2.33E-15 &3.22E-15 &42.66 &43.00 &43.14\\ 50721 &-99 &-99 &32 &348 &52.842417 &-27.965417 &2.07E-15 &1.61E-14 &1.81E-14 &42.14 &43.03 &43.08\\ \bottomrule[1.5pt] \end{tabular} \label{xlist} \end{center} \end{table*} Table~\ref{xlist} gives the X-ray source list in Areas~1+2+3 with the position and flux information from the available catalogs.
Columns are as follows:\\ \noindent (1) [HSN2014]: Sequential number adopted in this work.\\ (2) $\rm{ID_{R13}}$: ID from the R13 catalog.\\ (3) $\rm{ID_{X11}}$: ID from the X11 catalog.\\ (4) $\rm{ID_{L05}}$: ID from the L05 catalog. \\ (5) $\rm{ID_{V06}}$: ID from the V06 catalog.\\ (6) $\rm{R.A._{x}}$: Right Ascension of the X-ray source.\\ (7) $\rm{DEC._{x}}$: Declination of the X-ray source.\\ (8) $\rm{Flux_{s}}$: Soft band X-ray flux ($\rm{erg~cm^{-2}~s^{-1} }$). \\ (9) $\rm{Flux_{h}}$: Hard band X-ray flux. \\ (10) $\rm{Flux_{f}}$: Full band X-ray flux. \\ (11) log $L_{\rm{s}}$: Soft band X-ray luminosity ($\rm{erg~s^{-1} }$). \\ (12) log $L_{\rm{h}}$: Hard band X-ray luminosity.\\ (13) log $L_{\rm{f}}$: Full band X-ray luminosity.\\ Note: For columns (6) to (10), we took the original X-ray data from, in order of priority, R13, X11, L05, and V06. \subsection{Photometry of X-ray sources} Table~\ref{xphotometry} gives photometry for all the possible counterparts to the X-ray sources. For the CANDELS area, this includes the TFIT photometry in the IBs as described in Section~\ref{optdata}. Columns are as follows:\\ \noindent (1) [HSN2014]: Sequential number adopted in this work.\\ (2)-(5) XID: IDs from the four X-ray catalogs, in the same order as in Table~\ref{xlist}. \\ (6) Xflag: As described in Table~\ref{crossid}. \\ (7) $p$: As described in Table~\ref{crossid}. \\ (8) $\rm{R.A._{opt}}$: Right Ascension of the optical/NIR/MIR source. \\ (9) $\rm{Dec._{opt}}$: Declination of the optical/NIR/MIR source. \\ (10)-(109): AB magnitude and the associated uncertainty in each of the possible bands (Table~\ref{photometry}).\\ \begin{table*} \begin{center} \caption{Photometry of X-ray sources} \begin{tabular}{cccccccccccccc} \toprule[1.5pt] [HSN2014] &XID &xflag &p &$\rm{R.A._{opt}}$&$\rm{Dec._{opt}}$&$\rm{FUV_{m}}$ &$\rm{FUV_{e}}$&$\rm{NUV_{m}}$ & $\rm{NUV_{e}}$ &... &...&$\rm{IRAC4_{m}}$&$\rm{IRAC4_{e}}$ \\ (1) & (2)-(5) & (6) & (7)& (8)& (9) & (10) &(11) &(12) &(13)&... &...
&(108)&(109)\\ \midrule 125 & ... &2 &0.98 &53.079489 &-27.948735 &-99.0 &-99.0 &-99.0 &-99.0 &... &... &19.888 &0.016\\ 482 & ... &1 &0.99 &53.103520 &-27.933323 &-99.0 &-99.0 &-99.0 &-99.0 &... &... &21.096 &0.03 \\ 47821& ... &2 &0.97 &53.252067 &-27.980645 &-99.0 &-99.0 &-99.0 &-99.0 &... &... &22.421 &0.18 \\ 50721& ... &2 & 1.0 &52.84249 &-27.965261 &-99.0 &-99.0 &-99.0 &-99.0 &... &... &19.24 &0.032\\ \bottomrule[1.5pt] \end{tabular} \label{xphotometry} \end{center} \end{table*} \subsection{Redshift catalog} \begin{table*}\footnotesize \begin{center} \caption{Redshift catalog} \begin{tabular}{ccccccccccccccccc} \toprule[1.5pt] [HSN2014] &$\rm{R.A._{opt}}$&$\rm{Dec._{opt}}$& $z_{s}$ &$Q_{\rm{zs}}$ & $z_{p}$ &$1\sigma^{\rm{low}}$ &$1\sigma^{\rm{up}}$ &$3\sigma^{\rm{low}}$ &$3\sigma^{\rm{up}}$ &$P(z_{\rm{p}}$)&$z_{\rm{p}}2$ &$P(z_{\rm{p}}2$)&$N_{\rm{p}}$&Mod&xflag &$p$ \\ (1) & (2) & (3) & (4)& (5)& (6) & (7) & (8) & (9) &(10)&(11)&(12)&(13)&(14)&(15)&(16)&(17)\\ \midrule 13 & 53.093452 & -27.957135 & -99.0 & -99 & 3.2619 & 3.25 & 3.27 & 3.21 & 3.29 & 100.0 & -99.0 & 0.0 & 29 & 328&-99&-99 \\ 14 & 53.104490 & -27.957068 & -99.0 & -99 & 2.1768 & 0.44 & 2.23 & 0.43 & 2.46 & 91.05 & 0.45 & 8.91 & 27 & 331&-99&-99 \\ 15 & 53.088446 & -27.956996 & -99.0 & -99 & 3.0468 & 3.01 & 3.09 & 2.92 & 3.18 & 100.0 & -99.0 & 0.0 & 26 & 322&-99&-99 \\ 16 & 53.104181 & -27.956592 & -99.0 & -99 & 3.1233 & 3.05 & 3.19 & 2.93 & 3.28 & 99.99 & -99.0 & 0.0 & 27 & 324&-99&-99 \\ 125 & 53.079490 & -27.94874 & 0.619 & 0 & 0.6664 & 0.66 & 0.68 & 0.65 & 0.68 & 100.0 & -99.0 & 0.0 & 24 & 028&2&0.98 \\ 135 & 53.142288 & -27.94447 & -99.0 & -99 & 2.6422 & 2.5 & 2.72 & 1.11 & 3.07 & 66.19 & 1.17 & 13.80 & 27 & 014&1&0.95 \\ \bottomrule[1.5pt] \end{tabular} \label{phz_cat} \end{center} \end{table*} Table~\ref{phz_cat} gives photo-z results for all sources detected in the CANDELS/GOODS-S, CDFS and ECDFS area. X-ray detections are flagged in the catalog. 
Columns are:\\ \noindent (1) [HSN2014]: Sequential number adopted in this work.\\ (2) $\rm{R.A._{opt}}$: Right Ascension of the optical/NIR/MIR source.\\ (3) $\rm{Dec._{opt}}$: Declination of the optical/NIR/MIR source.\\ (4) $z_{\rm{s}}$: Spectroscopic redshift (N. Hathi, private communication). \\ (5) $Q_{\rm{zs}}$: Quality of the spectroscopic redshift (0=High, 1=Good, 2=Intermediate, 3=Poor).\\ (6) $z_{\rm{p}}$: The photo-z value as defined by the minimum of $\chi^2$.\\ (7) $1\sigma^{\rm{low}}$: Lower $1\sigma$ value of the photo-z.\\ (8) $1\sigma^{\rm{up}}$: Upper $1\sigma$ value of the photo-z.\\ (9) $3\sigma^{\rm{low}}$: Lower $2.3\sigma$ value of the photo-z.\\ (10) $3\sigma^{\rm{up}}$: Upper $2.3\sigma$ value of the photo-z.\\ (11) $P(z_{{p}})$: Normalized area under the curve $P(z)$, computed between $z_{\rm{p}} \pm 0.1(1+z_{{p}})$.\\ (12) $z_{{p}}2$: The second photo-z solution, reported when $P(z_{{p}2})$ is above 5. \\ (13) $P(z_{{p}2})$: Normalized area under the curve $P(z)$, computed between $z_{{p2}} \pm 0.1(1+z_{{p2}})$.\\ (14) $N_{\rm{p}}$: Number of photometric points used in the fit.\\ (15) Mod: Template number used for SED fitting. 1-48 are the templates from Lib-EXT; 101-130 are the templates from Lib-PT; 201-230 are the templates from S09; 301-331 are the templates from \citet{ilb09}, in the same order as used by those authors.\\ (16) Xflag: As described in Table~\ref{crossid}. \\ (17) $p$: Posterior value which indicates the reliability of the X-ray to optical/NIR/MIR association (as defined in Section~\ref{sec:matching_method}). \\ \section{Summary} \label{summary} The main product of this work is photometric redshifts for all sources detected in the CANDELS/GOODS-S, CDFS, and ECDFS area, a total of 105150 sources.
This work has improved upon the prior catalogs of G13, \citet{car10}, and \citet{hsi12} by using the most up-to-date photometry and SED template libraries, including separate libraries for X-ray sources of different characteristics. Probabilities of association between X-ray sources and optical/NIR/MIR sources are also provided. Our work has improved photo-z in these fields in the following ways: 1. In the CANDELS area, we added the IB photometry from {\it Subaru} \citep{car10} to the space-based photometric catalog of \citet{guo13} using the same TFIT parameters as in the official CANDELS catalog. The combined effect of using IB photometry to pinpoint emission lines in the objects and including lines in the templates gives excellent results, even for faint and high-redshift sources (Table~\ref{gal_phz_tab} and Table~\ref{xmix_phz_tab}). 2. Using homogeneous data from the CANDELS/$H$-band, TENIS/$J\&K$, MUSYC/$BVR$, and IRAC-3.6$\mu \mathrm{m}$-selected catalogs, we made X-ray to multi-wavelength associations simultaneously by means of a new, fast matching algorithm based on Bayesian statistics. This gave $98\%$, $96\%$, and $94\%$ of X-ray sources with reliable counterparts in Areas~1, 2, and~3, respectively. Despite the new technique and data, all but 7 associations are consistent with those found earlier by X11. The 7 new associations come from the deep, high-resolution CANDELS images and TENIS images that were not available earlier. Different X-ray reduction procedures can change the X-ray position by a few arcseconds; in crowded areas this may imply a different X-ray to optical association. 3. We demonstrated that the X-ray properties of sources need to be taken into account when constructing the library of templates for computing photo-z for such sources. More specifically, the library defined by \citet{sal09,sal11} for the rare X-ray bright sources detected in COSMOS is not representative of the faint X-ray source population detected in the deeper 4Ms-CDFS.
We therefore defined a new library of galaxy/AGN hybrids for the 4Ms survey (Areas~1+2). In the 250ks survey (Area~3), where the X-ray data have a depth similar to {\it Chandra}-COSMOS, the \citet{sal09} template library with the \citet{sal11} selection strategy works well. \acknowledgments We are grateful to the referee for constructive comments and to Olivier Ilbert for help with {\it LePhare}. This work was supported by program number HST GO-12060, provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. We also acknowledge the use of the TOPCAT tool \citep{tay05}. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. M.\ Brusa acknowledges support from the FP7 Career Integration Grant ``eEASy'' (CIG 321913). Facilities: {\it HST} (ACS/WFC3), {\it Spitzer} (IRAC), {\it Chandra}, {\it Subaru}, {\it GALEX}
\section{Introduction}\label{intro} Unruh-DeWitt (UDW) detectors originated as a thought experiment by Unruh \cite{Unruh1976Notes} (later extended by DeWitt \cite{DeWitt1979quantum}) to model an accelerating qubit in a vacuum. Unruh showed that an accelerating observer would view the ground state of a quantum field as a mixed state and see a loss of quantum information hidden behind the Killing horizon \cite{carroll2004spacetime}. Today, there are parallel efforts to utilize UDW detectors to advance science in cosmology, high energy, and condensed matter physics. Cosmologists use them to model information in highly accelerated frames of reference (e.g., in and around black holes); high-energy theorists use them without acceleration to study quantum information flow via quantum fields, a field called relativistic quantum information \cite{Bruschi2010Unruh, Simidzija2020Transmission, Simidzija2017Nonperturbative, Simidzija2018General, Martin-Martinez2015Causality, Toja2021What, Simidzija2017All}; and condensed matter experimentalists use UDW detectors such as nitrogen-vacancy centers or superconducting interference devices, calling the detectors ``quantum sensors,'' to sensitively detect electromagnetic fields produced by a wide variety of systems, from quantum materials to systems outside of condensed matter like cancer cells. But currently, experimentalists demand much less from the UDW detectors than theorists, having yet to use them to study the flow of quantum information in complex systems. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Figures/Cartoon_Quantum_Computer.png} \label{Cartoon Quantum Computer} \end{figure} A feature theorists require of UDW detectors is the ability to turn their coupling to their environment on and off rapidly. Consider a spin qubit coupled to a one-dimensional wire, known as a Kondo impurity. It is capable of sensing the flow of small currents in the wire.
Turning the Kondo coupling on and off rapidly, however, turns it into an emitter that sends signals through the wire, signals to be picked up by another such spin qubit acting as a detector. The net result of this communication amounts to a quantum gate that acts unitarily on the combined qubit-wire system. In Fig.~\ref{fig:all-to-all}, we take this idea to the next level: the design of an all-to-all connected solid-state quantum computer where gates can be applied to distant qubits, enabled by communication via quantum coherent wires. The possibility that a quantum wire could achieve such communication dates back at least to 2007 \cite{trauzettel2007spin,oskin2003building}. Fig.~\ref{Cartoon Quantum Computer} provides a simple look at such a quantum computer, in which natural scalability is a feature that preserves all-to-all quantum communication. Control over timing, therefore, enables the UDW detector to emit and receive quantum information, with clear benefits to technology. In addition to practical applications in quantum computing and communication, timing-controllable UDW detectors would allow the study of complex quantum systems in a new way. Careful construction of quantum information channels through these systems allows for a deeper understanding of their quantum properties without directly carrying out projective measurements on the system or inferring them through measurements of expectation values. Simulating channel capacity, for example, would show how entanglement spreads within them and how quantum information becomes scrambled. If we could similarly study physical systems, we could directly probe their entanglement properties. Studying quantum information channels into and out of complex quantum systems provided by UDW detectors will also enable these systems to become part of quantum technology.
For example, a system described by a quantum field could become a component in a quantum computer that can carry out computations (these fields are known as flying qubits). The grand application of such a computer would then be to simulate quantum field theory, a task long predicted to be a consequence of quantum computing \cite{Preskill2018quantum,preskill2018simulating, Marchall2015Quantum,matchev2020quantum,jordan2019quantum}. Such a simulator, for example, could simulate Dirac fermions directly without needing to overcome the fermion doubling problem caused by discretization. In the near term, we expect a quantum bus of this style to be better equipped at enhancing error-correcting codes by exploiting all-to-all connectivity. So we see that timing-controlled UDW detectors will have both fundamental and technological applications. In this paper, we propose experimentally feasible designs of timing-controlled UDW detectors: spin qubits coupled to Luttinger liquids. To accomplish this, we combine the formalism of UDW detectors with the bosonization of Luttinger liquids and thereby propose a laboratory setting to explore quantum information flow through quantum fields. Using the results of Simidzija et al., we show that a non-zero channel capacity exists in this arena. Furthermore, we provide a library of Hamiltonians that characterize the qubit-field quantum transduction under these constraints, bolstering the natural viability of quantum electronics/communication through these means. 
Additionally, we present some experimental challenges to realizing the study of quantum information flow via UDW detectors coupled to quantum materials in three contexts: HgTe/CdTe heterostructures in the quantum spin Hall phase \cite{Egger2010Helical,Menezes2016conformal}, graphene spin qubits \cite{chang2003chiral, bockrath1999luttinger, yao1999carbon, ishii2003direct, lee2004real}, and moiré transition metal dichalcogenides (TMDs) in either the anomalous or spin quantum Hall regime \cite{li2021quantum, Wang2021Transport}. We close the letter by examining some future theoretical work, outlining the vast area of information research this work identifies. \section{Quantum Information Flow in Quantum Materials} The trend for scaling up quantum computing consists of larger and larger numbers of qubits carrying out linear operations over longer length scales. This seems natural, as error-correcting codes are more accurate with more qubits \cite{Roffe2019Quantum, Laflamme1996perfect, Knill2000theory, knill1999Quantum}. However, topological systems such as the fractional quantum Hall effect could provide a quantum bus of all-to-all connected qubits, which offers a robust error-correcting code \cite{Preskill2018quantum} that scales at least like Fig.~\ref{Cartoon Quantum Computer}. The all-to-all communication channels provide situational error-correcting codes based on stabilizer codes \cite{Gottesman1997Quantum}, and the periodic condition of our quantum bus resolves length scales. The most important question is thus: how well does this system process quantum information? Undertaking this task involves devising quantum channels displaying maximal channel capacity. \subsection{Devising the Quantum Channel} Quantum channels present the necessary formalism to analyze entanglement propagation through a quantum circuit \cite{Wilde2017From, Gyongyosi2018survey}. Recently, high-energy physicists have made progress investigating quantum channels as they exist between fields and qubits.
As mentioned in the introduction, they use the UDW detector formalism, which utilizes a smearing function to spread a two-dimensional Hilbert space onto an infinite-dimensional space. This formalism carries a series of complications, such as limitations due to the No-Cloning theorem and information spreading due to Huygens' principle in spatial dimensions higher than one \cite{Simidzija2020Transmission, Jonsson2018Transmitting}. In this regard, an analogy to classical wireless communication is not possible. Instead, advancing quantum devices such as quantum wires may prove more profitable. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Figures/UDWQC_Channel.pdf} \caption{Quantum coherent information is a measure of quantum information flowing through a quantum channel. To compute it, we use the above circuit diagram. Initially, Qubit A is entangled with Qubit C in a Bell state. Then Qubit A is coupled to the field $\phi$ through $U_A$, and later Qubit B, prepared in $\ket{0}_B$, is similarly coupled through $U_B$. Finally, the entanglement is measured between Qubit B and Qubit C. If these qubits are entangled, then the coherent information is positive and quantum information flowed through the noisy channel.} \label{fig:coherentinfo} \end{figure} To see this flow of entanglement through quantum wires, one must carefully construct unitary gates for which we can evaluate the quantum coherent information, a figure of merit for the channel analogous to mutual information in classical information theory. It is computed using the circuit shown in Fig. \ref{fig:coherentinfo}. The construction of candidate unitaries, such as $U_A$ and $U_B$ in Fig. \ref{fig:coherentinfo}, needed for our quantum channel, is done carefully in section \ref{Coupling Qdots and Helical Luttinger Liquids}. For now, we will outline some design constraints that enable these unitaries to successfully transport quantum information.
Simidzija et al., as well as others, have recently laid the groundwork for models that successfully outline the necessary conditions of just such a channel\cite{Simidzija2020Transmission, Simidzija2018General, Simidzija2017Nonperturbative, Hummer2016renormalized}. They have shown that UDW unitaries which behave as controlled unitary gates lead to entanglement-breaking channels when applied alone. However, carefully applying two of these rank-one unitaries breaks the controlled-gate structure of the circuit, allowing them to properly encode (decode) information from a spin structure onto a scalar bosonic field. In other words, the channel created by gates with these unitaries can have a positive-valued coherent information that scales with coupling and smearing parameters. More elusive is a schematic for strongly coupled fermionic systems using UDW detectors. We find that through the bosonization of our Luttinger liquid, we can create different approaches to solve this problem. Section \ref{Coupling Qdots and Helical Luttinger Liquids} aims to provide a library of these gates which enable channels with non-zero capacity. Furthermore, we claim that careful parameter selection can theoretically create a near-perfect channel. \subsection{Design parameters governing channel performance} There are many parameters governing a field-mediated quantum channel between two qubits. We can separate the channel into two components: gates between the qubit and field, and the propagation pathway the quantum information travels along within the field itself. Generally, a gate between a qubit and a field is governed by a coupling function $J(x,t)$. We can break this function into three factors, as is common in the relativistic quantum information literature. One is a ``switching'' function $\chi(t)$ normalized to $\int dt \, \chi(t) = 1$ that turns the gate on and off. Another is a smearing function $p(x,y)$ that couples a qubit at $x$ to the field at various points $y$.
It too is normalized, with $\int dy \, p(x,y)=1$. Ideally, both $\chi(t)$ and $p(x,y)$ are non-negative functions. Presumably, $p(x,y)$ is non-zero only inside the quantum dot (Qdot) hosting the qubit. Finally, there is the overall dimensionless strength of the coupling $J$. Hence the coupling function $J(x,t)$ is naturally parameterized by this strength $J$, a switching time $t_{sw}$, and a smearing length $\lambda_s$. For our models in Section \ref{Coupling Qdots and Helical Luttinger Liquids}, we use a Dirac delta-like switching function with $t_{sw}=0$, a mathematically convenient but physically impossible situation. It allows the gate to produce a change in the field that remains perfectly localized within the smearing length. For $t_{sw}>0$, during the action of the gate information will spread away from the qubit at the velocity $v$, the effective speed of light governing the relativistic field. Hence, if we have $t_{sw} < \lambda_s/v$, the effect of the gate will remain nearly localized within the smearing length, and a physically realizable gate will behave similarly to our idealized gates. Given $\lambda_s$, $t_{sw}$, and $v$, we now have two dimensionless parameters governing the design of a gate: the coupling strength $J$ and the \emph{gate-localization quality} $Q_{loc}=t_{sw}v/\lambda_s$. A small $Q_{loc}$ is a \emph{design constraint} for a UDW quantum computer. If it is too large, quantum information will spread over large distances during the action of the gate, making it difficult to recapture. A small $J$, however, implies the gate has little effect. Hence, for good channel performance, we will want gates with a small $Q_{loc}$ and large $J$. With gates suitably built, it is up to the field to propagate the quantum information between the qubits. As the information propagates, it is subject to scrambling by interaction effects\cite{Swingle2016measuring, Shenker2014Black, Sekino2008Fast}.
Measurements on the target qubit may detect the onset of quantum chaos caused by the system's inherent disorder\cite{Gharibyan2020characterization, Landsman2019verified, Swingle2016measuring}. Similarly, if the information runs into a magnetic impurity acting like an uncontrolled qubit, it may be stolen by it and never reach the intended qubit \cite{Bellucci2005Magnetic}. The information could also be taken away by phonons and spread throughout the host material\cite{Yu2002Phonon}. So this intermediate stage is simultaneously an opportunity to study the physics of the host quantum material at a quantum information level and a constraint on the performance of the communication. Given the above design constraints, we next turn to the question: in ideal circumstances, does a perfect communication channel exist for qubits coupled to Luttinger liquids? \section{Coupling Quantum Dots and Luttinger Liquids to create Unruh-DeWitt Detectors} \label{Coupling Qdots and Helical Luttinger Liquids} \subsection{Dirac fermions meet qubits} UDW detector models of massless Dirac fermions have been a promising avenue for relativistic quantum information \cite{Louko2016Unruh, louko2016unruhdewitt, Hummer2016renormalized}. However, scalar field theories have been more successful in producing simulations of quantum information channels in relativistic quantum information \cite{Hummer2016renormalized, Simidzija2020Transmission, Simidzija2018General}. In this section, we aim to show that bosonization provides a convenient bridge between these two approaches. UDW detectors are commonly used to couple a two-level system to a field \cite{Toja2021What}. In non-perturbative theories, they are most readily studied if the coupling is linear. In the case of a helical Luttinger liquid (HLL), the field is a 1+1 dimensional Dirac fermion, but a linear coupling to a Dirac fermion would violate fermion number conservation\cite{Hummer2016renormalized}.
Fortunately, when quadratically coupled to this field, the interaction can be modeled as linearly coupled to a bosonic field, \emph{through bosonization}. Hence, bosonization is a powerful method to aid the analysis of UDW detectors in Luttinger liquids. To describe UDW detectors in HLLs, we need to define a physical two-level system and find its coupling to the HLL. The redundancy in chiral indices of an HLL model gives us a ``spinless-like'' model \cite{geissler2017transport,giamarchi2003quantum}, but spin is still physically present. Spin-up travels one way around the edge of the system, our $+$ mover, while spin-down travels the other, our $-$ mover. So the simplest two-level system would be a spin qubit with a finite spatial extent described by a smearing function $p$ that we take to be approximately Gaussian, \begin{equation} p(x,y) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(y-x)^2}{2\sigma^2}}. \end{equation} If the spin qubit is a single atom, it would presumably have a $\sigma$ of order the size of the atom. But if synthesized as a quantum dot, we assume it is larger in extent, with $\sigma \gg k_F^{-1}$. Introducing a switching function $\chi(t)$ that captures our control over the coupling between the qubit and field through a Dirac delta type structure yields a UDW detector Hamiltonian with a z-component Kondo interaction, \begin{flalign} \label{basic density hamiltonian} \mathrm{H}_{int}(t) &= \chi(t) \int_{\mathbb{R}} dy \ p(x(t),y) J_{\alpha,z}\mu_{\alpha}(t) (\psi^{\dagger}_+\psi_+ - \psi^{\dagger}_-\psi_-). \end{flalign} Here $\psi_-$ ($\psi_+$) denotes our spin-down left-moving (spin-up right-moving) fermions. This interaction term assumes the Qdot limit, where factors of $e^{\pm2ik_Fy}$ have been suppressed as we restrict $\sigma \gg k_F^{-1}$ \cite{Yevtushenko2015Transport}. We will return to this assumption in section \ref{A Library of Gates}.
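The Qdot-limit suppression invoked above can be checked numerically: against the normalized Gaussian smearing, the $2k_F$-oscillating (back-scattering) factors average to $e^{-2(k_F\sigma)^2}$, which is negligible once $\sigma \gg k_F^{-1}$. A minimal sketch in Python (the parameter values are illustrative, not tied to any experiment):

```python
import math

def p(y, x=0.0, sigma=3.0):
    """Gaussian smearing profile of a Qdot centered at x."""
    return math.exp(-(y - x) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

sigma, k_F, h = 3.0, 1.0, 0.01                  # illustrative values, sigma * k_F = 3
grid = [-10 * sigma + i * h for i in range(6001)]   # uniform grid over [-10 sigma, 10 sigma]

# normalization: int dy p(x, y) = 1
norm = h * sum(p(y, sigma=sigma) for y in grid)
assert abs(norm - 1.0) < 1e-6

# the e^{2 i k_F y} factor averages to exp(-2 (k_F sigma)^2) against the Gaussian
osc = abs(h * sum(p(y, sigma=sigma) * complex(math.cos(2 * k_F * y),
                                              math.sin(2 * k_F * y)) for y in grid))
assert abs(osc - math.exp(-2 * (k_F * sigma) ** 2)) < 1e-10
```

Increasing $\sigma k_F$ drives the back-scattering weight down as a Gaussian in the product, which is what justifies dropping the $e^{\pm 2ik_F y}$ terms in the Qdot limit.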
Since this model is a Kondo model, some may recognize our two-level system as the Kondo impurity given by, \begin{flalign} J_{\alpha,z} \mu_{\alpha}(t) &= J_{xz}S_x(t)+J_{yz}S_y(t)+J_{zz}S_z(t)\\ &= \frac{J}{2}(S_+ e^{-i\Omega t} + S_-e^{+i\Omega t}) \equiv J\mu(t), \end{flalign} where the second equality follows by choosing $J_{\alpha,z} = J\hat X_\alpha$ to point along a new $\hat{X}$ direction, with time dependence generated by the Hamiltonian $H_0 = -g\mu_B \vec B \cdot \vec S\equiv \Omega S_Z$ and $\hat Z$ perpendicular to $\hat X$. We have added the magnetic field to show that this is a two-level system. This formulation takes the usual convention of an HLL and implants it into the UDW model to describe a quantum channel between distant spin qubits communicating via Dirac fermions. It allows for an elegant promotion to a quantum circuit model by constructing the Hamiltonian in the following way \begin{equation}\label{unitary gatre fermion operators} \mathrm{H}_{int,r}(t) = J\chi(t) \mu(t) \otimes O_{\nu r}^{\dagger} O_{\nu r}, \end{equation} where $\nu$ specifies the interacting spin qubit ($\nu \in\{A,B\}$ in Fig. \ref{Cartoon Quantum Computer}), $r\in\{\pm\}$, and we absorb the smearing functions into the field observables $O_{\nu r}$, as is common in the detector literature. The delta-like switching function $\chi(t)$ can be seen as the instance of time $t_{\nu}$ of interaction for qubit $\nu$, such that we can express the coupling as $J_{\nu}=J\chi(t_{\nu})$. We can then construct a set of unitary operators $U_A$ and $U_B$ that take the form \begin{equation} \label{SimpleUnitary} U_{\nu} = \exp{(-iJ_{\nu}\mu_{\nu}(t)\otimes O_{\nu r}^{\dagger}O_{\nu r})}. \end{equation} It is crucial that a channel utilizing this gate (depicted in the circuit diagram of Fig. \ref{UDWQC}) can propagate the entanglement.
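As a quick sanity check on the algebra, $\mu(t)=\frac12(S_+e^{-i\Omega t}+S_-e^{+i\Omega t})$ is Hermitian and equals $S_x\cos\Omega t + S_y\sin\Omega t$, a spin component precessing in the plane perpendicular to $\hat Z$ at the Larmor frequency. A minimal numerical verification (the value of $\Omega$ is illustrative):

```python
import cmath
import math

Omega = 1.7   # illustrative Larmor frequency (hbar = 1)

def mu(t):
    """mu(t) = (1/2)(S+ e^{-i Omega t} + S- e^{+i Omega t}) as a 2x2 matrix,
    using S+ = [[0,1],[0,0]] and S- = [[0,0],[1,0]]."""
    return [[0.0, 0.5 * cmath.exp(-1j * Omega * t)],
            [0.5 * cmath.exp(1j * Omega * t), 0.0]]

Sx01, Sy01 = 0.5, -0.5j   # upper off-diagonal entries of Sx and Sy
for t in (0.0, 0.4, 1.3):
    m = mu(t)
    # Hermitian: the off-diagonal entries are complex conjugates
    assert abs(m[0][1] - m[1][0].conjugate()) < 1e-12
    # equals Sx cos(Omega t) + Sy sin(Omega t)
    target = math.cos(Omega * t) * Sx01 + math.sin(Omega * t) * Sy01
    assert abs(m[0][1] - target) < 1e-12
```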
If we encode information onto quantum fields through gate $U_A$ at time $t_A$, we need to decode that information effectively through $U_B$ at some later time $t_B$. Such strongly coupled processes have not been explored in the literature. Bosonization, as mentioned above, offers a mapping from our low-energy fermionic operators to low-energy bosonic operators and opens us up to processes well understood in the UDW literature. \begin{figure} \centering \begin{quantikz} \lstick{$\ket{S_A}$} & \gate[2, cwires={2}][1cm]{U_A}& \qw & \qw \\ \lstick{\Large{$\rho_{\psi}$}} & & \gate[2, cwires=1][1cm]{U_B} & \cw \\ \lstick{$\ket{S_B}$} & \qw & & \qw \end{quantikz} \caption{$\ket{S_i}$ is the state of the Qdot and $\rho_{\psi}$ represents the state of the left and right chiral fermions.} \label{UDWQC} \end{figure} \subsection{Bosonizing the Hamiltonian}\label{Bosonizing the Hamitonian} As mentioned in section \ref{intro}, our goal here is to evaluate whether a quantum channel through our system has a non-zero channel capacity. One method to explore this quality is to find whether the states are separable at any given point in the circuit. If separable states exist, then we are left with classical information. The channel is then said to be a ``quantum-to-classical'' channel (i.e., the channel is entanglement breaking \cite{Wilde2017From}). As mentioned previously, the literature points us in a direction where non-zero capacity quantum channels can be realized through unitary gates. However, the fields employed for strongly coupled processes are bosonic. Careful construction of bosonization allows one to replace fermionic operators with a bosonic counterpart, emphasizing the bosonic properties of relativistic Dirac fermions \cite{shankar2017Bosonization}. Working in the interaction picture (as we do above) is enabled in bosonization by multiplying the fermionic creation and annihilation operators by a phase of $e^{\pm iv|k|t}$, so long as our left- and right-moving bosons in eq.
\ref{left and right bosons} are massless\cite{senechal1999introduction}. Below, we introduce a quick primer on the bosonization process, while mostly remaining in the interaction picture. When bosonizing our fermionic fields, we utilize the interaction picture and consider the variables $z=-i(x-vt)$ and $\bar{z}=i(x+vt)$. Without this, the point-splitting process used to evaluate the bosonized forms of our fermionic fields will artificially eliminate necessary terms. At the level of the unitary gates, we can consider the Schr{\"o}dinger picture, as our switching function is of delta form. Standard bosonization procedures \cite{senechal1999introduction,giamarchi2003quantum,shankar2017Bosonization,schulz2000fermi} yield bosonized fermionic operators of the form, \begin{equation} \psi_+ \rightarrow \frac{1}{\sqrt{2\pi}} e ^{-i\sqrt{4\pi}\phi(z)}, \ \psi_- \rightarrow \frac{1}{\sqrt{2\pi}} e ^{i\sqrt{4\pi}\bar{\phi}(\bar{z})}, \end{equation} where $\phi(z)$ is a real scalar field given in the interaction picture by, \begin{flalign}\label{left and right bosons} \phi(z) &= \int_{k>0} \frac{dk}{2\pi} \frac{1}{\sqrt{2k}}[b(k)e^{-kz}+b^{\dagger}(k)e^{kz}] \\ \bar{\phi}(\bar{z}) &= \int_{k>0} \frac{dk}{2\pi} \frac{1}{\sqrt{2k}}[\bar{b}(k)e^{-k\bar{z}}+\bar{b}^{\dagger}(k)e^{k\bar{z}}]. \end{flalign} Consistent with the literature, these fields are a natural result of the mode expansions, \begin{flalign} \varphi(x) &= \int \frac{dk}{2\pi} \sqrt{\frac{v}{2\omega(k)}}[b(k)e^{ikx}+b^{\dagger}(k)e^{-ikx}]\\ \Pi(x) &=\int \frac{dk}{2\pi} \sqrt{\frac{\omega(k)}{2v}}[b(k)e^{ikx}+b^{\dagger}(k)e^{-ikx}] \end{flalign} which are related to $\phi$ through a dual boson $\vartheta$ by $\phi = \frac{1}{2}(\varphi + \vartheta)$ and $\bar{\phi}= \frac{1}{2}(\varphi - \vartheta)$.
Under this formalism, the normal-ordered density operator becomes $ \rho_- = \colon \psi^{\dagger}_-\psi_- \colon=- \frac{i}{\sqrt{\pi}}\partial_{\bar{z}}\bar{\phi}$, which allows us to rewrite our Hamiltonian linearly as, \begin{flalign}\label{seperate field bosonized hamiltonian} \mathrm{H}^{B}_{int}(t) &= J_{\nu} \int_{\mathbb{R}} dy \ p(x(t),y) \mu(t)(\frac{1}{\sqrt{\pi}}(\partial_z\phi+\partial_{\bar{z}}\bar{\phi})) \\ \label{Bosonized simple density}&=J_{\nu} \int_{\mathbb{R}} dy \ p(x(t),y) \mu(t)(\frac{1}{\sqrt{\pi}}\Pi). \end{flalign} We can see here that the right- and left-movers can be combined into a single equation that provides us with a simple rank-one unitary gate. After including the smearing function into our conjugate momentum field $\Pi$, equation \ref{SimpleUnitary} becomes \begin{equation} \label{SimpleUnitary 1} U_{\nu} = \exp{(iJ_{\nu}\mu_{\nu}\otimes \Pi_{\nu})}. \end{equation} This simple rank-one unitary is convenient for transferring information onto and off of the field, but to build a quantum channel that does not break entanglement we need more. \subsection{A Library of Gates} \label{A Library of Gates} \subsubsection{Simple Rank One Unitary Gates} We have in eq. \ref{SimpleUnitary 1} the first gate of our quantum computer. If we consider a channel that connects Qubit A directly to Qubit B (as described in Fig. \ref{Cartoon Quantum Computer}a), then in essence we have created a channel that at some point processes classical information. A ``measurement'' of sorts takes place \cite{Wilde2017From}. In order to effectively transfer entanglement onto and off of our fields we need, minimally, two rank-one simple unitaries \cite{Simidzija2020Transmission,Simidzija2018General}, \begin{equation} \label{two simple rank one unitaries} U_{\nu} = \exp{(iJ_{\nu 2}\mu_{\nu 2}\otimes O_{\nu 2})}\exp{(iJ_{\nu 1}\mu_{\nu 1}\otimes O_{\nu 1})}. \end{equation}
The first naive attempt at finding a relationship like this in our setup may be understood better if we ignore the time-reversal symmetry of our system and instead write a new combination of field densities. To do this, let us revisit the relationships discussed in section \ref{Bosonizing the Hamitonian}, \begin{equation} \begin{aligned} \rho_+ &= \colon \psi^{\dagger}_+\psi_+ \colon= \frac{i}{\sqrt{\pi}}\partial_z\phi\\ \rho_- &= \colon \psi^{\dagger}_-\psi_- \colon=- \frac{i}{\sqrt{\pi}}\partial_{\bar{z}}\bar{\phi}, \end{aligned} \end{equation} which provide us with two field densities that are bosonized in the following way, \begin{equation} \begin{aligned} \rho &= \rho_++\rho_-=\frac{1}{\sqrt{\pi}}\partial_x\varphi\\ j &= \rho_+-\rho_-= \frac{1}{\sqrt{\pi}}\Pi. \end{aligned} \end{equation} Notice now that if we choose our coupling carefully, we can craft an interaction Hamiltonian whose bosonized form is, \begin{multline} \label{naive} \mathrm{H}^{Naive}_{int}(t) = \int_{\mathbb{R}} dy p(x(t),y) (J_{+} \mu_+(t)\rho \\+ J_{-} \mu_-(t)j) \end{multline} \begin{multline} \label{naive 2} = \int_{\mathbb{R}} dy p(x(t),y)\frac{1}{\sqrt{\pi}} (J_{+}\mu_+(t)\partial_x\varphi \\ + J_{-} \mu_-(t)\Pi) \end{multline} and yields a gate similar to eq. \ref{two simple rank one unitaries}, namely, \begin{equation} \label{two simple rank one unitaries 1} U_{\nu} = \exp{(iJ_{-}\mu_{-} \otimes \Pi)}\exp{(iJ_{+}\mu_{+}\otimes \partial_x\varphi)}. \end{equation} \subsubsection{Chiral Luttinger Liquid Gates} For the naive Hamiltonian we were explicit about breaking our time-reversal symmetry. The remainder of the gates in our library preserve this symmetry. Consider, for example, our Hamiltonian from eq. \ref{basic density hamiltonian}.
We can rewrite this with separated left- and right-mover channels, \begin{multline} \mathrm{H}^{LR}_{int}(t) = \int_{\mathbb{R}} dy \ p(x(t),y)(J_{\alpha} \mu_{\alpha}(t) \psi^{\dagger}_{+}\psi_{+}\\ - J_{\beta}\mu_{\beta}(t)\psi^{\dagger}_{-}\psi_{-}) \end{multline} where setting $J_{\alpha} = 0$ leaves a left-mover-only channel, while setting $J_{\beta} = 0$ leaves a right-mover-only channel. This then leads to a similar construction as eq. \ref{seperate field bosonized hamiltonian}, but we will not be able to combine the fields as we did previously. From here we can construct the conjugate momenta $\frac{1}{v}\partial_t\phi$ and $\frac{1}{v}\partial_t\bar{\phi}$ for right- and left-movers, respectively, and then evaluate an ``all left-moving channel'' or an ``all right-moving channel'', yielding the same form as eq. \ref{two simple rank one unitaries 1}. Experimentally, this construction may be accomplished by restricting how the Qdot interacts with the HLL or through a chiral Luttinger liquid such as found in the recently discovered anomalous quantum Hall effect in Moiré heterostructures\cite{li2021quantum}. \subsubsection{Including the Cross-Terms} \label{Cross-Term Gates} Another gate might be found in the cross terms suppressed by the factor of $e^{\pm 2ik_Fx}$. If instead of suppressing these interactions we amplify them, we gain coupled degrees of freedom that yield promising unitaries as well.
These terms written out explicitly take the form, \begin{multline} \label{cross-term hamiltonian} \mathrm{H}^{CT}_{int}(t) = J_{\nu} \int_{\mathbb{R}} dy \ p(x(t),y) \mu(t) (e^{-i2k_Fx}\psi^{\dagger}_+\psi_- \\- e^{+i2k_Fx}\psi^{\dagger}_-\psi_+). \end{multline} Using the above definitions of our bosonized fermions, with Klein factors added to preserve the anticommutation relations of the fermions \cite{senechal1999introduction,schulz2000fermi,shankar2017Bosonization}, we find the bosonized Hamiltonian, \begin{equation}\label{bosonized CT Hamiltonian} H^{CT}_{int} = J_{\nu} \mu(t) (\frac{1}{2\pi}\cos(\sqrt{4\pi}\varphi)). \end{equation} Combining equation \ref{bosonized CT Hamiltonian} with eq. \ref{Bosonized simple density} provides a very interesting but non-linear interaction. \subsubsection{Non-Chiral Luttinger Liquid Gates} Often when considering the gates in section \ref{Cross-Term Gates}, one wants to include both the spin and charge sectors. This scenario describes Dirac fermions whose spin can point either way and which are free to propagate in either direction. Following the usual bosonization prescription, we introduce two bosons $\varphi_{\uparrow}$ and $\varphi_{\downarrow}$, related to the charge and spin bosons by, \begin{equation} \begin{aligned} \varphi_c &= \frac{1}{\sqrt{2}}(\varphi_{\uparrow} + \varphi_{\downarrow}) & \varphi_s &= \frac{1}{\sqrt{2}}(\varphi_{\uparrow} - \varphi_{\downarrow}) \end{aligned} \end{equation} as well as chiral fields $\phi_{c,s}$ and $\bar{\phi}_{c,s}$. Using these definitions, our bosonized Hamiltonian splits into two parts: the forward-scattering terms (those without factors of $e^{\pm2ik_Fx}$), which follow from the point-splitting methods used in deriving eq. \ref{Bosonized simple density}, and the back-scattering terms. The forward-scattering bosonized Hamiltonian is given by, \begin{equation} H^F_{int} = J_{\nu} \mu(t) (\frac{2}{\sqrt{\pi}}(\Pi_c+\partial_x\varphi_s)).
\end{equation} This is of the same form as eq. \ref{naive 2}, but our starting point was eq. \ref{basic density hamiltonian}, so it preserves time-reversal symmetry! Since the fermion fields in the back-scattering (cross) terms anticommute, we can straightforwardly bosonize the fermions, and our resulting back-scattering Hamiltonian is, \begin{equation} H^B_{int} = J_{\nu} \mu(t) (\frac{1}{2\pi}\cos(\sqrt{2\pi}(\varphi_c+\vartheta_s))). \end{equation} Notice that if we suppress the spin terms (make the system spinless), we retrieve the same Hamiltonian we would obtain by combining eqs. \ref{Bosonized simple density} and \ref{bosonized CT Hamiltonian}. When we consider both spin and charge, we have two noncommuting observables that could be used to create an arrangement of operators to explore novel quantum channels of information. \subsubsection{Dirac Hamiltonian Gates} Another coupling that may be experimentally present is similar to the Kondo coupling of eq. \ref{basic density hamiltonian}, but instead the free Dirac Hamiltonian density is coupled to our spin qubit, \begin{flalign} \mathrm{H}^{D}_{int}(t) &= J_{\nu}\int_{\mathbb{R}} dy \ p(x(t),y) \mu(t) (\psi^{\dagger}_+\partial_x\psi_+ - \psi^{\dagger}_-\partial_x\psi_-). \end{flalign} This has the famous bosonized form \cite{senechal1999introduction,giamarchi2003quantum}, \begin{equation} \label{Dirac Bosonized Hamiltonian} H^{D}_{int} = J_{\nu}\int_{\mathbb{R}} dy \ p(x(t),y) \mu(t) (\Pi^2 + (\partial_x\varphi)^2). \end{equation} Hamiltonians such as this one have been addressed with great success through perturbative approaches \cite{Hummer2016renormalized}, but our system provides a unique opportunity to explore the strong coupling of a quadratic interaction. However, since $[\varphi^2(x),\Pi^2(x')] = 2i\{\varphi(x),\Pi(x')\}\delta(x-x')$, exponentiation into simple unitary gates is not as straightforward. Nevertheless, it could prove to be relevant.
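The obstruction noted above can be made concrete in a single-mode toy model: replacing the field by one truncated bosonic mode (so $\delta(x-x')\to 1$), the identity $[\varphi^2,\Pi^2]=2i\{\varphi,\Pi\}$ follows from $[\varphi,\Pi]=i$ and can be verified on matrix elements away from the truncation boundary. A minimal sketch (the truncation dimension is arbitrary):

```python
import math

N = 12  # Fock-space truncation; a single bosonic mode stands in for the field

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# ladder operators: a|n> = sqrt(n)|n-1>, a†|n> = sqrt(n+1)|n+1>
a = [[math.sqrt(j) if j == i + 1 else 0.0 for j in range(N)] for i in range(N)]
ad = [[math.sqrt(i) if i == j + 1 else 0.0 for j in range(N)] for i in range(N)]

# quadratures: varphi = (a + a†)/sqrt(2), Pi = i(a† - a)/sqrt(2), so [varphi, Pi] = i
phi = [[(a[i][j] + ad[i][j]) / math.sqrt(2) for j in range(N)] for i in range(N)]
Pi = [[1j * (ad[i][j] - a[i][j]) / math.sqrt(2) for j in range(N)] for i in range(N)]

phi2, Pi2 = matmul(phi, phi), matmul(Pi, Pi)
A1, A2 = matmul(phi2, Pi2), matmul(Pi2, phi2)
comm = [[A1[i][j] - A2[i][j] for j in range(N)] for i in range(N)]   # [varphi^2, Pi^2]
B1, B2 = matmul(phi, Pi), matmul(Pi, phi)
rhs = [[2j * (B1[i][j] + B2[i][j]) for j in range(N)] for i in range(N)]  # 2i {varphi, Pi}

# agreement holds on matrix elements far from the truncation boundary
for i in range(N - 4):
    for j in range(N - 4):
        assert abs(comm[i][j] - rhs[i][j]) < 1e-10
```

The commutator is operator-valued rather than a c-number, which is why the quadratic gate does not exponentiate as simply as the linear ones.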
\subsection{Constructing a Perfect Quantum Channel} We have shown several scenarios where the bosonization of our Luttinger liquids leads to unitary gates like that of eq. \ref{two simple rank one unitaries}. In particular, some of these gates, consisting of two rank-one unitaries, are of the same form used by Simidzija et al. to simulate the coherent information shown in Figure 4 of Ref. \cite{Simidzija2020Transmission}. This result demonstrates that the coherent information asymptotically approaches one as the strength of the coupling $J_{\nu 1}$ grows with respect to the width of the Gaussian smearing function $\sigma$. Furthermore, while some of our systems do not produce the same unitary gates as those of Ref. \cite{Simidzija2020Transmission}, they are not inherently entanglement-breaking. Future studies of nonlinear Hamiltonians, like those of eq. \ref{bosonized CT Hamiltonian} and eq. \ref{Dirac Bosonized Hamiltonian}, will be needed to understand if this same relationship is present. Regardless, these theoretical results indicate multiple condensed matter systems that can adequately utilize quantum fields to transmit quantum information and, under the premise of strong coupling and adjustable $\sigma$, ensure near-perfect quantum channels. \section{Experimental Scenarios} \label{sec:exp} There are many scenarios for experimentally realizing a UDW quantum computer. Here we consider three to give a sense of how they might be designed. The first is to upgrade the graphene ribbon proposal\cite{trauzettel2007spin} by defining gates between the spin qubits and the conducting channels. The second is to build solid-state quantum dots (Qdots) and embed them in a HgTe/CdTe quantum well. The third is to build transition metal dichalcogenide (TMD) qubits and place them in a heterostructure exhibiting the recently discovered quantum anomalous Hall effect phase.
Other possibilities could include quantum spin chains acting as the field\cite{qiu2020programmable} (they can be viewed as fermions through the Jordan-Wigner transformation), and silicon nanowires\cite{zhong2005coherent}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/Graphene_Ribbon_Scenario.png} \caption{Graphene nanoribbons, as proposed by Ref. \onlinecite{trauzettel2007spin}, as a possible scenario for realizing UDW detectors.} \label{fig:graphene} \end{figure} Let us review the graphene ribbon scenario proposed in Ref. \cite{trauzettel2007spin} and assess its potential for building UDW detectors. Fig. \ref{fig:graphene} presents the scenario: a ribbon with gates applied to trap electrons and conducting channels between them. The scenario works because of the Klein effect, which enhances the conduction of Dirac fermions in the channels instead of suppressing it. The speed of electrons in graphene\cite{hwang2012fermi} is between $v = 0.8\times10^6~\mathrm{m/s}$ and $3\times10^6~\mathrm{m/s}$. Using units on the nanoscale, this translates to a slowest velocity of about $v = 10^6~\mathrm{m/s} = 10^6~\mathrm{nm/ns}$. The qubit, according to Ref. \onlinecite{trauzettel2007spin}, is about 30 nm. This suggests we take the smearing length to be $\lambda_s = 30$ nm. Hence, for a gate-localization quality $Q_{loc}\approx 1$ we require a switching time of roughly $t_{sw} = \lambda_s/v = 3\times10^{-5}~\mathrm{ns} = 30~\mathrm{fs}$. This would be extremely challenging to meet, requiring advanced methods such as terahertz STM\cite{tachizaki2021progress, Nunes1993Picosecond,Weiss1993Ultrafast, Cocker2016Tracking,Cocker2013An,Terada2010Real,Yarotski2002Improved} or ultrafast electronics via nanodiodes\cite{li2021sub} that operate at 100 fs time scales. It is therefore difficult to realize such a gate using electrical gate signals, but it appears possible for optical control via $60$ fs light pulses\cite{Dempsey2020Single, Nagar2021wavelength}.
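The timing estimates above are simple ratios and are easy to reproduce; a sketch with the numbers quoted in the text (all values in nm and ns):

```python
v = 1.0e6      # slowest graphene electron velocity, nm/ns (= 1e6 m/s)
lam_s = 30.0   # smearing length ~ qubit size, nm

# switching time needed for gate-localization quality Q_loc = 1
t_sw = lam_s / v                 # in ns
assert abs(t_sw - 3e-5) < 1e-12  # 3e-5 ns = 30 fs

# Q_loc if only 100 fs electronics are available
Q_loc = (1e-4 * v) / lam_s       # t_sw = 100 fs = 1e-4 ns
assert abs(Q_loc - 10.0 / 3.0) < 1e-9
```

With 100 fs electronics, $Q_{loc}\approx 3$, which is why the text falls back on optical control for this scenario.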
The previous experimental scenario would place a UDW detector in a non-chiral Luttinger liquid. To place it in an HLL, we could consider HgTe/CdTe quantum wells in their quantum spin Hall effect phase. We consider this case in detail in our simulations below. Our edge-state simulations predict a velocity of $0.54\times 10^6$ nm/ns, slower than graphene by roughly a factor of 2. This scenario would require a switching time of about $55~\mathrm{fs}$ and is also beyond the scope of electrical control. \begin{wrapfigure}{r}{0.30\textwidth} \vspace{-1em} \centering \setlength{\tabcolsep}{0.50ex} \begin{tabular}{cccc} $T \, [\si{\kelvin}]$ & $B_0 \, [\si{\tesla}]$ & $f_{\text{e}} \: [\si{\giga\hertz}]$ & $p_{\text{th}}$ \\ \\ \hline 4.2 & 1.4 & 39.2 & 0.22 \\ 4.2 & 4.5 & 126 & 0.62 \\ 4.2 & 9.0 & 252 & 0.89 \\ 2.1 & 9.0 & 252 & 0.99 \end{tabular} \vspace{-2em} \end{wrapfigure} The qubits in Figure~\ref{fig:all-to-all} could be either chemically synthesized Qdots embedded in HgTe during deposition, doped silicon Qdots analogously embedded in the HgTe, or gate electrodes. One could then employ gate-induced initialization and tunneling readout of the electron spins in the Qdots \cite{Hanson2007oct, Chatterjee2021mar}. For this one would need the electron spin to be fully polarized, which in turn requires working at high field and low temperature. Employing cryogenic chip-scale microwave sources would allow one to work at magnetic fields up to $B_0 = \SI{9}{\tesla}$ and therefore at a relatively high temperature of $T = \SI{4.2}{\kelvin}$ (liquid helium) or $\SI{2.1}{\kelvin}$ (pumped liquid helium). The above table shows the thermal spin polarization $p_{\text{th}}$ expected for an experiment, along with the associated electron spin resonance frequency $f_{\mathrm{e}} = \gamma_{\text{e}} B_0 \big/ 2 \pi$ for a donor-bound electron spin ($g = 2$ and gyromagnetic ratio $\gamma_{\text{e}} = 2 \pi \times \SI{28.0}{\giga\hertz\per\tesla}$).
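The table entries can be reproduced from $f_{\mathrm e} = \gamma_{\text{e}}B_0/2\pi = (28.0~\mathrm{GHz/T})\,B_0$ and the thermal polarization $p_{\text{th}} = \tanh(\hbar\gamma_{\text{e}}B_0/2k_{\text{b}}T)$ with $\hbar\gamma_{\text{e}}/2k_{\text{b}} = 0.672$ K/T; a quick numerical check:

```python
import math

def f_e(B0):
    """Electron spin resonance frequency in GHz for g = 2."""
    return 28.0 * B0

def p_th(B0, T):
    """Thermal spin polarization tanh(0.672 K/T * B0 / T)."""
    return math.tanh(0.672 * B0 / T)

# (T [K], B0 [T], f_e [GHz], p_th) rows of the table
table = [(4.2, 1.4, 39.2, 0.22), (4.2, 4.5, 126.0, 0.62),
         (4.2, 9.0, 252.0, 0.89), (2.1, 9.0, 252.0, 0.99)]
for T, B0, f, p in table:
    assert abs(f_e(B0) - f) < 0.5       # GHz, matches to rounding
    assert abs(p_th(B0, T) - p) < 5e-3  # matches the two digits quoted
```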
Here $p_{\text{th}} = \tanh{(\hbar \gamma_{\text{e}} B_0 \big/ 2 k_{\text{b}} T)}$ with $\hbar$ the reduced Planck constant, $k_{\text{b}}$ Boltzmann's constant, and $\hbar \gamma_{\text{e}} \big/ 2 k_{\text{b}} = \SI{0.672}{\kelvin\per\tesla}$. Working with gate-defined Qdots in HgTe is convenient, but short electron relaxation times are a concern. Silicon Qdots will require more work to embed into the HgTe quantum well, but shallow dopants in silicon are known to be excellent qubits \cite{Hanson2007oct,Chatterjee2021mar}; $T_1$ decreases with field \cite{Xiao2010mar, Morello2010oct, Tenberg2019may}, but is still favorably long, $T_1 \approx \SI{50}{\milli\second}$, at $T \approx \SI{100}{\milli\kelvin}$ and $B_0 = \SI{5}{\tesla}$ in natural abundance silicon \cite{Tenberg2019may}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/TMD_Scenario_tikz.pdf} \caption{TMD scenario for realizing UDW detectors. Here we envision qubits formed from antisite defects in WSe$_2$, as proposed in Ref. \onlinecite{tsai2022antisite}, and the quantum anomalous Hall effect edge states as discovered in WSe$_2$/MoTe$_2$ heterostructures. This heterostructure is then placed on a substrate, which is ideally not hBN due to the presence of nuclear moments in this material. The UDW detector scenario is then to couple the qubits to these edge states with controllable couplings $J_1(x,t)$ and $J_2(x,t)$.} \label{fig:TMD} \end{figure} To realize UDW detectors in a chiral Luttinger liquid, we suggest a transition metal dichalcogenide (TMD) scenario using the AB-stacked MoTe$_2$/WSe$_2$ heterobilayers where a quantum anomalous Hall (QAH) effect was discovered recently\cite{li2021quantum}. TMDs are potentially excellent materials for quantum information applications owing to the naturally occurring low density of nuclear spins. Candidates for qubits are the antisite defects proposed in Ref. \onlinecite{tsai2022antisite}.
These can occur in WSe$_2$, suggesting the experimental scenario in Fig. \ref{fig:TMD}. We can estimate the velocity of the edge electrons in the QAH phase from the bandwidth $W$ and the moir\'{e} lattice constant $a_M$. These are expected to take values $W\sim 1$--$100$ meV and $a_M\sim 5$--$10$ nm \cite{mak2022semiconductor}. Assuming these are correlated, we can take $W=1$ meV and $a_M=10$ nm to get \begin{equation} v \sim \frac{W}{\hbar\pi/a_M} \approx 5000\ \mathrm{nm/ns}. \end{equation} The smearing length $\lambda_s$ will have to be at least $a_M$ to make use of the nearly flat bands of the moir\'{e} system. Taking it to be $10$ nm, we get a switching time of $t_{sw} = 2$ ps. Hence, moir\'{e} pattern materials have significantly slower electrons and a longer switching time. This time scale places it in the vicinity of picosecond electronics such as those achieved with nanoplasmas \cite{samizadeh2020nanoplasma} and ultrafast scanning probe microscopes \cite{Nunes1993Picosecond,Hamers1990Ultrafast,Weiss1993Ultrafast,Cocker2013An}. In the above scenarios, a conservative estimate was obtained for the time scales needed to operate the gate. These time scales ranged from 50 fs to 2 ps. In each case, significantly longer time scales would still enable the entanglement of qubits coupled to the field, but the theoretical description of these cases breaks down in this limit. What ultimately limits the experiment is a smearing length of order the coherence length, such as the 2 micron coherence length of edge states in HgTe QWs \cite{Roth2009jul}. At this length scale, the switching time-scale estimates above would be increased by several orders of magnitude, possibly reaching ordinary nanosecond scales, easily achieved with electrical switching.
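The back-of-the-envelope numbers above can be cross-checked with a few lines of Python (a sketch, not part of the original analysis; it only uses the values quoted in the text, with $\hbar$ expressed in meV\,ns so that velocities come out in nm/ns):

```python
import math

hbar = 6.582e-4   # reduced Planck constant in meV*ns

# Moire edge-state velocity v ~ W / (hbar*pi/a_M)
W = 1.0           # bandwidth in meV (lower end of the quoted 1-100 meV range)
a_M = 10.0        # moire lattice constant in nm
v = W * a_M / (hbar * math.pi)    # ~4.8e3 nm/ns, i.e. the ~5000 nm/ns quoted
t_sw = 10.0 / v                   # switching time for a 10 nm smearing length, ~2 ps

# Thermal spin polarization p_th = tanh(hbar*gamma_e*B0 / (2*k_B*T)),
# using hbar*gamma_e/(2*k_B) = 0.672 K/T as quoted in the text
def p_th(B0, T):
    return math.tanh(0.672 * B0 / T)
```

Evaluating `p_th` at the field/temperature pairs of the table (e.g. `p_th(9.0, 4.2)`) reproduces its $p_{\text{th}}$ column.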
By conducting experiments inspired by these scenarios, strong evidence could be obtained for the realization of UdW detectors and for whether they could enable all-to-all connectivity in a solid-state quantum computer or the study of quantum information flow in condensed matter systems. Furthermore, the proposed designs could be operated, with considerably less stringent switching requirements, as a long-range all-to-all connected qubit network. This network would, by itself, already be a singular technical advancement. \section{Simulating the Gated Edge State} \label{The Simulated Result} In section \ref{Coupling Qdots and Helical Luttinger Liquids}, we established gates describing Unruh-DeWitt detectors in Luttinger liquids and identified those that allow for non-zero quantum information channels. Conveniently, the materials that exhibit the phenomena described above are attainable and well understood in a laboratory setting. If the velocity of the edge mode is low enough, it will allow for electrical control of the gates, though this will require picosecond electronics such as those using nanoplasmas \cite{samizadeh2020nanoplasma}. We aim to simulate such control in this section, the practical consequences being electrically controlled, all-to-all connected solid-state quantum information processors and the study of quantum information flow in quantum materials. Among the three experimental scenarios presented in the previous section, here we consider the case of a CdTe-HgTe-CdTe quantum well (HgTe QW) in its quantum spin Hall phase, whose microscopic parameters are well known from experiment. It has topologically protected HLL edge states that govern electron transport with an insulating bulk.
These states are coherent over 2 $\mu$m \cite{Roth2009jul}, a scale achievable in simulation. \begin{figure}[t] \includegraphics[width=0.4\textwidth]{Figures/Pure-edgestate-system-with-profile-v2.png} \caption{\label{fig:edgestatesim}Simulation of an HgTe QW edge state on a $200\times400$ atom ($130\times260$ nm) bar. Shown is the density of states summed over an energy window of $-10$ meV to $10$ meV.} \end{figure} The simulations are carried out using the Bernevig-Hughes-Zhang (BHZ) model \cite{Bernevig2006dec} placed on a lattice following Ref. \onlinecite{amaricci2017edge}. It has a mass parameter $M$, a bandwidth-controlling parameter $\epsilon$, and a hybridization parameter $\lambda$. The tight-binding Hamiltonian on the square lattice with periodic boundary conditions is \begin{equation} {\mathcal H}({\bf k}) = \left(M-\epsilon(\cos k_x+\cos k_y)\right)\Gamma_5 + \lambda \sin k_x\,\Gamma_x - \lambda\sin k_y\,\Gamma_y \end{equation} where, using two sets of Pauli operators $\sigma_{x,y,z}$, $\tau_{x,y,z}$, acting on spin and orbital indices respectively, $\Gamma_5 = I\otimes\tau_z$, $\Gamma_x = \sigma_z\otimes\tau_x$, and $\Gamma_y = -I\otimes\tau_y$. The three parameters map to the parameters of the BHZ continuum model via $\epsilon \to -2B$, $M\to -4B+M$, $\lambda\to A$, where the $M$ in the lattice model of Ref. \onlinecite{amaricci2017edge} is a different parameter than the $M$ in the continuum BHZ model. We fix these parameters to those of the sample described by the 9th row of Table 1 in Ref. \onlinecite{beugeling2012reentrant}: $M=-14.6$ meV, $A = 0.55$ eV, $B=-1.87$ eV, so that $\epsilon = 3.74$, $M = 7.4654$, and $\lambda=0.55$.
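For concreteness, the parameter mapping and the expected in-gap edge states can be checked with a small numerical sketch (our choices of a 200-site strip width and the $k_x=0$ slice are illustrative, not the full $200\times400$ open-boundary simulation described below):

```python
import numpy as np

# Pauli matrices acting on spin (sigma) and orbital (tau) indices
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
G5 = np.kron(s0, sz)        # Gamma_5 = I (x) tau_z
Gx = np.kron(sz, sx)        # Gamma_x = sigma_z (x) tau_x
Gy = -np.kron(s0, sy)       # Gamma_y = -I (x) tau_y

# Continuum parameters (9th row of Table 1 of beugeling2012reentrant), in eV
M_cont, A, B = -0.0146, 0.55, -1.87
eps, M_lat, lam = -2 * B, -4 * B + M_cont, A    # lattice-model parameters

def bloch(kx, ky):
    """Bulk Bloch Hamiltonian with periodic boundary conditions."""
    return ((M_lat - eps * (np.cos(kx) + np.cos(ky))) * G5
            + lam * np.sin(kx) * Gx - lam * np.sin(ky) * Gy)

# Bulk gap at k = 0: eigenvalues are +-|M_lat - 2 eps| = +-|M_cont|
gap = np.ptp(np.linalg.eigvalsh(bloch(0.0, 0.0)))   # 2|M_cont| = 0.0292 eV

# kx = 0 slice of a strip with open boundaries along y: in the topological
# phase a Kramers pair of edge states sits deep inside the bulk gap
Ny = 200
onsite = (M_lat - eps) * G5                       # cos(kx) = 1, sin(kx) = 0
hop = -0.5 * eps * G5 + 0.5j * lam * Gy           # y -> y+1 hopping block
shift = np.diag(np.ones(Ny - 1), -1)              # |y+1><y|
Hs = (np.kron(np.eye(Ny), onsite)
      + np.kron(shift, hop) + np.kron(shift.T, hop.conj().T))
E_edge = np.min(np.abs(np.linalg.eigvalsh(Hs)))   # far below the bulk gap
```

The gap at ${\bf k}=0$ comes out as $2|M| = 29.2$ meV of the continuum model, while the strip hosts states exponentially close to zero energy, the edge modes visible in Figure~\ref{fig:edgestatesim}.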
Finally, we switch to open boundary conditions, add a gate potential and local magnetic fields to simulate the presence of a qubit, and generate sparse Hamiltonian matrices of size $320{,}000\times320{,}000$, whose low-energy eigenstates can be obtained using the Arnoldi method, on a $200\times400$ lattice of real dimensions $130~\mathrm{nm}\times 260$ nm. An example of these edge states in a simulation is shown in Figure \ref{fig:edgestatesim}. This simulation reveals the density profile of the edge state for states between $-10$ meV and $10$ meV. It does so by adding up the magnitude squared of the eigenstate wave functions on each site. The result shows edge states with a width of about 20 nm in the inset and illustrates the propagation directions at the top, with spin up moving to the right and spin down moving to the left. As shown in Figure \ref{fig:all-to-all}a, if we apply a gate voltage on the edge, we do not destroy the edge state but merely force it to propagate around the gated region. Namely, a gate can be used to programmatically move the ``wires'' around. In this figure, we used a gate voltage of 15 V to produce the semicircular dark region. \begin{figure*} \begin{minipage}{0.65\textwidth} \centering \includegraphics[width=0.90\textwidth]{Figures/all-to-all_connectivity.png} \end{minipage} \begin{minipage}{0.95\textwidth} \caption{\label{fig:all-to-all} A new electronic quantum bus for spin qubits. {\bf a} A gated region moves the edge state. {\bf b} A local magnetic field penetrates the edge state, cutting it into two pieces. {\bf c} A snapshot of a 12 qubit all-to-all connected device, with 10 qubits at a 15.0~V local gate (these appear dark in the electron density plots) and two qubits at 0.0~V gate exposed to the edge state. {\bf d} A 20 qubit device with gates placed in a regular grid to create trapped-electron qubits.
{\bf e} Releasing the gate between two of the qubits and the center region enables communication.} \end{minipage} \vspace{-2em} \end{figure*} If we now place a spin qubit on the edge, represented by a localized magnetic field in the simulation shown in Figure \ref{fig:all-to-all}b, where the color denotes the spin direction, we see that the edge state is penetrated at the location of the qubit. It terminates inside the dot. This is consistent with the ``Kondo insulator'' phase induced by a strongly coupled spin impurity on the edge of quantum spin Hall insulators \cite{Maciejko2009jun}. In HgTe QWs and other quantum spin Hall insulators, this penetration of the edge state has likely been observed through the loss of coherence or a finite Hall resistance due to spin impurities lying in the vicinity of the edge. The edge states are known to have a coherence length of 2 $\mu$m \cite{Roth2009jul}. The quantum dot scenario, however, would engineer such behavior and place it under control, so long as impurities are very sparse and the coherence length is longer than the edge-state path, as assumed in this simulation. With a cleanly penetrated edge state, quantum information carried by electrons will not pass by the qubit and will instead terminate in, or emanate from, the qubit, introducing a strong coupling between qubit and edge state. We now use the simulation to demonstrate all-to-all connectivity. In Figure \ref{fig:all-to-all}c, we placed 12 spin qubits around the edge of the sample, each with a local gate controlling its coupling to the edge state. We then selectively enable edge state propagation to/from two of the qubits. Owing to the quantum information transmissibility of this channel, turning this coupling on for a short switching time, as discussed in the previous sections, enables the transfer of quantum information between the qubits without allowing this information to spread far beyond the qubit during the application of the gate.
Hence, in principle, this approach enables the application of a high-fidelity gate operation on just these two qubits, for the edge state circumnavigates all of the other qubits. In practice, there is an engineering challenge in making qubits that are good at penetrating the edge state so as to enable high-fidelity gates. One option is to place different spin impurities on the edge states and study them using scanning tunneling spectroscopy. Another is to work with Qdots and exploit the many years of research that have gone into engineering their properties. This approach requires designing a suitable dot that can penetrate the edge state (a possibility due to the long time an electron spends in the dot \cite{Vayrynen2013may}). An alternative device is shown in Figure \ref{fig:all-to-all}d. Here, gates at 15.0 V create the dark regions and trap the electrons into 20 Qdots surrounding the outside of the system. Namely, this system replaces the Si Qdots of Figure \ref{fig:all-to-all}a-c with trapped electrons. If a gate is now altered near two of these dots (Figure \ref{fig:all-to-all}e), they connect with the grey region in the middle, which holds conducting edge states propagating on the boundary of this interior region. Similar to Figure \ref{fig:all-to-all}c, this connection translates into a gate operation between just the two qubits as it is turned on and off. But now, the concern of whether the dot penetrates the edge state is replaced with the degree to which we can control the tunneling of electrons in and out of the Qdots. \section{Future Investigations} We focused the experimental proposal on HgTe quantum wells because these are well known and studied. But they are also hard to synthesize.
Alternative materials include GaAs quantum wells in large magnetic fields exhibiting the quantum Hall effect \cite{levy2012experimental}, moir\'{e} pattern materials exhibiting either the quantum anomalous Hall effect or the quantum spin Hall effect \cite{li2021quantum}, and even ultraclean single-wall nanotubes suitable for quantum information applications. Beyond the engineering opportunities, we find a large list of theoretical gateways opened through this exploration. Some of these include: calculating channel capacities of novel quantum information channels, investigating and simulating the balance between the coherence lengths and the Gaussian-like time scales of our switching function, and understanding the zero modes of our bosonization formalism and the role they may play in quantum information propagation. \section{Conclusion} In this letter, we aimed to provide an experimental approach coupled with a theoretical understanding of a novel quantum computer. The Unruh-DeWitt detector model was deployed as a means to explain the interaction mechanics between our Qdots and the helical Luttinger liquid. This unification provided a library of unitary gates that allow us to process quantum information through our system. We showed variations of the theory that provide channels for processing quantum information that are not inherently entanglement-breaking. Furthermore, we proposed gates whose formalism is well understood in one field but yet to be applied in the other. We also showed that the helical Luttinger liquid in HgTe, with a controlled delta-like interaction with a CdTe Qdot, gives us an experimental vision of how these Dirac fermions can propagate as flying qubits. Further investigations are being carried out not only to bridge the gap between these previously disconnected fields of physics but to understand how connecting them in this way can lead to new and exciting physics.
\begin{acknowledgements} We thank Charles Kane, Justin Kulp, Jiye Fang, Pegor Aynajian, Yuan Ping, David Klotzkin, Wei-Cheng Lee. This material is based upon work supported by the National Science Foundation under Grant OAC-1940260. \end{acknowledgements} \bibliographystyle{ieeetr}
\section{Introduction} $\eta$ Carinae is the most luminous massive binary system of our galaxy and the first one to have been detected at very high energies without hosting a compact object\footnote{So far this list also includes the interesting case of WR11~\citep{2016MNRAS.457L..99P}}. It is composed of one of the most massive stars known ($\eta$ Car A), with an initial mass estimated above M$_{\rm{A}} \gtrsim 90 M_{\odot}$ \citep{2001ApJ...553..837H}, and of a companion ($\eta$ Car B) believed to be an O supergiant or a WR star. $\eta$ Carinae has been studied across the whole electromagnetic spectrum from the radio \citep{2007MNRAS.382..382M} to the TeV \citep{2012MNRAS.424..128H}, passing through the infrared \citep{1994MNRAS.270..364W,2004MNRAS.352..447W}, optical \citep{1996ApJ...460L..49D,2000ApJ...528L.101D,2008MNRAS.384.1649D}, X-rays \citep{2001ApJ...547.1034C,2008MNRAS.388L..39O,2008A&A...477L..29L,2010A&A...524A..59L}, and $\gamma$-rays \citep{2009ApJ...698L.142T,2010ApJ...723..649A,2011A&A...526A..57F}. The presence of the companion star has so far been only indirectly inferred from the effects of the wind-wind collision (in particular from the X-ray emission of a multi-keV plasma) and from the variable ultraviolet emission photoionizing nearby circumstellar clouds \citep{2005ApJ...633L..37I,2010ApJ...710..729M}. $\eta$ Car A is accelerating a very dense wind with a mass-loss rate of $\sim 8.5\times 10^{-4}$ M$_{\odot}$ yr$^{-1}$ and a terminal wind velocity of $\sim 420$ km s$^{-1}$ \citep{2012MNRAS.423.1623G}. Its companion probably emits a fast low-density wind at $10^{-5}$ M$_{\odot}$ yr$^{-1}$, reaching a velocity of 3000 km s$^{-1}$ \citep{2002A&A...383..636P,2005ApJ...624..973V,2009MNRAS.394.1758P}. The regular modulation detected in the X-ray light curves suggests that the two stars are located in a very eccentric orbit \citep{2001ApJ...547.1034C,2008MNRAS.388L..39O}.
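As an illustration (not from the original text), the wind parameters quoted above fix the classical wind-momentum ratio $\eta = \dot{M}_{\rm B} v_{\rm B} / \dot{M}_{\rm A} v_{\rm A}$ that sets the geometry of the colliding-wind region; a minimal estimate, using the ram-pressure balance condition of Stevens, Blondin \& Pollock (1992):

```python
import math

# Wind parameters quoted in the text (Msun/yr and km/s)
mdot_A, v_A = 8.5e-4, 420.0   # eta Car A: dense slow wind
mdot_B, v_B = 1.0e-5, 3000.0  # eta Car B: fast low-density wind

# Wind-momentum ratio: the primary wind strongly dominates
eta = (mdot_B * v_B) / (mdot_A * v_A)   # ~0.084

# Distance of the ram-pressure balance (stagnation) point from star B,
# in units of the instantaneous stellar separation D
r_B_over_D = math.sqrt(eta) / (1.0 + math.sqrt(eta))   # ~0.22
```

The small value of $\eta$ means the wind-wind interface wraps closely around the companion, which is why the shocked-wind conditions vary so strongly along the eccentric orbit.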
The estimated orbital period at the epoch of the Great Eruption, which happened between 1837 and 1856, was $\sim5.1$ yr and has since increased up to the current $\sim5.54$ yr \citep{2004MNRAS.352..447W,2005AJ....129.2018C,2008MNRAS.384.1649D} owing to the huge quantities of mass and energy dissipated during the past century. During the Great Eruption, $\eta$ Carinae experienced a huge outburst, ejecting an impressive quantity of mass estimated as $10-40~M_\odot$ \citep{2010MNRAS.401L..48G} at an average speed of $\sim 650$ km s$^{-1}$ \citep{2003AJ....125.1458S}, giving rise to the formation of the Homunculus Nebula and becoming one of the brightest stars in the sky. The energy released in such a catastrophic event ($10^{49-50}$ erg) was comparable to a significant fraction of the energy emitted by a supernova explosion. Given the high eccentricity of the orbit, the relative separation of the two stars varies by a factor $\sim20$, reaching its minimum at periastron, when the two objects pass within a few AU of each other; the radius of the primary star is estimated as 0.5 AU. In these extreme conditions their supersonic winds interact, forming a colliding wind region of hot shocked gas where charged particles can be accelerated via diffusive shock acceleration up to high energies \citep{1993ApJ...402..271E,2003A&A...409..217D,2006ApJ...644.1118R}. As these particles encounter conditions that vary with the orbital phase of the binary system, one can expect a similar dependency in the $\gamma$-ray emission. The hard X-ray emission detected by INTEGRAL \citep{2008A&A...477L..29L} and Suzaku \citep{2008MNRAS.388L..39O}, with an average luminosity of $(4$-$7)\times10^{33}$ erg s$^{-1}$, suggested the presence of relativistic particles in the system. The following year AGILE detected a variable source compatible with the position of $\eta$ Carinae \citep{2009ApJ...698L.142T}.
Other $\gamma$-ray analyses followed, which reported a luminosity of $1.6\times10^{35}$ erg s$^{-1}$ \citep{2010ApJ...723..649A,2011A&A...526A..57F,2012A&A...544A..98R} and suggested the presence of a hard component in the spectrum around periastron, which subsequently disappeared around apastron. Such a component has been explained through $\pi^0$ decay of accelerated hadrons interacting with the dense stellar wind \citep{2011A&A...526A..57F}, or interpreted as a consequence of $\gamma$-ray absorption against an ad hoc distribution of soft X-ray photons \citep{2012A&A...544A..98R}. An alternative acceleration scenario suggested by \cite{2010ApJ...718L.161O}, which associated particle acceleration with the blast wave of the 1843 Great Eruption and foresaw a constant flux, was ruled out by the variability detected in the Fermi LAT light curves. At even higher energies, the H.E.S.S. observations \citep{2012MNRAS.424..128H} did not lead to any significant detection, yielding only an upper limit at energies $\gtrsim 500$ GeV. This in turn would imply a sudden drop in the $\gamma$-ray flux, which could be related to a cut-off in the accelerated particle distribution or to severe $\gamma$-$\gamma$ absorption. Our study starts with a new analysis of the Fermi-LAT data, including 25\% more data than published previously and the latest version of the pipeline processing. We then present the results of a simulation of particle acceleration in the colliding wind, based on the detailed hydrodynamic simulations of \cite{2011ApJ...726..105P}, and compare these with the observations. These comparisons are very successful and lead to a number of conclusions reinforcing our previous interpretations \citep{2008A&A...477L..29L,2011A&A...526A..57F}.
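For reference, the factor $\sim20$ variation in stellar separation quoted above follows directly from Kepler's laws; a short sketch (the eccentricity $e=0.9$ used here is an assumed illustrative value, since the text only quotes the separation ratio):

```python
import math

def separation(phase, a=1.0, e=0.9):
    """Relative separation r(phase) for a Kepler orbit.

    phase: orbital phase in [0, 1) measured from periastron;
    a: semi-major axis (arbitrary units); e: eccentricity (assumed value).
    """
    M = 2.0 * math.pi * phase           # mean anomaly
    E = M                               # solve Kepler's equation E - e*sin(E) = M
    for _ in range(50):                 # Newton iterations
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    return a * (1.0 - e * math.cos(E))

r_peri = separation(0.0)    # a(1 - e) at periastron
r_apo = separation(0.5)     # a(1 + e) at apastron
ratio = r_apo / r_peri      # (1 + e)/(1 - e) = 19 for e = 0.9
```

An eccentricity of $e\simeq0.9$ indeed gives an apastron-to-periastron separation ratio of $(1+e)/(1-e)\simeq20$.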
\subsection{High-energy emission: 10-300 GeV} \label{sec:high_energy} For the high-energy analysis, we took into account only photons arriving within $3^{\circ}$ of the nominal position of $\eta$ Carinae (R.A.$\,=161.264775^{\circ}$, Dec$\,=-59.684431^{\circ}$), as the total (front+back) $95\%$ PSF containment angle above 10 GeV is smaller than $1^{\circ}$ on-axis. We created a sky model using the 3FGL Fermi-LAT four-year catalogue \citep{2015ApJS..218...23A}, including all sources up to $1^{\circ}$ outside of the ROI. We used the same source spectral models as indicated in the catalogue, leaving the normalization parameter free to vary for all those sources within $2^{\circ}$ of $\eta$ Carinae and with an average test statistic\footnote{Source detection significance can be described by the likelihood test statistic $\mathrm{TS}=-2\log(L_{\mathrm{max},0}/L_{\mathrm{max},1})$, which compares two values obtained by a maximum-likelihood procedure: $L_{\mathrm{max},0}$ is the maximum likelihood for a model without an additional source at a specified location (the null hypothesis), and $L_{\mathrm{max},1}$ is the maximum likelihood for a model including an additional source or additional free parameters.} (TS) reported in the catalogue greater than $100$ (corresponding to a detection significance of $\sim 10\sigma$), or presenting a variability index higher than $72.44$, which indicates a $99\%$ confidence probability that the source is variable on a monthly timescale \citep{2015ApJS..218...23A}. The only exception to the model was made for the source \object{3FGL J1043.6-5930} (hereafter J1043), which does not satisfy the condition $\mathrm{TS}>100$ but is only $0.27^{\circ}$ away from $\eta$ Carinae \citep[for a 68\% containment radius of $0.23^{\circ}$ above 10 GeV, ][]{2015ApJS..218...23A}. This source has been modelled in the 3FGL catalogue as a power law (PL) with index $\Gamma=2.07\pm0.11$, and we decided to leave its normalization parameter free as well.
For completeness we also checked whether any source from the FAVA \citep{2013ApJ...771...57A} weekly flare list\footnote{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/FAVA/index.php} was present in our model. As the closest source lies more than $6^{\circ}$ away from the binary system, we did not take any particular precaution. Finally, with the sole exception of $\eta$ Carinae, no other source is mentioned in the second Fermi-LAT catalogue of High-Energy Sources \citep{2016ApJS..222....5A}. $\eta$ Carinae lies on the tangential projection of the Carina-Sagittarius Arm of the Milky Way, less than $1^{\circ}$ away from the Galactic plane. Consequently, a correct description of the diffuse Galactic $\gamma$-ray background plays a key role in the analysis. To reproduce this emission we used the \texttt{gll\_iem\_v06.fits}\footnote{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels}~\citep{2016ApJS..223...26A} Galactic background model. Unfortunately, for several reasons the representation of the diffuse emission is a serious concern in the Carina region, becoming more and more delicate towards lower energies (see Sect. \ref{sec:low_energy}). As a matter of fact, a significant number of sources from the catalogue \citep{2015ApJS..218...23A} are flagged with ``c'' in that region, indicating that they are considered to be potentially confused with the Galactic diffuse emission. In order to obtain a more realistic representation of the Galactic diffuse emission in such a small region, we decided to leave its normalization free. The isotropic background component has been represented using the model \texttt{iso\_P8R2\_SOURCE\_V6\_v06.txt$^{5}$}.
In the high-energy range we described the emission coming from $\eta$ Carinae (as its signal-to-noise ratio is very limited) with a simple PL model ${\rm d}N/{\rm d}E = N_{{\rm E}_{\rm min}-{\rm E}_{\rm max}}\,(\Gamma+1)\,{\rm E}^{\Gamma}/({\rm E}^{\Gamma+1}_{\rm max}-{\rm E}^{\Gamma+1}_{\rm min})$, where $N_{{\rm E}_{\rm min}-{\rm E}_{\rm max}}$ indicates the integrated photon flux over the fitting energy range $[{\rm E}_{\rm min},{\rm E}_{\rm max}]$ and $\Gamma$ represents the PL index. Exploiting this model we performed a maximum-likelihood analysis \citep{1996ApJ...461..396M} on the previously described sample of photons, using the \texttt{P8R2\_SOURCE\_V6} IRF and the \texttt{gtlike} tool. Checking the results of the first fit, we found that the significance of some of the sources left free in our model was below $5\sigma$. This does not contradict the catalogue results because, even though we are increasing the statistics by covering a longer period with our data sample, our choice of the model was based on the average significance over the whole energy band, while here we are performing the analysis only above 10 GeV. We thus proceeded to freeze those sources' parameters to the catalogue values. We did the same for all the ``c'' flagged sources, as they might be just artefacts of a bad representation of the diffuse Galactic model at lower energies. Finally, we left the normalization parameters of the Galactic diffuse emission and isotropic background free, and the fit resulted in an integrated flux for $\eta$ Carinae of F$_{\rm 10-300GeV}=(5.06\pm0.52) \times 10^{-10}$ ph cm$^{-2}$ s$^{-1}$ and an index $\Gamma=-2.28 \pm 0.15$, with a TS value of 366 (significance $\gtrsim 19 \sigma$). These results represent an average over the whole time period of nearly seven years and are in good agreement with those reported in \cite{2015A&A...577A.100R}. The slightly lower flux value could be explained by the lower-than-average flux in the data of the previous year \citep{2015AAS...22534415C}.
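The normalization of the PL model above and the conversion between TS and Gaussian significance (for one additional free parameter, significance $\simeq\sqrt{\rm TS}$) can be verified numerically; a sketch:

```python
import math

# PL model from the text: dN/dE = N (G+1) E^G / (Emax^(G+1) - Emin^(G+1))
def dnde(E, N, gamma, emin, emax):
    return N * (gamma + 1.0) * E**gamma / (emax**(gamma + 1.0) - emin**(gamma + 1.0))

# Numerical check: integrating dN/dE over [Emin, Emax] recovers the
# integrated photon flux N (the fitted seven-year average is used here)
N, gamma, emin, emax = 5.06e-10, -2.28, 10.0, 300.0   # ph/cm^2/s, GeV
n = 20000
es = [emin + (emax - emin) * i / n for i in range(n + 1)]
flux = sum(0.5 * (dnde(es[i], N, gamma, emin, emax)
                  + dnde(es[i + 1], N, gamma, emin, emax)) * (es[i + 1] - es[i])
           for i in range(n))

# TS to Gaussian significance for one extra free parameter
sigma = math.sqrt(366.0)   # ~19.1, matching the quoted >~19 sigma
```

The same conversion gives the $\mathrm{TS}>100\leftrightarrow{\sim}10\sigma$ threshold used for the source selection.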
The resulting normalization factor of the Galactic diffuse emission differs significantly from unity. This is a known problem in the Carina region. Running the fit again, forcing the normalization of the Galactic diffuse emission component to unity, we obtain a $22\%$ higher integrated flux for $\eta$ Carinae, while the $\Gamma$ index remains unchanged. On the contrary, the isotropic background does not present specific issues on the Galactic plane, so we decided to freeze it to unity and to let only the Galactic component normalization vary. Running the likelihood again, the fit yields a Galactic normalization of $1.95\pm0.06$, a slightly lower integrated flux for $\eta$ Carinae of F$_{\rm 10-300GeV}=(5.00\pm0.51) \times 10^{-10}$ ph cm$^{-2}$ s$^{-1}$, and an index $\Gamma=-2.27 \pm 0.15$, for a significance of nearly $19\sigma$. Finally, as we do not expect any temporal variation of the Galactic diffuse emission, we kept its normalization frozen to the seven-year average for the subsequent fits. \begin{table}[] \caption{Time bin intervals for the variability likelihood analysis and the corresponding orbital phases, using the \cite{2005AJ....129.2018C} ephemeris.} \centering \begin{tabular}{c|ccc} \hline \hline \noalign{\smallskip} Bin & MET Start & MET Stop & $\eta$ Carinae orbital phase \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1 & 239557417 & 263317417 & [0.921,1.057]\\ 2 & 263317417 & 287077417 & [1.057,1.193]\\ 3 & 287077417 & 321608617 & [1.193,1.390]\\ 4 & 321608617 & 356139817 & [1.390,1.588]\\ 5 & 356139817 & 390671018 & [1.588,1.785]\\ 6 & 390671018 & 414431018 & [1.785,1.921]\\ 7 & 414431018 & 438191018 & [1.921,2.057]\\ 8 & 438191018 & 461951018 & [2.057,2.193]\\ \noalign{\smallskip} \hline \end{tabular} \label{tab:time_bin} \end{table} We then refined our analysis by dividing the photon sample into more time intervals.
The duration of the bins was chosen to obtain a clear detection (TS $\gtrsim$ 16) of $\eta$ Carinae in all time intervals and to obtain a sequence repeating from one orbit to the next. We split the light curve into two periods, one for periastron and one for apastron, and subsequently applied a static binning in each of them. The final eight time bins are reported in Table~\ref{tab:time_bin}. We attributed to each photon a phase calculated from the periastron times using ${\rm JD} = 2450799.792+{\rm N_p}\cdot(2024\pm 2)$ \citep{2005AJ....129.2018C}, where ${\rm N_p}$ counts the successive periastrons. A more recent analysis \citep{2008MNRAS.384.1649D} suggests a shorter orbital period of $2022.7 \pm 1.3$ days, but such a small variation does not have any impact on our results. We then ran another likelihood analysis for each time bin but, given the shorter exposures, we detected a very low significance for the source J1043 in the 4th, 5th, and 8th bins and, consequently, decided to fix its contribution in those bins to the average value reported in the 3FGL catalogue. The light curve of the high-energy flux of $\eta$ Carinae obtained from the likelihood analysis is reported in Fig.~\ref{fig:HE_lc}. After the first periastron passage of 2009, the flux of \object{$\eta$ Carinae} decreased slightly towards apastron. The flux did, however, not increase again toward the periastron of 2014. The last two bins of this light curve can be directly compared with the first two, having the same exposures and orbital phases. In order to search for any faster variability we reduced the temporal size of the bins, which confirmed the absence of any excess during the second periastron. \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics{29640f1.pdf}} \caption{Seven-year high-energy flux light curve of \object{$\eta$ Carinae} obtained from the binned analysis, using the binning reported in Table \ref{tab:time_bin} (blue points) and a smaller binning (red points).
Error bars are $1\sigma$ and the superposed upper limits are 95\%. For comparison we plot an arbitrarily rescaled X-ray light curve (grey line).} \label{fig:HE_lc} \end{figure} When performing the fit in each bin, leaving the spectral index $\Gamma$ free, we observe that the resulting values are constant within the uncertainties. When performing the same analysis fixing $\Gamma$ to its average value, we found that the resulting fluxes changed by a few percent at most, i.e. much less than the statistical uncertainties. Therefore the light curve presented in Fig. \ref{fig:HE_lc} does not depend significantly on the exact spectral shape assumed. Given the relatively low event statistics at high energy, as a countercheck we also performed an unbinned analysis \citep{1996ApJ...461..396M} on the same sample of photons. The results that we obtained are F$_{\rm 10-300GeV}=(4.74\pm0.49) \times 10^{-10}$ ph cm$^{-2}$ s$^{-1}$ and an index $\Gamma=-2.29 \pm 0.15$. Both the flux light curve and the spectral index trend are in very good agreement with the results of the binned analysis. For comparison we also ran an analysis with a smaller binning, as reported in Fig.~\ref{fig:HE_lc}. These results show larger uncertainties but are consistent with the previous analysis, even if on a few occasions the smaller statistics did not yield a firm detection of the source and only reached a $95\%$ upper limit. We gave special attention in our analysis to J1043. In the Fermi 3FGL catalogue a relatively hard spectrum is reported for this source. At low energy its flux is more than one order of magnitude lower than that of \object{$\eta$ Carinae}, while it reaches nearly one-third of that flux above 10 GeV. Given its small distance ($\lesssim 2r_{68}$ PSF), we analysed the impact that a wrong representation of this source could have on the flux of \object{$\eta$ Carinae}.
To set an upper limit on the possible systematic error introduced, we considered the following two cases. We first ran a likelihood analysis keeping the J1043 parameters frozen to their seven-year average values, which gave us a light curve for \object{$\eta$ Carinae} with flux values reduced by $5\%$. Then, as a countercheck, we completely removed J1043 from the model, obtaining a biased light curve for \object{$\eta$ Carinae} with values up to $11\%$ higher. All these results are in agreement with the TS maps we obtained for each single bin (see Fig.~\ref{fig:tsmap}). \begin{figure*}[] \resizebox{\hsize}{!}{\includegraphics{29640f2.pdf}} \caption{High-energy TS maps for each of the time intervals, corrected for the small differences in exposure time between the 8 phase bins such that the images illustrate the source variability. Each image has the same width $(0.77^{\circ})$. Rows a) and b) show TS maps obtained from the binned analysis, respectively including and excluding J1043 from the model. Rows c) and d) represent the same as a) and b) but for the unbinned analysis. The linear colour map spans TS from 0 to 100. The green and purple crosses mark the positions of $\eta$ Carinae and J1043.} \label{fig:tsmap} \end{figure*} \begin{table}[] \caption{Best-fit coordinates of \object{$\eta$ Carinae} obtained from \texttt{gtfindsrc} with $1\sigma$ uncertainties. Distances are relative to the nominal position of \object{$\eta$ Carinae}.} \centering \begin{tabular}{c|ccccc} \hline \hline \noalign{\smallskip} Bin & R.A.
& Dec & $1\sigma$ $[']$ & Dist $[']$ & TS \\ \noalign{\smallskip} \hline \noalign{\smallskip} 7 years & 161.28 & -59.70 & 0.6 & 0.8 & 368 \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1 & 161.28 & -59.71 & 1.3 & 1.5 & 92 \\ 2 & 161.23 & -59.72 & 1.5 & 2.4 & 51 \\ 3 & 161.29 & -59.67 & 1.3 & 1.1 & 59 \\ 4 & 161.39 & -59.71 & 4.0 & 4.0 & 18 \\ 5 & 161.29 & -59.68 & 1.2 & 0.9 & 67 \\ 6 & 161.30 & -59.69 & 1.9 & 1.0 & 34 \\ 7 & 161.27 & -59.71 & 1.8 & 1.4 & 20 \\ 8 & 161.30 & -59.69 & 2.2 & 1.2 & 29 \\ \noalign{\smallskip} \hline \end{tabular} \label{tab:gtfindsrc} \end{table} The average seven-year TS map perfectly matches the PSF of the instrument, while in the shorter time bins the emission from \object{$\eta$ Carinae} is often broadened. So far in our analysis we have always kept the location of \object{$\eta$ Carinae} fixed to its nominal coordinates. Exploiting the unbinned analysis and the \texttt{gtfindsrc} tool, we left the spatial coordinates as free parameters in the fit and looked for the coordinates that maximize the likelihood. Such an analysis has been performed in each single bin. The results are shown in Fig.~\ref{fig:gtfindsrc} and reported in Table~\ref{tab:gtfindsrc}. The eight error circles of \object{$\eta$ Carinae} are represented for each time bin, while the black circle indicates the result obtained by running \texttt{gtfindsrc} on the whole seven-year data sample. The radius of each circle represents the $1\sigma$ error. The nominal position of \object{$\eta$ Carinae}, shown by the black cross, is in very good agreement with the results of the Fermi analysis, being within the error circle more than 78$\%$ of the time. \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics{29640f3.pdf}} \caption{$1\sigma$ error circles for $\eta$ Carinae derived from the analysis of each time bin. Labels identify the time bins as in Table \ref{tab:time_bin}; the black circle refers to the whole 7 years of data.
The black cross shows the nominal position of $\eta$ Carinae. As for Fig.~\ref{fig:tsmap}, the image is in Galactic coordinates with North up and longitude increasing towards the left.} \label{fig:gtfindsrc} \end{figure} \subsection{Low-energy emission: 0.3 - 10~GeV} \label{sec:low_energy} Extending the analysis to lower energies, the PSF becomes broader and the effective area, acceptance, and energy resolution worsen, making the analysis more challenging. In particular, it requires us to enlarge the ROI and consequently to increase drastically the number of sources, the number of free parameters, the uncertainties (also related to the imperfect Galactic diffuse emission model), and the computational requirements. For these reasons we chose a lower bound of 300 MeV for our analysis, as a compromise. This choice also keeps the flux systematic uncertainty below $5\%$\footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/\\/Pass8\_edisp\_usage.html}. As the photon statistics are much better at lower energies, we used only a binned analysis, similar to that described in Sect. \ref{sec:high_energy} but with the following adaptations. As the total (front+back) $95\%$ containment angle above 300 MeV is now smaller than $7^{\circ}$ (on-axis), we took into account all photons arriving within $14^{\circ}$ of the nominal position of $\eta$ Carinae and created a sky model including all sources in the ROI enlarged by $7^{\circ}$. We left the normalization parameters free to vary for all sources within $7^{\circ}$ of $\eta$ Carinae with an average TS $>$100 or presenting a variability index higher than 72.44 (see Sect.~\ref{sec:high_energy} for explanation). As the flux of J1043 is much smaller than that of $\eta$ Carinae at low energy, we kept all its parameters frozen to the values given in the 3FGL catalogue.
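The free-normalization criterion described above (sources within $7^{\circ}$, with average TS $>$ 100 or variability index $>$ 72.44) can be sketched as a simple filter. This is an illustrative sketch only: the thresholds come from the text, but the catalogue data structure and the source entries are hypothetical.

```python
# Hypothetical catalogue entries: separation from eta Carinae [deg],
# average TS, and 3FGL variability index. Values are made up for illustration.
catalogue = [
    {"name": "src_A", "sep_deg": 2.1, "avg_ts": 250.0, "var_index": 40.0},
    {"name": "src_B", "sep_deg": 5.5, "avg_ts": 30.0,  "var_index": 90.0},
    {"name": "src_C", "sep_deg": 6.9, "avg_ts": 50.0,  "var_index": 50.0},
    {"name": "src_D", "sep_deg": 9.0, "avg_ts": 500.0, "var_index": 10.0},
]

def free_normalization(src, radius_deg=7.0, ts_min=100.0, var_min=72.44):
    """A source's normalization is left free if it lies within `radius_deg`
    of eta Carinae and is either bright (TS > ts_min) or significantly
    variable (variability index > var_min), as in the text."""
    return src["sep_deg"] < radius_deg and (
        src["avg_ts"] > ts_min or src["var_index"] > var_min)

free = [s["name"] for s in catalogue if free_normalization(s)]
print(free)  # src_A (bright) and src_B (variable); src_C fails both, src_D is too far
```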
Even though the extended pulsar wind nebula HESS J1303-631 lies more than $16^{\circ}$ away from $\eta$ Carinae, we included an appropriate extended\footnote{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/4yr$\_$catalog/\\/LAT$\_$extended$\_$sources$\_$v15.tgz} representation in the model. We checked again the FAVA \citep{2013ApJ...771...57A} weekly flare list, found seven events within our ROI, and verified that all were effectively associated with sources whose normalizations were left free to vary. To obtain a rough idea of the spectral shape of $\eta$ Carinae, we split our analysis into separate energy bins, in each of which the spectrum can be locally approximated by a simple PL, while keeping sufficient statistics to detect our source with at least $5\sigma$. We defined five logarithmically equal energy bins, from 300 MeV up to 100 GeV, and performed a separate likelihood analysis in each of them for the different orbital phase intervals. In the most energetic bin [30~GeV-100~GeV] $\eta$ Carinae did not always reach the required TS, and in these cases we merged the 4th and 5th energy bins in the analysis. We computed the integral energy flux for each bin, converted the fluxes to luminosities (assuming a distance of 2.3 kpc), and plotted them in Fig.~\ref{fig:espec} (later described in Sect.~\ref{sec:simulation}) for two orbital phase bins (0.92-1.05 and 0.39-0.59), corresponding to periastron and apastron, respectively. In these plots, the centre of each point is not a logarithmic average, but is computed as a weighted average using the energy-dependent function resulting from each fit as the weight. The result of this analysis indicates that the low-energy spectrum of $\eta$ Carinae features some curvature that could be represented locally, for example, by a cutoff power law or a broken power law, and that an excess could be observed in some spectra above 10 GeV.
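The flux-to-luminosity conversion used above is the standard isotropic one, $L = 4\pi d^2 F$. A minimal sketch, assuming the 2.3 kpc distance quoted in the text; the example flux value is a placeholder, not a fitted result:

```python
import math

KPC_CM = 3.0857e21           # centimetres per kiloparsec
D_CM = 2.3 * KPC_CM          # adopted distance to eta Carinae

def energy_flux_to_luminosity(flux_erg_cm2_s, d_cm=D_CM):
    """Isotropic luminosity L = 4 * pi * d^2 * F."""
    return 4.0 * math.pi * d_cm**2 * flux_erg_cm2_s

# Example: a hypothetical integral energy flux of 1e-10 erg/cm^2/s
L = energy_flux_to_luminosity(1e-10)
print(f"{L:.2e} erg/s")      # ~6.3e34 erg/s at 2.3 kpc
```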
We therefore performed the complete analysis, in the 0.3-10 GeV band, assuming a power law spectrum with an exponential cutoff (PLEC), ${\rm dN/dE} = {\rm N}_{\rm 1GeV}\, {\rm (E/1~GeV)}^\Gamma\, {\rm e}^{\rm -E/E_c}$, for $\eta$ Carinae, where ${\rm N}_{\rm 1GeV}$ is the normalization in units of ph~cm$^{-2}$~s$^{-1}$~MeV$^{-1}$, $\Gamma$ is the power law index, and E$_c$ is the cutoff energy. To perform the analysis in the orbital phase bins we always fixed the Galactic and extragalactic diffuse emission normalizations to their seven-year averaged values (N$_{\rm Gal}=0.962 \pm 0.002$ and N$_{\rm Exgal} = 1.20 \pm 0.03$). Leaving these two normalizations free affects the flux of $\eta$ Carinae by less than $5\%$. The difference in the diffuse emission normalizations obtained between the high- and low-energy analyses is related to the very different sizes of the respective ROIs. \begin{figure}[b] \resizebox{\hsize}{!}{\includegraphics{29640f4.pdf}} \caption{Integrated flux light curve of $\eta$ Carinae assuming different spectral models: PLEC with three (green) or two (blue) free parameters and BPL with three (red) or two (magenta) free parameters. Error bars are $1\sigma$.} \label{fig:PLEC} \end{figure} The values of the photon index and energy cutoff are correlated and, taken individually, not very meaningful. Figure~\ref{fig:PLEC} compares the integrated flux obtained with the PLEC model with all parameters free or fixed to the average of $\Gamma_{PLEC}=-2.12$ and with a broken power law (BPL) model with the low-energy spectral index left free or fixed to the average of $\Gamma_{BPL}=-2.14$. The low-energy $\gamma$-ray flux light curve does not change significantly and is therefore a good measure of the emission of $\eta$ Carinae. Figure~\ref{fig:simul} shows the combined result of the binned high-energy analysis obtained in Sect.~\ref{sec:high_energy} together with the results derived here for the low-energy band (assuming a distance of 2.3 kpc).
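The PLEC spectral model fitted above can be written as a one-line function. A sketch with illustrative parameter values: the photon index $-2.12$ is the average quoted in the text, while the normalization and cutoff energy below are placeholders, not fitted values:

```python
import math

def plec(E_GeV, N0, gamma, E_cut_GeV):
    """Power law with exponential cutoff:
    dN/dE = N0 * (E / 1 GeV)**gamma * exp(-E / E_cut)."""
    return N0 * E_GeV**gamma * math.exp(-E_GeV / E_cut_GeV)

# Illustrative values: average photon index -2.12 from the text,
# hypothetical normalization [ph/cm^2/s/MeV] and cutoff energy [GeV].
N0, gamma, E_cut = 1e-11, -2.12, 10.0
for E in (0.3, 1.0, 3.0, 10.0):
    print(f"E = {E:5.1f} GeV  dN/dE = {plec(E, N0, gamma, E_cut):.3e}")
```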
As the data above 10 GeV for the second periastron fail to reproduce the high flux levels detected during the first periastron, the light curve of the low-energy band is now of prime importance to confirm the orbital variability of the $\gamma$-ray emission. The probability of obtaining an orbital flux variation by chance, as reported in Fig.~\ref{fig:PLEC}, is lower than 5$\times$10$^{-9}$ (5.9$\sigma$). Finally, as the low-energy $\gamma$-ray variability appears similar for the two periastrons (see Fig.~\ref{fig:PLEC}) and the spectra are well compatible within $2\sigma$ (see Fig.~\ref{fig:espec}), we used the good statistics available to perform a merged analysis of these two periods on even shorter time bins. We merged the data covering the phase ranges [0.92-1.19] and [1.92-2.19] and split the data according to phase bin intervals of 2\% (i.e. $\sim$ 40 days). We performed a single likelihood analysis in each time bin, representing the emission of $\eta$ Carinae with a simple PL. $\eta$ Carinae was detected above 12$\sigma$ in every bin, increasing the detection significance by up to 50\% compared to a single periastron analysis. We also performed the analysis by shifting the central phase point of each bin by half of its width to obtain a better sampling of the variability. The results of both analyses are shown in Fig.~\ref{fig:peri}. We also performed similar analyses increasing the lower energy threshold to 600 MeV and 1 GeV to reduce the energy range, exploiting the better PSF at higher energies and increasing at the same time the robustness of the model approximations. The results showed exactly the same variability trend and similar likelihoods were obtained. We added in Fig.~\ref{fig:peri} the results obtained previously for two broad bins adjacent to the periastron period.
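The 2\% phase binning used for the merged periastron analysis above can be sketched as follows. The orbital period of $\sim$2023 days (so that 2\% of a phase is $\sim$40 days, consistent with the text) and the reference epoch are assumptions chosen for illustration only:

```python
P_DAYS = 2023.0      # assumed orbital period: 2% of phase ~ 40 days, as in the text
T0_MJD = 50799.0     # assumed periastron reference epoch (illustrative)

def orbital_phase(t_mjd, t0=T0_MJD, period=P_DAYS):
    """Orbital phase, not folded to [0, 1), so ranges such as
    [0.92-1.19] and [1.92-2.19] refer to successive orbits."""
    return (t_mjd - t0) / period

def phase_bin(phase, width=0.02, offset=0.0):
    """Index of a phase bin of the given width; `offset` shifts the bin
    centres by half a width to sample the variability twice as densely."""
    return int((phase - offset) // width)

print(0.02 * P_DAYS)          # one 2% bin lasts ~40 days
print(phase_bin(0.931))       # the bin containing phase 0.931
```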
\section{Fermi-LAT data analysis} Launched on 2008 June 11, the Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is the most sensitive $\gamma$-ray telescope to date, covering an energy range from 20 MeV to 300 GeV~\citep{2009ApJ...697.1071A}. The LAT is characterized by a large field of view ($2.4$ sr at 1~GeV) and collecting area ($\sim6500$ cm$^2$ at 1~GeV), a low deadtime ($<100~\mu$s per event), a high time resolution ($<10~\mu$s), and an energy-dependent point-spread function (PSF) improving from $\sim5^{\circ}$ ($68\%$ containment) at 100~MeV to $\sim0.1^{\circ}$ at 40~GeV. The LAT consists of a charged particle tracker, a calorimeter, and an anti-coincidence system. The electron-positron pair conversion tracker is made of 36 layers of silicon strip detectors to track charged particles, interleaved with 16 layers of tungsten foil to facilitate the conversion of $\gamma$-rays to pairs; these tungsten layers comprise 12 thin layers in the front section followed by 4 thick layers in the back, of 0.03 and 0.18 radiation lengths, respectively. The calorimeter, located at the bottom of the instrument, has $\sim8.5$ total radiation lengths of caesium iodide to measure the total event energy. Given the intense background of charged particles from cosmic rays and trapped radiation at the orbit of the Fermi satellite, the instrument is protected by a segmented anti-coincidence detector used to reject charged-particle background events. More information about the LAT is provided in~\cite{2009ApJ...697.1071A}; the LAT in-flight calibration is described in~\cite{2009APh....32..193A}, \cite{2012ApJS..203....4A}, and \cite{2012APh....35..346A}. We analysed the Fermi-LAT data from the beginning of the regular survey-mode observations on 2008 August 4 until 2015 July 1, more precisely mission elapsed time (MET): 239557417 to 457485024 s. We used only \textit{source} class data (i.e.
reconstructed events with high probability of being photons) that have been reprocessed with the PASS8\footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/\\/Pass8$\_$usage.html} pipeline and subsequently analysed using the Fermi Science Tool v10r0p5 package\footnote{Available on the Fermi Science Support Centre (FSSC) website: http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}. With respect to the previous PASS7, the new PASS8 introduced improvements in the reconstruction of the event direction, energy measurement, event selection, ghost handling, and track/anti-coincidence detector matching information. All of these yield an enhancement of the overall performance of the telescope. These enhancements include higher acceptance, a larger field of view, a smaller PSF, better energy resolution, deeper differential sensitivity, smaller systematic uncertainties, and the introduction of four subgroups of PSF and energy dispersion to improve even further the containment radius and energy resolution at the expense of the statistics. We selected data with \texttt{evclass=128}, corresponding to the source class, and \texttt{evtype=3}, indicating that photons converted in both the front and the back part of the LAT were selected. We followed the analysis recommendations of the Fermi-LAT team for the time and event selection, rejecting photons with apparent zenith angle greater than $90^{\circ}$ in order to minimize the background due to the atmospheric $\gamma$-rays originating from the Earth's limb, which lies at a zenith angle of $\sim113^{\circ}$. We did not perform a zenith angle cut based on the region of interest (ROI) with \texttt{gtselect}, but did correct the exposure with the \texttt{zmax=90} option in the \texttt{gtltcube} tool.
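The mission elapsed times quoted above can be converted to calendar dates from the Fermi MET reference epoch (2001-01-01 00:00:00 UTC). A minimal sketch that neglects the handful of leap seconds accumulated since 2001, which shifts the result by only a few seconds:

```python
from datetime import datetime, timedelta

MET_EPOCH = datetime(2001, 1, 1)  # Fermi mission elapsed time reference epoch

def met_to_utc(met_seconds):
    """Approximate MET -> UTC conversion (leap seconds neglected)."""
    return MET_EPOCH + timedelta(seconds=met_seconds)

print(met_to_utc(239557417).date())  # 2008-08-04, start of the data set
print(met_to_utc(457485024).date())  # 2015-07-01, end of the data set
```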
With its orbital period of 96.5 minutes, its maximal rocking angle of $60^{\circ}$, and its precession period of 53.4 days, Fermi-LAT spends more than $80\%$ of its operative time in survey mode, providing a uniform coverage of the sky every two orbits ($\sim3$ hours). As a consequence, our target of interest is not always visible, so, knowing the exact position and orientation of the satellite from the spacecraft files at each time, we ran \texttt{gtmktime} to obtain all the good time intervals in which to perform our analysis; we also excluded the readout dead-time $(\sim9\%)$ and the time intervals corresponding to the South Atlantic Anomaly ($\sim13\%$) when data taking is suspended. Finally, data are filtered to accept only those events flagged with \texttt{(DATA\_QUAL==1) \&\& (LAT\_CONFIG==1)}, which exclude bad quality events and instrument configurations not recommended for scientific analysis, respectively. Given the distance between the apparent position of the Sun during the year and the nominal position of $\eta$ Carinae, we can safely neglect the $\gamma$-ray contribution from our local star. The same is also valid for the Moon. The instrument response function (IRF) of the Fermi-LAT, i.e. the description of the instrument performance provided for data analysis, strongly depends on the energy~\citep{2012ApJS..203....4A}; thus to better exploit its performance we made a separate analysis for photons above and below 10 GeV. This threshold was chosen because of the spectral shape of $\eta$ Carinae, as explained in the third paragraph of Sect. \ref{sec:low_energy}. \input{29640s2a.tex} \input{29640s2b.tex} \section{Comparison with simulations} \label{sec:simulation} \cite{2011ApJ...726..105P} presented three-dimensional hydrodynamical simulations of $\eta$ Carinae including radiative driving of the stellar winds \citep{1975ApJ...195..157C}, optically thin radiative cooling \citep{2000adnx.conf..161K}, gravity, and orbital motion.
The main aim of these simulations was to reproduce the X-ray emission by analysing the emissivity and self-obscuration of the stellar wind. The simulations reproduced the observed X-ray spectra and light curves reasonably well, except for the extended post-periastron X-ray minimum, where the flux was overestimated and the wind collision disruption was inhibited. Additional gas cooling, for example by particle acceleration and inverse-Compton processes, could increase the cooling and disruption of the central wind collision zone. \cite{2011ApJ...726..105P} provided us with the results of their simulations, i.e. temperature, density, and three-dimensional velocities in the cells of the adaptive mesh for various orbital phases. To estimate the non-thermal emission we first calculated the maximum energies that could be reached by electrons and hadrons \citep[as in][]{2011A&A...526A..57F} cell by cell, assuming a dipolar magnetic field at the surface of the main star, perpendicular to the orbital plane (reality is probably more complex, with both stars contributing). The magnetic field is the only additional parameter that can be tuned. We calculated shock velocities and mechanical power in every cell, including those outside the shock region. As expected, most of the shock power is released on both sides of the wind collision zone and in the cells downstream of the wind-collision region \citep{2006ApJ...644.1118R}. The increasing shock area compensates for the loss of the released energy density up to a relatively large distance from the centre of mass, explaining why the X-ray luminosity at apastron is about a third of the peak emission at periastron. The energy available in electrons and hadrons was then summed in the ranges 0.3 $< \rm{E}_e <$ 10 GeV and $\rm{E}_p>20$ GeV, respectively, to match the spectral bands observed by Fermi-LAT.
The local cell physical properties can be used to easily estimate pion production as long as the Larmor radius is smaller than the cell size. The minimum size of the cells in the simulation is $\sim10^{11}$ cm, which is larger than the proton Larmor radius for Lorentz factors up to 10$^5$. Only one-third of the power accelerating protons is available to produce $\gamma$-rays through the neutral pion channel. Electron cooling and pion decay occur instantaneously when compared to other timescales. To consider the possible effects of photon-photon opacity we calculated the X-ray thermal emission in each cell and evaluated the optical depth along different lines of sight. As the current orientation of the binary system with respect to the Earth still presents some uncertainties \citep{2012ApJ...746L..18M}, we used several possible directions; this provided optical depths $\tau$ varying between $\sim10^{-6}$ at apastron and $\sim10^{-2}$ at periastron. This excludes explaining the 1-100 GeV spectral shape by the effects of photon-photon absorption \citep{2012A&A...544A..98R}. The thermal emission increases towards periastron. The mechanical luminosity available in the shock also increases towards periastron and almost doubles in the phase range $\approx1.05-1.15$. The latter peak corresponds to a bubble with reverse wind conditions developing because of the orbital motion, effectively doubling the shock front area during about a tenth of the orbit \citep[see Fig. 9 of][]{2011ApJ...726..105P}. The density of this bubble is low, so its thermal emission $(\propto \rm{density}^2)$ does not contribute significantly to the X-ray light curve. The mechanical luminosity shows a local minimum between phases 1.0 and 1.05, when the central part of the wind collision zone is disrupted. \begin{figure}[] \resizebox{\hsize}{!}{\includegraphics{29640f5.pdf}} \caption{Simulated and observed X-ray and $\gamma$-ray light curves of $\eta$ Carinae.
The black and purple lines and bins show the predicted inverse-Compton and neutral pion decay light curves. The green and red points show the observed Fermi-LAT light curves at low (0.3-10 GeV) and high (10-300 GeV) energies. The dim grey light curves show the observed (continuous) and predicted (dashed, without obscuration) thermal X-ray light curves. Error bars are $1\sigma$.} \label{fig:simul} \end{figure} Electron cooling, through inverse-Compton scattering, is very efficient, and such $\gamma$-rays are expected to peak just before periastron. A secondary inverse-Compton peak could be expected above phase 1.05, although its spectral shape could be very different, as the UV seed thermal photons are of lower density than at the location of the primary shock close to the centre of the system. In our simplified model we assumed that the spectral shape of the seed photons is the same in all cells of the simulation (the r$^{-2}$ dependency is taken into account) and that these soft photons are sufficient to cool down all the relativistic electrons. The relative importance of the second peak, however, depends on the magnetic field geometry; radiation transfer, which is neglected in our model; obscuration; and details of the hydrodynamics, which do not represent the soft X-ray observations very well in this phase range. These details are not well constrained by the available observations and we did not try to refine them. The situation is different for hadrons. Unless the magnetic field is very strong ($>$ kG), hadronic interactions mostly take place close to the centre, and a single peak of neutral pion decay is expected before periastron. Figure \ref{fig:simul} shows the X and $\gamma$-ray light curves predicted by the simulations for a magnetic field of 500 G, assuming that 1.5\% and 2.4\% of the mechanical energy is used to accelerate electrons and protons, respectively.
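The Larmor-radius argument made above for the pion-production estimate can be verified numerically. A sketch in Gaussian units, taking the quoted minimum cell size of $10^{11}$ cm and, as an assumption for illustration, the 500 G surface field used in the light-curve figure:

```python
# Proton Larmor radius r_L = gamma * m_p c^2 / (q B) for v ~ c (Gaussian units).
M_P_C2 = 1.5033e-3      # proton rest energy [erg]
Q_E = 4.8032e-10        # elementary charge [esu]

def larmor_radius_cm(gamma, B_gauss):
    return gamma * M_P_C2 / (Q_E * B_gauss)

CELL_CM = 1e11          # minimum cell size of the hydrodynamical simulation
r = larmor_radius_cm(1e5, 500.0)   # Lorentz factor 1e5, B = 500 G (assumed)
print(f"{r:.2e} cm")               # ~6e8 cm, well below the cell size
assert r < CELL_CM
```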
To ease the comparison between observations and simulations, the results of the latter were binned in the same way as the observed data. The thermal X-ray emission matches the observations pretty well \citep[by construction,][]{2011ApJ...726..105P}. For the simulated curve in Fig. \ref{fig:simul}, we used the orbit-IA model by Parkin, which uses instantaneous acceleration and not radiative driving of the winds. In addition, the thermal X-ray light curve does not take self-obscuration into account and therefore does not match the observations around periastron. The predicted $\gamma$-ray emission induced by the hadrons and electrons is also at the right level, although significant differences exist between simulations and observations. \begin{figure}[b] \resizebox{\hsize}{!}{\includegraphics{29640f6.pdf}} \caption{Merged Fermi LAT analysis (0.3-10 GeV) of the two periastrons for narrow time bins. The two broad bins and the black curve are the same as in Fig. \ref{fig:simul}.} \label{fig:peri} \end{figure} \begin{figure*}[ht] \resizebox{\hsize}{!}{\includegraphics{29640f7.pdf}} \caption{Electron and photon luminosity spectra at periastron (phase 0.92-1.06; left) and at apastron (phase 0.39-0.59; right). The top panels show the spectra (arbitrary units) of the electrons accelerated in the wind of the primary (green) and secondary (blue) stars and their sum (red). The lower panels show the inverse-Compton emission of both components and the total emission, under the highly simplified assumption that the inverse-Compton parameters (geometry and soft photon spectra) are the same in all cells. The black and grey points are the broadband fluxes derived from Fermi data for the first and the second orbital cycle, respectively.
The simulation results were averaged over the orbital phase range corresponding to the periastron observation, as the electron spectra vary quickly during that interval.} \label{fig:espec} \end{figure*} Both the predicted inverse-Compton emission and the observed (0.3-10 GeV) LAT light curve show a broad peak extending on both sides of periastron, as expected from the evolving shock geometry. The amplitude of the variability in the simulation depends on the number/size of those cells where particles can be accelerated up to relevant energies, which in turn depends on the magnetic field. Probing the range suggested by \cite{2012SSRv..166..145W}, a surface magnetic field larger than 400~G provides a good match to the observations, while lower fields produce variations that are too large. In this work we did not consider any magnetic field amplification at the shock, which could obviously scale down the surface magnetic field required to obtain equivalent results. Assuming a field of 500 G for the rest of the discussion, the predicted flux at phase 1.1 is two times larger than observed. This discrepancy largely comes from the energy released in the inverted wind bubble after periastron. The ratio of the emission generated in the shocks on both sides of the wind collision zone is relatively constant along the orbit except at phase 1.1, where much more power is generated in the shock occurring in the wind of the secondary star. The inverted bubble might either be unstable in reality or might produce a significantly different inverse-Compton spectrum. Relativistic electrons immersed in such a high magnetic field produce synchrotron radiation at low energies. The ratio of the energy that electrons lose via synchrotron and inverse-Compton processes is equal to the ratio of the magnetic field energy density to the photon field energy density, i.e. $P_{\rm synch}/P_{\rm IC}=U_B/U_{\rm rad} = \frac{B^2\, 4\pi R^2 c}{8\pi L} \approx 7.8 \times 10^{-31} \cdot B[{\rm G}]^2 \cdot R[{\rm cm}]^2$.
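The numerical coefficient in the ratio above follows directly from $U_B = B^2/8\pi$ and $U_{\rm rad} = L/(4\pi R^2 c)$. A sketch that reproduces it, assuming $L \approx 5\times10^{6}\,L_{\sun}$ for the luminosity of the system (the value implied by the quoted coefficient, an assumption of this sketch):

```python
import math

C = 2.9979e10                 # speed of light [cm/s]
L_SUN = 3.846e33              # solar luminosity [erg/s]
L = 5.0e6 * L_SUN             # assumed luminosity of eta Carinae [erg/s]

def psync_over_pic(B_gauss, R_cm, L_erg_s=L):
    """P_synch / P_IC = U_B / U_rad with U_B = B^2/(8 pi)
    and U_rad = L / (4 pi R^2 c)."""
    u_b = B_gauss**2 / (8.0 * math.pi)
    u_rad = L_erg_s / (4.0 * math.pi * R_cm**2 * C)
    return u_b / u_rad

# Reproduces the coefficient quoted in the text: ~7.8e-31 * B^2 * R^2
print(psync_over_pic(1.0, 1.0))
```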
If we know the inverse-Compton spectrum, we can estimate the synchrotron peak luminosity, which around apastron turns out to be several orders of magnitude ($\sim10^6$) fainter than the inverse-Compton peak. The synchrotron emission peak should reach its maximum in the optical band only very close to periastron, where it is only two orders of magnitude fainter than the inverse-Compton peak. These limits are in agreement with the estimated radio upper limit \citep{2003MNRAS.338..425D}. Since the low-energy spectra during both periastrons are sufficiently in agreement (see Fig.~\ref{fig:espec}), we analysed simultaneously the Fermi LAT low-energy data derived from the two periastrons, binned in shorter time intervals (Fig.~\ref{fig:peri}). These data show a peak at periastron, a minimum at phase 1.02, and a second broad peak at phase 1.1. This is very similar to the prediction of the simulation for the inverse-Compton luminosity. The only notable exception is that the observed second broad peak is slightly shifted towards earlier phases and has a lower luminosity when compared to the simulation. The similarities between the observations and the simulation for the $\gamma$-ray peak and minimum, with consistent duration and amplitude, are very encouraging. The phase difference could be related to the eccentricity $(\epsilon=0.9)$ assumed in the simulation, which is not well constrained observationally \citep{2000ApJ...528L.101D,2001ApJ...547.1034C} and has an important effect on the inner shock geometry. Figure \ref{fig:espec} shows that the distribution of $\gamma_e$, weighted by the emissivity, is relatively smooth and that the expected photon distribution is very smooth. The difference in the electron spectral shape on both sides of the wind collision zone cannot explain the two-component $\gamma$-ray emission as suggested by \cite{2011A&A...530A..49B}, who assumed a simplified geometry.
We obtain a good match between the observed low-energy $\gamma$-ray spectrum and the predictions of the simulations at periastron, even though some discrepancy can be observed at apastron, where an excess is present between 2 and 10 GeV. The inverse-Compton emission peaks slightly below 1 GeV and does not extend beyond 10 GeV, at a level that is consistent with the observations during the first periastron, in contrast with the conclusions of \cite{2015MNRAS.449L.132O}, who attribute the full Fermi LAT detection to hadronic emission. Their simulations predict a smaller variation between periastron and apastron, a longer flare around periastron, and a deeper minimum when compared to the observed data. Such discrepancies might be due to the simplified geometry assumed by the authors and by the artificially reduced particle acceleration at periastron. Inverse-Compton emission and neutral pion decay \citep{2011A&A...526A..57F} therefore remain very good candidates to explain the Fermi observations. The fraction of the shock mechanical luminosity accelerating electrons appears to be slightly smaller than the fraction that accelerates protons. These results differ from the efficiencies derived from simulations of particle acceleration in supernova remnants \citep{2015PhRvL.114h5003P}, but those simulations involve low magnetic field, radiation energy, and particle densities, i.e. very different physical conditions from those found in $\eta$ Carinae. An instrument sensitive in the 1-100 MeV band would be able to discriminate between our model and the one proposed by \cite{2015MNRAS.449L.132O}. The simulated pion-induced $\gamma$-ray light curve and its variability amplitude show a single peak of emission centred at periastron, which is in good agreement with the Fermi LAT observations of the first periastron. The observations of the second periastron differ in that they show a weaker emission.
It has been suggested \citep[see also][]{2015arXiv150707961C} that the change of the X-ray emission after that periastron, for which a significant decrease can be observed in Fig. \ref{fig:simul}, was the signature of a change of the wind geometry, possibly because of cooling instabilities. A stronger disruption or a clumpier wind after the second periastron could perhaps induce a decrease of the average wind density and explain why fewer hadronic interactions and less thermal emission took place, without much affecting the inverse-Compton emission. \begin{figure}[h] \resizebox{\hsize}{!}{\includegraphics{29640f8.pdf}} \caption{Proton luminosity spectra (arbitrary units) at periastron (red; phase 1.0), at apastron (blue; phase 0.5), and accelerated on average along the orbit (black). } \label{fig:protons} \end{figure} Figure~\ref{fig:protons} shows the proton spectra obtained from the simulation at apastron, at periastron, and averaged over the orbit. Protons could be accelerated up to $10^{15}$ eV around periastron and reach $10^{14}$ eV on average. The choice of a lower magnetic field reduces those energies at apastron to $\sim6\times 10^{12}$~eV and $\sim 2 \times 10^{12}$~eV and at periastron to $\sim 5.6 \times 10^{14}$~eV and $\sim 1.9 \times 10^{14}$~eV for 300~G and 100~G, respectively. $\eta$ Carinae can therefore probably accelerate particles close to the knee of the cosmic-ray spectrum. The spectra and the maximum particle energy depend of course on several assumptions, in particular on the magnetic field. The highest energy $\gamma$-rays are photo-absorbed, and orbital modulation could be expected in the TeV domain. The duration of the periastron bin [0.92-1.06] corresponds to more than 260 days and is longer than the interaction timescale of the protons responsible for the flux variability.
$\gamma$-ray observations can probe the magnetic field and shock acceleration in detail; however, the quality of the current data above 1 GeV does not yet provide enough information to test hydrodynamical models including detailed radiation transfer (inverse-Compton, pion emission, and photo-absorption). The interplay between disruption and obscuration does not yet account for the X-ray minimum and orbit-to-orbit variability. More sensitive $\gamma$-ray observations will provide a wealth of information and allow us to test the conditions and physics of the shocks at a high level of detail, making $\eta$ Carinae a perfect laboratory to study particle acceleration in wind collisions. \section{Conclusions} We have used the hydrodynamic simulation of \cite{2011ApJ...726..105P}, which was developed to reproduce the thermal soft X-ray light curve of $\eta$ Carinae, and have estimated leptonic and hadronic Fermi acceleration, inverse-Compton emission, and neutral pion decay cell by cell, assuming a dipolar magnetic field at the surface of the primary star. The results of the simulation were compared with the light curves and spectra observed by Fermi LAT between mid-2008 and mid-2015. We increased the data sample by $\sim25$\% with respect to previous analyses and exploited the much better performance of the new PASS8 Fermi-LAT pipeline and of the updated instrument responses. We performed a low-energy and a high-energy analysis, from 300 MeV to 10 GeV and from 10 GeV up to 300 GeV, respectively, using the binned \citep{1979ApJ...228..939C} and unbinned analyses \citep{1996ApJ...461..396M}. We used different time bins and also performed a low-energy merged analysis combining data with the same orbital phases, when possible, to increase the signal-to-noise ratio. We looked for high- and low-energy flux variability of $\eta$ Carinae and analysed its spectral variations at different orbital phases.
We found that the accuracy of the simulation, even if simplified, matches the signal-to-noise ratio of the observations well. The comparison between simulation and observations led to several results. \begin{enumerate} \item The centroid of the $\gamma$-ray source observed by Fermi LAT is compatible with the position of $\eta$ Carinae within less than 1 arcmin. The low-energy (0.3-10 GeV) $\gamma$-ray light curve is modulated along the orbit and shows a very similar and highly significant modulation (5.9$\sigma$) during the periastrons of 2009 and 2014, indicating that it is driven by the orbital motion of the system. \item Around periastron the low-energy (0.3-10 GeV) $\gamma$-ray flux varied by nearly a factor of 2 in less than 40 days. A significant fraction of the $\gamma$-rays is therefore emitted by a source smaller than the Homunculus Nebula, in contrast with the hypothesis of \cite{2010ApJ...718L.161O}. \item The maximum-to-minimum flux ratio observed at low energy (0.3-10 GeV) is 1.53 considering broad phase bins and 1.92 considering bins of 40 days. This matches the results of the simulations assuming that the magnetic field at the surface of the primary star is larger than $\sim 400$ G. Smaller values of the magnetic field shrink the volume in which electrons can be accelerated to sufficient energies, increase the expected variability amplitude beyond the observed one, and decrease the expected $\gamma$-ray luminosity. \item A surface magnetic field larger than $\sim1$ kG would produce a secondary peak of emission after periastron that is stronger than the periastron peak, which is not observed. A large part of the secondary peak observed in the data is linked with a bubble with reversed wind conditions created after periastron and lasting for about a tenth of the orbit. We note that $\gamma$-ray observations together with improved simulations should allow us to constrain the magnetic field in the system even more accurately.
\item The primary maximum observed just before periastron perfectly matches the prediction of the simulation (amplitude, phase, and duration). The secondary peak occurs slightly earlier and with a lower amplitude than predicted. We assume that these discrepancies come from an inaccurate eccentricity and from the extremely simplified treatment of inverse-Compton scattering. The $\gamma$-ray observations should allow us to constrain the eccentricity of the orbit of $\eta$ Carinae more accurately than is possible with current optical observations. \item The amplitude and pattern of the low-energy (0.3-10 GeV) $\gamma$-ray variability correspond in general very well to the predictions. The luminosity of pion decay depends on the density, and a larger variability would be expected. The low-energy $\gamma$-rays are therefore very likely emitted by inverse-Compton emission, in contrast with the claims of \cite{2015MNRAS.449L.132O}. \item The match between the electron distribution predicted by the simulation and the observed cutoff energy, as well as the negligible photon-photon opacity due to the hot shocked gas in the wind collision as computed along different lines of sight, are strong arguments against the scenario suggested by \cite{2012A&A...544A..98R}. \item The $\gamma$-ray spectrum observed at apastron shows a discrepancy with the predictions assuming a simplified inverse-Compton treatment. This very likely indicates that the seed soft photon spectrum is not identical everywhere, as currently assumed by the simulations. Spectral variability therefore provides additional constraints on the shock geometry that can be used by more accurate simulations. \item The high-energy ($>10$ GeV) $\gamma$-ray component is poorly constrained by the observations. It was well detected during the periastron of 2009, but only weakly detected during the periastron of 2014 and at apastron.
The amplitude of variability and the level of the emission however match the expectations for pion decay, while inverse-Compton emission is ruled out at such energies, in contrast with the claims of \cite{2011A&A...530A..49B}. Both the high-energy $\gamma$-rays and the thermal X-ray emission were weaker during the second periastron, while the inverse-Compton emission was not affected much. This indicates that something peculiar happened in the densest region of the wind collision zone in 2014. Observations of the next periastrons with Fermi-LAT and the Cherenkov Telescope Array (CTA) are required to probe the high-energy component and the wind and shock geometry further through $\gamma$-ray pair conversion. \item With the constraints derived on the magnetic field, the simulations predict that $\eta$ Carinae should be a Pevatron, as this object is able to accelerate protons nearly up to $\sim10^{15}$ eV. Assuming that for each photon originating from hadronic processes we also have the production of one neutrino, we derive a neutrino flux above 10 TeV that might reach $10^{-9}$ GeV s$^{-1}$cm$^{-2}$ on average, which is of the order of the IceCube neutrino sensitivity for several years of observations \citep{2017ApJ...835..151A}. Stacking some months of periastron data over many orbits should in principle allow the detection of one PeV neutrino, well above the atmospheric background. \item The lack of statistics at high energy does not allow us to constrain any physical information about the hadronic spectrum. But it is evident that a pure leptonic scenario is not able to reproduce the high-energy spectrum observed during the first periastron. A strong $\gamma$-ray variability is expected above 100 GeV.
Depending on the assumed soft energy photon distribution and the consequent $\gamma$-$\gamma$ absorption at very high energy, $\eta$ Carinae could be detected by the CTA southern array (including four large-size telescopes) at more than $10\sigma$ in spectral bins of $\Delta {\rm E/E} = 20\%$ for exposures of 50 hours, which would be sufficient to measure separately the variability along the orbit of the high-energy component and of photo-absorption \citep{Acharya20133}. $\eta$ Carinae could yield up to $10^{48-49}$ erg in accelerated cosmic rays, which is close to the expectation for an average supernova remnant \citep{2016APh....81....1B}. \end{enumerate} $\eta$ Carinae is a wonderful laboratory to study particle acceleration in wind collisions. We have demonstrated that the data from Fermi match the simulation expectations, confirming that Fermi acceleration takes place and providing a new tool to diagnose magnetic fields, shock processes, and a complex geometry. Hadronic acceleration is likely, but the ultimate proof requires further observations. The evolution of the geometry along the orbit of $\eta$ Carinae provides a wealth of constraints that future observations and simulations will profit from.
\section{INTRODUCTION} Sampling theory of graph signals aims to recover the whole signal from partial observations of the original signal, which saves inference cost on a large graph. Various methods have been developed to reconstruct the original signal from noise-free samples\cite{chen2015discrete,marques2015sampling} or noisy observations\cite{xie2017design,anis2016efficient,tsitsvero2016signals,chamon2017greedy,8683739,sakiyama2019eigendecomposition}, based on a bandlimitedness or smoothness prior in the graph spectral domain. Most related works focus on static graph signals. But many real-world signals are time-varying, like the temperatures collected by a sensor network, which means the signal on each vertex takes a higher-dimensional form such as a vector or tensor. In such cases, the joint time-vertex graph signal is a candidate model to describe and process this kind of signal, whose frequency spectrum can be obtained by the so-called \textit{Joint Time-Vertex Fourier Transform} (JFT)\cite{grassi2018time}. R. Varma \textit{et al.} define smooth signals on the joint time-vertex model and propose a recovery strategy\cite{varma2019smooth}. Besides, Wei \textit{et al.} propose a sampling scheme for continuous time-varying graph signals\cite{wei2019optimal}. Ji \textit{et al.} extend the time domain to a Hilbert space and introduce a generalized graph signal processing framework\cite{8646656}. In this paper, we investigate the fundamental sampling theory, i.e., the conditions for critical sampling, for joint time-vertex graph signals in the noise-free setting. Some prior works have touched on this problem. From the view of product graphs, {Ortiz-Jim{\'e}nez} \textit{et al.} extend the bandlimited signal to the simultaneously bandlimited (SBL) signal and propose a sampling scheme in the two domains separately \cite{ortiz2018sampling}. The generalized graph signal processing theory\cite{8646656} discusses some properties of sampling.
However, they do not give a critical sampling scheme with the minimum number of samples, as we will show later in Section \ref{sec:sampling}. In this paper, we reveal the connection between the general bandlimited (GBL) signal and the simultaneously bandlimited (SBL) signal on the time-vertex graph by introducing the projection bandwidth. Then, we give the necessary conditions for critical sampling of GBL signals in the two domains. Finally, we propose an algorithm to find a critical sampling set, which is proved to exist. \section{MODEL} \subsection{Graph Signal and Sampling Theory} Consider an undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{W})$ with vertex set $\mathcal{V}$, edge set $\mathcal{E}$ and weighted adjacency matrix $\mathbf{W}$. A graph signal is $\mathbf{x}=[x_1,x_2,\dots,x_N]$, in which the element $x_i$ represents the signal value at the $i$-th vertex in $\mathcal{V}$. The graph Laplacian is ${\mathcal{L}}=\mathbf{D}-\mathbf{W}$, where the degree matrix $\mathbf{D}=\text{diag}(\mathbf{1W})$. Because ${\mathcal{L}}$ is symmetric, it has the spectral decomposition \begin{equation} {\mathcal{L}}=\mathbf{U}\Lambda \mathbf{U}^H, \end{equation} where the eigenvectors $\{\mathbf{u}_i\}_{i=1}^N$ of ${\mathcal{L}}$ form the columns of $\mathbf{U}$, and $\Lambda$ is a diagonal matrix of the eigenvalues $\{\lambda_i \}_{i=1}^N$ corresponding to $\{\mathbf{u}_i\}$. The eigenvalues can be regarded as frequencies and the eigenvectors as a Fourier-like basis for graph signals\cite{ortega2018graph}. The Graph Fourier Transform (GFT) can be represented by $\mathbf{x}_\text{f}= \mathbf{U}^H {\mathbf{x}}$ and the Inverse Graph Fourier Transform (IGFT) by ${\mathbf{x}}={\mathbf{U}}{\mathbf{x}}_{\text{f}}$.
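As a minimal numerical sketch (the 4-node graph and the signal values below are purely illustrative, not from the paper), the GFT/IGFT pair follows directly from the eigendecomposition of ${\mathcal{L}}$:

```python
import numpy as np

# Hypothetical weighted adjacency matrix of a small undirected graph.
W = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 1.],
              [0., 1., 0., 0.],
              [0., 1., 0., 0.]])
D = np.diag(W.sum(axis=1))           # degree matrix D = diag(1W)
L = D - W                            # graph Laplacian L = D - W
lam, U = np.linalg.eigh(L)           # L = U Lambda U^H (L is symmetric)

x = np.array([1.0, -2.0, 0.5, 3.0])  # an arbitrary graph signal
x_f = U.T @ x                        # GFT:  x_f = U^H x
x_rec = U @ x_f                      # IGFT: x   = U x_f

assert np.allclose(U @ np.diag(lam) @ U.T, L)
assert np.allclose(x_rec, x)         # the GFT/IGFT pair is lossless
```

Since ${\mathbf{U}}$ is orthonormal for a real symmetric Laplacian, the transform pair is lossless by construction.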
In this sense, a graph signal ${\mathbf{x}}$ is called a bandlimited signal when ${\mathbf{x}}_{\text{f}}$ has $K<N$ non-zero coefficients, and it admits the low-dimensional representation \begin{equation} \label{equ:low_dimension} {\mathbf{x}}=\tilde{{\mathbf{U}}}\tilde{{\mathbf{x}}}_{\text{f}}, \end{equation} where $\tilde{{\mathbf{x}}}_{\text{f}}$ consists of the non-zero spectral components of ${\mathbf{x}}_{\text{f}}$, and $\tilde{{\mathbf{U}}}$ is constructed by extracting the columns of ${\mathbf{U}}$ corresponding to the indices of the non-zero elements of ${\mathbf{x}}_{\text{f}}$\cite{ortega2018graph, chamon2017greedy}. Define the sampled graph signal ${\mathbf{x}}_{\mathcal{S}}=[x_{s_1},\dots,x_{s_M}]$, such that ${\mathbf{x}}_{\mathcal{S}}=\Psi{\mathbf{x}}$, where ${\mathcal{S}}=\{s_1,\dots, s_M\}$ is the index set of sampled vertices, and the sampling matrix $\Psi\in\{0,1\}^{M\times N}$ is defined as \begin{equation} [\Psi]_{i,j}=\begin{cases} 1,& j=s_i;\\ 0,& \text{otherwise}. \end{cases} \end{equation} The interpolation matrix $\Phi$ is the operator recovering ${\mathbf{x}}_{\mathcal{S}}$ to ${\mathbf{x}}'=\Phi{\mathbf{x}}_{\mathcal{S}} \in {\mathbb{R}}^N$. The following Theorem \ref{thm:chen} gives the condition for perfectly reconstructing ${\mathbf{x}}$ from ${\mathbf{x}}_{\mathcal{S}}$\cite{chen2015discrete}. \begin{theorem} \label{thm:chen} Define $\tilde{{\mathbf{U}}}_M=\Psi \tilde{{\mathbf{U}}}$. For any bandlimited graph signal ${\mathbf{x}}$ with bandwidth $K$, if $\Psi$ satisfies ${\operatorname{rank}}(\tilde{{\mathbf{U}}}_M)=K$, perfect recovery ${\mathbf{x}}=\Phi \Psi {\mathbf{x}}$ can be achieved by choosing $\Phi=\tilde{{\mathbf{U}}}(\tilde{{\mathbf{U}}}_M^{\text{T}}\tilde{{\mathbf{U}}}_M)^{-1}\tilde{{\mathbf{U}}}_M^{\text{T}}$. \end{theorem} Obviously, the rank condition of Theorem \ref{thm:chen} is also necessary for perfect reconstruction, as stated in the following corollary.
\begin{corollary} \label{cor:thm1} If there exists a linear interpolation operator to recover ${\mathbf{x}}$ from ${\mathbf{x}}_{\mathcal{S}}$, there must be ${\operatorname{rank}}(\tilde{{\mathbf{U}}}_M)=K$, i.e. we need at least $K$ samples. \end{corollary} We call a sampling matrix $\Psi$ a \textit{qualified sampling matrix} when it satisfies ${\operatorname{rank}}(\tilde{{\mathbf{U}}}_M)=K$, and we call the sampling set ${\mathcal{S}}$ corresponding to a qualified sampling matrix a \textit{qualified sampling set}. \subsection{Joint Time-Vertex Graph Signals and the Joint Time-Vertex Fourier Transform} Now we consider an undirected graph $\mathcal{G}_G=(\mathcal{V}_G,\mathcal{E}_G,\mathbf{W}_G)$, where each vertex relates to a time sequence of length $T$, which can be represented by a cycle graph ${\mathcal{G}}_T=({\mathcal{V}}_T,{\mathcal{E}}_T,\mathbf{W}_T)$. A joint time-vertex graph, denoted by ${\mathcal{G}}_J$, is constructed by the Cartesian product of ${\mathcal{G}}_T$ and ${\mathcal{G}}_G$, as shown in Fig. \ref{fig:product_graph} \cite{grassi2018time}, \begin{equation} \label{equ:product graph} {\mathcal{G}}_J = {\mathcal{G}}_T \times {\mathcal{G}}_G=({\mathcal{V}}_T\times {\mathcal{V}}_G,{\mathcal{E}}_J). \end{equation} \begin{figure}[t] \centering \includegraphics[scale=0.4]{product_graph_example1.eps} \caption{A joint time-vertex graph} \label{fig:product_graph} \end{figure} Denoting the graph signal at instant $t$ by ${\mathbf{x}}_t\in {\mathbb{R}}^N$, the total graph signal is represented as the matrix ${\mathbf{X}}=[{\mathbf{x}}_1, {\mathbf{x}}_2, \dots, {\mathbf{x}}_T]\in {\mathbb{R}}^{N\times T}$ with the corresponding vectorized form ${\mathbf{x}}=\operatorname{vec}({\mathbf{X}})\in {\mathbb{R}}^{NT}$.
The Laplacian matrix of ${\mathcal{G}}_J$, denoted by ${\mathcal{L}}_J$, is the Cartesian product of the Laplacians of ${\mathcal{G}}_T$ and ${\mathcal{G}}_G$, \begin{align*} {\mathcal{L}}_J &= {\mathcal{L}}_T\times {\mathcal{L}}_G=({\mathcal{L}}_T\otimes I_G)+(I_T\otimes {\mathcal{L}}_G)\\&= {\mathbf{U}}_J \Lambda_J {\mathbf{U}}_J^H = ({\mathbf{U}}_T\otimes {\mathbf{U}}_G)(\Lambda_T\times\Lambda_G )({\mathbf{U}}_T \otimes {\mathbf{U}}_G)^H, \end{align*} where $\otimes$ denotes the Kronecker product and $I_T,I_G$ are the identity matrices of the same size as ${\mathcal{L}}_T,{\mathcal{L}}_G$ \cite{grassi2018time}. The JFT is obtained by applying the Fourier transform of ${\mathcal{G}}_T$ in the time domain and that of ${\mathcal{G}}_G$ in the vertex domain \cite{grassi2018time} \begin{equation} {\mathbf{X}}_{\text{f}}=\text{JFT}\{{\mathbf{X}}\}={\mathbf{U}}_G^H {\mathbf{X}} {\mathbf{U}}_T. \end{equation} Expressed in vector form, the transform becomes \begin{equation} {\mathbf{x}}_{\text{f}}=\text{JFT}\{{\mathbf{x}}\}={\mathbf{U}}_J^H {\mathbf{x}}. \end{equation} \section{Sampling on Joint Time-vertex Graphs} \label{sec:sampling} Because the joint time-vertex graph consists of two domains, a bandlimited signal can be understood in different ways. \begin{definition}\textit{(GBL)} A joint time-vertex graph signal ${\mathbf{x}}$ is a GBL signal when ${\mathbf{x}}_{\text{f}}$ has $K<NT$ non-zero elements, where $K$ is the general bandwidth. \end{definition} \begin{definition}\textit{(Projection bandwidth)} \label{def:projection bandwidth} For a GBL signal ${\mathbf{X}}$, when ${\mathbf{X}}_{\text{f}}$ has $K_G$ non-zero rows, we define the projection bandwidth on ${\mathcal{G}}_G$ as $K_G$; when ${\mathbf{X}}_{\text{f}}$ has $K_T$ non-zero columns, we define the projection bandwidth on ${\mathcal{G}}_T$ as $K_T$ (rows of ${\mathbf{X}}_{\text{f}}={\mathbf{U}}_G^H {\mathbf{X}} {\mathbf{U}}_T$ index graph frequencies, columns index time frequencies).
\end{definition} The projection bandwidth connects ${\mathcal{G}}_J$ with ${\mathcal{G}}_G$ and ${\mathcal{G}}_T$, respectively. When a GBL signal ${\mathbf{X}}$ has projection bandwidth $K_G$ on ${\mathcal{G}}_G$ and $K_T$ on ${\mathcal{G}}_T$, each column of ${\mathbf{X}}$ is a bandlimited signal on ${\mathcal{G}}_G$ with bandwidth $K_G$, and each row of ${\mathbf{X}}$ is a bandlimited signal on ${\mathcal{G}}_T$ with bandwidth $K_T$. \begin{definition}\textit{(SBL)\cite{ortiz2018sampling}} \label{def:simultaneously} We call a GBL signal ${\mathbf{X}}$ an SBL signal if its projection bandwidths satisfy $K_G<N$ and $K_T<T$. \end{definition} Obviously, the relationship between the projection bandwidths and the general bandwidth is \begin{equation} \label{equ:relationship} \text{max}(K_T,K_G)\le K\le K_TK_G. \end{equation} So if a signal ${\mathbf{X}}$ is SBL, it must be GBL, but a GBL signal may not be SBL. For example, when the spectral coefficient matrix ${\mathbf{X}}_{\text{f}}$ is diagonal with all non-zero diagonal entries, the signal ${\mathbf{X}}$ is a GBL signal but not an SBL signal. An SBL signal ${\mathbf{X}}$ admits a low-dimensional representation \begin{equation} \label{equ:low_dimensional2} {\mathbf{x}}=(\tilde{{\mathbf{U}}}_T \otimes \tilde{{\mathbf{U}}}_G)\tilde{{\mathbf{x}}}_{\text{f}} \Leftrightarrow {\mathbf{X}}=\tilde{{\mathbf{U}}}_G\tilde{{\mathbf{X}}}_{\text{f}}\tilde{{\mathbf{U}}}_T^H, \end{equation} where $\tilde{{\mathbf{X}}}_{\text{f}}$ and $\tilde{{\mathbf{x}}}_{\text{f}}$ are the non-zero spectral components of ${\mathbf{X}}_{\text{f}}$ and ${\mathbf{x}}_{\text{f}}$, and $\tilde{{\mathbf{U}}}_T$ and $\tilde{{\mathbf{U}}}_G$ are obtained by removing the columns of ${\mathbf{U}}_T$ and ${\mathbf{U}}_G$ that correspond to the all-zero columns and all-zero rows of ${\mathbf{X}}_{\text{f}}$, respectively.
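A quick numerical check of Eq.~(\ref{equ:relationship}) and of the low-dimensional representation (\ref{equ:low_dimensional2}); the sparse spectral matrix and the random orthonormal bases below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical sparse 4x4 spectral matrix X_f; rows index graph
# frequencies and columns index time frequencies (X_f = U_G^H X U_T).
X_f = np.zeros((4, 4))
X_f[1, 1], X_f[2, 1], X_f[2, 2] = 0.733, 0.612, 0.517

K = np.count_nonzero(X_f)               # general bandwidth
rows = np.flatnonzero(X_f.any(axis=1))  # non-zero rows    -> K_G
cols = np.flatnonzero(X_f.any(axis=0))  # non-zero columns -> K_T
K_G, K_T = len(rows), len(cols)
assert max(K_T, K_G) <= K <= K_T * K_G  # bandwidth relation: 2 <= 3 <= 4

# Low-dimensional representation: X = U_G~ X_f~ U_T~^H reproduces X.
rng = np.random.default_rng(0)
U_G, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthonormal basis
U_T, _ = np.linalg.qr(rng.standard_normal((4, 4)))
X = U_G @ X_f @ U_T.T
X_low = U_G[:, rows] @ X_f[np.ix_(rows, cols)] @ U_T[:, cols].T
assert np.allclose(X, X_low)
```

The equality in the last assertion holds because the dropped rows and columns of ${\mathbf{X}}_{\text{f}}$ are all zero and contribute nothing to the product.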
Based on Theorem \ref{thm:chen} and ${\operatorname{rank}}(\tilde{{\mathbf{U}}}_T \otimes \tilde{{\mathbf{U}}}_G)={\operatorname{rank}}(\tilde{{\mathbf{U}}}_T){\operatorname{rank}}(\tilde{{\mathbf{U}}}_G)$, a separate sampling scheme for SBL signals is proposed in \cite{ortiz2018sampling}. Let ${\mathcal{S}}_T\subseteq {\mathcal{V}}_T$ and ${\mathcal{S}}_G\subseteq {\mathcal{V}}_G$ be two subsets of vertices from ${\mathcal{G}}_T$ and ${\mathcal{G}}_G$. There must be a qualified sampling set with $|{\mathcal{S}}_T|\ge K_T$ and $|{\mathcal{S}}_G|\ge K_G$ so that we can recover ${\mathbf{x}}$ from ${\mathbf{x}}_{\mathcal{S}}$, which can be expressed as \begin{equation} \label{equ:seperatly} {\mathbf{X}}_{{\mathcal{S}}_G\times{\mathcal{S}}_T}=\Psi_G {\mathbf{X}} \Psi_T^{\text{T}}=\Psi_G \tilde{{\mathbf{U}}}_G\tilde{{\mathbf{X}}}_{\text{f}}\tilde{{\mathbf{U}}}_T^{H}\Psi_T^{\text{T}}, \end{equation} where $\Psi_T$ and $\Psi_G$ are the sampling matrices of the sampling sets ${\mathcal{S}}_T$ and ${\mathcal{S}}_G$. The vectorized form of ${\mathbf{X}}_{{\mathcal{S}}_G\times{\mathcal{S}}_T}$ can be expressed as \begin{equation} \label{equ:seperatly2} {\mathbf{x}}_{{\mathcal{S}}_T{\mathcal{S}}_G}=\left[\Psi_T \tilde{{\mathbf{U}}}_T \otimes \Psi_G \tilde{{\mathbf{U}}}_G \right] \tilde{{\mathbf{x}}}_{\text{f}}. \end{equation} In the separate sampling scheme \cite{ortiz2018sampling}, the actual sampling set of ${\mathcal{G}}_J$ can be denoted by ${\mathcal{S}}={\mathcal{S}}_T\times {\mathcal{S}}_G$, so that the number of samples is $|{\mathcal{S}}|=|{\mathcal{S}}_T||{\mathcal{S}}_G|$. But the separate sampling scheme may not give a qualified sampling set with the minimum number of vertices, since it samples at least $K_TK_G$ vertices\cite{ortiz2018sampling}.
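The vectorization step behind Eq.~(\ref{equ:seperatly2}) is the Kronecker identity $\operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}^{\text{T}})=(\mathbf{B}\otimes \mathbf{A})\operatorname{vec}(\mathbf{X})$ for column-major $\operatorname{vec}$. A small sketch, with hypothetical sampling sets chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 4, 5
X = rng.standard_normal((N, T))      # an arbitrary time-vertex signal

def sampling_matrix(idx, n):
    """0/1 sampling matrix selecting the rows listed in idx."""
    P = np.zeros((len(idx), n))
    P[np.arange(len(idx)), idx] = 1.0
    return P

Psi_G = sampling_matrix([0, 2], N)     # hypothetical S_G = {0, 2}
Psi_T = sampling_matrix([1, 3, 4], T)  # hypothetical S_T = {1, 3, 4}

lhs = (Psi_G @ X @ Psi_T.T).flatten(order="F")      # vec of sampled matrix
rhs = np.kron(Psi_T, Psi_G) @ X.flatten(order="F")  # (Psi_T kron Psi_G) vec(X)
assert np.allclose(lhs, rhs)
```

This is why the separate scheme always samples the full grid ${\mathcal{S}}_T\times {\mathcal{S}}_G$: the joint sampling matrix it realizes is exactly $\Psi_T\otimes\Psi_G$.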
Applying Theorem \ref{thm:chen} to ${\mathcal{G}}_J$, for all GBL signals with general bandwidth $K$ and projection bandwidths $K_T$ and $K_G$, there always exists a qualified sampling set of ${\mathcal{G}}_J$, denoted by ${\mathcal{S}}$, satisfying $|{\mathcal{S}}|= K$. If we hope to squeeze the sample size from $K_TK_G$ down to $K$, we need to analyze this question from the joint time-vertex point of view rather than considering the two domains separately. Before presenting our main theorem, we first define the projection set on the graphs. As the vertex set of ${\mathcal{G}}_J$ in Eq. (\ref{equ:product graph}) is ${\mathcal{V}}_T\times {\mathcal{V}}_G$, each vertex can be represented as a two-tuple, e.g. $(1,1), (1,2), \dots, (T,N)$. \begin{definition}\textit{(Projection set of a sampling set on the two graphs)} Given a sampling set ${\mathcal{S}}\subset {\mathcal{V}}_T \times {\mathcal{V}}_G$, we define the projection sets on ${\mathcal{V}}_T$ and ${\mathcal{V}}_G$ as ${\mathcal{S}}_T$ and ${\mathcal{S}}_G$, respectively, where $|{\mathcal{S}}_T|$ is the number of time slots in which at least one node is sampled, and $|{\mathcal{S}}_G|$ is the number of vertices of ${\mathcal{G}}_G$ that are sampled at some time. \end{definition} For example, if ${\mathcal{S}}=\{(1,2),(2,2),(3,4)\}$, then ${\mathcal{S}}_T=(1,2,3)$ and ${\mathcal{S}}_G=(2,4)$. The projection sets on the two graphs reveal additional bounds on a qualified sampling set of ${\mathcal{G}}_J$. Besides the rank condition from Corollary \ref{cor:thm1}, we are interested in whether there are any additional conditions on a qualified sampling set. Before proposing the theorem, we prove a lemma first. \begin{lemma} \label{lem:sample} For a bandlimited signal ${\mathbf{x}}$ with bandwidth $K$ on a graph ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}})$, consider two sampling sets of the signal ${\mathbf{x}}$, denoted as ${\mathcal{S}}_1$ and ${\mathcal{S}}_2$.
If ${\mathcal{S}}_1 \subseteq {\mathcal{S}}_2$ and ${\mathcal{S}}_2$ is not a qualified sampling set, then ${\mathcal{S}}_1$ is not a qualified sampling set either. \end{lemma} \begin{IEEEproof} Denote the sampling matrices of ${\mathcal{S}}_1$ and ${\mathcal{S}}_2$ by $\Psi_1$ and $\Psi_2$. When ${\mathcal{S}}_1 \subseteq {\mathcal{S}}_2$, ${\operatorname{rank}}(\Psi_1\tilde{{\mathbf{U}}})\le {\operatorname{rank}}(\Psi_2\tilde{{\mathbf{U}}})$. Because ${\mathcal{S}}_2$ is not a qualified sampling set, from Corollary \ref{cor:thm1} we conclude ${\operatorname{rank}}(\Psi_2\tilde{{\mathbf{U}}})<K$, and hence ${\operatorname{rank}}(\Psi_1\tilde{{\mathbf{U}}})<K$. So ${\mathcal{S}}_1$ is not a qualified sampling set. \end{IEEEproof} \begin{theorem} \label{thm:sample} For any GBL signal ${\mathbf{x}}$ on ${\mathcal{G}}_J$ with general bandwidth $K$ and projection bandwidths $K_T$ and $K_G$, if ${\mathcal{S}}$ is a qualified sampling set of ${\mathcal{G}}_J$, i.e. its corresponding sampling matrix $\Psi$ satisfies ${\operatorname{rank}}(\Psi \tilde{{\mathbf{U}}}_J)=K$, there must be: \begin{enumerate} \item $|{\mathcal{S}}|\ge K$; \item $|{\mathcal{S}}_G|\ge K_G$; \item $|{\mathcal{S}}_T|\ge K_T$. \end{enumerate} \end{theorem} \begin{IEEEproof} \label{pro:proof_th2} $|{\mathcal{S}}|\ge K$ is obvious by applying Corollary \ref{cor:thm1} to ${\mathcal{G}}_J$. We prove clause 2) by contradiction; clause 3) can be proved in the same way. Assume there is a qualified sampling set ${\mathcal{S}}$ whose projection set on ${\mathcal{G}}_G$ satisfies $|{\mathcal{S}}_G|<K_G$. We construct another sampling set ${\mathcal{S}}'={\mathcal{V}}_T\times {\mathcal{S}}_G$. The sampled signal on ${\mathcal{S}}'$ is denoted by ${\mathbf{X}}_{{\mathcal{S}}'}$. Now recovering the original signal ${\mathbf{X}}$ from ${\mathbf{X}}_{{\mathcal{S}}'}$ is equivalent to recovering each column of ${\mathbf{X}}$ from the corresponding column of ${\mathbf{X}}_{{\mathcal{S}}'}$.
If ${\mathcal{S}}'$ were qualified, a sampling set with $|{\mathcal{S}}_G|<K_G$ vertices would be a qualified sampling set for a bandlimited signal with bandwidth $K_G$, which is impossible according to Corollary \ref{cor:thm1}. So ${\mathcal{S}}'$ is not a qualified sampling set. Since ${\mathcal{S}}\subset {\mathcal{S}}'$, from Lemma \ref{lem:sample}, ${\mathcal{S}}$ is not a qualified sampling set either, a contradiction. So there must be $|{\mathcal{S}}_G|\ge K_G$. \end{IEEEproof} \begin{definition} \textit{(Critical sampling set)} A qualified sampling set ${\mathcal{S}}$ is a \textit{critical sampling set} on ${\mathcal{G}}_J$ when it satisfies $|{\mathcal{S}}|=K$, $|{\mathcal{S}}_T|=K_T$ and $|{\mathcal{S}}_G|=K_G$ at the same time. \end{definition} The sampling matrix $\Psi$ corresponding to a critical sampling set is called a \textit{critical sampling matrix}. Critical sampling leads to the minimum cost in many scenarios. For example, a critical sampling set for a sensor network signal means we can recover the whole signal with as few sensors, time slots, and samples as possible. Regarding the existence of a critical sampling set and a critical sampling matrix, we have the following corollary. \begin{corollary} \label{cor:exists} For any GBL signal, there always exists a critical sampling matrix and its corresponding sampling set. \end{corollary} \begin{IEEEproof} Consider a GBL signal ${\mathbf{X}}$ with vectorized form ${\mathbf{x}}$, general bandwidth $K$ and projection bandwidths $K_T$ and $K_G$. According to Eqs. (\ref{equ:low_dimension}) and (\ref{equ:low_dimensional2}), we have $\tilde{{\mathbf{U}}}_J$, $\tilde{{\mathbf{U}}}_T$ and $\tilde{{\mathbf{U}}}_G$. Define $\tilde{{\mathbf{U}}}_J'=\tilde{{\mathbf{U}}}_T\otimes \tilde{{\mathbf{U}}}_G$; then ${\operatorname{rank}}(\tilde{{\mathbf{U}}}_J')={\operatorname{rank}}(\tilde{{\mathbf{U}}}_T){\operatorname{rank}}(\tilde{{\mathbf{U}}}_G)=K_TK_G$. Using the separate sampling scheme of Eq.
(\ref{equ:seperatly2}), we get a qualified sampling matrix $\Psi_T$ on ${\mathcal{G}}_T$ and a qualified sampling matrix $\Psi_G$ on ${\mathcal{G}}_G$. Then we define $\Psi' = \Psi_T\otimes \Psi_G$. The corresponding sampling set ${\mathcal{S}}'$ of $\Psi'$ has projection sets ${\mathcal{S}}'_T$ and ${\mathcal{S}}'_G$ satisfying $|{\mathcal{S}}'_T|=K_T$ and $|{\mathcal{S}}'_G|=K_G$. Obviously ${\operatorname{rank}}(\Psi'\tilde{{\mathbf{U}}}'_J)=K_TK_G$. If $K=K_TK_G$, $\Psi'$ is a critical sampling matrix for ${\mathbf{X}}$. If $K<K_TK_G$, the column set of $\tilde{{\mathbf{U}}}_J$ is a subset of the column set of $\tilde{{\mathbf{U}}}'_J$, so the column set of $\Psi'\tilde{{\mathbf{U}}}_J$ is a subset of that of $\Psi'\tilde{{\mathbf{U}}}'_J$. Now $\Psi'\tilde{{\mathbf{U}}}_J \in {\mathbb{R}}^{K_TK_G \times K}$ and ${\operatorname{rank}}(\Psi'\tilde{{\mathbf{U}}}_J)=K$. There always exists a selection matrix $\Psi_c \in \{0,1\}^{K\times K_TK_G}$ such that ${\operatorname{rank}}(\Psi_c\Psi'\tilde{{\mathbf{U}}}_J)=K$. Let $\Psi=\Psi_c\Psi'$. Since ${\mathcal{S}}'$ satisfies $|{\mathcal{S}}'_T|=K_T$ and $|{\mathcal{S}}'_G|=K_G$, the corresponding sampling set ${\mathcal{S}}$ of $\Psi$ satisfies $|{\mathcal{S}}_{T}|\le K_T$ and $|{\mathcal{S}}_{G}|\le K_G$. Because ${\operatorname{rank}}(\Psi\tilde{{\mathbf{U}}}_J)=K$, $\Psi$ is a qualified sampling matrix, so that $|{\mathcal{S}}_{T}|\ge K_T$ and $|{\mathcal{S}}_{G}|\ge K_G$ by Theorem \ref{thm:sample}. Hence $|{\mathcal{S}}_{T}|= K_T$ and $|{\mathcal{S}}_G|= K_G$. As $\Psi\in \{0,1\}^{K\times NT}$, $|{\mathcal{S}}|=K$. So $\Psi$ is a critical sampling matrix. \end{IEEEproof} Following the proof of Corollary \ref{cor:exists}, we propose an efficient algorithm (Algorithm \ref{alg:find}) to find a critical sampling set.
Given the sampling matrix $\Psi$ corresponding to ${\mathcal{S}}$, we can recover the original signal ${\mathbf{x}}$ with the interpolation matrix $\Phi=\tilde{{\mathbf{U}}}_J(\Psi \tilde{{\mathbf{U}}}_J)^{-1}$. Algorithm \ref{alg:find}\footnote{Example code is available at https://github.com/ParaNoth/Example-code-of-On-Critical-Sampling-of-Time-Vertex-Graph-Signals} provides a feasible way to find a critical sampling set and reduces the time complexity compared with the algorithm proposed in \cite{chen2015discrete}. For example, we can use Gaussian elimination to find the index set of maximal linearly independent rows, whose time complexity is $O(N^3)$ when the matrix has $N$ rows. So the time complexity of the algorithm based on \cite{chen2015discrete} is $O((NT)^3)$ because $\tilde{{\mathbf{U}}}_J$ has $NT$ rows, while the time complexity of our algorithm is $O(N^3)+O(T^3)$ (step 1 in Algorithm \ref{alg:find}) plus $O((K_TK_G)^3)$ (step 3 in Algorithm \ref{alg:find}). \begin{algorithm}[htbp] \caption{Finding a critical sampling set} \label{alg:find} \begin{algorithmic}[1] \REQUIRE $\tilde{{\mathbf{U}}}_T$, $\tilde{{\mathbf{U}}}_G$, $\tilde{{\mathbf{U}}}_J$ \ENSURE ${\mathcal{S}}$ \STATE Find ${\mathcal{S}}_T$, ${\mathcal{S}}_G$, the index sets of maximal linearly independent rows of $\tilde{{\mathbf{U}}}_T$, $\tilde{{\mathbf{U}}}_G$, respectively. \STATE Choose the rows of $\tilde{{\mathbf{U}}}_J$ based on ${\mathcal{S}}'={\mathcal{S}}_T\times {\mathcal{S}}_G$, obtaining $\Psi'\tilde{{\mathbf{U}}}_{J}$. \STATE Get ${\mathcal{S}}$ from the maximal linearly independent rows of $\Psi'\tilde{{\mathbf{U}}}_{J}$. \end{algorithmic} \end{algorithm} \section{Example} \label{sec:example} In this section, we show an example of a joint time-vertex graph (Fig. \ref{fig:product_graph}) to explain our idea.
The Laplacian matrices of the two undirected graphs ${\mathcal{G}}_T,{\mathcal{G}}_G$ are $$ {\mathcal{L}}_T=\left[\begin{matrix} 2&-1&0&-1\\ -1&2&-1&0\\ 0&-1&2&-1\\ -1&0&-1&2\\ \end{matrix} \right], {\mathcal{L}}_G=\left[\begin{matrix} 1&-1&0&0\\ -1&3&-1&-1\\ 0&-1&1&0\\ 0&-1&0&1\\ \end{matrix} \right]. $$ The GBL graph signal ${\mathbf{X}}$ on the graph ${\mathcal{G}}_J$ with $K=3$, $K_T=2$, $K_G=2$ is $$ {\mathbf{X}}=\left[\begin{matrix} 0.2985&-0.3533&-0.2985&0.3533\\ 0&0&0&0\\ -0.1492&0.5432&0.1492&-0.5432\\ -0.1492&-0.1898&0.1492&0.1898\\ \end{matrix} \right], $$ whose frequency coefficients are $$ {\mathbf{X}}_{\text{f}}=\left[\begin{matrix} 0&0&0&0\\ 0&0.733&0&0\\ 0&0.612&0.517&0\\ 0&0&0&0\\ \end{matrix} \right]. $$ So $\tilde{{\mathbf{U}}}_T$ and $\tilde{{\mathbf{U}}}_G$ are \begin{equation*} \tilde{{\mathbf{U}}}_T=\left[\begin{matrix} 0&0.7071\\ -0.7071&0\\ 0&-0.7071\\ 0.7071&0\\ \end{matrix} \right], \tilde{{\mathbf{U}}}_G=\left[\begin{matrix} 0&0.8165\\ 0&0\\ -0.7071&-0.4082\\ 0.7071&-0.4082\\ \end{matrix} \right]. \end{equation*} \subsection{Finding a critical sampling set} We use Algorithm \ref{alg:find} to find a critical sampling set for ${\mathbf{X}}$. From $\tilde{{\mathbf{U}}}_T$ and $\tilde{{\mathbf{U}}}_G$, we get ${\mathcal{S}}_T=\{1,2\}$ and ${\mathcal{S}}_G=\{1,3\}$ (step 1 in Algorithm \ref{alg:find}), so ${\mathcal{S}}'=\{(1,1),(1,3),(2,1),(2,3)\}$ as step 2 in Algorithm \ref{alg:find}. We have \begin{equation} \Psi'\tilde{{\mathbf{U}}}_J=\left[ \begin{matrix} 0&0&0.5774\\ 0&0&-0.2887\\ 0&-0.5774&0\\ 0.5&0.2887&0\\ \end{matrix} \right]. \end{equation} By Gaussian elimination, we can get ${\mathcal{S}}=\{(1,1), (2,1), (2,3)\}$ (step 3 in Algorithm \ref{alg:find}). The original signal is shown in Fig. \ref{fig:example} and the critical sampling set is shown in Fig. \ref{fig:qualified}(a). Now ${\mathcal{S}}$ satisfies $|{\mathcal{S}}|=3$, $|{\mathcal{S}}_T|=2$ and $|{\mathcal{S}}_G|=2$, so it is a critical sampling set.
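The steps of Algorithm \ref{alg:find} can be sketched generically as follows (a hedged sketch, not the authors' released code: the sizes, the random seed, and the greedy row-selection helper are illustrative assumptions; column-major vectorization is used, so joint vertex $(t,v)$ maps to index $tN+v$):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, K_G, K_T, K = 5, 4, 3, 2, 4          # hypothetical sizes, K < K_T*K_G = 6

# Random orthonormal bases standing in for U_G, U_T; keep K_G / K_T columns.
U_G, _ = np.linalg.qr(rng.standard_normal((N, N)))
U_T, _ = np.linalg.qr(rng.standard_normal((T, T)))
Ut_G, Ut_T = U_G[:, :K_G], U_T[:, :K_T]
# Tilde-U_J: K columns chosen from the K_T*K_G columns of the Kronecker basis.
Ut_J = np.kron(Ut_T, Ut_G)[:, rng.choice(K_T * K_G, K, replace=False)]

def independent_rows(A, k):
    """Greedily pick k linearly independent rows of A (rank(A) >= k assumed)."""
    picked = []
    for i in range(A.shape[0]):
        if np.linalg.matrix_rank(A[picked + [i]]) == len(picked) + 1:
            picked.append(i)
        if len(picked) == k:
            break
    return picked

S_T = independent_rows(Ut_T, K_T)                   # step 1
S_G = independent_rows(Ut_G, K_G)
Sp = [t * N + v for t in S_T for v in S_G]          # step 2: S' = S_T x S_G
S = [Sp[i] for i in independent_rows(Ut_J[Sp], K)]  # step 3

assert len(S) == K
assert np.linalg.matrix_rank(Ut_J[S]) == K          # qualified sampling set

# Recovery with Phi = U_J~ (Psi U_J~)^{-1}: sample, then interpolate.
x_f = rng.standard_normal(K)
x = Ut_J @ x_f
x_rec = Ut_J @ np.linalg.solve(Ut_J[S], x[S])
assert np.allclose(x_rec, x)
```

The last three lines also check the interpolation formula $\Phi=\tilde{{\mathbf{U}}}_J(\Psi \tilde{{\mathbf{U}}}_J)^{-1}$: since $\Psi\tilde{{\mathbf{U}}}_J$ is square and full-rank for a critical sampling set, recovery is exact.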
Compared with the separate sampling scheme, we sampled only $3$ vertices, which is less than $K_TK_G=4$. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{product_signal2.eps} \caption{Original signal in Section \ref{sec:example}} \label{fig:example} \end{figure} \subsection{Substitution between time and vertex} In many scenarios, the sampling costs in time and vertices are different, so there might be a trade-off between time and vertices. For example, in a sensor network, sensors with low-speed ADCs are cheap, while sensors with high-speed ADCs may be much more expensive. Is it possible to use more sensors in exchange for a lower sampling frequency? If so, is there any limit to the mutual substitution between sampling in time and in vertices? Theorem \ref{thm:sample} actually answers these questions and gives the bounds of the substitution, which means we can substitute between time and vertices within certain limits. For example, there are two qualified sampling sets, shown in Fig. \ref{fig:qualified}, but only Fig. \ref{fig:qualified}(a) is a critical sampling set. Compared to Fig. \ref{fig:qualified}(a), Fig. \ref{fig:qualified}(b) turns on sensor 4 in order to reduce the sampling frequency of sensor 1. Conversely, Fig. \ref{fig:qualified}(a) increases the sampling frequency of sensor 2 so that sensor 4 can be turned off. But we cannot turn off any more sensors; otherwise, we cannot recover the original signal. Fig. \ref{fig:qualified}(a) also reveals that when a signal is GBL, there might be a qualified set with different sampling frequencies on different nodes. This property is important for sampling design in sensor networks, social networks, etc.
\begin{figure}[htbp] \centering \subfigure[] { \begin{minipage}{3cm} \centering \includegraphics[scale=0.4]{product_sample2.eps} \end{minipage} } \hfill \subfigure[] { \begin{minipage}{4cm} \centering \includegraphics[scale=0.4]{product_sample3.eps} \end{minipage} } \caption{Two qualified sampling sets: (a) is a critical sampling set and (b) uses a lower sampling frequency} \label{fig:qualified} \end{figure} \section{CONCLUSION} We have shown that we should sample in the joint time-vertex domain rather than in the two domains separately if we want more efficient sampling. The main result of this paper can be extended to all product graph signals. In future works, we plan to investigate the continuous case of time-varying graph signals. \section*{ACKNOWLEDGMENT} This work was supported by the National Key Research and Development Program of China (No. 213), the Shanghai Municipal Natural Science Foundation (No. 19ZR1404700), and the NSF of China (No. 61501124). \bibliographystyle{IEEEbib}
\section{Introduction} Thermal hadron gas (HG) models have recently been used to fit the data on particle yields in nucleus-nucleus (A+A) collisions at the AGS and SPS energies (see e.g. \cite{St:96}). The ideal HG model becomes inadequate at chemical freeze-out in high-energy A+A collisions: the temperature and baryonic chemical potential obtained from fitting the particle number ratios at the AGS and SPS energies lead to artificially large values of the total particle number and energy densities (see e.g. \cite{Yen:98}). This is hardly consistent with a picture of a gas of point-like noninteracting hadrons. The Van der Waals (VdW) excluded volume procedure proved effective in taking into account the hadron repulsion at short distances: it suppresses the undesirably large values of the particle number densities. Different versions of the VdW HG model were proposed and applied to fitting experimental data on particle number ratios in A+A collisions at the AGS and SPS energies [3-12]. The proper volume of the $i$-th hadron species is expressed in terms of its hard-core radius $R_i$. Introducing the phenomenological parameters $R_i$ changes the particle number ratios in comparison with the ideal HG results. The VdW model formulation, however, has not been properly defined in the case when the $R_i$ are not all equal to each other. The aim of the present paper is to propose a generalization of the VdW excluded volume procedure for a hadron gas of several particle species with different particle radii. The derivation is based on the grand canonical partition function for a system of particles of several species interacting via hard-core potentials. The obtained formulae are therefore consistent with the underlying principles of statistical mechanics as well as with thermodynamical identities. The pressure, particle densities and other thermodynamical quantities as functions of temperature and chemical potentials are defined by a set of coupled transcendental equations.
\section{One-Component VdW Gas} The canonical partition function (CPF) for the one-component classical (Boltzmann) gas can be written as \begin{eqnarray}\label{cpartp1} Z(V,T,N) ~=~ \frac{1}{N!} \int ~ \prod_{i=1}^{N}\frac{d^3 p_i d^3 r_i } {(2\pi)^3} \exp\left(- \frac{\sqrt{ p_i^2 +m^2}}{T} - \frac{U}{T} \right) \end{eqnarray} where $V$ and $T$ are the system volume and temperature, $m$ and $N$ are the mass and number of particles, respectively. The function $U$ in Eq.~(\ref{cpartp1}) is assumed to be equal to the sum of pair potentials: \begin{equation}\label{potential1} U ~=~ \sum_{1\leq i < j \leq N} u (|\vec{r}_{i} - \vec{r}_{j}|)~. \end{equation} After integration over the particle momenta, Eq.~(\ref{cpartp1}) is reduced to \begin{equation}\label{cpart2} Z(V,T,N)~ =~ \frac{1}{N!}~ \left[ \phi (T;m) \right ]^{N}~ \int ~ \prod_{i=1}^{N} d^3 r_i~ \exp\left(-\frac{U}{T}\right)~. \label{cpart1} \end{equation} Here we use the notation \begin{equation}\label{phi} \phi(T;m)~ =~ \frac{1}{2 \pi^2} \int_0^{\infty}p^2 dp~ \exp \left( - \frac{\sqrt{p^{2}+m^{2}}}{T} \right)~ = ~\frac{m^{2} T}{2 \pi^{2}}~ K_{2}\left( \frac{m}{T} \right), \end{equation} where $K_{2}$ is the modified Bessel function. The asymptotic behavior of $\phi(T;m)$ in the non-relativistic limit, $m \gg T$, has the form \begin{equation} \phi(T;m)~\cong~ \left( \frac{m T}{2 \pi} \right)^{3/2}~ \exp(-m/T)~.
\end{equation} By means of the Mayer functions \begin{equation}\label{fij1} f_{ij}~\equiv ~\exp\left( -\frac{u(|\vec{r}_{i}-\vec{r}_{j}|)}{T} \right)~ -~ 1~, \end{equation} one can rewrite the integrand of Eq.~(\ref{cpart1}) in the following form \begin{equation}\label{rinteg1} \exp\left(- \frac{U}{T}\right)~ =~ \prod_{i=1}^{N-1} \prod_{j=i+1}^{N} [ 1+f_{ij} ]~ \cong~ \prod_{i=1}^{N-1} [ 1+ \sum_{j=i+1}^{N} f_{ij} ]~. \end{equation} The approximate equality in the last expression is based on the assumption that the gas is rarefied and, therefore, the terms containing products $f_{ij} f_{im}$ can be dropped. Let us introduce the notation \begin{equation}\label{b} \int d^{3} r_{i} f_{ij}~ =~ - ~2 b(T,\vec{r}_{j})~. \end{equation} Now we shall use the rigid ball model, i.e. we assume a hard-core interaction between the particles: \begin{equation} u(|\vec{r}_{i}-\vec{r}_{j}|) = \left\{ \begin{array}{lcl} \infty & \mbox{ if } & |\vec{r}_{i}-\vec{r}_{j}| \le 2 R \\ 0 & \mbox{ if } & |\vec{r}_{i}-\vec{r}_{j}| > 2 R , \end{array} \right. \end{equation} ($R$ is the particle radius). In this case $b$ defined by Eq.~(\ref{b}) does not depend on the temperature. For $V^{1/3} \gg R$ the dependence of $b$ on $\vec{r}_{j}$ is negligible and the integral in Eq.~(\ref{b}) can be easily calculated: \begin{equation}\label{b0} b ~=~ \frac{16}{3} \pi R^{3}~. \end{equation} Substituting Eqs.~(\ref{rinteg1}-\ref{b}) into Eq.~(\ref{cpart2}) one gets \begin{eqnarray}\label{cpart3} Z(V,T,N) ~&=&~ \frac{1}{N!} \left[ \phi (T;m) \right ]^{N} \prod_{i=1}^{N} (V - 2 (N - i ) b ) \\ &=&~ \frac{1}{N!} \left[ \phi (T;m) \right ]^{N} V^N \exp \left[ \sum_{i=1}^{N} \log \left(1 - 2 (N - i ) \frac{b}{V} \right) \right]~. \nonumber \end{eqnarray} In a rarefied gas the value of $2bN$ is much smaller than the total volume $V$. In this case one can approximate $\log(1-x)\cong -x$ in Eq.~(\ref{cpart3}).
This yields \begin{eqnarray} Z(V,T,N) ~\cong~ \frac{1}{N!}~ \left[ \phi (T;m) ~ V\right ]^{N} ~\exp \left( - \sum_{i=1}^{N} 2 (N - i ) \frac{b}{V}~ \right)~. \end{eqnarray} Performing the summation in the exponential and using the approximate relation $\exp(-Nb/V)\cong 1-Nb/V$ one gets the CPF of the one-component rigid ball gas: \begin{eqnarray}\label{cpfa1} Z(V,T,N) ~&\cong&~ \frac{1}{N!} \left[ \phi (T;m)~V \right ]^{N} ~\exp \left( - N (N - 1) \frac{b}{V} \right) \\ &\cong&~ \frac{1}{N!}~ \left[ \phi (T;m) \right ]^{N}~ (V - N b )^{N}~. \label{cparte1} \end{eqnarray} Substituting the expression (\ref{cpfa1}) into the formula for the pressure one finds the well-known VdW equation for the rigid ball gas: \begin{equation}\label{steq1} p(V,T,N)~ \equiv~ T~ \frac{\partial \log Z(V,T,N)}{\partial V} ~=~\frac{N T}{V - b N }~. \end{equation} Note that $b$ (\ref{b0}) is equal to $4v$, where $v=4\pi R^3/3$ is the particle volume. The VdW equation of state (\ref{steq1}) is obtained from the statistical mechanics of rigid balls within the rarefied gas approximation, $vN \ll V$. For a dense gas the VdW equation (\ref{steq1}) should be considered as a phenomenological extrapolation. The grand canonical partition function (GCPF) is expressed through the CPF in the following way: \begin{equation} {\cal Z}(V,T,\mu)~ =~ \sum_{N=0}^{\infty} \exp \left( \frac{\mu N}{T} \right) Z(V,T,N)~, \label{gcpfe1} \end{equation} where $\mu$ is the chemical potential. In the case of the rigid ball gas, the upper limit of the sum is in fact not infinite: the CPF (\ref{cpart1}) becomes equal to zero if $N$ exceeds $N_{0}\sim V/v$. Let $\exp \left( \frac{\mu N^{*}}{T} \right) Z(V,T,N^{*})$ be the maximal term in the sum (\ref{gcpfe1}). The remaining terms are also positive, hence the following inequalities are satisfied \begin{equation} \exp\left(\frac{\mu N^*}{T}\right)~Z(V,T,N^{*}) ~<~ {\cal Z}(V,T,\mu)~ <~ N_{0}~\times~\exp\left(\frac{\mu N^*}{T}\right)~ Z(V,T,N^{*})~.
\label{ineqz1} \end{equation} The pressure can be expressed via the GCPF by the formula \begin{equation}\label{pmu1} p(T,\mu)~ =~ T~ \lim_{V \rightarrow \infty} \frac{ \log {\cal Z} (V,T,\mu)}{V}~. \end{equation} Taking Eq.~(\ref{ineqz1}) into account one gets \begin{eqnarray} p(T,\mu)~&\ge &~\lim_{V \rightarrow \infty} ~\frac{\mu N^*+T \log Z(V,T,N^{*})}{V}~, \nonumber \\ p(T,\mu) ~&\le & ~ \lim_{V \rightarrow \infty} \frac{ \mu N^*+T\log Z(V,T,N^{*})}{V} ~+~T~\lim_{V \rightarrow \infty}\frac{\log N_{0}}{V}~.\nonumber \end{eqnarray} Since $V^{-1}\log N_{0} \rightarrow 0$ in the thermodynamic limit, $V\rightarrow \infty$, the pressure $p(T,\mu)$ (\ref{pmu1}) is defined by the largest term, with $N=N^*$, of the GCPF (\ref{gcpfe1}). $N^*$ is also the average number of particles in the grand canonical formulation. Using the VdW approximation (\ref{cparte1}) for the CPF one finds \begin{equation}\label{pnsvdw1} p(T,\mu)~ =~ T~ \lim_{V \rightarrow \infty} \frac{1}{V}~ \log \left[ \frac{A^{N^{*}} (V - N^{*} b )^{N^{*}} }{N^{*}!} \right], \end{equation} where $A=\exp( \mu/T ) \phi(T;m)$, and $N^*=N^*(V,T,\mu)$ corresponds to the maximum of the expression in the square brackets. Let us show that Eq.~(\ref{pnsvdw1}) leads to the result \begin{equation}\label{pxi1} p(T,\mu)~ =~ T \xi~, \end{equation} where $\xi$ is defined by the transcendental equation \begin{equation} \label{eqxi} \xi~ =~ A ~\exp(- b \xi)~. \end{equation} Using the asymptotic representation for the logarithm of the $\Gamma$-function at $N \rightarrow \infty$ \begin{equation} \log \Gamma (N + 1) \cong N ( \log N - 1) \label{Gamas} \end{equation} it is easy to check that the value of $N^{*}$ maximizing the argument of the logarithm in Eq.~(\ref{pnsvdw1}) is given by the formula \begin{equation}\label{ns1} N^{*}~ \cong~ V n~, \end{equation} where $n=n(T,\mu)$ is related to $\xi$ via the equation \begin{equation}\label{eqn} n~ =~ \frac{\xi}{1 + b \xi}~.
\end{equation} Substitution of Eq.~(\ref{ns1}) into Eq.~(\ref{pnsvdw1}), taking into account Eqs.~(\ref{eqxi},\ref{eqn}), yields the formula (\ref{pxi1}). The quantity $n=n(T,\mu)$ is the particle number density in the grand canonical formulation. One can readily check that the definition of the particle number density, $n=\partial p(T,\mu)/\partial \mu$~, leads to Eq.~(\ref{eqn}), provided that Eqs.~(\ref{pxi1}-\ref{Gamas}) are taken into account. For point-like particles, $R=0$ and $b=0$, Eq.~(\ref{pxi1}) is reduced to the ideal gas result: \begin{equation}\label{ideal} p^{id}(T,\mu)~=~T~\exp(\mu/T)~\phi(T;m)~=~T~n^{id}(T,\mu)~. \end{equation} Eq.~(\ref{pxi1}) can therefore be written in the form \cite{ris91}: \begin{equation}\label{nonideal} p(T,\mu)~=~ p^{id}\left(T,\mu-\frac{bp(T,\mu)}{T}\right)~. \end{equation} It can also be presented in the form of Eq.~(\ref{steq1}) with $N=N^*(V,T,\mu)$, which demonstrates explicitly the equivalence between the canonical and grand canonical formulations at $V\rightarrow \infty$. \section{Two-Component VdW Gas} In the case of two particle species the CPF has the following form: \begin{eqnarray}\label{cpartp} Z(V,T,N_{1},N_{2})& =& \frac{1}{N_{1}!
N_{2}!} \int ~ \prod_{i=1}^{N_1}\frac{d^3 p_i^{(1)} d^3 r_i^{(1)}} {(2\pi)^3} \exp\left(- \frac{\sqrt{m_1^2 + (p_i^{(1)})^2}}{T}\right)\\ & \times & \prod_{k=1}^{N_2}\frac{d^3 p_k^{(2)} d^3 r_k^{(2)}}{(2\pi)^3} \exp\left(- \frac{\sqrt{m_2^2 + (p_k^{(2)})^2}}{T}\right) ~ \exp\left(-\frac{U^{(1,2)}}{T}\right) \nonumber ~, \end{eqnarray} where $m_1$, $N_{1}$ ($m_2$, $N_{2}$) are the mass and number of particles of the 1-st (2-nd) species, \begin{eqnarray}\label{potential} U^{(1,2)}~& =&~ \sum_{1\leq i < j \leq N_{1}} u_{11} (|\vec{r}_{i}^{(1)}-\vec{r}_{j}^{(1)}|) ~+~ \sum_{1\leq k < l \leq N_{2}} u_{22} (|\vec{r}_{k}^{(2)}-\vec{r}_{l}^{(2)}|) \\ &+&~ \sum_{i=1}^{N_{1}} \sum_{k=1}^{N_{2}} u_{12} (|\vec{r}_{i}^{(1)}-\vec{r}_{k}^{(2)}|)~.\nonumber \end{eqnarray} It should be mentioned that in the case of {\it two} particle species $U^{(1,2)}$ contains {\it three} types of two-particle potentials. While the potentials $u_{11}$ and $u_{22}$ describe interactions between particles of the same species and can be handled similarly to the potential $u$ of the one-component case, the potential $u_{12}$, describing interactions between particles of different species, requires a special treatment and prevents a straightforward generalization of the one-component VdW equation to the two-component gas. The integration of (\ref{cpartp}) over the particle momenta gives the following expression for the CPF \begin{equation} Z(V,T,N_{1},N_{2}) = \frac{\left[ \phi (T;m_{1}) \right ]^{N_{1}} \left[ \phi (T;m_{2}) \right ]^{N_{2}}}{N_{1}! N_{2}!} ~\int ~ \prod_{i=1}^{N_1} d^3 r_i^{(1)} \prod_{k=1}^{N_2} d^3 r_k^{(2)} \exp\left(-\frac{U^{(1,2)}}{T}\right)~. \label{cpart} \end{equation} The next step, however, involves Mayer functions of {\it three} types ($p,q$=1,1;2,2;1,2): \begin{equation}\label{fij} f^{(pq)}_{ik}~\equiv~\exp\left( -\frac{u_{pq}(|\vec{r}_{i}^{(p)}-\vec{r}_{k}^{(q)}|)}{T} \right)~-~1~, ~~~~p,q=1,2~ \end{equation} (note that $f^{(12)}_{il}=f^{(21)}_{li}$).
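For the hard-core interaction each Mayer function equals $-1$ for separations below the contact distance $R_p+R_q$ and zero above it, so the excluded-volume integrals of the type (\ref{b}) can be checked by direct quadrature. A minimal numerical sketch (the radii values are hypothetical, in arbitrary length units):

```python
import math

def b_excluded(R1, R2, n_steps=4000):
    """Compute -1/2 * integral d^3r f(r) for a hard-core Mayer function
    by midpoint quadrature in the radial variable."""
    c = R1 + R2                      # contact distance of the two rigid balls
    h = 2.0 * c / n_steps            # integrate out to r = 2c (f = 0 beyond c anyway)
    total = 0.0
    for i in range(n_steps):
        r = (i + 0.5) * h            # midpoint of the i-th radial shell
        f = -1.0 if r < c else 0.0   # hard-core Mayer function
        total += 4.0 * math.pi * r * r * f * h
    return -0.5 * total

R = 0.5                              # hypothetical hard-core radius
print(b_excluded(R, R), 16.0 * math.pi * R**3 / 3.0)   # both close to 2.0944
```

With $R_1=R_2=R$ the quadrature reproduces $b=\frac{16}{3}\pi R^{3}$ of Eq.~(\ref{b0}); for unequal radii it gives $\frac{2}{3}\pi(R_1+R_2)^{3}$.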
The integrand of (\ref{cpart}) can be rewritten as \begin{eqnarray}\label{rinteg} \exp\left(- \frac{U}{T}\right) &=& \prod_{k=1}^{N_{w}-1} \prod_{l=k+1}^{N_{w}} [ 1+f^{(ww)}_{kl} ]~ \prod_{i=1}^{N_{v}} \left( \prod_{j=i+1}^{N_{v}} [1+f^{(vv)}_{ij}]~ \prod_{m=1}^{N_{w}} [1+f^{(vw)}_{im}] \right) \\ & \cong & \prod_{k=1}^{N_{w}-1} [ 1+ \sum_{l=k+1}^{N_{w}} f^{(ww)}_{kl} ]~ \prod_{i=1}^{N_{v}} [1+ \sum_{j=i+1}^{N_{v}} f^{(vv)}_{ij} + \sum_{m=1}^{N_{w}} f^{(vw)}_{im}] , \nonumber \end{eqnarray} where $v,w~=~1,2$ and $N_{v} \ge N_{w}$. For each type of Mayer function we introduce the notation \begin{equation}\label{bpq} \int d^{3} r^{(p)}_{i} f^{(pq)}_{ij} = - 2 b_{pq}(T,\vec{r}^{(q)}_{j})~. \end{equation} For the rigid ball model at $V^{1/3} \gg \max(R_1,R_2)$ ($R_p$ is the radius of a particle of the $p$-th species, $p=1,2$) it yields \begin{equation}\label{bpq0} b_{pq} = \frac{2}{3} \pi (R_p+R_q)^{3}~. \end{equation} Substituting (\ref{rinteg}) and (\ref{bpq}) into (\ref{cpart}) one gets \begin{eqnarray}\label{zn1n2} && Z(V,T,N_{1},N_{2}) = \frac{1}{N_{1}! N_{2}!} \left[ \phi (T;m_{1}) \right ]^{N_{1}} \left[ \phi (T;m_{2}) \right ]^{N_{2}} \prod_{k=1}^{N_{w}} (V - 2 (N_{w} - k ) b_{ww} ) \\ & & \times \prod_{i=1}^{N_{v}} (V - 2 (N_{v} - i) b_{vv} - 2 N_{w} b_{vw} ) = \frac{1}{N_{1}! N_{2}!} \left[ \phi (T;m_{1}) \right ]^{N_{1}} \left[ \phi (T;m_{2}) \right ]^{N_{2}} V^{N_{1} + N_{2}}\nonumber \\ & & \times \exp \left[ \sum_{k=1}^{N_{w}} \log \left(1 - 2 (N_{w} - k ) \frac{b_{ww}}{V} \right) + \sum_{i=1}^{N_{v}} \log \left(1 - 2 (N_{v} - i) \frac{b_{vv}}{V} - 2 N_{w} \frac{b_{vw}}{V} \right) \right]~. \nonumber \end{eqnarray} We again assume that the gas is rarefied and that the total proper volume of all particles is much smaller than the total volume of the system. Then it follows from Eq.~(\ref{zn1n2}): \begin{eqnarray} Z(V,T,N_{1},N_{2}) &\cong & \frac{1}{N_{1}!
N_{2}!} \left[ \phi (T;m_{1}) \right ]^{N_{1}} \left[ \phi (T;m_{2}) \right ]^{N_{2}} V^{N_{1} + N_{2}} \\ & & \times \exp \left[ - \sum_{k=1}^{N_{w}} 2 (N_{w} - k ) \frac{b_{ww}}{V} - \sum_{i=1}^{N_{v}} \left( 2 (N_{v} - i) \frac{b_{vv}}{V} + 2 N_{w} \frac{b_{wv}}{V} \right) \right]~. \nonumber \end{eqnarray} Performing the summation in the exponential \begin{eqnarray} Z(V,T,N_{1},N_{2}) &\cong& \frac{1}{N_{1}! N_{2}!} \left[ \phi (T;m_{1}) \right ]^{N_{1}} \left[ \phi (T;m_{2}) \right ]^{N_{2}} V^{N_{1} + N_{2}} \nonumber \\ & & \times \exp \left( - N_{2} (N_{2} - 1) \frac{b_{22}}{V} - N_{1} (N_{1} - 1) \frac{b_{11}}{V} - 2 N_{1} N_{2} \frac{b_{12}}{V} \right) \label{cparte} \end{eqnarray} and introducing quantities $\tilde{b}_{12}$ and $\tilde{b}_{21}$ constrained by the condition \begin{equation}\label{constr1} \tilde{b}_{12} + \tilde{b}_{21} =2 b_{12}~, \end{equation} we can rewrite the CPF in the form \begin{eqnarray} Z(V,T,N_{1},N_{2}) &\cong& \frac{1}{N_{1}! N_{2}!} \left[ \phi (T;m_{1}) \right ]^{N_{1}} \left[ \phi (T;m_{2}) \right ]^{N_{2}} V^{N_{1} + N_{2}} \left[ \exp \left( - N_{1} \frac{b_{11}}{V} - N_{2} \frac{\tilde{b}_{21}}{V} \right) \right]^{N_{1}} \nonumber \\ & & \times \left[ \exp \left( - N_{2} \frac{b_{22}}{V} - N_{1} \frac{\tilde{b}_{12}}{V} \right) \right]^{N_{2}}~. \label{cpartet} \end{eqnarray} Imposing the additional constraints $2 N_{v}\tilde{b}_{vw}/V \ll 1$ we obtain the final expression for the partition function of the two-component VdW gas \begin{eqnarray}\label{cpfa} Z(V,T,N_{1},N_{2}) ~&\cong&~ \frac{1}{N_{1}! N_{2}!} \left[ \phi (T;m_{1}) \right ]^{N_{1}} \left[ \phi (T;m_{2}) \right ]^{N_{2}} \nonumber \\ && \times (V - N_{1} b_{11} - N_{2} \tilde{b}_{21})^{N_{1}} (V - N_{2} b_{22} - N_{1} \tilde{b}_{12})^{N_{2}}~. 
\end{eqnarray} Substituting the expression (\ref{cpfa}) into the formula for the pressure one obtains \begin{equation} p(V,T,N_{1},N_{2})\equiv T \frac{\partial \log Z(V,T,N_{1},N_{2})}{\partial V} = \frac{N_{1}T}{V - b_{11}N_{1} - \tilde{b}_{21}N_{2} } + \frac{N_{2}T}{V - b_{22}N_{2} - \tilde{b}_{12}N_{1} }~. \label{steq} \end{equation} The equation of state (\ref{steq}) is consistent with the virial expansion up to second order and can be considered as a generalization of the VdW equation to the two-component system. We define $\tilde{b}_{12}$ and $\tilde{b}_{21}$ as follows: \begin{equation}\label{btilde} \tilde{b}_{12}~ =~ 2 \frac{b_{11} b_{12}}{b_{11}+b_{22}}~,~~~ \tilde{b}_{21}~ = ~2 \frac{b_{22} b_{12}}{b_{11}+b_{22}}~. \end{equation} Eq.~(\ref{btilde}) satisfies the constraint (\ref{constr1}) and leads to the correct physical behavior in the limiting cases when both particle radii are equal to each other or when one of the particle radii is equal to zero. If the two species have equal radii, $R_1=R_2$, equation (\ref{steq}) is reduced to the one-component VdW equation (\ref{steq1}) with $N=N_1+N_2$ and $b=b_{11}=b_{22}=\tilde{b}_{12}=\tilde{b}_{21}$. Note that in the general case, $R_1\neq R_2$~, the VdW excluded volumes in Eq.~(\ref{steq}) are different for different particle species. The transformation to the grand canonical ensemble is similar to that of the one-component case. The GCPF has the form \begin{equation} {\cal Z}(V,T,\mu_{1},\mu_{2}) ~=~ \sum_{N_{1}=0}^{\infty} \sum_{N_{2}=0}^{\infty} \exp \left( \frac{ \mu_{1}N_{1} + \mu_{2}N_{2} } {T} \right)~ Z(V,T,N_{1},N_{2})~, \label{gcpfe} \end{equation} where $\mu_{q}$ ($q=1,2$) are the chemical potentials of the particle species. The total number of terms, $N_{0}$, in the double sum (\ref{gcpfe}) is of the order of $V^2$.
Since $V^{-1}\log N_{0}\rightarrow 0$ at $ V \rightarrow \infty$, the pressure can be expressed via the maximum term in the sum (\ref{gcpfe}): \begin{equation}\label{pns} p(T,\mu_{1},\mu_{2})~ = \lim_{V \rightarrow \infty} \frac{T}{V} \log \left[ \exp \left(\frac{\mu_{1}N_{1}^{*} + \mu_{2}N_{2}^{*}}{T} \right) Z(V,T,N_{1}^{*},N_{2}^{*}) \right]~. \end{equation} In the VdW approximation (\ref{cpfa}) the last expression takes the form \begin{equation}\label{pnsvdw} p(T,\mu_{1},\mu_{2}) = \lim_{V \rightarrow \infty} \frac{T}{V}~ \log \left[ \frac{A_{1}^{N_{1}^{*}} A_{2}^{N_{2}^{*}} (V - N_{1}^{*} b_{11} - N_{2}^{*} \tilde{b}_{21} )^{N_{1}^{*}} (V - N_{2}^{*} b_{22} - N_{1}^{*} \tilde{b}_{12} )^{N_{2}^{*}} }{N_{1}^{*}! N_{2}^{*}!} \right]~, \end{equation} where $A_{p}=\exp( \mu_{p}/T ) ~\phi(T;m_{p})$~. Let us show that the pressure (\ref{pnsvdw}) can be calculated by the formula \begin{equation}\label{pxi} p(T,\mu_{1},\mu_{2}) = T(\xi_{1} +\xi_{2})~, \end{equation} where the values of $\xi_{q}$ are found from the set of coupled transcendental equations \begin{eqnarray} \xi_{1} &=& A_{1} \exp(- b_{11} \xi_{1} - \tilde{b}_{12} \xi_{2})~, \label{eqxi1}\\ \xi_{2} &=& A_{2} \exp(- b_{22} \xi_{2} - \tilde{b}_{21} \xi_{1})~. \label{eqxi2} \end{eqnarray} Using the asymptotic representation for the logarithm of the $\Gamma$-function it is easy to check that the values of $N_{p}^{*}$ maximizing the argument of the logarithm in Eq.~(\ref{pnsvdw}) are given by the formula \begin{equation}\label{ns} N_{p}^{*}~\cong~ V n_{p}~, \end{equation} where $n_{p}=n_p(T,\mu_1,\mu_2)$ are related to $\xi_{p}$ via the equations \begin{eqnarray} \xi_{1}~ =~ \frac{n_{1}}{1 - n_{1} b_{11} - n_{2} \tilde{b}_{21}}~, \label{eqn1}\\ \xi_{2}~ = ~ \frac{n_{2}}{1 - n_{2} b_{22} - n_{1} \tilde{b}_{12}}~. \label{eqn2} \end{eqnarray} The substitution of Eq.~(\ref{ns}) into Eq.~(\ref{pnsvdw}) yields the formula (\ref{pxi}).
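The coupled transcendental equations (\ref{eqxi1}) and (\ref{eqxi2}) are easily solved by a damped fixed-point iteration. Below is a minimal sketch (the values of $A_p$, $T$ and the hard-core radii are hypothetical); it also checks that for $R_1=R_2$ the sum $\xi_1+\xi_2$ satisfies the one-component equation (\ref{eqxi}) with $A=A_1+A_2$:

```python
import math

def vdw_b(R1, R2):
    # excluded-volume coefficients, Eqs. (bpq0) and (btilde)
    b11 = (2.0 / 3.0) * math.pi * (2.0 * R1) ** 3
    b22 = (2.0 / 3.0) * math.pi * (2.0 * R2) ** 3
    b12 = (2.0 / 3.0) * math.pi * (R1 + R2) ** 3
    bt12 = 2.0 * b11 * b12 / (b11 + b22)
    bt21 = 2.0 * b22 * b12 / (b11 + b22)
    return b11, b22, bt12, bt21

def solve_two_component(A1, A2, R1, R2, tol=1e-12):
    # damped fixed-point iteration for Eqs. (eqxi1), (eqxi2)
    b11, b22, bt12, bt21 = vdw_b(R1, R2)
    xi1, xi2 = A1, A2                      # ideal-gas starting point
    for _ in range(100000):
        f1 = A1 * math.exp(-b11 * xi1 - bt12 * xi2)
        f2 = A2 * math.exp(-b22 * xi2 - bt21 * xi1)
        if abs(f1 - xi1) < tol and abs(f2 - xi2) < tol:
            break
        xi1 += 0.5 * (f1 - xi1)            # damping stabilizes the iteration
        xi2 += 0.5 * (f2 - xi2)
    return xi1, xi2

def solve_one_component(A, b, tol=1e-12):
    # scalar equation xi = A exp(-b xi), Eq. (eqxi)
    xi = A
    for _ in range(100000):
        f = A * math.exp(-b * xi)
        if abs(f - xi) < tol:
            break
        xi += 0.5 * (f - xi)
    return xi

T = 30.0                                   # hypothetical temperature scale
xi1, xi2 = solve_two_component(0.3, 0.2, 0.5, 0.5)
print("pressure p = T*(xi1 + xi2) =", T * (xi1 + xi2))
```

Each $\xi_p$ comes out below its ideal-gas value $A_p$, reflecting the excluded-volume suppression; the pressure then follows from Eq.~(\ref{pxi}).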
The set of linear equations (\ref{eqn1}) and (\ref{eqn2}) can be solved for $n_{p}$: \begin{eqnarray} n_{1}~ &=&~ \frac{\xi_{1}[ 1 + \xi_{2} (b_{22}-\tilde{b}_{21}) ] } {1 + \xi_{1} b_{11} + \xi_{2} b_{22} + \xi_{1} \xi_{2} (b_{11} b_{22} - \tilde{b}_{12} \tilde{b}_{21})} \label{n1}~,\\ n_{2}~ &=&~ \frac{\xi_{2}[1 + \xi_{1}(b_{11}-\tilde{b}_{12})]} {1 + \xi_{1} b_{11} + \xi_{2} b_{22} + \xi_{1} \xi_{2} (b_{11} b_{22} - \tilde{b}_{12} \tilde{b}_{21})}~. \label{n2} \end{eqnarray} Similarly to the one-component case, the quantities $n_{p}$ are the particle number densities. One can readily check that the definition $n_{p} = \partial p(T,\mu_{1},\mu_{2})/\partial \mu_{p}$ leads to Eqs.~(\ref{n1}) and (\ref{n2}), provided that the formulae (\ref{pxi}--\ref{eqxi2}) are taken into account. \section{Multicomponent VdW Gas} The above considerations can be generalized to the multicomponent VdW gas with an arbitrary number of particle species. After the integration over the particle momenta and simplifications similar to those above, one gets the expression for the CPF of the $K$-component VdW gas: \begin{equation}\label{cpfaK} Z(V,T, N_1,...,N_K) ~\cong~ \prod_{q=1}^{K} \frac{1}{N_{q}!} \left[ \phi (T;m_{q}) \right ]^{N_{q}} \left( V - \sum_{p=1}^{K} N_{p} \tilde{b}_{pq} \right)^{N_{q}}~, \end{equation} where \begin{eqnarray} \tilde{b}_{pq}~ &=&~ \frac{2 b_{pp} b_{pq}}{b_{pp}+b_{qq}}~,\\ b_{pq}~& =&~ b_{qp} ~=~ \frac{2}{3} \pi (R_p + R_q)^{3}~. \end{eqnarray} Here $R_q$ is the radius of a particle of species $q$, and $\phi (T;m_{q})$ is defined by Eq.~(\ref{phi}). The CPF (\ref{cpfaK}) yields the VdW equation of state for a $K$-component rigid ball gas \begin{equation} p(V,T, N_{1},...,N_K )~ =~ \sum_{q=1}^{K} ~\frac{N_{q}T}{V - \sum_{p=1}^{K} \tilde{b}_{pq}N_{p} }.
\label{steqK} \end{equation} The pressure in the grand canonical ensemble is given by the formula \begin{equation}\label{gcpK} p(T, \mu_{1},...,\mu_K )~ =~ T~ \sum_{p=1}^{K} \xi_{p}~, \end{equation} where $\mu_p$ are the chemical potentials ($p=1,...,K$). The functions $\xi_{q}$ satisfy the set of coupled transcendental equations \begin{equation} \xi_{p}~ =~ A_{p}~ \exp \left( - \sum_{q=1}^{K} \tilde{b}_{pq} \xi_{q} \right)~, \end{equation} where $A_p=\exp(\mu_p/T)\phi(T;m_p)$. The particle densities $n_{p}=n_p(T,\mu_1,...,\mu_K)$ are obtained as the solutions of the following set of coupled linear equations \begin{equation} \xi_{p}~ =~ \frac{n_{p}}{1 - \sum_{q=1}^{K} n_{q} \tilde{b}_{qp} }~. \label{eqnK} \end{equation} In the canonical ensemble formulation of the HG model the numbers $N_1,...,N_K$ are not fixed: they cannot be fixed because of the inelastic reactions between the hadrons. What are fixed instead are the conserved charges: the baryonic number $B$, the strangeness $S$ (strangeness is conserved as we neglect weak decays) and the electric charge $Q$. The CPF then has the form \begin{eqnarray}\label{canonical} Z(V,T,B,S,Q)~&\equiv&~\sum_{N_1,...,N_K=1}^{\infty} ~Z(V,T,N_1,...,N_K)~\delta (B-\sum_{i=1}^{K}b_i N_i)~\\ &\times&~\delta (S-\sum_{i=1}^{K} s_i N_i)~ \delta(Q-\sum_{i=1}^{K} q_iN_i)~,\nonumber \end{eqnarray} where $b_i,s_i$ and $q_i$ are the baryonic number, strangeness and electric charge of the $i$-th hadron species. In the application of the thermal HG models to A+A collisions, $B$ equals the number of nucleons participating in the reaction, $S=0$ and $Q=\alpha B$ with $\alpha \approx 0.5$ for intermediate nuclei and $\alpha \approx 0.4$ for heavy nuclei. The CPF (\ref{canonical}) can be calculated with the VdW input (\ref{cpfaK}) for $Z(V,T,N_1,...,N_K)$. The presence of the Kronecker $\delta$-functions in Eq.~(\ref{canonical}) makes the canonical ensemble rather complicated; the grand canonical formulation, justified for large systems, is much more convenient.
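To illustrate the central point that the excluded volumes seen by different species differ, one can tabulate the coefficients $\tilde{b}_{pq}$ for a toy set of radii (hypothetical values, arbitrary length units) and verify the pairwise relation $\tilde{b}_{pq}+\tilde{b}_{qp}=2b_{pq}$, the multicomponent analogue of (\ref{constr1}), together with the equal-radii reduction $\tilde{b}_{pq}=b_{pq}=4v$:

```python
import math

def b_matrix(radii):
    # b_pq = (2*pi/3)(R_p + R_q)^3 and the asymmetric tilde-b entering Eq. (cpfaK)
    K = len(radii)
    b = [[(2.0 / 3.0) * math.pi * (radii[p] + radii[q]) ** 3
          for q in range(K)] for p in range(K)]
    bt = [[2.0 * b[p][p] * b[p][q] / (b[p][p] + b[q][q])
           for q in range(K)] for p in range(K)]
    return b, bt

radii = [0.3, 0.5, 0.8]          # hypothetical hard-core radii
b, bt = b_matrix(radii)
for row in bt:
    print([round(x, 3) for x in row])
```

The matrix $\tilde{b}_{pq}$ is not symmetric: a small particle excludes less volume for itself than a large partner excludes for it, which is precisely why the naive substitution $V\rightarrow V-\sum_i b_{ii}N_i$ fails for unequal radii.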
In this case the system properties are defined by the pressure function (\ref{gcpK}) with the chemical potentials $\mu_i$ ($i=1,...,K$) defined as \begin{equation}\label{mui} \mu_i~=~b_i\mu_B~+~s_i\mu_S~+~q_i\mu_Q~ \end{equation} in terms of the baryonic $\mu_B$~, strange $\mu_S$~ and electric $\mu_Q$~ chemical potentials. They are chosen, at given $V$ and $T$, to fix the {\it average} values $\overline{B}$, $\overline{S}=0$ and $\overline{Q}=\alpha \overline{B}$. \section{Summary} In the present paper we have proposed a generalization of the VdW excluded volume procedure to the multicomponent hadron gas. Both the canonical and the grand canonical ensemble formulations are presented. Different hard-core radii for different hadron species have been discussed in the literature (e.g. [8-12]), where the excluded volume procedure was based on the substitution $V\rightarrow V-\sum_i b_{ii}N_i$. This ansatz is shown to be incorrect: for unequal hard-core radii the VdW excluded volumes seen by the different hadron species are different. The thermodynamical quantities in the grand canonical formulation are found as the solution of a coupled set of transcendental equations. \vspace{1cm} { \bf Acknowledgements} We thank K.A. Bugaev, A.P. Kobushkin and G.~Yen for discussions and comments. M.I.G. acknowledges the receipt of a DFG Professorship at the ITP, Goethe University of Frankfurt.
\section{Introduction} In recent decades, a number of new materials such as cuprates, magnesium diboride, chalcogenides and iron pnictides with high critical temperatures have been found \cite{Hosono2008,Mazin02,Stewart11,Paglione10,Johnston10,Hosono15}. This generated numerous proposals for the mechanisms of superconductivity and the symmetry of the order parameters \cite{Cruz08,Hidenori09,Pratt09,Hirsch2016}. The most recent findings are iron-based superconductors (FeBS) having critical temperatures up to 100~K \cite{Feng2015}. The important issue of the pairing mechanisms and the symmetry of the order parameter in these materials is still a matter of extensive debate. As shown by DFT calculations and confirmed by ARPES, these are in fact multiband materials with either four or five quasi-2D disconnected Fermi pockets \cite{Singh2008,Kontani2015}. The hole pockets are centred at $\Gamma =(0,0)$ and the electron pockets are centred at $M=(\pi,\pi)$. The nesting between the electron and hole pockets on the one hand leads to strong spin fluctuations, which favor $s_{\pm}$ superconductivity, with the order parameter having opposite signs on the electron and the hole pockets \cite{Johannes2008,McElroy2003,Mazin2010,Hirschfeld2011,Chubukov2008}. On the other hand it may enhance orbital fluctuations, favoring $s_{++}$ superconductivity \cite{Hanaguri2010}, with the order parameter having the same sign on the electron and the hole pockets. Therefore, such a sign change of the order parameter between the electron and hole pockets should hint at the possible pairing mechanism \cite{Korshunov2011,Monthoux2001,Golubov2008,Kuroki8,Kordyuk12,Werner2012,Chen2014}. Even though the symmetry of the order parameter was determined for some representatives of the FeBS, e.g. in inelastic neutron scattering experiments, this still does not give the complete picture for all compounds.
The underlying reason is the multiband character of the Fermi surfaces in the FeBS. In this case the order parameter may change because of impurities, as was demonstrated theoretically \cite{Efremov2011,Efremov13,Korshunov2014} and experimentally \cite{shilling2016}: with doping it may change either to the d-wave symmetry \cite{Reid2012,Hafeiz2013,Grinenko2014,Grinenko20142} or its sign \cite{Wang2016}. Therefore, a universal tool to ascertain the pairing symmetry is much needed. In contrast to the high-$T_{c}$ cuprates, phase-sensitive experiments using FeBS-based Josephson junctions have not been performed yet. The main difficulty for such a multiband superconductor is the need to design the experimental geometry in such an ingenious way that the current through one contact is dominated by carriers with a positive sign of the order parameter, while in the other contact the opposite case occurs. The isotropic nature of a conventional s-wave state defeats the effort in this direction; the extended s-wave case, however, comes directly under the realm of such experimental investigation \cite{Golubov2013,Mazin2009,Burmistrova2015,Golubov2009}. One of the methods for resolving the symmetry of the order parameter is the study of the local density of states (LDOS) modulations due to the quasiparticle interference (QPI) in the presence of impurities, which could provide interesting information on the pairing symmetry of the gap function. STM studies of conductance modulations have been utilized in earlier investigations as direct probes of the quantum interference of electronic eigenstates in metals \cite{Crommie1993}, semiconductors \cite{Kanisawa2001} and cuprates \cite{Hoffman2002,Hanaguri2007,Howald2003}.
In Fe-based superconductors, theoretical predictions for the dispersion of the QPI vector peaks have been made with models with electron and hole pockets for the case of $s_{\pm}$ superconducting order \cite{Coleman09,Skyora2011,Hirschfeld15,Scalapino2003,Scalapino2012}. In view of the above discussion, it would be helpful to formulate a model for the QPI that reveals qualitative differences between the responses in the $s_{\pm}$ and $s_{++}$ pairing states. In this work, we formulate such a model for multiband superconductors by employing the Eliashberg formalism, which naturally takes into account the temperature and retardation effects. We discuss the temperature dependence of the QPI spectral function and emphasize the finite-temperature effect on the distinction between the two symmetry cases, viz. $s_{\pm}$ and $s_{++}$. We show, both analytically and numerically, that within the Born approximation the quasiparticle interference response function, considered as a function of energy, has three singularities. Two of these correspond to the values of the energy gaps and the third depends on both the gaps and the transferred momentum. We argue that only the lowest energy singularity may be used as a universal tool for the determination of the phase shift of the order parameter between the bands. The robustness of the sign of the response-function peak near the smaller gap value in both symmetry cases is a promising feature that can be used to identify the pairing symmetry. The paper is organized as follows. In Section II we briefly introduce the main object of the present study, namely the QPI response function, and the Eliashberg approach for the single-particle correlation functions in multiband systems with strong coupling. The theoretical background used to obtain the LDOS and the response function is explained in Section III, where we numerically analyse the response function in strong coupling for the inter- and intraband cases.
In Section IV, the general case away from the ideal nesting condition, with non-zero band ellipticity $\epsilon$ and shifted Fermi surface energy $\delta\mu$, is discussed. We show the dependence of the QPI response function on the inherently present large-momentum-transfer processes that can probe the sign-changing gap symmetry. In Section V we conclude the paper with a summary of our results. \section{The Eliashberg Approach} To find the single-particle correlation functions in multiband systems with strong coupling we employ the Eliashberg approach \cite{Carbotte90,Scalapino66,Allen82,Marsiglio2008,Scalapino69,Maksimov82,Parker08,Vonsovskii82}. For the sake of simplicity, the consideration here is restricted to the two-band scenario; the generalization to a larger number of bands is straightforward. Since the superconducting gap functions have a weak momentum dependence, systems like the Fe-based superconductors can be successfully described in terms of the quasi-classical Green functions $\hat{\mathbf{g}}_{\alpha}(\omega)$: \begin{equation} \hat{\mathbf{g}}_{\alpha}(\omega)=N_{\alpha}(0)\int d\xi\,\hat{\mathbf{G}}_{\alpha}(\mathbf{k},\omega) \end{equation} where $\alpha=a,b$ is the band index and $N_\alpha(0)$ is the density of states. In the following we will use the retarded Green functions throughout and therefore omit the index R. In the Nambu notation the full Green functions have the form: \begin{equation} \hat{\mathbf{G}}_{\alpha}(\mathbf{k},\omega)=\frac{\tilde{\omega}_{\alpha}\hat{\tau}_{0}+ \xi_{\alpha, \mathbf{k}}\hat{\tau}_{3}+\tilde{\phi}_{\alpha}\hat{\tau}_{1}}{\tilde{\omega}_{\alpha}^{2}- \xi_{\alpha, \mathbf{k}}^{2}- \tilde{\phi}_{\alpha}^{2}} \end{equation} where the $\hat{\tau}_{i}$ denote Pauli matrices in Nambu space. Here, $\xi_{\alpha, \mathbf{k}}= \epsilon_{\alpha,\mathbf{k}} - \epsilon_F$ is the dispersion counted from the Fermi energy.
The order parameter $\tilde{\phi}_{\alpha}=\tilde{\phi}_{\alpha}(\omega)$ and the renormalized frequency $\tilde{\omega}_{\alpha}=\tilde{\omega}_{\alpha}(\omega)$ are complex functions of $\omega$. Correspondingly, the quasi-classical $\xi$-integrated Green functions can be written as: \begin{eqnarray} g_{0\alpha}(\omega)&=&-i\pi N_{\alpha}\frac{\omega}{\sqrt{\omega^{2}-\tilde{\Delta}_{\alpha}^{2}(\omega)}}, \\ g_{1\alpha}(\omega)&=&-i\pi N_{\alpha}\frac{\tilde{\Delta}_{\alpha}(\omega)}{\sqrt{\omega^{2}- \tilde{\Delta}_{\alpha}^{2}(\omega)}}, \end{eqnarray} where $\tilde{\Delta}_{\alpha}(\omega) = \tilde\phi_\alpha(\omega)/Z_\alpha(\omega)$ and $Z_\alpha(\omega) = \tilde\omega_\alpha(\omega)/\omega$ are complex functions. The quasi-classical Green functions are obtained by numerical solution of the Eliashberg equations \cite{Scalapino69,Parker08,Maksimov82,Vonsovskii82}: \begin{equation} \tilde{\omega}_{\alpha}(\omega)-\omega=\sum_{\beta}\,\int\limits_{-\infty}^{\infty}\!dz\,K_{\alpha\beta}^{\tilde{\omega}}(z,\omega)\,Re\frac{\tilde{\omega}_{\beta}(z)}{\sqrt{\tilde{\omega}_{\beta}^{2}(z)-\tilde{\phi}_{\beta}^{2}(z)}}, \nonumber \end{equation} \begin{equation} \tilde\phi_\alpha(\omega) = \sum_{\beta } \int\limits_{-\infty}^{\infty} dz\, K^{\tilde\phi}_{\alpha \beta} (z,\omega)\, Re \frac{\tilde\phi_\beta(z) }{\sqrt{ \tilde\omega^2_\beta(z) - \tilde\phi^2_\beta(z) }} . \label{eq.Elias.2} \end{equation} The kernels $K^{\tilde\phi, \tilde\omega}_{\alpha \beta}(z,\omega)$ of the fermion-boson interaction have the standard form \cite{Scalapino69}: \begin{equation} K_{\alpha \beta}^{\tilde\phi, \tilde\omega} (z, \omega) = \int\limits_{-\infty}^\infty d \Omega\, \frac{\lambda_{\alpha \beta}^{\tilde\phi, \tilde\omega} B(\Omega)}{2} \left[ \frac{\tanh \frac{z}{2T} + \coth \frac{\Omega}{2T}}{z+ \Omega-\omega - i \delta} \right].
\end{equation} For simplicity, we use the same normalized spectral function of the electron-boson interaction $B(\Omega)$, obtained for spin fluctuations from inelastic neutron scattering experiments \cite{Inosov09}, for all the channels. The maximum of the spectrum is at $\Omega_{sf} = 144~cm^{-1}$, which determines the natural energy scale \cite{Efremov13}. This spectrum gives a rather good description of thermodynamical \cite{Popo2010} and optical \cite{Charnukha2010,Charnukha2014} properties in the SC as well as normal states \cite{Dolgov2011}. In what follows, all temperatures and energies are expressed in units of inverse cm (i.e. $cm^{-1}$). The matrix elements $\lambda^{\tilde\phi}_{\alpha\beta}$ are positive for attractive interactions and negative for repulsive ones. The symmetry of the order parameter in the clean case is determined solely by the off-diagonal matrix elements. The case $sign\,\lambda^{\tilde\phi}_{\alpha\beta} = sign\,\lambda^{\tilde\phi}_{\beta\alpha} > 0$ corresponds to $\spp$ superconductivity and $sign\,\lambda^{\tilde\phi}_{\alpha\beta} = sign\,\lambda^{\tilde\phi}_{\beta\alpha} < 0$ to the $\spm$ case. The matrix elements $\lambda^{\tilde\omega}_{\alpha\beta}$ have to be positive and are chosen as $\lambda^{\tilde\omega}_{\alpha\beta} = |\lambda^{\tilde\phi}_{\alpha\beta}|$. Further, for simplicity, we will omit the superscripts $\tilde\omega$ and $\tilde\phi$, denoting $\lambda^{\tilde\phi}_{\alpha\beta} = \lambda_{\alpha\beta}$ and $\lambda^{\tilde\omega}_{\alpha\beta} = |\lambda_{\alpha\beta}|$. Additionally, we will use the notation $\Delta_a$ and $\Delta_b$ for the real band gap energy values. \begin{figure}[] \includegraphics[scale=0.45]{fig1.eps} \caption{Density of states for the bands $a$ and $b$ calculated in strong coupling at various temperatures. The coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.
The superconducting critical temperature is $T_c=28\,cm^{-1}$. The DOS is normalized with respect to the normal state and is set equal to 1 for each band.} \label{fig.DOS} \end{figure} In the strong-coupling approach, as opposed to the weak-coupling limit, the gap functions are complex and frequency dependent, $\tilde\phi_\alpha=\tilde\phi_\alpha(\omega)$. One of the consequences is the broadening of the quasiparticle peaks and the appearance of a finite density of states $N_\alpha(\omega) = -\frac{1}{\pi}Im\, g_{0\alpha}(\omega)$ at zero energy. This behavior is illustrated in Fig.~\ref{fig.DOS}. At zero temperature, the DOS in the strong-coupling approach exhibits the coherence peak $N(\omega) \propto 1/\sqrt{\omega-\Delta}$ for $\omega\geq\Delta(\omega)$ and vanishes for $\omega<\Delta(\omega)$, quite similar to the weak-coupling case. But at finite temperatures, the DOS becomes finite for $\omega<\Delta(\omega)$ and the coherence peak is smeared out. This behavior is completely different from the weak-coupling approximation. The reason is that the gap function $\Delta(\omega)$ in the strong-coupling approximation is a complex function. Accounting for the frequency dependence of the gap functions in the QPI is the key issue of the present work. At the same time, one has to point out that DOS measurements are unable to distinguish between $s_{++}$ and $s_{\pm}$ order parameter symmetries (as is seen from Eq.~3, the DOS depends only on $|\Delta(\omega,T)|$). A phase-sensitive QPI calculation is needed to bring out the contrast between the two types of pairing symmetries. \section{Quasiparticle interference.} The STM measures the differential conductance, which is proportional to the local single-particle density of states $N(\mathbf{r},\omega)$: \[ \frac{dI}{dV}(\mathbf{r},\omega) \propto |M(\mathbf{r})|^2 N(\mathbf{r},\omega), \] where $M(\mathbf{r})$ is the local tunnelling matrix element.
The local density of states is related to the single-particle retarded Green function $\hat G^R(\mathbf{r},\mathbf{r},\omega)$: \begin{equation} N(\mathbf{r},\omega) = -\frac{1}{\pi}\,\mbox{Im}\, Tr\left[\frac{1+\tau_3}{2}\,\hat G^R(\mathbf{r},\mathbf{r},\omega)\right]. \end{equation} Here, $Tr[..]$ is taken over both Nambu and band indices. Although the tunnelling matrix element may be important in the multiband case, sharpening the spectral-weight contribution of some orbitals, strong coupling does not affect the tunnelling matrix element. Since we want to focus here on the effects of strong coupling, the consideration is restricted to the impact of a single impurity on the local density of states. In the linear response approximation, the perturbation of the density of states due to an impurity with the point-like scattering potential $\displaystyle\hat{U}(\mathbf{r})=U_{\alpha\beta}\delta(\mathbf{r})\tau_{3}$ reads \cite{Dolgov2009}: \begin{widetext} \ba \delta N(\mathbf{r},\omega) &=& -\frac{1}{\pi}Im\sum_{\alpha,\beta} Tr\left[\frac{1+\tau_{3}}{2}\int dV^{\prime\prime}\,\hat{G}_{clean}^{\alpha}(\mathbf{r}-\mathbf{r}^{\prime\prime},\omega)\hat{U}_{\alpha\beta}(\mathbf{r}^{\prime\prime})\hat{G}_{clean}^{\beta}(\mathbf{r}^{\prime\prime}-\mathbf{r},\omega)\right] \label{eq.dN} \ea \end{widetext} for $\omega>0$. The negative values of $\omega$ can be obtained by the substitution $\tau_3 \to -\tau_3$. Since, in the response function, the bands are considered pairwise within the Born approximation, we will consider below the scattering between two bands, keeping in mind that the full response function is obtained by summing over all band pairs afterwards.
Considering Eq.(\ref{eq.dN}) in momentum space and keeping only the interband impurity scattering, which gives the leading contribution for momenta $\mathbf{q}$ close to the interband vector $\mathbf{Q}$, we define the QPI response function $I(\mathbf{q},\omega)$ as: \[ \delta N(\mathbf{r},\omega) = U_{ab} \int \frac{d^2 q}{(2\pi)^2}\, e^{i \mathbf{q r}}\, I(\mathbf{q},\omega). \] The response function is given by the following expression: \begin{widetext} \be I(\mathbf{q},\omega)=-\frac{1}{2\pi} \int \frac{d^2 p}{(2\pi)^2}\, Im\, Tr\left[\tau_{3} \hat{G}_{clean}^{a}(\mathbf{q+p},\omega)\tau_3 \hat{G}_{clean}^{b}(\mathbf{p},\omega)\right] + (a \leftrightarrow b). \label{eq.qpi.general} \ee \end{widetext} \subsection{The model.} We apply the above formulation to develop a model for the general pnictide case, as discussed below. In the low-energy limit considered here, the spectrum near the Fermi level can be linearized: \begin{equation} \label{eq:11} \xi_b(\mathbf{p}+\mathbf{q}) \approx \beta \xi_a(\mathbf{p}) + \epsilon \cos 2\theta + \delta\mu. \end{equation} Here, $\beta>0$ for impurity scattering between two electron or two hole bands, while $\beta<0$ for scattering between an electron and a hole band. We assume constant densities of states $\displaystyle N_{\alpha}=\int\delta(\xi_{\alpha,\mathbf{p}})\, d^{2}p/(2\pi)^{2}$ and $|\beta| = v_b/v_a$, where $v_{a,b}$ are the Fermi velocities of the two bands. The parameter $\epsilon = \mathbf{k}_{Fy}\mathbf{v}_b - \mathbf{k}_{Fx}\mathbf{v}_b$ characterizes the ellipticity of the electron bands, where $\mathbf{k}_{Fy}$ and $\mathbf{k}_{Fx}$ are the electron band Fermi wave vectors. Here, $\theta$ is the angle between the vectors $\mathbf{p}$ and $\mathbf{q}$. We have $\epsilon=0$ for scattering between two hole bands; otherwise $\epsilon$ is finite.
Finally, $\delta\mu$ accounts for the relative energy shift of the Fermi surfaces and is given by the relation $\delta\mu = (\mathbf{k}_F \mathbf{v}_F)_a - (\mathbf{k}_F \mathbf{v}_F)_b$. \subsection{Scattering at $q=Q$.} Direct integration over $\xi$ and the angle gives the following expression: \begin{equation} I(\mathbf{q}=\mathbf{Q},\omega) = -\frac{\sqrt{N_{a} N_b}}{2}\, Im[K(\omega) F(\omega)], \label{eq.qpi.Q} \end{equation} where the coherence factor $K(\omega)$ is \begin{equation} \label{eq:13} K(\omega)=\left[\frac{\tilde\Delta_{a}\tilde\Delta_b - \omega^2}{E_a E_b} \pm 1\right] \end{equation} and \begin{eqnarray} \label{eq:14} F(\omega) &=& \frac{1}{\sqrt{|\beta|^{-1}\epsilon^2 - \left(\sqrt{|\beta|}Z_{a}E_a + \sqrt{|\beta|^{-1}}(Z_{b}E_b + \delta\mu)\right)^{2}}} \nonumber \\ &+& \frac{1}{\sqrt{|\beta|^{-1}\epsilon^2 - \left(\sqrt{|\beta|}Z_{a}E_a + \sqrt{|\beta|^{-1}}(Z_{b}E_b - \delta\mu)\right)^{2}}}. \label{eq.F1} \end{eqnarray} Here, $E_\alpha = \sqrt{\omega^{2}-\tilde\Delta_{\alpha}^{2}}$ is the quasiparticle energy spectrum. In the coherence factor $K(\omega)$ the sign "+" corresponds to scattering between two electron or two hole bands, while "-" corresponds to scattering between an electron and a hole band. One can immediately notice that the response function for intraband scattering at $q=0$ vanishes due to the coherence factor for all $\omega$. In our study, we focus entirely on the interband-scattering aspect of the phenomenon. This implies the choice of the "-" sign in the relation for the coherence factor given by Eq.(\ref{eq:13}). \subsubsection{Zero ellipticity} The hole bands around the $\Gamma$ point can, to a good approximation, be considered circular ($\epsilon=0$).
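The sign structure of Eq.(\ref{eq.qpi.Q}) near the smaller gap can be illustrated with a minimal numerical sketch (not the code used for the figures). We assume the weak-coupling limit: real gaps, $Z_a=Z_b=1$, $\beta=1$, $\epsilon=\delta\mu=0$, $\sqrt{N_a N_b}=1$; the gap values and the broadening are illustrative numbers, and the principal branch of the complex square root is taken throughout.

```python
import numpy as np

DELTA_A, DELTA_B, ETA = 17.0, 83.0, 0.1   # assumed gaps and broadening (cm^-1)

def response(omega, delta_b_sign):
    """I(q=Q, omega) in the weak-coupling limit; delta_b_sign = +1 (-1)
    realizes the s_++ (s_+-) relative sign of the two gaps."""
    w = omega + 1j * ETA                       # retarded prescription
    d_a, d_b = DELTA_A, delta_b_sign * DELTA_B
    e_a = np.sqrt(w**2 - d_a**2)               # quasiparticle energies
    e_b = np.sqrt(w**2 - d_b**2)
    k = (d_a * d_b - w**2) / (e_a * e_b) - 1.0  # "-": electron-hole scattering
    f = 2.0 / np.sqrt(-(e_a + e_b) ** 2)        # F with eps = delta_mu = 0
    return -0.5 * np.imag(k * f)

omega = 18.0   # just above the smaller gap
print(response(omega, +1.0), response(omega, -1.0))   # positive, negative
```

Evaluated just above $\Delta_a$, the sketch reproduces the sign rule discussed below: the response is positive for $s_{++}$ and negative for $s_{\pm}$.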
For simplicity, in discussing the two cases for the band ellipticity $\epsilon$, we shall assume the system to be in the weak-coupling regime and hence take $\tilde{\Delta}_{\alpha/\beta}$ to be real, writing it as $\Delta_{a/b}$ for the smaller (hole band) and larger (electron band) gap energies, respectively. We start with perfectly matching hole bands ($\delta\mu=0$), having the gap functions $\Delta_a(\omega)>\Delta_b(\omega)$. The same ratio of the gap functions is used in the relation below. For the sake of simplicity, we put $\beta=1$ in the further analysis. The function $I(\omega)$ diverges as $\displaystyle{\pm Re[1/\sqrt{\omega-\Delta_b}]}$ for $\omega>\Delta_b$ and as $1/\sqrt{|\omega-\Delta_a|}$ for $\omega$ close to $\Delta_a$. The sign in front of the first singularity depends on the symmetry of the order parameter: "$-$" corresponds to $s_\pm$ superconductivity, while "$+$" corresponds to $s_{++}$ superconductivity. However, the sign in front of the second singularity does not depend on the superconducting order parameter symmetry. A mismatch of the bands creates a non-zero $\delta\mu$, which considerably changes the $\omega$-dependence of the response function. For very large values of $\delta\mu$, there is an additional dip at $\omega^* = \sqrt{(\Delta_a^2 + \Delta_b^2 + \delta\mu^2)^2 - 4\Delta_a^2 \Delta_b^2}/(2|\delta\mu|)$ at an energy greater than $\Delta_b$. The divergence for energies near $\Delta_b$ remains as $1/\sqrt{\omega-\Delta_a}$ for $\omega^*>\Delta_a$. The case of finite band ellipticity is considered below. \subsubsection{Finite ellipticity} For scattering between two electron bands, the essential role is played by the ellipticity of the electron bands, i.e. $\epsilon$. Here, we have three distinct cases: a) $|\epsilon| + |\delta\mu|<\Delta_b$, b) $|\epsilon| + |\delta\mu|>\Delta_b$ and $||\epsilon| - |\delta\mu||<\Delta_a$, c) $||\epsilon| - |\delta\mu||>\Delta_a$.
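The dip position $\omega^*$ quoted above is straightforward to evaluate; a sketch with illustrative numbers (the gap values used in the numerical section below and an assumed large mismatch $\delta\mu = 300\,cm^{-1}$):

```python
import numpy as np

def omega_star(d_a, d_b, d_mu):
    """Dip position for zero ellipticity and large band mismatch d_mu."""
    num = (d_a**2 + d_b**2 + d_mu**2) ** 2 - 4.0 * d_a**2 * d_b**2
    return np.sqrt(num) / (2.0 * abs(d_mu))

w_star = omega_star(17.0, 83.0, 300.0)
print(w_star)   # ~162 cm^-1, indeed above Delta_b
```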
For case a) one finds behaviour similar to the scattering between two hole bands, i.e. the appearance of a dip. In case b), in addition to $1/\sqrt{\omega-\Delta_b}$ and $1/\sqrt{\Delta_a-\omega}$, a new divergence $1/\sqrt{\omega-\omega_1}$ appears at $\omega_1 = \sqrt{(\Delta_a^2 + \Delta_b^2 + (|\delta\mu|+|\epsilon|)^2)^2 - 4\Delta_a^2 \Delta_b^2}/(2(|\delta\mu|+|\epsilon|))$. In case c) one additional divergence $1/\sqrt{\omega-\omega_2}$ occurs at $\omega_2 = \sqrt{(\Delta_a^2 + \Delta_b^2 + (|\delta\mu|-|\epsilon|)^2)^2 - 4\Delta_a^2 \Delta_b^2}/(2||\delta\mu|-|\epsilon||)$. \subsection{Scattering at $q= Q + \tilde q$} Now, we consider the quasiparticle interference due to interband scattering at the vector $\mathbf{\tilde q} = \mathbf{q}-\mathbf{Q}$. For small $\tilde q$ one can use the approximation $\xi_b(\mathbf{p}+\mathbf{q}) \approx \beta \xi_a(\mathbf{p}) + \epsilon \cos 2\theta + v_b \tilde q \cos(\theta - \phi) + \delta\mu$, where $\phi$ is the angle between the vectors $\tilde{\mathbf{q}}$ and $\mathbf{Q}$. The F-function in Eq.(\ref{eq.qpi.Q}) takes the form: \begin{widetext} \begin{eqnarray}\label{eq:15} F(\omega,\phi) = \left\langle \frac{\sqrt{|\beta|}\, Z_{a} E_a + \sqrt{|\beta|^{-1}}\, Z_{b} E_b}{\left(\sqrt{|\beta|}\, Z_{a} E_a + \sqrt{|\beta|^{-1}}\, Z_{b} E_b\right)^2 + |\beta|^{-1}\left(\epsilon \cos(2\theta) + v_b \tilde q \cos(\theta - \phi) + \delta\mu\right)^2} \right\rangle_{\!\theta}, \label{eq.F2} \end{eqnarray} \end{widetext} where $\langle ...\rangle_\theta$ denotes averaging over the angle. The integration over the angle can easily be performed in the two limits $\epsilon \gg \mathbf{v}_{b}\mathbf{\tilde{q}}$ (setting $\mathbf{v}_{b}\mathbf{\tilde{q}}=0$) and $\epsilon \ll \mathbf{v}_{b}\mathbf{\tilde{q}}$ (setting $\epsilon=0$). In the second limit we recover an expression similar to Eq.(\ref{eq.F1}) with the substitution $\epsilon \to \mathbf{v}_{b}\mathbf{\tilde{q}}$.
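The angular average in Eq.(\ref{eq:15}) can be taken numerically. The following sketch (with $Z_a=Z_b=\beta=1$ and real $E_{a,b}$ assumed for simplicity; all parameter values are illustrative) also verifies that for $\epsilon=0$ the average is independent of the angle $\phi$, since the integrand then depends on $\theta$ only through $\theta-\phi$:

```python
import numpy as np

def f_avg(e_a, e_b, eps, vq, d_mu, phi, n=20_000):
    """Angle average of the F-function (real E_a, E_b, Z = beta = 1)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    shift = eps * np.cos(2.0 * theta) + vq * np.cos(theta - phi) + d_mu
    integrand = (e_a + e_b) / ((e_a + e_b) ** 2 + shift**2)
    return integrand.mean()

e_a, e_b = 5.0, 40.0   # assumed values of E_a, E_b at some fixed omega
print(f_avg(e_a, e_b, 0.0, 30.0, 10.0, 0.0),
      f_avg(e_a, e_b, 0.0, 30.0, 10.0, np.pi / 3))   # equal when eps = 0
```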
\section{Numerical Analysis and Results} In the following, we apply the above general formulation to Fe-BS, using the electron-boson spectral function successfully used by Popovich et al.\cite{Popo2010} for thermal studies and by Charnukha et al.\cite{Charnukha2010} for optical conductivities in the description of BaKFeAs at optimal doping. According to \cite{Charnukha2010}, the original four-band model for $Ba_{1-x}K_x Fe_2 As_2$ can be reduced to an effective two-band model, where the first band is formed by the inner hole pocket with the gap $\Delta_a$, while the second band, with the gap $\Delta_b > \Delta_a$, consists of a combination of the two electron pockets and the outer hole pocket. Within this two-band model we calculate the response $I(\mathbf{q},\omega)$ at $\mathbf{q}$ values around the nesting vector $\mathbf{Q} = (\pi, \pi)$. \begin{figure} \includegraphics[scale=0.45]{fig2.eps} \caption{The response function I($\protect\omega$) for the $s_{++}$ and $s_{\pm}$ cases with the strong-coupling $\protect\lambda$-matrix defined as ($\protect\lambda_{aa}$=3, $\protect\lambda_{ab}$=${\pm}$0.2, $\protect\lambda_{ba}$=${\pm}$0.1, $\protect\lambda_{bb}$=0.5) and $T_c$ = 28 $cm^{-1}$.} \label{Fig.strongsppm} \end{figure} The model is studied first with $\epsilon=\delta\mu=0$ (the non-FeBS case); later in the paper we consider finite values of $\delta\mu$ and $\epsilon$, as is the case for pnictides. Hence, the model has broader implications for other high-$T_c$ superconductors. In this case, we have only two characteristic energy values, namely the gap energies $\Delta_a$ and $\Delta_b$. Our purpose is to identify certain peculiarities of the QPI response for the $s_{++}$ and $s_{\pm}$ pairing symmetries.
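As a consistency check of the sign convention for the coupling matrix (a weak-coupling sketch, not the Eliashberg solver used for the figures): in the linearized gap equation the relative sign of $\Delta_a$ and $\Delta_b$ follows the dominant eigenvector of the $\lambda$ matrix, so positive off-diagonal elements yield $s_{++}$ and negative ones $s_{\pm}$. The matrix entries below are the coupling constants quoted in the figure captions.

```python
import numpy as np

def gap_signs(lam):
    """Relative sign of the two gaps from the dominant eigenvector of lambda."""
    vals, vecs = np.linalg.eig(np.asarray(lam, dtype=float))
    v = vecs[:, np.argmax(vals.real)]          # leading-eigenvalue channel
    return float(np.sign((v[0] * v[1]).real))  # +1: s_++, -1: s_+-

print(gap_signs([[3.0, 0.2], [0.1, 0.5]]))    # +1.0 -> s_++
print(gap_signs([[3.0, -0.2], [-0.1, 0.5]]))  # -1.0 -> s_+-
```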
The resulting real-valued energy gaps in $N_\alpha(\omega)$, as discussed in Fig.~1, are $\Delta_{a}$ = 17 $cm^{-1}$ and $\Delta_{b}$ = 83 $cm^{-1}$ at T = 0, which gives a gap ratio $\Delta_{b}/\Delta_{a}$ = 4.82. In Fig.~2, we discuss the temperature evolution of the response function for $s_{++}$ and $s_{\pm}$ symmetry. First, at temperature T = 1, the QPI response vanishes for $\omega < \Delta_a$ for both $s_{++}$ and $s_{\pm}$ order parameters, since there are no excitations at energies below $\Delta_a$. In the whole temperature range, the response function for $s_{++}$ superconductivity is positive for all values of $\omega$, while in the $s_{\pm}$ case it is negative for energies around the smaller gap. As the temperature increases, the response for the $s_{\pm}$ symmetry turns positive at much lower energies, while in the $s_{++}$ case the response peak shows a gradual shift towards the energy interval between the two band gaps. To sum up, the main feature that helps us to distinguish between the response behaviour for the $s_{++}$ and $s_{\pm}$ symmetry cases is the robustness of the sign of the peaks near the small band gap $\Delta_{a}$ over a broad range of $T < T_{c}$. \begin{figure} \centering \subfloat[\(s_{++}\) symmetry]{{\includegraphics[width=0.45\linewidth]{tempspp.eps} }} \qquad \subfloat[\(s_{\pm}\) symmetry]{{\includegraphics[width=0.45\linewidth]{tempspm.eps} }} \caption{3D plot of the response function vs temperature at fixed energy $\protect\omega$ for the $s_{++}$ (upper) and $s_{\pm}$ (lower) cases, respectively. The coupling parameters are the same as used above and the transition temperature is $T_c$ = 28 $cm^{-1}$.} \label{Fig.respvstemp} \end{figure} Fig.~3 presents a 3D plot depicting the variation of $I(\omega)$ simultaneously with temperature T and energy $\omega$ for the case of perfect nesting, i.e. $\mathbf{q}=\mathbf{Q}$.
For $s_{++}$ symmetry, at low temperatures and $\omega \le \Delta_a$, the slice in the region $0<T<10$ shows a small sharp peak which dips smoothly as the temperature rises. Moving towards higher energies at low temperatures, the peak around the second band gap energy is very strong and decays much more slowly with rising temperature and energy than the first peak. In the $s_{\pm}$ case, by contrast, the first band peak differs: the response at low temperature and low energy is inverted (at $\omega\leq$ 20) and has a large magnitude. This is the main feature that recurs throughout our analysis. The peaks around the first band gap energy are a robust indication of the difference between the two symmetry cases, viz. $s_{++}$ and $s_{\pm}$. In the region of sub-gap energies and low temperatures, the $s_{++}$ response shows a negative gradient while the $s_{\pm}$ curve is almost flat and negative; for the same energies at high temperatures, the behaviour is similar for both symmetries and hence indistinguishable in this region. Beyond that, the graph shows a monotonically decreasing trend for both the $s_{++}$ and $s_{\pm}$ response functions and does not provide any distinguishing feature apart from the greater signal strength of the $s_{++}$ curve. As we move to higher temperatures, a bump in the response function arises, which is appreciably diffused and broadened compared to the ones at low temperatures. This behaviour of the response function is the same in both the $s_{++}$ and $s_{\pm}$ cases for $T>$ 25, as stated for Fig.~2. \begin{figure}[] \includegraphics[width=0.7\columnwidth]{fig3.eps} \caption{The QPI response function for the $s_{++}$ and $s_{\pm}$ cases at very strong couplings $\tilde{\protect\lambda}$, i.e.
$\protect\lambda_{aa}$=1, $\protect\lambda_{bb}$=6, $\protect\lambda_{ab}$=0.4, $\protect\lambda_{ba}$=0.2, with the transition temperature $T_c$ = 46 $cm^{-1}$.} \label{Fig.strongLambda} \end{figure} In Fig.~4, I($\omega$) is plotted vs energy $\omega$ at various temperatures with very strong coupling parameters $\tilde{\lambda}$ and a raised transition temperature, $T_c$ = 46. In the subgap region, for the $s_{++}$ case, we identify a peculiar behaviour of the response function (compare Fig.~2), as it goes to negative values and peaks just like the response for the $s_{\pm}$ case. In summary, for energies near the second band gap, the behaviour of the response function for both symmetry cases is indistinguishable apart from their relative strengths. However, we again observe that the response peaks near the smaller gap are a defining and distinguishing feature even for the very-strong-coupling case. \begin{figure}[] \includegraphics[width=0.45\columnwidth]{fig5.eps} \caption{The 2D plot of the QPI response function I($\protect\omega$) vs $\protect\omega$ and the momentum $\mathbf{q}$ for the strong-coupling case with $\protect\epsilon = \protect\delta\protect\mu$ = 0 at temperature T = 1. The values of the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.vfq2D} \end{figure} In the following, we present a study of the response function behaviour with respect to changes in parameters such as the ellipticity $\epsilon$ of the electron-like bands, the shifted Fermi energy $\delta\mu$ between the hole-like and the electron-like bands, and the experimentally tunable electron momentum parameter $v_b \tilde{q}$, which points in the radial direction of the electron band Fermi surface.
Here, $\mathbf{\tilde{q}}$ is tuned in order to obtain the correct matching condition for the shifted Fermi energy surface, as discussed later, and to study the response behaviour closer to the region of Fermi surface instability, as follows from Eq.(\ref{eq:15}). In Figs.~5 and 6, we plot in 2D and 3D the behavior of the $\mathbf{q}$-resolved response function for both symmetry cases, varying the electron-like quasiparticle momentum $\mathbf{\tilde{q}}$ using Eq.(\ref{eq:15}) and setting the ellipticity and surface energy to zero. We also assume that the momentum vector $\tilde{\mathbf{q}}$ is directed along ${\mathbf{Q}}$, and hence the angle $\phi$ = 0. The finite value of $\mathbf{\tilde{q}}$ relates to the fact that we are probing the Fermi surface of the electron-like band pocket. The condition $||\epsilon| - |\delta\mu||< \Delta_a$ is satisfied in this case. For the peak near the larger band gap energy, the amplitude and the sign of the peak are robust and distinguishing features. \begin{figure} \centering \subfloat[\(s_{++}\) symmetry]{{\includegraphics[width=0.45\columnwidth]{vfqspp.eps} }} \qquad \subfloat[\(s_{\pm}\) symmetry]{{\includegraphics[width=0.45\columnwidth]{vfqspm.eps} }} \caption{The 3D plot of the QPI response function I($\protect\omega$) vs $\protect\omega$ and momentum $\mathbf{q}$ with zero ellipticity $\protect\epsilon$=0 and zero shifted Fermi surface energy $\protect\delta\protect\mu$=0 for the strong-coupling case at temperature T = 1. The values of the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.eps4} \end{figure} We see that the energy dependence of the response function at finite $\mathbf{\tilde{q}}$ shows three peaks. Two of these are momentum independent and correspond to the gaps $\Delta_a$ and $\Delta_b$, while the third peak has a strong $\mathbf{\tilde{q}}$ dependence.
The strong difference between the $s_{++}$ and $s_{\pm}$ symmetries is seen only for the first peak, at the energy of the small gap. For the $s_{\pm}$ order parameter the response function at $\omega = \Delta_a$ is negative, while for $s_{++}$ it is positive. This leads to the conclusion that, to determine the symmetry of the order parameter, one has to consider the response function at momenta close to the nesting vector $\mathbf{Q}$ and find the momentum-independent peaks. The smallest of these peaks determines the symmetry of the order parameter. The QPI response at energies close to the second gap $\Delta_b$, shown in Fig.~2 for small $\mathbf{\tilde{q}}$ (i.e. $\mathbf{q} - \mathbf{Q}$), has the opposite sign compared to the results presented by Hirschfeld et al.\cite{Hirschfeld15}, obtained with a similar model in the weak-coupling regime. The results presented in Figs.~5 and 6 clearly demonstrate that with increasing $\tilde{\mathbf{q}}$, the sign of the second peak reverses. In this respect, our results do not contradict those of Ref.~\cite{Hirschfeld15}, where the $\mathbf{q}$-integrated response function was presented and is dominated by large $\mathbf{q}$ values. Moreover, our $\mathbf{q}$-resolved results provide more information about the QPI response behaviour. In particular, for non-zero ellipticity $\epsilon$ or non-zero chemical potential shift $\delta\mu$, we obtain an additional mode at energies above $\Delta_b$, as shown in Figs.~(5-11). Hence, we again argue that the peak near the first band gap energy, i.e. $\omega \approx \Delta_a(\omega)$, is the only strong distinguishing feature for phase-sensitive measurements of the gap symmetry.
\begin{figure} \includegraphics[width=0.45\columnwidth]{fig5b.eps} \caption{The 2D plot of the QPI response function I($\protect\omega$) vs $\protect\omega$ and momentum $\mathbf{q}$ for the strong-coupling case, with $\Delta_a = \Delta_b$ and $\protect\epsilon = \protect\delta\protect\mu$ = 0, at temperature T = 1; the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.5vfq2D} \end{figure} So far, we explored the region around the nesting vector $\mathbf{Q}=(\pi,\pi)$ with scattering from the smaller/inner hole-like band to the outer/larger averaged electron-like band. Now, we focus on the scattering of quasiparticles from the electron-like band to the outer hole-like band with the larger gap value, i.e. $\tilde{\Delta}_{a2}(\omega) \rightarrow \tilde{\Delta}_{b1/b2}(\omega)$. In Fig.~7, we plot the response function for various values of the electron-like quasiparticle momentum $\mathbf{\tilde{q}}$ over the full energy range of $\omega$ with equal band gap functions. For this, we modify Eq.(\ref{eq:13}) by the substitution of the full gap function $\tilde{\Delta}_a(\omega) \rightarrow \tilde{\Delta}_b(\omega)$, i.e. we replace the inner hole band gap function by the outer/larger hole band gap function; correspondingly, we also replace the renormalization functions, i.e. $Z_a \rightarrow Z_b$, and the related densities of states. For $s_{++}$ symmetry, we find that the response function for energies $\omega < \Delta_b$ is zero over a large range, becoming non-zero only at $\omega = 75\,cm^{-1}$ and remaining positive afterwards. This is in contrast to the behaviour of the response function given in Fig.~5 for the same symmetry, where the function goes through zero towards the negative peak situated near the larger gap energy, i.e. $\Delta_b$.
Only for $\mathbf{\tilde{q}}=0$ do we have a response function that stays positive over the full energy range. At energies $\omega \ge \Delta_b$, we observe that the response function peaks are shifted towards higher energies with increasing $\mathbf{\tilde{q}}$ in both figures. However, in Fig.~7, for the $s_{++}$ case, there are only single positive peaks, i.e. only a single mode, for all $\mathbf{\tilde{q}}$. In the $s_{\pm}$ case, as depicted in Fig.~7, the response function amplitude has a very large value, in fact an order of magnitude larger than in the $s_{++}$ case in the same figure, and also in comparison to the response amplitudes in Fig.~5 for both the $s_{++}$ and $s_{\pm}$ symmetry cases. The reason for such behaviour is the contribution of the divergent term $1/(\Delta_b-\omega)$ in the coherence factor $K(\omega)$ for the $s_{\pm}$ case, instead of a constant scalar multiple for the $s_{++}$ case (see Eq.(\ref{eq:13})). In the region $\omega \approx \Delta_b$, there is a large negative peak of the response function. At $\omega > \Delta_b$, both the graphs in the upper and lower panels of Fig.~7 are qualitatively similar for increasing values of $\mathbf{\tilde{q}}$, along with the presence of an additional mode, which is shifted towards higher values of $\omega$ in all cases without exception. Although a difference between the two symmetries is present at $\omega \approx \Delta_b$ for this scattering, it exists only within a very narrow energy range. Hence, we shall confine the study to the previous case of quasiparticle scattering between the smaller/inner hole-like band and the gap-averaged electron-like bands to study QPI. In the following, we emphasize that this robustness of the QPI response peak, with respect to various parameters, provides an ideal tool to probe the order parameter phase symmetry.
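The divergence of the coherence factor for equal gap magnitudes, invoked above, can be checked directly from Eq.(\ref{eq:13}); a sketch with real gaps and $Z=1$ assumed (gap value and broadening are illustrative numbers). For $\Delta_a=\Delta_b=\Delta$ the $s_{++}$ coherence factor reduces to the constant $-2$, while the $s_{\pm}$ one becomes $-2\omega^2/(\omega^2-\Delta^2)$, which diverges as $\omega\to\Delta$:

```python
import numpy as np

DELTA, ETA = 83.0, 0.1   # assumed gap magnitude and broadening (cm^-1)

def coherence_factor(omega, sign):
    """K(omega) for equal gap magnitudes; sign = +1 (s_++) or -1 (s_+-)."""
    w = omega + 1j * ETA
    e2 = w**2 - DELTA**2            # E_a * E_b for equal gap magnitudes
    return (sign * DELTA**2 - w**2) / e2 - 1.0

print(coherence_factor(120.0, +1))   # -2: s_++ factor is constant
print(abs(coherence_factor(83.5, -1)) > abs(coherence_factor(120.0, -1)))
```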
\begin{figure}[] \includegraphics[width=0.45\columnwidth]{muvfq.eps} \caption{The 2D plot of the QPI response function I($\protect\omega$) vs $\protect\omega$, momentum $\mathbf{q}$ and the shifted Fermi surface energy $\protect\delta\protect\mu$ for the strong-coupling case at temperature T = 1. In the inset, the dependence of $|I(\mathbf{q-Q})|$ is shown at a fixed energy close to the smaller gap, i.e. $\protect\omega \approx$ 18 $cm^{-1}$. The values of the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.Muvfqa} \end{figure} In Fig.~8, the graph depicts the behaviour of the QPI response function for very large shifted Fermi surface energy, i.e. $\delta\mu$ = 300, and the comparison with the case of zero $\delta\mu$ and non-zero $\mathbf{v}_{b}\mathbf{\tilde{q}}$, for both symmetry cases. The behaviour with $\mathbf{v}_{b}\mathbf{\tilde{q}}$ is shown by dashed curves as the momentum vector $\mathbf{\tilde{q}}$ varies from small to large values and connects the two order parameters on the Fermi surfaces when it is of the order of $\pi$. The black curve shows the behaviour of the response function for zero momentum and large shifted Fermi surface energy. The red dashed curve, for zero $\delta\mu$ and large $\mathbf{v}_{b}\mathbf{\tilde{q}}$, shows the difference between the two cases, with a shifting of the peak that arises for $\omega > \Delta_b$. For equal values of both parameters, the behaviour is depicted by the blue dotted curve, where the inverted peaks near the first and second band gap energies are almost equal in magnitude. Finally, the green curve shows the case of very large electron-like quasiparticle momentum in comparison to the shifted Fermi surface energy, with the shifted peak being highly dispersed.
The value of $\omega^*$, calculated through the relation $\omega^* = \sqrt{(\Delta_a^2 + \Delta_b^2 + \delta\mu^2)^2 - 4\Delta_a^2 \Delta_b^2}/(2|\delta\mu|)$ for the case $\delta\mu > \Delta_b$, is 162.03 $cm^{-1}$. As stated previously, the most robust feature is the peak of the response function around the first band gap energy, whose sign-reversing behaviour does not change with the parameters $\delta\mu$, $\epsilon$ or $\mathbf{\tilde{q}}$ in Eq.(\ref{eq:15}). Hence, this characteristic of the QPI response function presents itself as a very useful feature for probing the order parameter symmetry between the $s_{++}$ and $s_{\pm}$ cases, via c-axis measurements from FT-STM studies. The inset in the upper panel of Fig.~8 depicts the strong dependence of the magnitude of the peak on the parameter $\mathbf{\tilde{q}}$. For the perfect nesting case, i.e. $\mathbf{q} = \mathbf{Q}$, we observe the maximum of the response function magnitude. For fixed $\delta\mu$ and an energy chosen near $\Delta_a$, we let the experimentally tunable parameter $\mathbf{\tilde{q}}$ start at zero and scan over larger values. The peaks of $|I(\mathbf{\tilde{q}})|$ in both symmetry cases emerge for some optimal value of the momentum, i.e. when $\mathbf{\tilde{q}}$ becomes of the order of $\delta\mu$ (in accordance with Eq.(\ref{eq:15})). At small values of $\mathbf{\tilde{q}}$, the magnitude of the peaks is quite small; hence, to observe this experimentally, one needs to match the large value of $\mathbf{\tilde{q}}$ to $\delta\mu$ in order to sample such behaviour correctly. \section{Summary \& Conclusion} We have analysed the problem of identifying the order parameter symmetry of the Fe-based superconductors via QPI measurements. For this purpose, we have developed a theory of quasiparticle interference in multiband superconductors based on the strong-coupling Eliashberg approach.
In the particular case of a two-band system, we consider two possible pairing symmetries: the $s_{\pm}$ state, where the sign of the order parameter changes between the hole and the electron bands, and the more conventional $s_{++}$ state. The obtained results confirm the concept that QPI is a phase-sensitive technique and may help to determine the pairing symmetry in Fe-based superconductors; in general, it could be applicable to other multiband superconductors as well. We calculate energy, temperature and momentum dependencies of the QPI response and point out qualitative differences between the responses in the $s_{\pm}$ and $s_{++}$ cases. Application of the Eliashberg approach allows one to take into account self-consistent retardation effects due to strong coupling and to properly describe the temperature dependence of the QPI response function at various energies. Further, we have analyzed various regimes of Fermi surface anisotropy by taking into account the influence of the Fermi surface ellipticity. We argued from the analysis that, in general, for $\mathbf{q} \approx \mathbf{Q}$, there are three singularities of the response function. Two of these are momentum independent (or weakly momentum dependent), at $\omega \approx \Delta_{a,b}(\omega)$, while the third has a strong momentum dependence. Only the momentum-independent (weakly momentum-dependent) peak corresponding to the lowest gap value $\Delta_a$ may serve as a universal probe of the gap symmetry in multiband superconductors. We emphasize that our analysis presents a convincing case in favour of QPI measurements as a phase-sensitive test of the gap symmetry for the FeBS. This conclusion is based on the robustness of the response function peak near the smaller gap energy and is independent of the exact nature or shape of the energy bands. \section{Acknowledgements} We acknowledge useful discussions with P. Hirschfeld, I.I. Mazin, D. Morr and Y. Tanaka.
This work was financially supported by the Foundation for Fundamental Research on Matter (FOM), associated with the Netherlands Organization for Scientific Research (NWO), by the Russian Science Foundation, Project No. 15-12-30030, and by the Ministry of Education and Science of the Russian Federation, grant 14.Y26.31.0007. DE acknowledges DFG financial support through the grant GR 3330/4-1 and financial support of the VW foundation through the grant ``\textit{Synthesis, theoretical examination and experimental investigation of emergent iron-based superconductors}''. \pagebreak \section{Appendix} Here, we show the 2D and 3D graphs of the variation of the response function with the shifted Fermi surface energy $\delta\mu$ versus the energy $\omega$, at electronic band ellipticity $\epsilon = 0$, for both the $s_{++}$ and $s_{\pm}$ cases, as discussed in the main text in Sec. III(B). First, in Fig. 9, the trend of the response function at zero ellipticity is presented. The response curve near the second band gap energy has a sharp small negative peak and a broadened secondary peak as the $\delta\mu$ values increase. The second peak shifts away from $\Delta_b$ with larger values of the shifted Fermi energy between the electron-like and hole-like pockets, and for very large $\delta\mu$ the two lower peaks become comparable in strength. The positive peak in the same energy interval also shifts towards $\omega > \Delta_b$ and flattens out at very high $\delta\mu$ values. Here, again, we observe that the peaks around the smaller band gap are a robust feature with respect to the variation of the parameters. \begin{figure}[] \includegraphics[width=0.45\columnwidth]{eps0.eps} \caption{The 2D plot of the QPI response function I($\protect\omega$) vs $\protect\omega$ and the shifted Fermi surface energy $\protect\delta\protect\mu$ for the strong coupling case at temperature T = 1, at ellipticity $\protect\epsilon = 0$ and momentum $\tilde{q} = 0$.
The values of the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.eps0} \end{figure} The 3D graph in Fig. 10 shows the change of the response function as we move from $\omega < \Delta_a$ to the region $\omega > \Delta_b$. The response function acquires an inverted peak near the second band gap energy in both cases, and there is a secondary dip that shifts towards higher energy with increasing shifted Fermi surface energy. A shift of the second peak at $\omega > \Delta_b$ is observed. The amplitude of the QPI response is nearly the same in both cases with strong coupling around the region $\omega = \Delta_a$ for the $\epsilon = 0$ case, as compared to Fig. 2. For higher energies and larger chemical potential, apart from strong peaks, there is no other distinguishing feature between the two cases except for the QPI peak around the smaller band gap, $\Delta_a$. \begin{figure} \centering \subfloat[\(s_{++}\) symmetry]{{\includegraphics[width=0.45\columnwidth]{eps0spp.eps} }} \qquad \subfloat[\(s_{\pm}\) symmetry]{{\includegraphics[width=0.45\columnwidth]{eps0spm.eps} }} \caption{The 3D plot of the QPI response function I($\protect\omega,\protect\delta\protect\mu$) vs $\protect\omega$ and the non-zero Fermi surface energy $\protect\delta\protect\mu$ at zero ellipticity for the strong coupling case at temperature T = 1 for $s_{++}$ and $s_{\pm}$. There is a large amplitude of the response function in the region $\protect\delta\protect\mu = [100,200]$ for the latter case.
The values of the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.eps1} \end{figure} The effect of shifting the relative Fermi surface energy to a non-zero value is a rather strong suppression of the second response peak in the $s_{++}$ case as compared to the $s_{\pm}$ case in the region $\omega\approx\Delta_b$, relative to the finite-ellipticity case discussed below. \begin{figure}[] \includegraphics[width=0.45\columnwidth]{mu0.eps} \caption{The 2D plot of the QPI response function I($\protect\omega$) vs $\protect\omega$ and the ellipticity $\protect\epsilon$ for the strong coupling case with shifted Fermi surface energy $\protect\delta\protect\mu = 0$ and momentum $\tilde{q} = 0$, at temperature T = 1. The values of the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.mu0} \end{figure} In Figs. 11 and 12, we present the change of the response function with variation of the band ellipticity $\epsilon$, as in Eq.(\ref{eq:14}), setting the shift in Fermi surface energy $\delta\mu = 0$, with 2D and 3D graphs. Larger ellipticity values lead to the inversion of the peak around the second band gap, which reaches its maximum value around $\epsilon = 200$; thereafter the overall amplitude drops, with the positive peak damping strongly and shifting towards higher $\omega$ values. The peaks near the first band gap energy are unaltered by the change of the ellipticity and hence present a strong case for probing the gap symmetry by QPI experiments.
\begin{figure} \centering \subfloat[\(s_{++}\) symmetry]{{\includegraphics[width=0.45\columnwidth]{mu0spp.eps} }} \qquad \subfloat[\(s_{\pm}\) symmetry]{{\includegraphics[width=0.45\columnwidth]{mu0spm.eps} }} \caption{The 3D plot of the QPI response function I($\protect\omega$) vs $\protect\omega$ and the ellipticity $\protect\epsilon$ for the strong coupling case with shifted Fermi surface energy $\protect\delta\protect\mu = 0$ and momentum $\tilde{q} = 0$, at temperature T = 1. The values of the coupling constants are $\protect\lambda_{aa}=0.5$, $\protect\lambda_{ab}=0.2$, $\protect\lambda_{ba}=0.1$, $\protect\lambda_{bb}=3$.} \label{Fig.mu1} \end{figure} Additionally, for energies close to the second band gap energy and for large $\epsilon$, the response function is negatively peaked in both cases, with the strongest peak around $\epsilon = 200$ and very strong damping at very high ellipticity values. In both cases, we observe the shift and strong suppression of the positive peak towards energies $\omega > \Delta_b$, while the negative response peak falls off slowly without shifting. This again confirms our assertion that the smaller band gap peak is a promising feature that could be used as a universal tool for pairing symmetry measurements.
\section{Introduction} The exploration of the hadron spectrum is a central issue in nonperturbative QCD. Charmonium is a most suitable system to study, owing to its non-relativistic features and the large amount of data accumulated in experiments. It provides fruitful information on the properties of the strong interaction. As is well known, the $q\bar q$ interaction is described well in terms of potentials including a color Coulombic $\sim 1/r$ potential, a confinement potential and some small corrections. However, the exact form of the strong interaction between quark and anti-quark in hadrons is not known, and neither are the nature of confinement and the relation of the potentials to QCD. All these properties are expected to be detectable from the hadron spectrum. Furthermore, the study of the charmonium spectrum will be helpful both for identifying observed states and for finding new states. Since the discovery of the $J/\psi$, many states have been discovered and identified in the charmonium family. In particular, some new charmonium or charmonium-like states have lately been observed. This recent experimental progress has stimulated great interest in this important field once again. The $h_c(^1P_1)$ was identified by CLEO\cite{cleo1} in the isospin-violating reaction \begin{eqnarray*} e^+e^-\to\psi(2s)\to\pi^0h_c,h_c\to\gamma\eta_c. \end{eqnarray*} The $X(3872)$ was first observed by Belle\cite{belle} in exclusive B decays \begin{eqnarray*} B^\pm\to K^\pm X(3872),X(3872)\to\pi^+\pi^-J/\psi \end{eqnarray*} with $M=3872.0\pm 0.6(stat)\pm 0.5(syst)$ MeV and $\Gamma<2.3$ MeV (90\% C.L.). This state was then confirmed by CDF II\cite{cdf}, D0\cite{d0} and BaBar\cite{babar1}. The $Y(3940)$ was observed by Belle\cite{belle1} in exclusive B decays \begin{eqnarray*} B\to KY(3940), Y(3940)\to\omega J/\psi. \end{eqnarray*} If this enhancement is treated as an S-wave Breit-Wigner resonance, its mass and total width are $M=3943\pm 11\pm 13$ MeV and $\Gamma=87\pm 22\pm 26$ MeV.
The $X(3940)$ was observed by Belle\cite{belle2} in \begin{eqnarray*} e^+e^-\to J/\psi X(3940), X(3940)\to D^\star\bar D \end{eqnarray*} with $M=3943\pm 6\pm 6$ MeV and $\Gamma<52$ MeV at $90\%$ C.L. The $Y(4260)$ was observed by BaBar\cite{babar2} in initial-state radiation events, \begin{eqnarray*} e^+e^-\to\gamma_{ISR}Y(4260), Y(4260)\to\pi^+\pi^-J/\psi \end{eqnarray*} with $M\sim 4.26$ GeV and $\Gamma\sim 90$ MeV. This $\pi^+\pi^-J/\psi$ state was recently confirmed by the CLEO collaboration\cite{cleo2}. The channels $Y(4260)\to\pi^0\pi^0J/\psi$ and $Y(4260)\to K^+K^-J/\psi$ have also been observed in their study. The $Z(3930)$ was observed in the process $\gamma\gamma\to D\bar D$ by the Belle collaboration\cite{belle3} with $M=3929\pm 5\pm 2$ MeV and $\Gamma=29\pm 10\pm 2$ MeV. The states $X(3940)$, $Y(3940)$ and $Z(3930)$ have so far been observed in only a single experiment each and require confirmation by further experiments. There have been many interpretations of and suggestions for these new states since the first announcement of the $X(3872)$. For example, interpretations of the $X(3872)$ can be found in \cite{x1,x2,x3,x4,x5,x6,x7,x8,x9,x10,x11} and interpretations of the $Y(4260)$ in \cite{y1,y2,y3,y4,y5,y6,y7}. Some systematic analyses of these newly observed states can be found in \cite{charmonium1,charmonium2,charmonium3}. Owing to space limitations, only part of the literature is listed here. The interpretations include conventional charmonium assignments as well as exotic arrangements outside the $q\bar q$ framework, such as molecular states, tetraquark states, hybrids, or mixtures among them. However, the identification of these states, especially the $X(3872)$, is still an open topic. In order to identify all these states, it is necessary to understand both conventional and exotic hadrons well, which is currently far from being the case.
In view of this complexity, we will study the spectrum of these states within the charmonium framework, while putting aside their exotic interpretations and complicated production and decay properties. The hadron spectrum has been studied phenomenologically with Regge trajectory theory for a long time. Regge trajectory theory indicates a relation between the square of the hadron masses and the spin of the hadrons. A Regge trajectory is a line in a Chew-Frautschi\cite{chew} plot representing the spin of the lightest particles versus their mass squared, t: \begin{eqnarray} \alpha(t)=\alpha(0)+\alpha^\prime t \end{eqnarray} where the intercept $\alpha(0)$ and slope $\alpha^\prime$ depend weakly on the flavor content of the states lying on the corresponding trajectory. For light quark mesons, $ \alpha^\prime\approx 0.9~GeV^{-2}$. For radially excited light $q\bar q$ mesons, the trajectory on $(n,M^2)$-plots is described by\cite{ani} \begin{eqnarray}\label{nm} M^2=M^2_0+(n-1)\mu^2, \end{eqnarray} where $M_0$ is the mass of the basic meson, n is the radial quantum number, and $\mu^2$ (approximately the same for all trajectories) is the slope parameter of the trajectory. The behaviors of Regge trajectories in different systems, which indicate that a Regge trajectory is approximately linear while different trajectories are approximately parallel, have been studied phenomenologically in many works. Regge trajectories with neighboring mesons (opposite $PC$) stepped by $1$ in $J$ were first found to be linear and parallel, but were subsequently found to deviate from linearity and parallelism \cite{nonlinear1,nonlinear2}. In this case, the exchange degeneracy does not apply well. Phenomenologically, the exact deviation depends on the particular family of mesons, baryons, glueballs or hybrids and on the energy regime. Theoretically, the non-linearity and non-parallelism of Regge trajectories result from intrinsic quark-gluon dynamics which may be flavor and $J$ dependent.
Some detailed studies of Regge trajectories can be found in more fundamental theories\cite{string}. In fact, once Regge trajectories with neighboring mesons (same $PC$) stepped by $2$ in $J$ are considered, the linearity and parallelism of these trajectories hold well \cite{ani,nonlinear1,zhangal}, which means that the exchange degeneracy applies. Hadron spectroscopy has also been explored in many other models\cite{model,model1,model2,model3,model4,model5} based on QCD. In these models, the spectrum of charmonium has been excellently reproduced owing to the nonrelativistic features of this system. An interesting conclusion is that some hyperfine splitting relations are predicted to exist among the members of a multiplet in potential models\cite{potential,potential1,potential2}. The S-wave hyperfine splitting (spin-triplet and spin-singlet splitting), $\Delta M_{hf}(nS)= M(n^3S_1)-M(n^1S_0)$, is predicted to be finite. For the experimentally observed $M(\psi)$ and $M(\eta_c)$\cite{splitting}, \begin{eqnarray*} \Delta M_{hf}(1S)&=&M(J/\psi)-M(\eta_c)\simeq 115\pm 2~MeV, \\ \Delta M_{hf}(2S)&=&M(\psi(2S))-M(\eta_c(2S))\simeq 43\pm 3~MeV. \end{eqnarray*} The hyperfine splitting of P-wave or higher L states is predicted to be zero \begin{eqnarray} \Delta M_{hf}(1P)=<M(1^3P_J)>-M(1^1P_1)\approx 0,\\\nonumber \Delta M_{hf}(1D)=<M(1^3D_J)>-M(1^1D_2)\approx 0, \end{eqnarray} where the deviation from zero is no more than a few MeV. Though the exact form of the potentials may differ among potential models, these theoretical predictions are the same. Most importantly, the relation in the $1P$ charmonium multiplet has been verified to hold to a high degree of accuracy\cite{pdg}. In this paper, these relations in the $1P$ and $1D$ multiplets will be used as facts (or assumptions). In the constituent quark model, $q\bar q$ mesons can be labeled by their quantum numbers, $n^{2S+1}L_J$.
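The near-vanishing $1P$ hyperfine splitting stated above can be checked directly; the short script below is an illustrative sketch (not part of the original analysis) that computes the $(2J+1)$-weighted centre of gravity of the $1^3P_J$ states from the PDG charmonium masses quoted in this paper:

```python
# Numerical check of the 1P hyperfine-splitting relation:
# Delta M_hf(1P) = <M(1^3P_J)> - M(1^1P_1), masses in MeV from the text.
# The spin-averaged triplet mass weights each 1^3P_J state by (2J+1).
masses_3PJ = {0: 3415.2, 1: 3510.6, 2: 3556.3}  # chi_c0, chi_c1, chi_c2
m_hc = 3526.2                                   # 1^1P_1 (h_c)

cog = sum((2 * J + 1) * m for J, m in masses_3PJ.items()) / 9.0
delta_hf_1P = cog - m_hc

print(f"<M(1^3P_J)> = {cog:.1f} MeV, Delta M_hf(1P) = {delta_hf_1P:.1f} MeV")
```

The splitting comes out below 1 MeV in magnitude, consistent with the "no more than a few MeV" statement.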
For quarkonia, the quantum numbers $PC$ are determined by $P=(-1)^{L+1}$ and $C=(-1)^{L+S}$. From the PDG\cite{pdg}, we obtain table~\ref{table-1} for charmonium mesons without radial excitation. In this table, entries in the first column are observed states; entries under $J^{PC}$, $n^{2S+1}L_J$ and mass are assignments confirmed or favored by theoretical analyses based on experiments. Entries in the last column give information from the PDG, and the states marked with a "?" are those not confirmed and omitted from the summary table. The mass of the recently identified $h_c(1P)$ from CLEO\cite{cleo1} ($M=3524.4\pm 0.6\pm 0.4$ MeV) is not entered in the table. \begin{table} \begin{tabular}{lllllll} States & $J^{PC}$ & $n^{2S+1}L_J$ & Mass(MeV) & Note\\ \hline\hline $\eta_c(1S)$ & $0^{-+}$ & $1^1S_0$ & 2979.6 & PDG \\ $J/\psi(1S)$ & $1^{--}$ & $1^3S_1$ & 3096.9 & PDG \\ \hline\hline $\chi_{c0}(1P)$& $0^{++}$ & $1^3P_0$ & 3415.2 & PDG \\ $\chi_{c1}(1P)$& $1^{++}$ & $1^3P_1$ & 3510.6 & PDG\\ $h_c$(1P)& $1^{+-}$ & $1^1P_1$ & 3526.2 & PDG ($J^{PC}=?^{??}$)\\ $\chi_{c2}(1P)$& $2^{++}$ & $1^3P_2$ & 3556.3 & PDG\\ \hline\hline $\psi(3770)$& $1^{--}$ & $1^3D_1$ & 3769.9 & PDG \\ $\psi(3836)$ & $2^{--}$ & $1^3D_2$ & $3836\pm 13$ & PDG (?) \\ ? & $2^{-+}$ & $1^1D_2$ & ? & ? \\ ? & $3^{--}$ & $1^3D_3$ & X(3872)? & $\times$~this work \\ \hline\hline ? & $2^{++}$ & $1^3F_2$ & ? & ? \\ ? & $3^{++}$ & $1^3F_3$ & ? & ? \\ ? & $3^{+-}$ & $1^1F_3$ & ? & ? \\ ? & $4^{++}$ & $1^3F_4$ & ? & ? \\ \hline\hline \end{tabular} \caption{Spectrum of charmonium without radial excitation.} \label{table-1} \end{table} Except for the $1^{--}$ $n^3S_1$ states, no excited charmonium has been definitely identified so far. From the PDG and some recent suggestions, we obtain table~\ref{table-2}. In the table, $Z(3930)$ was suggested as the $\chi_{c2}(2P)$\cite{belle3}, and $Y(3940)$ was suggested as $3^1S_0$\cite{charmonium2} or $2^3P_0$\cite{gershtein}.
\begin{table} \begin{tabular}{lllllll} States & $J^{PC}$ & $n^{2S+1}L_J$ & Mass(MeV) & Note\\ \hline\hline $\eta_c(1S)$ & $0^{-+}$ & $1^1S_0$ & 2979.6 & PDG\\ $\eta_c(2S)$ & $0^{-+}$ & $2^1S_0$ & $3654\pm 6\pm 8$& PDG(?) \\ $\eta_c(3S)$ & $0^{-+}$ & $3^1S_0$ & Y(3940)? & \cite{charmonium2} \\ \hline\hline $J/\psi(1S)$& $1^{--}$ & $1^3S_1$ & 3096.9 & PDG \\ $\psi$(2S)& $1^{--}$ & $2^3S_1$ & 3686.1 & PDG \\ $\psi(4040)$& $1^{--}$ & $3^3S_1$ & $4040\pm 10$ & PDG \\ $\psi(4415)$& $1^{--}$ & $4^3S_1$ & $4415\pm 6$ & PDG \\ \hline\hline $\chi_{c0}(1P)$& $0^{++}$ & $1^3P_0$ & 3415.2 & PDG\\ $\chi_{c0}(2P)$& $0^{++}$ & $2^3P_0$ & Y(3940)? & \cite{gershtein}\\ \hline\hline $\chi_{c1}(1P)$& $1^{++}$ & $1^3P_1$ & 3510.6 & PDG\\ $\chi_{c1}(2P)$& $1^{++}$ & $2^3P_1$ & X(3872)? & ?\\ \hline\hline $h_c(1P)$& $1^{+-}$ & $1^1P_1$ & 3526.2 & PDG ($J^{PC}=?^{??}$)\\ $h_c(2P)$& $1^{+-}$ & $2^1P_1$ & ? & ?\\ \hline\hline$\chi_{c2}(1P)$& $2^{++}$ & $1^3P_2$ & 3556.3 & PDG \\ $\chi_{c2}(2P)$ & $2^{++}$ & $2^3P_2$ & Z(3930) & \cite{belle3} \\ \hline\hline $\psi(3770)$& $1^{--}$ & $1^3D_1$ & 3770 & PDG \\ $\psi(4160)$ & $1^{--}$ & $2^3D_1$ & $4159\pm 20$ & PDG \\ Y(4260) & $1^{--}$ & $3^3D_1$ & $4260?$ & ? \\ \hline\hline \end{tabular} \caption{Spectrum of charmonium with different radial quantum numbers n.} \label{table-2} \end{table} After filling in these two tables, we first study the properties of the relevant Regge trajectories. For states without radial excitation, the states in each group below form a trajectory \begin{eqnarray*} 0^{-+} ~(^1S_0), ~~1^{+-} ~(^1P_1), ~~2^{-+} ~(^1D_2), ~~\cdots ,\\ 1^{--} ~(^3S_1), ~~2^{++} ~(^3P_2), ~~3^{--} ~(^3D_3), ~~\cdots ,\\ 0^{++} ~(^3P_0), ~~1^{--} ~(^3D_1), ~~2^{++} ~(^3F_2), ~~\cdots ,\\ 1^{++} ~(^3P_1), ~~2^{--} ~(^3D_2), ~~3^{++} ~(^3F_3), ~~\cdots . \end{eqnarray*} These trajectories were also analyzed in a recent work\cite{gershtein}. The highly excited states appearing in these trajectories have not been observed.
In terms of the first two states in each trajectory, their rough slopes $\alpha^\prime$ are determined to be \begin{eqnarray*} 0.282, ~0.327, ~0.392, ~0.419~GeV^{-2}, \end{eqnarray*} respectively. The slopes of these trajectories increase slowly. Obviously, these trajectories are not strictly parallel, and the exchange degeneracy does not apply well. These Regge trajectories indeed fan out, as pointed out in reference\cite{nonlinear1}. For states with radial excitation, the situation is less satisfactory owing to the lack of experimental data. At present, there is not enough data to make a trajectory even with neighboring mesons stepped by $1$ in $J$. However, if more states are pinned down, they will form some new Regge trajectories. Once the $Z(3930)$ is confirmed as the $2^{++}$ $\chi_{c2}(2P)$\cite{belle3}, the $1^{--}$ $2^3S_1$ and the $2^{++}$ $2^3P_2$ will form an excited trajectory. The slope $\alpha^\prime=0.540~GeV^{-2}$ is larger than the corresponding one ($\alpha^\prime=0.327~GeV^{-2}$) without radial excitation. If the suggestion for $Y(3940)$ in \cite{gershtein} is right, the $0^{++}$ $2^3P_0$ and the $1^{--}$ $2^3D_1$ will form an excited trajectory with slope $\alpha^\prime=0.564~GeV^{-2}$, which is also larger than the corresponding one ($\alpha^\prime=0.392~GeV^{-2}$) without radial excitation. The favored quantum numbers for the $X(3872)$ are now believed to be $J^{PC}=1^{++}$ or $2^{-+}$\cite{cdf2}. If the $X(3872)$ is the candidate for the $2^{-+}$ $2^1D_2$ state, the $0^{-+}$ $2^1S_0$ and the $2^{-+}$ $2^1D_2$ will form an excited Regge trajectory with slope $\alpha^\prime=1.219~GeV^{-2}$. If the $X(3872)$ is the radially excited $1^{++}$ $2^3P_1$ charmonium, it forms another trajectory with the unknown $2^{--}$ $2^3D_2$.
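The slopes quoted above follow directly from the masses in table~\ref{table-1}: for two states on a trajectory stepped by $1$ in $J$, $\alpha^\prime = \Delta J/\Delta M^2 = 1/(M_2^2-M_1^2)$. The sketch below (illustrative only, with the table masses hard-coded in GeV) reproduces them:

```python
# Trajectory slopes from pairs of neighboring states stepped by 1 in J:
# alpha' = 1 / (M2^2 - M1^2), masses in GeV, slopes in GeV^-2.
pairs = {
    "1S0 -> 1P1": (2.9796, 3.5262),  # eta_c(1S), h_c(1P)
    "3S1 -> 3P2": (3.0969, 3.5563),  # J/psi,     chi_c2(1P)
    "3P0 -> 3D1": (3.4152, 3.7699),  # chi_c0,    psi(3770)
    "3P1 -> 3D2": (3.5106, 3.836),   # chi_c1,    psi(3836)
}
for name, (m1, m2) in pairs.items():
    slope = 1.0 / (m2**2 - m1**2)
    print(f"{name}: alpha' = {slope:.3f} GeV^-2")
```

The output matches the quoted values 0.282, 0.327, 0.392 and 0.419 $GeV^{-2}$ to within about 0.001 (the small differences come from rounding of the input masses).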
From the previous statements and our analysis, Regge trajectories with neighboring mesons stepped by $1$ in $J$ really do deviate from linearity and parallelism. Worse, we do not know how large the deviations from linearity and parallelism are for these trajectories. Though one could give a rough analysis of, and predictions for, the charmonium spectrum in terms of trajectories with neighboring mesons stepped by $1$ in $J$, as was done in \cite{gershtein}, we do not pursue that in this Letter. In order to make a more precise analysis and reliable predictions, we should make use of the linearity and parallelism of trajectories with neighboring mesons stepped by $2$ in $J$. Unfortunately, no Regge trajectory with neighboring mesons stepped by $2$ in $J$ can be formed from the experimental data in table~\ref{table-1} and table~\ref{table-2}. However, if the linearity and parallelism of Regge trajectories with neighboring mesons stepped by $2$ in $J$ (all mesons with the same $PC$ on one trajectory) are combined with the hyperfine splitting relation of P-wave or higher L-state charmonium, some predictions for the spectrum of the $1D$ charmonium multiplet can evidently be made. The two lowest Regge trajectories with neighboring mesons stepped by $2$ in $J$ consist of \begin{eqnarray*} 0^{-+} ~(^1S_0),~~2^{-+} ~(^1D_2), \\\nonumber 1^{--} ~(^3S_1),~~3^{--} ~(^3D_3). \end{eqnarray*} These two trajectories should be parallel and have the same slope. On these trajectories, the $0^{-+}$ $\eta_c(1S)$ and the $1^{--}$ $J/\psi(1S)$ have been identified, while the $2^{-+} ~1^1D_2$ and $3^{--} ~1^3D_3$ have not been pinned down. In the $1D$ multiplet, the $1^{--}$ $\psi(3770)$ has been identified, while the other three states have not. From our analysis, if another $1D$ state ($2^{--}$, $2^{-+}$ or $3^{--}$) is confirmed, the full spectrum of the $1D$ multiplet can be predicted.
The favored assignment for the $\psi(3836)$ is the $1^3D_2$ state\cite{pdg}, although this has not been definitely confirmed. If the $\psi(3836)$ is indeed confirmed as the $2^{--}$ $1^3D_2$ state, the masses of the $3^{--}$ $1^3D_3$ and the $2^{-+}$ $1^1D_2$ can be predicted as follows. If the mass of the $3^{--}$ $1^3D_3$ is set to M GeV, the mass of the $2^{-+}$ $1^1D_2$ is $(7M+5\times 3.836+3\times 3.770)/15$ GeV, because the hyperfine splitting of the $1D$ charmonium vanishes. On the other hand, the two Regge trajectories have the same slope. Therefore, an equation for M is obtained \begin{eqnarray} M^2-3.097^2=({7M+5\times 3.836+3\times 3.770\over 15})^2-2.98^2. \end{eqnarray} The solution of this equation is $M=3.981$ GeV. Correspondingly, the mass of the $2^{-+}$ $1^1D_2$ is predicted to be $3.890$ GeV. Within the charmonium framework, the $X(3872)$ has been interpreted as the $1^3D_2$ and $1^3D_3$\cite{bettoni}. These assignments can also be checked. If the $X(3872)$ is really the $2^{--}$ $1^3D_2$ state, a similar equation \begin{eqnarray} M^2-3.097^2=({7M+5\times 3.872+3\times 3.770\over 15})^2-2.98^2 \end{eqnarray} with M the mass of the $3^{--}$ $1^3D_3$ is obtained. The masses of the $3^{--}$ $1^3D_3$ and the $2^{-+}$ $1^1D_2$ are then found to be $4.002$ GeV and $3.912$ GeV, respectively. If the $X(3872)$ is the $3^{--}$ $1^3D_3$, the relations in the $1D$ multiplet do not respect the parallelism of Regge trajectories and the hyperfine splitting relation. The breaking of the hyperfine splitting relation is quite large even if a large deviation from the parallelism of the Regge trajectories is assumed. So we can safely conclude that this assignment is impossible and should be ruled out. This viewpoint is supported by a recent experimental analysis\cite{cdf2}, where the interpretation as the $3^{--}$ $1^3D_3$ seems to be excluded. In any of these arrangements, once another new state in the $1D$ multiplet is definitely pinned down, the masses of the other two states will be determined.
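The equations above are quadratic in M and easy to solve numerically. The sketch below (illustrative only, with the masses hard-coded in GeV as in the text) recovers the quoted solutions by bisection:

```python
# Solve  M^2 - m_Jpsi^2 = m_singlet(M)^2 - m_etac^2  for the 1^3D_3 mass M,
# where  m_singlet(M) = (7*M + 5*m_3D2 + 3*m_3D1)/15  is fixed by the
# vanishing 1D hyperfine splitting (masses in GeV, values from the text).
def solve_1d_multiplet(m_3d2, m_3d1=3.770, m_jpsi=3.097, m_etac=2.98):
    def f(m):
        singlet = (7.0 * m + 5.0 * m_3d2 + 3.0 * m_3d1) / 15.0
        return (m**2 - m_jpsi**2) - (singlet**2 - m_etac**2)
    lo, hi = 3.8, 4.2          # bracket chosen around the expected root
    for _ in range(60):        # bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    m_3d3 = 0.5 * (lo + hi)
    m_1d2 = (7.0 * m_3d3 + 5.0 * m_3d2 + 3.0 * m_3d1) / 15.0
    return m_3d3, m_1d2

print(solve_1d_multiplet(3.836))  # psi(3836) as 1^3D_2 -> about (3.981, 3.890)
print(solve_1d_multiplet(3.872))  # X(3872)  as 1^3D_2 -> about (4.002, 3.912)
```

Both cases reproduce the masses quoted in the text to the stated precision.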
Obviously, no matter which case is realized, it is important first to identify another new state in the $1D$ multiplet, and then to find the other two states in the relevant energy regime. Of course, if the $X(3872)$ is the $2^{--}$ $1^3D_2$ state, the existing $\psi(3836)$ requires another interpretation. In this Letter, the properties of the Regge trajectories of charmonium have been studied. We combined the linearity and parallelism of Regge trajectories with a hyperfine splitting relation, and observed that some predictions can be given for the spectrum of the $1D$ multiplet. From these analyses, some results on the charmonium spectrum have been obtained: 1. The assignment of the $X(3872)$ as the $3^{--}~1^3D_3$ charmonium state should be ruled out. 2. If the $X(3872)$ is the $2^{--}$ $1^3D_2$ state, the masses of the $3^{--}$ $1^3D_3$ and the $2^{-+}$ $1^1D_2$ are predicted to be $4002$ MeV and $3912$ MeV, respectively. 3. The definite confirmation of the $\psi(3836)$ is important for the $1D$ multiplet. If it is confirmed as the $2^{--}$ $1^3D_2$ state, the masses of the $3^{--}$ $1^3D_3$ and the $2^{-+}$ $1^1D_2$ are predicted to be $3981$ MeV and $3890$ MeV, respectively. These theoretical predictions are expected to give some hints to forthcoming experiments. As is well known, the linearity and parallelism of Regge trajectories with neighboring mesons stepped by $2$ in $J$ were obtained in analyses of many meson systems in the literature; slight deviations were also observed. These deviations are expected to affect our conclusions little, owing to their smallness. However, it will still be interesting to determine how large the deviations are in charmonium and, especially, their relation to the spin-spin and spin-orbit interactions in hadrons. The phenomenological study of the deviations may be important for the study of potential models.
Acknowledgment: This work is supported in part by the National Natural Science Foundation of China, the Foundation of the Department of Education of Shanghai, and the Shanghai Leading Academic Discipline Project, project number T0104.
\section{INTRODUCTION} Animal sensory organs have inspired diverse artificial sensing systems, from chemosensing based on olfaction to whiskered robots based on the vibrissal sense of rodents. We are intimately familiar with our own human sense of touch, which underlies our abilities to manipulate the world and interact with other human beings. Yet for rodents and some other mammals, their primary tactile sense is from vibrissae, or tactile hair, which functions as a proximity sense for navigation and to catch prey. Whiskered robots are of interest to deploy this proximity sense in applications where vision is compromised, such as disaster recovery, and as biomimetic instantiations to investigate the physiological and neuroscientific principles underlying natural sensing~\cite{Prescott2009}. Accordingly, whiskered robots are an active area of research, with state-of-the-art devices including the SHREWbot and BELLAbot robots~\cite{Pearson2011,Assaf2016a}. While these whiskered robots are excellent biomimetic counterparts of biological vibrissal systems, they are highly complex, one-off robots that are expensive and labour-intensive to make; for example, the SHREWbot nose cone has 24 whisker modules, each having an actuated base with a Hall effect sensor to measure whisker deflection. Meanwhile, within the related area of artificial cutaneous (fingertip) touch, there has been progress towards 3D-printed, optical tactile sensors such as the TacTip family~\cite{Chorley2009,Ward-Cherrier2018}, which are inexpensive and simple to fabricate. Here we propose a new class of 3D-printed, optical tactile whisker sensors that we call TacWhiskers, based on the TacTip design. We consider two versions: a static TacWhisker array analogous to immotile tactile vibrissae ({\em e.g.} rodent microvibrissae) and a dynamic TacWhisker array analogous to motile tactile vibrissae ({\em e.g.} rodent macrovibrissae). 
The dynamic TacWhisker uses a single motor to protract and retract its whiskers in a rhythmic whisking motion, using a tendon passing between two rows of whiskers (Fig.~\ref{fig:1}). \begin{figure}[t!] \centering \includegraphics[width=0.46\columnwidth,trim={0 30 0 30},clip]{fig1a_compnobg} \includegraphics[width=0.45\columnwidth,trim={0 -50 0 0},clip]{fig1b_compnobg}\vspace{0em} \caption{Side (left image) and front (right image) views of the dynamic TacWhisker array mounted on an ABB robot arm. The actuation module, sensor body and tip are visible, with the tendon that protracts the whiskers.} \label{fig:1} \vspace{-1em} \end{figure} To assess the TacWhisker performance, we consider a localization task where the position of a rod is classified from whisker deflections, analogously to an experimental protocol for rodent perception~\cite{Diamond2008}. We find the accuracy of TacWhisker perception depends strongly on the whisker motion: the static TacWhisker has poor location perception with a forward/back dabbing motion and the dynamic TacWhisker has good location perception when whisking. These motions are then applied to an active perception task, in which the rod is both localized and centred within the whisker array. Only the dynamic TacWhisker is able to perform well at this task, with trajectories quickly centring on the stimulus and localization performance near perfect after a few contacts. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth,trim={0 0 0 0},clip]{fig2_comp} \caption{Physiology of tactile vibrissae - the follicle-sinus complex from which the whisker shaft emanates, and within which are the vibrissal sensory mechanoreceptors. (Diagram modified from~\cite{Ebara2017} under a Creative Commons Attribution-ShareAlike 3.0 License.) 
} \label{fig:2} \vspace{-1em} \end{figure} \section{BACKGROUND AND RELATED WORK} \subsection{Biological sensing with tactile hair} Almost all mammals except {\em Homo sapiens} have a specialised form of tactile hair called vibrissae or whiskers~\cite{Ahl1986}. Whiskers differ from conventional (pelage) hair by being~\cite{Ahl1986}: (i) much longer; (ii) primarily facial (although they can also occur on the body); (iii) sited on a large, highly-innervated follicle-sinus complex (Fig.~\ref{fig:2}); and (iv)~specifically represented in the sensory cortex of the brain. Tactile whiskers may be motile or immotile, depending on the animal and where on the body they are located~\cite{Prescott2011,Ahl1986}. In mice and rats, the long facial whiskers (macrovibrissae) around the snout move bilaterally back and forth in an active sensing motion known as `whisking'; meanwhile, the short facial whiskers (microvibrissae) underneath the nostrils are fixed. Immotile whiskers are also found on other body regions; for example, the carpal vibrissae just above cat paws. Sensory mechanoreceptors within the whisker follicle transduce motion of the whisker shaft into contact information about the environment~\cite{Ebara2017}. Merkel cells in a collar around the follicle opening activate slowly-adapting (SA) neurons that fire during sustained whisker deformation. Deeper receptors such as lanceolate, Ruffini and Pacinian endings within the whisker follicle act in concert with the Merkel cells, signalling information about vibration, motion and whisker deflection, to comprise the vibrissal tactile sense. \subsection{Biomimetic tactile whiskered robots} Over the last decade there has been a succession of biomimetic tactile whiskered robots developed from a collaboration between Sheffield Robotics and Bristol Robotics Laboratory~\cite{Prescott2009}.
The initial Whiskerbot mobile platform had 6 glass-fibre moulded whiskers mounted on strain gauges to measure 2D deflections of the whisker shaft~\cite{Pearson2007}. An improved SCRATCHbot platform had 18 actuated 3D-printed whiskers with Hall effect sensors to measure deflections while actively whisking~\cite{Pearson2010}. These single-actuated whiskers were modularized as part of the BIOTACT project, leading to another mobile whiskered platform called Shrewbot~\cite{Pearson2011} and a stand-alone whisker array for mounting on a robot arm~\cite{Sullivan2012}, both with 24 individually-actuated whiskers from $\sim$5-15\,cm long arranged in a conical 6-by-4 array. Other technologies have been used for whiskered robots. An early mobile robot from the aMouse project attached real rodent whiskers to microphones for detecting high-frequency vibrations~\cite{Fend2005}. Soon after, a whisker array was built with both slowly adapting (deflection) and rapidly adapting (velocity) components of the whisker signal and used to recognize shape~\cite{Kim2007}. A more recent robot, BELLAbot, combined the BIOTACT technology with advances in electroactive polymeric (EAP) actuation to orient, whisk and sense over an array of 20 distinct EAP whisker modules~\cite{Assaf2016a}. Simpler biomimetic whiskers have also been proposed, such as using strain gauges in the follicle to measure the bending moments proposed to underlie localization with rodent vibrissae~\cite{Emnett2018}. \subsection{Biomimetic optical tactile sensors} \begin{figure}[t!] \centering \begin{tabular}[b]{@{}cc@{}} \small\bfseries{(a) TacTip transduction}& \small\bfseries{(b) TacWhisker transduction}\\ \includegraphics[width=0.5\columnwidth,trim={0 0 0 0},clip]{fig3a}& \includegraphics[width=0.38\columnwidth,trim={0 0 0 0},clip]{fig3b} \end{tabular} \caption{Common transduction principle of the TacTip and TacWhiskers. (a)~For the TacTip, surface strain separates the inner pins. 
(b) For the TacWhisker, whisker shaft deformation deflects the pins. In both cases, the movement of markers on the pin tips is tracked by a camera.} \label{fig:3} \vspace{-1em} \end{figure} This paper investigates a novel vibrissal tactile sensor based on modifying a 3D-printed cutaneous (fingertip) tactile sensor called the TacTip (see~\cite{Ward-Cherrier2018} for a recent overview). The TacTip is a biomimetic tactile sensor based on the layered structure of human glabrous skin~\cite{Chorley2009}. It has an outer biomimetic epidermis made from a rubber-like material over an inner biomimetic dermis made from polymer gel. These two materials interdigitate in a mesh of inner nodular pins, based on the intermediate ridge structure of human skin that extends under the epidermis into the dermis. The biomimetic counterparts to sensory mechanoreceptors are markers on the pin tips, which can be imaged through a transparent gel that comprises the dermis. The pin movement is considered analogous to Merkel cell activity in the intermediate ridges~\cite{Cramphorn2017}. A principal observation underlying this paper is that the transduction mechanism in the TacTip can also be applied to tactile whiskers (Fig.~\ref{fig:3}). Information about how the TacTip surface deforms upon contacting an object is represented in the movement of markers on the internal pins, {\em e.g.} separating on regions of high spatial curvature (Fig.~\ref{fig:3}a). Conversely, for internal pins attached to whiskers extending out of the sensor surface, the markers represent displacement of the whisker shafts (Fig.~\ref{fig:3}b). As the same receptor type (Merkel cells) is implicated in both types of biological sensing, this mechanism also gives a commonality in the biomimetics of cutaneous and vibrissal touch (although it should be noted that a multitude of other mechanoreceptor types are also involved in both types of touch).
\begin{figure}[t] \centering \begin{tabular}[b]{@{}cc@{}} \begin{tabular}[b]{@{}c@{}} \small\bfseries{(a) CAD for the whisker}\\ \includegraphics[width=0.4\columnwidth,trim={0 75 0 35},clip]{fig4a}\\ \small\bfseries{(b) CAD for the tip}\\ \includegraphics[width=0.32\columnwidth,trim={0 40 0 20},clip]{fig4b}\\ \small\bfseries{(c) CAD for the base}\\ \includegraphics[width=0.4\columnwidth,trim={0 5 0 5},clip]{fig4c}\\ \end{tabular}& \begin{tabular}[b]{@{}c@{}} \small\bfseries{(d) Static TacWhisker array}\\ \includegraphics[width=0.49\columnwidth,trim={195 10 235 10},clip]{fig4d_comp} \end{tabular} \end{tabular} \caption{Design of the static TacWhisker array. (a) CAD for the 3D-printed whisker, a tapered shaft. (b) CAD for the tip, comprising a compliant skin with sockets for the whiskers and a rigid base. (c) CAD for the base housing for the camera, showing also the LED ring. (d) Assembled TacWhisker array. } \label{fig:4} \vspace{-1em} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.8\columnwidth,trim={50 10 30 30},clip]{fig5} \caption{Design of the dynamic TacWhisker array. The 3D-printed tip with mounted whiskers attaches to the base housing the camera, which attaches to the actuation module comprising the motor and housing. A tendon runs from the spool, through guides and across a groove in the compliant tip; actuation of the tendon causes the tip to deform, moving the whiskers.} \label{fig:5} \vspace{1em} \begin{tabular}[b]{@{}cc@{}} \small\bfseries{(a) Active protraction}& \small\bfseries{(b) Passive retraction}\\ \includegraphics[width=0.4\columnwidth,trim={1200 120 200 600},clip]{fig6_comp}& \includegraphics[width=0.4\columnwidth,trim={1200 670 200 50},clip]{fig6_comp} \end{tabular} \caption{Whisking motion of the dynamic TacWhisker array. (a) The motor pulls on the tendon to actively protract (bring together) the two rows of whiskers. 
(b) Reversing the motor releases the tendon to passively retract (pull apart) the whiskers by elastic reformation of the tip.} \label{fig:6} \vspace{-1em} \end{figure} \section{METHODS} \label{sec:3} \subsection{Inspiration and design of the TacWhisker} \label{sec:3a} We call the whiskered version of the TacTip a {\em TacWhisker} array, emphasising that it is based on tactile whiskers rather than tactile (finger)tips. In this work, we consider two types of TacWhisker array: static (immotile) and dynamic (motile). \begin{figure*}[h!] \centering \includegraphics[width=0.24\textwidth,trim={-20 5 -60 0},clip]{fig7a} \includegraphics[width=0.24\textwidth,trim={-30 -5 -50 0},clip]{fig7b} \includegraphics[width=0.24\textwidth,trim={-10 10 -20 0},clip]{fig8cr} \includegraphics[width=0.23\textwidth,trim={-0 0 -20 0},clip]{fig7d} \includegraphics[width=\textwidth,trim={-10 20 -10 10},clip]{fig7e} \caption{Data processing pipeline. The internal camera captures an image of the pins attached to the shafts of the whiskers. The pins are detected and located with a blob detection algorithm. The pins are then ordered by tracking them from frame to frame (here coloured by their row). Over multiple frames, the processing pipeline produces a time series of pin movement, representing deflection of the whiskers.} \label{fig:7} \vspace{-1em} \end{figure*} \subsubsection{Static TacWhisker array (Fig.~\ref{fig:4})} The first design of TacWhisker array modifies just the standard TacTip tip to house whiskers (Figs~\ref{fig:4}a,b). There is no modification of the 3D-printed TacTip base (Fig.~\ref{fig:4}c), which contains the USB camera (Lifecam, Microsoft) and an LED ring to illuminate the pin markers (see~\cite{Ward-Cherrier2018} for details). The tip is based on recent versions of the TacTip~\cite{Ward-Cherrier2018} that use multi-material 3D-printing.
The compliant surface and inner pins are printed in a rubber-like material (Tango Black+ 27) and the pin tips and mount in hard plastic (Vero White); this outer surface is filled with a soft, clear silicone gel (Techsil RTV27905) held in place with a clear acrylic lens cap. For housing whiskers, the tip is modified to (Fig.~\ref{fig:4}b): (i)~reduce the number of pins to 21 (from 127) sited near the top of the tip; (ii)~space the pins further apart (4.5\,mm separation rather than 3\,mm, keeping the hexagonal projection); (iii)~enlarge and extrude the solid markers through the compliant surface (2.2\,mm dia.$\times$3.5\,mm depth pins, increased from 1.2\,mm$\times$2\,mm); and (iv)~include a hole (1\,mm dia.$\times$3\,mm depth) functioning as a socket for the whiskers. These design parameters were chosen to give good pin movement upon deflection of the whiskers, and to site the whiskers appropriately for contact. The whiskers~(Fig.~\ref{fig:4}a) are modified versions of BIOTACT vibrissae~\cite{Pearson2011} that are 3D printed using nanocure-25. The main change is to reduce the whisker size for the smaller scale of the TacTip (40\,mm dia.) compared with the BIOTACT conical housing (100\,mm dia.). Accordingly, we chose whiskers 40\,mm long with a 0.98\,mm dia. base tapering to 0.6\,mm dia. at the tip, similar in scale to real rat whiskers. For simplicity, all whiskers had the same dimensions, but it would be straightforward to introduce size variations like those of rodent macrovibrissae. \subsubsection{Dynamic TacWhisker array (Fig.~\ref{fig:5})} The TacWhisker array can join onto an actuation module that protracts and retracts the whiskers back and forth in a whisking motion (Fig.~\ref{fig:6}). The whiskered tip is modified to have 2 rows of 5 whiskers arranged in a bilaterally symmetric pattern. A tendon runs through a groove between these rows and two guides in the tip mount (Fig.~\ref{fig:6}).
Forwards whisker motion (protraction) results from tensioning the tendon to compress the surface at the midline (Fig.~\ref{fig:6}a); backwards whisker motion (retraction) results from releasing the tendon to elastically reform the surface (Fig.~\ref{fig:6}b). The compliant surface and whisker mounts are shaped so that the whisker tips can meet under modest surface compression. The dynamic TacWhisker array can thus rhythmically protract and retract its whiskers together and apart in a motion akin to rodent whisking. The dynamic TacWhisker is designed to be modular and re-use parts of the static TacWhisker. Apart from the modified whiskered tip, the TacWhisker base housing the camera and LED lighting is the same as the conventional TacTip. The underside of the base has a bayonet fitting, which is used to connect to an actuation module for driving the tendon (Fig.~\ref{fig:5}). This actuation module houses a Dynamixel MX 28 servomotor and spool for the tendon, with outer guides to ensure the tendon runs smoothly from the spool, outside the actuation and body modules, and over the TacWhisker tip. \subsection{Robotic platform and software infrastructure} \label{sec:3b} For testing, the static or dynamic TacWhisker array is mounted as an end-effector on a 6-DOF robotic arm (IRB 120, ABB) ({\em e.g.} Fig.~\ref{fig:1}). The arm is mounted on a table that also contains mounting stations for the stimuli. A custom 3D-printed mount is bolted to the rotating (wrist) section of the arm to which either sensor can be attached via a common bayonet fitting on the TacWhisker base and actuation module. A modular software infrastructure is used in which MATLAB is the primary interface for running tests and analysing data. The ABB arm is controlled via an IronPython and RAPID interface, and data are gathered from the USB camera within the TacWhisker sensor with Python OpenCV. Similarly, a Python interface controls the Dynamixel motor of the dynamic TacWhisker array.
Communication between software modules is via TCP/IP ports and sockets. \subsection{Sensory transduction and data processing} \label{sec:3c} \subsubsection{Sensing} Following recent studies with the TacTip~\cite{Lepora2015,Ward-Cherrier2018}, the sensor output is a time series of pin deflections extracted from the camera images. The transformation of the camera image to marker positions requires that the pin markers be detected, which is done via standard `blob detection' methods in Python OpenCV. Overall, the data processing is a pipeline: camera image to pin detection to pin identification (nearest neighbour tracking) to give an ordered time series of pin deflections measured in pixels~(Fig.~\ref{fig:7}). The resulting tactile whisker data comprises a multi-dimensional time series of $(x,y)$ pin deflections, measured in pixels along the horizontal and vertical directions of the camera image. For visualization, the time series plots of the $x$- and $y$-deflections are labelled by colouring the tactile dimension by its pin location (Fig.~\ref{fig:7}, right plots). \begin{figure*}[t!] \centering \begin{tabular}[b]{@{}c@{}c@{}} \multicolumn{2}{c}{\bf (a) Static TacWhisker array with dabbing motion} \\ \includegraphics[width=0.8\textwidth,trim={0 25 0 0},clip]{fig8b} & \includegraphics[width=0.16\textwidth,trim={-5 15 -5 0},clip]{fig8ar} \\ \multicolumn{2}{c}{\bf (b) Dynamic TacWhisker array with whisking motion} \\ \includegraphics[width=0.8\textwidth,trim={0 25 0 0},clip]{fig8c} & \includegraphics[width=0.16\textwidth,trim={-5 15 -5 0},clip]{fig8cr} \\ \multicolumn{2}{c}{\bf (c) Dynamic TacWhisker array with self-motion calibration} \\ \includegraphics[width=0.8\textwidth,trim={0 0 0 0},clip]{fig8d} & \includegraphics[width=0.16\textwidth,trim={-5 -20 -5 0},clip]{fig8cr} \\ \end{tabular} \caption{Location data collected from the static (a) and dynamic (b,c) TacWhisker arrays.
In all cases, the sensor is moved across a horizontal range from left to right (static: 50\,mm range; dynamic: 40\,mm range). The plot colour denotes the identity of the whiskers (right images).} \label{fig:8} \vspace{-1em} \end{figure*} \subsubsection{Perception} Tactile perception is the process of inferring the properties of a stimulus from data collected by contacting that stimulus. Here we use a likelihood model that transforms tactile data $D$ into a likelihood probability $P(D\,|\,H_i)$ for a set of perceptual hypotheses $\{H_1,...,H_N\}$, which could be the labels ({\em e.g.} location $x_i$ in mm) for training data used to construct the model. The perceptual decision is then the hypothesis $H_i$, $i=\argmax_j P(D\,|\,H_j)$, with maximum likelihood for the sensed tactile data $D$. Following recent studies with the TacTip, here we use a histogram likelihood model~\cite{Lepora2012,Lepora2016}, which bins the sensor data into intervals and counts bin frequency to form sampling distributions that are multiplied over sensor dimension and time. While simple, this model is effective for the TacTip and other sensors~\cite{Lepora2015,Lepora2016}, bears analogy with neural processing~\cite{Lepora2016} and is fairly robust and efficient. That said, the likelihood model is not the focus of this study, and so any model that works reasonably well would have been sufficient. All quantitative analyses of perceptual accuracy in this paper are based on cross validation over 10 repeated runs for data collection (representative single sets shown in Fig.~\ref{fig:8}). Monte Carlo sampling then draws the training set from a randomly-chosen run and the test data from a different run, typically with 10,000 samples per analysis.
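The histogram likelihood model just described can be made concrete with a short sketch. The following is a minimal NumPy illustration of our own devising (function names, the shared binning and the additive smoothing constant are our assumptions, not the code used for this paper): each sensor dimension is binned independently, bin frequencies form per-class sampling distributions, and log-likelihoods are summed over dimensions, with time samples folded into extra feature dimensions.

```python
import numpy as np

def fit_histograms(train, labels, n_bins=20, eps=1e-3):
    """Fit one smoothed histogram per (class, sensor dimension) pair.

    train:  (n_samples, n_dims) array of pin deflections in pixels,
            with any time samples flattened into extra dimensions.
    labels: integer class label (e.g. a location bin) per sample.
    Returns shared bin edges and per-class sampling distributions.
    """
    n_classes = int(labels.max()) + 1
    edges = np.linspace(train.min(), train.max(), n_bins + 1)
    # hist[c, d, b] ~ P(dimension d falls in bin b | class c); the small
    # additive term keeps unseen bins at nonzero probability.
    hist = np.full((n_classes, train.shape[1], n_bins), eps)
    for c in range(n_classes):
        rows = train[labels == c]
        for d in range(train.shape[1]):
            counts, _ = np.histogram(rows[:, d], bins=edges)
            hist[c, d] += counts
    return edges, hist / hist.sum(axis=2, keepdims=True)

def classify(sample, edges, hist):
    """Maximum-likelihood class for one test sample of shape (n_dims,).

    Summing log-likelihoods over dimensions corresponds to multiplying
    the sampling distributions across sensor dimensions (and time).
    """
    bins = np.clip(np.digitize(sample, edges) - 1, 0, hist.shape[2] - 1)
    loglik = np.log(hist[:, np.arange(len(sample)), bins]).sum(axis=1)
    return int(np.argmax(loglik))
```

A Monte Carlo cross-validation in the style of the paper would repeatedly refit these histograms on one randomly-chosen run and score `classify` on samples from a different run.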
\subsubsection{Active perception} We follow the approach of `biomimetic active touch for fingertips and whiskers'~\cite{Lepora2016} in which active perceptual decisions are sequential over multiple tactile contacts $D(1),\cdots,D(T)$ with actions made between contacts to fulfil a goal, such as centring the sensor on a stimulus. Bayes' rule is applied recursively to the likelihoods $P(D\,|\,H_i)$ to integrate evidence over contacts $$P(H_i|D(t)) = \frac{P(D(t)|H_i)P(H_i|D(t-1))}{\sum_j P(D(t)|H_j)P(H_j|D(t-1))}$$ beginning from flat priors $P(H_i|D(0))=P(H_i)=1/N$. Here we use a simple active perception policy in which actions move the tactile sensor towards a goal location $x_{\rm fix}=H_{N/2}$, here taken as the stimulus centre. Then the actions are translations $\Delta x(t) = (x_{\rm fix}-x_j(t))$, with $x_j$ the $j$th location class $j = \argmax_i P(D(t)|H_i)$. The decision is made when the probability crosses a threshold $P(H_i|D(t))>\theta$ that sets how many contacts (on average) are needed for a decision. \section{RESULTS} \subsection{Inspection and comparison of TacWhisker data} \label{sec:4a} Whisker contact data from two distinct experimental situations were collected (Fig.~\ref{fig:8}). We chose motions in which the TacWhisker made discrete contacts with the stimulus, with the whiskers leaving the stimulus between contacts:\\ (a) Static TacWhisker array with dabbing motion. The sensor was moved horizontally across a rod, dabbing vertically down (15\,mm) to make a tapping contact onto the rod stimulus, with 50 taps across 50\,mm.\\ (b) Dynamic TacWhisker array with whisking motion. The experiment was repeated using the dynamic array to whisk onto the rod stimulus, with 40 whisks across 40\,mm. (The shorter range was due to a narrower whisker field.) A further dataset was created from modifying set (b):\\ (c) Dynamic TacWhisker array with self-motion calibration.
A reference signal (taken at the centre of the location range) was subtracted from the whisking data; this calibration makes a whisker contact more visually apparent since the self-motion dominates otherwise. In all experiments, good-quality data were obtained from the TacWhisker arrays, as evident in the smoothly varying plots in Fig.~\ref{fig:8} with signal dominating over noise. Furthermore, the sensor data clearly covaries with contact location, which is especially evident in the dynamic array calibrated for self-motion (Fig.~\ref{fig:8}c), suggesting that the arrays will accurately perceive location. However, as we will explore below, the manner in which the whisker contacts the stimulus is key for perception, as is evident in the significant differences between the data in Figs~\ref{fig:8}a,b at the same locations. \subsection{Self-motion affects the TacWhisker data} \label{sec:4b} A principal difference between the dynamic and static TacWhisker arrays is that the tactile data is strongly affected by the self-motion of the dynamic array: the whiskers sweeping forwards then back is the most prominent feature of the data (Fig.~\ref{fig:8}b). This self-motion effect is also a known aspect of biological vibrissal systems (see discussion). Since the whisking motion maintains constant frequency and amplitude in the present experimental task, its effect on the tactile data can be removed by subtracting a reference signal, which we choose to be at the centre of the location range. This gives the self-motion compensated tactile data (Fig.~\ref{fig:8}c), where the changing contact with the stimulus is clearly evident. \begin{figure}[t!]
\centering \begin{tabular}[b]{@{}cc@{}} {\bf (a) Static TacWhisker} & \\ \hspace{2em}{\bf dabbing motion} & \\ \includegraphics[width=0.47\columnwidth,trim={0 0 0 0},clip]{fig9b} & \\ {\bf (b) Dynamic TacWhisker} & {\bf (c) Dynamic TacWhisker} \\ \hspace{2em}{\bf whisking motion} & \hspace{2em}{\bf calibrated self-motion} \\ \includegraphics[width=0.47\columnwidth,trim={0 0 0 0},clip]{fig9c} & \includegraphics[width=0.47\columnwidth,trim={0 0 0 0},clip]{fig9d} \\ \end{tabular} \caption{Accuracy of location perception for the static (a) and dynamic (b,c) TacWhisker arrays. Monte Carlo 10-fold cross validation (10000 samples) is shown by plotting the perceived against ground truth locations (red markers). The variability of the location perception is shown between the 25th and 75th percentiles (gray region).} \label{fig:9} \vspace{0em} \end{figure} \subsection{Tactile perception depends on TacWhisker motion} \label{sec:4c} The accuracy of location perception in the three experimental conditions was then assessed with a standard classifier of tactile data based on a histogram likelihood model (details in Sec.~\ref{sec:3c}). Monte Carlo cross validation over 10 repeated runs was used to generate distributions of the perceived class label against the ground truth class label (Fig.~\ref{fig:9}). For the static TacWhisker array (Fig.~\ref{fig:9}a), the location perception is variable for the dabbing motion (interquartile range IQR\,$=12$\,mm). This agrees with qualitative inspection of the data, which has a sparse, unstructured form as the rod hits or misses vibrissae (Fig.~\ref{fig:8}a). For the dynamic TacWhisker array (Figs~\ref{fig:9}b,c), the location perception is accurate with little variability (interquartile range, IQR\,$=1.5$\,mm), with similar results for the uncompensated and self-motion compensated tactile data.
It seems a basic compensation of self-motion does not help the perception, although more sophisticated methods ({\em e.g.} adaptive noise cancellation~\cite{Anderson2010}) may be better. The compensation does reveal a significant covariation of the tactile data with location (Fig.~\ref{fig:8}c), consistent with the accurate perception. \begin{figure}[t!] \centering \begin{tabular}[b]{@{}cc@{}} {\bf (a) Static TacWhisker} & {\bf (b) Dynamic TacWhisker} \\ \hspace{2em}{\bf dabbing motion} & \hspace{2em}{\bf whisking motion} \\ \includegraphics[width=0.47\columnwidth,trim={0 0 0 0},clip]{fig10a} & \includegraphics[width=0.47\columnwidth,trim={0 0 0 0},clip]{fig10b} \\ \end{tabular} \caption{Trajectories for actively localizing a stimulus with the TacWhisker sensor for the static (a) and dynamic (b) arrays. Trajectories begin from random locations and aim towards the central location (dashed red line). In both cases, a posterior threshold $\theta=0.5$ was used to make a decision.} \label{fig:10} \vspace{0.5em} \centering \begin{tabular}[b]{c} {\bf Dynamic TacWhisker}\\ {\bf Location error (active vs passive)}\\ \includegraphics[width=0.6\columnwidth,trim={0 0 0 0},clip]{fig11} \end{tabular} \caption{Mean location errors for active and passive touch. Active localization is over multiple contacts while centring on the object, with either a posterior threshold (range 0--0.95; red histogram) or set decision time (1--10 contacts; black histogram). Passive localization does not move the sensor. Averages are over 1000 runs with random starting locations.} \label{fig:11} \vspace{-1em} \end{figure} \subsection{The dynamic TacWhisker aids active touch} \label{sec:4d} The location perception was then applied to a simple task in which the TacWhisker actively localizes a stimulus while using intermediate estimates of the object location to centre it within the whisker array (Sec.~\ref{sec:3c}).
This task is a simple example of active touch~\cite{Lepora2016} and bears analogy with rodent behaviour when exploring stimuli. For the static TacWhisker array (Fig.~\ref{fig:10}a), the trajectories do not converge on the central location. It appears that the quality of the perception from a dabbing motion is not sufficient to actively localize the stimulus within the whisker field. The dynamic TacWhisker array (Fig.~\ref{fig:10}b) achieves successful active localization across the range of starting locations. All trajectories converge on the central location. Over many repeated trials, the mean active localization errors improve with mean decision time (Fig.~\ref{fig:11}, red histogram), reaching near-perfect accuracy after $\sim$5 contacts with the threshold-crossing decision rule. A fixed-time rule also improves with decision time but is not as accurate for longer decision times (black histogram). Conversely, mean errors for passive perception do not improve because the robot cannot move to gather new data (white histogram). Overall, the dynamic TacWhisker array with active localization using a posterior threshold-crossing decision rule gives the best localization performance. \section{DISCUSSION} In this paper, we introduced two novel whiskered robots: the static and dynamic TacWhisker arrays (Figs~\ref{fig:4},\ref{fig:5}). These robots are biomimetic in being based on the sensory transduction principles of biological vibrissae, bearing analogy with mechanoreception by Merkel cells in the whisker follicle. We based the designs on a 3D-printed cutaneous (fingertip) optical tactile sensor called the TacTip~\cite{Chorley2009,Ward-Cherrier2018}, which mimics sensory transduction in skin via Merkel cell mechanoreceptors. The static TacWhisker array modifies the TacTip skin to house 21 whiskers arranged around its tip.
The dynamic TacWhisker array is actuated to move its whiskers back and forth in a whisking motion, with the whiskers arranged bilaterally in 2 rows of 5 whiskers. The performance of the TacWhisker sensors was examined by perceiving the location of a rod, motivated by similar experiments quantifying rodent perception~\cite{Diamond2008}. The quality of the perception depended strongly on the whisker motion. For the static TacWhisker array, the dabbing motion was inaccurate and variable (IQR$\sim$10\,mm), consistent with sparse, unstructured contacts. For the dynamic TacWhisker array, the whisking motion resulted in accurate and reliable perception (IQR$\sim$1.5\,mm). In consequence, only the dynamic TacWhisker array could perform an active localization task in which stimulus location is estimated while centring the object in the whisker array. Performance improved from a mean error of $\sim$1\,mm after 1 contact to perfect localization after 5 contacts, in accordance with previous studies of active touch~\cite{Lepora2016}. Other active localization strategies could work better for the static TacWhisker ({\em e.g.} avoiding locations where the sensing is poor); this would however require a more complex policy for moving the sensor that would need to be learned. It is also possible that other exploratory motions for the static TacWhisker would improve performance; however, active localization would be far more complex if the contacts were not discrete and independent, as provided by a dabbing or whisking motion. A potential issue is that the dynamic TacWhisker signal is dominated by the self-generated deflection of the whiskers. This can be partially compensated by subtracting a reference signal (here taken from the central contact); although this did not affect perception, it did improve the visual interpretation of the data.
This effect and compensation mechanism are known from animal investigations and have also been considered with the SCRATCHbot whiskered robot~\cite{Anderson2010}. Note that the TacWhisker has greater self-generated signal than both animal and SCRATCHbot whiskers, since the dynamic TacWhisker is directly affected by its actuation (and not just the inertia of its whiskers). Overall, the TacWhisker arrays give a new class of optical tactile whiskered robots, related to optical tactile sensors that have progressed in recent years~\cite{Ward-Cherrier2018}. They have the benefit of being relatively inexpensive, readily customizable and easy to fabricate using multi-material 3D-printing. We expect they can be applied to some of the application domains of tactile whiskers, such as proximity and flow sensing. Furthermore, the biomimetic basis for the TacWhiskers lends them to neurorobotic investigations of rodent tactile vibrissal sensing as an embodied model of animal perception. {\em Acknowledgements:} We thank Niels Burnus, Yilin Tao, and members of the Tactile Robotics group: Ben Ward-Cherrier, Nick Pestell, Kirsty Aquilina, Jasper James and John Lloyd. \bibliographystyle{unsrt}
\section{Introduction} A compact connected K\"ahler manifold $X$ is called a \emph{hyperk\"ahler manifold} if it is simply connected and $H^0(X,\Omega^2_X)$ is generated by a holomorphic symplectic form $\sigma$. To a Lagrangian fibration $\pi\colon X\to \mathbb P^n$ one can associate a group called the {\em Shafarevich--Tate group}. \begin{definition} The {\em Shafarevich--Tate group}\footnote{It is sometimes called {\em analytic Shafarevich--Tate group} in the literature in order to distinguish it from {\em arithmetic Shafarevich--Tate group}.} $\Sha$ of a Lagrangian fibration is the abelian group $H^1(\mathbb P^n,Aut^0_{X/\mathbb P^n})$ where $Aut^0_{X/\mathbb P^n}$ is the connected component of unity of the sheaf of vertical automorphisms of $X$ over $\mathbb P^n$. \end{definition} Choose an affine open cover $\mathbb P^n = \bigcup U_i$. Denote $U_i\cap U_j$ by $U_{ij}$. A class $s\in\Sha$ can be represented by a \v{C}ech cocycle with coefficients in $Aut^0_{X/\mathbb P^n}$. That is to say, for every pair of indices $i,j$ we are given an automorphism $s_{ij}$ of $\pi^{-1}(U_{ij})$ that commutes with $\pi$, and these automorphisms satisfy the cocycle condition $s_{ij}\circ s_{jk}=s_{ik}$ on triple intersections. Let us reglue the manifolds $\pi^{-1}(U_i)$ by the automorphisms $s_{ij}$. We obtain a complex manifold $X^s$ equipped with a holomorphic projection $\pi^s \colon X^s \to \mathbb P^n$. The fibers of $\pi^s$ are isomorphic to the fibers of $\pi \colon X \to \mathbb P^n$. This new manifold is called the {\em Shafarevich--Tate twist} of $X$ by $s\in \Sha$. Markman studied Shafarevich--Tate twists of Lagrangian fibrations on manifolds of $K3^{[n]}$-type in \cite{Mar}. Our work was notably influenced by his paper. Define the sheaf $\Gamma$ of finitely generated abelian groups by the exact sequence $$ 0\to\Gamma\to\pi_*T_{X/\mathbb P^n}\to Aut^0_{X/\mathbb P^n}\to 0 $$ This exact sequence induces a long exact sequence of cohomology groups, which can be proven to be right exact:
$$ H^1(\mathbb P^n,\Gamma) \to H^1(\pi_*T_{X/\mathbb P^n}) \to \Sha \to H^2(\mathbb P^n,\Gamma) \to 0 $$ The holomorphic symplectic form $\sigma$ on $X$ induces an isomorphism $\pi_*T_{X/\mathbb P^n}\cong \Omega^1_{\mathbb P^n}$ \cite[Lemma 2.3.1]{CRS}. Thus, $H^1(\pi_*T_{X/\mathbb P^n}) \cong H^{1,1}(\mathbb P^n) = \C$. Let $\Sha^0$ be the image of $H^1(\pi_*T_{X/\mathbb P^n})$ in $\Sha$. We conclude that $\Sha^0$ is isomorphic to a quotient of $\C$ by the finitely generated abelian group $\operatorname{im}(H^1(\mathbb P^n,\Gamma))$. In \cite{Ver15} the author associates to a Lagrangian fibration $\pi\colon X\to \mathbb P^n$ a family of deformations over $\C$ called the {\em degenerate twistor family}. The construction goes as follows. Let $\alpha$ be a closed $(1,1)$-form on $\mathbb P^n$. There exists a unique complex structure $I_\alpha$ on $X$ such that the form $\sigma + \pi^*\alpha$ is a holomorphic $2$-form on $(X,I_\alpha)$ \cite[Thm. 3.5]{Ver15}. The manifold $(X,I_\alpha)$ is the {\em degenerate twistor deformation} of $X$ with respect to the form $\alpha$. Note that the construction of degenerate twistor deformations has a differential-geometric flavour, while the construction of Shafarevich--Tate twists is complex-analytic in nature. Nevertheless, these two constructions turn out to yield the same result. \begin{theorem}[Theorem \ref{deg.tw=ShaT}]\label{ShT is deg tw intro} Pick a class $s\in H^1(\pi_*T_{X/\mathbb P^n})$. Consider the twist $X^s$ of $\pi\colon X\to \mathbb P^n$ by the image of $s$ in $\Sha$. Let $\alpha$ be a closed $(1,1)$-form on $\mathbb P^n$ representing the same class in $H^{1,1}(\mathbb P^n)\cong H^1(\pi_*T_{X/\mathbb P^n})$ as $s$. Then the complex manifolds $X^s$ and $(X,I_\alpha)$ are isomorphic as fibrations over $\mathbb P^n$. \end{theorem} Are Shafarevich--Tate twists of a hyperk\"ahler manifold hyperk\"ahler themselves?
We prove that they do indeed admit a holomorphic symplectic form (Subsection \ref{symplectic}), but the question of K\"ahlerness turns out to be trickier. We need to introduce a notion of {\em M-special} hyperk\"ahler manifolds, which is motivated by \cite[Def. 1]{Mar}. A hyperk\"ahler manifold $X$ is called {\em M-special} if the subspace $H^{2,0}(X)+H^{0,2}(X)\subset H^2(X,\C)$ contains a rational class. A very general hyperk\"ahler manifold is not M-special. \begin{theorem}[Theorems \ref{Kahlerness}, \ref{projectivity=torsion}] \label{kahlerness theorem intro} Let $\pi\colon X\to \mathbb P^n$ be a Lagrangian fibration on a not M-special projective hyperk\"ahler manifold $X$. Then every Shafarevich--Tate twist $X^s$ of $X$ by an element $s\in \Sha^0$ is a hyperk\"ahler manifold. Moreover, the set of $s\in\Sha^0$ such that the twist $X^s$ is a projective manifold forms a non-empty torsor over the group of torsion points of $\Sha^0$. \end{theorem} In order to prove Theorem \ref{kahlerness theorem intro}, we first study the sheaf $\Gamma$. It turns out that $\Gamma\o\mathbb Q$ is isomorphic to $R^1\pi_*\mathbb Q$ (Proposition \ref{Gamma is almost R^1pi_*Z}). The Leray spectral sequence helps us to compute some cohomology groups of $R^1\pi_*\mathbb Q$. We derive from that a description of the group $\Sha^0$ solely in terms of the Hodge structure on $H^2(X,\mathbb Z)$ (Corollary \ref{description of sha zero}). As we have already seen, the group $\Sha^0$ is isomorphic to $\C/\Lambda$ for a finitely generated abelian group $\Lambda$. It turns out that $X$ is not M-special if and only if $\Lambda$ is a dense subgroup of $\C$. We use the fact that the set of twists of $X$ that are K\"ahler manifolds is open and $\Lambda$-invariant to conclude that all Shafarevich--Tate twists of $X$ are K\"ahler. The statement about projectivity follows by carefully applying Huybrechts' criterion \cite[Thm. 3.11]{Huy01}.
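To make the density dichotomy used in this argument concrete, here is a toy illustration of our own (it is not taken from the paper and involves no geometry): a rank-two subgroup of $\C$ can be discrete, whereas adjoining irrational real multiples of the generators forces density.

```latex
% Toy illustration (ours): discrete versus dense finitely generated
% subgroups of \C. The lattice \Lambda = \mathbb Z \oplus \mathbb Z i is
% discrete, and \C/\Lambda is an elliptic curve. By contrast, since
% \mathbb Z + \mathbb Z\sqrt 2 is dense in \mathbb R,
$$
\Lambda \;=\; \bigl(\mathbb Z + \mathbb Z\sqrt 2\bigr)
      + i\,\bigl(\mathbb Z + \mathbb Z\sqrt 2\bigr)
\qquad\text{satisfies}\qquad
\overline{\Lambda} \;=\; \C,
$$
% so the only nonempty open \Lambda-invariant subset of \C is \C itself;
% this is the mechanism by which openness of the locus of Kaehler twists
% yields Kaehlerness of all twists.
```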
In Section \ref{obstruction} we study whether there exists a Shafarevich--Tate twist $X^s$ of $\pi\colon X\to \mathbb P^n$ that admits a holomorphic section. We show that obstructions to existence of such a twist lie in $H^2(\mathbb P^n,\Gamma) = \Sha/\Sha^0$ (Corollary \ref{main property of a}). Assume that the fibers of $\pi$ are reduced and irreducible. We show that the group $H^2(\mathbb P^n,\Gamma)\o\mathbb Q$ can be embedded into $H^3(X,\mathbb Q)$ (Corollary \ref{restriction is surjective}). This implies the following theorem. \begin{theorem}[Theorem \ref{when a vanishes}]\label{sections intro} Let $\pi \colon X \to \mathbb P^n$ be a Lagrangian fibration on a compact hyperk\"ahler manifold over a smooth base. Assume that the following holds: \begin{itemize} \item the fibers of $\pi$ are reduced and irreducible; \item $H^3(X, \mathbb Q) =0$; \item $H^2(\mathbb P^n, \Gamma)$ is torsion-free. \end{itemize} Then there exists a unique $s\in \Sha^0$ such that $\pi^s \colon X^s \to \mathbb P^n$ admits a holomorphic section. In particular, the fibration $\pi$ can be deformed to a fibration with a holomorphic section through the Shafarevich--Tate family. \end{theorem} At the core of the proof of Theorem \ref{sections intro} is a Hard Lefschetz-type result for fibers of Lagrangian fibrations. We believe that it may be of independent interest and state it here. \begin{theorem}[Corollary \ref{Lefschetz}]\label{Lefschetz intro} Let $\pi\colon X\to \mathbb P^n$ be a Lagrangian fibration on a projective hyperk\"ahler manifold $X$. Then there exists a sheaf $\mathcal N$ on $\mathbb P^n$ such that the sheaf $R^{2n-1}\pi_*\mathbb Q_X$ decomposes into the direct sum $$ R^{2n-1}\pi_*\mathbb Q_X \simeq R^1\pi_*\mathbb Q_X \oplus \mathcal N. $$ \end{theorem} Theorem \ref{Lefschetz intro} allows us to prove that the differential $d_2\colon H^0(R^2\pi_*\mathbb Q)\to H^2(R^1\pi_*\mathbb Q)$ in the Leray spectral sequence for $\mathbb Q_X$ vanishes. 
The argument is inspired by the one used by Deligne to prove that the Leray spectral sequence of a smooth projective submersion degenerates on the second page \cite{Deligne}. \hfill The paper is organized as follows. The notions of a Shafarevich--Tate group and a degenerate twistor deformation, as well as basic facts about Lagrangian fibrations, are recalled in Section \ref{prelim}. The main result of Section \ref{ShT-properties} is a refined version of Theorem \ref{ShT is deg tw intro}. We also prove that Shafarevich--Tate twists are holomorphic symplectic in Subsection \ref{symplectic} and study the period map for the family of Shafarevich--Tate twists in Subsection \ref{period map}. Section \ref{connected component} is concerned with the description of the group $\Sha^0$ in terms of topological invariants of $X$. Its results are used in Section \ref{applications} to prove Theorem \ref{kahlerness theorem intro}. In Section \ref{obstruction} we discuss obstructions to the existence of sections and prove Theorem \ref{sections intro}. Theorem \ref{Lefschetz intro} is proved there as an intermediate result. \hfill \textbf{Acknowledgments.} We are deeply grateful to Misha Verbitsky, Dmitry Kaledin, Rodion D\'eev, and Andrey Soldatenkov for stimulating conversations and encouragement. A.A. thanks Giulia Sacc\`a for her interest and for pointing out to us the papers \cite{CRS} and \cite{Mar96}. A.A. is also extremely grateful to Raymond Cheng for reading a draft of this paper. A.A. very much appreciates his reasonable and helpful suggestions. V.R. thanks Olivier Benoist for his insights about the content of Section \ref{obstruction}. The work on this project started during our stay at the summer school ``Algebra \& Geometry--2021'' in Yaroslavl, Russia, organized by the Laboratory of Algebraic Geometry of Higher School of Economics and Yaroslavl State Pedagogical University. We are grateful to the organizers for their hospitality and the inspiring atmosphere. A.A.
was supported in part by the Simons Foundation. \section{Preliminaries}\label{prelim} \subsection{Lagrangian fibrations}\label{Geometry of Lagrangian fibrations} Let $X$ be a hyperk\"ahler manifold. Fix a holomorphic symplectic form $\sigma$ on $X$. Consider a map $\pi\colon X\to B$ to a normal base $B$ of dimension half that of $X$. The map $\pi \colon X \to B$ is called \textit{a Lagrangian fibration} if it is surjective with connected fibers and the restriction of $\sigma$ to every smooth fiber vanishes. We always assume that the base $B$ of a Lagrangian fibration $\pi$ is isomorphic to $\mathbb P^n$. This is the case in all known examples (see e.g. \cite[Subsection 1.2]{HM}). We refer the reader to \cite{Huy01} for basics on hyperk\"ahler manifolds and Lagrangian fibrations. Throughout this paper, we write $X$ for a compact hyperk\"ahler manifold, $\sigma$ for a holomorphic symplectic form on $X$ and $\pi\colon X\to B$ for a Lagrangian fibration. For $b\in B$ we denote the fiber of $\pi$ over $b$ by $F_b$. The subset of the regular values of $\pi$ is denoted by $B^\circ$. This is a Zariski open subset of $B$ and $D:= B \,\setminus\, B^{\circ}$ is known to be a divisor \cite[Prop. 3.1]{HO9}. Smooth fibers of a Lagrangian fibration $\pi\colon X\to B$ are complex tori \cite{Mar86}. One can show that they are abelian varieties even if $X$ is not projective \cite{C}. Let $X$ be a hyperk\"ahler manifold. Then $H^2(X, \mathbb Z)$ carries a non-degenerate symmetric bilinear form $q \colon H^2(X, \mathbb Z) \o H^2(X, \mathbb Z) \to \mathbb Z$ called the {\em Beauville--Bogomolov--Fujiki form} \cite{Beauv83,F85}. It satisfies the following properties: \begin{itemize} \item[(1)] The Hodge decomposition \[ H^2(X, \C) = H^{2,0}(X) \oplus H^{1,1}(X) \oplus H^{0,2}(X) \] is orthogonal with respect to $q$.
The restriction of $q$ to $H^{2,0}(X) \oplus H^{0,2}(X)$ is a positive Hermitian form and the restriction of $q$ to $H^{1,1}(X)$ has the signature $(1, h^{1,1}(X)-1)$; \item[(2)] ({\em Fujiki formula}) There exists a non-zero constant $c_X$ such that \begin{equation*} \label{Fujiki} q(\omega, \omega)^n = c_X \int_X \omega^{2n} \end{equation*} for every $\omega \in H^2(X, \C)$, where $2n = \dim_{\C} X$. \end{itemize} Let $\eta = \pi^*[H]\in H^2(X,\mathbb Z)$ be the pullback of the class of a hyperplane. It follows from the Fujiki formula that $q(\eta,\eta) = 0$: indeed, $\eta^{2n} = \pi^*([H]^{2n})$ vanishes because $[H]^{2n}\in H^{4n}(\mathbb P^n,\mathbb Z) = 0$. \begin{prop}\label{restriction} (\cite[Section 2]{Og}) Let $\pi \colon X \to B$ be a Lagrangian fibration, $\eta\in H^2(X,\mathbb Z)$ as above. Then the restriction map $H^2(X,\mathbb Q) \to H^2(F_b, \mathbb Q)$ has rank $1$ for every smooth fiber $F_b=\pi^{-1}(b)\subset X$. Moreover, the kernel of this map is precisely $$ \eta^{\perp}:= \{ v \in H^2(X, \mathbb Q) \mid q(v, \eta)=0\}. $$ \end{prop} \begin{cor} \label{relative polarisation} With the assumptions of Proposition \ref{restriction}, there is a K\"ahler class $[\omega]$ on $X$ that restricts to an integral class on every smooth fiber $F_b$. \end{cor} \begin{proof} By Proposition \ref{restriction} the restriction map $H^2(X,\mathbb Q)\to H^2(F_b,\mathbb Q)$, for any $b\in B^\circ$, factors through the one-dimensional quotient $H^2(X,\mathbb Q)/\eta^\perp$. This vector space is generated by an integral class. Every K\"ahler class $[\omega]$ on $X$ restricts to a non-zero class on a smooth fiber. Therefore, after rescaling by an appropriate positive rational number, the class $[\omega]$ restricts to an integral class. \end{proof} \begin{lemma}\label{tangent cotangent isomorphism} (\cite[Lemma 2.3.1]{CRS}) Let $\pi\colon X\to B$ be a Lagrangian fibration. Consider the isomorphism $\sigma\colon \Omega^1_X\xrightarrow{\sim} T_X$ induced by the holomorphic symplectic form $\sigma$. Then the map $\sigma$ sends $\pi^*\Omega^1_B\subset \Omega^1_X$ isomorphically to $T_{X/B}:=\ker({d\pi\colon T_X\to \pi^*T_B})$.
\end{lemma} It follows that the sheaves $\Omega^1_B$ and $\pi_*T_{X/B}$ are isomorphic. In particular, the sheaf $\pi_*T_{X/B}$ is locally free. Assume that $X$ is projective. Let $\omega$ be a closed $(1,1)$-form on $X$ that represents a rational class $[\omega] \in H^{1,1}_\mathbb Q(X)$. The contraction of $\omega$ with holomorphic vector fields defines a map $$ \tilde{\omega}\colon \pi_*T_{X/B}\to R^1\pi_*\O_X. $$ We get the following maps by taking the exterior powers of $\tilde{\omega}$: $$ \widetilde{\omega}^i\colon \Omega^i_B\cong\Lambda^i(\pi_*T_{X/B})\to \Lambda^iR^1\pi_*\O_X \to R^i\pi_*\O_X, \quad i=1, \ldots, n. $$ \begin{prop}\label{Matsushita isomorphism} (\cite[Thm. 1.2]{Mats05}) Let $\pi \colon X \to B$ be a Lagrangian fibration on a projective hyperk\"ahler manifold $X$ over a smooth base $B$. Then the map \[ \widetilde{\omega}^i\colon \Omega^i_B\to R^i\pi_*\O_X \] is an isomorphism for every $i=1,\dots,n$. In particular, the sheaves $R^i\pi_*\O_X$ are locally free. \end{prop} \begin{rmk} \label{not projective} The assumption of projectivity of $X$ can be dropped, as was shown in \cite{SV}. Let $[\omega]$ be a K\"ahler class on $X$ as in Corollary \ref{relative polarisation}. Then the restriction of the isomorphism $\pi_*T_{X/B}\to R^1\pi_*\mathcal O_X$ to $B^\circ$ is given by the contraction of $\omega$ with holomorphic vector fields \cite[Cor. 3.7]{SV}. \end{rmk} \begin{cor}\label{isomorphism of bases} The following cohomology groups are isomorphic: $$H^1(B, \pi_*T_{X/B}) \simeq H^1(B, \Omega^1_B) \simeq H^1(B, R^1\pi_*\O_X) \simeq \C.$$ \end{cor} \begin{proof} The sheaves $\pi_*T_{X/B}$, $\Omega^1_B$, and $R^1\pi_*\mathcal O_X$ are isomorphic by Lemma \ref{tangent cotangent isomorphism} and Proposition \ref{Matsushita isomorphism}. The last isomorphism holds since $h^{1,1}(\mathbb P^n) =1$. \end{proof} \subsection{Shafarevich--Tate groups}\label{twists} Let $\pi \colon X \to B$ be a surjective holomorphic map of complex manifolds.
Consider the sheaf $Aut_{X/B}$ of \textit{vertical automorphisms} of $X$ over $B$. This is a sheaf of groups on $B$ defined as $$ Aut_{X/B}(U) := \{\phi\colon \pi^{-1}(U)\:\tilde{\to}\: \pi^{-1}(U)\:|\:\pi\circ\phi = \pi\}. $$ Let us define the subsheaf $Aut^0_{X/B}\subset Aut_{X/B}$ as follows. It consists of those vertical automorphisms $\phi$ whose restriction to every fiber $F_b$ lies in $Aut^0(F_b)$. Pick a class $s \in H^1(B, Aut^0_{X/B})$. We can represent it by a \v{C}ech cocycle $s_{ij}$ for an open cover $B = \bigcup U_i$. We are given an automorphism $s_{ij} \colon \pi^{-1}(U_{ij}) \to \pi^{-1}(U_{ij})$ for each pairwise intersection $U_{ij} := U_i \cap U_j$. Let us glue a new complex manifold as $$X^{s} := \bigsqcup \pi^{-1}(U_i) \Bigg / x\in\pi^{-1}(U_i) \sim s_{ij}(x)\in\pi^{-1}(U_j).$$ The manifold $X^s$ is equipped with a natural fibration $\pi^s\colon X^s\to B$. The isomorphism class of the fibration $\pi^s\colon X^s\to B$ depends only on the class of $s$ in $H^1(B,Aut^0_{X/B})$. The manifold $X^s$ is called the {\it twist of $X$ by the class $s \in H^1(B, Aut^0_{X/B})$}. Suppose from now on that a general fiber of $\pi$ is a complex torus. Let $B^{\circ} \subset B$ be the set of points $b \in B$ for which the fiber $\pi^{-1}(b)$ is a complex torus. \begin{lemma} \label{commutativity} Let $\pi\colon X\to B$ be as above. Then the sheaf $Aut^0_{X/B}$ is a sheaf of commutative groups. \end{lemma} \begin{proof} The sheaf $Aut^0_{X/B}$ is a sheaf of commutative groups at least over $B^\circ\subset B$. Indeed, for any point $b\in B^\circ$ the group $Aut^0(F_b)$ is a complex torus. In particular, $Aut^0(F_b)$ is commutative. Let $g,h\in Aut^0_{X/B}(U)$ be two local sections of $Aut^0_{X/B}$. Then $[g,h]|_{B^\circ\cap U}$ is trivial. Therefore, $[g,h]$ is itself trivial, because $B^{\circ} \cap U$ is dense in $U$. \end{proof} \begin{df} (\cite[Ch.
I, Section 1.5.1]{FM}) \label{definition of Sha} The group $H^1(B,Aut^0_{X/B})$ is called the {\it Shafarevich--Tate group} of the fibration $\pi\colon X\to B$ and is denoted by $\Sha$. \end{df} See \cite{DG,Ko,Mar} for applications of Shafarevich--Tate groups in the study of elliptic and Lagrangian fibrations. \subsection{Degenerate twistor deformations}\label{parabolae explained} The main references for this subsection are \cite{Ver15,BDV,SV}. Let $X$ be a hyperk\"ahler manifold. Fix a holomorphic symplectic form $\sigma \in H^{0}(X, \Omega^2_X)$. The key observation here is that the complex structure of $X$ is determined by the form $\sigma$, viewed as a smooth $\C$-valued $2$-form on $X$. Namely, the complex structure is uniquely determined by the subbundle of $(0,1)$-vectors $T^{0,1}X \subset TX$. This subbundle can be recovered as $\ker \sigma|_{TX\o\C}$. We can characterize all complex-valued smooth $2$-forms on $X$ that can be realized as holomorphic symplectic forms for some complex structures. \begin{df}(\cite{BDV}) Let $M$ be a smooth manifold of real dimension $4n$. Consider a smooth complex-valued $2$-form $\sigma \in \Gamma(\Lambda^2T^*M \o \C)$. It is called a \textit{c-symplectic structure} if the following hold: \begin{itemize} \item $d\sigma = 0$; \item $\sigma^{n+1}=0$; \item $\sigma^n \wedge \overline{\sigma}^n \neq 0$ at each point of $M$. \end{itemize} \end{df} \begin{prop} (\cite[Prop. 3.1]{Ver15}) Let $\sigma$ be a c-symplectic structure on $M$. Define $$T^{0,1}M := \ker \sigma \subset TM \o \C $$ and $T^{1,0}M:= \overline{T^{0,1}M}$. Then the following hold: \begin{itemize} \item $TM \o \C = T^{0,1}M \oplus T^{1,0}M$; \item $[T^{0,1}M, T^{0,1}M] \subseteq T^{0,1}M$. \end{itemize} Equivalently, the operator $I_{\sigma} \colon TM \to TM$ that acts as $\i$ on $T^{1,0}M$ and as $-\i$ on $T^{0,1}M$ is an integrable complex structure on $M$. The form $\sigma$ is holomorphic symplectic with respect to $I_{\sigma}$. \end{prop} \begin{lemma}(\cite[Thm.
1.10]{Ver15}) Let $M$ be a complex manifold (not necessarily compact) and $\sigma$ a holomorphic symplectic $2$-form on $M$. Consider a proper Lagrangian fibration $\pi \colon M \to S$ over a complex manifold $S$. Fix a closed $2$-form $\alpha$ on $S$ of Hodge type $(2,0)+(1,1)$. Then the form $\sigma_\alpha:= \sigma +\pi^*\alpha$ is a c-symplectic structure. Moreover, the projection $\pi \colon M \to S$ is holomorphic with respect to the complex structure $I_{\alpha}:= I_{\sigma_{\alpha}}$. It is a Lagrangian fibration with respect to the holomorphic symplectic form $\sigma_{\alpha}$. \end{lemma} Suppose that the base $S$ of a Lagrangian fibration satisfies the condition $H^1(S,\mathcal O_S) = 0$. The next two statements will show that the isomorphism class of the complex manifold $(M, I_{\alpha})$ depends only on the cohomology class of $\alpha$. \begin{lemma}\label{holomorphic moser}(\cite[Thm. 2.7]{SV}) Let $(M, \sigma)$ be a holomorphic symplectic manifold and $\pi \colon M \to S$ a proper Lagrangian fibration. Assume that $H^1(S,\mathcal O_S) = 0$. Let $\alpha$ be an exact $2$-form of type $(2,0)+(1,1)$ on $S$. Consider a family of c-symplectic structures \[ \sigma_t := \sigma+t\pi^*\alpha. \] Then there exists a flow of diffeomorphisms $\phi_t$ on $M$ preserving the fibers of $\pi$ such that for each $t$ \[ \phi_t \colon (M, I_0) \to (M, I_t) \] is a biholomorphism. \end{lemma} \begin{cor} Let $\pi\colon M\to S$ be as in Lemma \ref{holomorphic moser}. Consider two cohomologous $2$-forms $\alpha$ and $\alpha'$ of Hodge type $(2,0) + (1,1)$ on $S$. Let $I$ and $I'$ be the complex structures on $M$ induced by the c-symplectic forms $\sigma + \pi^*\alpha$ and $\sigma + \pi^*\alpha'$ respectively. Then there exists a biholomorphism $\phi\colon (M,I) \to (M,I')$ preserving the fibers of $\pi$.
\end{cor} \begin{proof} Follows by applying Lemma \ref{holomorphic moser} to the holomorphic symplectic manifold $(M,I)$ equipped with the holomorphic symplectic form $\sigma_{\alpha}:= \sigma + \pi^*\alpha$. Indeed, in this case the form $\sigma + \pi^*\alpha'$ can be written as $\sigma_{\alpha} + \pi^*d\beta$ for some $1$-form $\beta$. \end{proof} \begin{df}(\cite[Def. 3.17]{Ver15})\label{definition of dg.tw} Let $\pi \colon X \to B$ be a Lagrangian fibration and $\alpha$ a closed $2$-form on $B$ of type $(2,0) + (1,1)$. Consider the family of c-symplectic structures $\sigma_t:= \sigma+t\pi^*\alpha$, $t\in\C$. Let $I_t$ be the associated family of complex structures on $X$. The \textit{degenerate twistor family} $\mathfrak{X}_{deg.tw}$ of $(X, \pi)$ is the following manifold. Its underlying smooth manifold is $X \times \C$ and the almost complex structure is defined as \[ (I_{tw})_{(x,t)}:= I_t \oplus I_{\C}, \] where $x\in X$, $t\in \C$, and $I_{\C}$ is the standard complex structure on $\C$. \end{df} One can show that the almost complex structure $I_{tw}$ from Definition \ref{definition of dg.tw} is integrable \cite[Thm. 3.18]{Ver15}. In other words, $\mathfrak{X}_{deg.tw}$ is a complex manifold. The projection $\mathfrak{X}_{deg.tw} \to \C$ is holomorphic and $\mathfrak{X}_{deg.tw}$ is the total space of a holomorphic family of complex holomorphically symplectic manifolds. It is endowed with a holomorphic fibration $\Pi_{deg.tw} \colon \mathfrak{X}_{deg.tw} \to B \times \C$ that restricts to a fiber $X_t \subset \mathfrak{X}_{deg.tw}$ as $\pi$. \section{Shafarevich--Tate group of a Lagrangian fibration}\label{ShT-properties} \subsection{Structure of Shafarevich--Tate groups: first steps}\label{ShT-def} Let $X$ be a compact hyperk\"ahler manifold and $\pi \colon X \to B$ a Lagrangian fibration, $B\simeq \mathbb P^n$. Recall that the Shafarevich--Tate group $\Sha$ of $\pi$ is defined to be $H^1(B, Aut^0_{X/B})$ (Definition \ref{definition of Sha}). 
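For intuition it may help to compare with the classical elliptic case; the identification below is a sketch on our part, with $\mathcal J$ an auxiliary notation not used elsewhere in the paper (see \cite{DG,Ko} for precise statements).

```latex
% Sketch of the classical elliptic case. For an elliptic fibration
% J -> B with a holomorphic section, fiberwise translations identify
Aut^0_{J/B} \simeq \mathcal J,
% where \mathcal J is the sheaf of local holomorphic sections of
% J -> B, so that
\Sha = H^1(B, Aut^0_{J/B}) = H^1(B, \mathcal J)
% is the classical analytic Tate--Shafarevich group, which (roughly)
% classifies torsors under J -> B, i.e. elliptic fibrations with
% Jacobian J -> B.
```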
Consider the exponential map $\pi_*T_{X/B}\to Aut^0_{X/B}$. Define the sheaf $\Gamma$ by the short exact sequence $$ 0 \to \Gamma \to \pi_*T_{X/B} \to Aut^0_{X/B}\to 0. $$ It induces the following long exact sequence: \begin{equation} \label{exact seq of shas} H^1(B,\Gamma) \to H^1(B,\pi_*T_{X/B}) \to \Sha \to H^2(B,\Gamma) \to 0. \end{equation} Indeed, the sheaf $\pi_*T_{X/B}$ is isomorphic to $\Omega^1_B$ by Lemma \ref{tangent cotangent isomorphism}. Hence, $$H^2(B,\pi_*T_{X/B}) = H^{1,2}(\mathbb P^n) = 0.$$ Write \[ \widetilde{\Sha}= \widetilde{\Sha}(X, \pi):=H^1(B, \pi_*T_{X/B}). \] Of course, the group $\widetilde{\Sha}(X, \pi)$ is (non-canonically) isomorphic to $\mathbb C$ (Corollary \ref{isomorphism of bases}). However, we will use this notation when we want to emphasize its relation to $\Sha$. \begin{lemma}\label{about Gamma} The sheaf $\Gamma:=\ker (\pi_*T_{X/B}\to Aut^0_{X/B})$ is a sheaf of finitely generated torsion-free abelian groups. \end{lemma} \begin{proof} The restriction of the sheaf $\Gamma$ to $B^\circ$ is a local system of torsion-free abelian groups of rank $2n = \dim_{\C} X$. For every open $U\subset B$ the restriction map \begin{equation} \label{frf} H^0(U,\Gamma)\to H^0(U\cap B^\circ,\Gamma) \end{equation} is injective. Indeed, an element of $H^0(U, \Gamma)$ is a vertical holomorphic vector field on $\pi^{-1}(U)$. If a vector field vanishes on an open subset of $\pi^{-1}(U)$, then it vanishes on $\pi^{-1}(U)$. The group on the right-hand side of (\ref{frf}) is torsion-free of finite rank, hence so is the group on the left-hand side. \end{proof} Lemma \ref{about Gamma} implies that the cohomology groups of $\Gamma$ are finitely generated abelian groups. Let $\Sha^0$ denote the image of the map $\widetilde{\Sha} = H^1(B,\pi_*T_{X/B})\to \Sha$ from the long exact sequence (\ref{exact seq of shas}). The group $\Sha$ fits into the short exact sequence \begin{equation} \label{sha zero to sha} 0 \to \Sha^0 \to \Sha \to H^2(B,\Gamma) \to 0.
\end{equation} Being a quotient of $\widetilde{\Sha} \cong \C$ by a subgroup, the group $\Sha^0$ inherits a structure of a connected topological group. Endow $\Sha$ with the translation-invariant topology such that $\Sha^0$ is the connected component of unity of $\Sha$. View $H^2(B,\Gamma)$ as a discrete topological group. Both maps in the exact sequence (\ref{sha zero to sha}) are continuous maps of topological groups. The following provides useful intuition for Shafarevich--Tate groups. \begin{rmk} Let $B$ be a complex manifold. Let $Y := B \times \C^{\times}$. Let $Aut^{0}_{Y/B} \subset Aut_{Y/B}$ be the subsheaf consisting of the automorphisms that lie in the connected component of unity. Then $Aut^0_{Y/B} = \O^{\times}_B$ and $H^1(B, Aut^0_{Y/B}) = \operatorname{Pic}(B)$. Total spaces of twists of $Y$ by classes in $H^1(B, Aut^{0}_{Y/B})$ are holomorphic principal $\C^{\times}$-bundles over $B$. They are in one-to-one correspondence with holomorphic line bundles. Thus, the Shafarevich--Tate group $\Sha$ serves as an analog of the Picard group, its subgroup $\Sha^0$ is an analog of $\operatorname{Pic}^0(B)$, and $\Sha/\Sha^0$ may be thought of as the N\'eron--Severi group $NS(B)$. \end{rmk} \subsection{Shafarevich--Tate family}\label{ShT-family} Consider an element $s\in \widetilde{\Sha}$. The twist $X^s$ of $X$ by $s$ is defined to be the manifold $X^{[s]}$ where $[s]$ is the image of $s$ under the map $\widetilde{\Sha}\to \Sha$ (Subsection \ref{twists}). The twist $X^{s}$ is a complex manifold endowed with a holomorphic map $\pi^{s}:= \pi^{[s]}$ to $B$. \begin{prop} \label{prop_ShT_family} There exists a holomorphic family of complex manifolds $$\mathfrak{X}_{\Sha \mathrm{T}} \to \widetilde{\Sha} \cong \C$$ such that the fiber over $s \in \widetilde{\Sha}$ is $X^s$. This family is endowed with a holomorphic fibration $$ \Pi_{\Sha \mathrm{T}} \colon \mathfrak{X}_{\Sha \mathrm{T}} \to B \times \widetilde{\Sha}.
$$ and the map $\Pi_{\Sha \mathrm{T}}$ restricts to $\pi^s$ on each fiber $X^s$. \end{prop} \begin{proof} Set $\widetilde{B}:=B \times \widetilde{\Sha}$ and $\widetilde{X} := X \times \widetilde{\Sha}$. Let $\widetilde{\pi} := \pi \times id \colon \widetilde{X} \to \widetilde{B}$ be the projection. For any affine covering $B = \bigcup_i U_i$ define $\widetilde{U_i}:= U_i \times \widetilde{\Sha}$. Of course, $\widetilde B = \bigcup_i \widetilde{U_i}$ is an affine covering of $\widetilde{B}$. Let $v \in \widetilde{\Sha}$ be a non-zero class. Represent it by a \v{C}ech cocycle $\{v_{ij}\}$ for the covering $B = \bigcup_i U_i$ (refining it if necessary). Let $\widetilde{v}_{ij}$ be the pullback of $v_{ij}$ to $\widetilde{U}_{ij}$. Define a $1$-cocycle on $\widetilde B$ with coefficients in $\widetilde{\pi}_*T_{\widetilde X/\widetilde B}$ as \[ w_{ij} := t\widetilde{v}_{ij} \in \widetilde\pi_*T_{\widetilde{X}/\widetilde B}(\widetilde{U_{ij}}) \] where $t$ is the coordinate on $\widetilde{\Sha} \cong \C$. The twist of $\widetilde{\pi}\colon\widetilde{X} \to \widetilde{B}$ along the cocycle $\{\phi_{ij}\}:= \{\exp(w_{ij})\}$ is the desired family $\mathfrak{X}_{\Sha \mathrm{T}}$. In other words, the manifold $\mathfrak X_{\Sha\mathrm T}$ is obtained as $$ \mathfrak{X}_{\Sha\mathrm{T}} = \bigsqcup \left(\pi^{-1}(U_i)\times\C\right) \Bigg/ \left(\pi^{-1}(U_i)\times\C\right) \ni x \sim \phi_{ij}(x)\in \left(\pi^{-1}(U_j)\times\C\right). $$ \end{proof} \begin{df} The family $\mathfrak{X}_{\Sha \mathrm{T}} \to \widetilde{\Sha}$ constructed in Proposition \ref{prop_ShT_family} will be referred to as the \textit{Shafarevich--Tate family}. \end{df} \subsection{Shafarevich--Tate twists are symplectic}\label{symplectic} A twist of a Lagrangian fibration by an element of $\Sha$ is a priori only a complex manifold. We will see in this Subsection that it is a holomorphic symplectic manifold.\footnote{A twist of a Lagrangian fibration might not be K\"ahler.
That is why we use the term holomorphic symplectic instead of hyperk\"ahler.} \begin{lemma} \label{autos and symplectic form} Let $(M,\sigma)$ be a (not necessarily compact) holomorphic symplectic manifold and $\pi\colon M\to S$ a proper Lagrangian fibration on $M$ over a smooth base $S$. Consider an automorphism $\phi\in H^0(S, Aut^0_{M/S})$. Then $$ \phi^*\sigma - \sigma = \pi^*\alpha $$ where $\alpha$ is a closed holomorphic $2$-form on $S$. \end{lemma} \begin{proof} The statement is local on $S$ so we can always shrink $S$ if necessary. For $S$ small enough, one can realize $\phi$ as $\exp(v)$ for a vertical holomorphic vector field $v$. There exists a holomorphic $1$-form $\beta$ on $S$ such that $\iota_v\sigma = \pi^*\beta$ (Lemma \ref{tangent cotangent isomorphism}). Let $\phi_t$ denote the flow of $v$. Using Cartan's formula $\mathsf L_v = d\iota_v + \iota_v d$ and the closedness of $\sigma$, we compute $$ \phi^*\sigma - \sigma = \int\limits_0^1\frac{d(\phi_t^*\sigma)}{dt}dt = \int\limits_0^1 \phi_t^*\mathsf L_v\sigma \,dt = d\int\limits_0^1\phi_t^*(\iota_v\sigma)\, dt = d\int\limits_0^1 \phi_t^*\pi^*\beta \,dt = \pi^*d\beta. $$ The last identity holds because $\pi\circ \phi_t = \pi$. Thus $\phi^*\sigma - \sigma = \pi^*\alpha$ where $\alpha = d\beta$. \end{proof} \begin{thrm} \label{Sha from symplectic automorphisms} Let $\pi\colon X\to B$ be a Lagrangian fibration on a compact holomorphic symplectic manifold $X$. Denote by $Aut^{0,\sigma}_{X/B}$ the subsheaf of $Aut^0_{X/B}$ consisting of $\sigma$-symplectic automorphisms. Then the inclusion $Aut^{0,\sigma}_{X/B} \to Aut^0_{X/B}$ induces an isomorphism of cohomology groups $H^i(B, Aut^{0,\sigma}_{X/B}) \to H^i(B, Aut^0_{X/B})$ for $i=0,1$. \end{thrm} \begin{proof} We will only prove that the map $H^1(B,Aut^{0,\sigma}_{X/B}) \to H^1(B, Aut^0_{X/B})$ is surjective. The proof of injectivity of this map, as well as the proof of the isomorphism $H^0(B, Aut^{0,\sigma}_{X/B})\simeq H^0(B,Aut^0_{X/B})$ follow the same pattern and are left to the reader. Consider a class $\phi \in H^1(B,Aut^0_{X/B})$.
It can be represented by a \v{C}ech $1$-cocycle $\phi_{ij}\in Aut^0_{X/B}(U_{ij})$ for an open covering $B = \bigcup U_i$; refining the covering, we may assume that each $U_i$ is biholomorphic to a polydisc. Lemma \ref{autos and symplectic form} implies that $\phi_{ij}^*\sigma - \sigma = \pi^*\alpha_{ij}$ for a closed holomorphic $2$-form $\alpha_{ij}$ on $U_{ij}$. The collection of forms $\{\alpha_{ij}\}$ defines a \v{C}ech $1$-cocycle on $B$ with coefficients in $\Omega^2_B$. As $H^1(B,\Omega^2_B)$ vanishes, the cocycle $\{\alpha_{ij}\}$ is a coboundary. Consequently, there exist holomorphic $2$-forms $\beta_i$ on $U_i$ such that $\alpha_{ij} = \beta_j|_{U_{ij}} - \beta_i|_{U_{ij}}$. The form $\alpha_{ij}$ is closed for every $i, j$, hence $d\beta_i|_{U_{ij}} = d\beta_j|_{U_{ij}}$. That means that the forms $d\beta_i$ glue to a globally defined holomorphic $3$-form. There are no non-trivial holomorphic $3$-forms on $B = \mathbb P^n$, hence $d\beta_i = 0$ for every $i$. Since each $U_i$ is a polydisc, the closed forms $\beta_i$ are exact by the holomorphic Poincar\'e lemma: we can find a holomorphic $1$-form $\gamma_i$ on $U_i$ such that $\beta_i = d\gamma_i$. Let $v_i$ be the vertical vector field on $\pi^{-1}(U_i)$ such that $\iota_{v_i}\sigma = \pi^*\gamma_i$ and let $\psi_i$ be the time-one flow of $v_i$. Define $\widetilde{\phi}_{ij}$ to be the automorphism $$ \widetilde{\phi}_{ij}:= \phi_{ij}\circ\psi_i|_{\pi^{-1}(U_{ij})}\circ\psi_j^{-1}|_{\pi^{-1}(U_{ij})}. $$ A direct computation shows that $\widetilde{\phi}_{ij}^*\sigma = \sigma$. At the same time, the cocycle $\{\widetilde{\phi}_{ij}\}$ defines the same class in $H^1(B,Aut^0_{X/B})$ as $\{\phi_{ij}\}$. This proves the surjectivity of the map $$ H^1(B, Aut^{0,\sigma}_{X/B}) \to H^1(B, Aut^0_{X/B}).$$ \end{proof} \begin{cor} \label{sha tate are symplectic} Let $\pi\colon X\to B $ be a Lagrangian fibration and $\pi^s\colon X^s\to B$ be the twist of $\pi$ by an element $s\in \Sha$. Then $X^s$ is a holomorphic symplectic manifold and $\pi^s$ is a Lagrangian fibration.
\end{cor} \begin{proof} Every element $s\in\Sha$ can be represented by a \v{C}ech cocycle $\phi_{ij}\in Aut^{0,\sigma}_{X/B}(U_{ij})$ (Theorem \ref{Sha from symplectic automorphisms}). The manifold $X^s$ is obtained by gluing the holomorphic symplectic manifolds $\pi^{-1}(U_i)$ by the automorphisms $\phi_{ij}$. Since the automorphisms $\phi_{ij}$ preserve $\sigma$, the forms $\sigma|_{\pi^{-1}(U_i)}$ glue to a well-defined holomorphic symplectic form $\sigma^{s}$ on $X^{s}$. Locally $\pi^s$ coincides with $\pi$, therefore it is Lagrangian with respect to $\sigma^s$. \end{proof} \subsection{Degenerate twistor deformations as Shafarevich--Tate twists}\label{deg.tw. as ShT} Let $\pi\colon X\to B$ be a Lagrangian fibration. Consider the Shafarevich--Tate family $\mathfrak{X}_{\Sha \mathrm{T}} \to \widetilde{\Sha}$ (Proposition \ref{prop_ShT_family}) and the degenerate twistor family $\mathfrak{X}_{deg.tw} \to \C$ (Definition \ref{definition of dg.tw}) associated to $\pi$. Fix a holomorphic symplectic form $\sigma$ on $X$. It induces an isomorphism $\widetilde{\Sha}\cong H^{1,1}(B)$. Let $[\alpha]\in H^{1,1}(B)$ denote the class of a hyperplane section of $B=\mathbb P^n$. The class $[\alpha]$ induces the natural isomorphism $H^{1,1}(B)\cong\C$. \begin{thrm}\label{deg.tw=ShaT} There exists an isomorphism of $\mathfrak{X}_{\Sha \mathrm{T}}$ and $\mathfrak{X}_{deg.tw}$ as families of complex manifolds that lifts the isomorphism $\widetilde{\Sha}\cong \C$. In other words, we have the commutative diagram: \[ \xymatrix{ \mathfrak{X}_{deg.tw} \ar[r]^{\Phi} \ar[d]_{\Pi_{deg.tw}} & \mathfrak{X}_{\Sha \mathrm T} \ar[d]^{\Pi_{\Sha \mathrm T}} \\ \C \ar[r]^{\simeq}& \widetilde{\Sha}. } \] \end{thrm} {\bf Proof. Step 1:} First, let us recall the relation between Dolbeault and \v{C}ech cohomology groups of the sheaf $\Omega^1_B$. The class $[\alpha]$, considered as a Dolbeault cohomology class, can be represented by a closed $(1,1)$-form $\alpha$ on $B$.
Take an affine open cover $B = \bigcup U_i$. The class $[\alpha]$ may be represented by a \v{C}ech cocycle $\{a_{ij}\}$ where $a_{ij}\in H^0(U_{ij},\Omega^1_B)$ are holomorphic $1$-forms on $U_{ij}$. Given a closed $(1,1)$-form $\alpha$, one can construct $\{a_{ij}\}$ as follows. There exists a $(1,0)$-form $a_i$ on $U_i$ such that $\alpha|_{U_i} = da_i = \overline{\partial} a_i$ for every $i$. We define the form $a_{ij}$ as \[ a_{ij}:= a_j|_{U_{ij}} - a_i|_{U_{ij}}. \] \hfill {\bf Step 2:} In this step we will construct an isomorphism from $\mathfrak{X}_{deg.tw}$ to $\mathfrak{X}_{\Sha\mathrm{T}}$ locally on the base $B$. Consider the vertical vector fields $w_i$ on $\pi^{-1}(U_i)\times\C \subset X\times \C$ defined by the formula $$ \iota_{w_i}\sigma = t\pi^*a_i. $$ Here $t$ is the linear coordinate function on $\C$ and $\sigma$ is the pullback of the holomorphic symplectic form on $X$ to $\pi^{-1}(U_i) \times \C$. For any $t\in \C$ we have the following identity on $\pi^{-1}(U_i)\times\{t\}$: $$ \mathsf L_{w_i} \sigma = t\pi^*da_i = t\pi^*\alpha|_{\pi^{-1}(U_i)}. $$ Let $\phi_i$ be the exponential of the vector field $w_i$. This gives the following equality of forms on $\pi^{-1}(U_i) \times \{t\}$: $$ \phi_i^*\sigma = \sigma + t\pi^*\alpha, $$ so that for any $t\in\C$ the map $\phi_i$ is a biholomorphism $$ \phi_i|_{\pi^{-1}(U_i)\times\{t\}}\colon (\pi^{-1}(U_i),I_t) \to \pi^{-1}(U_i), $$ where $I_t$ is the complex structure from Lemma \ref{holomorphic moser}. In fact, since every automorphism $\phi_i$ commutes with the projection $\pi^{-1}(U_i)\times \C\to \C$, it induces a biholomorphism $$ \phi_i\colon (\pi^{-1}(U_i)\times\C,I_{tw})\to \pi^{-1}(U_i)\times\C. $$ Here $I_{tw}$ is the complex structure introduced in Definition \ref{definition of dg.tw}. \hfill {\bf Step 3:} It remains to prove that the maps $\phi_i$ can be glued together to a global isomorphism of families $\mathfrak{X}_{deg.tw} \to \mathfrak{X}_{\Sha \mathrm{T}}$.
Define the vector fields $w_{ij}$ on $\pi^{-1}(U_{ij})\times\C$ by $$ \iota_{w_{ij}}\sigma = t\pi^*(a_{ij}). $$ Let $\phi_{ij}$ be the exponential of $w_{ij}$. Then $\phi_{ij}$ is a holomorphic automorphism of $\pi^{-1}(U_{ij})\times\C$. Since $a_{ij} = a_j-a_i$ one has $$ \phi_{ij} = \phi_j\circ\phi_i^{-1}. $$ By Proposition \ref{prop_ShT_family} the manifold $\mathfrak{X}_{\Sha\mathrm{T}}$ is the twist of $X\times\widetilde{\Sha}$ by the cocycle $\{\phi_{ij}\}$. Therefore the maps $\phi_i$ glue together to give rise to a global biholomorphism $\Phi \colon \mathfrak{X}_{deg.tw}\to \mathfrak{X}_{\Sha\mathrm{T}}$.\qed \subsection{The period map of a Shafarevich--Tate family}\label{period map} Here we will describe the period map of a Shafarevich--Tate family. This can be viewed as a generalization of a result of Markman \cite[Thm. 7.11]{Mar}. The following definition is standard. Let $X$ be a hyperk\"ahler manifold. Its \emph{period domain} $\mathcal{D}$ is defined as \[ \mathcal{D}:= \{[x] \ | \ q(x,x)=0 \text{ and } q(x, \overline{x})>0\} \subset \mathbb{P}(H^2(X, \C)) \] where $q$ is the BBF form. Let $p \colon \mathcal{X} \to T$ be a holomorphic family of hyperk\"ahler manifolds over a connected simply connected base $T$. One can associate the \emph{period map} $\mathcal{P}_T \colon T \to \mathcal{D}$ to $p$. This is the holomorphic map defined as \[ t \mapsto [H^{2,0}(p^{-1}(t))]. \] Its importance is manifested by the Torelli theorem, which says the following. Let ${Teich}(X)$ be the Teichm\"uller space of $X$, i.e. the space of complex structures of hyperk\"ahler type on $X$ modulo isotopies. Then the period map \[ \mathcal{P} \colon Teich(X) \to \mathcal{D} \] is generically injective on connected components of $Teich(X)$ \cite{Ver13,Huy10}. \begin{prop} Let $\mathfrak{X}_{deg.tw} \to \C$ be the degenerate twistor family of a Lagrangian fibration $\pi \colon X \to B$. Then its period map $\mathcal{P}_{deg.tw} \colon \C \to \mathcal{D}$ is injective.
The image of $\mathcal P_{deg.tw}$ is the entire curve that is obtained as the intersection of $\mathcal{D}$ with a projective line lying on the quadric $\{q(x,x) = 0\}$. \end{prop} \begin{proof} Let $\alpha$ be a K\"ahler form on $B$. The period map of the degenerate twistor family is given by $$ \mathcal P_{deg.tw}(t) = [\sigma + t\pi^*\alpha]. $$ Let $\eta$ denote the class of $\pi^*\alpha$ in $H^2(X,\C)$. Consider the projective line $L:=\mathbb P(\C\sigma\oplus\C\eta)$ lying in $\mathbb P(H^2(X,\C))$. The map $\mathcal P_{deg.tw}$ sends $\C$ isomorphically to $L\,\setminus\,\{[\eta]\}$. All points of this line satisfy $q(x,x) = 0$: indeed, $q(\sigma,\sigma) = q(\eta,\eta) = q(\sigma,\eta) = 0$. The point corresponding to $\eta$ does not lie in the period domain $\mathcal D$ since $q(\eta,\overline{\eta}) = 0$. Therefore, the image of $\mathcal P_{deg.tw}$ is $L\cap\mathcal D$. \end{proof} \begin{rmk} One may think of $\widetilde{\Sha}$ as an entire curve in the Teichm\"uller space of $X$ and $\Sha^0$ as its image inside the moduli space $\mathcal{M}$. The moduli space $\mathcal{M}$ can be defined as $Teich(X)$ modulo the mapping class group of $X$. It is known that the action of the mapping class group on $Teich(X)$ is highly non-discrete. Therefore, $\mathcal{M}$ is non-Hausdorff \cite{Ver13}. This phenomenon is often seen already at the level of Shafarevich--Tate groups (Theorem \ref{density}). \end{rmk} \section{Connected component of unity of a Shafarevich--Tate group}\label{connected component} \subsection{The sheaf $\Gamma$} Consider the exponential exact sequence \[ 0 \to \mathbb Z_X \to \O_X \to \O^{\times}_X \to 0. \] Apply the higher direct image functor $R^{\bullet}\pi_*(-)$ to it. Since the fibers of $\pi$ are connected and proper, we have $\pi_*\mathbb Z_X \simeq \mathbb Z_B$, $\pi_*\O_X \simeq \O_B$, and $\pi_*\O^{\times}_X \simeq \O^{\times}_B$. The sequence \[ 0 \to \pi_*\mathbb Z_X \to \pi_*\O_X \to \pi_*\O^{\times}_X \to 0 \] is therefore exact.
Hence we get a long exact sequence of sheaves: \[ 0 \to R^1\pi_*\mathbb Z_X \to R^1\pi_*\O_X \to R^1\pi_*\O^{\times}_X \to R^2\pi_*\mathbb Z_X \to\ldots \] \begin{prop}\label{R^1pi_*Z maps to Gamma} The isomorphism $$\tilde{\omega}\colon R^1\pi_*\O_X \to \pi_*T_{X/B}$$ from Proposition \ref{Matsushita isomorphism} sends $R^1\pi_*\mathbb Z_X \subset R^1\pi_*\O_X$ into $\Gamma:=\ker(\pi_*T_{X/B}\to Aut^0_{X/B})$. \end{prop} \begin{proof} We need to prove that the composition of the maps $$ R^1\pi_*\mathbb Z\xrightarrow{i} R^1\pi_*\mathcal O_X\xrightarrow{\widetilde\omega} \pi_*T_{X/B}\xrightarrow{\varepsilon} Aut^0_{X/B} $$ vanishes. Let $b\in B^\circ$ and let $F_b$ be a smooth fiber. Then $(R^1\pi_*\O_X)_b = H^1(F_b, \O_{F_b})$, $(\pi_*T_{X/B})|_b = H^0(F_b, TF_b)$ and $\widetilde{\omega}_b$ is the map \[ H^{0,1}(F_b) \to H^{1,0}(F_b)^{*} \cong H^0(F_b, TF_b), \] given by the polarization $[\omega]_{F_b}$ on the abelian variety $F_b$. By Corollary \ref{relative polarisation} we can choose $\omega$ in such a way that this polarization is integral, so $\widetilde{\omega}_b$ maps $H^1(F_b, \mathbb Z) \subset H^1(F_b, \O_{F_b})$ to $\Gamma_b = H_1(F_b, \mathbb Z) \subset H^0(F_b, TF_b)$. It follows that the statement of the proposition holds after the restriction to $B^{\circ}$. Consider a local section $\tau$ of $R^1\pi_*\mathbb Z_X$ over an open subset $U \subset B$. Denote $U \cap B^{\circ}$ by $U^{\circ}$. We have seen that the restriction of the automorphism $(\varepsilon\circ\widetilde{\omega} \circ i)(\tau)$ to $\pi^{-1}(U^{\circ})$ is trivial. An automorphism that is trivial on a dense open subset is trivial. Hence, the vertical automorphism $(\varepsilon \circ \widetilde{\omega} \circ i)(\tau)$ of $\pi^{-1}(U)$ is trivial. \end{proof} We have constructed a morphism of sheaves $\alpha\colon R^1\pi_*\mathbb Z_X \to \Gamma$.
Note that it is injective, since it is the composition of the isomorphism $\widetilde{\omega} \colon R^1\pi_*\O_X \to \pi_*T_{X/B}$ with the embedding $i \colon R^1\pi_*\mathbb Z_X \to R^1\pi_*\O_X$. There is the following diagram with exact rows: \[ \xymatrix{ 0 \ar[r]& \Gamma \ar[r]& \pi_*T_{X/B} \ar[r]^{\varepsilon}& Aut^0_{X/B} \ar[r]& 0\\ 0 \ar[r]& R^1\pi_*\mathbb Z_X \ar[r]^{i} \ar[u]^{\alpha} & R^1\pi_*\O_X \ar[u]_{\widetilde{\omega}} \ar[r]& R^1\pi_*\O^{\times}_X } \] \hfill Recall the \textit{local invariant cycle theorem}: \begin{prop}(\cite[Cor. 6.2.9]{BBDG}) \label{local invariant cycle} Let $f \colon X \to Y$ be a proper map of complex algebraic varieties with $X$ non-singular. For a subset $U \subset Y$, denote by $U^{\circ}$ the set of non-critical values of $f$ in $U$. Then for any $b \in Y$ there exists a small ball $U_b \subset Y$ with center $b$ such that the following holds. \begin{itemize} \item[(1)] For any $i$ the following groups are isomorphic: $H^i(f^{-1}(b), \mathbb Q) \simeq H^i(f^{-1}(U_b), \mathbb Q)$; \item[(2)] For any point $b^{\circ} \in U_b^{\circ}$ the cohomology group $H^i(f^{-1}(U_b),\mathbb Q)$ surjects onto the local monodromy invariants $H^i(f^{-1}(b^\circ),\mathbb Q)^{\pi_1(U_b^\circ)} = R^if_*\mathbb Q_X(U_b^\circ)$. \end{itemize} \end{prop} \begin{rmk} Let $\pi \colon X \to B$ be a Lagrangian fibration on a hyperk\"ahler manifold. For each $b \in B$ there exists an open subset $U \subset B$ containing $b$ such that $\pi^{-1}(U)$ is a smooth algebraic variety \cite[Cor. 3.4]{SV}. Therefore, the local invariant cycle theorem can be applied in this situation. \end{rmk} Let $\Gamma_{\mathbb Q}$ denote the sheaf $\Gamma \otimes \mathbb Q_B$. \begin{prop}\label{Gamma is almost R^1pi_*Z} The map $\alpha\otimes\mathbb Q\colon R^1\pi_*\mathbb Q\to \Gamma_\mathbb Q$ is an isomorphism. \end{prop} \begin{proof} \textbf{Step 1.} First, notice that the proposition holds after the restriction to the non-critical set $B^{\circ} \subset B$.
Indeed, if $F_b = \pi^{-1}(b)$ is a smooth fiber, then \[ \alpha_b \colon (R^1\pi_*\mathbb Q_X)_b = H^1(F_b, \mathbb Q) \to (\Gamma_{\mathbb Q})_b = H_1(F_b, \mathbb Q) \] is the map given by the polarization on the abelian variety $F_b$. Hence it is an isomorphism. Note that in general the same might not hold with integral coefficients. \hfill \textbf{Step 2.} Take a point $b\in B$ and let $U_b \subset B$ be as in Proposition \ref{local invariant cycle}. We need to prove that the map $$ \alpha_{U_b}\colon H^0(U_b, R^1\pi_*\mathbb Q_X) \to \Gamma_\mathbb Q(U_b) $$ is an isomorphism. We already know that it is injective, hence we only need to check surjectivity. Take a local section $\gamma \in \Gamma_\mathbb Q(U_b)$. Let $\gamma^\circ$ be its restriction to $U_b^\circ$. By {\bf Step 1} there exists $\beta^\circ \in H^0(U_b^\circ, R^1\pi_*\mathbb Q_X)$ such that $\alpha_{U_b^\circ}(\beta^\circ) = \gamma^\circ$. The section $\beta^\circ$ can be lifted to a section $\beta$ of $H^0(U_b,R^1\pi_*\mathbb Q_X) = H^1(\pi^{-1}(U_b),\mathbb Q)$ by the local invariant cycle theorem. The local sections $\alpha_{U_b}(\beta) \in \Gamma_\mathbb Q(U_b)$ and $\gamma\in \Gamma_\mathbb Q(U_b)$ coincide after restriction to $U_b^{\circ}$ by construction. The sheaf $\Gamma_\mathbb Q$ is a subsheaf of the locally free sheaf $R^1\pi_*\mathcal O_X$, hence $\alpha_{U_b}(\beta)$ and $\gamma$ coincide. \end{proof} \subsection{The connected component of unity of Shafarevich--Tate groups}\label{Sha^0} The goal of this Subsection is to describe the group $\Sha^0$, i.e. the connected component of unity of the group $\Sha$. This group fits into the exact sequence $$ H^1(B,\Gamma) \to \widetilde{\Sha} \to \Sha^0\to 0. $$ In the previous Subsection, we constructed an injective map $\alpha\colon R^1\pi_*\mathbb Z\to \Gamma$ and proved that $\alpha$ is an isomorphism after tensoring with $\mathbb Q$ (Proposition \ref{Gamma is almost R^1pi_*Z}).
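In particular, if $\Lambda$ denotes the image of $H^1(B,\Gamma)$ in $\widetilde{\Sha}$, the exact sequence above can be rewritten as an identification
$$
\Sha^0 \simeq \widetilde{\Sha}/\Lambda,
$$
which will be used repeatedly in what follows.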
Although we are mostly concerned with the sheaf $\Gamma$, the sheaf $R^1\pi_*\mathbb Z$ is often much easier to understand. Define the subspace $W\subset H^2(X,\C)$ as the kernel of the restriction map $$ H^2(X, \C) \to H^0(B, R^2\pi_*\C_X). $$ This is a Hodge substructure in $H^2(X, \mathbb Z)$ whose integral part can be described as \begin{equation} \label{definition of W} W_\mathbb Z = \bigcap_{b \in B} \ker (H^2(X, \mathbb Z) \to H^2(F_b, \mathbb Z)). \end{equation} Let us denote $W_{\mathbb Q}:= W \cap H^2(X, \mathbb Q)$. As before, let $\eta\in H^2(X,\mathbb Z)$ be the pullback of the class of a hyperplane section. Let $\eta^{\perp}$ be the orthogonal complement of $\eta$ with respect to the BBF form. \begin{prop}\label{W is big enough} Let $W$ be as above. Then we have the following chain of inclusions $$ \{\eta,[\sigma],[\bar\sigma]\}\subset W \subset \eta^\perp. $$ \end{prop} \begin{proof} The class $\eta$ is the Chern class of the line bundle $\pi^*\mathcal O_{\mathbb P^n}(1)$. This line bundle restricts trivially to every fiber of $\pi$, hence $\eta$ is contained in $W$. By Proposition \ref{restriction} the kernel of the restriction map $H^2(X,\mathbb C)\to H^2(F_b,\mathbb C)$ to a smooth fiber $F_b$ is $\eta^\perp$. We obtain that $W\subset\eta^\perp$. It remains to prove that $[\sigma]$ is contained in $W$: since $W$ is a Hodge substructure, this will imply that $[\overline{\sigma}]$ lies in $W$ as well. The fibration $\pi$ is Lagrangian, hence the restriction of the $2$-form $\sigma$ to every smooth fiber of $\pi$ vanishes. Therefore the class $[\sigma]$ in $H^2(X, \C)$ restricts trivially to a smooth fiber. Let $F_b$ be a singular fiber. By \cite[Thm. 6.9]{Clemens} there is a neighborhood $U_b\subset B$ of $b$ and a smooth retraction of $\pi^{-1}(U_b)$ to $F_b$, i.e.
a one-parameter family of smooth maps $f_t$ such that $$ f_t\colon \pi^{-1}(U_b)\to \pi^{-1}(U_b),\quad\quad f_0 = \mathrm{id}_{\pi^{-1}(U_b)},\; \mathrm{im}(f_1) = F_b,\; f_t|_{F_b} = \mathrm{id}_{F_b}\, \forall t. $$ One can see that $f_0^*\sigma = \sigma$ and $f_1^*\sigma = 0$. Let $\xi_t$ denote the tangent vector field to the one-parameter family $f_t$. Using the Cartan formula and the closedness of $\sigma$, we compute $$ \sigma|_{\pi^{-1}(U_b)} = -(f_1^*\sigma - f_0^*\sigma) = -\int\limits_0^1\frac{d}{dt}f_t^*\sigma = -\int\limits_0^1f_t^*\mathsf{L}_{\xi_t} \sigma = - d\left(\int_0^1 f_t^*\iota_{\xi_t}\sigma\right). $$ We conclude that the form $\sigma$ becomes exact after the restriction to $\pi^{-1}(U_b)$. Therefore, the class of $\sigma$ is indeed contained in $W$. \end{proof} \begin{rmk} If all fibers of $\pi$ are irreducible, then $\ker \left( H^2(X, \C) \to H^2(F_b, \C) \right)$ does not depend on $b$. In particular, $W = \eta^{\perp}$ (Proposition \ref{restriction}). \end{rmk} \begin{prop}\label{spectrals} Consider the map $$ H^1(B,R^1\pi_*\mathbb Z)\to H^1(B,R^1\pi_*\mathcal O_X)\simeq\widetilde{\Sha} $$ induced by the embedding $R^1\pi_*\mathbb Z\to R^1\pi_*\mathcal O_X$. There are canonical isomorphisms $$\widetilde{\Sha} \simeq H^{0,2}(X)$$ and $$H^1(B,R^1\pi_*\mathbb Z) \simeq W_{\mathbb Z}/\eta$$ that fit into the commutative diagram $$ \xymatrix{ H^1(B,R^1\pi_*\mathbb Z) \ar[r]\ar[d]_{\simeq}& \widetilde{\Sha} \ar[d]^{\simeq}\\ W_{\mathbb Z}/\eta \ar[r]^{p} & H^{0,2}(X) } $$ Here $p$ is induced by the Hodge projection $W_\mathbb Z\subset H^2(X,\mathbb Z)\to H^{0,2}(X)$. \end{prop} \begin{proof} Consider the Leray spectral sequence for the sheaf $\mathcal O_X$. The terms of the second page of this spectral sequence can be computed using Proposition \ref{Matsushita isomorphism} as \[ E^{p,q}_2=H^p(B, R^q\pi_*\O_X) = H^{p}(B, \Omega^q_B) = \begin{cases} \C \text{ if } p=q,\\ 0 \text{ otherwise. } \end{cases} \] This spectral sequence degenerates on the second page: every differential $d_r$ with $r\ge 2$ shifts the bidegree by $(r,1-r)$, hence maps a diagonal term to an off-diagonal one, which vanishes.
It converges to the cohomology groups of the sheaf $\mathcal O_X$. Therefore, $$ H^2(X,\mathcal O_X) \cong E_2^{1,1} = H^1(B,R^1\pi_*\mathcal O_X) \cong \widetilde{\Sha}. $$ Let us now consider the second page of the Leray spectral sequence for the sheaf $\mathbb Z_X$. The differential $d_2\colon E_2^{1,1}\to E_2^{3,0}$ vanishes because $E_2^{3,0} = H^3(\mathbb P^n,\mathbb Z) = 0$. All the other differentials starting in $E^{1,1}$ vanish because their targets are in negative grading. Therefore, $$ E^{1,1}_\infty = E^{1,1}_2 = H^1(B,R^1\pi_*\mathbb Z). $$ The Leray spectral sequence induces a filtration $F^\bullet H^2(X,\mathbb Z)$ on the cohomology groups of $X$. We obtain the following exact sequences: \begin{gather*} 0 \to \mathbb Z \xrightarrow{\cdot\eta} F^1H^2(X, \mathbb Z) \to H^1(B, R^1\pi_*\mathbb Z) \to 0\\ 0 \to F^1H^2(X, \mathbb Z) \to H^2(X, \mathbb Z) \to H^0(B, R^2\pi_*\mathbb Z). \end{gather*} The second exact sequence implies that $F^1H^2(X,\mathbb Z) = W_{\mathbb Z}$. We see from the first exact sequence that $H^1(B, R^1\pi_*\mathbb Z_X) \simeq W_{\mathbb Z}/\mathbb Z \eta$. The commutativity of the diagram in the statement of the proposition follows from the functoriality of the Leray spectral sequence. \end{proof} \begin{cor} \label{description of sha zero} Let $\pi\colon X\to B$ be a Lagrangian fibration. Define the group $\widehat{\Sha}^0$ to be the cokernel of the map $$ W_\mathbb Z/\eta \xrightarrow{p} H^{0,2}(X). $$ Then the group $\Sha^0$ is a quotient of $\widehat{\Sha}^0$ by a finite group. \end{cor} \begin{proof} The map $\alpha\colon R^1\pi_*\mathbb Z\to \Gamma$ induces the map $H^1(\alpha)\colon H^1(B,R^1\pi_*\mathbb Z)\to H^1(B,\Gamma)$. The map $H^1(\alpha)$ has a finite cokernel by Proposition \ref{Gamma is almost R^1pi_*Z}.
By Proposition \ref{spectrals} there is the following commutative diagram $$ \xymatrix{ W_\mathbb Z/\eta \ar[r]\ar[d]_{H^1(\alpha)}& H^{0,2}(X) \ar[r]\ar[d]^{\simeq}& \widehat{\Sha}^0\ar[r]\ar[d]& 0\\ H^1(B,\Gamma) \ar[r]& \widetilde{\Sha} \ar[r]& \Sha^0\ar[r]&0 } $$ The kernel of the map $\widehat{\Sha}^0\to \Sha^0$ is isomorphic to $\operatorname{coker}(H^1(\alpha))$ by the Snake lemma. Hence it is finite. \end{proof} \section{K\"ahlerness and projectivity properties}\label{applications} In this Section we study K\"ahlerness and algebraicity properties of degenerate twistor deformations. Unfortunately, we are not able to prove that a degenerate twistor deformation of a hyperk\"ahler manifold $X$ is always K\"ahler; we need to impose an additional assumption on $X$. We call manifolds that do not satisfy this assumption \textit{M-special}. \subsection{M-special Hodge structures}\label{M-special} In this Subsection, we summarize several results about Hodge structures of K3 type. In particular, we introduce and discuss M-special Hodge structures. A pure Hodge structure $W$ is said to be \textit{of K3 type} if it is of weight $2$ and $\dim W^{2,0}=1$. Let $W$ be a $\mathbb Z$-Hodge structure of weight $2$ and $\mathbb{K} \subset \C$ a subring. We will write $W^{1,1}_{\mathbb{K}}: = W_{\mathbb{K}} \cap W^{1,1}$ and $W^{2,0+0,2}_{\mathbb{K}}:=W_{\mathbb{K}} \cap \left(W^{2,0} \oplus W^{0,2} \right)$. We denote the Hodge projection by $p\colon W\to W^{0,2}$. Let $q$ be a polarization on $W$. If $W$ is a Hodge structure of K3 type, then the projection $p\colon W\to W^{0,2}$ is given by the map $$ v\mapsto q(v,\sigma)\bar{\sigma} $$ for an appropriate choice of $\sigma \in W^{2,0}$. We define the transcendental Hodge substructure $T\subseteq W$ to be the orthogonal complement of $W^{1,1}_\mathbb Z$. The lattice $T$ has rank at least two because $$ W^{2,0 + 0,2}_{\mathbb R} \subseteq T_{\mathbb R}.
$$ Note that the restriction of the Hodge projection $p$ to $T$ is an isomorphism onto its image. \begin{df}\label{definition of M-special} Let $W$ be a $\mathbb Z$-Hodge structure of K3 type. It is said to be \textit{M-special} if $W_{\mathbb Z}^{2,0+0,2} \neq 0$. \end{df} \begin{rmk} \label{subHS is M-special} Let $V\subset W$ be an embedding of Hodge structures of K3 type. Then $V$ is M-special if and only if so is $W$. \end{rmk} The following lemma is a straightforward generalization of \cite[Def. 1]{Mar}. \begin{lemma} \label{first} Let $W$ be a polarized $\mathbb Z$-Hodge structure of K3 type. The following are equivalent. \begin{enumerate} \item The subspace $W^{1,1}$ is rational (``Picard rank is maximal''). \item The subspace $W^{2,0+0,2}$ is rational. \item The rank of the transcendental Hodge substructure $T$ is two. \item The image of $W_\mathbb Z$ under the Hodge projection $p\colon W_\mathbb Z\to W^{0,2}$ is a discrete subgroup of $W^{0,2}$. \end{enumerate} \end{lemma} The proof of this lemma is straightforward and is left to the reader. If one of the conditions of the lemma holds, then $W$ is M-special. However, the converse is not true. \begin{lemma} \label{second} (\cite[Lemma 5.5]{Mar}) Let $W$ be a polarized $\mathbb Z$-Hodge structure of K3 type that is not M-special. Assume that $\operatorname{rk} T\ge 3$. Then for every sublattice $T'\subset T$ of corank one, the group $p(T')$ generates $W^{0,2}$ as an $\mathbb R$-vector space. \end{lemma} \begin{proof} Let $T'\subset T$ be a sublattice of corank one. Suppose that $p(T')$ does not generate $W^{0,2}$ over $\mathbb R$. In that case there exist real numbers $a, b$, not both zero, such that $$ a \cdot q(\operatorname{Re}(\sigma),t) + b\cdot q(\operatorname{Im}(\sigma),t) = 0\quad \forall t\in T'. $$ Hence the non-zero vector $l = a\cdot\operatorname{Re}(\sigma) + b\cdot\operatorname{Im}(\sigma) \in W^{2,0+0,2}_\mathbb R$ is orthogonal to $T'$.
Observe that $l^\perp = (W^{1,1}_\mathbb Z + T')\otimes\mathbb R$ since $(W^{1,1}_\mathbb Z+ T')\otimes\mathbb R\subseteq l^\perp$ and the dimensions of the two spaces coincide. The subspace $l^\perp$ is therefore rational. Hence, a multiple of $l$ is rational. This contradicts the assumption that $W$ is not M-special. \end{proof} \begin{lemma} \label{third} (\cite[Lemma 5.5]{Mar}) Let $T\subset \mathbb R^2$ be a finitely generated subgroup of $\mathbb R^2$ of rank at least three. Assume that every subgroup $T'\subset T$ of corank one generates $\mathbb R^2$ over $\mathbb R$. Then $T$ is dense in $\mathbb R^2$. \end{lemma} \begin{thrm} \label{M-special theorem} (\cite[Lemma 5.4, 5.5]{Mar}) Let $W$ be a polarized $\mathbb Z$-Hodge structure of K3 type. Then the following are equivalent. \begin{enumerate} \item The Hodge structure $W$ is M-special. \item The image of $W_\mathbb Z$ under the Hodge projection $p\colon W\to W^{0,2}$ is not dense in $W^{0,2}$. \end{enumerate} \end{thrm} \begin{proof} {$\mathbf{(1)\Rightarrow(2)}$} Suppose that $W$ is M-special. Choose a non-zero element $l\in W^{2,0+0,2}_\mathbb Z$. There exists a unique element $\sigma\in W^{2,0}$ such that $\operatorname{Re}\sigma = l$. The projection $p\colon W\to W^{0,2}$ is identified with the map $v\in W\mapsto q(\sigma,v)$ up to scaling. For every $v \in W_{\mathbb Z}$ the number $\operatorname{Re}(q(\sigma, v)) = q(l,v)$ is an integer. Hence, there exists a constant $\lambda \in \mathbb R$ such that $\operatorname{Re}(z) \in \mathbb Z\lambda$ for every vector $z$ in $p(W_{\mathbb Z})$. Therefore, $p(W_\mathbb Z)$ cannot be dense in $W^{0,2}$. \hfill {$\mathbf{(2)\Rightarrow(1)}$} Suppose $W$ is not M-special. In that case the rank of $T$ is at least three (Lemma \ref{first}). Lemma \ref{second} implies that for every sublattice $T'\subset T$ of corank one, the group $p(T')$ generates $W^{0,2}$ over $\mathbb R$.
It follows from Lemma \ref{third} that $p(T_\mathbb Z) = p(W_\mathbb Z)$ is dense in $W^{0,2}$. \end{proof} \subsection{K\"ahlerness of degenerate twistor deformations}\label{kahler} Let $\pi\colon X\to B$ be a Lagrangian fibration. Recall that we defined the Hodge substructure $W \subset H^2(X, \mathbb Z)$ as $$ W:= \ker \left( H^2(X, \mathbb Z) \to H^0(B, R^2\pi_*\mathbb Z_X) \right). $$ The Hodge structure $W$ is of K3 type by Proposition \ref{W is big enough}. Let us denote the Hodge projection by $p\colon H^2(X)\to H^{0,2}(X)$. \begin{thrm} \label{density} Let $\pi\colon X\to B$ be a Lagrangian fibration, $\Sha^0$ the connected component of unity of its Shafarevich--Tate group $\Sha$. Then $\Sha^0$ is Hausdorff if and only if $X$ is of maximal Picard rank. In this case $\Sha^0$ is isomorphic to an elliptic curve. \end{thrm} \begin{proof} The group $\Sha^0$ is Hausdorff if and only if the topological group $\widehat{\Sha}^0$ defined as $H^{0,2}(X)/p(W_\mathbb Z)$ is Hausdorff. Indeed, the latter group is a finite unramified cover of $\Sha^0$ (Corollary \ref{description of sha zero}). We see that $\Sha^0$ is Hausdorff if and only if $p(W_\mathbb Z)\subset H^{0,2}(X)$ is a discrete subgroup. Apply Lemma \ref{first} to the Hodge structure $W/\eta$. We obtain that $p(W_\mathbb Z)$ is discrete in $H^{0,2}(X)$ if and only if $W^{2,0+0,2} = H^{2,0}(X)\oplus H^{0,2}(X)$ is a rational subspace. This is equivalent to saying that $H^{1,1}(X)$ is rational, i.e. $X$ has maximal Picard rank (Lemma \ref{first}). If $p(W_\mathbb Z)$ is discrete, then $p(W_\mathbb Z)$ is a lattice in $H^{0,2}(X)$ of rank two and $\Sha^0$ is an elliptic curve. \end{proof} \begin{df} \label{M-special manifolds} Let $X$ be a hyperk\"ahler manifold.
It is called {\em M-special} if $$(H^{2,0}(X)\oplus H^{0,2}(X))\cap H^2(X,\mathbb Z)\ne\{0\}.$$ \end{df} In other words, a hyperk\"ahler manifold is {\em M-special} if the Hodge structure on its second cohomology is M-special (Definition \ref{definition of M-special}). \begin{lemma} \label{speciality of X and W} Let $\pi\colon X\to B$ be a Lagrangian fibration on a projective hyperk\"ahler manifold $X$. Then $X$ is M-special if and only if the Hodge structure $W_\mathbb Z/\eta$ is M-special. \end{lemma} \begin{proof} Let $l\in H^{1,1}(X)_\mathbb Z$ be an ample class. Since $q(l,\eta)>0$, the pairing with $l$ induces a non-zero functional on $W$. Thus, $l^\perp\cap W$ is a hyperplane in $W$ not containing $\eta$. We obtain the direct sum decomposition $W = \eta\oplus (l^\perp\cap W)$. The subspace $l^\perp\cap W$ is a polarized Hodge substructure of $H^2(X,\mathbb Z)$ isomorphic to $W/\eta$. It follows from Remark \ref{subHS is M-special} that $l^\perp\cap W$ is M-special if and only if so is $H^2(X,\mathbb Z)$; since $l^\perp\cap W\simeq W/\eta$, the lemma follows. \end{proof} \begin{rmk} The statement of Lemma \ref{speciality of X and W} does not hold in general for non-projective manifolds. However, it is still true that if $X$ is M-special then so is $W/\eta$. \end{rmk} We are now ready to prove the main theorem of this Section. \begin{thrm}\label{Kahlerness} Let $\pi \colon X \to B$ be a Lagrangian fibration and $\mathfrak{X}_{\Sha\mathrm{T}} \to \widetilde{\Sha}$ the Shafarevich--Tate family (Subsection \ref{ShT-family}). Assume that $X$ is projective and not M-special. Then the member $X^s \subset \mathfrak{X}_{\Sha\mathrm T}$ of the family is K\"ahler for each $s \in \widetilde{\Sha}$. \end{thrm} \begin{proof} The zero fiber $X^0$ is K\"ahler by assumption. K\"ahlerness is an open condition. Thus there exists an open subset $U \subset \widetilde{\Sha}$ such that for every $s \in U$ the twist $X^s$ is K\"ahler \cite[Thm. 9.23]{Vois1}.
The manifolds $X^{s}$ and $X^{s+\lambda}$ are isomorphic for each $\lambda$ in the subgroup $\Lambda:=\mathrm{im}(H^1(B,\Gamma)) \subset \widetilde{\Sha}$. The vector spaces $p(W_\mathbb Z)\otimes\mathbb Q$ and $\Lambda\otimes\mathbb Q$ coincide (Proposition \ref{Gamma is almost R^1pi_*Z}). By Lemma \ref{speciality of X and W} the Hodge structure $W/\eta$ is not M-special. It follows from Theorem \ref{M-special theorem} that $p(W_\mathbb Z)$ is dense in $\widetilde{\Sha} \simeq H^{0,2}(X)$. Hence $\Lambda$ is dense in $\widetilde{\Sha}$. We obtain that for every $s\in \widetilde{\Sha}$ there exists $s' \in U$ such that $X^s \simeq X^{s'}$. Consequently, $X^{s}$ is K\"ahler for every $s$. \end{proof} \begin{rmk} The statement of Theorem \ref{Kahlerness} remains true for non-projective manifolds if we assume that the Hodge structure $H^2(X,\mathbb Z)/\eta$ is not M-special. \end{rmk} \begin{rmk} We do not know if the statement of Theorem \ref{Kahlerness} holds for M-special manifolds. Another open question is the following. Consider an element $s\in \Sha\,\setminus\, \Sha^0$. Is the twist $X^s$ a K\"ahler manifold? Note that from results of Subsection \ref{symplectic} we know that $X^s$ possesses a holomorphic symplectic form. It would be interesting to know whether one can obtain in this way examples of non-K\"ahler holomorphic symplectic manifolds similar to those constructed in \cite{Gu,Bog96}. \end{rmk} \begin{rmk} Being M-special is a restrictive condition. Hence the statement of Theorem \ref{Kahlerness} holds for most Lagrangian fibrations. Consider the set $\mathcal D_\eta$ of Hodge structures on $H^2(X,\mathbb Z)$ such that $\eta\in H^{1,1}(X)$. It can be identified with a complex manifold of complex dimension $b_2-3$. The set of Hodge structures in $\mathcal D_\eta$ such that $H^2(X,\mathbb Z)/\eta$ is M-special is a countable union of real analytic subvarieties of real dimension $b_2-3$.
\end{rmk} \subsection{Algebraic points in Shafarevich--Tate families}\label{algebraic points} In this Subsection we will describe the set of projective deformations in the Shafarevich--Tate family. \begin{lemma}\label{invariants in H^2} Consider the degenerate twistor family $\mathfrak{X}_{deg.tw} \to \C$ and let $X=X^0$ be the fiber over $0 \in \C$. Let $\alpha$ be a class in $H^{2}(X, \mathbb R)$. Then exactly one of the following holds: \begin{itemize} \item[(1)] $\alpha \in \eta^{\perp}$; \item[(2)] there exists a unique $s \in \C$ such that $\alpha \in H^{1,1}(X^s)$. \end{itemize} Moreover, $\bigcap_{s \in \C} H^{1,1}(X^s) = (\eta^{\perp})^{1,1}$. \end{lemma} \begin{proof} For each $s\in\C$ the complex structure on $X^s$ is defined by a c-symplectic form with cohomology class $[\sigma_s]=[\sigma_0]+s\eta$. A real class $\alpha$ lies in $H^{1,1}(X^s)$ if and only if it is orthogonal to $\sigma_s$, i.e. $$ q(\alpha, \sigma_s) = q(\alpha, \sigma_0) + sq(\alpha, \eta) = 0. $$ Suppose that $\alpha \in H^2(X,\mathbb R) \,\setminus\, \eta^{\perp}$. Then $s_{\alpha} := - q(\alpha, \sigma_0)/q(\alpha, \eta)$ is the unique number satisfying $q(\alpha,\sigma_{s_\alpha}) = 0$. If $\alpha$ is contained in $\eta^\perp$, then $q(\alpha,\sigma_s) = q(\alpha,\sigma_0)$ does not depend on $s$; hence $\alpha$ is of type $(1,1)$ either for every $s$ or for no $s$, and the former happens exactly when $\alpha\in(\eta^{\perp})^{1,1}$. \end{proof} \begin{cor}\label{Fujiki-Verbitsky deg.tw} The set $\mathcal{R}:= \{ s \in \widetilde{\Sha} \ | \ X^s \text{ is algebraic }\}$ is at most countable. In particular, a very general member of the Shafarevich--Tate family is non-algebraic. \end{cor} \begin{proof} If $X^s$ is algebraic, it carries an ample divisor $L$ and $c_1(L)\in H^{1,1}_\mathbb Z(X^s)$. Moreover, we have the inequality $q(c_1(L), \eta) > 0$ in this case. For every $\alpha \in H^2(X,\mathbb Z) \,\setminus\, \eta^{\perp}$ there is only one point $s_{\alpha}$ such that $\alpha \in H^{1,1}(X^{s_{\alpha}})$ (Lemma \ref{invariants in H^2}).
Thus, $\mathcal R$ is contained in the set \[ \{ s_{\alpha} \ | \ \alpha \in H^2(X, \mathbb Z) \,\setminus\, \eta^{\perp} \}. \] It follows that $\mathcal R$ is at most countable. \end{proof} See Theorem \ref{projectivity=torsion} below for a refinement of Corollary \ref{Fujiki-Verbitsky deg.tw}. A similar theorem holds for twistor families $\mathcal{X}_{tw} \to \mathbb P^1$ (\cite{Ver96,F82}). \begin{lemma}\label{characterization of projectivity} Let $\pi \colon X \to B$ be a Lagrangian fibration on a hyperk\"ahler manifold over $B=\mathbb P^n$. The following are equivalent: \begin{itemize} \item[(1)] $X$ is projective; \item[(2)] $\pi$ admits a \textit{multisection} i.e. there exists a subvariety $Z \subset X$ such that $\pi|_Z \colon Z \to B$ is a finite morphism; \item[(3)] there exists a class $\alpha \in H^{1,1}_{\mathbb Q}(X)$ such that $q(\alpha, \eta) \neq 0$; \item[(4)] there exists a class $\omega \in H^{1,1}_{\mathbb Q}(X)$ such that $q(\omega, \omega) >0$. \end{itemize} \end{lemma} \begin{proof} {$\mathbf{(1)\Rightarrow(2)}$} Choose a holomorphic embedding $i \colon X \hookrightarrow \mathbb P^N$. Pick a point $x \in X$ outside of the singular locus of $\pi$. Consider a general linear subspace $L\subset \mathbb P^N$ of codimension $n$ passing through $x$ and transversal to the fibers of $\pi$. Let $Z$ be a component of $Z' := L \bigcap X \subset X$ which is transversal to the fibers of $\pi$. Then $Z$ is a multisection of $\pi$. \hfill {$\mathbf{(2)\Rightarrow(3)}$} Let $Z \subset X$ be a multisection. Let $C_0 \subset B$ be a smooth curve and let $C := \pi^{-1}(C_0) \bigcap Z$ be its preimage in $Z$. Consider the homology class $[C] \in H_2(X, \mathbb Z)$ of the curve $C$. The BBF form defines an isomorphism \[ H^2(X, \mathbb Q) \to H^2(X, \mathbb Q)^* \simeq H_2(X, \mathbb Q). \] Let $\alpha \in H^2(X, \mathbb Q)$ be the class corresponding to $[C]$ under this isomorphism. Then $q(\alpha, \eta) = \int_{C} \eta > 0$. 
\hfill {$\mathbf{(3)\Rightarrow(4)}$} Let $\alpha$ be a rational $(1,1)$-class such that $q(\alpha, \eta) \neq 0$. We may assume that $q(\alpha, \eta) > 0$. Let us find a number $t$ such that $\omega:= \alpha +t\eta$ satisfies $q(\omega, \omega) > 0$. Since $q(\eta,\eta) = 0$, we compute that \[ q(\alpha+t\eta, \alpha+t\eta) = q(\alpha, \alpha) +2tq(\alpha, \eta). \] Therefore, the class $\omega$ is rational and has a positive square for any rational $t > \frac{-q(\alpha, \alpha)}{2q(\alpha, \eta)}$. \hfill {$\mathbf{(4)\Rightarrow(1)}$} See \cite[Thm. 3.11]{Huy01}. \end{proof} \begin{rmk} If a K\"ahler manifold is algebraic\footnote{ i.e., the analytification of a smooth proper algebraic variety over $\C$}, it is necessarily projective by the Moishezon theorem (\cite[Thm. 11]{Moish}). Therefore, we can replace the first condition in the lemma above with the condition that $X$ is algebraic. \end{rmk} Denote by $p \colon H^2(X, \C) \to H^{0,2}(X)$ the Hodge projection. Take $\sigma$ to be the holomorphic symplectic form normalized so that $\bar\sigma \in H^{0,2}(X)$ is identified with the class of the Fubini-Study form under the isomorphism $H^{0,2}(X) \simeq \widetilde{\Sha} \simeq H^{1,1}(B)$. In the following lemma we will give a characterization of torsion elements of $\Sha^0$ in terms of the BBF form. \begin{lemma} \label{torsion in sha} Let $\pi\colon X\to B$ be a Lagrangian fibration. Consider a class $t\bar\sigma \in H^{0,2}(X)$, $t\in \C$. Let $[t\bar\sigma]$ denote its image in $\Sha^0$. Then $[t\bar\sigma]$ is torsion if and only if there exists a rational class $l\in W_{\mathbb Q}\subset H^2(X,\mathbb Q)$ such that $q(l,\sigma) = t$ (see formula (\ref{definition of W}) for the definition of $W_{\mathbb Q}$). \end{lemma} \begin{proof} We know from the short exact sequence (\ref{exact seq of shas}) that $\Sha^0 = \widetilde{\Sha}/\Lambda$ where $\Lambda$ denotes the group $\mathrm{Im}(H^1(B,\Gamma) \to \widetilde{\Sha})$.
Hence, the class $[t\bar\sigma]$ is torsion if and only if $t\bar\sigma$ lies in the image of the group $H^1(B,\Gamma)\otimes\mathbb Q$. This is equivalent to saying that $t\bar\sigma$ lies in the image of $W_{\mathbb Q}\subset H^2(X,\mathbb Q)$ under the Hodge projection $H^2(X,\mathbb Q)\to H^{0,2}(X)$ (see Proposition \ref{Gamma is almost R^1pi_*Z}, Proposition \ref{spectrals}). Since $H^{1,1}(X)$ is orthogonal to $H^{2,0}(X)\oplus H^{0,2}(X)$, the projection $H^2(X,\mathbb C)\to H^{0,2}(X)$ is given by the formula $$ v\mapsto q(v,\sigma)\bar\sigma. $$ Hence, $t\bar\sigma$ lies in the image of $W_{\mathbb Q}$ under the Hodge projection if and only if there exists a class $l\in W_{\mathbb Q}$ such that $t = q(l,\sigma)$. \end{proof} \begin{thrm} \label{projectivity=torsion} Let $\pi\colon X\to B$ be a Lagrangian fibration. Suppose that $X$ is projective. Let $s$ be an element of $\Sha^0$. The following are equivalent. \begin{enumerate} \item The Shafarevich--Tate twist $X^s$ of $X$ is projective. \item The manifold $X^s$ is K\"ahler and $s$ is a torsion element of $\Sha^0$. \end{enumerate} \end{thrm} \begin{proof} {$\mathbf{(2)\Rightarrow(1)}$} Suppose that $X^s$ is K\"ahler and the element $s$ is torsion. Let $\tilde{s} = t\bar\sigma$ denote a preimage of $s$ in $H^{0,2}(X)\simeq\widetilde{\Sha}$. There exists a class $ l\in W_{\mathbb Q}$ such that $q(l,\sigma) = t$ (Lemma \ref{torsion in sha}). As $X$ is projective, there exists a class $\alpha\in H^{1,1}_{\mathbb Q}(X)$ such that $q(\alpha,\eta) = 1$ (Lemma \ref{characterization of projectivity}). The class $\alpha^s := \alpha - l$ satisfies $q(\alpha^s,\eta) = q(\alpha,\eta) = 1$ as $l$ is contained in $W_{\mathbb Q}\subset \eta^\perp$. We claim that $\alpha^s$ lies in $H^{1,1}_{\mathbb Q}(X^{\tilde s})$. Indeed, $\alpha^s$ is orthogonal to $H^{2,0}(X^{\tilde s})$ because $$ q(\alpha^s,\sigma + t\eta) = q(\alpha,\sigma) + tq(\alpha,\eta) - q(l,\sigma) - tq(l,\eta) = 0 + t - t - 0 = 0.
$$ Lemma \ref{characterization of projectivity} implies that $X^s$ is projective. \hfill {$\mathbf{(1)\Rightarrow(2)}$} Suppose that $X$ and $X^s$ are both projective. Let $\tilde s = t\bar\sigma$ denote a preimage of $s$ in $H^{0,2}(X)$ as before. There exist two rational classes $\alpha\in H^{1,1}_{\mathbb Q}(X)$ and $\alpha^s\in H^{1,1}_{\mathbb Q}(X^{\tilde s})$ such that $q(\alpha,\eta) = q(\alpha^s,\eta) = 1$ (Lemma \ref{characterization of projectivity}). Hence, the class $l:=\alpha - \alpha^s$ is a rational class orthogonal to $\eta$. Since $\alpha$ is of type $(1,1)$ on $X$ and $\alpha^s$ is of type $(1,1)$ on $X^{\tilde s}$, this class satisfies $$ q(l,\sigma) = q(l,\sigma + t\eta) = q(\alpha,\sigma + t\eta) = tq(\alpha,\eta) = t. $$ We cannot conclude directly that $[t\overline{\sigma}]$ is torsion because $l$ might not lie in $W_{\mathbb Q}$. Therefore, we need to adjust $l$. Consider the subspace $W^\perp\subset H^2(X,\mathbb Q)$. It follows from Proposition \ref{W is big enough} that $$ \eta\in W^{\perp} \subset \left (\eta^{\perp} \cap H^{1,1}(X) \right). $$ The BBF form $q$ has signature $(1, h^{1,1}(X)-1)$ on $H^{1,1}(X)$ and $\eta$ is isotropic with respect to this form. Therefore, the restriction of $q$ to $W^{\perp}$ is semi-negative definite with kernel generated by $\eta$. Let $U\subset W^\perp$ be a rational hyperplane in $W^\perp$ not containing $\eta$. The form $q|_U$ is negative definite, in particular, it is non-degenerate. Therefore there exists a unique rational vector $u\in U$ such that for every $v\in U$ the following holds $$ q(l,v) = q(u,v). $$ The vector $l-u$ is orthogonal to every vector in $W^\perp$. Hence $l-u$ is contained in $W_{\mathbb Q}$. Since $u\in H^{1,1}(X)$, we have $q(l-u,\sigma) = q(l,\sigma) = t$. Lemma \ref{torsion in sha} concludes the proof. \end{proof} \begin{rmk} By \cite[Cor. 3.4]{SV} the set of $s\in\Sha^0$ such that $X^s$ is projective is non-empty.
\end{rmk} \section{Sections of Lagrangian fibrations}\label{obstruction} \subsection{Obstruction for existence of a section}\label{obstruction construction} We move on to study obstructions to the existence of sections of Lagrangian fibrations. In this Section, we will always assume that $\pi\colon X\to B$ is a Lagrangian fibration with reduced irreducible fibers. In this case, the fibration $\pi \colon X \to B$ admits a local section in a neighborhood of every point $b\in B$. Consider an open cover $B = \bigcup U_i$ and choose a collection of local sections $s_i \colon U_i \to \pi^{-1}(U_i)$. By \cite[Prop. 2.1 (iii)]{Mar96} for every pair $i,j$ there exists a unique automorphism $\phi_{ij}\in Aut^0_{X/B}(U_{ij})$ such that $\phi_{ij}(s_i|_{U_{ij}}) = s_j|_{U_{ij}}$. The collection of automorphisms $\{\phi_{ij}\}$ satisfies the cocycle condition. Therefore, $\{\phi_{ij}\}$ defines a class $$\alpha(X, \pi)\in H^1(B, Aut^0_{X/B}) = \Sha. $$ We denote this class by $\alpha(X)$ when the structure of the Lagrangian fibration on $X$ is clear. \begin{lemma}\label{main property of alpha} \begin{itemize} \item [(1)] The class $\alpha(X)$ does not depend on the choice of local sections $s_i\colon U_i\to \pi^{-1}(U_i)$. \item[(2)] For any element $s\in \Sha$ we have $\alpha(X^s) = \alpha(X) + s$. \item [(3)] The class $\alpha(X)$ vanishes if and only if the fibration $\pi\colon X\to B$ admits a section. \end{itemize} \end{lemma} \begin{proof} We will prove only the third statement. The proofs of the first two follow the same lines. Suppose that $\alpha(X) = 0$. Then there exist automorphisms $\phi_i\in Aut^0_{X/B}(U_i)$ such that $\phi_{ij} = \phi^{-1}_j\phi_i$. The sections $\phi_i(s_i)$ coincide on intersections, so they define a global section of $\pi$. The converse implication is straightforward. \end{proof} Let $a(X)$ be the image of $\alpha(X)$ in $\Sha/\Sha^0 = H^2(B,\Gamma)$.
\begin{cor}\label{main property of a} Let $\pi\colon X\to B$ be a Lagrangian fibration with reduced irreducible fibers. Then the class $a(X)$ vanishes if and only if there exists a deformation $X^s$ of $X$ in the Shafarevich--Tate family that admits a holomorphic section. Moreover, in this case, the class of $s$ in $\Sha^0$ is uniquely defined. \end{cor} \begin{proof} The class $a(X)\in H^2(B,\Gamma)$ vanishes if and only if $\alpha:=\alpha(X)$ lies in $\Sha^0$. The class $\alpha(X^{-\alpha})$ vanishes by Lemma \ref{main property of alpha} (2). Therefore $$ \pi^{-\alpha} \colon X^{-\alpha} \to B $$ admits a holomorphic section. Conversely, if $\alpha \notin \Sha^0$, then for every deformation $X^s$ in the Shafarevich--Tate family we have $a(X^s) \neq 0$. \end{proof} It was proved in \cite[Thm. 3.5]{BDV} that a Lagrangian fibration admits a \textit{smooth} section if and only if one of its degenerate twistor deformations admits a holomorphic section. Combined with Corollary \ref{main property of a} we get that $a(X)$ is indeed a complete topological obstruction to the existence of a section on a Lagrangian fibration with reduced irreducible fibers. For the rest of the paper we will be proving the following theorem. \begin{thrm}\label{when a vanishes} Let $\pi \colon X \to B$ be a Lagrangian fibration on a compact hyperk\"ahler manifold over a smooth base. Let $\Gamma$ be the kernel of the exponential map $\pi_*T_{X/B} \to Aut^0_{X/B}$. Assume that the following holds: \begin{itemize} \item the fibers of $\pi$ are reduced and irreducible; \item $H^3(X, \mathbb Q) =0$; \item $H^2(B, \Gamma)$ is torsion-free. \end{itemize} Then there exists a unique deformation $(X^s, \pi^s)$ of $(X, \pi)$ in the Shafarevich--Tate family such that $\pi^s \colon X^s \to B$ admits a holomorphic section.
\end{thrm} Note that the condition $H^3(X, \mathbb Q)=0$ holds if $X$ is deformation equivalent to the Hilbert scheme of points on a K3 surface or to one of the exceptional O'Grady examples. Unfortunately, we are not able to get rid of the condition on $H^2(B, \Gamma)$. However, we strongly believe that the theorem should be true without this assumption. Therefore we pose the following conjecture. \begin{conj} Let $\pi\colon X\to B$ be a Lagrangian fibration. Assume that $b_3(X) = 0$. Then there exists a degenerate twistor deformation of $\pi$ admitting a holomorphic section. \end{conj} We believe that the assumption that fibers are reduced and irreducible is not too restrictive. \begin{conj}(\cite{Bog}) \label{first conjecture} Let $\pi\colon X\to B$ be a Lagrangian fibration on a compact hyperk\"ahler manifold. Then it has no multiple fibres. \end{conj} \begin{conj} \label{second conjecture} Let $\pi\colon X\to B$ be a Lagrangian fibration. Consider the space $\mathcal M_\eta$ of deformations of $X$ such that $\eta$ remains of type $(1,1)$. Then a very general deformation of $X$ in $\mathcal M_\eta$ is a Lagrangian fibration with irreducible fibers. \end{conj} Conjectures \ref{first conjecture} and \ref{second conjecture} hold for K3 surfaces (see \cite[Prop. 1.6 (ii)]{Huy_k3} for Conjecture \ref{first conjecture} and \cite[I.1.Thrm. 4.8]{FM} for Conjecture \ref{second conjecture}). \subsection{Hard Lefschetz type theorems for higher direct images of $\mathbb Q_X$}\label{hard lefschetz} Let $\pi \colon X \to B$ be a Lagrangian fibration. In this Subsection we prove a version of the Hard Lefschetz theorem for the sheaf $R^1\pi_*\mathbb Q_X$ which will be used in the proof of Theorem \ref{when a vanishes}. Throughout this Subsection we assume that $X$ is projective. Let $l\in H^{1,1}_\mathbb Q (X)$ be an ample class. Abusing notation, we will denote by the same letter the induced section of $R^2\pi_*\mathbb Q_X$.
For each $p\ge 0$, multiplication by $l$ induces a map $$ L\colon R^p\pi_*\mathbb Q_X \to R^{p+2}\pi_*\mathbb Q_X. $$ We will refer to $L$ as the \textit{Lefschetz map}. \begin{lemma} (Lefschetz decomposition for $R^2\pi_*\mathbb Q_X$)\label{Lefschetz decomposition} Let $\pi\colon X\to B$ be a Lagrangian fibration with $X$ projective. Assume that all fibers of $\pi$ are reduced and irreducible. Then the sheaf $R^2\pi_*\mathbb Q_X$ decomposes as $$ R^2\pi_*\mathbb Q_X = \mathbb Q_B \cdot l\oplus (R^2\pi_*\mathbb Q_X)_{prim} $$ where $(R^2\pi_*\mathbb Q_X)_{prim}$ is the kernel of the map $$ L^{n-1}\colon R^2\pi_*\mathbb Q_X \to R^{2n}\pi_*\mathbb Q_X. $$ \end{lemma} \begin{proof} Since the fibres are irreducible we have $R^{2n}\pi_*\mathbb Q_X \simeq \mathbb Q_B$. The restriction of $L^{n-1} \colon R^2\pi_*\mathbb Q_X \to R^{2n}\pi_*\mathbb Q_X$ to the subsheaf generated by $l$ is an isomorphism. Hence the claim. \end{proof} We move on to study the $(n-1)$-th power of the Lefschetz map on $R^1\pi_*\mathbb Q_X$, that is \begin{equation} \label{L on one} L^{n-1}\colon R^1\pi_*\mathbb Q_X \to R^{2n-1}\pi_*\mathbb Q_X. \end{equation} First, note that multiplication by $l\in H^{1,1}_\mathbb Q(X)$ induces maps on $R^p\pi_*\Omega^q_X$ as well: $$ L\colon R^p\pi_*\Omega^q_X \to R^{p+1}\pi_*\Omega^{q+1}_X $$ for any $p,q = 0,\dots, n$. Abusing notation, we will denote these maps also by $L$. Lefschetz maps commute with the natural morphisms $R^{\bullet}\pi_*\mathbb Q_X \to R^{\bullet}\pi_*\O_X$, the \textit{relative Hodge projections}. In particular, the following diagram is commutative. $$ \xymatrix{ R^1\pi_*\mathbb Q_X \ar[r] \ar[d]_{L^{n-1}} & R^1\pi_*\mathcal O_X \ar[d]^{L^{n-1}}\\ R^{2n-1}\pi_*\mathbb Q_X \ar[r] & R^n\pi_* \Omega^{n-1}_X } $$ The horizontal arrows in this diagram are Hodge projections. The holomorphic symplectic form $\sigma$ on $X$ induces the isomorphism $\Omega^1_X\simeq T_X$.
The composition of this isomorphism with the natural map $T_X\to\pi^* T_B$ gives us the map $$ \theta\colon \Omega^1_X\to \pi^* T_B. $$ For every pair of integers $p,q$ one can apply the functor $R^q\pi_*\Lambda^p(-)$ and get the following map of sheaves on $B$: $$ R^q\pi_*(\Lambda^p\theta)\colon R^q\pi_*\Omega^p_X \to R^q\pi_*(\pi^*\Lambda^pT_B). $$ The sheaf on the right is: $$ R^q\pi_*(\pi^*\Lambda^p T_B) \simeq R^q\pi_*\mathcal O_X \otimes \Lambda^p T_B \simeq \Omega^q_B\otimes \Lambda^p T_B\simeq R^q\pi_*\mathcal O_X \otimes \left(R^p\pi_*\mathcal O_X\right)^*. $$ The isomorphisms follow from the projection formula and Matsushita's theorem (Proposition \ref{Matsushita isomorphism}). For each $p,q$ we have constructed a map $$ f_{p,q}\colon R^q\pi_*\Omega^p_X \to R^q\pi_*\mathcal O_X\otimes \left(R^p\pi_*\mathcal O_X\right)^*. $$ In particular, there are the following maps: \begin{gather*} f_{0,1}\colon R^1\pi_*\mathcal O_X \to R^1\pi_*\mathcal O_X\\ f_{1,1}\colon R^1\pi_*\Omega^1_X \to \mathcal{End} (R^1\pi_*\mathcal O_X)\\ f_{n-1,n}\colon R^n\pi_*\Omega^{n-1}_X \to R^n\pi_*\mathcal O_X\otimes \left(R^{n-1}\pi_*\mathcal O_X\right)^* \simeq R^1\pi_*\mathcal O_X. \end{gather*} The map $f_{0,1}$ is the identity map by construction. It is easy to see that the image of $l$ under $f_{1,1}$ is the identity operator. \begin{lemma} \label{decomposition coherent} $f_{n-1,n}\circ L^{n-1} = \mathrm{id}_{R^1\pi_*\mathcal O_X}.$ \end{lemma} \begin{proof} The maps $f_{p,q}$ commute with the multiplication of forms in the sense that the following diagram is commutative for any $p_1,q_1,p_2,q_2 = 0,\dots,n$. 
$$ \xymatrix{ R^{q_1}\pi_*\Omega^{p_1}_X\otimes R^{q_2}\pi_*\Omega^{p_2}_X \ar[r]\ar[d]^{f_{p_1,q_1}\otimes f_{p_2,q_2}} & R^{q_1+q_2}\pi_*\Omega^{p_1+p_2}_X \ar[d]^{f_{p_1+p_2,q_1+q_2}}\\ R^{q_1}\pi_*\mathcal O_X\otimes \left(R^{p_1}\pi_*\mathcal O_X\right)^* \otimes R^{q_2}\pi_*\mathcal O_X\otimes \left(R^{p_2}\pi_*\mathcal O_X\right)^* \ar[r] & R^{q_1+q_2}\pi_*\mathcal O_X\otimes \left(R^{p_1+p_2}\pi_*\mathcal O_X\right)^* } $$ It follows that the following diagram is commutative $$ \xymatrix{ R^1\pi_*\mathcal O_X \o \left(R^1\pi_* \Omega^1_X\right)^{\o\: n-1} \ar[r] \ar[d]^{\mathrm{id}\:\o (f_{1,1})^{\o\: n-1}} & R^n\pi_*\Omega^{n-1}_X \ar[d]^{f_{n-1,n}}\\ R^1\pi_*\mathcal O_X\o \left(\mathcal{End}(R^1\pi_*\mathcal O_X)\right)^{\o\: n-1} \ar[r] & R^1\pi_*\mathcal O_X } $$ Since $f_{1,1}(l) = \mathrm{id}_{R^1\pi_*\O_X}$, we obtain that for every local section $\alpha$ of $R^1\pi_*\O_X$ $$ f_{n-1,n}(\alpha\cdot l^{n-1}) = f_{n-1,n}\circ L^{n-1}(\alpha) = \alpha. $$ \end{proof} \begin{cor} \label{Lefschetz} Let $\pi\colon X\to B$ be a Lagrangian fibration on a projective hyperk\"ahler manifold $X$. Then there exists a sheaf $\mathcal N$ on $B$ such that the sheaf $R^{2n-1}\pi_*\mathbb Q_X$ decomposes into the direct sum $$ R^{2n-1}\pi_*\mathbb Q_X \simeq R^1\pi_*\mathbb Q_X \oplus \mathcal N. $$ The embedding of the first summand is given by the map (\ref{L on one}). \end{cor} \begin{proof} The map (\ref{L on one}) is an isomorphism after the restriction to $B^\circ\subset B$ by the Hard Lefschetz theorem. Together with Lemma \ref{decomposition coherent} this implies that the map $f_{n-1,n}|_{B^\circ}$ sends $R^{2n-1}\pi_*\mathbb Q|_{B^\circ}$ isomorphically to $R^1\pi_*\mathbb Q|_{B^\circ}$. A local section of $R^1\pi_*\mathcal O_X$ whose restriction to $B^\circ$ lies in $R^1\pi_*\mathbb Q$ is necessarily a section of $R^1\pi_*\mathbb Q$ by the proof of Proposition \ref{Gamma is almost R^1pi_*Z}. 
Hence the map $f_{n-1,n}$ descends to a map $$ f_{n-1, n}|_{R^{2n-1}\pi_*\mathbb Q_X}\colon R^{2n-1}\pi_*\mathbb Q\to R^1\pi_*\mathbb Q. $$ This map satisfies the following property (Lemma \ref{decomposition coherent}): $$ f_{n-1,n}|_{R^{2n-1}\pi_*\mathbb Q_X}\circ L^{n-1} = \mathrm{id}_{R^1\pi_*\mathbb Q_X}. $$ The claim now follows. \end{proof} \begin{rmk} It can be proven that the sheaf $\mathcal N$ from Corollary \ref{Lefschetz} is supported on a codimension two subset. The result follows from the description of a general singular fiber of a Lagrangian fibration given in \cite{HO9}. We do not know whether the sheaf $\mathcal N$ can be non-trivial. \end{rmk} \subsection{The discrete part of Shafarevich--Tate groups}\label{obstruction b_3} Let $\Sha$ be the Shafarevich--Tate group of a Lagrangian fibration $\pi\colon X\to B$. Recall that the group $\Sha/\Sha^0$ of connected components of $\Sha$ is isomorphic to $H^2(B,\Gamma)$ (see the exact sequence (\ref{sha zero to sha})). We sometimes refer to $\Sha/\Sha^0$ as the discrete part of $\Sha$. By Proposition \ref{Gamma is almost R^1pi_*Z}, there is an isomorphism $$ H^2(B,\Gamma)\o_\mathbb Z \mathbb Q \simeq H^2(B, R^1\pi_*\mathbb Q_X). $$ Consider the Leray spectral sequence of $\pi$ with $E^{p,q}_2 = H^p(B, R^q\pi_*\mathbb Q_X)$ converging to $H^{p+q}(X,\mathbb Q)$. The natural map $E^{p,0}_2 = H^p(B, \mathbb Q) \to H^p(X, \mathbb Q)$ is given by the pullback map $\pi^*$. The pullback map on cohomology is injective for every surjective map of compact K\"ahler manifolds (see e.g. \cite[Lem. 7.28]{Vois1}). This implies that $E^{p,0}_2 = E^{p,0}_{\infty}$. Therefore, the differential $d_2\colon E_2^{2,1}\to E_2^{4,0}$ vanishes. All the higher differentials with the source in $E^{2,1}$ vanish because their targets are trivial groups. For the same reason, all the higher differentials $d_n, n>2$ with the source in $E^{0,2}$ vanish too. Since $E^{3,0}_2 = H^3(B,\mathbb Q)$ is trivial, there is an embedding of $E^{2,1}_\infty = E^{2,1}_2/\operatorname{im} (d_2)$ into $H^3(X,\mathbb Q)$.
Moreover, we have the following exact sequence of $\mathbb Q$-vector spaces \begin{equation} \label{amazing exact sequence} H^2(X,\mathbb Q) \xrightarrow{r} H^0(B,R^2\pi_*\mathbb Q_X) \xrightarrow{d_2} H^2(B,R^1\pi_*\mathbb Q_X) \xrightarrow{e} H^3(X,\mathbb Q). \end{equation} Our goal is to prove the following theorem: \begin{thrm}\label{differential vanishes} Let $\pi\colon X\to B$ be a Lagrangian fibration with reduced irreducible fibers. Then the differential $$ d_2\colon H^0(B,R^2\pi_*\mathbb Q_X)\to H^2(B,R^1\pi_*\mathbb Q_X) $$ in the Leray spectral sequence of $\pi$ vanishes. \end{thrm} \begin{proof} {\bf Step 1:} Assume that $X$ is projective. Lemma \ref{Lefschetz decomposition} implies that $$ H^0(B, R^2\pi_*\mathbb Q_X) = H^0(B,\mathbb Q)\oplus H^0(B,(R^2\pi_*\mathbb Q_X)_{prim}) $$ where $(R^2\pi_*\mathbb Q_X)_{prim}$ is the kernel of the map $L^{n-1}\colon R^2\pi_*\mathbb Q_X\to R^{2n}\pi_*\mathbb Q_X$. The summand $H^0(B,\mathbb Q)$ is generated by the image of the ample class $l$ of $X$ in $H^0(R^2\pi_*\mathbb Q_X)$; in particular, it lies in the image of the restriction map $r$. Hence $d_2|_{H^0(B,\mathbb Q)}$ vanishes. We are left to prove that $d_2|_{H^0((R^2\pi_*\mathbb Q_X)_{prim})}$ vanishes. The differentials in the Leray spectral sequence commute with the Lefschetz maps. In particular, the following diagram is commutative. $$ \xymatrix{ H^0((R^2\pi_*\mathbb Q_X)_{prim})\ar[r]^{d_2} \ar[d]^{L^{n-1}} & H^2(R^1\pi_*\mathbb Q_X)\ar[d]^{L^{n-1}}\\ H^0(R^{2n}\pi_*\mathbb Q_X)\ar[r]^{d_2} & H^2(R^{2n-1}\pi_*\mathbb Q_X) } $$ The vertical arrow on the left-hand side vanishes by the definition of the primitive part of $R^2\pi_*\mathbb Q_X$. The vertical map on the right-hand side is injective. Indeed, by Corollary \ref{Lefschetz} the map $L^{n-1}$ embeds $H^2(R^1\pi_*\mathbb Q_X)$ into $H^2(R^{2n-1}\pi_*\mathbb Q_X)$ as a direct summand. It follows that $d_2|_{H^0((R^2\pi_*\mathbb Q_X)_{prim})}$ must vanish.
\hfill {\bf Step 2:} In the case when $X$ is only assumed to be K\"ahler there exists a degenerate twistor deformation $\pi'\colon X'\to B$ of $X$ such that $X'$ is projective (\cite[Cor. 3.4]{SV}). The statement of the theorem holds for $X'$. Since topologically the maps $\pi$ and $\pi'$ coincide, the statement of the theorem holds for $X$ as well. \end{proof} \begin{cor} \label{restriction is surjective} In the setting of Theorem \ref{differential vanishes}, the following holds: \begin{itemize} \item [(1)] The restriction map $r\colon H^2(X,\mathbb Q)\to H^0(B,R^2\pi_*\mathbb Q_X)$ is surjective. \item [(2)] The map $e\colon H^2(R^1\pi_*\mathbb Q_X)\to H^3(X,\mathbb Q)$ from the exact sequence (\ref{amazing exact sequence}) is injective. \end{itemize} \end{cor} \begin{proof} Follows from Theorem \ref{differential vanishes} and the exact sequence (\ref{amazing exact sequence}). \end{proof} \noindent {\bf Proof of Theorem \ref{when a vanishes}:} It follows from Corollary \ref{restriction is surjective} (2) and Proposition \ref{Gamma is almost R^1pi_*Z} that there exists an embedding \[ (\Sha/\Sha^0)\o_{\mathbb Z} \mathbb Q \hookrightarrow H^3(X, \mathbb Q). \] In particular, if $b_3(X)$ vanishes, $\Sha/\Sha^0$ is finite. Since $\Sha/\Sha^0 \simeq H^2(B, \Gamma)$ is torsion-free by assumption, it is therefore trivial, so $a(X) = 0$ and Corollary \ref{main property of a} finishes the proof.\qed \begin{rmk} If $\pi\colon X\to B$ is an elliptic fibration on a K3 surface, then $\Sha/\Sha^0$ is trivial (\cite[I.1, Lemma 5.1]{FM}). \end{rmk}
\section*{Introduction} \label{s:intro} The {\em HOMFLYPT polynomial} $P(L)$ is an invariant of an oriented link $L$. It is defined as the Laurent polynomial in two variables $a$ and $z$ with integer coefficients satisfying the following skein relation and the initial condition: \begin{equation}\label{eq:skein} aP(\rb{-4.2mm}{\ig[width=10mm]{lrints.eps}}) - a^{-1}P(\rb{-4.2mm}{\ig[width=10mm]{rlints.eps}}) = zP(\rb{-4.2mm}{\ig[width=10mm]{twoup.eps}})\ ;\qquad P(\rb{-4.2mm}{\ig[width=10mm]{unkn.eps}})\quad = \quad 1\,. \end{equation} If $L$ is an unlink with $m$ components then $P(L)=\Bigl(\frac{a-a^{-1}}{z}\Bigr)^{m-1}$. The proof of the existence of such an invariant is long and cumbersome. It was established simultaneously and independently by five groups of authors \cite{HOM,PT}. This paper is devoted to Gauss diagram formulas for Vassiliev invariants coming from the HOMFLYPT polynomial. It is known \cite{GPV} that any Vassiliev knot invariant may be presented by a Gauss diagram formula. This type of formula is the simplest for computational purposes; however, the algorithm for producing them is complicated and until recently only a few low-degree cases were described explicitly. The first description of such formulas for an infinite family of Vassiliev invariants was given in \cite{CKR}, where the coefficients of the Conway polynomial were considered. This paper generalizes the result of \cite{CKR} to the HOMFLYPT polynomial. We use a non-standard change of variables (used formerly in \cite{G2}), leaving $z$ alone and plugging in $a=e^h$ to obtain a power series $\sum_{k,l}p_{k,l}h^kz^l$. The coefficients $p_{k,l}$ are Vassiliev invariants of degree $\leqslant k+l$, see \cite{G2}. We give the Gauss diagram formulas for $p_{k,l}$ for arbitrary $k,l$. These formulas are new already for invariants of degree 3. The paper is organized in the following way.
In Section \ref{s:homfly} we start from the scheme of \cite{H, LM, PT}, extracting from it an explicit state model for the HOMFLYPT polynomial following \cite{Ja} in Section \ref{s:jaeger}. We then briefly review the notions of Gauss diagrams in Section \ref{s:gaus-diagr} and reformulate the state model in these terms in Section \ref{s:jaeger-ga}. The expansion of $P(L)$ into power series in $h$ and $z$ is considered in Section \ref{s:vas-from-homfly}. In the same section we recall the definition of the Gauss diagram formulas for Vassiliev invariants. Finally, we describe the Gauss diagram formulas for $p_{k,l}$ in Section \ref{s:result}. In the last Section \ref{s:example} we analyze low degree cases in detail. Note that using instead of \eqref{eq:skein} the skein relation for the two-variable Kauffman polynomial, one gets a similar state model. We plan to consider the resulting Gauss diagram formulas in a forthcoming paper. We are grateful to O.~Viro, L.~Traldi, and to the anonymous referee for numerous corrections to the first version of the paper and useful remarks. This work was done while both authors were visiting the Max-Planck-Institut f\"ur Mathematik in Bonn, which we would like to thank for excellent working conditions and hospitality. The second author was supported by a grant 3-3577 of the Israel Ministry of Science and ISF grant 1261/05. \section{HOMFLYPT and descending diagrams} \label{s:homfly} The skein relation \eqref{eq:skein} allows one to calculate the HOMFLYPT polynomial of a link. Following \cite{H, LM, PT}, this can be done by ordering a link diagram and then transforming it into a descending diagram. We call a diagram $D$ {\it ordered}, if its components $D_1$, $D_2$,\dots ,$D_m$ are ordered and on every component a (generic) base point is chosen.
An ordered diagram is {\it descending}, if $D_i$ is above $D_j$ for all $i<j$ and if for every $i$ as we go along $D_i$ starting from its base point along the orientation we pass each self-crossing first on the overpass and then on the underpass. An elementary step of the algorithm computing $P(L)$ consists of the following procedure. Suppose that $D$ is an ordered diagram and that the subdiagram $D_1,\dots,D_{i-1}$ is already descending. We go along $D_i$ (starting from the base point) looking for the first crossing which fails to be descending. At such a crossing $x$ we apply the skein relation. Namely, depending on the sign $\varepsilon$ (the local writhe) of the crossing, we express $P(D)$ as \begin{equation}\label{eq:descend}\begin{array}{ccl} P(\rb{-4.2mm}{\ig[width=10mm]{lrints.eps}})&=&a^{-2}P(\rb{-4.2mm}{\ig[width=10mm]{rlints.eps}}) + a^{-1}zP(\rb{-4.2mm}{\ig[width=10mm]{twoup.eps}})\vspace{5pt}\\ P(\rb{-4.2mm}{\ig[width=10mm]{rlints.eps}})&=&a^{2}P(\rb{-4.2mm}{\ig[width=10mm]{lrints.eps}}) - azP(\rb{-4.2mm}{\ig[width=10mm]{twoup.eps}}) \end{array}\end{equation} Denote the corresponding diagrams $D^\varepsilon$, $D^{-\varepsilon}$, $D^0$. The ordering of $D=D^\varepsilon$ induces an ordering of $D^{-\varepsilon}$ (in an obvious way); the ordering of $D^0$ requires some explanation. If $x$ was a crossing of $D_i$ with $D_j$, $j>i$, then these two components merge into a single component $D^0_i$ of $D^0$, with the base point being the base point of $D_i$. If $x$ was a self-crossing of $D_i$, then $D_i$ splits into two components: $D^0_i$, which contains the base point of $D_i$, and $D^0_{i+1}$, where we choose the base point in a neighborhood of $x$. In both cases the order of the remaining components shifts accordingly. The diagrams $D^{-\varepsilon}$, $D^0$ are ``more'' descending than $D^\varepsilon$. At the next step we apply the same procedure to each of them.
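The arithmetic behind this recursion can be checked mechanically. The following Python sketch (an illustration of ours, not part of the original algorithm; all names are hypothetical) models Laurent polynomials in $a$ and $z$ as sparse dictionaries of exponent pairs and reproduces the two-step trefoil computation carried out in Example \ref{ex:1a} below.

```python
from collections import defaultdict

def mono(c, i, j):
    # the monomial c * a^i * z^j as a sparse Laurent polynomial
    return {(i, j): c}

def add(*ps):
    out = defaultdict(int)
    for p in ps:
        for k, c in p.items():
            out[k] += c
    return {k: c for k, c in out.items() if c != 0}

def mul(p, q):
    out = defaultdict(int)
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            out[(i1 + i2, j1 + j2)] += c1 * c2
    return {k: c for k, c in out.items() if c != 0}

# delta = (a - a^{-1})/z, the value of P on a two-component unlink
delta = add(mono(1, 1, -1), mono(-1, -1, -1))

# second step of the algorithm: P(D^0) = a^2 * delta - a z * 1
P_D0 = add(mul(mono(1, 2, 0), delta), mono(-1, 1, 1))

# first step: P(3_1) = a^2 * 1 - a z * P(D^0)
P_31 = add(mono(1, 2, 0), mul(mono(-1, 1, 1), P_D0))

assert P_31 == {(2, 0): 2, (4, 0): -1, (2, 2): 1}  # 2a^2 - a^4 + a^2 z^2
```

The dictionary representation is convenient here because the skein relation only ever adds and multiplies Laurent monomials in $a$ and $z$.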
\begin{ex}\label{ex:1a} For the trefoil $3_1$ the algorithm consists of two steps, illustrated in the figure below. The diagram $D^+$ appearing in the first step is already descending; the diagram $D^0$ is not, so the second step is needed to transform it. $$\risS{-15}{31alg}{\put(20,60){\mbox{$D^-$}} \put(-60,85){\mbox{Step 1:}}\put(-60,20){\mbox{Step 2:}} \put(85,60){\mbox{$D^+$}}\put(150,60){\mbox{$D^0$}} \put(25,-5){\mbox{$D^-$}}\put(98,-5){\mbox{$D^+$}} \put(140,-5){\mbox{$D^0$}}}{160}{90}{25} $$ Hence $P(3_1)=a^2\cdot 1 - az\Bigl( a^2\cdot\frac{a-a^{-1}}{z}-az\cdot 1\Bigr) = (2a^2-a^4)+a^2z^2$. \end{ex} \section{State model reformulation} \label{s:jaeger} The state model of \cite{Ja} for the HOMFLYPT polynomial is a convenient reformulation of the algorithm of Section \ref{s:homfly}. A {\it state} $S$ on a link diagram $D$ is a subset of its crossings. The HOMFLYPT polynomial is going to be a sum over the states. Let $D(S)$ be the link diagram obtained by smoothing every crossing in $S$ according to orientation, and let $c(S)$ be the number of its components. We will not use the topology of $D(S)$, but its combinatorics will determine the contribution of the state $S$ to the state sum. The contribution will be a product of the global weight of the state as a whole, $\bigl(\frac{a-a^{-1}}{z}\bigr)^{c(S)-1}$, and the local weights of the crossings of the diagram. The ordering of $D$ induces an ordering of $D(S)$ (in the way explained in Section \ref{s:homfly} above) and thus determines a tracing of the link $D(S)$. The local weight $\langle x|D|S \rangle$ of a crossing $x$ of $D$ depends on the first passage of a neighborhood of $x$ in the tracing and on the sign $\varepsilon$ of $x$. Namely, if $x$ is in $S$ and we approach $x$ for the first time on an overpass of $D$ then $\langle x|D|S \rangle=0$ (since such a situation does not occur in the above algorithm).
If we approach $x$ on an underpass of $D$ then $\langle x|D|S \rangle=\varepsilon a^{-\varepsilon}z$ (i.e., the coefficient of $D^0$ in \eqref{eq:descend}). If $x$ does not belong to $S$ and we approach $x$ for the first time on an overpass then $\langle x|D|S \rangle=1$ (since in the above algorithm we do not apply the skein relation to $x$). If we approach $x$ on an underpass then $\langle x|D|S \rangle=a^{-2\varepsilon}$ (i.e., the coefficient of $D^{-\varepsilon}$ in \eqref{eq:descend}). These assignments can be summarized in the following figure. $$\begin{array}{c||c|c|c|c} \raisebox{5pt}{First passage:}&\risS{10}{br2}{}{25}{0}{0}&\risS{0}{br1}{}{25}{20}{3}& \risS{0}{br4}{}{25}{0}{0}&\risS{0}{br3}{}{25}{0}{0} \\ \hline\hline \risS{-8}{cr_p}{}{25}{15}{10} & 0 & a^{-1}z & 1 & a^{-2}\\ \hline \risS{-8}{cr_m}{}{25}{15}{10} & -az & 0 & a^2 & 1 \end{array} $$ Denote by $\langle D|S \rangle:=\prod_x \langle x|D|S \rangle$ the product of local weights of all crossings. For a link $L$ with a diagram $D$ we have \cite[Proposition 2]{Ja}: $$P(L)=\sum_S\ \ \langle D|S \rangle\cdot \left(\frac{a-a^{-1}}{z}\right)^{c(S)-1} $$ \begin{ex} Consider a based trefoil diagram $D$ and a state $S$ consisting of one crossing $\{x_1\}$. $$D=\risS{-15}{31}{ \put(17,-3){\mbox{\scriptsize $x_1$}} \put(34,23){\mbox{\scriptsize $x_2$}} \put(0,26){\mbox{\scriptsize $x_3$}}}{40}{20}{20} \hspace{2cm} D(S)=\risS{-20}{conwlin}{}{40}{20}{0} $$ The tracing of $D(S)$ first approaches the crossing $x_1$ on the strand which was an underpass in $D$. So its weight will be $\lvw{x_1}{D}=-az$. Similarly the weights of the other two crossings are $\lvw{x_2}{D}=a^2$ and $\lvw{x_3}{D}=1$. So the total contribution from this state will be equal to $-a^3z\bigl(\frac{a-a^{-1}}{z}\bigr) = a^2-a^4$. The next table shows the contributions from all eight states. Non-zero weights come from states corresponding to descending diagrams appearing at the end of the algorithm, see Example \ref{ex:1a}.
$$\begin{array}{c|c|c|c|c|c|c|c} \emptyset & \{x_1\} & \{x_2\} & \{x_3\} & \{x_1,x_2\} & \{x_1,x_3\} & \{x_2,x_3\} & \{x_1,x_2,x_3\} \\ \hline \makebox(0,12){} a^2& a^2-a^4 & 0 & 0 & a^2z^2 & 0 & 0 & 0 \end{array} $$ So we recover the result of Example \ref{ex:1a}: $P(3_1)=(2a^2-a^4)+ z^2a^2$. \end{ex} \medskip \begin{rem} Smoothing a crossing from a state $S$ changes the number of components by one. Hence the cardinality $|S|$ and the difference $m-c(S)$ (where $m$ is the number of components of $D$) are congruent modulo 2. Therefore the HOMFLYPT polynomial $P(L)$ is even in each of the variables $a$ and $z$ if $m$ is odd, and it is an odd polynomial if $m$ is even. \end{rem} \begin{rem} The negative powers of $z$ come from the factors $\bigl(\frac{a-a^{-1}}{z}\bigr)^{c(S)-1}$. A smoothing of a crossing $x\in S$ may increase $c(S)$ by one; however, this increment will be compensated by the local weight $\langle x|D|S \rangle$. As a consequence we have that the lowest power of $z$ in the HOMFLYPT polynomial of $L$ is at least $-m+1$. In particular the HOMFLYPT polynomial $P(K)$ of a knot $K$ is a genuine polynomial in $z$, i.e., does not contain terms with negative powers of $z$. \end{rem} \section{Gauss diagrams} \label{s:gaus-diagr} \begin{defn} Gauss diagrams provide an alternative and more combinatorial way to present links. For a link diagram $D$ consider a collection of (counterclockwise) oriented circles parameterizing it. We unite the two preimages of a crossing of $D$ into a pair and connect them by an arrow pointing from the overpassing preimage to the underpassing one. To each arrow we assign the sign $\pm1$ of the corresponding crossing. The result is called the {\it Gauss diagram} $G_D$ of the link diagram $D$. A link can be uniquely reconstructed from the corresponding Gauss diagram \cite{GPV}. \end{defn} For example, a Gauss diagram of the trefoil looks as follows.
$$D=\ \risS{-15}{31}{}{40}{20}{20} \hspace{3cm} G_{D}=\ \risS{-15}{gd-31bp}{ \put(-135,-15){\mbox{\tt A knot and its Gauss diagram}}}{40}{27}{35}\label{d3-1} $$ Not every diagram with arrows is realizable as a Gauss diagram of a classical link. For example, $\risS{-5}{gauss-nr}{}{15}{14}{8}\ $ is not realizable regardless of the signs of its arrows. An {\it abstract Gauss diagram}, or an {\it arrow diagram}, is a generalization of the notion of a Gauss diagram, in which we forget about realizability. In other words, an arrow diagram consists of a number of oriented circles with several arrows connecting pairs of distinct points on them. The arrows are equipped with signs $\pm1$. We consider these diagrams up to orientation preserving diffeomorphisms of the circles. We are going to work with {\it ordered Gauss diagrams}, i.e. Gauss diagrams with ordered circles and a base point $\risS{-2}{bp}{}{10}{0}{0}_1, \risS{-2}{bp}{}{10}{0}{0}_2, \dots, \risS{-2}{bp}{}{10}{0}{0}_m$\, on each circle corresponding to an ordering of $D$. Similarly, an {\it ordered arrow diagram} is an arrow diagram equipped with an ordering of the circles and a base point (different from the end points of the arrows) on each of them. Two Gauss diagrams represent isotopic links if and only if they are related by a finite number of Reidemeister moves (see, for example, \cite{GPV,Oll,CDbook}). $$\Omega_1:\ \risS{-15}{virrI}{ \put(-2,5){\mbox{$\scriptstyle \varepsilon$}} \put(85,5){\mbox{$\scriptstyle \varepsilon$}} }{100}{18}{15} \hspace{2cm} \Omega_2:\ \risS{-15}{virrII1}{ \put(6,18){\mbox{$\scriptstyle \varepsilon$}} \put(22,18){\mbox{$\scriptstyle -\varepsilon$}} }{95}{0}{0} $$ $$\Omega_3: \risS{-18}{virrIII}{}{120}{20}{22}\ . $$ Note that the segments involved in $\Omega_2$ or $\Omega_3$ may lie on different components of the link. So the order in which they are traced along the link may be arbitrary.
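A cheap necessary test for realizability is Gauss's parity criterion: in the Gauss diagram of a classical knot, every chord is interleaved with an even number of other chords. The following sketch (our illustration; the criterion is necessary but not sufficient, and we take the two-interleaved-chord diagram as the unrealizable example, which is presumably the one depicted above) checks this condition on Gauss words, where each chord label appears twice along the circle.

```python
def interleaved(word, c1, c2):
    # chords c1, c2 are interleaved iff exactly one endpoint of c2
    # lies between the two endpoints of c1 along the circle
    i, j = [k for k, c in enumerate(word) if c == c1]
    return sum(1 for c in word[i + 1:j] if c == c2) == 1

def passes_parity_test(word):
    chords = sorted(set(word))
    return all(
        sum(interleaved(word, c1, c2) for c2 in chords if c2 != c1) % 2 == 0
        for c1 in chords)

trefoil = [1, 2, 3, 1, 2, 3]   # Gauss word of the trefoil: each chord links two others
unrealizable = [1, 2, 1, 2]    # two interleaved chords: each links exactly one other

assert passes_parity_test(trefoil)
assert not passes_parity_test(unrealizable)
```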
An {\it ordered link} is an equivalence class of ordered Gauss diagrams modulo Reidemeister moves which do not involve base points. For $m=1$ this notion is equivalent to the notion of long knots defined as embeddings of $\R$ into $\R^3$ which coincide with a standard embedding (say, an $x$-axis) outside a compact set. It is well known that for classical knots the theories of long and closed knots coincide. \section{State models on Gauss diagrams} \label{s:jaeger-ga} All notions and constructions of Section \ref{s:jaeger} have a straightforward translation to the language of Gauss diagrams. A {\it state} $S$ on an abstract Gauss diagram $G$ is a subset of its arrows. Let $G(S)$ be the abstract Gauss diagram obtained by doubling every arrow in $S$ as in the figure $$\risS{-6}{arrow}{}{30}{0}{5}\qquad \risS{-2}{totor}{}{25}{0}{0}\qquad\risS{-6}{darrow}{}{30}{0}{0}\ , $$ and let $c(S)$ be the number of its circles. The ordering of $G$ induces an ordering of $G(S)$. The local weight $\langle \alpha|G|S \rangle$ of an arrow $\alpha$ of $G$ in general depends on whether $\alpha$ belongs to $S$, on the first passage in a neighborhood of $\alpha$ in the tracing of $G(S)$, and on the sign $\varepsilon$ of $\alpha$. Given a table of such local weights, we denote by $\langle G|S \rangle:=\prod_\alpha \langle \alpha|G|S \rangle$ the product of local weights of all arrows and define a polynomial $P(G)$ by \begin{equation}\label{eq:PG} P(G):=\sum_S\ \ \langle G|S \rangle\cdot \left(\frac{a-a^{-1}}{z}\right)^{c(S)-1} \end{equation} The table of local weights for the HOMFLYPT state model (readily taken from Section \ref{s:jaeger}) is shown below.
\begin{equation}\label{eq:lwg} \begin{array}{c||c|c|c|c} \raisebox{7pt}{First passage:}& \risS{0}{fp-b}{}{35}{25}{3}&\risS{0}{fp-t}{}{35}{0}{0}& \risS{0}{fp-l}{}{35}{0}{0}&\risS{0}{fp-r}{}{35}{0}{0} \\ \hline\hline \risS{-8}{cr_p-gd}{}{40}{22}{15} & a^{-1}z & 0 & a^{-2} & 1 \\ \hline \risS{-8}{cr_m-gd}{}{40}{22}{15} & -az & 0 & a^2 & 1 \end{array} \end{equation} \medskip \begin{ex} For the Gauss diagram of the trefoil the states with non-zero weights are the following. $$\begin{array}{r||c|c|c} \mbox{States of\quad } \risS{-10}{gd-31bp}{}{30}{20}{18}\ :& \risS{-10}{s31-0}{}{30}{0}{0} & \risS{-10}{s31-1}{}{33}{0}{0} & \risS{-10}{s31-12}{}{32}{0}{0} \\ \mbox{Weights}\ : & 1\cdot a^2\cdot 1& 1\cdot (-az)\cdot a^2\cdot \Bigl(\frac{a-a^{-1}}{z}\Bigr)& 1\cdot (-az)\cdot (-az) \end{array} $$ Hence, $P(G)=(2a^2-a^4)+ z^2a^2$. \end{ex} The HOMFLYPT polynomial defined by this state model may be called the {\em descending} HOMFLYPT polynomial. An {\em ascending} HOMFLYPT polynomial may be defined in a similar way, interchanging the values of the first two columns and the last two columns in the table (\ref{eq:lwg}) of local weights. For classical links these two polynomials coincide. \section{Vassiliev invariants coming from the HOMFLYPT polynomial} \label{s:vas-from-homfly} \subsection{HOMFLYPT power series} A standard way \cite{BN,BL} to relate Vassiliev invariants to the HOMFLYPT polynomial is to make a substitution $a=e^{Nh}$, $z=e^h-e^{-h}$ and then take the Taylor expansion of $P(L)$ in the variable $h$. The coefficient at $h^n$ turns out to be a Vassiliev invariant of order $\leqslant n$ which depends on a parameter $N$. In this paper we are working in a different way, following \cite{G2}. Namely, we substitute $a=e^h$ and take the Taylor expansion in $h$. The result will be a Laurent polynomial in $z$ and a power series in $h$. Let $p_{k,l}(L)$ be its coefficient at $h^kz^l$. It is not difficult to see that for any link $L$ the total degree $k+l$ is not negative. 
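As an illustration, the coefficients $p_{k,l}$ of the trefoil can be read off from $P(3_1)=2a^2-a^4+a^2z^2$ computed in Example \ref{ex:1a}: the $h^k$-coefficient of $c\,e^{mh}$ is $c\,m^k/k!$. The following Python sketch (our own illustration; all names are hypothetical) performs this expansion exactly, using rational arithmetic.

```python
from fractions import Fraction
from math import factorial

# P(3_1) = 2a^2 - a^4 + a^2 z^2, stored as {z-exponent: [(coeff, a-exponent), ...]}
P_trefoil = {0: [(2, 2), (-1, 4)], 2: [(1, 2)]}

def p(k, l, P):
    """Coefficient p_{k,l}: substitute a = e^h and take the h^k z^l coefficient.
    The h^k coefficient of c * e^{mh} is c * m^k / k!."""
    return sum(Fraction(c * m ** k, factorial(k)) for c, m in P.get(l, []))

assert p(0, 0, P_trefoil) == 1    # a = 1, z = 0 gives P = 1 for a knot
assert p(0, 2, P_trefoil) == 1    # second Conway coefficient of the trefoil
assert p(1, 0, P_trefoil) == 0
assert p(2, 0, P_trefoil) == -4
```

Note that $p_{0,2}=1$ is the second coefficient of the Conway polynomial of the trefoil, consistent with the case $k=0$ discussed below.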
(It also follows from the Jaeger model in Section \ref{s:jaeger}.) \begin{lemG}[\cite{G2}] $p_{k,l}(L)$ is a Vassiliev invariant of order $\leqslant k+l$. \end{lemG} \begin{proof} Indeed, plugging $a=e^h$ into the skein relation we get $$P(\rb{-4.2mm}{\ig[width=10mm]{lrints.eps}})\ -\ P(\rb{-4.2mm}{\ig[width=10mm]{rlints.eps}})\ = \ zP(\rb{-4.2mm}{\ig[width=10mm]{twoup.eps}}) + h(\mbox{some terms})\ .$$ Since all terms of the HOMFLYPT polynomial have non-negative total degree in $z$ and $h$, the terms of the right hand side have degree at least $1$. Therefore, if we change $n+1$ crossings in different places then the alternating sum of the $2^{n+1}$ polynomials will contain only monomials of degree $\geqslant n+1$. Hence the coefficient at any degree $n$ term will be zero. \end{proof} \begin{rem} After the substitution $a=e^h$ and the Taylor expansion in $h$ the factor $\frac{a-a^{-1}}{z}$ becomes $ \frac{2h + \dots}{z}$. In other words, its total degree in $h$ and $z$ is not negative. Therefore, the total degree $k+l$ of the monomial $h^kz^l$ of $P(L)$ is not negative; however, the exponent $l$ of $z$ may be as negative as $-k+1$. \end{rem} Our next goal is to describe the Gauss diagram formulas for $p_{k,l}(L)$. Note that the case $k=0$ corresponds to the substitution $a=1$ into the HOMFLYPT polynomial, i.e. to the Conway polynomial. Thus $p_{0,l}(L)$ are the coefficients of the Conway polynomial, for which the Gauss diagram formulas were found in \cite{CKR}. Hence our work may be considered a generalization of \cite{CKR}. \subsection{Gauss diagram formulas for Vassiliev invariants} \label{s:arrow-diagr} Let $\mathscr{A}$ be a free $\Z$-module generated by ordered arrow diagrams with $m$ circles. Define a map $I:\mathscr{A}\to\mathscr{A}$ by $I(G):=\sum_{A\subseteq G} A$ for any (abstract, ordered) Gauss diagram $G$, and extend it to $\mathscr{A}$ by linearity.
Here $A\subseteq G$ means the arrow subdiagram $A$ containing the same circles as the whole diagram $G$ but only a subset of arrows of $G$ with their signs. A natural scalar product on $\mathscr{A}$ is given by $(A,B):=0$ if $A$ is not equal to $B$, and $(A,B):=1$ if $A=B$ for a pair of arrow diagrams $A$ and $B$. Let us define a pairing $\scp{A}{G}:=(A,I(G))$. \medskip \begin{defn} Let $A$ be a fixed element of $\mathscr{A}$. By {\it a Gauss diagram formula} we mean a function $\mathcal{I}_A$ on abstract Gauss diagrams defined by $\mathcal{I}_A: G\mapsto \scp{A}{G}$. \end{defn} \medskip If $A$ is chosen at random then $\mathcal{I}_A(G)$ usually changes under Reidemeister moves and thus does not define any link invariant. However, for some special choices of $A$ it might be a link invariant. According to \cite{GPV} any Vassiliev invariant of long knots can be expressed by a Gauss diagram formula. In the following sections we describe an algorithm for finding such formulas for the invariants $p_{k,l}(L)$ coming from the HOMFLYPT polynomial. For brevity, in what follows we will use {\it unsigned arrow diagrams}, understanding by that a linear combination of arrow diagrams with all possible choices of signs, each appearing with coefficient $\pm1$ depending on whether an even or odd number of negative signs was chosen. \bigskip \begin{ex} If $m=2$ and $$A\ =\ \risS{-12}{adln}{}{70}{15}{12}\ :=\ \risS{-12}{adln-p}{}{70}{0}{0}\ -\ \risS{-12}{adln-m}{}{70}{0}{0}\ , $$ then $\mathcal{I}_A(G)$ is equal to the linking number of the components. If $m=1$ and $$A\ =\ \risS{-10}{gd2}{}{25}{0}{12}\ :=\ \risS{-10}{gd2-pp}{}{25}{0}{0}\ -\ \risS{-10}{gd2-pm}{}{25}{0}{0}\ -\ \risS{-10}{gd2-mp}{}{25}{0}{0}\ +\ \risS{-10}{gd2-mm}{}{25}{0}{0}\ , $$ then $\mathcal{I}_A(G)$ is equal to the second coefficient of the Conway polynomial, $p_{0,2}(G)$ (see \cite{PV}).
\end{ex} \section{Gauss diagram formulas for HOMFLYPT coefficients} \label{s:result} Our aim is to figure out contributions of various arrow subdiagrams to $p_{k,l}$, using the state model from Section \ref{s:jaeger-ga}. Consider a state model on an arrow diagram $A$ with the following table of local weights $\langle \alpha|A|S \rangle$: \begin{equation}\label{eq:lwa} \begin{array}{c||c|c|c|c} \raisebox{7pt}{First passage:}& \risS{0}{fp-b}{}{35}{25}{3}&\risS{0}{fp-t}{}{35}{0}{0}& \risS{0}{fp-l}{}{35}{0}{0}&\risS{0}{fp-r}{}{35}{0}{0} \\ \hline\hline \risS{-8}{cr_p-gd}{}{40}{22}{15} & e^{-h}z & 0 & e^{-2h}-1 & 0 \\ \hline \risS{-8}{cr_m-gd}{}{40}{22}{15} & -e^hz & 0 & e^{2h}-1 & 0 \end{array} \end{equation} Let $\langle A|S \rangle=\prod_{\alpha\in A}\langle \alpha|A|S \rangle$ and define a power series in $h$ and $z$ by \begin{equation}\label{e:coeff} W(A)=\sum_S \langle A|S \rangle \left(\frac{e^h-e^{-h}}{z}\right)^{c(S)-1} \end{equation} Denote by $w_{k,l}(A)$ the coefficient of $h^kz^l$ in $W(A)$, so that $W(A)=\sum_{k,l} w_{k,l}(A) h^k z^l$. \medskip \begin{defn} Now the linear combination $A_{k,l}\in\mathscr{A}$ can be defined as follows.\vspace{-5pt} $$\displaystyle A_{k,l} := \sum\ w_{k,l}(A)\cdot A$$ \end{defn} \begin{thm}\label{th:main} Let $G$ be a Gauss diagram of an ordered link $L$. Then $$p_{k,l}(L)=\mathcal{I}_{A_{k,l}}(G)=\scp{A_{k,l}}{G}\ .$$ \end{thm} \begin{proof} According to Section \ref{s:jaeger-ga}, the HOMFLYPT polynomial is equal to $$P(G)=\sum_{S\subset G}\ \ \langle G|S \rangle\cdot \left(\frac{a-a^{-1}}{z}\right)^{c(S)-1}.
$$ We have \begin{multline*} \langle G|S \rangle=\prod_{\alpha\in G} \langle \alpha|G|S \rangle=\prod_{\alpha\in S} \langle \alpha|G|S \rangle \prod_{\alpha\in G\smallsetminus S} \langle \alpha|G|S \rangle=\\ =\prod_{\alpha\in S} \langle \alpha|G|S \rangle \sum_{A\supset S}\left(\prod_{\alpha\in A\smallsetminus S} (\langle \alpha|G|S \rangle-1) \prod_{\alpha\in G\smallsetminus A} 1\right)=\\ =\sum_{A\supset S}\left(\prod_{\alpha\in S} \langle \alpha|G|S \rangle\prod_{\alpha\in A\smallsetminus S} (\langle \alpha|G|S \rangle-1)\right). \end{multline*} Therefore \begin{multline*} P(G)=\sum_{S\subset G}\ \ \sum_{A\supset S}\left(\prod_{\alpha\in S} \langle \alpha|G|S \rangle\prod_{\alpha\in A\smallsetminus S} (\langle \alpha|G|S \rangle-1)\right)\cdot \left(\frac{a-a^{-1}}{z}\right)^{c(S)-1}=\\ =\sum_{A\subset G}\ \ \sum_{S\subset A}\left(\prod_{\alpha\in S} \langle \alpha|G|S \rangle\prod_{\alpha\in A\smallsetminus S} (\langle \alpha|G|S \rangle-1)\right)\cdot \left(\frac{a-a^{-1}}{z}\right)^{c(S)-1} \end{multline*} Comparing tables \eqref{eq:lwg} and \eqref{eq:lwa} of local weights, we get $$\prod_{\alpha\in S} \langle \alpha|G|S \rangle\prod_{\alpha\in A\smallsetminus S} (\langle \alpha|G|S \rangle-1)=\prod_{\alpha\in A} \langle \alpha|A|S \rangle=\langle A|S \rangle$$ Thus $$P(G)=\sum_{A\subset G}\ \ \sum_{S\subset A}\langle A|S \rangle\cdot \left(\frac{a-a^{-1}}{z}\right)^{c(S)-1}$$ and the theorem follows. \end{proof} \subsection{Contributions of various diagrams to $A_{k,l}$}\label{sub:contrib} A state $S$ of an arrow diagram $A$ is called {\it ascending} if in the tracing of $A(S)$ we approach a neighborhood of every arrow (not only the ones in $S$) first at the arrow head. As is easy to see from the weight table, only ascending states contribute to $W(A)$. In particular, the first end point of an arrow in $A$ (as we move from the base point along the orientation) must be an arrow head.
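The algebraic heart of the proof above is the expansion $\prod_{\alpha\in G\smallsetminus S}w_\alpha=\prod_{\alpha\in G\smallsetminus S}\bigl((w_\alpha-1)+1\bigr)=\sum_{A\supset S}\prod_{\alpha\in A\smallsetminus S}(w_\alpha-1)$, summed over all $A$ with $S\subset A\subset G$. The following Python sketch (with hypothetical rational weights, not tied to any particular diagram) confirms this identity:

```python
import itertools
import random
from fractions import Fraction

def superset_expansion(weights, S):
    """Sum over all A with S ⊆ A ⊆ G of the product of (w_α − 1) over α in A∖S."""
    rest = [name for name in weights if name not in S]
    total = Fraction(0)
    for k in range(len(rest) + 1):
        for extra in itertools.combinations(rest, k):
            term = Fraction(1)
            for name in extra:
                term *= weights[name] - 1
            total += term
    return total

# random integer weights on four "arrows"; S consists of the arrow "a"
random.seed(1)
weights = {name: Fraction(random.randint(-3, 5)) for name in "abcd"}
S = ("a",)
direct = Fraction(1)
for name in weights:
    if name not in S:
        direct *= weights[name]       # the left-hand side: plain product over G∖S
assert superset_expansion(weights, S) == direct
```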
Note that since $e^{\pm 2h}-1=\pm 2h + \mbox{(higher degree terms)}$ and $\pm e^{\mp h}z= \pm z + \mbox{(higher degree terms)}$, the power series $W(A)$ starts with terms of degree at least $|A|$, the number of arrows of $A$. Moreover, the $z$-power of $\langle A|S \rangle \left(\frac{e^h-e^{-h}}{z}\right)^{c(S)-1}$ is equal to $|S|-c(S)+1$. Therefore, for fixed $k$ and $l$, the weight $w_{k,l}(A)$ of an arrow diagram may be non-zero only if $A$ satisfies the following conditions: \begin{itemize} \item[(i)] $|A|$ is at most $k+l$; \item[(ii)] there is an ascending state $S$ such that $c(S)=|S|+1-l$.\label{con-iii} \end{itemize} For diagrams of the highest degree $|A|=k+l$, the contribution of an ascending state $S$ to $w_{k,l}(A)$ is equal to $(-1)^{|A|-|S|}2^k\varepsilon(A)$, where $\varepsilon(A)$ is the product of the signs of all arrows in $A$. If two such arrow diagrams $A$ and $A'$ with $|A|=k+l$ differ only by the signs of their arrows, their contributions to $A_{k,l}$ differ by the sign $\varepsilon(A)\varepsilon(A')$. Thus all such diagrams may be combined into the unsigned diagram $A$, appearing in $A_{k,l}$ with the coefficient $\sum_S(-1)^{|A|-|S|}2^k$ (where the summation is over all ascending states of $A$ with $c(S)=|S|+1-l$). Arrow diagrams with isolated arrows do not contribute to $A_{k,l}$. Indeed, consider an arrow diagram $A\cup a$ with an isolated arrow $a$. Every state $S$ of $A$ corresponds to two states of $A\cup a$: $S$ and $S\cup a$. Depending on the orientation of $a$, their weights will be either both $0$, or $(e^{-2\varepsilon h}-1)\langle A|S \rangle$ and $\varepsilon e^{-\varepsilon h}z\frac{e^h-e^{-h}}{z}\langle A|S \rangle$. In both cases they sum up to $0$, since $(e^{-2\varepsilon h}-1)+\varepsilon e^{-\varepsilon h}(e^h-e^{-h})=0$. \subsection{Coefficients of the Conway polynomial} The Conway polynomial is obtained from the HOMFLYPT polynomial by setting $h=0$.
So our formulas for $A_{0,l}$ are the Gauss diagram formulas for coefficients of the Conway polynomial, discovered earlier by Michael Khoury and Alfred Rossi \cite{CKR}. Indeed, only states with $|S|=|A|$ and $c(S)=1$ contribute to $w_{0,l}(A)$. Since these are diagrams of the highest degree, according to Section \ref{sub:contrib} they may be combined into unsigned ascending diagrams which appear with coefficient $1$. For example, in the case $m=1$ of long knots, states with $c(S)=1$ exist only for an even number $l$ of arrows. For $l\leqslant 4$ the resulting linear combinations $A_{0,l}$ are $$\begin{array}{rcl} A_{0,2} &=& \risS{-12}{cd22arw}{}{25}{15}{20}\ ;\hspace{1cm} A_{0,3} = 0\ ;\\ A_{0,4} &=& \ard{cd4-01arw}\ +\ \ard{cd4-07arw1} + \ard{cd4-07arw2} + \ard{cd4-07arw3} + \ard{cd4-07arw4}+ \\ &&\hspace{-8pt} + \ard{cd4-05arw1} + \ard{cd4-05arw2} + \ard{cd4-05arw3} + \ard{cd4-05arw4} + \ard{cd4-05arw5} + \ard{cd4-05arw6} + \ard{cd4-05arw7} + \ard{cd4-05arw8} + \\ &&\hspace{-8pt} + \ard{cd4-06arw1} + \ard{cd4-06arw2} + \ard{cd4-06arw3} + \ard{cd4-06arw4} + \ard{cd4-06arw5} + \ard{cd4-06arw6} + \ard{cd4-06arw7} + \ard{cd4-06arw8}\ . \end{array}$$ \section{Low degree examples}\label{s:example} Let us describe the corresponding formulas for degree 2 and 3 invariants of knots, i.e. $k+l=2,3$, $m=1$. The case $A_{0,2}$ was described above. A direct check shows that $A_{2,0}=0$. Let us explicitly find the formula for $A_{1,2}$. The maximal number of arrows is equal to 3. To get $z^2$ in $W(A)$ we need ascending states with either $|S|=2$ and $c(S)=1$, or $|S|=3$ and $c(S)=2$. In the first case the equation $c(S)=1$ means that the two arrows of $S$ must intersect. In the second case the equation $c(S)=2$ does not impose any restrictions on the relative position of arrows. In cases $|S|=|A|=2$ or $|S|=|A|=3$, since $S$ is ascending, $A$ itself must be ascending as well.
For diagrams of the highest degree $|A|=1+2=3$, we should count ascending states of unsigned arrow diagrams with the coefficient $(-1)^{3-|S|}2$, i.e. $-2$ for $|S|=2$ and $+2$ for $|S|=3$. There are only four types of (unsigned) 3-arrow diagrams with no isolated arrows: \def\lhd#1{\ \ \risS{-13}{#1}{}{25}{18}{10}\ \ } \def\fhd#1{\risS{-20}{#1}{}{25}{12}{25}} \def\shd#1{\ \risS{-12}{#1}{}{25}{15}{15}} $$\lhd{bcd35-1}; \qquad \risS{-10}{bcd34-3}{}{27.5}{0}{0}\ \ , \qquad \lhd{bcd34-1},\qquad \risS{-10}{bcd34-2}{}{27.5}{0}{0}\ \ . $$ Diagrams of the same type differ by the directions of arrows. For the first type, recall that the first arrow should be oriented towards the base point; this leaves 4 possibilities for the directions of the remaining two arrows. One of them, namely $\lhd{aiv-n}$, does not have ascending states with $|S|=2,3$. The remaining possibilities, together with their ascending states, are shown in the table: $$\begin{array}[t]{||c|c|c|c||}\hline\hline \fhd{aiv-3} & \fhd{aiv-2} & \fhd{aiv-1} & \fhd{aiv-1} \\ \shd{civ-3} & \shd{civ-2} & \shd{civ-1} & \shd{civ-4} \\ \hline\hline\end{array} $$ The final contribution of this type of 3-arrow diagrams to $A_{1,2}$ is equal to $$-2\ \risS{-12}{aiv-3}{}{25}{0}{15}\ -2\ \risS{-12}{aiv-2}{}{25}{0}{0}\ . $$ The remaining three types of 3-arrow diagrams differ by the location of the base point. A similar consideration shows that 5 out of the total of 12 arrow diagrams of these types, namely $$\risS{-10}{aiii-n1}{}{27.5}{15}{15}\ ,\quad \risS{-10}{aiii-n2}{}{27.5}{0}{0}\ ,\qquad \risS{-13}{ai-n}{}{25}{0}{0}\ ,\qquad \risS{-10}{aii-n1}{}{27.5}{0}{0}\ ,\quad \risS{-10}{aii-n2}{}{27.5}{0}{0} $$ do not have ascending states with $|S|=2,3$.
The remaining possibilities, together with their ascending states, are shown in the table: $$\begin{array}[t]{||c|c|c||c|c|c||c|c|c||}\hline\hline \risS{-17}{aiii-1}{}{27.5}{10}{25}& \risS{-17}{aiii-2}{}{27.5}{0}{0} & \risS{-17}{aiii-2}{}{27.5}{0}{0} & \fhd{ai-1} & \fhd{ai-2} & \fhd{ai-3} & \risS{-17}{aii-2}{}{27.5}{0}{0}& \risS{-17}{aii-1}{}{27.5}{0}{0} & \risS{-17}{aii-3}{}{27.5}{0}{0} \\ \risS{-9}{ciii-1}{}{27.5}{15}{15}& \risS{-9}{ciii-2}{}{27.5}{0}{0} & \risS{-9}{ciii-3}{}{27.5}{0}{0} & \shd{ci-1} & \shd{ci-2} & \shd{ci-3} & \risS{-9}{cii-2}{}{27.5}{0}{0}& \risS{-9}{cii-1}{}{27.5}{0}{0} & \risS{-9}{cii-3}{}{27.5}{0}{0} \\ \hline\hline\end{array} $$ The final contribution of these types of 3-arrow diagrams to $A_{1,2}$ is equal to $$-2\ \risS{-10}{aiii-1}{}{27.5}{13}{15} -2\ \risS{-13}{ai-1}{}{25}{0}{0} -2\ \risS{-13}{ai-2}{}{25}{0}{0} +2\ \risS{-13}{ai-3}{}{25}{0}{0} -2\ \risS{-10}{aii-2}{}{27.5}{0}{0}\ . $$ Besides 3-arrow diagrams, some 2-arrow diagrams contribute to $A_{1,2}$ as well. Since $|A|=2<k+l=3$, contributions of 2-arrow diagrams depend also on their signs. Such diagrams must be ascending (since $|S|=|A|=2$) and should not have isolated arrows. There are four such diagrams, looking like $\lhd{ad2}$\!\!,\vspace{4pt} but with different signs $\varepsilon_1$, $\varepsilon_2$ of arrows. For each of them $\langle A|S \rangle=\varepsilon_1\varepsilon_2 e^{-(\varepsilon_1+\varepsilon_2)h}z^2$. If $\varepsilon_1=-\varepsilon_2$, then $\langle A|S \rangle=-z^2$, so the coefficient of $hz^2$ vanishes and such diagrams do not occur in $A_{1,2}$. For the two remaining diagrams with $\varepsilon_1=\varepsilon_2=\pm1$, the coefficients of $hz^2$ in $\langle A|S \rangle$ are equal to $\mp2$ respectively.
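The last claim is a one-line Taylor computation: for $\varepsilon_1=\varepsilon_2=\pm1$ the coefficient of $h$ in $\varepsilon_1\varepsilon_2 e^{-(\varepsilon_1+\varepsilon_2)h}$ is $\mp2$, while for $\varepsilon_1=-\varepsilon_2$ it vanishes. A quick exact check in Python (not part of the paper):

```python
from fractions import Fraction
from math import factorial

def exp_h_coeff(c, n):
    """Exact coefficient of h^n in the Taylor series of e^{c h}."""
    return Fraction(c) ** n / Fraction(factorial(n))

def two_arrow_h_coeff(eps1, eps2):
    """Coefficient of h z^2 in eps1 * eps2 * e^{-(eps1+eps2)h} * z^2."""
    return eps1 * eps2 * exp_h_coeff(-(eps1 + eps2), 1)

assert two_arrow_h_coeff(+1, +1) == -2   # both arrows positive
assert two_arrow_h_coeff(-1, -1) == +2   # both arrows negative
assert two_arrow_h_coeff(+1, -1) == 0    # opposite signs: no contribution
```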
Combining all the above contributions, we finally get $$A_{1,2} = -2\Bigl( \risS{-12}{aiv-3}{}{25}{0}{15}+ \risS{-12}{aiv-2}{}{25}{0}{0}+\risS{-10}{aiii-1}{}{27.5}{13}{15} +\risS{-13}{ai-1}{}{25}{0}{0}+\risS{-13}{ai-2}{}{25}{0}{0} -\risS{-13}{ai-3}{}{25}{0}{0}+\risS{-10}{aii-2}{}{27.5}{0}{0} + \shd{ad2pp} - \shd{ad2mm}\Bigr)\ . $$ The invariant $\mathcal{I}_{A_{1,2}}=\scp{A_{1,2}}{\cdot}$ can be simplified further. Note that for any classical Gauss diagram $G$, $\scp{ \risS{-13}{ai-2}{}{25}{17}{13}}{G}= \scp{ \risS{-13}{ai-3}{}{25}{0}{0} }{G}$. This follows from the symmetry of the linking number. Indeed, suppose we have matched two vertical arrows (which are the same in both diagrams) with two arrows of $G$. Let us consider the orientation preserving smoothings of the corresponding two crossings of the link diagram $D$ associated with $G$. The smoothed diagram $\widetilde{D}$ will have three components. Matchings of the horizontal arrow of our arrow diagrams with an arrow of $G$ both measure the linking number between the first and the third components of $\widetilde{D}$, using crossings where the first component overpasses (respectively, underpasses) the third one.\vspace{4pt} Thus, as functions on classical Gauss diagrams, $\risS{-13}{ai-2}{}{25}{17}{5}$\ is equal to $\risS{-13}{ai-3}{}{25}{17}{5}$\ and we have $$p_{1,2}(G) = -2\langle \risS{-12}{aiv-3}{}{25}{0}{15}+ \risS{-12}{aiv-2}{}{25}{0}{0}+\risS{-10}{aiii-1}{}{27.5}{13}{15} +\risS{-13}{ai-1}{}{25}{0}{0}+\risS{-10}{aii-2}{}{27.5}{0}{0} + \shd{ad2pp} - \shd{ad2mm}\ , G\rangle\ . $$ In a similar way one may check that $A_{3,0}=-4 A_{1,2}$. \begin{ex} Let us compute the coefficients of $hz^2$ and $h^3$ of the HOMFLYPT polynomial on the trefoil from Section \ref{d3-1}, see page \pageref{d3-1}. $$\scp{A_{1,2}}{G}= 2\scp{\shd{ad2mm}}{G}=2\qquad \mbox{and}\qquad \scp{A_{3,0}}{G}= -8\ . $$ It is easy to verify these coefficients in the Taylor expansion of $P(3_1)=(2e^{2h}-e^{4h})+ e^{2h}z^2$. \end{ex}
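These two values can be recomputed directly from the closed form $P(3_1)=(2e^{2h}-e^{4h})+e^{2h}z^2$. The Python sketch below (not part of the paper) extracts the Taylor coefficients with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def exp_coeff(c, n):
    """Exact coefficient of h^n in the Taylor series of e^{c h}."""
    return Fraction(c) ** n / Fraction(factorial(n))

def trefoil_pkl(k, l):
    """Coefficient of h^k z^l in P(3_1) = (2 e^{2h} - e^{4h}) + e^{2h} z^2."""
    if l == 0:
        return 2 * exp_coeff(2, k) - exp_coeff(4, k)
    if l == 2:
        return exp_coeff(2, k)
    return Fraction(0)

assert trefoil_pkl(1, 2) == 2    # matches <A_{1,2}, G> = 2
assert trefoil_pkl(3, 0) == -8   # matches <A_{3,0}, G> = -8
assert trefoil_pkl(0, 2) == 1    # second Conway coefficient of the trefoil
```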
\section{Introduction} The study of the fundamental group of smooth algebraic varieties is a classical problem in complex geometry. One of the most studied cases is the complement of an arrangement of lines $\mathscr{A}$ in $\mathbb{P}^2$. Several methods have been used for computing this group, for example: \cite{suciu}, \cite{salvetti}, \cite{Randell}. Consider a real arrangement of lines $\mathscr{A}=\{L_1,\ldots, L_n\}$ in $\mathbb{P}^2$ and denote by $\bar{X}$ the blow up at $\Sing \mathscr{A}=\{p_1,\ldots, p_s\}$, the set of multiple points of the arrangement. Denote by $D_i$ the strict transforms of the lines in $\mathscr{A}$ and by $D_j$ the exceptional divisors. Let $D=\sum_{l=1}^{n+s}r_lD_l$ be a divisor in $\bar{X}$ with $r_l\in \mathbb{N}^*$ or equal to infinity. Denote by $r=(r_1,\ldots, r_{n+s})$. To this datum we can associate the orbifold fundamental group $\pi_1(\mathcal{X}(\bar{X},D,r))$ (see e.g. \cite{Eyssidieux}). \begin{thm}[Thm \ref{MainT}] \label{Thm1} There is a presentation of $\pi_1(\mathcal{X}(\bar{X},D,r))$ obtained by adding to Randell's presentation relations that are powers of \underline{explicit} words in Randell's generators. \end{thm} The explicit algorithm to produce these words follows from a modification of Randell's method and is given in Section \ref{3}. The special case with $r_l= 1, \infty$ can be seen as a quasi-projective surface $X$ where the divisors with coefficient equal to infinity are removed from $\bar{X}$. That is, $X$ is a partial compactification of $\bar{X}\setminus D$ by those $D_i$ with coefficient $r_i=1$ (Linear Arrangement Compactifications or LAC surfaces in what follows). LAC surfaces are our main object of study in Sections \ref{sec:Lac} and \ref{sec:App}. We show in Section \ref{sec:Lac} that it suffices to give weight one only to exceptional divisors in order to obtain groups different from those that can be obtained from an arrangement of fewer lines.
We can ask whether the following condition is satisfied by a quasi-projective variety $X$: \begin{itemize} \item If $\#\pi_1(X)=+\infty$, there is a representation $\rho:\pi_1(X)\to \GL_N(\mathbb{C})$ with $N\in \mathbb{N}^*$, such that $\#\rho(\pi_1(X))=+\infty$. (See \cite{Ey2} for motivation and related questions for Kähler groups.) \end{itemize} No counterexample seems to be known. We give a negative answer in the case $N=1$ with $X$ a LAC surface in Theorem \ref{Counterexample}. \begin{thm}[Thm \ref{Counterexample}]\label{Thm2} There exists a real arrangement of $6$ lines $\mathcal{B}$ (the complete quadrilateral) and a partial compactification $X$ of $\bar{X}\setminus D$ such that $\#\pi_1(X)=+\infty$ and $\#H_1(X)=4$. \end{thm} To prove Theorem \ref{Thm2} we use Theorem \ref{Thm1} and obtain $\pi_1(X)=\mathbb{Z}/2\mathbb{Z}\ast \mathbb{Z}/2\mathbb{Z}$, which can be faithfully embedded in $\GL_2(\mathbb{C})$ and hence satisfies the above condition with $N=2$. This group can be seen to be induced by a regular map $f$ from $X$ to $\mathbb{P}^1$ minus three points, coming from a pencil of conics and having two double fibers; in fact $f_*:\pi_1(X)\to \pi_1(\mathcal{X}(\mathbb{P}^1,D,(2,+\infty,2)))$, where $D=(1:0)+(1:-1)+(0:1)$, is an isomorphism. At the end of Section \ref{sec:App} we give, as another application of Theorem \ref{Thm1}, a presentation of some weighted LAC surfaces which are among the quotients of the ball by a uniform lattice in $PU(2,1)$ considered in \cite{DM}. This method for obtaining the presentation was not found by the author in the literature. \section{Preliminaries} We review the definitions and some properties of meridians and orbifolds. For the latter we follow \cite{Eyssidieux}. \subsection{Meridians} Let $M$ be a connected complex manifold, $H\subset M$ a hypersurface, $D$ an irreducible component of $H$ and $q\in M\setminus H$.
Set $U=\{z\in \mathbb{C}\mid \abs{z}<2\}$ and let $f:U\to M$ be a holomorphic function such that: \begin{enumerate} \item $f^{-1}(H)=\{0\}$, \item $f(0)=p$ is a smooth point of $H$ and $p\in D$, \item $f'(0)\not\in T_p H$ where $T_p H$ is the tangent space of $H$ at $p$. \end{enumerate} Then $f\mid_{S^1}:S^1\to M\setminus H$ defines a free-homotopy class independent of the choice of $f$, where $S^1\subset U$ is the unit circle. A loop $\gamma\in \pi_1(M\setminus H,q)$ freely homotopic to $f|_{S^1}$ is called a \emph{meridian} of $D$ around $p$. If $D$ is smooth, any other meridian of $D$ around a smooth point of $H$ is a conjugate of $\gamma$. Denoting by $H'=H\setminus D$, we have that the inclusion $i:M\setminus H \hookrightarrow M\setminus H'$ induces a morphism $i_*:\pi_1(M\setminus H,q)\to \pi_1(M\setminus H',q)$ whose kernel is the normal subgroup of $\pi_1(M\setminus H,q)$ generated by $\gamma$. By Van Kampen's theorem the normal subgroup generated by the set of meridians around each irreducible component of $H$ is the kernel of the map $\pi_1(M\setminus H,q)\to \pi_1(M,q)$ induced by the natural inclusion. Suppose $H=D$ is smooth and let $\gamma_D$ be a meridian. Denote by $\pi: \bar{M} \to M$ the blow up of $M$ at some $p\in D$ and let $E_p$ be the exceptional divisor. Then $\pi^{-1}(\gamma_D)$ is a meridian of $E_p$ in $\bar{M}$. \subsection{Orbifolds} Let $M$ be a complex manifold and $D$ a smooth effective divisor. Let $r\in \mathbb{N}^*$ and consider $P\to M$ the principal $\mathbb{C}^*$-bundle attached to $\mathscr{O}_M(-D)$. The tautological section $\sigma_D\in H^0(M,\mathscr{O}_M(D))$ can be lifted to a holomorphic function $f_D:P\to\mathbb{C}$ satisfying $f_D(p\cdot \lambda)=\lambda f_D(p)$. Let $Y\subset P\times \mathbb{C}$ be the complex analytic space defined by the equation $z^r=f_D(p)$ where $z$ is a coordinate for $\mathbb{C}$. Since $D$ is smooth, $Y$ is smooth too.
The action of $\mathbb{C}^*$ can be extended to $Y$ in the following way: $(p,z)\cdot\lambda=(p\cdot \lambda^r,\lambda z)$. Then the complex analytic stack $$M(\sqrt[r]{D}):=[Y_D/\mathbb{C}^*]$$ is an orbifold. The non-trivial isotropy groups lie over the points in $D$ and are isomorphic to the group $\mu_r$ of $r$-th roots of unity. We also allow the weight $+\infty$ by considering the manifold $M\setminus D$ as a stack $[M\setminus D]$ and write $$M(\sqrt[+\infty]{D}):=[M\setminus D]. $$ Let $\bar{X}$ be a complex manifold and $D=\sum_{i=1}^l D_i$ be a simple normal crossing divisor, where $D_i$ is an irreducible component of $D$. For any choice of weights $r:=(r_1,\ldots,r_l)\in (\mathbb{N}^*\cup \{+\infty\})^l$ we can define the orbifold $$\mathcal{X}(\bar{X},D,r):=\bar{X}(\sqrt[r_1]{D_1})\times_{\bar{X}}\cdots\times_{\bar{X}}\bar{X}(\sqrt[r_l]{D_l})$$ Denoting by $X=\bar{X}\setminus D$, we can view $\mathcal{X}(\bar{X},D,r)$ as an orbifold (partial if some $r_i=+\infty$) compactification of $X$. Let $j_r:X\hookrightarrow \mathcal{X}(\bar{X},D,r)$ denote the natural open immersion. By fixing $q\in X$, it turns out that we can define $\pi_1(\mathcal{X}(\bar{X},D,r),q)$ and moreover it is the quotient of $\pi_1(X,q)$ by the normal subgroup generated by all $\gamma_i^{r_i}$, where $\gamma_i$ is a meridian around $D_i$ and $r_i\not = +\infty$. We obtain that ${j_r}_*:\pi_1(X,q)\to \pi_1(\mathcal{X}(\bar{X},D,r),q) $ is surjective. As a particular case we have that if $r=(1,\ldots,1)$ then $\mathcal{X}(\bar{X},D,r)=\bar{X}$. Let $D_\infty:= \sum D_j$ be the sum of all irreducible components of $D$ such that $r_j=+\infty$. We can regard $\mathcal{X}(\bar{X},D,r)$ as $\mathcal{X}(\bar{X}\setminus D_\infty, D-D_\infty,r')$ where $r'$ consists of the same finite values as $r$. In particular, if $r'_i=1$ for all $i$ we have that $\mathcal{X}(\bar{X},D,r)=[\bar{X}\setminus D_\infty]$ and we write simply $\bar{X}\setminus D_\infty$.
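As an illustration of this quotient description of $\pi_1$, consider the one-dimensional orbifold appearing in the introduction (a sketch: $\gamma_1,\gamma_2,\gamma_3$ denote meridians of the three points of $D=(1:0)+(1:-1)+(0:1)$, $\pi_1(\mathbb{P}^1\setminus D)$ is free on two of them, and the weight-$+\infty$ point contributes no relation):

```latex
\[
\pi_1\bigl(\mathcal{X}(\mathbb{P}^1,D,(2,+\infty,2))\bigr)
 \cong \langle \gamma_1,\gamma_2,\gamma_3 \mid
   \gamma_1\gamma_2\gamma_3=1,\ \gamma_1^2=\gamma_3^2=1\rangle
 \cong \langle \gamma_1,\gamma_3 \mid \gamma_1^2=\gamma_3^2=1\rangle
 \cong \mathbb{Z}/2\mathbb{Z}\ast\mathbb{Z}/2\mathbb{Z}.
\]
```

This is exactly the infinite group with finite abelianization exploited in Theorem \ref{Thm2}.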
\begin{defn} Let $X$ be a smooth algebraic variety, $Y$ a projective curve, $D=\sum_{i=1}^l y_i$ a divisor on $Y$ and $r\in (\mathbb{N}^*)^l$. Consider the orbifold $\mathcal{X}(Y,D,r)$. A dominant algebraic morphism $f:X\to Y$ is said to be an \emph{orbifold morphism} if for all $y_i \in D$ the multiplicity of the fiber $f^*(y_i)$ is divisible by $r_i$. \end{defn} \section{Fundamental group}\label{sec:FundG} \subsection{Modification of the method of Randell}\label{subsec:2.1} \subsubsection{Elementary geometric bases} Consider $n$ real points $\{x_1,x_2,\ldots,x_n\}\subset \mathbb{R}\subset \mathbb{C}$ such that $x_1<x_2<\ldots <x_n$. Fix $q\in \mathbb{R}\setminus \{x_1,\ldots, x_n\}$. Any oriented simple closed curve $C\subset\mathbb{C}\setminus \{x_1,\ldots,x_n\}$ is freely homotopic to a loop based at $q$. Moreover, if it contains at least one $x_i$ in the bounded component that $C$ determines, there exists a simple path $\theta$ connecting $q$ and $C$ satisfying: $$ \Im(\theta(t))<0 \text{ for } t\in (0,1). $$ If $C\cap \{\Im(z)<0\}$ is connected we call $C_q:=\theta \cdot C \cdot \theta^{-1}$ an \emph{elementary loop}. Here $\Im$ denotes the imaginary part of a complex number. (We suppose the curve starts at a point with $\Im(z)\leq 0$). 
\begin{rem} We have made all the choices in order to have $C_q$ unique in $\pi_1(\mathbb{C}\setminus \{x_1,\ldots,x_n\},q)$.\end{rem} \begin{figure}[ht] \centering \begin{tikzpicture} \draw (0,0) node [right]{$q$}; \draw (.5,0) node [below]{$x_2$}; \draw (-.5,0) node [below] {$x_1$}; \draw (0,0) to[out=-90,in=90] (0,-.60) ; \draw [arc arrow=to pos 0.35 with length 2mm] (-1,0) to[out=-90,in=-90] (1,0) [arc arrow=to pos 0.85 with length 2mm] to[out=90,in=90] node[above] {$C$} cycle; \foreach \Point in {(0,0),(-.5,0), (0.5,0)}{ \node at \Point {\textbullet}; } \end{tikzpicture} \label{elementaryloop} \caption{Elementary loop $C_q$.} \end{figure} \begin{defn} An (ordered) \emph{geometric base} $\Gamma=(\gamma_1,\ldots,\gamma_n)$ for the group $\pi_1(\mathbb{C}\setminus \{x_1,\ldots,x_n\},q)$ is an $n$-tuple such that $\gamma_i$ is a meridian of $x_i$ based at $q$ and satisfying: $$\gamma_n\cdot \gamma_{n-1}\cdots \gamma_{1}= \partial B(0,M)_q $$ in $\pi_1(\mathbb{C}\setminus \{x_1,\ldots,x_n\},q)$, with $M>\abs{x_i}$ for all $i=1,\ldots,n$. The curve $\partial B(0,M)$ is a circle centered at $0$ with radius $M$ and oriented counterclockwise. We consider the product of loops from left to right. \end{defn} \begin{rem} The loop $\partial B(0,M)_q$ can be seen as the inverse of a meridian loop around the point at infinity. \end{rem} \noindent By abuse of notation we will write $\Gamma\subset \mathbb{C}$. \begin{defn} An \emph{elementary geometric base} $\Gamma=(\gamma_1,\ldots,\gamma_n)$ is a geometric base such that every $\gamma_i$ is an elementary loop. 
\end{defn} \begin{figure}[h] \centering \begin{tikzpicture} \draw (0,0) node [left]{$q$}; \draw (1.5,0) node [right]{$x_1$}; \draw (3,0) node [right]{$x_2$}; \draw (0,0) to[out=-90,in=-90] node[above]{$\gamma_1$} (1,0) ; \draw [arc arrow=to pos 0.35 with length 2mm] (1,0) to[out=-90,in=-90] (2,0) [arc arrow=to pos 0.85 with length 2mm] to[out=90,in=90] cycle; \draw (0,0) to[out=-90,in=-90]node[below]{$\gamma_2 $} (2.5,0) ; \draw [arc arrow=to pos 0.35 with length 2mm] (2.5,0) to[out=-90,in=-90] (3.5,0) [arc arrow=to pos 0.85 with length 2mm] to[out=90,in=90] cycle; \foreach \Point in {(0,0),(1.5,0), (3,0)}{ \node at \Point {\textbullet}; } \end{tikzpicture} \label{elgeobas} \caption{An elementary geometric base.} \end{figure} \begin{lem} Given $n$ real points and a base point as above, there is a unique \emph{elementary geometric base} $\Gamma$. \end{lem} \begin{proof} It is immediate from the ordering of $\Gamma$ and the uniqueness of the elementary loops. \end{proof} \begin{rem} The notion of geometric base for $\pi_1((L\otimes \mathbb{C})\setminus P; q)$ depends only on the real oriented line $L$ and $P=\{x_1,\ldots,x_n\}\subset L(\mathbb{R})$, $q\in L(\mathbb{R})$. \end{rem} \subsubsection{Randell's pencil} \label{randellspencil} \begin{defn} A \emph{complex arrangement of lines} is an algebraic set $\mathscr{A}\subset \mathbb{P}^2$ whose irreducible components are complex lines. The arrangement $\mathscr{A}$ is said to be \emph{real} or \emph{to be defined over the reals} if the coefficients of all linear forms defining each line can be taken to be real. \end{defn} Denote by $M(\mathscr{A}):=\mathbb{P}^2\setminus \mathscr{A}$. We are going to review and adapt a method from \cite{Randell} to compute a presentation for $\pi_1(M(\mathscr{A}))$ when $\mathscr{A}$ is real.
Associate to each (projective) arrangement $\mathscr{A}$ an affine one, defined as fo\-llows: fix a line $L_\infty\in \mathscr{A}$ and consider it as a line at infinity, then $$\mathscr{A}^{\text{aff}}:=\mathscr{A}\cap (\mathbb{P}^2\setminus L_\infty)\cong\mathscr{A}\cap\mathbb{C}^2,$$ where we have chosen an isomorphism $h:\mathbb{C}^2\to\mathbb{P}^2\setminus L_\infty$. If we denote $M(\mathscr{A}^{\text{aff}}):=\mathbb{C}^2\setminus \mathscr{A}^{\text{aff}}$, we have the identification: $$M(\mathscr{A})= M(\mathscr{A}^{\text{aff}}).$$ Fixing $q\in M(\mathscr{A}^{\text{aff}})$ and denoting also by $q=h(q)$, we have: $$\pi_1(M(\mathscr{A}),q)\cong \pi_1(M(\mathscr{A}^{\text{aff}}),q) .$$ Moreover, if the arrangement $\mathscr{A}$ is real, we can associate to it a planar graph (allowing rays) in $\mathbb{R}^2$. Suppose $\mathscr{A}^{\text{aff}}$ is the associated affine arrangement, then all multiple points lie in a real plane. Namely, if we consider $\mathbb{C}^2$ with coordinates $(z,w)=(x_1+iy_1,x_2+iy_2)$, the real plane is given by $\{(z,w)\in \mathbb{C}^2 \mid y_1=y_2=0\}\cong \mathbb{R}^2$. Set $\mathscr{A}(\mathbb{R}):=\mathscr{A}^{\text{aff}}\cap \mathbb{R}^2$ to be the set of real points of the arrangement $\mathscr{A}^{\text{aff}}$, denote by $M(\mathscr{A}(\mathbb{R})):=\mathbb{R}^2\setminus \mathscr{A}(\mathbb{R})$. Suppose there is no vertical line in $\mathscr{A}(\mathbb{R})$. Denote by $\Sing\mathscr{A} ^\bullet $ the multiple points of the corresponding arrangement $\mathscr{A}^\bullet=\mathscr{A},\mathscr{A}^{\text{aff}},\mathscr{A}(\mathbb{R})$. Consider $\mathbb{R}^2$ with coordinates $(x_1,x_2)$. We orient the non vertical lines in $\mathbb{R}^2$ taking the positive direction to be that of decreasing $x_1$. Fix a base point $q=(q_1,q_2)$ in the lower right part of $M(\mathscr{A}(\mathbb{R}))$, further to the right and lower than any point in $\Sing \mathscr{A}(\mathbb{R})$ and lower than any line.
For a complex line $\Sigma\subset \mathbb{C}^2$ defined by an equation with real coefficients, denote by $\Sigma(\mathbb{R})$ its restriction to $\mathbb{R}^2$ and orient it as before if it is non-vertical. Set $\Sigma^{(0)}:=\{(z,w) \mid z=q_1\}$; note that $\Sigma^{(0)}(\mathbb{R})$ is the vertical line passing through $q$, and we orient it by taking as positive direction that of increasing $x_2$. For any triple $P\subset \Sigma(\mathbb{R}) \subset \Sigma$, where $P$ is a finite set of points, $\Sigma(\mathbb{R})$ a real oriented line and $\Sigma$ a complex line as before, we can consider an elementary geometric base $\Gamma\subset \Sigma$ of $\pi_1(\Sigma\setminus P,q)$ by fixing $q\in\Sigma(\mathbb{R})$. As $\Sigma^{(0)}(\mathbb{R})$ intersects all the lines of $\mathscr{A}(\mathbb{R})$, we can number $P=\Sigma^{(0)}(\mathbb{R})\cap \mathscr{A}(\mathbb{R})$ from bottom to top (in the orientation chosen for $\Sigma^{(0)}(\mathbb{R})$) and denote by $\Gamma^{(0)}=\{\gamma_1^{(0)},\ldots, \gamma_n^{(0)}\} \subset \Sigma^{(0)}$ the associated elementary geometric base with base point $q$. The idea for obtaining a presentation of the fundamental group is to study how the elementary geometric base changes when we rotate the line $\Sigma^{(0)}$ counterclockwise while fixing the base point $q$, and to keep track of the relations arising.
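The bottom-to-top numbering of $P=\Sigma^{(0)}(\mathbb{R})\cap\mathscr{A}(\mathbb{R})$ is easy to mechanize. A Python sketch (the slope--intercept data below is a hypothetical arrangement, not one from the text):

```python
def fiber_ordering(lines, q):
    """Order the lines y = m x + b by the height of their intersection
    with the vertical line x = q1, from bottom to top."""
    q1 = q[0]
    heights = [(m * q1 + b, idx) for idx, (m, b) in enumerate(lines)]
    heights.sort()  # increasing x_2-coordinate = bottom to top
    return [idx for _, idx in heights]

# hypothetical arrangement: y = x, y = -x + 1, y = -2, with q = (4, -5)
lines = [(1, 0), (-1, 1), (0, -2)]
q = (4, -5)
assert fiber_ordering(lines, q) == [1, 2, 0]
```

As in the text, $q_1$ must be chosen so that $\Sigma^{(0)}(\mathbb{R})$ misses $\Sing\mathscr{A}(\mathbb{R})$; otherwise two heights coincide and the ordering is ill-defined.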
\begin{figure}[ht] \centering \begin{tikzpicture}[scale=1.5] \draw (.5,-.4) node [below] {\scriptsize $q$}; \node at (-.11,.15){\tiny $p_1 $}; \node at (-.75,.28){\tiny $p_2 $}; \node at (-1.5,0){\tiny $p_3 $}; \node at (-.80,-.45){\tiny $p_4 $}; \draw (.05,1)node[above,right]{\scriptsize $3$} -- (-.05,-1); \draw (-1,-3/4) -- (.5,3/8) node[above,right]{\scriptsize $2$}; \draw (-1,1/4) -- (.5,-1/6)node[above,right]{\scriptsize $1$}; \draw (-.55,1)node[above,right]{\scriptsize $4$} -- (-.65,-1); \draw [line width=0.05mm,red ] (-.65,1) node [above]{\tiny $\Sigma^{(0)}$} -- (.6,-.8); \draw [line width=0.05mm,red ] (-1,.55)node [above]{\tiny {$\Sigma^{(1)}$}} -- (.6,-.8); \draw [line width=0.05mm,red ] (-1,.25)node [above,left]{\tiny {$\Sigma^{(2)}$}} -- (.6,-.8); \draw [line width=0.05mm,red ] (-1,-.35)node [left]{\tiny {$\Sigma^{(3)}$}} -- (.6,-.8); \foreach \Point in {(.6,-.8),(0,0),(-.6,.15),(-.6,-.45)}{ \node at \Point {$\cdot$}; } \draw [red] (-.4,.6) to[out=-150,in=100] (-.5,.3) [arc arrow=to pos 1 with length .5mm] ; \draw [red] (-.7,0) to[out=-150,in=100] (-.8,-.3) [arc arrow=to pos 1 with length .5mm] ; \end{tikzpicture} \caption{Base point} \label{fig:Base Point} \end{figure} The set of lines passing through $q$ can be seen as $\mathbb{RP}^1$, which we parametrise by the angle with respect to the line $x_2=0$ (oriented in the positive sense), that is, by a value in $[\pi/2,3\pi/2[$. To every real line $\Sigma(\mathbb{R})$ passing through $q$ we can associate its angle, which we denote by: $$\theta(\Sigma(\mathbb{R}))\in [\pi/2,3\pi/2[. $$ For $t\in [\pi/2,3\pi/2[$, the line parametrised by $t$ will be denoted by $\Sigma_t$. In particular, $\theta(\Sigma^{(0)})=\pi/2$. The elementary geometric base $\Gamma^{(0)}$ varies in a continuous way as we vary $t$.
There exist two types of directions where it changes: \begin{enumerate}[label=\textbf{S.\arabic*}] \item \label{en1} Those $t\in[\pi/2,3\pi/2[$ such that the associated $\Sigma_t$ contains a point in $\Sing \mathscr{A}(\mathbb{R})$, \item \label{en2} Those $t\in[\pi/2,3\pi/2[$ such that $\Sigma_t$ is parallel to a line in $\mathscr{A}(\mathbb{R})$, which correspond to the points in $\Sing \mathscr{A}\cap L_\infty$. \end{enumerate} After a slight perturbation of $q$, we may assume that no line passing through it contains two points of $\Sing \mathscr{A}$. Given $p\in\Sing\mathscr{A}$, denote by $\theta(p)$ the angle of the unique line passing through $p$ and $q$. Given $p, p'\in \Sing \mathscr{A}$, we define a total order by $$p<p' \quad \text{iff} \quad \theta(p)<\theta(p'). $$ Let us write $\Sing \mathscr{A}=\{p_1,\ldots, p_s\}$ with this order. \subsubsection{Elementary geometric transition of regular fibers in Randell's pencil}\label{3.1.3} Fix a point $p_i\in\Sing \mathscr{A}$. Denote by $t_i=\theta(p_i)$. Choose $\varepsilon>0$ sufficiently small such that no $t\in [t_i-\varepsilon, t_i+\varepsilon]\setminus \{t_i\}$ is of type \ref{en1} or \ref{en2}. Let: $$\Sigma^{(i-1)}:= \Sigma_{t_i-\varepsilon}, \quad \Sigma^{(i)}:=\Sigma_{t_i+\varepsilon}.$$ That is, $\Sigma^{(i-1)}$ lies to the right and $\Sigma^{(i)}$ to the left of $p_i$. Recall that $\Sigma^{(i-1)}(\mathbb{R})$ is an oriented real line, so by intersecting it with $\mathscr{A}(\mathbb{R})$ we can consider the elementary geometric base: $$\Gamma^{(i-1)}=(\gamma_1^{(i-1)},\gamma_2^{(i-1)},\ldots,\gamma_n^{(i-1)})\subset \Sigma^{(i-1)},$$ and similarly $$\Gamma^{(i)}=( \gamma_1^{(i)}, \gamma_2^{(i)}, \ldots, \gamma_n^{(i)}) \subset \Sigma^{(i)}.$$ A priori, we should take such geometric bases for every point $p_i$, but as there is no direction between $t_i$ and $t_{i+1}$ in which the geometric base changes, by continuity we will still write $\Gamma^{((i+1)-1)}={\Gamma^{(i)}}$.
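The total order on $\Sing\mathscr{A}$ by the angle $\theta$ is straightforward to compute. A minimal sketch (ours, not the paper's), assuming as above that $q$ lies to the lower right of every singular point, so that the angle of the line through $q$ and a finite singular point falls in $(\pi/2,\pi)$:

```python
import math

def line_angle(p, q):
    """Angle in [pi/2, 3*pi/2) of the (unoriented) line through the base
    point q and the point p, measured against the positive x-axis.

    Under the standing assumption that q lies to the lower right of every
    singular point (q[0] > p[0] and q[1] < p[1]), atan2 already lands in
    (pi/2, pi); the loops normalise any other direction modulo pi.
    """
    a = math.atan2(p[1] - q[1], p[0] - q[0])
    while a < math.pi / 2:
        a += math.pi
    while a >= 3 * math.pi / 2:
        a -= math.pi
    return a

def order_singular_points(points, q):
    """Sort the singular points by the total order p < p' iff theta(p) < theta(p')."""
    return sorted(points, key=lambda p: line_angle(p, q))
```

For $q=(2,-2)$ and the three points $(1,0)$, $(0,1)$, $(-1,0)$, sweeping counterclockwise from the vertical direction meets them in exactly that order.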
\begin{rem} In fact, only the points of type $\ref{en1}$ play a role in the presentation of $\pi_1(M(\mathscr{A}))$. The points of type \ref{en2} do not modify the meridians that are about to cross a point in $\Sing \mathscr{A}(\mathbb{R})$; they only change their numbering in the geometric base. These points are studied in Section \ref{subsection2.2} and they are needed for the explicit form of the exceptional meridians given in Section \ref{3}. \end{rem} As a simple example illustrating how $\Gamma^{(i-1)}$ and $\Gamma^{(i)}$ are related, we have the following Lemma, which can be found in \cite{Orlik}, Lemma 5.73 (with different notation). Let $\mathscr{A}^{\text{aff}}=\{L_1,L_2\}$ be a pencil of two lines in $\mathbb{C}^2$ defined over $\mathbb{R}$ and $x_i=\Sigma^{(0)}(\mathbb{R})\cap L_i(\mathbb{R})$ for $i=1,2$ as in \ref{randellspencil}. \begin{lem}\label{looppassing} Consider $\Gamma^{(0)}=(\gamma_1^{(0)},\gamma_2^{(0)})\subset \Sigma^{(0)}$ the elementary geometric base associated to $P=\{x_1,x_2\}$ and suppose $q<x_1<x_2$. Then $\Gamma^{(1)}=(\gamma_1^{(1)},\gamma_2^{(1)})$ can be represented in $\Sigma^{(0)}$ and is given by $\Gamma^{(1)}=({\gamma_1^{(0)}}^{-1}\gamma_2^{(0)}{\gamma_1^{(0)}},\gamma_1^{(0)})$. (See Figure \ref{fig:loop}).
\end{lem} \begin{figure}[ht] \centering \begin{tikzpicture} \draw (0,0) to[out=0,in=180]node[below]{$\gamma_1^{(0)}$} (1,0) ; \draw [arc arrow=to pos 0.25 with length 2mm] (1,0) to[out=-90,in=-90] (2.5,0) [arc arrow=to pos 0.75 with length 2mm] to[out=90,in=90] cycle; \draw (0,0)node [below]{$q$}; \draw (1.75,0) node [below] {$x_1$}; \draw (3.75,0) node [below]{$x_2$}; \draw (0,0) to[out=-45,in=-135] node[midway,below]{$\gamma_2^{(0)}$} (3,0); \draw (0,0) to[out=45,in=135]node [midway,above]{$\gamma_1^{(1)}$} (3,0) ; \draw [arc arrow=to pos 0.25 with length 2mm] (3,0) to[out=-90,in=-90] (4.5,0) [arc arrow=to pos 0.75 with length 2mm] to[out=90,in=90] cycle; \foreach \Point in {(0,0),(1.75,0), (3.75,0)}{ \node at \Point {\textbullet}; } \end{tikzpicture} \caption{Loop passing} \label{fig:loop} \end{figure} We can interpret this Lemma as a loop passing a vertex, picking up a conjugation, together with a reordering of the lines as in Figure \ref{fig:conjugates}. The next natural example would be a pencil of $n$ lines, but in fact this is locally the general case, as we will see in the following proposition. Let $p_i\in \Sing \mathscr{A}(\mathbb{R})$, and let $\Gamma^{(i-1)}$ and $\Gamma^{(i)}$ be as before. \begin{prop}\label{1eq} Let $j$ be the first index for which the meridian $\gamma_j^{(i-1)}$ surrounds a line which passes through $p_i$ and let $k$ be the last such index. Then we have: $$\Gamma^{(i)}=(\gamma_1^{(i-1)},\ldots,\gamma_{j-1}^{(i-1)},{\gamma_j^{(i)}},\ldots,{\gamma_k^{(i)}},\gamma_{k+1}^{(i-1)},\ldots,\gamma_n^{(i-1)}), $$ where: \begin{align*} {\gamma_{k}^{(i)}}&=\gamma_{j}^{(i-1)}, \\ \gamma_{k-1}^{(i)}&={\gamma_{j}^{(i-1)}}^{-1}\gamma_{j+1}^{(i-1)}\gamma_{j}^{(i-1)},\\ &\ \ \!\vdots \\ \gamma_{j}^{(i)}& ={\gamma_{j}^{(i-1)}}^{-1}{\gamma_{j+1}^{(i-1)}}^{-1}\cdots{\gamma_{k-1}^{(i-1)}}^{-1}\gamma_{k}^{(i-1)}\gamma_{k-1}^{(i-1)}\cdots\gamma_{j+1}^{(i-1)}\gamma_{j}^{(i-1)}.
\end{align*} Moreover, we obtain a set of relations in $\pi_1(M(\mathscr{A}),q)$:\footnote{These relations are stated as in \cite{Falk} p.142, where in a footnote he points to an error of \cite{Randell}.} \begin{equation}\label{rpi} R_{p_i}=\left\lbrace\gamma_{k}^{(i-1)}\gamma_{k-1}^{(i-1)}\cdots\gamma_{j}^{(i-1)} = \gamma_{\sigma(k)}^{(i-1)}\gamma_{\sigma(k-1)}^{(i-1)}\cdots\gamma_{\sigma(j)}^{(i-1)}=\ldots \right\rbrace \end{equation} where $\sigma$ runs over the set of cyclic permutations of $k-j+1$ elements. \end{prop} \begin{figure}[ht] \centering \begin{tikzpicture} \draw (0,0) node [below] {$p_i$}; \draw (-1.5,1) node[left] {$\gamma_{k}^{(i)}$ } -- (1.5,-1) node[right] { $\gamma_{j}^{(i-1)}$}; \draw (-1.5,.3) node[left] {$\gamma_{k-1}^{(i)}$ } -- (1.5,-0.3) node[right] { $\gamma_{j+1}^{(i-1)}$}; \draw (-1.5,-.3)node[left] {$\vdots$ } --(1.5,0.3) node[right] { $\vdots$}; \draw (-1.5,-1)node[left] {$\gamma_{j}^{(i)}$ } --(1.5,1) node[right] { $\gamma_{k}^{(i-1)}$}; \draw [red](-1.5,1.2)node[above]{$\Sigma^{(i)} $} -- (-1.5,-1.2); \draw [red](1.5,1.2)node[above] {$\Sigma^{(i-1)} $} -- (1.5,-1.2); \end{tikzpicture} \caption{Conjugates} \label{fig:conjugates} \end{figure} Expressing every meridian in terms of the geometric base $\Gamma^{(0)}$ by means of Proposition \ref{1eq} and substituting into (\ref{rpi}), we obtain: \begin{thm}[\cite{Randell}]\label{presentationRandell} The fundamental group of $M(\mathscr{A})$ admits the presentation: $$\pi_1(M(\mathscr{A}),q)\cong\left\langle \gamma_1^{(0)},\gamma_2^{(0)},\ldots, \gamma_n^{(0)}\biggm| \bigcup_ {p_i \in\Sing(\mathscr{A}(\mathbb{R}))}R_{p_i} \right\rangle. $$ \end{thm} \subsection{Meridians crossing a point at infinity}\label{subsection2.2} Let us describe the change in the geometric base when it traverses a singular point at infinity. Let $p_i\in \Sing \mathscr{A}\cap L_\infty$ and $\Gamma^{(i-1)}\subset \Sigma^{(i-1)}$ and $\Gamma^{(i)}\subset \Sigma^{(i)}$ be given as in Section \ref{3.1.3}.
That is: $$\Gamma^{(i-1)}=( \gamma_1^{(i-1)},\ldots, {\gamma_{n-k+1}^{(i-1)}}, \ldots, {\gamma_n^{(i-1)}}),$$ and $$\Gamma^{(i)}= (\gamma_1^{(i)},\ldots, \gamma_k^{(i)},\gamma_{k+1}^{(i)},\ldots, \gamma_n^{(i)}).$$ \begin{prop}\label{Prop:3.5} Assume that there are exactly $k$ parallel lines in $\mathscr{A}(\mathbb{R})$ whose corresponding lines in $\mathscr{A}$ intersect at $p_i$. Then these lines are associated to the last $k$ meridians ${\gamma_{n-k+1}^{(i-1)}}, \ldots, {\gamma_n^{(i-1)}}$ of $\Gamma^{(i-1)}$. \end{prop} \begin{proof} Let $t_i=\theta(p_i)$ and let $\Sigma_{t_i}$ be the line passing through $q$ and $p_i$. Using the order of the real lines, write $\Sigma^{(i-1)}(\mathbb{R})\cap \mathscr{A}(\mathbb{R})=\{y_1,\ldots,y_n\}$; as no point of $\Sing \mathscr{A}$ different from $p_i$ lies in $\Sigma_{t_i}$, we have that $\Sigma_{t_i}\cap \mathscr{A}(\mathbb{R})=\{x_1,\ldots,x_{n-k}\}$. In fact, it must be the case that $x_i$ and $y_i$ lie on the same line of $\mathscr{A}(\mathbb{R})$; otherwise a point of type \ref{en1} or \ref{en2} would lie between $\Sigma^{(i-1)}$ and $\Sigma_{t_i}$, which cannot happen. \end{proof} \begin{cor} We have the following identifications in $\Sigma_{t_i}$: \begin{align}\label{eq:gbinf1} \gamma_{k+1}^{(i)}= {\gamma_1^{(i-1)}}, \quad \ldots \quad ,\gamma_n^{(i)}={\gamma_{n-k}^{(i-1)}}. \end{align} \end{cor} \begin{proof} As we are turning counterclockwise, by the orientation given to $\Sigma^{(i)}(\mathbb{R})$ it will intersect first the $k$ parallel lines associated to $p_i$ and then, by the same argument as in Proposition \ref{Prop:3.5}, the point in position $k+j$ of $\Sigma^{(i)}(\mathbb{R})\cap \mathscr{A}(\mathbb{R})$ lies on the same line as $x_j$. \end{proof} \begin{prop} The last $k$ meridians in $\Gamma^{(i-1)}$ invert their order to occupy the first $k$ places of $\Gamma^{(i)}$. In doing so, a conjugation of all the preceding meridians is needed (see Figure \ref{baseinfty}).
More precisely we have: \begin{align}\label{eq:gbinf2} \gamma_{k}^{(i)}&={\gamma_1^{(i-1)}}^{-1}\cdots {\gamma_{n-k}^{(i-1)}}^{-1}\ {\gamma_{n-k+1}^{(i-1)}}\ {\gamma_{n-k}^{(i-1)}}\cdots {\gamma_1^{(i-1)}},\nonumber \\ \gamma_{k-1}^{(i)}&={\gamma_1^{(i-1)}}^{-1}\cdots {\gamma_{n-k+1}^{(i-1)}}^{-1}\ {\gamma_{n-k+2}^{(i-1)}}\ {\gamma_{n-k+1}^{(i-1)}}\cdots {\gamma_1^{(i-1)}},\nonumber \\ &\ \ \! \vdots \nonumber\\ \gamma_{1}^{(i)}&={\gamma_1^{(i-1)}}^{-1}\cdots {\gamma_{n-1}^{(i-1)}}^{-1}\ {}\gamma_{n}^{(i-1)}\ {\gamma_{n-1}^{(i-1)}}\cdots {\gamma_1^{(i-1)}}. \end{align} \end{prop} \begin{proof} By repeated iterations of Lemma \ref{looppassing} we obtain (\ref{eq:gbinf2}). The result follows by unicity of the elementary geometric base. \end{proof} \begin{figure}[ht] \centering \begin{tikzpicture} \draw (0,0) to[out=0,in=180]node[above]{$^{(i)}\gamma_1$} (1,0) ; \draw [arc arrow=to pos 0.25 with length 2mm] (1,0) to[out=-90,in=-90] (2.5,0) [arc arrow=to pos 0.75 with length 2mm] to[out=90,in=90] cycle; \draw (0,0)node [below]{$q$}; \draw (1.75,0) node [below] {$x_1$}; \draw (3.75,0) node [below]{$x_2$}; \draw (5.75,0) node [below]{$x_3$}; \draw (7.75,0) node [below]{$x_4$}; \draw (0,0) to[out=-45,in=-135] node[midway,below]{$^{(i)}\gamma_2$} (3,0); \draw (0,0) to[out=45,in=135] node [midway,right]{$\gamma_2^{(i)}$} (5,0) ; \draw (0,0) to[out=45,in=135]node [midway,above]{$\gamma_1^{(i)}$} (7,0) ; \draw [arc arrow=to pos 0.25 with length 2mm] (3,0) to[out=-90,in=-90] (4.5,0) [arc arrow=to pos 0.75 with length 2mm] to[out=90,in=90] cycle; \draw [arc arrow=to pos 0.25 with length 2mm] (5,0) to[out=-90,in=-90] (6.5,0) [arc arrow=to pos 0.75 with length 2mm] to[out=90,in=90] cycle; \draw [arc arrow=to pos 0.25 with length 2mm] (7,0) to[out=-90,in=-90] (8.5,0) [arc arrow=to pos 0.75 with length 2mm] to[out=90,in=90] cycle; \foreach \Point in {(0,0),(1.75,0), (3.75,0), (5.75,0), (7.75,0)}{ \node at \Point {\textbullet}; } \end{tikzpicture} \caption{Loops crossing a point 
\ref{en2}.} \label{baseinfty} \label{loop} \end{figure} \subsection{Loops around singular points}\label{3} Consider an arrangement $\mathscr{A}$ defined over the reals as in the preceding section. We have a canonical way of associating an elementary geometric base to every line $\Sigma_t$ passing through $q$ with $t\in[\pi/2,3\pi/2[$, as in \ref{randellspencil}. We will write the elementary geometric base over the directions of the points of types \ref{en1} and \ref{en2} in terms of the elements of $\Gamma^{(i)}$. This can be seen as finding elementary loops for the points in $\Sing\mathscr{A}$, which can be divided into the points at finite distance $\Sing \mathscr{A}^{\text{aff}}\cong \Sing \mathscr{A}(\mathbb{R})$ and the points at infinity $\Sing \mathscr{A} \cap L_\infty$. \begin{lem}\label{lineatinfinity} The inverse of a meridian loop around the line at infinity $L_\infty$ at the point $L_\infty\cap \Sigma^{(i)}$ is given by the product of the elements of the elementary base $\Gamma^{(i)}$, that is: $$(\gamma_{\infty}^{(i)})^{-1}=\gamma_n^{(i)}\cdot\gamma_{n-1}^{(i)}\cdots \gamma_1^{(i)} $$ \end{lem} \begin{proof} This is a simple consequence of the definition of geometric base and the choice of the base point. \end{proof} This meridian can be seen as an \emph{elementary} loop, as it is a product of loops of this type. Recall that $t_i=\theta(p_i)$ denotes the angle of the line $\Sigma_{t_i}$ containing $p_i$ and $q$. \begin{defn} A meridian $\gamma_{p_i}'$ \emph{around a singular point} $p_i\in\Sing \mathscr{A}(\mathbb{R})$ is a meridian of $p_i\in \Sigma_{t_i}$ (based at $q$). \end{defn} We can consider the \emph{elementary} meridian $\gamma_{p_i}$ as the elementary loop of $p_i$ in $\Sigma_{t_i}$ (based at $q$). With the notation of Proposition \ref{1eq} we have: \begin{lem}\label{finite} The elementary meridian $\gamma_{p_i}$ can be obtained as the product of the elements of $\Gamma^{(i-1)}$ which surround the lines passing through $p_i$.
Namely $$\gamma_{p_i}=\gamma_{k}^{(i-1)}\gamma_{k-1}^{(i-1)}\cdots\gamma_{j+1}^{(i-1)} \gamma_{j}^{(i-1)}.$$ \end{lem} \begin{proof} An elementary geometric base $\Gamma$ is constructed in such a way that the product of $k-j+1$ consecutive elements $(\gamma_{j},\ldots,\gamma_{k})$ of $\Gamma$ equals an elementary loop $C_q$, where $C$ is a counterclockwise oriented simple closed curve whose bounded region contains exactly $\{x_{j},\ldots,x_{k}\}$. \end{proof} \noindent Next we determine the meridians around multiple points lying on the line at infinity. Let $p_i\in \Sing\mathscr{A}\cap L_\infty$. Consider the line $\Sigma_{t_i}$ passing through $q$ and $p_i$. Suppose there are exactly $k$ lines in $\mathscr{A}$ different from $L_\infty$ passing through $p_i$; then their real points are lines parallel to $\Sigma_{t_i}(\mathbb{R})$ in $\mathscr{A}(\mathbb{R})$. As in Section \ref{subsection2.2} we have: $$\Sigma_{t_i}(\mathbb{R})\cap \mathscr{A}(\mathbb{R})=\{x_1,\ldots,x_{n-k}\}, $$ with $k\geq 1$ depending on $i$. The order of the points $x_i$ is given by the orientation of $\Sigma_{t_i}(\mathbb{R})$. Hence we can take the elementary geometric base $\Gamma_{t_i}\subset \Sigma_{t_i}$ associated with $P=\{x_1,\ldots,x_{n-k}\}$ and the base point $q$. Suppose $\Gamma_{t_i}=(\gamma_1,\ldots,\gamma_{n-k})$. \begin{defn} A meridian $\gamma_{p_i}$ around a singular point at infinity $p_i\in \Sing \mathscr{A}\cap L_\infty$ is a meridian at infinity of $\Sigma_{t_i}$. \end{defn} \begin{lem}\label{meridian3} Let $\Gamma^{(i-1)}=(\gamma_1^{(i-1)},\ldots, \gamma_n^{(i-1)})$ be as in Section \ref{3.1.3}.
For every point $p_i\in \Sing\mathscr{A}\cap L_\infty$ the elementary meridian $\gamma_{p_i}$ is given by either of the equivalent expressions \begin{equation}\label{eq:6} \gamma_{p_i}=\gamma_\infty^{(i-1)}\cdot\gamma_n^{(i-1)}\cdots\gamma_{n-k+1}^{(i-1)}, \end{equation} or \begin{equation}\label{eq:7} \gamma_{p_i}^{-1}=\gamma_{n-k}^{(i-1)}\cdots\gamma_1^{(i-1)} . \end{equation} \end{lem} \begin{rem} In (\ref{eq:6}) a similarity with the formula of Lemma \ref{finite} can be observed: the product of the meridians of the lines crossing the point $p_i$ gives the meridian. In (\ref{eq:7}) we simply compute the meridian around the point at infinity in the line $\Sigma_{t_i}$, so it is closer to Lemma \ref{lineatinfinity}. \end{rem} \begin{proof} As no other point of $\Sing \mathscr{A}$ lies in $\Sigma_{t_i}$, by continuity, Proposition \ref{Prop:3.5} and the uniqueness of the elementary geometric base, we have that $$\Gamma_{p_i}=(\gamma_1^{(i-1)},\ldots, \gamma_{n-k}^{(i-1)}),$$ and by applying Lemma \ref{lineatinfinity} we obtain (\ref{eq:7}). In $\Sigma^{(i-1)}$ we have \begin{equation*} \gamma_\infty^{(i-1)}=( \gamma_1^{(i-1)})^{-1}\cdots (\gamma_n^{(i-1)})^{-1}, \end{equation*} therefore for the right hand side of (\ref{eq:6}) \begin{equation*} \gamma_\infty^{(i-1)}\cdot\gamma_n^{(i-1)}\cdots\gamma_{n-k+1}^{(i-1)}=(\gamma_1^{(i-1)})^{-1}\cdots (\gamma_{n-k}^{(i-1)})^{-1}, \end{equation*} which equals $\gamma_{p_i}$ by (\ref{eq:7}). \end{proof} \begin{rem} By the results of this Section we have obtained singular meridians for every point $p\in \Sing \mathscr{A}$ with $\mathscr{A}$ defined over the reals. In \cite{garber} Garber generalizes a formula of Fujita \cite{fujita} expressing locally the singular meridians as the product of the meridians of the irreducible components at the singular point. He then uses this result globally when the lines intersect transversally, that is, when there is no additional conjugation.
Our method can be seen as a generalization of this, allowing multiple points of higher order. \end{rem} \section{LAC Surfaces} \label{sec:Lac} \subsection{Construction}\label{subsection4.2} We will construct surfaces generalizing the complement of a hyperplane arrangement and obtain presentations of their fundamental groups. Fix $\mathscr{A}=\{L_1,\ldots,L_k\}$ an arrangement of lines in $\mathbb{P}^2$. Let $\bar{X}$ be the blow up of $\mathbb{P}^2$ at $\Sing \mathscr{A}=\{p_1,\ldots, p_s\}$ and $\pi:\bar{X}{\to} \mathbb{P}^2$ the projection map. Denote by $D_1,\ldots, D_k$ the strict transforms of the lines $L_1,\ldots, L_k$ and by $D_{k+1},\ldots, D_{k+s}$ the exceptional divisors associated to the points $p_1,\ldots, p_s$. Given a subset $I\subset \{1,\ldots, k+s\}$ we can define the orbifold $\mathcal{X}(\bar{X},D,r_I)$ associated to the divisor $D=\sum D_i$ and the weights $r_I=(r_1,\ldots, r_{k+s})$ where $r_i=1$ if $i \in I$ and $r_i=+\infty$ if $i\not\in I$. Then $\mathcal{X}(\bar{X},D,r_I)=\bar{X}\setminus (D_I)_\infty$, where we have written $D_I$ for $D$ to emphasize the dependence on $I$. \begin{defn} We call $\bar{X}\setminus (D_I)_\infty$ a \emph{(partial) Linear Arrangement Compactification} or LAC surface. \end{defn} \begin{rem}\label{rema3} If $I=\varnothing$, then $(D_I)_\infty = D$ and $\pi$ restricted to $\bar{X}\setminus D$ is a biholomorphism onto $M(\mathscr{A})$, from which it follows that \begin{equation}\label{bihfunda} \pi_1(\bar{X}\setminus D)\cong \pi_1(M(\mathscr{A})), \end{equation} showing that these surfaces are indeed generalizations of the complement of an arrangement. \end{rem} \subsection{Reduced LAC Surfaces} In \cite{Eyssidieux} a comment before Proposition 1.3 mentions that the log pair $(\bar{X},D)$ has to be rigid if one wants the fundamental group to be very different from that of $\bar{X}\setminus D$.
We prove here that we can reduce the study of LAC surfaces to those that partially compactify only with respect to exceptional divisors, that is, where the irreducible components of $D$ with weight $1$ are exceptional divisors. We do so by showing that if the strict transform of a line $L_i$ has weight $1$, then we can find an arrangement of fewer lines whose associated LAC surface has the same fundamental group. In this process the double points lying on the line that we have removed become isolated points, and we must allow blowing them up as well in order to cover the case when this exceptional divisor had weight $1$ in $\mathscr{A}$. With this in mind we have the following definition. \begin{defn} A \emph{LAC datum} is a triple $$(\mathscr{A},S,I):= (\mathscr{A}=\{L_1,\ldots, L_k\}\subset \mathbb{P}^2, S=\{p_1,\ldots,p_s\}\subset \mathbb{P}^2, I\subset \{1,\ldots,k+s\}) $$ where $\mathscr{A}$ is an arrangement of lines in $\mathbb{P}^2$, $S$ a finite set of points and $I$ an index set. \end{defn} Given a LAC datum $(\mathscr{A},S,I)$ we can construct a surface $\bar{X}\setminus (D_I)_\infty$ as in \ref{subsection4.2}. Consider $\bar{X}$ the blow up of $\mathbb{P}^2$ in the points $S$, call $D_1,\ldots, D_k$ the strict transforms of the lines in $\mathscr{A}$, $D_{k+1},\ldots, D_{k+s}$ the exceptional divisors, and $(D_I)_\infty=\sum_{j\in J}D_j$ where $J:=\{1,\ldots, k+s\} \setminus I$. As we can change the arrangement and the set of points to blow up, we prefer the notation $M(\mathscr{A},S,I)$ for this surface. \begin{defn} Two LAC data $(\mathscr{A},S,I)$, $(\mathscr{A}', S', I')$ are said to be equivalent if and only if $$\pi_1(M(\mathscr{A},S,I))\cong \pi_1(M(\mathscr{A}',S',I')).$$ In such a case we write $(\mathscr{A},S,I)\sim(\mathscr{A}', S', I')$. \end{defn} \begin{defn} A LAC datum $(\mathscr{A},S, I)$ such that $S\subset \Sing \mathscr{A}$ and $I=S$ is called \emph{reduced}. In this case we write $(\mathscr{A},I)$.
\end{defn} \begin{thm}\label{reducedLac} For every LAC datum $(\mathscr{A}, S, I)$ there is a canonical equivalent reduced LAC datum $(\mathscr{A}',I')$. \end{thm} \noindent We first need to prove three reduction Lemmas. \begin{lem}\label{RL1} Let $(\mathscr{A}, S,I)$ be a LAC datum. Suppose there exists $L_i\in \mathscr{A}$ such that $i\in I$. Then $$(\mathscr{A}, S,I) \sim (\mathscr{A}\setminus \{L_i\}, S,I\setminus \{i\}) .$$ \end{lem} \begin{proof} Denote by $\bar{X}=\Bl_S \mathbb{P}^2$. As $M(\mathscr{A},S,I)=\bar{X}\setminus (D_I)_\infty$ and $$\{1,\ldots,s+k\}\setminus I = \{1,\ldots,\hat{i},\ldots,s+k\}\setminus (I\setminus \{i\}), $$ denoting by $(D'_{I\setminus \{i\}})_\infty$ the divisor to be removed given by $(\mathscr{A}\setminus \{L_i\},S, I\setminus \{i\})$, we have that $$(D_I)_\infty=(D'_{I\setminus \{i\}})_\infty, $$ which implies $$M{(\mathscr{A}, S, I)}= M(\mathscr{A}\setminus \{L_i\},S,I\setminus \{i\}). $$ \end{proof} \noindent So we can suppose $I\subset \{1,\ldots, s\}$. The next step is to consider points lying outside $\mathscr{A}$. \begin{lem}\label{RL2} Let $(\mathscr{A},S,I)$ be a LAC datum such that there is $p_j\in S$ that lies in no line of $\mathscr{A}$. \begin{enumerate} \item If $j\in I$ then $$ (\mathscr{A},S,I) \sim (\mathscr{A},S\setminus \{p_j\}, I\setminus \{j\}).$$ \item If $j\not\in I$ then $$ (\mathscr{A},S,I) \sim (\mathscr{A},S\setminus \{p_j\}, I).$$ \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item The surface $M(\mathscr{A},S,I)$ is the blow up of $M(\mathscr{A},S\setminus \{p_j\}, I\setminus \{j\}) $ at the point $p_j$; as the fundamental group is invariant under blow ups, we obtain the claim.
\item We have a biholomorphism, given by restricting the blow up map of $M(\mathscr{A}, S\setminus\{p_j\},I)$ at $p_j$ to the complement of the exceptional divisor, $$ M(\mathscr{A},S,I) \overset{\sim}{\to} M(\mathscr{A}, S\setminus\{p_j\},I)\setminus \{p_j\}, $$ but as $$ \pi_1(M(\mathscr{A}, S\setminus\{p_j\},I)\setminus \{p_j\})\cong \pi_1(M(\mathscr{A}, S\setminus\{p_j\},I)),$$ the result follows. \end{enumerate} \end{proof} \noindent The last reduction Lemma can be divided into two parts. In the first case we show that it is only interesting when we blow up a point and do not remove the exceptional divisor. In the second part, a point $p_j\in S$ that is a smooth point of $\mathscr{A}$ does not affect the fundamental group in either case $j\in I$ or $j\not\in I$. By the last Lemma we can assume that every point in $S$ lies in the arrangement $\mathscr{A}$. \begin{lem}\label{RL3} Let $p_j\in S\subset \mathscr{A}$. \begin{enumerate} \item If $j\not\in I$ then $$ (\mathscr{A},S,I) \sim (\mathscr{A},S\setminus \{p_j\}, I).$$ \item Suppose $p_j\in L$ for some line $L\in \mathscr{A}$. If $j\in I$ and $p_j\not\in \Sing \mathscr{A}$, then $$ (\mathscr{A},S,I) \sim (\mathscr{A}\setminus \{L\},S\setminus\{p_j\}, I\setminus \{j\}).$$ \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item Let $\bar{X}=\Bl_{S\setminus \{p_j\}}\mathbb{P}^2$ and $\bar{Y}=\Bl_{p_j} \bar{X}$. In $\bar{X}$ we have $$(D_I)_\infty=\sum D_r \text{ with } r \in S\setminus (I\cup \{p_j\}), $$ while in $\bar{Y}$ $$(D'_I)_\infty= (D_I)_\infty+ D'_j, $$ where we also denote by $(D_I)_\infty$ the strict transform of the divisor with the same notation in $\bar{X}$. Therefore we have a biholomorphism $$\bar{Y}\setminus (D'_I)_\infty= M(\mathscr{A}, S, I) \overset{\sim}{\to} M(\mathscr{A},S\setminus\{p_j\},I)\setminus\{p_j\}=\bar{X}\setminus ((D_I)_\infty \cup\{p_j\})$$ and the result follows.
\item If $\gamma_{p_j}$ is a meridian at $p_j$ of $L$, then, as $p_j$ is a smooth point, it is also a meridian of the exceptional divisor $D_{k+j}$ in $M(\mathscr{A},S,I\setminus \{j\})$. As $D_{k+j}$ is smooth, $\gamma_{p_j}$ generates the kernel of $$\pi_1(M(\mathscr{A}, S, I\setminus \{j\}))\to \pi_1(M(\mathscr{A},S,I)),$$ hence \begin{equation}\label{eq:reduction} \pi_1(M(\mathscr{A}, S, I\setminus \{j\}))/\left\langle\left\langle\gamma_{p_j} \right\rangle\right\rangle \cong \pi_1(M(\mathscr{A},S,I)). \end{equation} By part 1 above we have that $ (\mathscr{A},S,I\setminus \{j\}) \sim (\mathscr{A},S\setminus \{p_j\}, I\setminus \{j\}).$ Substituting into (\ref{eq:reduction}) we obtain \begin{equation}\label{eq:reduction2} \pi_1(M(\mathscr{A}, S\setminus \{p_j\}, I\setminus \{j\}))/\left\langle\left\langle\gamma_{p_j} \right\rangle\right\rangle \cong \pi_1(M(\mathscr{A},S,I)). \end{equation} But $\gamma_{p_j}$ also generates the kernel of the map of fundamental groups induced by the inclusion $$M(\mathscr{A},S\setminus \{ p_j\},I\setminus\{ j\})\hookrightarrow M(\mathscr{A}\setminus L, S\setminus \{p_j\},I\setminus\{j\}),$$ therefore $$\pi_1(M(\mathscr{A}, S\setminus \{p_j\}, I\setminus \{j\}))/\left\langle\left\langle\gamma_{p_j} \right\rangle\right\rangle \cong \pi_1(M(\mathscr{A}\setminus \{L\},S\setminus \{p_j\}, I\setminus \{j\})), $$ which together with (\ref{eq:reduction2}) proves the statement. \end{enumerate} \end{proof} \begin{proof}[Proof of Theorem \ref{reducedLac} ] Given an arbitrary LAC datum $(\mathscr{A}, S, I)$, by Lemma \ref{RL1} we can suppose that $I\subset \{1,\ldots,s\}$. By Lemma \ref{RL2} all those points in $S$ not lying over $\mathscr{A}$ can also be discarded without changing the fundamental group. By Lemma \ref{RL3}(1), we remove from $S$ all points $p_j$ such that $j\not\in I$, so $S=I$; we will denote the LAC datum by $(\mathscr{A},S)$.
If there is a smooth point $p_j\in S$ such that $p_j\in L$ for some $L\in \mathscr{A}$, then by Lemma \ref{RL3}(2), $(\mathscr{A},S)\sim (\mathscr{A}\setminus \{L\}, S\setminus \{p_j\})$. This new LAC datum could also have smooth points lying in $S\setminus \{p_j\}$, either coming from $S$ or from double points in $\mathscr{A}$ lying on $L$. We repeatedly apply Lemma \ref{RL3}(2) until $I\subset \Sing \mathscr{A}$ or $\mathscr{A}=\emptyset$. As there are only a finite number of points and lines, this process must end, and we obtain an equivalent reduced LAC datum $(\mathscr{A}',I')$ as desired. \end{proof} \subsection{A presentation for the orbifold fundamental group} \begin{defn} Let $\mathscr{A}=\{L_1,\ldots,L_k\}$, $\bar{X}$ the blow up of $\mathbb{P}^2$ at $\Sing\mathscr{A}=\{p_1,\ldots, p_s\}$ and $D_i$ as in Section \ref{subsection4.2}. The divisor $D=\sum D_i$ is SNC and for $r \in (\mathbb{N}^*\cup \{\infty\})^{k+s}$ the orbifold $\mathcal{X}(\bar{X},D,r)$ is called a \emph{weighted LAC Surface}. \end{defn} \begin{thm}\label{MainT} Let $\mathscr{A}=\{L_1,\ldots,L_k\}$ be a real arrangement and $\mathcal{X}(\bar{X},D,r)$ a weighted LAC surface. Suppose we consider $L_k$ as the line at infinity and that $\mathscr{A}^{\text{aff}}$ has no vertical line. Choose a base point $q$ to the right of any vertex and a canonical elementary geometric base $\Gamma^{(0)}=(\gamma_1,\ldots,\gamma_{k-1})$ based at $q$, as in Section \ref{sec:FundG}. Let $\Sing \mathscr{A}=\{p_1,\ldots, p_s\}$ and let $\gamma_{p_j}$ be the elementary singular meridian around $p_j$.
Then the $\gamma_{p_j}$ can be expressed in terms of $\Gamma^{(0)}$ as in Lemmas \ref{finite} and \ref{meridian3}, and a presentation for $\pi_1(\mathcal{X}(\bar{X},D,r),q)$ is given by \begin{equation}\label{eq:Presentation} \pi_1(\mathcal{X}(\bar{X},D,r),q)=\left\langle \gamma_1,\ldots, \gamma_{k-1}\mid \bigcup_{p_l\in\Sing(\mathscr{A}(\mathbb{R}))}R_{p_l}, \gamma_i^{r_i},\gamma_{p_j}^{r_{k+j}}, \begin{array}{l} i=1,\ldots, k ,\\ j=1,\ldots, s \end{array} \right\rangle \end{equation} where we omit the relation $\gamma^{r_m}=1$ if $r_m=\infty$. \end{thm} \begin{proof} We first find a presentation for $\pi_1(\bar{X}\setminus D,q)$ and express the meridians around the $D_r$ in terms of the $\gamma_i$. As $\bar{X}\setminus D\cong M(\mathscr{A})$ by Remark \ref{rema3}, we obtain that $\gamma_i$ is a meridian of $D_i$ in $\bar{X}$, and by Theorem \ref{presentationRandell} we have the following presentation for $\pi_1(\bar{X}\setminus D,q)$: $$\pi_1(\bar{X}\setminus D,q)=\left\langle \gamma_1,\gamma_2,\ldots, \gamma_{k-1}\mid \bigcup_ {p\in\Sing(\mathscr{A}(\mathbb{R}))}R_p \right\rangle. $$ The elementary meridian $\gamma_k$ around $D_k$ is given by Lemma \ref{lineatinfinity} as $\gamma_k=(\gamma_{k-1}\cdots \gamma_1)^{-1}$. The meridians around the exceptional divisors $D_{k+j}$ are given by Lemmas \ref{finite} and \ref{meridian3} in the following way: $\gamma_{p_j}$ is a meridian around $p_j$ lying completely in the line $\Sigma_{t_j}$, so after the blow up this meridian lies in the strict transform of $\Sigma_{t_j}$, giving a meridian of $D_{k+j}$. Moreover, $\gamma_{p_j}$ is expressed in terms of $\Gamma^{(0)}$. By \cite{Eyssidieux} p.~3, dividing by the normal subgroup generated by $\gamma_i^{r_i},\gamma_{p_j}^{r_{k+j}}$ we obtain the presentation. \end{proof} \begin{cor}\label{corollary1} Let $(\mathscr{A},I)$ be a reduced LAC surface with $\mathscr{A}$ real.
A presentation for $\pi_1(\bar{X}\setminus (D_I)_\infty)$ is given by \begin{equation}\label{eq:corollary1} \pi_1(\bar{X}\setminus (D_I)_\infty)\cong\left\langle \gamma_1,\ldots, \gamma_{k-1}\mid \bigcup_{p_r\in\Sing(\mathscr{A}(\mathbb{R}))}R_{p_r},\gamma_{p_j}, \quad j\in I \right\rangle. \end{equation} \end{cor} \section{Applications}\label{sec:App} \subsection{LAC Surface with infinite fundamental group and finite abelianization}\label{subsectionExample} Consider a set $S$ of $4$ points in general position in $\mathbb{P}^2$. The arrangement $\mathcal{B}=\{L_1,\ldots, L_6\}$ of the $6$ lines connecting each pair of these points is called \emph{the complete quadrilateral}. It has $4$ triple points and $3$ double points: $\Sing \mathcal{B}=\{p_1,\ldots, p_7\}$, numbered as in Figure \ref{fig:Ceva2}. It is defined by the equation $(z_1^2-z_2^2)(z_1^2-z_3^2)(z_2^2-z_3^2)=0 $ in projective coordinates $(z_1:z_2:z_3)$. If we consider $L_6$ as the line at infinity, after a small rotation in order to have no vertical lines, we obtain the real picture in Fig. \ref{fig:Ceva2}.
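The combinatorics of $\Sing\mathcal{B}$ can be checked directly from the defining equation. The following sketch is our own verification (not part of the paper): the six lines are encoded by the integer coefficient vectors of $z_1\pm z_2$, $z_1\pm z_3$, $z_2\pm z_3$, and all pairwise intersections in $\mathbb{P}^2$ are computed via cross products, counting how many lines pass through each point.

```python
from itertools import combinations
from math import gcd

# Integer coefficient vectors of the six lines z1-z2, z1+z2, z1-z3,
# z1+z3, z2-z3, z2+z3 of the complete quadrilateral (our encoding).
LINES = [(1, -1, 0), (1, 1, 0), (1, 0, -1), (1, 0, 1), (0, 1, -1), (0, 1, 1)]

def cross(a, b):
    """Intersection point of the projective lines a.z = 0 and b.z = 0."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(p):
    """Canonical integer representative of a point of P^2."""
    g = gcd(gcd(abs(p[0]), abs(p[1])), abs(p[2]))
    p = tuple(c // g for c in p)
    for c in p:  # make the first nonzero coordinate positive
        if c != 0:
            return p if c > 0 else tuple(-x for x in p)

def singular_points(lines):
    """Map each multiple point to the number of lines through it."""
    through = {}
    for a, b in combinations(lines, 2):
        pt = normalize(cross(a, b))
        through.setdefault(pt, set()).update({a, b})
    return {pt: len(ls) for pt, ls in through.items()}
```

Running `singular_points(LINES)` finds exactly $7$ points with multiplicities $[2,2,2,3,3,3,3]$: the four triple points $(1:\pm 1:\pm 1)$ and the three double points at the coordinate vertices, matching the count stated above.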
\begin{figure}[ht] \centering \begin{tikzpicture}[scale=1.5] \draw (.8,-1.5) node [below] {\scriptsize $q$}; \node at (.4,0) [below]{\tiny $p_1 $}; \node at (0,.4)[below]{\tiny $p_2 $}; \node at (0,-.4)[below]{\tiny $p_3 $}; \node at (-.4,0)[below]{\tiny $p_4 $}; \node at (-1.4,1.4){\tiny $p_5 $}; \node at (-1.6,0){\tiny $p_6 $}; \node at (-1.4,-1.4){\tiny $p_7 $}; \draw (-1.5,0) -- (1.5,0) node[above,right]{\tiny $L_3$}; \draw (-1.5,-1.1) -- (1.1,1.5)node[above,right]{\tiny $L_5$}; \draw (-1.1,-1.5) -- (1.5,1.1)node[above,right]{\tiny $L_4$}; \draw (-1.5,1.1) -- (1.1,-1.5)node[above,right]{\tiny $L_1$}; \draw (-1.1,1.5) -- (1.5,-1.1)node[above,right]{\tiny $L_2$}; \draw [line width=0.05mm,red ] (.5,1.5) node [above]{\tiny {$\Gamma^{(0)}$}} -- (.8,-1.5); \draw [line width=0.05mm,red ] (-.3,1.5)node [above]{\tiny {$\Gamma^{(1)}$}} -- (.8,-1.5); \draw [line width=0.05mm,red ] (.8,-1.5) -- (-1,1.5) node [above]{\tiny {$\Gamma^{(2)}$}}; \draw [line width=0.05mm,red ] (.8,-1.5) -- (-1.5,1.5) node [above]{\tiny {$\Gamma^{(3)}$}}; \draw [line width=0.05mm,red ] (.8,-1.5) -- (-1.5,1) node [left]{\tiny {$\Gamma^{(4)}$}}; \draw [line width=0.05mm,red ] (.8,-1.5) -- (-1.5,.1) node [left]{\tiny {$\Gamma^{(5)}$}}; \draw [line width=0.05mm,red ] (1.5,-331/230) -- (-1.5,-1.7) node [left]{\tiny {$\Gamma^{(6)}$}}; \foreach \Point in {(.4,0),(-.4,0),(0,.4),(0,-.4)}{ \node at \Point {$\cdot$}; } \end{tikzpicture} \caption{Complete quadrilateral.} \label{fig:Ceva2} \end{figure} \noindent By the subsections \ref{subsection2.2} and \ref{3} we have that the elementary geometric base (up to homotopy in $\pi_1(\mathbb{P}^2\setminus \mathcal{B},q)$ and replacing $\gamma_i^{(0)}$ by $x_i$) are \begin{equation}\label{eq:ceva} \begin{aligned} \Gamma^{(0)}&= (x_1,x_2,x_3,x_4,x_5)\\ \Gamma^{(1)}&= (x_1,x_4, x_3^{x_2},x_2,x_5)\\ \Gamma^{(2)}&= (x_1,x_4, x_3^{x_2},x_5,x_2)\\ \Gamma^{(3)}&= (x_4,x_1, x_3^{x_2},x_5,x_2)\\ \Gamma^{(4)}&= (x_4,x_5, x_3^{x_2x_1},x_1,x_2)\\ \Gamma^{(5)}&= 
(x_2^{x_3x_2x_1x_5x_4},x_1^{x_3^{x_2}x_1x_5x_4},x_4,x_5,x_3^{x_2x_1})\\ \Gamma^{(6)}&=(x_3^{a^{-1}x_2x_1a},x_2^{x_3x_2x_1x_5x_4},x_1^{x_3^{x_2}x_1x_5x_4},x_4,x_5) \end{aligned} \end{equation} where $$a=(x_2x_1)^{x_3x_2x_1}x_5x_4. $$ By Theorem \ref{presentationRandell} we obtain the following presentation: \begin{equation}\label{quadrilateralpres} G=\pi_1(\mathbb{P}^2\setminus \mathcal{B},q)\cong\left\langle x_1, \ldots, x_5 \mid [x_4,x_1], [x_5,x_2], [x_4,x_3,x_2], [x_5, x_3^{x_2}, x_1] \right\rangle \end{equation} which can easily be seen to be a semidirect product $\mathbb{F}_2 \ltimes \mathbb{F}_3$, where $\mathbb{F}_2=\left\langle x_4, x_5 \right\rangle$ and $\mathbb{F}_3:=\left\langle x_1,x_2,x_3 \right\rangle $. Let $\bar{X}$ denote the blow up of $\mathbb{P}^2$ at $\Sing \mathcal{B}$; to simplify notation, denote by $E_k=D_{6+k}$ the exceptional divisor coming from $p_k$. Consider the reduced LAC surface $M(\mathcal{B},I)$ where $I$ consists of three triple points and two double ones. The simplest case is $I=\{p_1,p_2,p_3,p_4,p_5\}$. \begin{thm}\label{Counterexample} The reduced LAC surface $M(\mathcal{B},I)$ has infinite fundamental group and finite $H_1$. \end{thm} \begin{proof} Consider the singular meridians $\gamma_{p_j}$ around $p_j$ for $j=1,\ldots, 5$, which by Lemmas \ref{finite} and \ref{meridian3} are given by \begin{equation}\label{eq:cevameridians} \begin{array}{lll} \gamma_{p_1} = x_4 x_3x_2, & \gamma_{p_3} = x_4 x_1, & \gamma_{p_5} = x_3^{x_2 x_1}x_5 x_4,\\ \gamma_{p_2} = x_5 x_2, & \gamma_{p_4} = x_5 x_3^{x_2} x_1. & \end{array} \end{equation} \noindent By Corollary \ref{corollary1} a presentation of $\pi_1(M(\mathcal{B},I))$ can be obtained as $$H:=\pi_1(M(\mathcal{B},I))=\pi_1(\mathbb{P}^2\setminus \mathcal{B},q)/ \langle\langle \gamma_{p_1}, \gamma_{p_2}, \gamma_{p_3}, \gamma_{p_4}, \gamma_{p_5} \rangle\rangle .
$$ Setting $\gamma_{p_2}=1$ and $\gamma_{p_3}=1$ we obtain $x_5=x_2^{-1}$ and $x_4=x_1^{-1}$; replacing them in (\ref{quadrilateralpres}) and (\ref{eq:cevameridians}), we obtain the presentation $$H=\langle x_1,x_2,x_3\mid [x_1^{-1},x_3,x_2], [x_2^{-1},x_3^{x_2},x_1], x_1=x_3x_2, x_2=x_3^{x_2}x_1, x_3^{x_2x_1}=x_1x_2 \rangle $$ By replacing $x_1$ by $x_3x_2$ the relation $[x_2^{-1}x_3^{-1}, x_3,x_2]$ becomes trivial. So we are left with: $$H=\langle x_2,x_3 \mid [x_2^{-1},x_3^{x_2},x_3x_2], \ x_2=x_3^{x_2}x_3x_2, \ x_3^{x_2x_3x_2}=x_3x_2x_2 \rangle $$ Writing down the relations explicitly: \begin{align} x_2^{-2}(x_3x_2)^2=x_2^{-1}x_3x_2x_3=x_3x_2^{-1}x_3x_2\\ x_2^2=(x_3x_2)^2 \label{x22x3x2}\\ (x_3x_2)^2=x_2(x_3x_2)^2x_2\label{x3x3} \end{align} Substituting (\ref{x22x3x2}) into (\ref{x3x3}) we obtain that $x_2^2=1$, and hence $(x_3x_2)^2=1$. Note that these two relations imply all the preceding ones. Therefore we obtain the presentation $$H=\langle x_2, x_3 \mid x_2^2=1, (x_3x_2)^2=1 \rangle$$ which can be seen either as $\mathbb{Z}/2\mathbb{Z}*\mathbb{Z}/2\mathbb{Z}$ or as $\mathbb{Z}/2\mathbb{Z}\ltimes \mathbb{Z}$; hence $H$ is infinite and its abelianization is finite. \end{proof} We can clarify this example geometrically by means of the following proposition. \begin{prop} There exists an orbifold morphism from $M(\mathcal{B},I)$ to $\mathcal{X}(\mathbb{P}^1,D,r)$ where $D=[0:1]+[1:-1]+[1:0]$ and $r=(2,+\infty,2)$. The morphism comes from a pencil of conics and induces an isomorphism between orbifold fundamental groups. \end{prop} \begin{proof} Consider a pencil $\mathscr{P}$ having $4$ fixed points in general position, which we may assume to be $S=\{p_1,p_4,p_5,p_7\}$. If we let $Q_1=(z_1^2-z_2^2)$, $Q_2=(z_1^2-z_3^2)$ and $Q_3=(z_2^2-z_3^2)$, we have that the complete quadrilateral $\mathcal{B}$ is given by $Q=Q_1 Q_2 Q_3=0 $. The pencil $\mathscr{P}$ can be written as $\mathscr{P}=aQ_1-bQ_2 $ with $a,b\in \mathbb{C}$ not both zero.
Note that $Q_3\in \mathscr{P}$, as $Q_3=Q_2 - Q_1$. This pencil defines a rational map $$f_{\mathscr{P}}:\mathbb{P}^2\to \mathbb{P}^1, \ (z_1:z_2:z_3)\mapsto (Q_1(z_1:z_2:z_3):Q_2(z_1:z_2:z_3)) $$ whose indeterminacy locus is $S$. By blowing it up, we obtain a regular map $\tilde{f}:\Bl_S \mathbb{P}^2 \to \mathbb{P}^1$ with fiber over $(a:b)$ the strict transform of $aQ_1-bQ_2$. As any point lying in two elements of the pencil is a fixed point of it, for any $x\in \mathbb{P}^2\setminus S$ there is a unique curve $C\in\mathscr{P}$ passing through it. In particular, for the double points $p_2\in \{z_1-z_2=0\}\cap\{z_1+z_2=0\}$ and $p_3\in \{z_1-z_3=0\}\cap\{z_1+z_3=0\}$ the curves are $Q_1$ and $Q_2$, respectively. This allows us to extend $\tilde{f}$ to the blow up of $\Bl_S\mathbb{P}^2$ at $p_2,p_3$ as $f:\Bl_{S\cup\{p_2,p_3\}}\mathbb{P}^2\to \mathbb{P}^1.$ We have that $f(E_2)=(1:0)$ and $f(E_3)=(0:1)$. Let $X=\Bl_{S\cup\{p_2,p_3\}}\mathbb{P}^2\setminus (Q \cup E_7)$. Note that $f|_X:X\to \mathbb{P}^1\setminus \{(1:1)\}$, as $f(Q_3)=(1:1)$. Moreover, $f|_X$ has double fibers over $(0:1)$ and $(1:0)$. For any other $(a:b)\in\mathbb{P}^1\setminus \{(1:1)\}$ the fiber is the strict transform of $aQ_1-bQ_2$ minus one point (corresponding to the intersection with $E_7$). The assertion about the double fibers can be seen by local computations: consider $\mathbb{P}^2$ and $\mathbb{P}^1$ with coordinates $(z_1:z_2:z_3)$ and $(u:v)$, respectively. Restricting to the standard open sets $W_3=\{z_3=1\}\subset \mathbb{P}^2$ and $V_2=\{v=1\}\subset \mathbb{P}^1$ we have that $\tilde{f}|_{W_3}=\frac{z_1^2-z_2^2}{z_1^2-1}$ with $z_1^2-1\not = 0$. Blowing up at $p_2=(0,0)$ and working in coordinates $(z_1,Z_2)$ (where $Z_2$ is the coordinate in $U_1\subset \mathbb{P}^1$) we have that $f|_{W_3\cap U_1}= z_1^2 \frac{1-Z_2^2}{z_1^2-1}$. Analogous computations for the other open sets and for $p_3$ show that the fibers are indeed double. The last part of the statement is then clear.
\end{proof} There is a modification of Dimca's suggestion that may still hold. \begin{ques} Let $X$ be a reduced LAC surface with $H_1(X)$ finite whose universal abelian cover has finite $H_1$. Is $\pi_1(X)$ finite? \end{ques} \subsection{Presentation for a weighted complete quadrilateral} By considering weighted LAC surfaces $\mathcal{X}(\bar{X},D,r)$ we can study the ramified covers of $\bar{X}$ over $D$. In the case where all the lines of $D$ have the same weight, Hirzebruch constructed a finite abelian cover in \cite{Hirzebruch}. If moreover we ask the cover to be a quotient of the ball, Deligne and Mostow have given weights (not necessarily equal) for this to hold \cite{DM}. Consider again the complete quadrilateral $\mathcal{B}=\{L_1,\ldots, L_6\}$ with the same notation as in Subsection \ref{subsectionExample}, and suppose $L_6$ is the line at infinity. Let $\tilde{X}=\Bl_S \mathbb{P}^2$ be the blow up of $\mathbb{P}^2$ at the four triple points $S=\{p_1,p_4,p_5,p_7\}$, and let $E_1, E_4, E_5, E_7$ be the respective exceptional divisors. Consider the elementary geometric base $\Gamma^{(0)}=(x_1,\ldots, x_5)$. A meridian $x_6$ for the line at infinity around the point $\Sigma^{(0)}\cap L_6$ (recall that $\Sigma^{(0)}$ is the line where $\Gamma^{(0)}$ lies) is given by Lemma \ref{lineatinfinity}: \begin{equation} x_6= (x_5x_4x_3x_2x_1)^{-1}. \end{equation} Denote by $\gamma_{p_i}$ the meridian around $E_i$. By Lemma \ref{finite}, using respectively the elementary geometric bases $\Gamma^{(0)}$ and $\Gamma^{(3)}$ of (\ref{eq:ceva}), we obtain: \begin{equation} \begin{aligned} y_1:=\gamma_{p_1}&=x_4x_3x_2 \\ y_2:=\gamma_{p_4}&= x_5 x_3^{x_2}x_1 \end{aligned} \end{equation} Finally, the meridians around the triple points lying in $L_6$ are given by Lemma \ref{meridian3} and the bases $\Gamma^{(4)}$ and $\Gamma^{(6)}$ of (\ref{eq:ceva}).
\begin{equation} \begin{aligned} y_3:=\gamma_{p_5}&=x_3^{x_2x_1}x_5x_4 \\ y_4:=\gamma_{p_7}&=x_4^{-1}x_5^{-1}ax_3^{a^{-1}x_2x_1a} \end{aligned} \end{equation} where $a=(x_2x_1)^{x_3x_2x_1}x_5x_4. $ \begin{prop} Let $\mathcal{B}$ be the complete quadrilateral, and let $\tilde{X}$, $\Gamma^{(0)}=(x_1,\ldots,x_5)$ and $y_i$ be as above. For any $r=(m_1,\ldots,m_4,n_1,\ldots, n_{6})\in (\mathbb{N}^*\cup\{+\infty\})^{10}$ as in \cite{Tretkoff}, p.~110, and $D=E_1+E_4+E_5+E_7+\sum_{i=1}^{6} L_i$, we have a presentation for the fundamental group of the ball quotient $\mathcal{X}(\tilde{X},D,r)$ given by $$\pi_1(\mathcal{X}(\tilde{X},D,r))=\left\langle x_1,\ldots, x_5 \mid [x_4,x_1], [x_5,x_2], [x_4,x_3,x_2], [x_5, x_3^{x_2}, x_1], x_i^{n_i}, y_i^{m_i} \right\rangle $$ \end{prop} \bibliographystyle{smfalpha}
\section{s-IFR(R) Ordering}\label{sec2} \hspace*{0.2 in}In this section we define and study the s-IFR(R) ordering. Under this ordering, the ratio of the hazard rates of $X_s$ and $Y_s$ is increasing, which means that $X_s$ ages faster than $Y_s$. We begin with the following definition. \begin{definition} \label{def2*1} For any positive integer $s$, $X$ (or its distribution $F_X$) is said to be more $s$-IFR(R) than $Y$ (or its distribution $F_Y$) (written as $X\leq_{s-IFR(R)}Y$) if the random variable $\Phi^s_{X,Y}\;\text{has an IFR distribution}$.\hfill $\Box$ \end{definition} \begin{remark} For $s=1$, Definition \ref{def2*1} gives $X\leq_{c}Y$, as discussed in Sengupta and Deshpande~\cite{sd2}.\hfill $\Box$ \end{remark} \hspace*{0.2 in}The following lemma may be found in Marshall and Olkin~(\cite{mo1}, Section~21(f), pp.~699--700). \begin{lem}\label{l2-1} Let $f(\cdot)$ and $g(\cdot)$ be two real-valued continuous functions, and let $\zeta(\cdot)$ be a strictly increasing (resp. decreasing) and continuous function defined on the range of $f$ and $g$. Then, for any real number $c>0$, $f(x)-cg(x)$ and $\zeta (f(x))-\zeta (cg(x))$ change sign in the same (resp. reverse) order, as $x$ traverses from left to right.\hfill $\Box$ \end{lem} \hspace*{0.2 in}In the following two propositions, we give some equivalent representations of the s-IFR(R) ordering. The second proposition can easily be verified by using Lemma \ref{l2-1} or Proposition~2.C.8 of Marshall and Olkin~\cite{mo1}, together with Proposition~\ref{pp61}(i).
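Before turning to these representations, a simple parametric illustration of Definition \ref{def2*1} may be helpful. The computation below is a sketch for $s=1$, assuming Weibull distributed lifetimes with cumulative hazards $\Lambda_{X,1}(x)=x^{\alpha}$ and $\Lambda_{Y,1}(x)=x^{\beta}$ for shape parameters $\alpha,\beta>0$ (the scale parameters, which do not affect the conclusion, are taken to be one):

```latex
% For Weibull laws with shape parameters \alpha, \beta > 0,
%   \Lambda_{X,1}(x) = x^{\alpha}, \qquad \Lambda_{Y,1}(x) = x^{\beta},
% the composition governing the definition above is
\begin{equation*}
  \Lambda_{X,1}\left(\Lambda^{-1}_{Y,1}(x)\right)
    = \left(x^{1/\beta}\right)^{\alpha}
    = x^{\alpha/\beta}, \qquad x\geq 0,
\end{equation*}
% which is convex if, and only if, \alpha/\beta \ge 1.
```

In particular, for $s=1$ the s-IFR(R) comparison of two Weibull lifetimes reduces to a comparison of their shape parameters: $X\leq_{1-IFR(R)}Y$ holds exactly when $\alpha\geq\beta$.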
\begin{pro}\label{pp61} For $s=2,3,\dots$, Definition \ref{def2*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] $\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(x)\right)$ is convex in $x\geq 0.$ \item [$(ii)$] $\frac{r_{X,s}(x)}{r_{Y,s}(x)}\;\text{is increasing in}\;x\geq0.$ \item [$(iii)$] $\frac{\mu_{X,s-1}(x)}{\mu_{Y,s-1}(x)}\;\text{is decreasing in}\;x\geq0.$ \item [$(iv)$] $\Lambda_{X,s}( Y_s)\;\text{has a DFR distribution.}$ \end{enumerate} \end{pro} {\bf Proof:} We have \begin{eqnarray} \Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(x)\right)&=&-\log\left(\overline T_{X,s}\overline T^{-1}_{Y,s}\left(e^{-x}\right)\right)\nonumber \\&=&-\log \overline F_{\Phi^s_{X,Y}}(x).\label{rre14} \end{eqnarray} Thus, $(i)$ follows from Definition \ref{def2*1}, and conversely. Again, Definition \ref{def2*1} can equivalently be written as \begin{eqnarray}\label{rre15} r_{\Phi^s_{X,Y}}(x)=\left(\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}\overline T^{-1}_{Y,s}\left(e^{-x}\right)}{\overline T_{X,s}\overline T^{-1}_{Y,s}\left(e^{-x}\right)}\right)\left(\frac{e^{-x}}{\overline T_{Y,s-1}\overline T^{-1}_{Y,s}\left(e^{-x}\right)}\right) \end{eqnarray} is increasing in $x\geq0$, which holds if, and only if, $$\left(\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}\overline T^{-1}_{Y,s}\left(u\right)}{\overline T_{X,s}\overline T^{-1}_{Y,s}\left(u\right)}\right)\left(\frac{u}{\overline T_{Y,s-1}\overline T^{-1}_{Y,s}\left({u}\right)}\right)\;\text{is decreasing in}\;u\in(0,1],$$ or equivalently, $$ \frac{r_{X,s}(x)}{r_{Y,s}(x)}\;\text{is increasing in}\;x\geq0.$$ This gives the equivalence of Definition \ref{def2*1} and $(ii)$. The one-to-one connection between $(ii)$ and $(iii)$ follows from (\ref{mre5}). 
Note that $(iv)$ holds if, and only if, $$\frac{r_{Y,s}(x)}{r_{X,s}(x)}\;\text{is decreasing in}\;x\geq0,$$ which is equivalent to $(ii)$.\hfill $\Box$ \begin{remark} The equivalences of $(i)$, $(ii)$ and $(iv)$ given in Proposition~\ref{pp61} are also true for $s=1$.\hfill $\Box$ \end{remark} \begin{pro} Definition \ref{def2*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] For any real numbers $a$ and $b$, $\Lambda_{X,s}\Lambda^{-1}_{Y,s}(x)-(ax+b)$ changes sign at most twice, and if the change of signs occurs twice, they are in the order $+,-,+$, as $x$ traverses from $0$~to~$\infty$. \item [$(ii)$] For any real numbers $a$ and $b$, $\Lambda^{-1}_{Y,s}(x)-\Lambda^{-1}_{X,s}(ax+b)$ changes sign at most twice, and if the change of signs occurs twice, they are in the order $+,-,+$, as $x$ traverses from $0$~to~$\infty$. \item [$(iii)$] For any real numbers $a$ and $b$, $\Lambda^{-1}_{Y,s}(ax+b)-\Lambda^{-1}_{X,s}(x)$ changes sign at most twice, and if the change of signs occurs twice, they are in the order $+,-,+$, as $x$ traverses from $0$~to~$\infty$. \item [$(iv)$] For any real numbers $a$ and $b$, $\Lambda^{-1}_{X,s}(x)-\Lambda^{-1}_{Y,s}(ax+b)$ changes sign at most twice, and if the change of signs occurs twice, they are in the order $-,+,-$, as $x$ traverses from $0$~to~$\infty$. \item [$(v)$] For any real numbers $a$ and $b$, $\Lambda_{Y,s}\Lambda^{-1}_{X,s}(x)-(ax+b)$ changes sign at most twice, and if the change of signs occurs twice, they are in the order $-,+,-$, as $x$ traverses from $0$~to~$\infty$. \item [$(vi)$] $\Lambda_{Y,s}\Lambda^{-1}_{X,s}(x)$ is concave in $x>0$.\hfill $\Box$ \end{enumerate} \end{pro} \hspace*{0.2 in}Below we state two lemmas which will be used in proving the upcoming theorem. The proofs are omitted. \begin{lem}\label{lem2-6} Let $f(\cdot)$ be a nonnegative, increasing, and convex function. 
Then $f^{-1}(\cdot)$ is concave.\hfill $\Box$ \end{lem} \begin{lem}\label{lem2-5} Let $f(\cdot)$ and $g(\cdot)$ be two nonnegative, increasing, and convex functions. Then $f(g(\cdot))$ is convex.\hfill $\Box$ \end{lem} \hspace*{0.2 in}The following theorem shows some properties of the s-IFR(R) ordering. \begin{thm}\label{th2*1} For any positive integer $s$, \begin{enumerate} \item [$(i)$] $X\leq_{s-IFR(R)}X$. \item [$(ii)$] $X\leq_{s-IFR(R)}Y$ and $Y\leq_{s-IFR(R)}X$ hold simultaneously if, and only if, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, for some $\theta>0$ and for all $x\geq 0.$ \item [$(iii)$] If $X\leq_{s-IFR(R)}Y$ and $Y\leq_{s-IFR(R)}Z$ then $X\leq_{s-IFR(R)}Z$. \end{enumerate} \end{thm} {\bf Proof:} The proof of $(i)$ is trivial. Now, $X\leq_{s-IFR(R)}Y$ gives that $$\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(x)\right)\;\text{is convex in}\;x\geq0,$$ which, by Lemma \ref{lem2-6}, reduces to the fact that \begin{equation}\label{aa} \Lambda_{Y,s}\left(\Lambda^{-1}_{X,s}(x)\right)\;{\rm is\; concave}. \end{equation} Further, $Y\leq_{s-IFR(R)}X$ gives that \begin{equation}\label{ab} \Lambda_{Y,s}\left(\Lambda^{-1}_{X,s}(x)\right)\;{\rm is\; convex}. \end{equation} Combining (\ref{aa}) and (\ref{ab}), we see that $\Lambda_{Y,s}\left(\Lambda^{-1}_{X,s}(x)\right)$ is both convex and concave, and hence linear; since it vanishes at the origin, we get $$\Lambda_{Y,s}\left(\Lambda^{-1}_{X,s}(x)\right)=\frac{x}{\theta},$$ for some constant $\theta\;(>0)$. Thus, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, and hence $(ii)$ is proved. On using Lemma \ref{lem2-5}, one can easily check that $(iii)$ holds.\hfill $\Box$ \\\hspace*{.2in} The following lemma can be easily verified. \begin{lem}\label{lem2-3} Let $X\sim \overline F_{X}(x)=e^{-\lambda x}$. Then, for $s=1,2,\ldots$, \begin{enumerate} \item[($i$)] $r_{X,s}(x)=\lambda $; \item[($ii$)] $\overline{T}_{X,s}(x)=e^{-\lambda x}.$\hfill $\Box$ \end{enumerate} \end{lem} \hspace*{0.2 in}The following theorem shows that a random variable $X$ has an s-IFR distribution if, and only if, $X$ is smaller, in the s-IFR(R) ordering, than an exponentially distributed random variable.
The proof follows from Lemma~\ref{lem2-3}. \begin{thm} If $\overline F_Y(x)=e^{-\lambda x}$, then $X\leq_{s-IFR(R)}Y$ if, and only if, $X$ is s-IFR.\hfill $\Box$ \end{thm} \hspace*{0.2 in} Below we give a lemma which will be used in proving the upcoming theorem, and which can be proved using the principle of mathematical induction. \begin{lem}\label{lehk} For any real numbers $a\;(>0)$ and $b$, and for $s=1,2,\dots$, \begin{enumerate} \item [$(i)$] $\overline{T}_{aX+b,s}(x)=\overline{T}_{X,s}\left(\frac{x-b}{a}\right)$. \item [$(ii)$] $\tilde \mu_{aX+b,s}=a\tilde \mu_{X,s}$. \item [$(iii)$] $r_{aX+b,s}(x)=\frac{1}{a}r_{X,s}\left(\frac{x-b}{a}\right)$.\hfill $\Box$ \end{enumerate} \end{lem} \hspace*{0.2 in}The following theorem shows that the s-IFR(R) ordering is location and scale invariant. \begin{thm} $X\leq_{s-IFR(R)}Y$ if, and only if, $\left(aX+b\right)\leq_{s-IFR(R)}\left(aY+b\right)$, for any real numbers $a\;(>0)$ and $b$. \end{thm} {\bf Proof:} Let $\Lambda_{aX+b,s}(\cdot)$ and $\Lambda_{aY+b,s}(\cdot)$ be the cumulative hazard rate functions of $aX+b$ and $aY+b$, respectively. Then, on using Lemma~\ref{lehk}, we have, for all $x\geq0$, \begin{eqnarray} \Lambda_{aX+b,s}\left(\Lambda_{aY+b,s}\right)^{-1}(x)&=&-\log\left(\overline T_{aX+b,s}\overline T^{-1}_{aY+b,s}\left(e^{-x}\right)\right)\nonumber \\&=&-\log\left[\overline T_{X,s}\left(\frac{\overline T^{-1}_{aY+b,s}\left(e^{-x}\right)-b}{a}\right)\right]\nonumber \\&=&-\log\left[\overline T_{X,s}\left(\frac{b+a\overline T^{-1}_{Y,s}\left(e^{-x}\right)-b}{a}\right)\right]\nonumber \\ &=&\Lambda_{X,s}\Lambda^{-1}_{Y,s}(x).\label{rre19} \end{eqnarray} Thus, the result follows from Proposition \ref{pp61}(i).\hfill $\Box$ \section{s-IFRA(R) Ordering}\label{sec3} \hspace*{0.2 in}In this section we discuss the s-IFRA(R) ordering. Under this ordering, the ratio of the cumulative hazard rates of $X_s$ and $Y_s$ is increasing.
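For instance, for $s=1$ and Weibull lifetimes (a sketch, assuming cumulative hazards $\Lambda_{X,1}(x)=x^{\alpha}$ and $\Lambda_{Y,1}(x)=x^{\beta}$ with shape parameters $\alpha,\beta>0$), this ratio takes a particularly simple form:

```latex
% The ratio of the cumulative hazards is
\begin{equation*}
  \frac{\Lambda_{X,1}(x)}{\Lambda_{Y,1}(x)} = x^{\alpha-\beta},
    \qquad x>0,
\end{equation*}
% which is increasing if, and only if, \alpha \ge \beta; for Weibull
% laws the star-shaped (IFRA-type) comparison thus again reduces to a
% comparison of shape parameters.
```
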
\begin{definition} \label{def3*1} For any positive integer $s$, $X$ (or its distribution $F_X$) is said to be more $s$-IFRA(R) than $Y$ (or its distribution $F_Y$) (written as $X\leq_{s-IFRA(R)}Y$) if the random variable $\Phi^s_{X,Y}\;\text{has an IFRA distribution}$.\hfill $\Box$ \end{definition} \begin{remark} For $s=1$, Definition \ref{def3*1} gives $X\leq_{*}Y$, as discussed in Sengupta and Deshpande~\cite{sd2}.\hfill $\Box$ \end{remark} \hspace*{0.2 in}Some equivalent representations of the s-IFRA(R) ordering are given in the following two propositions. The second proposition can easily be proved by Lemma~\ref{l2-1} and Proposition~\ref{ar6}(i). \begin{pro}\label{ar6} Definition \ref{def3*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] $\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(x)\right)$ is star-shaped in $x> 0.$ \item [$(ii)$] $\frac{\Lambda_{X,s}(x)}{\Lambda_{Y,s}(x)}\;\text{is increasing in}\;x>0.$ \item [$(iii)$] $\Lambda_{X,s}(Y_s)\;\text{has a DFRA distribution.}$ \end{enumerate} \end{pro} {\bf Proof:} On using (\ref{rre14}), the equivalence of Definition~\ref{def3*1} and $(i)$ follows. Note that $(i)$ can equivalently be written as $$\frac{\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(x)\right)}{x}\;\text{is increasing in}\;x>0,$$ or equivalently, $$\frac{\Lambda_{X,s}(x)}{\Lambda_{Y,s}(x)}\;\text{is increasing in}\;x>0.$$ Thus, the equivalence of $(i)$ and $(ii)$ is proved. The one-to-one connection between $(ii)$ and $(iii)$ can be proved along the same lines as $(ii)$ above.\hfill $\Box$ \begin{pro}\label{pn3-1} Definition \ref{def3*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] For any real number $a$, $\Lambda_{X,s}\Lambda^{-1}_{Y,s}(x)-ax$ changes sign at most once, and if the change of sign does occur, it is in the order $-,+$, as $x$ traverses from $0$ to $\infty$.
\item [$(ii)$] For any real number $a$, $\Lambda^{-1}_{Y,s}(x)-\Lambda^{-1}_{X,s}(ax)$ changes sign at most once, and if the change of sign does occur, it is in the order $-,+$, as $x$ traverses from $0$ to $\infty$. \item [$(iii)$] For any real number $a$, $\Lambda^{-1}_{Y,s}(ax)-\Lambda^{-1}_{X,s}(x)$ changes sign at most once, and if the change of sign does occur, it is in the order $-,+$, as $x$ traverses from $0$ to $\infty$. \item [$(iv)$] For any real number $a$, $\Lambda^{-1}_{X,s}(x)-\Lambda^{-1}_{Y,s}(ax)$ changes sign at most once, and if the change of sign does occur, it is in the order $+,-$, as $x$ traverses from $0$ to $\infty$. \item [$(v)$] For any real number $a$, $\Lambda_{Y,s}\Lambda^{-1}_{X,s}(x)-ax$ changes sign at most once, and if the change of sign does occur, it is in the order $+,-$, as $x$ traverses from $0$ to $\infty$. \item [$(vi)$] $\Lambda_{Y,s}\Lambda^{-1}_{X,s}(x)$ is antistar-shaped in $x>0$.\hfill $\Box$ \end{enumerate} \end{pro} \hspace*{0.2 in} Before going to the next theorem we give two lemmas without proof. \begin{lem}\label{lem3-2} Let $f(\cdot)$ be a nonnegative, increasing, and star-shaped function. Then $f^{-1}(\cdot)$ is antistar-shaped.\hfill$\Box$ \end{lem} \begin{lem}\label{lem3-1} Let $f(\cdot)$ and $g(\cdot)$ be two nonnegative, increasing, and star-shaped functions. Then $f(g(\cdot))$ is star-shaped.\hfill$\Box$ \end{lem} \hspace*{0.2 in}Some properties of the s-IFRA(R) ordering are discussed in the following theorem. \begin{thm}\label{th3*1} For any positive integer $s$, \begin{enumerate} \item [$(i)$] $X\leq_{s-IFRA(R)}X$. \item [$(ii)$] $X\leq_{s-IFRA(R)}Y$ and $Y\leq_{s-IFRA(R)}X$ hold simultaneously if, and only if, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, for some $\theta>0$ and for all $x\geq 0.$ \item [$(iii)$] If $X\leq_{s-IFRA(R)}Y$ and $Y\leq_{s-IFRA(R)}Z$ then $X\leq_{s-IFRA(R)}Z$. \end{enumerate} \end{thm} {\bf Proof:} The proof of $(i)$ is trivial. To prove $(ii)$ we proceed as follows. 
\\ $X\leq_{s-IFRA(R)}Y$ gives that $$\Lambda_{X,s}(\Lambda^{-1}_{Y,s}(\cdot))\;\text{is star-shaped},$$ which, by Lemma~\ref{lem3-2}, reduces to the fact that $$\Lambda_{Y,s}(\Lambda^{-1}_{X,s}(\cdot))\;\text{is antistar-shaped}.$$ Further, $Y\leq_{s-IFRA(R)}X$ gives that $$\Lambda_{Y,s}(\Lambda^{-1}_{X,s}(\cdot))\;\text{is star-shaped}.$$ Combining the two, $\Lambda_{Y,s}(\Lambda^{-1}_{X,s}(x))/x$ is both increasing and decreasing, and hence constant, so that $$\Lambda_{Y,s}(\Lambda^{-1}_{X,s}(x))=\frac{x}{\theta},$$ for some constant $\theta\;(>0)$. Thus, we have $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, and hence $(ii)$ is proved. Again, by Lemma \ref{lem3-1}, $(iii)$ holds.\hfill$\Box$ \\\hspace*{0.2 in}The following theorem is a bridge between the s-IFRA(R) ordering and the s-IFRA ageing class. The proof follows from Lemma~\ref{lem2-3}. \begin{thm} If $\overline F_Y(x)=e^{-\lambda x}$, then $X\leq_{s-IFRA(R)}Y$ if, and only if, $X$ is s-IFRA.\hfill$\Box$ \end{thm} \hspace*{0.2 in}Since every IFR distribution is an IFRA distribution, we have the following theorem. \begin{thm} If $X\leq_{s-IFR(R)}Y$ then $X\leq_{s-IFRA(R)}Y.$\hfill$\Box$ \end{thm} \hspace*{0.2 in}The following theorem shows that the s-IFRA(R) ordering is location and scale invariant. The proof follows from (\ref{rre19}). \begin{thm} $X\leq_{s-IFRA(R)}Y$ if, and only if, $\left(aX+b\right)\leq_{s-IFRA(R)}\left(aY+b\right)$, for any real numbers $a\;(>0)$ and $b$.\hfill $\Box$ \end{thm} \section{s-NBU(R) Ordering}\label{sec4} \hspace*{0.2 in}We start this section with the following definition of the s-NBU(R) ordering.
\begin{definition} \label{def4*1} For any positive integer $s$, $X$ (or its distribution $F_X$) is said to be more $s$-NBU(R) than $Y$ (or its distribution $F_Y$) (written as $X\leq_{s-NBU(R)}Y$) if the random variable $\Phi^s_{X,Y}\;\text{has a NBU distribution}$.\hfill $\Box$ \end{definition} \begin{remark} For $s=1$, Definition \ref{def4*1} gives $X\leq_{su}Y$, as discussed in Sengupta and Deshpande~\cite{sd2}.\hfill $\Box$ \end{remark} \hspace*{0.2 in}In the following proposition we give some equivalent representations of the s-NBU(R) ordering. \begin{pro} Definition \ref{def4*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] $\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(x)\right)$ is super-additive in $x\geq 0.$ \item [$(ii)$] $\overline T^{-1}_{X,s}\left(\frac{\overline T_{X,s}(x+t)}{\overline T_{X,s}(t)}\right)\geq \overline T^{-1}_{Y,s}\left(\frac{\overline T_{Y,s}(x+t)}{\overline T_{Y,s}(t)}\right)$, for all $x,t>0.$ \item [$(iii)$] $\Lambda_{X,s}(Y_s)\;\text{has a NWU distribution.}$ \end{enumerate} \end{pro} {\bf Proof:} The equivalence of Definition~\ref{def4*1} and $(i)$ follows from (\ref{rre14}). 
Again, $(i)$ holds if, and only if, for all $a,b>0$, $$\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(a+b)\right)\geq \Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(a)\right)+\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(b)\right),$$ or equivalently, $$-\log\left(\frac{\overline T_{X,s}\left(\Lambda^{-1}_{Y,s}(a+b)\right)}{\overline T_{X,s}\left(\Lambda^{-1}_{Y,s}(a)\right)}\right)\geq -\log \overline T_{X,s}\left(\Lambda^{-1}_{Y,s}(b)\right).$$ It can equivalently be written as $$\overline T_{X,s}^{-1}\left(\frac{\overline T_{X,s}\left(\Lambda^{-1}_{Y,s}(a+b)\right)}{\overline T_{X,s}\left(\Lambda^{-1}_{Y,s}(a)\right)}\right)\geq \Lambda^{-1}_{Y,s}(b).$$ Writing $a=\Lambda_{Y,s}(t),\;a+b=\Lambda_{Y,s}(x+t)$ in the above inequality, we have, for $x,t>0$, \begin{eqnarray*} \overline T^{-1}_{X,s}\left(\frac{\overline T_{X,s}(x+t)}{\overline T_{X,s}(t)}\right)&\geq& \Lambda^{-1}_{Y,s}\left(-\log \frac{\overline T_{Y,s}(x+t)}{\overline T_{Y,s}(t)}\right) \\&=&\overline T^{-1}_{Y,s}\left(\frac{\overline T_{Y,s}(x+t)}{\overline T_{Y,s}(t)}\right). \end{eqnarray*} Thus, the equivalence of $(i)$ and $(ii)$ is proved. The proof of $(iii)$ follows along the same lines as that of $(ii)$.\hfill$\Box$ \\\hspace*{0.2 in} To prove the next theorem we use two lemmas which are given below without proof. \begin{lem}\label{lem4-1} Let $f(\cdot)$ be a nonnegative, increasing, and super-additive function. Then $f^{-1}(\cdot)$ is sub-additive.\hfill$\Box$ \end{lem} \begin{lem}\label{lem4-2} Let $f(\cdot)$ and $g(\cdot)$ be two nonnegative, increasing, and super-additive functions. Then $f(g(\cdot))$ is super-additive.\hfill$\Box$ \end{lem} \hspace*{0.2 in}The following theorem discusses some properties of the s-NBU(R) ordering. \begin{thm}\label{th4*1} For any positive integer $s$, \begin{enumerate} \item [$(i)$] $X\leq_{s-NBU(R)}X$.
\item [$(ii)$] $X\leq_{s-NBU(R)}Y$ and $Y\leq_{s-NBU(R)}X$ hold simultaneously if, and only if, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, for some $\theta>0$ and for all $x\geq 0.$ \item [$(iii)$] If $X\leq_{s-NBU(R)}Y$ and $Y\leq_{s-NBU(R)}Z$ then $X\leq_{s-NBU(R)}Z$. \end{enumerate} \end{thm} {\bf Proof:} It is easy to verify $(i)$. Let $X\leq_{s-NBU(R)}Y$. Then $${\Lambda}_{X,s}\left({\Lambda}^{-1}_{Y,s}(x)\right)\;\text{is super-additive}.$$ By Lemma \ref{lem4-1}, the above statement can equivalently be written as \begin{equation} {\Lambda}_{Y,s}\left({\Lambda}^{-1}_{X,s}(x)\right)\;\text{is sub-additive}.\label{eqn4-1} \end{equation} Further, $Y\leq_{s-NBU(R)}X$ gives that \begin{equation} {\Lambda}_{Y,s}\left({\Lambda}^{-1}_{X,s}(x)\right)\;\text{is super-additive}.\label{eqn4-2} \end{equation} Combining ($\ref{eqn4-1}$) and ($\ref{eqn4-2}$), ${\Lambda}_{Y,s}\left({\Lambda}^{-1}_{X,s}(x)\right)$ is additive, and being increasing it must be linear, so that $$\Lambda_{Y,s}\left(\Lambda^{-1}_{X,s}(x)\right)=\frac{x}{\theta},$$ for some constant $\theta\;(> 0)$. Thus, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$. The proof of $(iii)$ follows from Lemma \ref{lem4-2}.\hfill$\Box$ \\\hspace*{0.2 in}In the following theorem we give the relationship between the s-NBU(R) ordering and s-NBU ageing. The proof follows from Lemma~\ref{lem2-3}. \begin{thm} If $\overline F_Y(x)=e^{-\lambda x}$, then $X\leq_{s-NBU(R)}Y$ if, and only if, $X$ is s-NBU.\hfill $\Box$ \end{thm} \hspace*{0.2 in}Since every star-shaped function is super-additive, we have the following theorem. \begin{thm} If $X\leq_{s-IFRA(R)}Y$ then $X\leq_{s-NBU(R)}Y.$\hfill $\Box$ \end{thm} \hspace*{0.2 in}The following theorem shows that the s-NBU(R) ordering is location and scale invariant. The proof follows from (\ref{rre19}). \begin{thm} $X\leq_{s-NBU(R)}Y$ if, and only if, $\left(aX+b\right)\leq_{s-NBU(R)}\left(aY+b\right)$, for any real numbers $a\;(>0)$ and $b$.\hfill $\Box$ \end{thm} \section{s-NBUFR(R) Ordering}\label{sec5} \hspace*{0.2 in}We discuss the s-NBUFR(R) ordering in this section.
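A concrete pair to keep in mind is the following sketch for the case $s=1$, assuming $X$ has the linear failure rate $r_{X,1}(x)=a+bx$ with $a,b>0$ and $Y$ is exponential with rate $\lambda$, so that $f_X(0)=a$ and $f_Y(0)=\lambda$:

```latex
% The ratio of the failure rates attains its minimum at the origin:
\begin{equation*}
  \frac{r_{X,1}(x)}{r_{Y,1}(x)} = \frac{a+bx}{\lambda}
    \;\geq\; \frac{a}{\lambda} = \frac{f_X(0)}{f_Y(0)},
    \qquad x\geq 0,
\end{equation*}
% which is the s = 1 form of the comparison defined below: the NBUFR
% property of the quotient distribution reflects precisely this
% behaviour of the hazard-rate ratio at the origin.
```
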
\begin{definition} \label{def5*1} For any positive integer $s$, $X$ (or its distribution $F_X$) is said to be more $s$-NBUFR(R) than $Y$ (or its distribution $F_Y$) (written as $X\leq_{s-NBUFR(R)}Y$) if the random variable $\Phi^s_{X,Y}\;\text{has a NBUFR distribution}$.\hfill $\Box$ \end{definition} \hspace*{0.2 in}In the following proposition we give some equivalent conditions of the s-NBUFR(R) ordering. \begin{pro} For $s=2,3,\dots$, Definition \ref{def5*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] $\frac{r_{X,s}(x)}{r_{Y,s}(x)}\geq \frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}$, for all $x\geq 0$. \item [$(ii)$] $\frac{\mu_{Y,s-1}(x)}{\mu_{X,s-1}(x)}\geq \frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}$, for all $x\geq 0$. \item [$(iii)$] $\Lambda_{X,s}(Y_s)\;\text{has a NWUFR distribution}$. \end{enumerate} \end{pro} {\bf Proof:} $\Phi^s_{X,Y}$ is NBUFR if, and only if, for all $x\geq 0$, $$r_{\Phi^s_{X,Y}}(x)\geq r_{\Phi^s_{X,Y}}(0),$$ or equivalently, \begin{eqnarray*} \frac{\overline T_{X,s-1}\left(\overline T^{-1}_{Y,s}(u)\right)}{\overline T_{X,s}\left(\overline T^{-1}_{Y,s}(u)\right)}\geq \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right)\left(\frac{\overline T_{Y,s-1}\left(\overline T^{-1}_{Y,s}(u)\right)}{u}\right),\;\text{for all}\;u\in(0,1], \end{eqnarray*} which holds if, and only if, \begin{eqnarray}\label{rre13} \frac{\overline T_{X,s-1}\left(x\right)}{\overline T_{X,s}\left(x\right)}\geq \frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}(x)},\;\text{for all}\;x\geq0. \end{eqnarray} This can equivalently be written as $$\frac{r_{X,s}(x)}{r_{Y,s}(x)}\geq \frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}},\;\text{for all}\;x\geq 0. $$ Thus, the equivalence of Definition~\ref{def5*1} and $(i)$ is established. The equivalence of $(i)$ and $(ii)$ follows from (\ref{mre5}).
The proof of the equivalence of $(i)$ and $(iii)$ is obvious.\hfill $\Box$ \begin{remark} For $s=1$, Definition \ref{def5*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] $\frac{r_{X,1}(x)}{r_{Y,1}(x)}\geq \frac{f_X(0)}{f_Y(0)}$, for all $x\geq 0$. \item [$(ii)$] $\Lambda_{X,1}(Y)\;\text{has a NWUFR distribution}$.\hfill $\Box$ \end{enumerate} \end{remark} \hspace*{0.2 in}The following theorem discusses some properties of the s-NBUFR(R) ordering. \begin{thm}\label{th5*1} For any positive integer $s$, \begin{enumerate} \item [$(i)$] $X\leq_{s-NBUFR(R)}X$. \item [$(ii)$] $X\leq_{s-NBUFR(R)}Y$ and $Y\leq_{s-NBUFR(R)}X$ hold simultaneously if, and only if, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, for some $\theta>0$ and for all $x\geq 0.$ \item [$(iii)$] If $X\leq_{s-NBUFR(R)}Y$ and $Y\leq_{s-NBUFR(R)}Z$ then $X\leq_{s-NBUFR(R)}Z$. \end{enumerate} \end{thm} {\bf Proof:} The proof of $(i)$ is obvious. To prove $(ii)$ we proceed as follows. On using (\ref{rre13}), $X\leq_{s-NBUFR(R)}Y$ reduces to the fact that, for all $x\geq0$, \begin{eqnarray}\label{rre10} \frac{\overline T_{X,s-1}\left(x\right)}{\overline T_{X,s}\left(x\right)}\geq \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right)\left(\frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}(x)}\right), \end{eqnarray} and $Y\leq_{s-NBUFR(R)}X$ gives $$\frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}\left(x\right)}\geq \left(\frac{\overline T_{Y,s-1}(0)}{\overline T_{X,s-1}(0)}\right)\left(\frac{\overline T_{X,s-1}\left(x\right)}{\overline T_{X,s}(x)}\right),$$ or equivalently, \begin{eqnarray}\label{rre11} \frac{\overline T_{X,s-1}\left(x\right)}{\overline T_{X,s}\left(x\right)}\leq \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right)\left(\frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}(x)}\right).
\end{eqnarray} Combining (\ref{rre10}) and (\ref{rre11}), we have \begin{eqnarray}\label{rre12} \frac{\overline T_{X,s-1}\left(x\right)}{\overline T_{X,s}\left(x\right)}= \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right)\left(\frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}(x)}\right). \end{eqnarray} Again, from (\ref{rre14}) and (\ref{rre15}) we have \begin{eqnarray} \frac{d}{dx}\left(\Lambda_{X,s}\Lambda^{-1}_{Y,s}(x)\right)&=&\left(\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}\overline T^{-1}_{Y,s}\left(e^{-x}\right)}{\overline T_{X,s}\overline T^{-1}_{Y,s}\left(e^{-x}\right)}\right)\left(\frac{e^{-x}}{\overline T_{Y,s-1}\overline T^{-1}_{Y,s}\left(e^{-x}\right)}\right)\nonumber \\&=&\frac{1}{\theta},\label{rre16} \end{eqnarray} where $$\theta=\left(\frac{\tilde \mu_{X,s-1}}{\tilde \mu_{Y,s-1}}\right)\left(\frac{\overline T_{Y,s-1}(0)}{\overline T_{X,s-1}(0)}\right),$$ and the second equality follows from (\ref{rre12}). Hence, from (\ref{rre16}) we have $$\Lambda_{X,s}\left(\Lambda^{-1}_{Y,s}(x)\right)=\frac{x}{\theta},$$ or equivalently, $$\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x).$$ Thus, $(ii)$ is proved. 
Again, $X\leq_{s-NBUFR(R)}Y$ gives that, for all $x\geq0$, \begin{eqnarray}\label{rre17} \frac{\overline T_{X,s-1}\left(x\right)}{\overline T_{X,s}\left(x\right)}\geq \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right)\left(\frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}(x)}\right), \end{eqnarray} and $Y\leq_{s-NBUFR(R)}Z$ gives \begin{eqnarray*} \frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}\left(x\right)}\geq \left(\frac{\overline T_{Y,s-1}(0)}{\overline T_{Z,s-1}(0)}\right)\left(\frac{\overline T_{Z,s-1}\left(x\right)}{\overline T_{Z,s}(x)}\right), \end{eqnarray*} or equivalently, \begin{eqnarray}\label{rre18} \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right)\left(\frac{\overline T_{Y,s-1}\left(x\right)}{\overline T_{Y,s}(x)}\right)\geq \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Z,s-1}(0)}\right)\left(\frac{\overline T_{Z,s-1}\left(x\right)}{\overline T_{Z,s}(x)}\right). \end{eqnarray} Thus, from (\ref{rre17}) and (\ref{rre18}) we have \begin{eqnarray*} \frac{\overline T_{X,s-1}\left(x\right)}{\overline T_{X,s}\left(x\right)}\geq \left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Z,s-1}(0)}\right)\left(\frac{\overline T_{Z,s-1}\left(x\right)}{\overline T_{Z,s}(x)}\right). \end{eqnarray*} Hence, $X\leq_{s-NBUFR(R)}Z$.\hfill $\Box$ \\\hspace*{0.2 in}In the following theorem we give a relationship between the s-NBUFR(R) ordering and s-NBUFR ageing. \begin{thm} If $\overline F_Y(x)=e^{-\lambda x}$, then $X\leq_{s-NBUFR(R)}Y$ if, and only if, $X$ is s-NBUFR. \end{thm}\hfill $\Box$ \\\hspace*{0.2 in}Since every NBU distribution is a NBUFR distribution, we have the following theorem. \begin{thm} If $X\leq_{s-NBU(R)}Y$ then $X\leq_{s-NBUFR(R)}Y.$\hfill $\Box$ \end{thm} \hspace*{0.2 in}The following theorem shows that the s-NBUFR(R) ordering is scale and base invariant. \begin{thm} $X\leq_{s-NBUFR(R)}Y$ if, and only if, $\left(aX+b\right)\leq_{s-NBUFR(R)}\left(aY+b\right)$, for any real numbers $a\;(>0)$ and $b$.
\end{thm} {\bf Proof:} For all $x\geq0$, and for any real numbers $a\;(>0)$ and $b$, the hazard rate function of the random variable $\Phi^s_{aX+b,aY+b}$ is given by \begin{eqnarray} r_{\Phi^s_{aX+b,aY+b}}(x)\nonumber &=&\frac{d}{dx}\left(\Lambda_{aX+b,s}(\Lambda_{aY+b,s})^{-1}(x)\right)\nonumber \\&=&\frac{d}{dx}\left(\Lambda_{X,s}\Lambda^{-1}_{Y,s}(x)\right)\nonumber \\&=&r_{\Phi^s_{X,Y}}(x),\label{rre20} \end{eqnarray} where the second equality holds from (\ref{rre19}). Thus, the result follows from Definition~\ref{def5*1}.\hfill $\Box$ \section{s-NBAFR(R) Ordering}\label{sec6} \hspace*{0.2 in}We begin this section with the following definition. \begin{definition} \label{def6*1} For any positive integer $s$, $X$ (or its distribution $F_X$) is said to be more $s$-NBAFR(R) than $Y$ (or its distribution $F_Y$) (written as $X\leq_{s-NBAFR(R)}Y$) if the random variable $\Phi^s_{X,Y}\;\text{has a NBAFR distribution}$.\hfill $\Box$ \end{definition} \hspace*{0.2 in}Some equivalent representations of the s-NBAFR(R) ordering are discussed in the following proposition. \begin{pro} For $s=2,3,\dots,$ Definition \ref{def6*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] $\Lambda^{-1}_{X,s}\left(x\tilde \mu_{Y,s-1} \right)\leq \Lambda^{-1}_{Y,s}\left(x\tilde \mu_{X,s-1} \right)$, for all $x> 0$. \item [$(ii)$] $\frac{\Lambda_{X,s}(x)}{\Lambda_{Y,s}(x)}\geq \frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}$, for all $x> 0$. \item [$(iii)$] $\Lambda_{X,s}(Y_s)\;\text{has a NWAFR distribution}$. \end{enumerate} \end{pro} {\bf Proof:} $\Phi^s_{X,Y}$ has a NBAFR distribution if, and only if, for all $x>0$, $$-\frac{1}{x}\log \overline F_{\Phi^s_{X,Y}}(x)\geq r_{\Phi^s_{X,Y}}(0),$$ or equivalently, \begin{eqnarray}\label{rre6} \frac{\Lambda_{X,s}\Lambda^{-1}_{Y,s}(x)}{x}\geq \left(\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right).
\end{eqnarray} This can equivalently be written as \begin{eqnarray}\label{rre7} \Lambda^{-1}_{Y,s}(x)\geq \Lambda_{X,s}^{-1}\left(x\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right). \end{eqnarray} Replacing $x$ by $x\tilde \mu_{X,s-1}$ in (\ref{rre7}), we get $$\Lambda_{X,s}^{-1}\left(x\tilde \mu_{Y,s-1}\right)\leq \Lambda_{Y,s}^{-1}\left(x\tilde \mu_{X,s-1}\right).$$ Thus, the equivalence of Definition~\ref{def6*1} and $(i)$ is proved. The one-to-one connection between $(i)$ and $(ii)$ follows from (\ref{rre6}). The equivalence of $(i)$ and $(iii)$ can be proved along the same lines as $(i)$.\hfill $\Box$ \begin{remark} For $s=1,$ Definition \ref{def6*1} can equivalently be written in one of the following forms: \begin{enumerate} \item [$(i)$] $\Lambda^{-1}_{X,1}\left(x f_X(0) \right)\leq \Lambda^{-1}_{Y,1}\left(x f_Y(0) \right)$, for all $x> 0$. \item [$(ii)$] $\frac{\Lambda_{X,1}(x)}{\Lambda_{Y,1}(x)}\geq \frac{f_X(0)}{f_Y(0)}$, for all $x> 0$. \item [$(iii)$] $\Lambda_{X,1}(Y)\;\text{has a NWAFR distribution}$.\hfill $\Box$ \end{enumerate} \end{remark} \hspace*{0.2 in}The following theorem gives some properties of the s-NBAFR(R) ordering. \begin{thm}\label{th6*1} For any positive integer $s$, \begin{enumerate} \item [$(i)$] $X\leq_{s-NBAFR(R)}X$. \item [$(ii)$] $X\leq_{s-NBAFR(R)}Y$ and $Y\leq_{s-NBAFR(R)}X$ hold simultaneously if, and only if, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, for some $\theta>0$ and for all $x> 0.$ \item [$(iii)$] If $X\leq_{s-NBAFR(R)}Y$ and $Y\leq_{s-NBAFR(R)}Z$ then $X\leq_{s-NBAFR(R)}Z$. \end{enumerate} \end{thm} {\bf Proof:} The proof of $(i)$ is obvious. Note that $X\leq_{s-NBAFR(R)}Y$ holds if, and only if, for all $x>0$, \begin{eqnarray}\label{rre1} {\Lambda}_{X,s}({\Lambda}^{-1}_{Y,s}(x))\geq \left(x\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right).
\end{eqnarray} Again, $Y\leq_{s-NBAFR(R)}X$ holds if, and only if, for all $x>0$, \begin{eqnarray}\label{rre2} {\Lambda}_{Y,s}({\Lambda}^{-1}_{X,s}(x))\geq \left(x\frac{\tilde \mu_{X,s-1}}{\tilde \mu_{Y,s-1}}\right)\left(\frac{\overline T_{Y,s-1}(0)}{\overline T_{X,s-1}(0)}\right). \end{eqnarray} Replacing $x$ by ${\Lambda}_{X,s}({\Lambda}^{-1}_{Y,s}(x))$ in (\ref{rre2}), we have \begin{eqnarray}\label{rre3} \Lambda_{X,s}(\Lambda^{-1}_{Y,s}(x))\leq \left(x\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right). \end{eqnarray} Combining (\ref{rre1}) and (\ref{rre3}), we have \begin{eqnarray*} \Lambda_{X,s}(\Lambda^{-1}_{Y,s}(x))=\theta x, \end{eqnarray*} where $$\theta=\left(\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right).$$ Thus, $\Lambda_{X,s}(x)=\theta\Lambda_{Y,s}(x)$, and hence $(ii)$ is proved. Again, $X\leq_{s-NBAFR(R)}Y$ gives \begin{equation}\label{rre4} {\Lambda}_{X,s}({\Lambda}^{-1}_{Y,s}(x))\geq \left(x\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right), \end{equation} and $Y\leq_{s-NBAFR(R)}Z$ gives \begin{equation}\label{rre5} {\Lambda}_{Y,s}({\Lambda}^{-1}_{Z,s}(x))\geq \left(x\frac{\tilde \mu_{Z,s-1}}{\tilde \mu_{Y,s-1}}\right)\left(\frac{\overline T_{Y,s-1}(0)}{\overline T_{Z,s-1}(0)}\right).
\end{equation} Now, \begin{eqnarray*} \Lambda_{X,s}(\Lambda^{-1}_{Z,s}(x))&=&\Lambda_{X,s}\Lambda^{-1}_{Y,s}\left(\Lambda_{Y,s}\Lambda^{-1}_{Z,s}(x)\right) \\&\geq&\Lambda_{X,s}\Lambda^{-1}_{Y,s}\left(x\frac{\tilde \mu_{Z,s-1}}{\tilde \mu_{Y,s-1}}\frac{\overline T_{Y,s-1}(0)}{\overline T_{Z,s-1}(0)}\right) \\&\geq&\left(x\frac{\tilde \mu_{Z,s-1}}{\tilde \mu_{Y,s-1}}\right)\left(\frac{\overline T_{Y,s-1}(0)}{\overline T_{Z,s-1}(0)}\right)\left(\frac{\tilde \mu_{Y,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Y,s-1}(0)}\right) \\&=&\left(x\frac{\tilde \mu_{Z,s-1}}{\tilde \mu_{X,s-1}}\right)\left(\frac{\overline T_{X,s-1}(0)}{\overline T_{Z,s-1}(0)}\right), \end{eqnarray*} where the first inequality follows from (\ref{rre5}) and the fact that $\Lambda_{X,s}\Lambda^{-1}_{Y,s}(\cdot)$ is an increasing function, and the second inequality follows from (\ref{rre4}). Thus, $X\leq_{s-NBAFR(R)}Z$.\hfill $\Box$ \\\hspace*{0.2 in} The following theorem shows that $X$ is smaller than an exponential random variable in the s-NBAFR(R) ordering if, and only if, $X$ is s-NBAFR. The proof follows from Lemma \ref{lem2-3}. \begin{thm} If $\overline F_Y(x)=e^{-\lambda x}$, then $X\leq_{s-NBAFR(R)}Y$ if, and only if, $X$ is s-NBAFR. \end{thm}\hfill $\Box$ \\\hspace*{0.2 in}Since every NBUFR distribution is a NBAFR distribution, we have the following theorem. \begin{thm} If $X\leq_{s-NBUFR(R)}Y$ then $X\leq_{s-NBAFR(R)}Y.$\hfill $\Box$ \end{thm} \hspace*{0.2 in}That the s-NBAFR(R) ordering is scale and base invariant is shown in the following theorem. On using (\ref{rre20}), the proof follows from Definition~\ref{def6*1}. \begin{thm} $X\leq_{s-NBAFR(R)}Y$ if, and only if, $\left(aX+b\right)\leq_{s-NBAFR(R)}\left(aY+b\right)$, for any real numbers $a\;(>0)$ and $b$.\hfill $\Box$ \end{thm} \section{Concluding Remarks}\label{sec7-1} \hspace*{0.2 in}In this paper we give some new generalized stochastic orderings, and study some of their properties.
These orderings may be helpful in visualizing the positive ageing classes from different aspects. They may also open a new direction for handling the problem of crossing hazard rates and/or crossing mean residual lives. This unified study is meaningful because it presents a complete picture of the existing results available in the literature together with the new results. The usefulness of relative ageing is well explained in Sengupta and Deshpande~\cite{sd2}, and Kalashnikov and Rachev~\cite{kr6}. Keeping the importance of relative ageing in mind, we have made an attempt to discuss different kinds of relative ageing in a unified way so that the various relative ageing properties come under a single umbrella. Further, the different characterizations of these relative ageing properties are important because of their theoretical insight on the one hand, and because the systems belonging to these ageing classes allow practitioners (viz. reliability and design engineers) to exploit their nice mathematical properties on the other. To make the usefulness of these kinds of orderings more appealing, let us consider a particular example, as discussed below. \begin{example}\label{cse61} Let $X$ be a random variable having $\mu_{X,1}(t)=1/(4+11t^2),$ $t\geq 0$, and let $Y$ be another random variable having $\mu_{Y,1}(t)=1/(4+5t^2),$ $t\geq 0$. Then $$r_{X,1}(t)=4+11t^2-\frac{22t}{4+11t^2},$$ and $$r_{Y,1}(t)=4+5t^2-\frac{10t}{4+5t^2}.$$ By drawing the graphs of $r_{X,1}(t)$ and $r_{Y,1}(t)$, it can be shown that $X$ and $Y$ have crossing hazard rates. Note that $$\frac{r_{X,1}(t)}{r_{Y,1}(t)}=\left(\frac{(4+11t^2)^2-22t}{(4+5t^2)^2-10t}\right)\left(\frac{4+5t^2}{4+11t^2}\right),$$ which can be shown to be non-monotone, and hence $X\nleq_{1-IFR(R)}Y$.
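The claimed non-monotonicity can also be checked numerically. The following short Python sketch (purely illustrative; the function name {\tt ratio} is ours) evaluates the above ratio at a few points and shows that it first falls below, and later rises above, its value at $t=0$:

```python
# Numerical check that r_{X,1}(t)/r_{Y,1}(t) from the example is non-monotone.
def ratio(t):
    """((4+11t^2)^2 - 22t)/((4+5t^2)^2 - 10t) * (4+5t^2)/(4+11t^2)."""
    num = (4 + 11 * t**2) ** 2 - 22 * t
    den = (4 + 5 * t**2) ** 2 - 10 * t
    return (num / den) * ((4 + 5 * t**2) / (4 + 11 * t**2))

# The ratio dips below its value at t = 0 and later rises above it,
# so it cannot be monotone in t.
print(ratio(0.0))   # 1.0
print(ratio(0.1))   # ~0.94  (below ratio(0.0))
print(ratio(5.0))   # ~2.166 (above ratio(0.0))
```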
Further, $$\frac{r_{X,2}(t)}{r_{Y,2}(t)}=\frac{4+11t^2}{4+5t^2}\;\text{is increasing in}\;t,$$ and hence $X\leq_{2-IFR(R)}Y$ follows from Proposition~\ref{pp61}.$\hfill\Box$ \end{example} In the above example we see that the hazard rates of $X$ and $Y$ cross each other. So, neither of the two dominates the other in terms of their failure rates. In order to decide on the better system, i.e., to see which one is ageing slower, we first take $s=1$, i.e., we compare them in terms of the $1$-IFR(R) order. It turns out that neither of the two dominates the other as far as the $1$-IFR(R) order is concerned. This means that if we base our study on the $1$-IFR(R) order only, we cannot conclude which of the two is better. To overcome this difficulty we take $s=2$, which gives a comparison known as the $2$-IFR(R) order. Here we see that the ratio of $r_{X,2}(t)$ and $r_{Y,2}(t)$ is monotone, clearly showing the dominance of one over the other. Thus, we are now in a position to say that $X$ is ageing faster than $Y$, and so the system with life distribution $Y$ is better. The above example demonstrates the importance of the $s$-IFR(R) ($s\geq 2$) order. In a similar spirit, the other generalized orders are defined and studied to help reliability practitioners decide how to choose the better one. We conclude our discussion by mentioning the following chain of implications of the generalized stochastic orderings. \\\hspace*{1 in}$X\leq_{s-IFR(R)}Y\Rightarrow ~X\leq_{s-IFRA(R)}Y$ \\\hspace*{2.6 in}$\Downarrow$ \\\hspace*{2.4 in}$ X\leq_{s-NBU(R)}Y$ \\\hspace*{2.6 in}$\Downarrow$ \\\hspace*{2.4 in}$X\leq_{s-NBUFR(R)}Y\Rightarrow X\leq_{s-NBAFR(R)}Y.$ \section*{Acknowledgements} \hspace*{0.3 in}The authors are thankful to the Editor and anonymous Reviewers for their valuable constructive comments and suggestions which led to an improved version of the manuscript. Financial support from the Council of Scientific and Industrial Research, New Delhi (CSIR Grant No.
09/921(0060)2011-EMR-I) is sincerely acknowledged by Nil Kamal Hazra.
\section{Introduction} Wide-bandgap devices, such as silicon carbide (SiC) and gallium nitride (GaN) metal oxide semiconductor field-effect transistors (MOSFETs), are promising alternatives to conventional silicon (Si) power devices~\cite{Baliga_book,Kimoto_book}. Owing to their excellent material properties, SiC and GaN MOSFETs can operate at higher switching frequencies with lower switching loss over a wide range of ambient temperatures. In particular, high-frequency switching operation greatly contributes to reducing the volume and weight of power converters. These are desirable properties in electric vehicle (EV) applications, such as on-board chargers (OBCs) and in-wheel motors (IWMs)~\cite{TIE2917_Liu,ECCE2020_Akatsu}. As the operating frequency of power converters increases, design optimization using circuit simulators based on compact models of power MOSFETs plays an increasingly important role. The formulation of accurate compact models for power MOSFETs, and the careful extraction of their parameter sets, are crucial for obtaining reliable results from circuit simulations. Compact power MOSFET models, which have long been the topic of intensive research, are composed of multiple nonlinear functions that accurately capture the device physics of the MOSFET. Recently, surface-potential-based and charge-based compact models~\cite{TED2013_Mattausch,TPEL2018_Shintani,TED2019_Agarwal,TED2020_Albrecht} have been successfully applied to simulate power converters. One of the most widely used parameter-extraction methods is iterative parameter refinement, which is based on gradient calculations~\cite{JJAP2004_Li,ICMTS2009_Zhou}. This approach involves two processes, gradient calculation and parameter updating, which are repeated until the characteristics of the model agree with the measured values or the iteration limit is reached. In this context, the gradient is the set of partial derivatives with respect to all model parameters in the model equation.
Then, in the parameter-update phase, the gradient-descent algorithm is applied to refine the parameters based on the gradient, which indicates the direction in which the parameters should be adjusted to improve the fitting accuracy. Although iterative parameter refinement is a very general method that is applicable to arbitrary model equations, it tends to require a long time for all the parameters to converge. The gradient calculations require numerical differentiation (ND) with regard to each parameter, which demands full evaluation of the model equations for all bias voltages and all model parameters. Therefore, the model evaluation constitutes most of the time required for parameter optimization. In a preliminary experiment~\cite{ICMTS2018_Shintani}, even when a very simple device model equation~\cite{TED1991_sakurai} was used, 99.6\% of the parameter-optimization time was spent on the gradient calculation. Several techniques are known for deriving derivatives, including symbolic differentiation (SD), numerical differentiation, and automatic differentiation (AD). SD derives derivatives based on the symbolic rules of basic calculus. While it provides exact derivatives, SD tends to require a large amount of memory when the model equation becomes complex. Since ND needs to evaluate the model equation at two points to approximate each derivative, the calculation time tends to be long depending on the model scale and the number of model parameters. AD is a technique for deriving partial derivatives of the equations defined in a program. AD can be carried out through combinations of the four basic arithmetic operations and elementary functions, such as the exponential function, by repeatedly applying the chain rule. By using AD, partial derivatives can be obtained automatically at a reduced calculation cost. Herein, we propose a method to accelerate the model parameter extraction for power MOSFET models.
Expanding upon the idea proposed in~\cite{ICMTS2018_Shintani}, we implement automatic differentiation (AD)~\cite{JCAM2000_Bartholomew-Biggs}, which has recently been extensively applied in machine learning to adjust the weights of convolutional neural networks during the process of {\it backpropagation}~\cite{JMLR2018_Baydin,NATURE2015_LeCun}. The AD technique enables the contribution of each model parameter to the discrepancy between the measurement and the model to be calculated individually. This is achieved by traversing a so-called computational graph merely twice, i.e., once in the forward direction and once in the backward direction, regardless of the number of model parameters. Therefore, the proposed method eliminates the need for repeated model evaluations during the gradient calculation. AD can be directly substituted for the conventional ND, so there is no need to change the existing procedures for updating the parameters in the gradient-based parameter-extraction method. Herein, we experimentally demonstrate that this approach reduces the time required for parameter extraction by a factor of 4.34 compared with the ND method for a model consisting of eight parameters; this is close to the theoretical upper limit of 4.5$\times$ time reduction for this number of parameters. The remainder of this paper is organized as follows: Section~\ref{sec:conv} provides the problem formulation of the ND. In Section~\ref{sec:propose}, the basic concept of AD and the AD-based parameter-extraction method are described. Then, experiments using two SPICE models~\cite{TED1991_sakurai,TPEL2018_Shintani} to quantitatively evaluate the effectiveness of the proposed method are reported in Section~\ref{sec:exp}. Finally, the paper is concluded in Section~\ref{sec:conclusion}.
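As a concrete illustration of the two-point ND scheme mentioned above, and of why its cost grows with the number of parameters, consider the following minimal Python sketch. The toy linear model and its data are hypothetical and serve only to fix ideas; the formal treatment is given in Section~\ref{sec:conv}.

```python
import math

def rmse(params, voltages, i_meas, model):
    """RMSE between measured currents and the model evaluated at each bias point."""
    return math.sqrt(sum((im - model(params, v)) ** 2
                         for im, v in zip(i_meas, voltages)) / len(i_meas))

def nd_gradient(params, deltas, voltages, i_meas, model):
    """Two-point forward-difference gradient: one baseline pass plus one pass per parameter."""
    e = rmse(params, voltages, i_meas, model)            # baseline evaluation pass
    grad = []
    for i, (p, d) in enumerate(zip(params, deltas)):
        perturbed = list(params)
        perturbed[i] = p + d                             # perturb a single parameter
        grad.append((rmse(perturbed, voltages, i_meas, model) - e) / d)
    return grad, e

# Toy model I = p0 * V with data generated from p0 = 2.
voltages = [1.0, 2.0, 3.0]
i_meas = [2.0, 4.0, 6.0]
grad, e = nd_gradient([1.0], [1e-6], voltages, i_meas,
                      lambda p, v: p[0] * v)
# Analytically, dE/dp0 = -sqrt(14/3) at p0 = 1, which the finite difference recovers.
```

Each call to {\tt nd\_gradient} performs one baseline RMSE pass plus one perturbed pass per parameter, i.e., $1+n$ full model evaluations per gradient.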
\section{Parameter extraction based on numerical differentiation} \label{sec:conv} \subsection{Gradient-Based Parameter Extraction} We first review gradient-based parameter extraction as the typical optimization algorithm, while the proposed method is applicable to other algorithms using derivatives, such as the Levenberg-Marquardt (LM) method~\cite{SIAM1963_Marquardt}. Algorithm~\ref{alg:gd} outlines the parameter extraction for a drain current model, $f(\cdot)$, as an example. Here, $f(\cdot)$ is a function of the gate-source voltage, $V_{\rm gs}$, and drain-source voltage, $V_{\rm ds}$, and is based on the current model equation with constant model parameters, $\bm p$. The algorithm has the following seven inputs: the initial value vector of the model parameters ($\bm p$), the vector representing the small changes in each parameter ($\bm{\delta}$), the measured current data ($\bm{I}^{\rm meas}$) and the corresponding bias voltages ($\bm{V}$), the target error ($E_{\rm target}$), and the maximum number of iterations ($N_{\rm max}$). $\bm{\eta}$ represents the rate at which the parameters are updated. Vectors $\bm{p}$, $\bm{\delta}$, and $\bm{\eta}$ all have sizes of $n$, which is equal to the number of model parameters. $\bm{V}$ is a vector whose components represent the voltage pair $V_{\rm gs}$ and ${V_{\rm ds}}$ at which the drain current, $\bm{I}^{\rm meas}$, is measured. Vectors $\bm{V}$ and $\bm{I}^{\rm meas}$ both have dimensions of $m_{\rm vd} \times m_{\rm vg}$, where $m = m_{\rm vd} \times m_{\rm vg}$ is the total number of the data points measured and $m_{\rm vd}$ and $m_{\rm vg}$ are the number of voltages measured in the ${V_{\rm ds}}$ and ${V_{\rm gs}}$ sweeps, respectively. The gradient-based parameter extraction proceeds by changing each of the parameters, $\bm{p}$, in the direction determined by the numerical derivatives to reduce the cost function, $E$. 
Here, $E$ is the root-mean-square error (RMSE) between the simulation and measurement and is generally defined as follows: \begin{equation} E = \sqrt{\frac{1}{m}\sum_{j=0}^{m-1} (I^{\rm meas}_j-I^{\rm sim}_j)^2}. \label{eq:error} \end{equation} Before the extraction is initiated, the currents may be normalized such that each bias point has an equal contribution to the final error~\cite{TED2008_Takeuchi}. Otherwise, the fitting results would be dominated by the bias points having larger current values, while those having smaller current values would not contribute to the fitting. This leads to insufficient extraction accuracy. In line 3 of Algorithm~\ref{alg:gd}, the function {\em gradient\_calc} is used to obtain the gradient of the cost function, $\nabla E$, with respect to each parameter as follows: \begin{equation} \nabla E = \left(\frac{\partial E}{\partial p_0}, \ldots ,\frac{\partial E}{\partial p_{n-1}}\right). \end{equation} Then, in lines 4-6, the gradient is used to update the parameters according to $\bm{\eta}$ in a direction that will reduce the cost function. In line 5, each parameter is changed according to $\partial E/\partial p_i$ at an update rate of $\bm{\eta}$, which is assumed to be constant throughout the parameter extraction for the sake of simplicity. However, a variable step size can be chosen, as proposed in AdaGrad~\cite{JMLR2011_Duchi} or AdaDelta~\cite{arxiv2012_adadelta}. Next, the cost function and its gradient are recalculated based on the updated model parameters. The gradient calculation and the parameter update are alternately repeated until either of the exit conditions in line 2 is satisfied, i.e., the cost function becomes smaller than the target value or the iteration limit is reached. \begin{figure}[t!]
\begin{algorithm}[H] \caption{Gradient-descent-based parameter optimization} \label{alg:gd} \begin{algorithmic}[1] \Require $\bm{p}=(p_0, \ldots ,p_{n-1})$, $\bm{\delta}=(\delta_0, \ldots ,\delta_{n-1})$, ${\bm V}=((V_{\rm{gs}_0}, V_{\rm {ds}_0}), \ldots ,(V_{\rm{gs}_{m-1}},V_{\rm{ds}_{m-1}}))$, $\bm{I}^{\rm meas}=(I^{\rm meas}_0, \ldots ,I^{\rm meas}_{m-1})$, $E_{\rm target}$, $N_{\rm max}$, $\bm{\eta}=(\eta_0, \ldots, \eta_{n-1})$ \State initialize $N_{\rm iter}=0$ \Do \State $\nabla E, E =$ gradient\_calc($\bm{p}$, $\bm{\delta}$, ${\bm V}$, $\bm{I}^{\rm meas}$) \ForEach {$p_i \in \bm{p}$} \State $p_i = p_i - \eta_i \frac{\partial E}{\partial p_i}$ \EndFor \State $N_{\rm iter}$++ \doWhile{($N_{\rm iter} < N_{\rm max}$ or $E_{\rm target} < E$) } \State Return optimal parameter $\bm{p}$ \end{algorithmic} \end{algorithm} \end{figure} \subsection{Numerical Differentiation}\label{sec:nd} The gradient-based parameter extraction requires the gradient to be calculated as shown in line 3 of Algorithm~\ref{alg:gd}. A simple way to approximate the derivatives is to adopt ND; in this approach, one of the model parameters, $p_i$, is slightly changed by a small amount, $\delta_i$, while the other model parameters and inputs are fixed to evaluate the change in $E$. This two-point gradient approximation is versatile as it can be applied even when the model equation is not expressed using closed-form equations. The ND-based gradient calculation is conducted by the function {\em gradient\_calc} in Algorithm~\ref{alg:gd} as summarized in Algorithm~\ref{alg:nd}. The ND approximates the derivative based on the slope between the two points; thus, the model equation, $f(\cdot)$, must be evaluated twice (i.e., once with respect to each model parameter, $I_{\rm sim1}$ and $I_{\rm sim2}$). $E$ and $E_{\rm delta}$ are the RMSEs for $\bm{p}$ and $\bm{p}'$, respectively. 
Based on these errors, the partial derivative of $E$ is calculated with respect to each parameter as shown in line 14 of Algorithm~\ref{alg:nd}. \begin{figure}[t!] \begin{algorithm}[H] \caption{ND-based gradient calculation} \label{alg:nd} \begin{algorithmic}[1] \Function{ND}{$\bm{p}$, $\bm{\delta}$, ${\bm V}$, $\bm{I}^{\rm meas}$} \State $e=0$, $e_{\rm delta}=0$ \ForEach {$(V_{\rm{gs}_j},V_{\rm{ds}_j}) \in \bm{V}$} \State $I_{\rm sim1} = f(\bm{p},(V_{\rm{gs}_j},V_{\rm{ds}_j}))$ \State $e=e+(I^{\rm meas}_j - I_{\rm sim1})^2$ \ForEach {$p_i \in \bm{p}$} \State $\bm{p'} = \bm{p}$ \State Substitute $p'_i = p_i + \delta_i$ for $i$-th element of $\bm{p'}$ \State $I_{\rm sim2} = f(\bm{p'},(V_{\rm{gs}_j},V_{\rm{ds}_j}))$ \State $e_{\rm delta}=e_{\rm delta}+(I^{\rm meas}_j - I_{\rm sim2})^2$ \EndFor \State $E= \sqrt{\frac{e}{m}}$ \State $E_{\rm delta}= \sqrt{\frac{e_{\rm delta}}{m}}$ \State $\frac{\partial E}{\partial p_i} = \frac{E_{\rm delta}-E}{\delta_i}$ \EndFor \State Return $\nabla E, E$ \EndFunction \end{algorithmic} \end{algorithm} \end{figure} ND in the context of parameter extraction is computationally intensive as it involves the calculation of the cost function and gradient with respect to the model parameters, and hence, the evaluation of the model equation for all combinations of bias voltages and parameters. In the ND-based method, $f(\cdot)$ is evaluated $(1+n)mN_{\rm iter}$ times, where $N_{\rm iter}$ is the iteration count. The calculation time increases linearly as $n$ increases. Hence, the time required to evaluate the partial derivative will increase for complex models having larger number of parameters. Moreover, in the situations where any of the model parameters are involved in different model equations, the computation becomes more complex as the parameter extraction has to take all the model equations into account. For example, oxide thickness is involved in both current and capacitance equations in typical MOSFET models. 
The two model equations should be evaluated simultaneously during the extraction such that the consistent parameter values for these equations are obtained. In the optimization, {\em gradient\_calc} is performed with respect to each model equation. Thus, the total calculation complexity is written as the sum of $(1+n)mN_{\rm iter}$ in each model equation. The proposed method adopts AD to eliminate the iteration over $n$. \section{Parameter extraction based on automatic differentiation} \label{sec:propose} The proposed novel method of parameter extraction essentially follows the same procedure as gradient-based extraction shown in Algorithm~\ref{alg:gd}. By replacing the most time-consuming step of the differentiation in ND, AD is expected to significantly reduce the time required for parameter extraction. \subsection{Automatic Differentiation} The basic concept of AD\footnote[1]{According to the direction of the traverse in the computational graph, the AD is classified into two distinct types: forward type and reverse type~\cite{JMLR2018_Baydin}. We hereafter call the reverse-type AD as AD.} is the decomposition of partial derivatives using the chain rule. The derivative, ${\rm d}y/{\rm d}x$, of a composite function, $y = p(q(x)) = p(w)$, is written using the chain rule as follows: \begin{equation} \frac{{\rm d}y}{{\rm d}x}=\frac{{\rm d}y}{{\rm d}w}\frac{{\rm d}w}{{\rm d}x}. \label{eq:example} \end{equation} The first and second factors of~(\ref{eq:example}) are individually calculated because ${\rm d}y/{\rm d}w={\rm d}p(w)/{\rm d}w$ and ${\rm d}w/{\rm d}x={\rm d}q(x)/{\rm d}x$. Then, ${\rm d}y/{\rm d}x$ is derived as the product of these two factors. In AD, the given expression is generally represented using a directed acyclic graph called {\em computational graph}~\cite{NIPS2015_Schulman}. The AD computes the gradient with respect to each parameter as the contribution to the output. This is realized by two calculation modes: forward and backward. 
In the forward mode, the given expression is first decomposed into a set of primitive expressions (i.e., the simplest functions), such as addition and multiplication; in this mode, the model parameters and input values are propagated to obtain the value of the output. Then, in the backward mode, the output variable to be differentiated is given first, and the partial derivative value of each partial expression (e.g., ${\rm d}y/{\rm d}w$ and ${\rm d}w/{\rm d}x$ in the above example in (\ref{eq:example})) is recursively calculated. In the gradient calculation involved in AD, the partial derivatives with respect to each of the input parameters are obtained simultaneously through a traversal of the computational graph in the forward and backward directions; hence, there is no need to repeat the model evaluation for each parameter. \subsection{Parameter Extraction} The AD-based gradient calculation, which is executed by the function {\em gradient\_calc} in line 3 of Algorithm~\ref{alg:gd}, is outlined in Algorithm~\ref{alg:ad}. Notably, in contrast to Algorithm~\ref{alg:nd}, there is no nested loop iterating over all model parameters. Instead, the forward propagation and backward propagation are performed in lines 4 and 5, respectively. $\nabla E$ is derived by summing the gradient for each bias condition, $\nabla E_{\rm tmp}$. The input arguments are also simpler than those in ND. The input vector $\bm{\delta}$, which represents the small parameter deviation used for numerically calculating each gradient, is unnecessary in this method and is thus omitted. \begin{figure}[t!]
\begin{algorithm}[H] \caption{AD-based gradient calculation} \label{alg:ad} \begin{algorithmic}[1] \Function{AD}{$\bm{p}$, ${\bm V}$, $\bm{I}^{\rm meas}$} \State Initialize $e$ and $\nabla E$ to zero \ForEach {$(V_{\rm{gs}_j},V_{\rm{ds}_j}) \in \bm{V}$} \State Calculate $I_{\rm sim} = f(\bm{p},(V_{\rm{gs}_j},V_{\rm{ds}_j}))$ and $e=e+(I^{\rm meas}_j - I_{\rm sim})^2$ through forward mode \State Calculate $\nabla E_{\rm tmp}$ and $\nabla E = \nabla E + \nabla E_{\rm tmp}$ through backward mode \EndFor \State $E= \sqrt{\frac{e}{m}}$ \State Return $\nabla E, E$ \EndFunction \end{algorithmic} \end{algorithm} \end{figure} The computational complexity of the forward and backward propagation is roughly equal to that of a single model evaluation~\cite{OMS1992_Griewank}; therefore, the calculation complexity becomes $2mN_{\rm iter}$ and is independent of the number of parameters $n$, unlike that in ND. Specifically, comparing the loops in Algorithms~\ref{alg:nd} and~\ref{alg:ad}, the gradient calculation is accelerated by a factor of $(n+1)/2$ in AD as compared to ND. Therefore, as the number of model parameters increases, the acceleration efficiency improves linearly. In situations where two model equations $f_1(\cdot)$ and $f_2(\cdot)$ are considered, the total calculation complexity is simply the sum of the complexities of the individual equations, $2m_1N_{\rm iter} + 2m_2N_{\rm iter}$. In the gradient calculation, if $\nabla E$ has the partial derivatives of the common parameters in the two models, they are added up. The proposed parameter-extraction method is described below using a simple example model: the drain current equation.
The drain current equation is expressed as $f(\cdot)$ in Algorithm~\ref{alg:ad} and is defined as follows~\cite{TPEL2018_Shintani}: \begin{align} I_{\rm sim} = \cfrac{1}{1 + \prmt{THETA}\cdot V_{\rm gs}} \, (1 + \prmt{LAMBDA}\cdot V_{\rm ds}) \nonumber \\ \cdot{\prmt{SCALE}}\cdot I_{\rm DD}, \label{eq:equation} \end{align} where $I_{\rm sim}$ is the simulated drain current, and $I_{\rm DD}$ is an intermediate value expressed as a function of $V_{\rm gs}$, $V_{\rm ds}$, and the surface potentials at the metal-oxide-semiconductor (MOS) interface. The channel current equation includes the channel length modulation and mobility degradation. {\bf SCALE}, {\bf LAMBDA}, and {\bf THETA} are model parameters that represent the scaling factor, channel length modulation, and channel mobility degradation, respectively. \begin{figure}[!t] \centering \includegraphics[width=0.52\linewidth]{graph.pdf} \caption{Computational graph of~(\ref{eq:equation}) including five multiplication operations, two addition operations, and one division operation.} \label{fig:graph} \end{figure} Fig.~\ref{fig:graph} shows a computational graph representing~(\ref{eq:equation}). The variables located at the leaves of the graph (e.g., {\bf THETA} and $V_{\rm ds}$) represent the inputs, and the one at the bottom (i.e., $I_{\rm sim}$) represents the output. The vertices labeled as $v_1, \ldots, v_7$ stand for intermediate variables. Each node represents a basic mathematical function and defines its input variables, output variable, internal connections, and functional behavior. The order of the input edges is defined consistently to handle non-commutative operations such as subtraction and division. \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{forward_graph.pdf} \caption{Forward propagation mode.
In this figure, the graph is simplified by showing only one subgraph for the current calculation of $v_9$.} \label{fig:forward_graph} \end{figure} In the AD-based gradient calculation, the forward propagation is conducted first. The current model equation is evaluated by traversing the computational graph to calculate the internal variables under a particular bias voltage condition, $V_{\rm ds}$ and $V_{\rm gs}$, as shown in Fig.~\ref{fig:forward_graph}. During the forward propagation, the values of the internal vertices, $v_1, \ldots, v_{12}$ and $I_{\rm sim}$, are stored for use in the backward propagation that will be conducted later. Note that, in Fig.~\ref{fig:forward_graph}, the calculation of the RMSE is in the last part of the graph. In the summation operation used to calculate the RMSE, the values of $I_{\rm meas}-I_{\rm sim}$ for each of the bias voltages are required. To simplify the illustration, only the graph for $v_9$ is shown and the subgraphs for the other bias points are omitted. \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{backward_graph2.pdf} \caption{Backward propagation mode. The directions of arrows are reversed as compared to those in Fig.~\ref{fig:forward_graph}. The dashed arrows indicate the path from $E$ to {\bf SCALE}. The propagated values on the path in the backward propagation are highlighted.} \label{fig:backward_graph} \end{figure} In the backward mode, the derivative of the output $E$ is propagated backwards through the graph to calculate the partial derivative with respect to each model parameter, as shown in Fig.~\ref{fig:backward_graph}, in which the path to calculate $\frac{\partial E}{\partial {\bf{SCALE}}}$ has been highlighted as an example. There are eight vertices ($E$, $v_{11}$, $v_{10}$, $v_9$, $v_8$, $I_{\rm sim}$, $v_7$, and ${\bf{SCALE}}$) on the path. 
According to the chain rule, $\frac{\partial E}{\partial {\bf{SCALE}}}$ is calculated as follows: \begin{eqnarray} \label{eqchain} \frac{\partial E}{\partial {\bf{SCALE}}} = \frac{\partial E}{\partial v_{11}} \cdot \frac{\partial v_{11}}{\partial v_{10}} \cdot \frac{\partial v_{10}}{\partial v_9} \cdot \frac{\partial v_9}{\partial v_8} \nonumber \\ \cdot \frac{\partial v_8}{\partial I_{\rm sim}} \cdot \frac{\partial I_{\rm sim}}{\partial v_7} \cdot \frac{\partial v_7}{\partial {\bf{SCALE}}}. \end{eqnarray} The derivative of each vertex with respect to its predecessor can be easily obtained: $\frac{\partial E}{\partial v_{11}}=\frac{1}{2\sqrt{v_{11}}}$ because $E=\sqrt{v_{11}}$. Similarly, $\frac{\partial I_{\rm sim}}{\partial v_7}=I_{\rm DD}$ as $I_{\rm sim}=v_7\cdot I_{\rm DD}$, and $\frac{\partial v_7}{\partial {\bf{SCALE}}}=v_4$ as $v_7=v_4 \cdot {\bf{SCALE}}$. Substituting all the derivatives in~(\ref{eqchain}) with the partial derivatives of the edges obtained by traversing the graph, the following equation is derived: \begin{equation} \frac{\partial E}{\partial {\bf{SCALE}}}= \frac{1}{2\sqrt{v_{11}}} \cdot \frac{1}{m} \cdot 2v_8 \cdot I_{\rm DD} \cdot v_4. \label{eq:lambda} \end{equation} The values of $v_{11}$, $v_8$, $I_{\rm DD}$, and $v_4$ were already stored during the forward pass. The partial derivatives are calculated for all model parameters, and the resulting set of partial derivatives is used as the gradient in the parameter updating phase. The chain rule thus provides a way to calculate the gradients easily and efficiently without approximation. The edges of the graphs for forward and backward propagations can be determined by simply consulting the rules summarized in Fig.~\ref{fig:rules}. We would like to note that the partial derivative shown in~(\ref{eq:lambda}) is the value for a single bias condition.
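The chain-rule product can be checked numerically against a finite-difference estimate, which is essentially what ND computes. In this sketch, a single bias point ($m=1$) is assumed, the residual is defined as $I_{\rm sim}-I^{\rm meas}$ so the signs stay compact, and all numbers are arbitrary placeholders rather than measured data:

```python
import math

# Sanity check of the chain-rule product for dE/dSCALE with m = 1.
# v4, IDD, Imeas are arbitrary illustrative values, not measured data.
v4, IDD, Imeas, m = 0.8, 1.5e-3, 1.0e-3, 1

def rmse(scale):
    i_sim = scale * v4 * IDD          # forward pass: I_sim = v7 * I_DD
    return math.sqrt((i_sim - Imeas) ** 2 / m)

SCALE = 2.0
# "Backward pass": multiply the local partials along the path to SCALE.
i_sim = SCALE * v4 * IDD
v8 = i_sim - Imeas                    # residual
v11 = v8 ** 2 / m                     # mean squared error
grad = (1.0 / (2.0 * math.sqrt(v11))) * (1.0 / m) * 2.0 * v8 * IDD * v4

# Finite-difference reference (what ND would approximate).
h = 1e-6
fd = (rmse(SCALE + h) - rmse(SCALE)) / h
assert abs(grad - fd) < 1e-6
```

The analytic product and the finite-difference estimate agree, but the former needs no extra model evaluation per parameter and introduces no step-size error.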
When there are multiple bias conditions, which is usually the case, the backward propagation explained above is carried out for all bias conditions. That is, at the sum node, the partial derivatives along all the paths contributing to $E$ are calculated for each bias condition until the model parameter nodes are reached. The partial derivative of each model parameter is then the sum of the partial derivatives for the different bias voltages that diverge at the sum node. \begin{figure}[!t] \centering \subfigure[Multiplication ($x \cdot y$)\label{fig:mul}]{ \includegraphics[width=0.3\linewidth]{mul.pdf}} \hspace{2mm} \subfigure[Addition ($x + y$)\label{fig:add}]{ \includegraphics[width=0.3\linewidth]{add.pdf}} \\ \subfigure[Division ($x/y$)\label{fig:div}]{ \includegraphics[width=0.32\linewidth]{div.pdf}} \hspace{2mm} \subfigure[Summation ($\sum x_m$)\label{fig:sum}]{ \includegraphics[width=0.32\linewidth]{sum.pdf}} \\ \subfigure[Exponential ($\exp(x)$)\label{fig:exp}]{ \includegraphics[width=0.4\linewidth]{exp.pdf}} \\ \hspace{2mm} \subfigure[Square root ($\sqrt{x}$)\label{fig:sqrt}]{ \includegraphics[width=0.32\linewidth]{sqrt.pdf}} \caption{Representative forward and backward propagation rules. The solid arrows are for forward propagation and the dashed ones are for backward propagation. In this figure, a value of 1 is assumed to be the input in the backpropagation.} \label{fig:rules} \end{figure} The computational graph needs to be constructed only once for a given MOSFET model. The construction is very quick, as described in the experimental section. Additionally, once the computational graph is generated, it can be reused as long as the same MOSFET model is used; therefore, the time needed to construct the computational graph is virtually negligible. Note that, because the computational graph is built from the model equations, access to at least the model equations or their source code is a prerequisite for applying the proposed parameter extraction.
Note that the model parameters extracted by the proposed method bear physical meaning when a physics-based model is used, because the computational graph is consistent with the model equations. Thus, the estimated model parameters can be used for device characterization, such as manufacturing variability analysis. \section{Experiments}\label{sec:exp} To quantitatively evaluate the effectiveness of the proposed method, the model parameters for two different MOSFET models were extracted using AD- and ND-based parameter extraction. A commercially available silicon carbide (SiC) power MOSFET~\cite{sct2450ke} was used for these measurements. The experiments were conducted on a Linux PC with an Intel Xeon W5590 3.33\,GHz central processing unit (CPU) using a single thread. The extraction was implemented in the Python programming language. \begin{figure}[!t] \centering \includegraphics[width=0.38\linewidth]{equiv.pdf} \caption{Equivalent circuit of the SiC power MOSFET.} \label{fig:equiv} \end{figure} \subsection{MOSFET Models}\label{sec:model} The two MOSFET models used in this section are an $N$-th-power-law model~\cite{TED1991_sakurai} and a surface-potential-based model~\cite{TPEL2018_Shintani}. The equivalent circuit of the SiC power MOSFET is shown in Fig.~\ref{fig:equiv}. The MOSFET model characterizes the current characteristics $I_{\rm sim}$ and three terminal capacitances: the gate-source capacitance $C_{\rm gs}$, drain-source capacitance $C_{\rm ds}$, and gate-drain capacitance $C_{\rm gd}$. The parameter extraction for the $N$-th-power-law model uses $I_{\rm sim}$ only, while that for the surface-potential-based model uses $I_{\rm sim}$, $C_{\mathrm{ds}}$, and $C_{\mathrm{gd}}$ in this experiment. Note that $C_{\mathrm{gs}}$ is modeled as a constant, as in~\cite{TPEL2018_Shintani}. \subsubsection{$N$-th-Power-Law Model} In this model, the drain current is calculated based on the threshold voltage, $\prmt{VTH}$, of the MOSFET\@.
The saturation voltage, $V_\mathrm{ds,sat}$, and saturation current, $I_\mathrm{d,sat}$, are defined as \begin{align} V_\mathrm{ds,sat}&=\prmt{J}\left(V_\mathrm{gs}-\prmt{VTH}\right)^{\prmt{M}} \,\,\,\,\,\textrm{and} \\ I_\mathrm{d,sat}&=\prmt{K}\left(V_\mathrm{gs}-\prmt{VTH}\right)^{\prmt{N}}, \end{align} where $\prmt{J}$ and $\prmt{M}$ are the fitting parameters used to calculate the current in the linear region and $\prmt{K}$ and $\prmt{N}$ are other fitting parameters that are used for the saturation region. $V_\mathrm{ds,mod}$ replaces $V_\mathrm{ds}$ to represent a smooth transition between the linear and saturation regions~\cite{TPEL2018_Shintani}: \begin{align} V_\mathrm{ds,mod} = \frac{V_\mathrm{ds}}{\left[1+\left(\frac{V_\mathrm{ds}} {V_\mathrm{ds,sat}}\right)^{\prmt{DELTA}} \right]^{\frac{1}{\prmt{DELTA}}}}, \label{eq:delta} \end{align} where $\prmt{DELTA}$ controls the smoothness of the transition of $V_\mathrm{ds,mod}$ between the linear and saturation regions. The drain current, $I_\mathrm{sim}$, is calculated based on the channel length modulation and mobility degradation~\cite{TED2006_Miura,TED2006_Gildenblat} as follows: \begin{align} I_\mathrm{sim}=&I_\mathrm{d,sat}\left(2-\dfrac{V_\mathrm{ds,mod}}{V_\mathrm{ds,sat}}\right) \dfrac{V_\mathrm{ds,mod}}{V_\mathrm{ds,sat}}\times\nonumber\\ &(1+\prmt{LAMBDA} \cdot V_\mathrm{ds})\,\left[1+\prmt{THETA}\,(V_\mathrm{gs}-\prmt{VTH})\right]. \end{align} The model parameters of the $N$-th-power-law model are listed in Table~\ref{tab:sakurai_parameters}.
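The model equations above can be sketched directly in Python. This is an illustrative transcription, not the paper's implementation; the sub-threshold region is simply clamped to zero, and the parameter values are placeholders loosely inspired by the extraction results:

```python
# Sketch of the N-th-power-law drain current described above.
# Illustrative only; the parameter values are placeholders.
def nth_power_law_id(vgs, vds, VTH, J, M, K, N, LAMBDA, THETA, DELTA):
    vov = vgs - VTH
    if vov <= 0.0:
        return 0.0                      # simplified sub-threshold cutoff
    vds_sat = J * vov ** M              # saturation voltage
    id_sat = K * vov ** N               # saturation current
    # Smooth linear-to-saturation transition controlled by DELTA
    vds_mod = vds / (1.0 + (vds / vds_sat) ** DELTA) ** (1.0 / DELTA)
    ratio = vds_mod / vds_sat
    return (id_sat * (2.0 - ratio) * ratio
            * (1.0 + LAMBDA * vds) * (1.0 + THETA * vov))

# Example: at a fixed Vds, the current grows with Vgs (placeholder values).
p = dict(VTH=2.6, J=0.12, M=1.74, K=2.7e-3, N=3.28,
         LAMBDA=2.6e-3, THETA=3.4e-4, DELTA=1.27)
i1 = nth_power_law_id(8.0, 20.0, **p)
i2 = nth_power_law_id(12.0, 20.0, **p)
assert i2 > i1 > 0.0
```

Because the function is composed only of elementary operations (powers, products, sums, one division), it maps directly onto a computational graph of the kind used for AD.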
\begin{table}[t] \centering \caption{Model parameters of the $N$-th-power-law model~\cite{TED1991_sakurai}} \label{tab:sakurai_parameters} \begin{tabular}{l|l}\hline Parameter & Description \\ \hline {\bf VTH} & Threshold voltage [V] \\ {\bf K} & Fitting parameter for the saturation region [-] \\ {\bf M} & Fitting parameter for the linear region [-]\\ {\bf J} & Fitting parameter for the linear region [-] \\ {\bf N} & Fitting parameter for the saturation region [-] \\ {\bf LAMBDA} & Channel length modulation [1/V] \\ {\bf THETA} & Mobility degradation [1/V] \\ {\bf DELTA}& Smoothing parameter for gradual transition\\ & between the linear and saturation regions [-]\\ \hline \end{tabular} \end{table} \subsubsection{Surface-Potential-Based Model} \begin{table}[!t] \centering \caption{Model parameters of the current equation in the surface-potential-based model~\cite{TPEL2018_Shintani}} \label{tab:sp_parameters} \begin{tabular}{l|l}\hline Model parameter & Description \\ \hline \prmt{TOX} &Oxide thickness [m]\\ \prmt{VFBC}&Flat-band voltage of the channel region [V]\\ \prmt{NA}& Acceptor concentration [$\rm cm^{-3}$]\\ \prmt{SCALE}& Current gain factor [$\rm cm^2/V$]\\ \prmt{RD}& Parasitic resistance at the drain side [$\Omega$]\\ \prmt{LAMBDA}& Channel length modulation [1/V]\\ \prmt{THETA}& Channel mobility degradation [1/V]\\ \prmt{DELTA}& Smoothing parameter for gradual transition\\ & between the linear and saturation regions [-]\\ \hline \end{tabular} \end{table} \begin{table}[!t] \centering \caption{Model parameters of the capacitance equation in the surface-potential-based model~\cite{TPEL2018_Shintani}} \label{tab:symbol_cv} \begin{tabular}{l|l}\hline Model parameter & Description \\ \hline \prmt{ADS}& Drain-source overlap area [$\rm cm^2$]\\ \prmt{ND}&Donor concentration [$\rm cm^{-3}$]\\ \prmt{VBI}& Built-in potential of PN junction [V]\\ \prmt{COXD}& Gate-drain oxide capacitance [F]\\ \prmt{AGD}& Gate-drain overlap area [$\rm cm^2$]\\ \prmt{VFBD}& Gate-drain
flat-band voltage [V]\\ \hline \end{tabular} \end{table} \begin{table}[!t] \centering \caption{Physical constants} \label{tab:physical_parameters} \begin{tabular}{l|l|r}\hline Parameter & Description & Value \\ \hline $k$ & Boltzmann's constant [$\rm J/K$] & $1.38 \times 10^{-23}$\\ $q$ & Elementary charge [C] &$1.60\times 10^{-19}$\\ $T$& Absolute temperature [K] & 298\\ $\phi_{\rm t}$& Thermal voltage ($kT/q$) [V] & 0.026\\ $\varepsilon_{\rm SiC}$ &Permittivity of SiC [F/m] & $9.7\times 8.85 \times 10^{-12}$\\ $\varepsilon_\mathrm{ox}$ & Permittivity of gate oxide [F/m]& $3.9\times8.85\times10^{-12}$\\ $n_i$ & Intrinsic carrier concentration & $4.82\times10^{15}$\\ & of SiC [$\rm cm^{-3}$] & \\ \hline \end{tabular} \end{table} The proposed parameter extraction is also performed on the surface-potential-based model, which was developed to simulate the behavior of a SiC power MOSFET~\cite{TPEL2018_Shintani}. The model parameters of the current and capacitance model equations are listed in Tables~\ref{tab:sp_parameters} and~\ref{tab:symbol_cv}, respectively. The physical constants for these models are summarized in Table~\ref{tab:physical_parameters}. According to the current model in~\cite{TPEL2018_Shintani}, the surface potentials, $\phi_\mathrm{sS}$, and $\phi_\mathrm{sD}$ are first calculated for the source and drain ends of the channel as functions of $V_{\rm gs}$ and $V_{\rm ds}$. The inverted charge of the channel is determined as a function of the surface potential. 
Then, the intermediate value $I_{\rm DD}$ in~(\ref{eq:equation}) is computed based on $\phi_{\rm sS}$ and $\phi_{\rm sD}$ as follows: \begin{align} I_{\rm DD}&=C_{\rm ox}{ (V_{\rm gs} - {\bf VFBC} + \phi_{\rm t})(\phi_{\rm sD} - \phi_{\rm sS})} \nonumber \\ &- \cfrac{1}{2}\, C_{\rm ox} (\phi_{\rm sD}^2 - \phi_{\rm sS}^2) \nonumber \\ &-\cfrac{2}{3}\, \phi_{\rm t} \gamma \left\{\left(\phi_{\rm sD}/\phi_{\rm t} - 1\right)^{\frac{3}{2}} - (\phi_{\rm sS}/\phi_{\rm t} - 1)^{\frac{3}{2}} \right\} \nonumber \\ &+\phi_{\rm t} \gamma \left\{\left(\phi_{\rm sD}/\phi_{\rm t} - 1\right)^{\frac{1}{2}} - (\phi_{\rm sS}/\phi_{\rm t} - 1 )^{\frac{1}{2}}\right\}, \label{eq:idd} \end{align} where \begin{align} \gamma&=\sqrt{2\varepsilon_{\rm SiC}kT\cdot {\bf NA}}. \end{align} Here, $k$ and $T$ are the Boltzmann constant and the absolute temperature, respectively. $\varepsilon_\mathrm{SiC}$ is the permittivity of SiC, $\phi_{\rm t}$ is the thermal voltage, and $C_{\rm ox}$ is the gate oxide capacitance per unit area. Further, $C_{\rm ox}=\varepsilon_{\rm ox}/{\bf TOX}$, where $\varepsilon_{\rm ox}$ and {\bf TOX} are the permittivity and thickness, respectively, of the gate oxide. {\bf VFBC} and {\bf NA} are the flat-band voltage and the acceptor concentration, respectively. The channel current model also includes a smooth transition between the linear and saturation regions according to the model parameter {\bf DELTA}, which is defined by an equation similar to that in the $N$-th-power-law model. Finally, by substituting $I_{\rm DD}$ from~(\ref{eq:idd}) into~(\ref{eq:equation}), the simulated drain current, $I_{\rm sim}$, is derived. The drain current also depends on the parasitic resistance {\bf RD}~\cite{TED2013_Mattausch}, which reduces the internal drain voltage of the MOSFET. In the surface-potential-based model, the calculation of $\phi_{\rm sS}$ and $\phi_{\rm sD}$ involves solving a non-linear equation~\cite{Baliga_book}, whose solution cannot be explicitly expressed.
Typically, the surface potentials have to be obtained using iterative methods~\cite{TED2006_Miura,TED2013_Mattausch} such as the Newton-Raphson method. Though the computational graph may also be constructed for the model equations containing iterations, we adopt the following closed-form expression for the surface potential $\phi_{\rm s}$~\cite{SSE2001_Chen} for simplifying the calculation graph: \begin{align} (V_{\rm gs}-\prmt{VFBC}-\phi_{\rm s})^2= \gamma^2 \phi_{\rm t} [(\exp(-\frac{\phi_{\rm s}}{\phi_{\rm t}})+\frac{\phi_{\rm s}}{\phi_{\rm t}}-1) + \nonumber \\ \exp(-(2\phi_{\rm F} + \phi_{\rm t})/\phi_t)(\exp(\frac{\phi_{\rm s}}{\phi_{\rm t}}) - \frac{\phi_{\rm s}}{\phi_{\rm t}} -1)]. \label{eq:spe} \end{align} $\phi_{\rm sS}$ and $\phi_{\rm sD}$ are derived by solving~(\ref{eq:spe}) with respect to~$\phi_{\rm s}$ at $\phi_{\rm F}=0$ and $\phi_{\rm F}=V_{\rm ds}$, respectively. Thus, the model equations of the surface-potential-based model can be entirely represented by a computational graph. The capacitance characteristics, $C_{\mathrm{ds}}$ and $C_{\mathrm{gd}}$, are expressed in the surface-potential-based model as follows: \begin{align} C_{\rm ds} &= {\bf ADS}\cdot \sqrt{\frac{q \cdot \varepsilon_{\rm SiC}\cdot {\bf ND}} {2({\bf VBI} + V_{\rm ds}) }}\,\,\,\,\,\,{\rm and} \label{eq:cds} \\ C_{\rm gd} &= {\bf COXD} \parallel C_{\rm dep}. \label{eq:cgd} \end{align} $C_{\mathrm{ds}}$ is a bias-dependent junction capacitance between drain and source, which is calculated on the basis of the capacitance model of the PN junction as shown in~(\ref{eq:cds}). $C_{\mathrm{gd}}$ is modeled as a series connection of the constant gate oxide capacitance ${\bf COXD}$ and the bias-dependent depletion-layer MOS capacitance $C_{\rm dep}$. Here, $C_{\rm dep}$ is a MOS capacitance represented as a function of the surface potential $\phi_{\rm gd}$ of the channel formed on the drain region under the junction field effect transistor (JFET) region. 
It can be written as shown in~(\ref{eq:cdep}), where $\phi_{\rm gd}$ is computed in a similar way to the calculation of $\phi_{\rm s}$ using \eqref{eq:spe} with ($V_{\rm gd}$, $V_{\rm ds}$, ${\bf ND}$, ${\bf VFBD}$). \begin{figure*}[!t] \begin{equation} C_{\rm dep} = {\bf AGD}\cdot\sqrt{2q\varepsilon_{\rm SiC}\cdot{\bf ND}} \, \cfrac{1 - e^{-\phi_{\rm gd}/\phi_{\rm t}} + e^{-(2\phi_{\rm F}+ V_{\rm ds})/ \phi_{\rm t}} (e^{\phi_{\rm gd}/\phi_{\rm t}} - 1) } {2\sqrt{\phi_{\rm t}e^{-\phi_{\rm gd}/\phi_{\rm t}} + \phi_{\rm gd} - \phi_{\rm t} + e^{-(2\phi_{\rm F}+V_{\rm ds})/\phi_{\rm t}} (\phi_{\rm t}e^{\phi_{\rm gd}/\phi_{\rm t}} - \phi_{\rm gd} - \phi_{\rm t}) }} \label{eq:cdep} \end{equation} \end{figure*} \subsubsection{Initial Parameter Determination}\label{sec:initial} \begin{figure}[!t] \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=.9\columnwidth]{lambda_rd.pdf} \end{center} \caption{Extraction of {\bf LAMBDA} and {\bf RD}.} \label{fig:ext_idvd} \end{minipage} \hspace{2mm} \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=.98\columnwidth]{k_theta.pdf} \end{center} \caption{Extraction of {\bf K} and {\bf THETA}.} \label{fig:ext_idvg} \end{minipage} \hspace{2mm} \begin{minipage}{0.48\hsize} \vspace{2mm} \begin{center} \includegraphics[width=.98\columnwidth]{cgs.pdf} \end{center} \caption{Extraction of {\bf VFBC}.} \label{fig:ext_vfbc} \end{minipage} \hspace{2mm} \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=.9\columnwidth]{coxd_vfbd.pdf} \end{center} \caption{Extraction of {\bf COXD} and {\bf VFBD}.} \label{fig:coxd_vfbd} \end{minipage} \end{figure} The initial parameter determination is one of the important steps in the model development because the model parameters are repeatedly updated to minimize the discrepancy with the measured device characteristics, starting from the initial values. 
When the choice of the initial parameters is inappropriate, the model parameters may converge to a local optimum that is far from the ground truth. Standardized compact models provide initial parameter determination procedures as well as the model descriptions~\cite{BSIM48,HiSIM-HV}. In the experiment, we apply the initial parameter determination procedure for the surface-potential-based model proposed in~\cite{WIPDA2019-Shintani}. In this work, we assume {\bf TOX} is 50\,nm. The default value of {\bf DELTA}, which is introduced to model the gradual transition between the linear and saturation regions, is set to 0.8~\cite{HiSIM-HV}. The slope of the $I_{\rm d}$-$V_{\rm ds}$ curve in the saturation region is represented by {\bf LAMBDA}, while the slope at $V_{\rm ds} \simeq 0$\,V for a high $V_{\rm gs}$ yields {\bf RD}, as shown in Fig.~\ref{fig:ext_idvd}. {\bf K} and {\bf THETA} are extracted from the $I_{\rm d}$-$V_{\rm gs}$ curve as shown in Fig.~\ref{fig:ext_idvg}. {\bf K} is approximated by the slope of the saturation-region current, and {\bf THETA} is extracted from the linear-region current. {\bf VFBC} is the flat-band voltage of the MOS interface at the channel region. Beyond that voltage, the depletion capacitance becomes apparent in the MOS capacitance characteristics. Hence, {\bf VFBC} can be estimated from the gate-source capacitance $C_{\rm gs}$ as the $V_{\rm gs}$ at which $C_{\rm gs}$ starts to bend, as shown in Fig.~\ref{fig:ext_vfbc}. As a good estimate, we approximate $C_{\rm dep}$ by a PN junction capacitance that depends on the gate-drain voltage $V_{\mathrm{gd}}$~\cite{EPE2011_Phankong}: \begin{eqnarray} C_{\rm dep} = {\bf AGD} \sqrt{\frac{q \cdot \varepsilon_{\rm SiC}\cdot {\bf ND}}{2({\bf VFBD}-V_{\mathrm{gd}})}}. \label{eq:approximated_cgd} \end{eqnarray} As shown in Fig.~\ref{fig:coxd_vfbd}, {\bf VFBD} and {\bf COXD} are obtained from the capacitance in the accumulation mode. {\bf AGD} is estimated as $\frac{{\bf COXD}\cdot{\bf TOX}}{\varepsilon_{\rm ox}}$.
Then, by substituting {\bf VFBD}, {\bf COXD}, and {\bf AGD} into~(\ref{eq:approximated_cgd}), {\bf ND} can be calculated. Finally, {\bf NA} is obtained from \begin{equation} {\bf VBI} = \frac{kT}{q}\ln \left(\frac{{\bf NA} \cdot {\bf ND}}{n_i^2} \right), \label{eq:vbi} \end{equation} where $n_i$ is the intrinsic carrier concentration. {\bf VBI} is derived as the forward voltage at which the body-diode current starts to flow. {\bf ADS} is calculated by substituting {\bf VBI} and {\bf ND} into~(\ref{eq:cds}). \subsubsection{Computational Graph} \label{sec:graph} The computational graphs of the two models were constructed based on the respective model equations. Graph manipulations are implemented using the Python package NetworkX~\cite{SCIPY2008_Hagberg}. As shown in Table~\ref{tab:generation_time}, the execution time was well below 0.1\,s using a single thread for both models. Table~\ref{tab:graph_size} summarizes the sizes of the two computational graphs from the input parameters to $I_{\rm sim}$ for a particular bias voltage. The subgraph for calculating $I_{\rm sim}$ is reused for different bias voltages. The graph size of the surface-potential-based model is approximately ten times larger than that of the $N$-th-power-law model. Thus, only the computational graph of the $N$-th-power-law model is presented in Fig.~\ref{fig:graph_n-th-power}. By defining the forward and backward functions of each node and incorporating the computational graph into Algorithm~\ref{alg:ad}, the AD-based parameter extraction is carried out according to Algorithm~\ref{alg:gd}.
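The graph construction and forward evaluation with NetworkX can be sketched as follows. The node attributes, operator table, and toy expression are illustrative and not the actual implementation:

```python
import math
import networkx as nx

# Toy computational graph for y = (a + b) * c, evaluated by forward
# propagation in topological order. Illustrative, not the paper's code.
g = nx.DiGraph()
g.add_node("a", value=2.0)
g.add_node("b", value=3.0)
g.add_node("c", value=4.0)
g.add_node("v1", op="add")
g.add_node("y", op="mul")
g.add_edges_from([("a", "v1"), ("b", "v1"), ("v1", "y"), ("c", "y")])

ops = {"add": lambda xs: sum(xs), "mul": lambda xs: math.prod(xs)}
for node in nx.topological_sort(g):        # forward propagation
    if "op" in g.nodes[node]:
        inputs = [g.nodes[p]["value"] for p in g.predecessors(node)]
        g.nodes[node]["value"] = ops[g.nodes[node]["op"]](inputs)

print(g.nodes["y"]["value"])  # 20.0
```

A backward pass would traverse the same graph in reverse topological order, applying the local derivative rules of Fig.~\ref{fig:rules} at each node; non-commutative operators would additionally need a fixed ordering of the input edges.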
\begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{level2.pdf} \caption{Computational graph of the $N$-th-power-law model for a particular bias voltage.} \label{fig:graph_n-th-power} \end{figure} \begin{table}[!t] \caption{Graph generation time}\label{tab:generation_time} \centering \begin{tabular}{l|l||r} \hline \multicolumn{2}{l||}{Model} & Time {[}s{]} \\ \hline \multicolumn{2}{l||}{$N$-th-power-law model} & 0.0384 \\ \hline \multirow{3}{*}{Surface-potential-based model} & $I_{\rm sim}$ & 0.0571 \\ \cline{2-3} & $C_{\mathrm{ds}}$ & 0.0039 \\ \cline{2-3} & $C_{\mathrm{gd}}$ & 0.0288 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \caption{Size of the computational graphs}\label{tab:graph_size} \centering \begin{tabular}{l|l||r|r} \hline \multicolumn{2}{l||}{Model} & No. of Edges & No. of Vertices \\ \hline \multicolumn{2}{l||}{$N$-th-power-law model} & 39 & 48 \\ \hline \multirow{3}{*}{Surface-potential-based model} & $I_{\rm sim}$ & 384 & 540 \\ \cline{2-4} & $C_{\mathrm{ds}}$ & 20 & 20 \\ \cline{2-4} & $C_{\mathrm{gd}}$ & 318 & 228 \\ \hline \end{tabular} \end{table} \subsection{Simulation Setup} I-V curves of the SiC MOSFET~\cite{sct2450ke} were measured at room temperature using a dedicated curve tracer~\cite{ICMTS2016_Nakamura} while sweeping $V_{\rm gs}$ from 6\,V to 14\,V in 2\,V steps and $V_{\rm ds}$ from 0\,V to 50\,V in 2\,V steps (the total number of bias voltages tested, $m$, was 125). C-V curves were measured by a commercial curve tracer~\cite{b1505a} at 1\,MHz. There are 300 data points for each of the $C_{\mathrm{ds}}$ and $C_{\mathrm{gd}}$ measurements. The parameter updating process was conducted using AdaGrad~\cite{JMLR2011_Duchi}: \begin{eqnarray} h_i &=& h_i + \left(\frac{\partial E}{\partial p_i}\right)^2 \qquad {\mathrm{and}}\label{eq:adagrad1} \\ p_i &=& p_i - \eta_i \frac{1}{\sqrt{h_i}} \frac{\partial E}{\partial p_i}.
\label{eq:adagrad2} \end{eqnarray} The parameter update is performed to decrease the RMSE between the measured drain current and that obtained from the MOSFET model with the latest model parameters. In AdaGrad, an auxiliary vector, $\bm{h}=(h_0,...,h_{n-1})$, is introduced to accumulate the squared gradients; based on it, the effective step size $\eta_i \frac{1}{\sqrt{h_i}}$ is gradually reduced, as shown in~(\ref{eq:adagrad1}) and~(\ref{eq:adagrad2}). The initial values of all elements of $\bm{h}$ were set to zero, and each element of $\bm{\eta}$ was set to 1/100 of the respective parameter value. \subsection{Results} \subsubsection{Parameter extraction for current characteristics}\label{sec:c-1} The parameters extracted by the proposed AD-based approach were compared with those obtained by the ND-based method using each of the two MOSFET models. In this experiment, the maximum number of iterations, $N_{\rm max}$, was set to 1,000 for both models. Also, $E_{\rm target}$ was set to 0.04\,A and 0.16\,A for the respective models. The initial values were randomly determined for the $N$-th-power-law model, while those of the surface-potential-based model were determined by the initial parameter determination procedure described in Section~\ref{sec:initial}. Fig.~\ref{fig:result1} shows the RMSE as a function of the computation time. The proposed method accelerated the parameter extraction by 4.03$\times$ and 4.34$\times$ for the $N$-th-power-law model and the surface-potential-based model, respectively. This improvement in the computation time is close to the theoretical value of 4.5 ($= (8+1)/2$) for a model with eight parameters. \begin{figure}[t!] \centering \subfigure[$N$-th-power-law model \label{fig:npower}]{ \includegraphics[width=0.55\linewidth]{RMSE_thr.pdf}}\\ \subfigure[Surface-potential-based model\label{fig:sp}]{ \includegraphics[width=0.55\linewidth]{RMSE_pot.pdf}} \caption{RMSE as a function of the computation time.} \label{fig:result1} \end{figure} \begin{figure}[t!]
\centering \subfigure[$N$-th-power-law model \label{fig:sakurai}]{% \includegraphics[width=.62\columnwidth]{thr_model_I-V.pdf} } \\ \centering \subfigure[Surface-potential-based model \label{fig:iv_sp}]{% \includegraphics[width=.65\columnwidth]{Ids.pdf} } \caption{Simulated and measured I-V curves.} \label{fig:iv} \end{figure} The measured and simulated I-V characteristics of the two models are presented in Fig.~\ref{fig:iv}. The final model parameters extracted using the AD and ND methods were used to generate the graphs. These results show that both MOSFET models accurately reproduced the I-V characteristics of the SiC MOSFET\@. The extracted parameter values at $N=N_{\rm max}$ are summarized in Tables~\ref{tab:sakurai_error} and~\ref{tab:sp_error}. The initial values are also listed in Table~\ref{tab:sp_error} for the surface-potential-based model. The relative error was smaller than 2\% in all cases. In addition, it can be seen from Table~\ref{tab:sp_error} that the initial and optimized parameters are close to each other, suggesting that good initial parameters were obtained through the procedure. Hence, it was concluded that the proposed method accelerates parameter extraction without affecting the accuracy.
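For reference, the AdaGrad update of~(\ref{eq:adagrad1}) and~(\ref{eq:adagrad2}) can be sketched on a one-dimensional toy cost; the cost function, learning rate, and iteration count below are illustrative only and unrelated to the MOSFET models:

```python
import math

# AdaGrad update sketch on a 1-D toy problem: minimize E(p) = (p - 5)^2.
# Illustrative values; not the MOSFET extraction settings.
p, h, eta = 0.0, 0.0, 1.0

for _ in range(500):
    grad = 2.0 * (p - 5.0)            # dE/dp
    h += grad ** 2                    # accumulate squared gradients
    p -= eta * grad / math.sqrt(h)    # step size shrinks as h grows

assert abs(p - 5.0) < 0.1
```

The accumulated $h$ plays the role of $\bm{h}$ in~(\ref{eq:adagrad1}): early steps are large, and the effective step size automatically decays as the squared gradients accumulate.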
\begin{table}[!t] \centering \caption{Extracted values for the $N$-th-power-law model} \label{tab:sakurai_error} \begin{tabular}{l||r|r|r} \hline & & & Relative \\ Parameter & AD & ND & error [\%] \\ \hline {\bf VTH} & 2.600 & 2.600 & 0.116 \\ {\bf K} & $ 2.691 \times 10^{-3}$& $2.681 \times 10^{-3}$& 0.372 \\ {\bf N} & 3.284 & 3.286 & 0.061\\ {\bf LAMBDA} & $2.606 \times 10^{-3}$ & $2.259\times10^{-3}$ & 0.576 \\ {\bf THETA} & $3.440 \times 10^{-4}$ & $3.464 \times 10^{-4}$ &0.610 \\ {\bf M} & 1.743 & 1.744 & 0.057 \\ {\bf J} &0.119 & 0.119 & 0.000 \\ {\bf DELTA} & 1.269 & 1.267 & 0.158 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \centering \caption{Extracted values for the surface-potential-based model} \label{tab:sp_error} \begin{tabular}{l||r|r|r} \hline Parameter& & & Relative \\ (Initial value) & AD & ND & error [\%] \\ \hline {\bf SCALE} & & & \\ (5166360) & 5403054 & 5398779 & 0.079 \\ {\bf TOX} & && \\ ($5.0\times 10^{-8}$) & $4.788\times10^{-08}$ & $4.790\times10^{-08}$ & 0.035 \\ {\bf NA} & &&\\ ($1.31\times 10^{17}$) & $1.313\times10^{17}$ &$1.313\times10^{17}$& 0.000 \\ {\bf LAMBDA} & &&\\ ($8.69\times 10^{-3}$) & $6.110\times10^{-3}$ & $6.086\times10^{-3}$& 0.387 \\ {\bf VFBC} & &&\\ ($-4.90$) & $-1.812\times10^{-3}$ &$-1.780\times10^{-3}$ & 1.869 \\ {\bf THETA} & &&\\ ($5.91\times10^{-3}$) & $5.912\times10^{-8}$ &$5.941\times10^{-8}$ & 0.492 \\ {\bf DELTA} & &&\\ ($0.80$) & 0.6170 & 0.6150 & 0.329 \\ {\bf RD} & &&\\ ($2.90\times10^{-3}$) & $2.7178\times10^{-3}$ &$2.737\times10^{-3}$ & 0.715 \\ \hline \end{tabular} \end{table} \subsubsection{Parameter extraction for multiple objectives}\label{sec:multi} The proposed method can handle model parameters used in different model equations. Since {\bf TOX} and {\bf NA} are contained in the current and capacitance characteristics, $C_{\mathrm{ds}}$ and $C_{\mathrm{gd}}$, in the surface-potential-based model, these parameters have to be consistent for both characteristics. 
In this experiment, the computational graphs were constructed so that the parameters are simultaneously optimized for all the model equations. The cost function is set as the normalized sum of the residuals to balance the weight of each characteristic. The initial values of the parameters are determined through the determination procedure described in Section~\ref{sec:initial}. In this experiment, $E_{\rm target} = 0.02$ was added as a termination condition, in addition to the iteration count $N_{\rm iter} = 1000$. Since {\bf VBI} can be represented using {\bf NA} and {\bf ND} as shown in (\ref{eq:vbi}), the total number of model parameters is thus 13, excluding {\bf VBI}. \begin{figure}[!t] \centering \includegraphics[width=0.55\linewidth]{RMSE_sum_2.pdf} \caption{RMSE as a function of the computation time for the simultaneous parameter extraction.} \label{fig:multi_rmse} \end{figure} \begin{figure}[t!] \centering \subfigure[I-V curve]{% \includegraphics[width=.65\columnwidth]{Ids2.pdf} } \\ \centering \subfigure[C-V curve]{% \includegraphics[width=.65\columnwidth]{sum_Cds_Cgd.pdf} } \caption{Simulated and measured curves.} \label{fig:multi_fit} \end{figure} Fig.~\ref{fig:multi_rmse} shows the RMSE as a function of the computation time for the simultaneous parameter extraction. The proposed method calculates the model parameters 3.50$\times$ faster than the ND-based parameter extraction. The measured and simulated I-V and C-V characteristics of the models are presented in Fig.~\ref{fig:multi_fit}. The graphs are plotted using the final model parameters extracted by the AD and ND methods. From the results, good agreement can be seen for both the I-V and C-V characteristics. The initial and extracted parameter values at algorithm termination are summarized in Table~\ref{tab:sp_multi_error}. The relative error between the parameters extracted using the AD and ND methods was smaller than 3\% for all the parameters.
From these results, we confirmed that the proposed method is applicable to the capacitance characteristics and to the simultaneous parameter extraction over different characteristics. \begin{table}[!t] \centering \caption{Extracted values for $I_{\rm sim}$, $C_{\mathrm{ds}}$, and $C_{\mathrm{gd}}$ by the simultaneous extraction}\label{tab:sp_multi_error} \begin{tabular}{l||r|r|r} \hline Model parameter & & & Relative \\ (Initial value) & AD & ND & error [\%] \\ \hline {\bf SCALE} &&&\\ (5166360) & 5644684 & 5574780 & 1.238\\ {\bf TOX} &&& \\ ($5.00\times10^{-08}$) & $4.933\times10^{-08}$ & $4.947\times10^{-08}$ & 0.286 \\ {\bf NA} &&& \\ ($1.31\times10^{17}$) & $1.313\times10^{17}$ & $1.313\times10^{17}$ & 0.000 \\ {\bf LAMBDA} &&& \\ ($8.69\times10^{-3}$) & $6.119\times10^{-3}$ & $6.083\times10^{-3}$ & 0.589 \\ {\bf VFBC} &&&\\ ($-4.90$) & $-1.943$ & $-1.985$ & 2.168 \\ {\bf THETA} &&&\\ ($5.910\times10^{-3}$) & $5.927\times10^{-3}$ & $5.910\times10^{-3}$ & 0.287 \\ {\bf DELTA} &&&\\ (0.80) & 0.6073 & 0.6130 & 0.938 \\ {\bf RD} &&&\\ ($2.90\times10^{-3}$) & $2.021\times10^{-3}$ & $2.057\times10^{-3}$ & 1.744 \\ \hline {\bf ADS} &&&\\ (0.00776) & 0.0250 & 0.0250 & 0.000 \\ {\bf ND} &&& \\ ($5.27\times10^{15}$) & $5.266\times10^{15}$ & $5.266\times10^{15}$ & 0.000 \\ {\bf COXD} &&& \\ ($4.36\times10^{-10}$) & $4.360\times10^{-10}$ & $4.360\times10^{-10}$ & 0.000 \\ {\bf VFBD} &&& \\ ($1.00$) & 0.1055 & 0.1055 & 0.000 \\ {\bf AGD} &&& \\ ($6.31\times10^{-5}$) & $5.549\times10^{-3}$ & $5.549\times10^{-3}$ & 0.000 \\\hline \end{tabular} \end{table} \subsubsection{Parameter extraction using LM method} In addition to the gradient-descent-based method, the proposed method can be applied to various optimization algorithms that rely on derivatives. Here, we show the result of parameter extraction obtained by combining the proposed method with the LM method, which is adopted in a widely used industrial device-modeling flow~\cite{iccap}.
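As a minimal illustration of how analytic gradients (such as those AD supplies) plug into an LM loop, the following sketch fits a toy two-parameter model $i = k\,v^{n}$ with a hand-written Jacobian. The model, data, and damping schedule are illustrative assumptions, not the paper's implementation:

```python
import math

def solve2x2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det)

def lm_fit(xs, ys, k, n, lam=1e-3, iters=100):
    """Levenberg-Marquardt fit of the toy model i = k * v**n, using an
    analytic Jacobian in the role that AD plays for a real MOSFET model."""
    def sse(k, n):
        return sum((y - k * x ** n) ** 2 for x, y in zip(xs, ys))
    cost = sse(k, n)
    for _ in range(iters):
        r = [y - k * x ** n for x, y in zip(xs, ys)]                 # residuals
        J = [(-(x ** n), -k * x ** n * math.log(x)) for x in xs]      # dr/d(k, n)
        JtJ = [[sum(Ji[a] * Ji[b] for Ji in J) for b in (0, 1)] for a in (0, 1)]
        Jtr = [sum(Ji[a] * ri for Ji, ri in zip(J, r)) for a in (0, 1)]
        while True:  # adjust the damping until the step reduces the error
            A = [[JtJ[0][0] + lam, JtJ[0][1]],
                 [JtJ[1][0], JtJ[1][1] + lam]]
            dk, dn = solve2x2(A, [-Jtr[0], -Jtr[1]])
            new_cost = sse(k + dk, n + dn)
            if new_cost < cost:
                k, n, cost, lam = k + dk, n + dn, new_cost, max(lam / 10, 1e-12)
                break
            lam *= 10
            if lam > 1e12:   # no further progress possible
                return k, n
    return k, n
```

Replacing the analytic Jacobian here with an ND approximation costs one extra model evaluation per parameter per iteration, which is exactly the overhead the AD-based calculation removes.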
Fig.~\ref{fig:lm} shows the RMSE reduction in the LM-based parameter extraction. In this experiment, the model parameters of the surface-potential-based model are extracted using the same initial parameters as those in Sec.~\ref{sec:c-1}. Owing to the quadratic convergence property of the LM algorithm~\cite{Computing2005-Fan}, the number of iterations is reduced to 30 for both the AD- and ND-based calculations, while the gradient-descent-based method requires more than six hundred iterations, as shown in Fig.~\ref{fig:lm_iter}. As expected, an acceleration close to the ideal 3.89$\times$ was obtained, as shown in Fig.~\ref{fig:lm_time}. In addition, the parameters extracted by the AD- and ND-based calculations, which are shown in Table~\ref{tab:lm_param}, agree perfectly and are quite close to those in Table~\ref{tab:sp_error}. These results show that the proposed method can be successfully applied to other derivative-based optimization algorithms and that acceleration scaling with the number of parameters can be achieved. \begin{figure}[t!]
\centering \subfigure[Number of iterations\label{fig:lm_iter}]{ \includegraphics[width=0.55\linewidth]{lm_iter.pdf}}\\ \subfigure[CPU time\label{fig:lm_time}]{ \includegraphics[width=0.55\linewidth]{lm_time.pdf}} \caption{RMSE reduction in the LM-based parameter extraction.} \label{fig:lm} \end{figure} \begin{table}[!t] \centering \caption{Extracted values for the surface-potential-based model in the LM-based optimization} \label{tab:lm_param} \begin{tabular}{l||r|r|r} \hline Parameter& & & Relative \\ (Initial value) & AD & ND & error [\%] \\ \hline {\bf SCALE} & & & \\ (5166360) & 516636 & 516636 & 0.000 \\ {\bf TOX} & && \\ ($5.0\times 10^{-8}$) & $4.626\times10^{-08}$ & $4.626\times10^{-08}$ & 0.000 \\ {\bf NA} & &&\\ ($1.31\times 10^{17}$) & $1.273\times10^{17}$ &$1.273\times10^{17}$& 0.000 \\ {\bf LAMBDA} & &&\\ ($8.69\times 10^{-3}$) & $5.312\times10^{-3}$ & $5.313\times10^{-3}$& 0.000 \\ {\bf VFBC} & &&\\ ($-4.90$) & $-1.597\times10^{-3}$ &$-1.597\times10^{-3}$ & 0.000 \\ {\bf THETA} & &&\\ ($5.91\times10^{-3}$) & $5.910\times10^{-8}$ &$5.910\times10^{-8}$ & 0.000 \\ {\bf DELTA} & &&\\ ($0.80$) & 0.614 & 0.614 & 0.000 \\ {\bf RD} & &&\\ ($2.90\times10^{-3}$) & $2.902\times10^{-3}$ &$2.902\times10^{-3}$ & 0.000 \\ \hline \end{tabular} \end{table} \subsection{Manufacturing variability analysis} Recently, the application of artificial intelligence has been gaining popularity in the area of power electronics~\cite{TPLE2021_Shuai}. As an example, a neural network has been applied to model the device current characteristics in SPICE simulators~\cite{TPLE2019_Chiozzi}, treating the parameter fitting as a black-box regression. However, as opposed to physics-based modeling, neural networks carry out a purely mathematical fitting of the measured device characteristics to a nonlinear equation, ignoring the physical behavior of the device of interest.
In order to fully understand various device characteristics, such as aging-induced threshold voltage shift~\cite{TED2014_Rescher} and manufacturing process variation~\cite{APEC2014_Wang,JESTPM2019_Borghese}, which have been critical issues in the design of power converters using wide-bandgap semiconductors, the use of a set of physically meaningful model parameters is of enormous importance. Although the proposed method incorporates a technique similar to that used in neural networks for model-parameter extraction, our method preserves the physical meanings of the model parameters. \begin{figure}[!t] \centering \includegraphics[width=0.55\linewidth]{idvd_var.pdf} \caption{Measured I-V characteristics of 35 SiC MOSFETs.} \label{fig:var} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.88\linewidth]{hist.pdf} \caption{Histograms of the extracted model parameters of 35 SiC MOSFETs.} \label{fig:hist} \end{figure} In order to demonstrate the aforementioned advantages, we apply the proposed method to the modeling of the I-V characteristics of 35 SiC MOSFETs, which are presented in Fig.~\ref{fig:var}. For instance, in the parallel operation of SiC MOSFETs in a power module, characteristics mismatch causes current imbalances, resulting in significant degradation of reliability~\cite{APEC2014_Wang}. By applying the proposed method with the surface-potential-based model, the distribution of its model parameters can be obtained as shown in Fig.~\ref{fig:hist}. We would like to note that the extracted model parameters directly represent the device-to-device variation, while the neural-network-based method~\cite{TPLE2019_Chiozzi} only gives us 35 black-box models consisting of slightly different parameters. According to~\cite{JESTPM2019_Borghese}, the current imbalance can be mitigated by pairing MOSFETs with similar threshold voltage (${\bf VFBC}$) and/or current gain factor (${\bf SCALE}$) when SiC MOSFETs are connected in parallel.
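A hypothetical sketch of such pairing, assuming only a list of device IDs with their extracted {\bf VFBC} values (the function and data are illustrative, not taken from the experiment):

```python
def pair_by_threshold(devices):
    """Pair devices with the closest threshold-related parameter.

    `devices` is a list of (device_id, vfbc) tuples built from the
    extracted VFBC values.  Sorting and pairing neighbors keeps the
    in-pair mismatch small, which mitigates current imbalance when the
    paired devices operate in parallel.
    """
    ranked = sorted(devices, key=lambda d: d[1])
    return [(ranked[i][0], ranked[i + 1][0])
            for i in range(0, len(ranked) - 1, 2)]
```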
Choosing such pairs becomes easy with the parameters obtained by the proposed method. Moreover, the distribution of the physically meaningful parameters, such as {\bf TOX} and {\bf NA}, can also be used by manufacturers to improve manufacturing yields. \section{Conclusion}\label{sec:conclusion} Herein, we proposed a novel parameter-extraction method that is applicable to various power MOSFET models. The proposed method employs AD, which is commonly used in the backpropagation of artificial neural networks. The AD technique involves the calculation of the partial derivatives according to the chain rule by simply traversing the calculation graph, thereby reducing the number of times the model equation must be evaluated compared to the ND calculation. Due to this improvement, the AD approach requires less computation time than the ND approach for parameter extraction. Experimental results using a commercial SiC power MOSFET demonstrated that the proposed method could be used to successfully derive the model parameters for the current equation of the MOSFET about 4.3 times faster than the conventional gradient-descent method using two-point gradient approximations. In a future study, we intend to include an evaluation with open-source AD packages, as introduced in~\cite{autodiff}, to analyze more complex descriptions in arbitrary power MOSFET models via the computational graph. In addition, the evaluation of the proposed method is still limited to the static characteristics. We intend to further evaluate the dynamic characteristics based on the extracted parameters. \section*{Acknowledgment} This work was partially supported by JST-OPERA Program Grant Number JPMJOP1841, Japan. Part of this work was also supported by JSPS KAKENHI Grants 20H04156 and 20K21793. \section*{Appendix} The proposed method can be applied even when the model contains loops, such as ``while'' or ``for'' statements. The computational graph of the loop is constructed with a feedback loop.
In the forward mode, the computational graph is repeatedly traversed until the convergence condition is satisfied. Here, the variables of the graph are extended to use a {\it stack} data structure so that the intermediate values of each calculation are pushed onto it. The backward mode is conducted by popping the variables while traversing the graph. Note that a while statement essentially contains a conditional branch. Likewise, some models also contain ``if'' statements, so that different calculations are carried out depending on the bias condition or the parameters. In the proposed method, the conditional branch is realized by adding a flag, again stored as a stack, to the node to record which path was selected. The backward mode is carried out by traversing the graph according to the flag values. \begin{figure}[!t] \centering \includegraphics[width=0.65\linewidth]{loop.pdf} \caption{Example of a computational graph for a while loop.} \label{fig:loop} \end{figure} Fig.~\ref{fig:loop} shows a computational graph of a while statement in the following code snippet as an example. \begin{lstlisting}[basicstyle=\ttfamily] val = 10; while (val > 0.5){ val = val / 2; } \end{lstlisting} In this code, variable {\tt val} is initialized to 10 and then changes to 5, 2.5, 1.25, 0.625, and 0.3125. The loop terminates when its value becomes smaller than 0.5. The computational graph of the while statement is constructed with a feedback loop containing a conditional branch and stacks, as shown in Fig.~\ref{fig:loop}. In the forward mode, the model equation is evaluated while pushing the value of variable {\tt val} onto the stack sequentially. In addition, the flags that indicate which path should be evaluated are stored in the stack. Then, in the backward mode, the graph is traversed by popping these values.
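The push/pop mechanism described above can be sketched for this snippet as follows (a minimal Python illustration; only the local derivative of {\tt val / 2}, i.e., $1/2$, needs to be applied per recorded iteration for this particular loop):

```python
def forward(x):
    """Forward mode: evaluate the loop, pushing each intermediate value
    of `val` onto a stack (the tape) before it is overwritten."""
    stack = []
    val = x
    while val > 0.5:
        stack.append(val)
        val = val / 2
    return val, stack

def backward(stack):
    """Backward mode: pop the stack, applying the local derivative of
    `val / 2` (i.e., 1/2) once per recorded loop iteration."""
    grad = 1.0
    while stack:
        stack.pop()
        grad *= 0.5
    return grad

y, tape = forward(10.0)   # val: 10 -> 5 -> 2.5 -> 1.25 -> 0.625 -> 0.3125
dy_dx = backward(tape)    # five halvings: dy/dx = (1/2)**5
```

The stack depth records how many times the loop body ran in the forward mode, so the backward mode applies the chain rule exactly that many times without re-evaluating the loop condition.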
\ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Introduction} {\textcolor{black}{ Wide-bandgap devices, such as silicon carbide (SiC) and gallium nitride (GaN) metal oxide semiconductor field-effect transistors (MOSFETs), are promising alternatives {\textcolor{black}{that replace conventional}} silicon (Si) power devices~\cite{Baliga_book,Kimoto_book}. Owing to {\textcolor{black}{their}} excellent material properties, SiC and GaN MOSFETs can operate at {\textcolor{black}{higher}} switching {\textcolor{black}{frequencies}} with lower switching loss {\textcolor{black}{over a wide range of ambient temperatures.}} In particular, high-frequency switching operation greatly contributes to reducing the volume and the weight of power converters. {\textcolor{black}{These are desirable properties for applications in}} electric vehicles (EVs), such as the on-board charger (OBC) and the in-wheel motor (IWM)~\cite{TIE2917_Liu,ECCE2020_Akatsu}. As the operating frequency of power converters increases, design optimization using circuit simulators based on compact models of power MOSFETs plays an increasingly important role.}} {\textcolor{black}{The formulation of accurate compact models {\textcolor{black}{for power MOSFETs}} and the careful extraction of their parameter sets are crucial for obtaining reliable results from circuit simulations. Compact power MOSFET models, which have long been the topic of intensive research, are composed of multiple nonlinear functions {\textcolor{black}{that}} accurately capture the device physics of the MOSFET. {\textcolor{black}{Recently,}} surface-potential-based and charge-based compact models~\cite{TED2013_Mattausch,TPEL2018_Shintani,TED2019_Agarwal,TED2020_Albrecht} have been successfully applied to simulate power converters.}} One of the most widely used parameter-extraction methods is iterative parameter refinement, which is based on gradient calculations~\cite{JJAP2004_Li,ICMTS2009_Zhou}.
This approach involves two processes, gradient calculation and parameter updating, which are repeated until the characteristics of the model agree with the measured values or the iteration limit is reached. In this context, the gradient is the set of partial derivatives of the model equation with respect to all model parameters. Then, in the parameter-update phase, the gradient-descent algorithm is applied to refine the parameters based on the gradient, which indicates the direction in which the parameters should be adjusted to improve the fitting accuracy. Although iterative parameter refinement is a very general method and is applicable to arbitrary model equations, it tends to take a long time for all the parameters to converge. The gradient calculations require numerical differentiation (ND) with regard to each parameter, which demands full evaluation of the model equations for all bias voltages and all model parameters. Therefore, the model evaluation constitutes most of the time required for parameter optimization. In a preliminary experiment~\cite{ICMTS2018_Shintani}, even when a very simple device model equation~\cite{TED1991_sakurai} was used, 99.6\% of the parameter-optimization time was spent on the gradient calculation. Several techniques are known for deriving derivatives: examples include symbolic differentiation (SD), numerical differentiation (ND), and automatic differentiation (AD). SD derives derivatives based on the symbolic rules of basic calculus. While it provides us with exact computation, SD tends to require a large amount of memory resources when the model equation becomes complex. Since ND needs to evaluate the model equation twice (at the nominal and at the perturbed parameter values), the calculation time tends to be long depending on the model scale and the number of model parameters. AD is a technique for deriving partial derivatives for the equations defined in a program.
AD can be carried out over combinations of the four basic arithmetic operations and elementary functions, such as the exponential function, by repeatedly applying the chain rule. By using AD, partial derivatives can be obtained automatically at a reduced calculation cost. Herein, we propose a method to accelerate the model-parameter extraction for power MOSFET models. Expanding upon the idea proposed in~\cite{ICMTS2018_Shintani}, we implement automatic differentiation (AD)~\cite{JCAM2000_Bartholomew-Biggs}, which has recently been extensively applied in machine learning to adjust the weights of convolutional neural networks during the process of {\it backpropagation}~\cite{JMLR2018_Baydin,NATURE2015_LeCun}. The AD technique enables the contribution of each model parameter to the discrepancy between the measurement and the model to be calculated individually. This is achieved by traversing a so-called computational graph merely twice, i.e., once in the forward direction and once in the backward direction, regardless of the number of model parameters. Therefore, the proposed method eliminates the need for repeated model evaluations during the gradient calculation. AD can be directly substituted for the conventional ND, so there is no need to change the existing procedures for updating the parameters in the gradient-based parameter-extraction method. Herein, we experimentally demonstrate that this approach reduces the time required for parameter extraction by a factor of 4.34 compared with the ND method for a model consisting of eight parameters; this is close to the theoretical upper limit of a 4.5$\times$ time reduction for this number of parameters. The remainder of this paper is organized as follows: Section~\ref{sec:conv} provides the problem formulation of the ND. In Section~\ref{sec:propose}, the basic concept of AD and the AD-based parameter-extraction method are described.
Then, experiments using two SPICE models~\cite{TED1991_sakurai,TPEL2018_Shintani} to quantitatively evaluate the effectiveness of the proposed method are reported in Section~\ref{sec:exp}. Finally, the paper is concluded in Section~\ref{sec:conclusion}. \section{Parameter extraction based on numerical differentiation} \label{sec:conv} \subsection{Gradient-Based Parameter Extraction} We first review gradient-based parameter extraction as a typical optimization algorithm; note that the proposed method is also applicable to other derivative-based algorithms, such as the Levenberg-Marquardt (LM) method~\cite{SIAM1963_Marquardt}. Algorithm~\ref{alg:gd} outlines the parameter extraction for a drain current model, $f(\cdot)$, as an example. Here, $f(\cdot)$ is a function of the gate-source voltage, $V_{\rm gs}$, and the drain-source voltage, $V_{\rm ds}$, and is based on the current model equation with constant model parameters, $\bm p$. The algorithm has the following seven inputs: the initial value vector of the model parameters ($\bm p$), the vector representing the small changes in each parameter ($\bm{\delta}$), the measured current data ($\bm{I}^{\rm meas}$) and the corresponding bias voltages ($\bm{V}$), the target error ($E_{\rm target}$), the maximum number of iterations ($N_{\rm max}$), and the vector of update rates ($\bm{\eta}$). Vectors $\bm{p}$, $\bm{\delta}$, and $\bm{\eta}$ all have size $n$, which is equal to the number of model parameters. $\bm{V}$ is a vector whose components represent the voltage pairs $V_{\rm gs}$ and ${V_{\rm ds}}$ at which the drain current, $\bm{I}^{\rm meas}$, is measured. Vectors $\bm{V}$ and $\bm{I}^{\rm meas}$ both have $m = m_{\rm vd} \times m_{\rm vg}$ elements, where $m$ is the total number of measured data points and $m_{\rm vd}$ and $m_{\rm vg}$ are the numbers of voltages measured in the ${V_{\rm ds}}$ and ${V_{\rm gs}}$ sweeps, respectively.
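The overall refinement loop can be sketched as follows (a minimal Python skeleton under the same inputs; {\tt gradient\_calc} is a placeholder for either the ND- or AD-based routine, so swapping the differentiation method leaves the loop unchanged):

```python
def extract(p, eta, gradient_calc, e_target, n_max):
    """Skeleton of the gradient-descent parameter refinement.

    `gradient_calc(p)` returns (gradient, E).  The loop terminates when
    either the error target or the iteration limit is reached.
    """
    n_iter = 0
    while True:
        grad, err = gradient_calc(p)
        # p_i = p_i - eta_i * dE/dp_i for each parameter
        p = [pi - ei * gi for pi, ei, gi in zip(p, eta, grad)]
        n_iter += 1
        if n_iter >= n_max or err <= e_target:
            return p, err
```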
The gradient-based parameter extraction proceeds by changing each of the parameters, $\bm{p}$, in the direction determined by the numerical derivatives to reduce the cost function, $E$. Here, $E$ is the root-mean-square error (RMSE) between the simulation and the measurement and is generally defined as follows: \begin{equation} E = \sqrt{\frac{1}{m}\sum_{j=0}^{m-1} (I^{\rm meas}_j-I^{\rm sim}_j)^2}. \label{eq:error} \end{equation} Before the extraction is initiated, the currents may be normalized such that each bias point has an equal contribution to the final error~\cite{TED2008_Takeuchi}. Otherwise, the fitting results would be dominated by the bias points having larger current values, while those having smaller current values would barely contribute to the fitting; this leads to insufficient extraction accuracy. In line 3 of Algorithm~\ref{alg:gd}, the function {\em gradient\_calc} is used to obtain the gradient of the cost function, $\nabla E$, with respect to each parameter as follows: \begin{equation} \nabla E = \left(\frac{\partial E}{\partial p_0}, \ldots ,\frac{\partial E}{\partial p_{n-1}}\right). \end{equation} Then, in lines 4--6, the gradient is used to update the parameters according to $\bm{\eta}$ in a direction that reduces the cost function. In line 5, each parameter is changed according to $\partial E/\partial p_i$ at an update rate of $\eta_i$, which is assumed to be constant throughout the parameter extraction for the sake of simplicity. Alternatively, a variable step size can be chosen, as proposed in AdaGrad~\cite{JMLR2011_Duchi} or AdaDelta~\cite{arxiv2012_adadelta}. Next, the cost function and its gradient are recalculated based on the updated model parameters. The gradient calculation and the parameter update are alternately repeated until either of the exit conditions in line 2 is satisfied, i.e., the cost function becomes smaller than the target value or the iteration limit is reached. \begin{figure}[t!]
\begin{algorithm}[H] \caption{Gradient-descent-based parameter optimization} \label{alg:gd} \begin{algorithmic}[1] \Require $\bm{p}=(p_0, \ldots ,p_{n-1})$, $\bm{\delta}=(\delta_0, \ldots ,\delta_{n-1})$, ${\bm V}=((V_{\rm{gs}_0}, V_{\rm {ds}_0}), \ldots ,(V_{\rm{gs}_{m-1}},V_{\rm{ds}_{m-1}}))$, $\bm{I}^{\rm meas}=(I^{\rm meas}_0, \ldots ,I^{\rm meas}_{m-1})$, $E_{\rm target}$, $N_{\rm max}$, $\bm{\eta}=(\eta_0, \ldots, \eta_{n-1})$ \State initialize $N_{\rm iter}=0$ \Do \State $\nabla E, E =$ gradient\_calc($\bm{p}$, $\bm{\delta}$, ${\bm V}$, $\bm{I}^{\rm meas}$) \ForEach {$p_i \in \bm{p}$} \State $p_i = p_i - \eta_i \frac{\partial E}{\partial p_i}$ \EndFor \State $N_{\rm iter}$++ \doWhile{($N_{\rm iter} < N_{\rm max}$ and $E_{\rm target} < E$) } \State Return optimal parameter $\bm{p}$ \end{algorithmic} \end{algorithm} \end{figure} \subsection{Numerical Differentiation}\label{sec:nd} The gradient-based parameter extraction requires the gradient to be calculated as shown in line 3 of Algorithm~\ref{alg:gd}. A simple way to approximate the derivatives is to adopt ND; in this approach, one of the model parameters, $p_i$, is slightly changed by a small amount, $\delta_i$, while the other model parameters and inputs are fixed to evaluate the change in $E$. This two-point gradient approximation is versatile, as it can be applied even when the model equation is not expressed using closed-form equations. The ND-based gradient calculation is conducted by the function {\em gradient\_calc} in Algorithm~\ref{alg:gd}, as summarized in Algorithm~\ref{alg:nd}. The ND approximates the derivative based on the slope between two points; thus, the model equation, $f(\cdot)$, must be evaluated once with the nominal parameters, yielding $I_{\rm sim1}$, and once more for each perturbed parameter, yielding $I_{\rm sim2}$. $E$ and $E_{\rm delta}$ are the RMSEs computed with $\bm{p}$ and $\bm{p}'$, respectively.
Based on these errors, the partial derivative of $E$ with respect to each parameter is calculated in the final loop of Algorithm~\ref{alg:nd}. \begin{figure}[t!] \begin{algorithm}[H] \caption{ND-based gradient calculation} \label{alg:nd} \begin{algorithmic}[1] \Function{ND}{$\bm{p}$, $\bm{\delta}$, ${\bm V}$, $\bm{I}^{\rm meas}$} \State $e=0$, $e_{{\rm delta},i}=0$ for all $i$ \ForEach {$(V_{\rm{gs}_j},V_{\rm{ds}_j}) \in \bm{V}$} \State $I_{\rm sim1} = f(\bm{p},(V_{\rm{gs}_j},V_{\rm{ds}_j}))$ \State $e=e+(I^{\rm meas}_j - I_{\rm sim1})^2$ \ForEach {$p_i \in \bm{p}$} \State $\bm{p'} = \bm{p}$ \State Substitute $p'_i = p_i + \delta_i$ for $i$-th element of $\bm{p'}$ \State $I_{\rm sim2} = f(\bm{p'},(V_{\rm{gs}_j},V_{\rm{ds}_j}))$ \State $e_{{\rm delta},i}=e_{{\rm delta},i}+(I^{\rm meas}_j - I_{\rm sim2})^2$ \EndFor \EndFor \State $E= \sqrt{\frac{e}{m}}$ \ForEach {$p_i \in \bm{p}$} \State $E_{{\rm delta},i}= \sqrt{\frac{e_{{\rm delta},i}}{m}}$ \State $\frac{\partial E}{\partial p_i} = \frac{E_{{\rm delta},i}-E}{\delta_i}$ \EndFor \State Return $\nabla E, E$ \EndFunction \end{algorithmic} \end{algorithm} \end{figure} {\textcolor{black}{ ND in the context of parameter extraction is computationally intensive, as it involves the calculation of the cost function and the gradient with respect to the model parameters, and hence the evaluation of the model equation for all combinations of bias voltages and parameters. In the ND-based method, $f(\cdot)$ is evaluated $(1+n)mN_{\rm iter}$ times, where $N_{\rm iter}$ is the iteration count. The calculation time increases linearly as $n$ increases. Hence, the time required to evaluate the partial derivatives will increase for complex models {\textcolor{black}{having a larger number of}} parameters. Moreover, in situations where any of the model parameters are involved in different model equations, the computation becomes more complex, as the parameter extraction has to take all the model equations into account.
For example, the oxide thickness is involved in both the current and capacitance equations in typical MOSFET models. The two model equations should be evaluated simultaneously during the extraction such that consistent {\textcolor{black}{parameter values}} are obtained for these equations. In the optimization, {\em gradient\_calc} is performed with respect to each model equation. Thus, the total calculation complexity is written as the sum of $(1+n)mN_{\rm iter}$ over the model equations. The proposed method adopts AD to eliminate the iteration over $n$.}} \section{Parameter extraction based on automatic differentiation} \label{sec:propose} The proposed parameter-extraction method essentially follows the same procedure as the gradient-based extraction shown in Algorithm~\ref{alg:gd}. By replacing the most time-consuming step, the ND-based differentiation, with AD, the time required for parameter extraction is expected to be significantly reduced. \subsection{Automatic Differentiation} The basic concept of AD\footnote[1]{\textcolor{black}{According to the direction of the traverse of the computational graph, AD is classified into two distinct types: forward type and reverse type~\cite{JMLR2018_Baydin}. We hereafter refer to the reverse-type AD simply as AD.}} is the decomposition of partial derivatives using the chain rule. The derivative, ${\rm d}y/{\rm d}x$, of a composite function, $y = p(q(x)) = p(w)$, is written using the chain rule as follows: \begin{equation} \frac{{\rm d}y}{{\rm d}x}=\frac{{\rm d}y}{{\rm d}w}\frac{{\rm d}w}{{\rm d}x}. \label{eq:example} \end{equation} The first and second factors of~(\ref{eq:example}) are individually calculated because ${\rm d}y/{\rm d}w={\rm d}p(w)/{\rm d}w$ and ${\rm d}w/{\rm d}x={\rm d}q(x)/{\rm d}x$. Then, ${\rm d}y/{\rm d}x$ is derived as the product of these two factors. In AD, the given expression is generally represented using a directed acyclic graph called a {\em computational graph}~\cite{NIPS2015_Schulman}. AD computes the gradient with respect to each parameter as its contribution to the output. This is realized by two calculation modes: forward and backward. In the forward mode, the given expression is first decomposed into a set of primitive expressions (i.e., the simplest functions), such as addition and multiplication; in this mode, the model parameters and input values are propagated to obtain the value of the output.
Then, in the backward mode, the output variable to be differentiated is given first, and the partial derivative of each partial expression (e.g., ${\rm d}y/{\rm d}w$ and ${\rm d}w/{\rm d}x$ in the example in (\ref{eq:example})) is recursively calculated. In the gradient calculation involved in AD, the partial derivatives with respect to each of the input parameters are obtained simultaneously through a traversal of the computational graph in the forward and backward directions; hence, there is no need to repeat the model evaluation for each parameter. \subsection{Parameter Extraction} The AD-based gradient calculation, which is executed by the function {\em gradient\_calc} in line 3 of Algorithm~\ref{alg:gd}, is outlined in Algorithm~\ref{alg:ad}. Notably, in contrast to Algorithm~\ref{alg:nd}, there is no nested loop iterating over all model parameters. Instead, the forward propagation and backward propagation are performed in lines 4 and 5, respectively. $\nabla E$ is derived by summing the gradient for each bias condition, $\nabla E_{\rm tmp}$. The input arguments are also simpler than those of ND. The input vector $\bm{\delta}$, which represents the small parameter deviations used for numerically calculating each gradient, is unnecessary in this method and is thus omitted. \begin{figure}[t!]
\begin{algorithm}[H] \caption{AD-based gradient calculation} \label{alg:ad} \begin{algorithmic}[1] \Function{AD}{$\bm{p}$, ${\bm V}$, $\bm{I}^{\rm meas}$} \State Initialize $e$ and $\nabla E$ to zero \ForEach {$(V_{\rm{gs}_j},V_{\rm{ds}_j}) \in \bm{V}$} \State Calculate $I_{\rm sim} = f(\bm{p},(V_{\rm{gs}_j},V_{\rm{ds}_j}))$ and $e=e+(I^{\rm meas}_j - I_{\rm sim})^2$ through forward mode \State Calculate $\nabla E_{\rm tmp}$ and $\nabla E = \nabla E + \nabla E_{\rm tmp}$ through backward mode \EndFor \State $E= \sqrt{\frac{e}{m}}$ \State Return $\nabla E, E$ \EndFunction \end{algorithmic} \end{algorithm} \end{figure} The computational complexity of the forward and backward propagation is roughly equal to that of a single model evaluation~\cite{OMS1992_Griewank}; therefore, the calculation complexity becomes $2mN_{\rm iter}$ and is independent of the number of parameters $n$, unlike that of ND. Specifically, comparing the loops in Algorithms~\ref{alg:nd} and~\ref{alg:ad}, the gradient calculation is accelerated by a factor of $(n+1)/2$ in AD as compared to ND. Therefore, as the number of model parameters increases, the acceleration efficiency improves linearly. In situations where two model equations $f_1(\cdot)$ and $f_2(\cdot)$ are considered, the calculation complexity becomes merely the sum of those of the individual equations, $2m_1N_{\rm iter} + 2m_2N_{\rm iter}$. In the gradient calculation, the partial derivatives in $\nabla E$ with respect to parameters common to the two models are added up. The proposed parameter-extraction method is described below using a simple example model: the drain current equation.
The drain current equation is expressed as $f(\cdot)$ in Algorithm~\ref{alg:ad} and is defined as follows~\cite{TPEL2018_Shintani}: \begin{align} I_{\rm sim} = \cfrac{1}{1 + \prmt{THETA}\cdot V_{\rm gs}} \, (1 + \prmt{LAMBDA}\cdot V_{\rm ds}) \nonumber \\ \cdot{\prmt{SCALE}}\cdot I_{\rm DD}, \label{eq:equation} \end{align} where $I_{\rm sim}$ is the simulated drain current and $I_{\rm DD}$ is an intermediate value expressed as a function of $V_{\rm gs}$, $V_{\rm ds}$, and the surface potentials at the metal-oxide-semiconductor (MOS) interface. The channel current equation includes the channel length modulation and mobility degradation. $\bf{SCALE}$, $\bf{LAMBDA}$, and ${\bf THETA}$ are model parameters that represent the scaling factor, channel length modulation, and channel mobility degradation, respectively. \begin{figure}[!t] \centering \includegraphics[width=0.52\linewidth]{graph.pdf} \caption{Computational graph of~(\ref{eq:equation}) including five multiplication operations, two addition operations, and one division operation.} \label{fig:graph} \end{figure} Fig.~\ref{fig:graph} shows a computational graph representing~(\ref{eq:equation}). The variables located at the leaves of the graph (e.g., {\bf THETA} and $V_{\rm ds}$) represent the inputs, and the one at the bottom (i.e., $I_{\rm sim}$) represents the output. The vertices labeled $v_1, \ldots, v_7$ stand for intermediate variables. The nodes represent basic mathematical functions, with each node defining its input variables, output variable, internal connections, and functional behavior. The order of the input edges is defined consistently to handle operations that are not commutative, such as subtraction and division. \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{./forward_graph.pdf} \caption{Forward propagation mode.
In this figure, the graph is simplified by showing only one subgraph for the current calculation of $v_9$.} \label{fig:forward_graph} \end{figure} In the AD-based gradient calculation, the forward propagation is conducted first. The current model equation is evaluated by traversing the computational graph to calculate the internal variables under a particular bias voltage condition, $V_{\rm ds}$ and $V_{\rm gs}$, as shown in Fig.~\ref{fig:forward_graph}. During the forward propagation, the values of the internal vertices, $v_1, \ldots, v_{12}$ and $I_{\rm sim}$, are stored for use in the backward propagation that will be conducted later. Note that, in Fig.~\ref{fig:forward_graph}, the calculation of the RMSE is in the last part of the graph. In the summation operation used to calculate the RMSE, the values of $I_{\rm meas}-I_{\rm sim}$ for each of the bias voltages are required. To simplify the illustration, only the graph for $v_9$ is shown and the subgraphs for the other bias points are omitted. \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{./backward_graph2.pdf} \caption{Backward propagation mode. The directions of arrows are reversed as compared to those in Fig.~\ref{fig:forward_graph}. The dashed arrows indicate the path from $E$ to {\bf SCALE}. The propagated values on the path in the backward propagation are highlighted.} \label{fig:backward_graph} \end{figure} In the backward mode, the derivative of the output $E$ is propagated backwards through the graph to calculate the partial derivative with respect to each model parameter, as shown in Fig.~\ref{fig:backward_graph}, in which the path to calculate $\frac{\partial E}{\partial {\bf{SCALE}}}$ has been highlighted as an example. There are eight vertices ($E$, $v_{11}$, $v_{10}$, $v_9$, $v_8$, $I_{\rm sim}$, $v_7$, and ${\bf{SCALE}}$) on the path. 
According to the chain rule, $\frac{\partial E}{\partial {\bf{SCALE}}}$ is calculated as follows: \begin{eqnarray} \label{eqchain} \frac{\partial E}{\partial {\bf{SCALE}}} = \frac{\partial E}{\partial v_{11}} \cdot \frac{\partial v_{11}}{\partial v_{10}} \cdot \frac{\partial v_{10}}{\partial v_9} \cdot \frac{\partial v_9}{\partial v_8} \nonumber \\ \cdot \frac{\partial v_8}{\partial I_{\rm sim}} \cdot \frac{\partial I_{\rm sim}}{\partial v_7} \cdot \frac{\partial v_7}{\partial {\bf{SCALE}}}. \end{eqnarray} The derivative associated with each edge is easily obtained: $\frac{\partial E}{\partial v_{11}}=\frac{1}{2\sqrt{v_{11}}}$ because $E=\sqrt{v_{11}}$. Similarly, $\frac{\partial I_{\rm sim}}{\partial v_7}=I_{\rm DD}$ as $I_{\rm sim}=v_7\cdot I_{\rm DD}$, and $\frac{\partial v_7}{\partial {\bf{SCALE}}}=v_4$ as $v_7=v_4 \cdot {\bf{SCALE}}$. Substituting all the derivatives in~(\ref{eqchain}) with the partial derivatives of the edges obtained by traversing the graph, the following equation is derived: \begin{equation} \frac{\partial E}{\partial {\bf{SCALE}}}= \frac{1}{2\sqrt{v_{11}}} \cdot \frac{1}{m} \cdot 2v_8 \cdot I_{\rm DD} \cdot v_4. \label{eq:lambda} \end{equation} The values of $v_{11}$, $v_8$, and $v_4$ were already stored during the forward mode. The partial derivatives are calculated for all model parameters, and the resulting set of partial derivatives is used as the gradient in the parameter updating phase. The chain rule thus provides a way to calculate the gradients easily and efficiently, without approximation. The edges of the graphs for forward and backward propagations can be determined by simply consulting the rules summarized in Fig.~\ref{fig:rules}. Note that the partial derivative shown in~(\ref{eq:lambda}) is the value for a single bias condition.
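As a sanity check, the chain-rule product in~(\ref{eqchain}) can be verified numerically for a single bias point ($m=1$). The values below are hypothetical, chosen only to exercise the formula: $v_4$ stands for the bias-dependent prefactor of {\bf SCALE}, and the explicit factor of $-1$ in the product is $\partial v_8/\partial I_{\rm sim}$ for the residual convention $v_8 = I_{\rm meas} - I_{\rm sim}$ used in this sketch.

```python
import math

# Hypothetical forward-pass values for a single bias point (m = 1).
i_dd, v4, scale, i_meas = 1.5, 0.8, 2.0, 4.0
m = 1

def forward(s):
    v7 = v4 * s              # v7 = v4 * SCALE
    i_sim = v7 * i_dd        # I_sim = v7 * I_DD
    v8 = i_meas - i_sim      # residual, stored during the forward mode
    v11 = (v8 ** 2) / m      # mean squared error over the (single) bias point
    return math.sqrt(v11), v8, v11

E, v8, v11 = forward(scale)

# Chain-rule product of the stored values; -1.0 is d(v8)/d(I_sim).
dE_dscale = (1.0 / (2.0 * math.sqrt(v11))) * (1.0 / m) * 2.0 * v8 * (-1.0) * i_dd * v4

# Finite-difference check (this is what ND would compute instead).
h = 1e-7
dE_num = (forward(scale + h)[0] - E) / h
assert abs(dE_dscale - dE_num) < 1e-5
```

Reverse-mode AD obtains the same product with one backward traversal, whereas ND would re-evaluate the model once per parameter.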
When there are multiple bias conditions, which is usually the case, the backward propagation explained above is carried out for all bias conditions. That is, at the sum node, the partial derivatives along all the paths contributing to $E$ are calculated for each bias condition until the model parameter nodes are reached. The partial derivative of each model parameter is the sum of the partial derivatives for the different bias voltages, which diverge at the sum node. \begin{figure}[!t] \centering \subfigure[Multiplication ($x \cdot y$)\label{fig:mul}]{ \includegraphics[width=0.3\linewidth]{./mul.pdf}} \hspace{2mm} \subfigure[Addition ($x + y$)\label{fig:add}]{ \includegraphics[width=0.3\linewidth]{./add.pdf}} \\ \subfigure[Division ($x/y$)\label{fig:div}]{ \includegraphics[width=0.32\linewidth]{./div.pdf}} \hspace{2mm} \subfigure[Summation ($\sum x_m$)\label{fig:sum}]{ \includegraphics[width=0.32\linewidth]{./sum.pdf}} \\ \subfigure[Exponential ($\exp(x)$)\label{fig:exp}]{ \includegraphics[width=0.4\linewidth]{./exp.pdf}} \\ \hspace{2mm} \subfigure[Square root ($\sqrt{x}$)\label{fig:sqrt}]{ \includegraphics[width=0.32\linewidth]{./sqrt.pdf}} \caption{Representative forward and backward propagation rules. The solid arrows are for forward propagation and the dashed ones for backward propagation. In this figure, a value of 1 is assumed as the input to the backpropagation.} \label{fig:rules} \end{figure} The computational graph needs to be constructed only once for a given MOSFET model. The construction is very quick, as described in the experimental section. Additionally, once the computational graph is generated, it can be reused as long as the same MOSFET model is used; therefore, the time needed to construct the computational graph is virtually negligible. Note that, since the computational graph is built from the model equations, access to either the model equations themselves or, at least, their source code is a prerequisite for applying the proposed parameter extraction.
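The propagation rules of Fig.~\ref{fig:rules} amount to a small reverse-mode AD engine. The following toy Python sketch (an illustration under our own naming, not the authors' implementation) implements the multiplication, addition, and square-root rules for tree-structured graphs:

```python
import math

class Node:
    """A vertex of the computational graph with its local backward rule."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents          # upstream Node objects
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0

def mul(x, y):   # forward: x*y; backward rule: (y, x)
    return Node(x.value * y.value, (x, y), (y.value, x.value))

def add(x, y):   # forward: x+y; backward rule: (1, 1)
    return Node(x.value + y.value, (x, y), (1.0, 1.0))

def sqrt(x):     # forward: sqrt(x); backward rule: 1/(2*sqrt(x))
    v = math.sqrt(x.value)
    return Node(v, (x,), (1.0 / (2.0 * v),))

def backward(out):
    """Propagate d(out)/d(node) to every ancestor.

    A plain stack suffices here because each node feeds exactly one
    consumer; a general graph requires reverse topological order."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, g in zip(node.parents, node.local_grads):
            parent.grad += node.grad * g
            stack.append(parent)

# Example: E = sqrt(a*b + c) with a=3, b=2, c=10, so E = sqrt(16) = 4.
a, b, c = Node(3.0), Node(2.0), Node(10.0)
E = sqrt(add(mul(a, b), c))
backward(E)
# dE/da = b / (2*sqrt(a*b + c)) = 2/8 = 0.25; dE/dc = 1/8 = 0.125
```

Each operation records its inputs and local derivatives during the forward pass, exactly as the stored $v_1,\ldots,v_{12}$ are reused later in the backward pass.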
{\textcolor{black}{Note that the model parameters extracted by the proposed method bear physical meaning when a physics-based model is used, as the computational graph is consistent with the model equations. Thus, the estimated model parameters can be used for device characterization, such as manufacturing variability analysis.}} \section*{Appendix} {\textcolor{black}{The proposed method can be applied even when the model contains loops, such as ``while'' or ``for'' statements. The computational graph of the loop is constructed with a feedback loop. In the forward mode, the computational graph is repeatedly traversed until the convergence condition is satisfied. Here, the variables of the graph are extended to use a {\it stack} data structure so that the intermediate values of each pass are pushed onto it. The backward mode is conducted by popping the variables while traversing the graph. Note that a while statement essentially contains a conditional branch. Some models also contain ``if'' statements, so that different calculations are carried out depending on the bias condition or the parameters. In the proposed method, the conditional branch is realized by adding to the node a flag, again stored as a stack, that records which path was selected. The backward mode is carried out by traversing the graph according to the flag values.}} \begin{figure}[!t] \centering \includegraphics[width=0.65\linewidth]{./loop.pdf} \caption{{\textcolor{black}{Example of a computational graph for a while loop.}}} \label{fig:loop} \end{figure} {\textcolor{black}{Fig.~\ref{fig:loop} shows a computational graph of the while statement in the following code snippet as an example.}} \begin{lstlisting}[basicstyle=\ttfamily] val = 10; while (val > 0.5){ val = val / 2; } \end{lstlisting} {\textcolor{black}{ In this code, variable {\tt val} is initialized to 10, then changes to 5, 2.5, 1.25, 0.625, and 0.3125.
Finally, the loop terminates when its value becomes smaller than 0.5. The computational graph of the while statement is constructed with a feedback loop containing a conditional branch and stacks, as shown in Fig.~\ref{fig:loop}. In the forward mode, the model equation is evaluated while the value of variable {\tt val} is pushed onto the stack sequentially. In addition, the flags that indicate which path should be evaluated are stored in the stack. Then, in the backward mode, the graph is traversed by popping these values. }} \section{Experiments}\label{sec:exp} To quantitatively evaluate the effectiveness of the proposed method, the model parameters for two different MOSFET models were extracted using AD- and ND-based parameter extraction. A commercially available silicon carbide (SiC) power MOSFET~\cite{sct2450ke} was used for these measurements. The experiments were conducted using a Linux PC with an Intel Xeon W5590 3.33\,GHz central processing unit (CPU) using a single thread. The extraction was implemented using the Python programming language. \begin{figure}[!t] \centering \includegraphics[width=0.38\linewidth]{./equiv.pdf} \caption{Equivalent circuit of SiC power MOSFET.} \label{fig:equiv} \end{figure} \subsection{MOSFET Models}\label{sec:model} The two MOSFET models used in this section are an $N$-th-power-law model~\cite{TED1991_sakurai} and a surface-potential-based model~\cite{TPEL2018_Shintani}. The equivalent circuit of the SiC power MOSFET is shown in Fig.~\ref{fig:equiv}. The MOSFET model characterizes the current characteristics $I_{\rm sim}$ and three terminal capacitances: gate-source capacitance $C_{\rm gs}$, drain-source capacitance $C_{\rm ds}$, and gate-drain capacitance $C_{\rm gd}$. The parameter extraction for the $N$-th-power-law model uses $I_{\rm sim}$ only, while that for the surface-potential-based model uses $I_{\rm sim}$, $C_{\mathrm{ds}}$, and $C_{\mathrm{gd}}$ in this experiment.
Note that $C_{\mathrm{gs}}$ is modeled as constant as in~\cite{TPEL2018_Shintani}. \subsubsection{$N$-th-Power-Law Model} In this model, the drain current is calculated based on the threshold voltage, $\prmt{VTH}$, of the MOSFET\@. The saturation voltage, $V_\mathrm{ds,sat}$, and saturation current, $I_\mathrm{d,sat}$, are defined as \begin{align} V_\mathrm{ds,sat}&=\prmt{J}\left(V_\mathrm{gs}-\prmt{VTH}\right)^{\prmt{M}} \,\,\,\,\,\textrm{and} \\ I_\mathrm{d,sat}&=\prmt{K}\left(V_\mathrm{gs}-\prmt{VTH}\right)^{\prmt{N}}, \end{align} where $\prmt{J}$ and $\prmt{M}$ are the fitting parameters used to calculate the current in the linear region, and $\prmt{K}$ and $\prmt{N}$ are the fitting parameters used for the saturation region. $V_\mathrm{ds,mod}$ replaces $V_\mathrm{ds}$ to represent a smooth transition between the linear and saturation regions~\cite{TPEL2018_Shintani}: \begin{align} V_\mathrm{ds,mod} = \frac{V_\mathrm{ds}}{\left[1+\left(\frac{V_\mathrm{ds}} {V_\mathrm{ds,sat}}\right)^{\prmt{DELTA}} \right]^{\frac{1}{\prmt{DELTA}}}}, \label{eq:delta} \end{align} where $\prmt{DELTA}$ controls the smoothness of the transition of $V_\mathrm{ds,mod}$ from $V_\mathrm{ds}$ to $V_\mathrm{ds,sat}$. The drain current, $I_\mathrm{sim}$, is calculated based on the channel length modulation and mobility degradation~\cite{TED2006_Miura,TED2006_Gildenblat} as follows: \begin{align} I_\mathrm{sim}=&I_\mathrm{d,sat}\left(2-\dfrac{V_\mathrm{ds,mod}}{V_\mathrm{ds,sat}}\right) \dfrac{V_\mathrm{ds,mod}}{V_\mathrm{ds,sat}}\times\nonumber\\ &(1+\prmt{LAMBDA} \cdot V_\mathrm{ds})\,\left[1+\prmt{THETA}\,(V_\mathrm{gs}-\prmt{VTH})\right]. \end{align} The model parameters of the $N$-th-power-law model are listed in Table~\ref{tab:sakurai_parameters}.
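The equations above can be transcribed directly into the $f(\cdot)$ used in Algorithm~\ref{alg:ad}. The sketch below is an illustration only, and every parameter value in it is a placeholder rather than an extracted value:

```python
def nth_power_law_current(p, v_gs, v_ds):
    """Drain current of the N-th-power-law model; p maps parameter names to values."""
    v_ov = v_gs - p["VTH"]                 # gate overdrive voltage
    if v_ov <= 0.0:
        return 0.0                         # device off; sub-threshold current neglected
    v_ds_sat = p["J"] * v_ov ** p["M"]     # saturation voltage
    i_d_sat = p["K"] * v_ov ** p["N"]      # saturation current
    # Smooth transition between the linear and saturation regions.
    v_ds_mod = v_ds / (1.0 + (v_ds / v_ds_sat) ** p["DELTA"]) ** (1.0 / p["DELTA"])
    ratio = v_ds_mod / v_ds_sat
    return (i_d_sat * (2.0 - ratio) * ratio
            * (1.0 + p["LAMBDA"] * v_ds)
            * (1.0 + p["THETA"] * v_ov))

# Placeholder parameters; deep in saturation the current approaches K*(Vgs-VTH)^N.
p = {"VTH": 2.0, "J": 0.1, "M": 1.0, "K": 1e-3, "N": 2.0,
     "LAMBDA": 0.0, "THETA": 0.0, "DELTA": 8.0}
i_sat = nth_power_law_current(p, 10.0, 50.0)   # close to 1e-3 * 8**2 = 0.064
```

Because the function consists only of elementary operations, each line maps directly onto nodes of the computational graph.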
\begin{table}[t] \centering \caption{Model parameters of the $N$-th-power-law model~\cite{TED1991_sakurai}} \label{tab:sakurai_parameters} \begin{tabular}{l|l}\hline Parameter & Description \\ \hline {\bf VTH} & Threshold voltage [V] \\ {\bf K} & Fitting parameter for the saturation region [-] \\ {\bf M} & Fitting parameter for the linear region [-]\\ {\bf J} & Fitting parameter for the linear region [-] \\ {\bf N} & Fitting parameter for the saturation region [-] \\ {\bf LAMBDA} & Channel length modulation [1/V] \\ {\bf THETA} & Mobility degradation [1/V] \\ \prmt{DELTA}& Smoothing parameter for gradual transition\\ & between the linear and saturation regions [-]\\ \hline \end{tabular} \end{table} \subsubsection{Surface-Potential-Based Model} \begin{table}[!t] \centering \caption{Model parameters of the current equation in the surface-potential-based model~\cite{TPEL2018_Shintani}} \label{tab:sp_parameters} \begin{tabular}{l|l}\hline Model parameter & Description \\ \hline \prmt{TOX} &Oxide thickness [m]\\ \prmt{VFBC}&Flat-band voltage of the channel region [V]\\ \prmt{NA}& Acceptor concentration [$\rm cm^{-3}$]\\ \prmt{SCALE}& Current gain factor [$\rm cm^2/V$]\\ \prmt{RD}& Parasitic resistance at the drain side [$\Omega$]\\ \prmt{LAMBDA}& Channel length modulation [1/V]\\ \prmt{THETA}& Channel mobility degradation [1/V]\\ \prmt{DELTA}& Smoothing parameter for gradual transition\\ & between the linear and saturation regions [-]\\ \hline \end{tabular} \end{table} \begin{table}[!t] \centering \caption{Model parameters of the capacitance equation in the surface-potential-based model~\cite{TPEL2018_Shintani}} \label{tab:symbol_cv} \begin{tabular}{l|l}\hline Model parameter & Description \\ \hline \prmt{ADS}& Drain-source overlap area [$\rm cm^2$]\\ \prmt{ND}&Donor concentration [$\rm cm^{-3}$]\\ \prmt{VBI}& Built-in potential of PN junction [V]\\ \prmt{COXD}& Gate-drain oxide capacitance [F]\\ \prmt{AGD}& Gate-drain overlap area [$\rm cm^2$]\\ \prmt{VFBD}& Gate-drain
flat-band voltage [V]\\ \hline \end{tabular} \end{table} \begin{table}[!t] \centering \caption{Physical constants} \label{tab:physical_parameters} \begin{tabular}{l|l|r}\hline Parameter & Description & Value \\ \hline $k$ & Boltzmann's constant [$\rm J/K$] & $1.38 \times 10^{-23}$\\ $q$ & Elementary charge [C] &$1.60\times 10^{-19}$\\ $T$& Absolute temperature [K] & 298\\ $\phi_{\rm t}$& Thermal voltage ($kT/q$) [V] & 0.026\\ $\varepsilon_{\rm SiC}$ &Permittivity of SiC [F/m] & $9.7\times 8.85 \times 10^{-12}$\\ $\varepsilon_\mathrm{ox}$ & Permittivity of gate oxide [F/m]& $3.9\times8.85\times10^{-12}$\\ $n_i$ & Intrinsic carrier concentration & $4.82\times10^{15}$\\ & of SiC [$\rm cm^{-3}$] & \\ \hline \end{tabular} \end{table} The proposed parameter extraction is also performed on the surface-potential-based model, which was developed to simulate the behavior of a SiC power MOSFET~\cite{TPEL2018_Shintani}. The model parameters of the current and capacitance model equations are listed in Tables~\ref{tab:sp_parameters} and~\ref{tab:symbol_cv}, respectively. The physical constants for these models are summarized in Table~\ref{tab:physical_parameters}. According to the current model in~\cite{TPEL2018_Shintani}, the surface potentials, $\phi_\mathrm{sS}$, and $\phi_\mathrm{sD}$ are first calculated for the source and drain ends of the channel as functions of $V_{\rm gs}$ and $V_{\rm ds}$. The inverted charge of the channel is determined as a function of the surface potential. 
Then, the intermediate value $I_{\rm DD}$ in~(\ref{eq:equation}) is computed based on $\phi_{\rm sS}$ and $\phi_{\rm sD}$ as follows: \begin{align} I_{\rm DD}&=C_{\rm ox}{ (V_{\rm gs} - {\bf VFBC} + \phi_{\rm t})(\phi_{\rm sD} - \phi_{\rm sS})} \nonumber \\ &- \cfrac{1}{2}\, C_{\rm ox} (\phi_{\rm sD}^2 - \phi_{\rm sS}^2) \nonumber \\ &-\cfrac{2}{3}\, \phi_{\rm t} \gamma \left\{\left(\phi_{\rm sD}/\phi_{\rm t} - 1\right)^{\frac{3}{2}} - (\phi_{\rm sS}/\phi_{\rm t} - 1)^{\frac{3}{2}} \right\} \nonumber \\ &+\phi_{\rm t} \gamma \left\{\left(\phi_{\rm sD}/\phi_{\rm t} - 1\right)^{\frac{1}{2}} - (\phi_{\rm sS}/\phi_{\rm t} - 1 )^{\frac{1}{2}}\right\}, \label{eq:idd} \end{align} where \begin{align} \gamma&=\sqrt{2\varepsilon_{\rm SiC}kT\cdot {\bf NA}}. \end{align} Here, $k$ and $T$ are the Boltzmann constant and the absolute temperature, respectively. $\varepsilon_\mathrm{SiC}$ is the permittivity of SiC, $\phi_{\rm t}$ is the thermal voltage, and $C_{\rm ox}$ is the gate oxide capacitance per unit area. Further, $C_{\rm ox}=\varepsilon_{\rm ox}/{\bf TOX}$, where $\varepsilon_{\rm ox}$ and {\bf TOX} are the permittivity and thickness, respectively, of the gate oxide. {\bf VFBC} and {\bf NA} are the flat-band voltage and the acceptor concentration, respectively. The channel current model also includes a smooth transition between the linear and saturation regions according to the model parameter {\bf DELTA}, which is defined by an equation similar to that in the $N$-th-power-law model. Finally, substituting $I_{\rm DD}$ in~(\ref{eq:idd}), the simulated drain current, $I_{\rm sim}$, is derived. The drain current also depends on the parasitic resistance {\bf RD}~\cite{TED2013_Mattausch}, which reduces the internal drain voltage of the MOSFET. In the surface-potential-based model, the calculation of $\phi_{\rm sS}$ and $\phi_{\rm sD}$ involves solving a non-linear equation~\cite{Baliga_book}, whose solution cannot be explicitly expressed.
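As a generic illustration of how such an implicit scalar equation can be solved iteratively, the sketch below applies a Newton-Raphson loop to a toy stand-in equation (not the actual surface-potential relation):

```python
import math

def newton(g, dg, phi0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration for a scalar equation g(phi) = 0."""
    phi = phi0
    for _ in range(max_iter):
        step = g(phi) / dg(phi)
        phi -= step
        if abs(step) < tol:
            break
    return phi

# Toy stand-in: solve phi - exp(-phi) = 0 (root near 0.567).
root = newton(lambda p: p - math.exp(-p), lambda p: 1.0 + math.exp(-p), 1.0)
```

In a graph-based flow, every pass through such a loop would push its intermediate values onto a stack, as described in the Appendix.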
Typically, the surface potentials have to be obtained using iterative methods~\cite{TED2006_Miura,TED2013_Mattausch} such as the Newton-Raphson method. Though the computational graph may also be constructed for model equations containing iterations, we adopt the following expression for the surface potential $\phi_{\rm s}$~\cite{SSE2001_Chen} to simplify the computational graph: \begin{align} (V_{\rm gs}-\prmt{VFBC}-\phi_{\rm s})^2= \gamma^2 \phi_{\rm t} [(\exp(-\frac{\phi_{\rm s}}{\phi_{\rm t}})+\frac{\phi_{\rm s}}{\phi_{\rm t}}-1) + \nonumber \\ \exp(-(2\phi_{\rm F} + \phi_{\rm t})/\phi_{\rm t})(\exp(\frac{\phi_{\rm s}}{\phi_{\rm t}}) - \frac{\phi_{\rm s}}{\phi_{\rm t}} -1)]. \label{eq:spe} \end{align} $\phi_{\rm sS}$ and $\phi_{\rm sD}$ are derived by solving~(\ref{eq:spe}) with respect to~$\phi_{\rm s}$ at $\phi_{\rm F}=0$ and $\phi_{\rm F}=V_{\rm ds}$, respectively. Thus, the model equations of the surface-potential-based model can be entirely represented by a computational graph. The capacitance characteristics, $C_{\mathrm{ds}}$ and $C_{\mathrm{gd}}$, are expressed in the surface-potential-based model as follows: \begin{align} C_{\rm ds} &= {\bf ADS}\cdot \sqrt{\frac{q \cdot \varepsilon_{\rm SiC}\cdot {\bf ND}} {2({\bf VBI} + V_{\rm ds}) }}\,\,\,\,\,\,{\rm and} \label{eq:cds} \\ C_{\rm gd} &= {\bf COXD} \parallel C_{\rm dep}. \label{eq:cgd} \end{align} $C_{\mathrm{ds}}$ is a bias-dependent junction capacitance between the drain and source, which is calculated on the basis of the capacitance model of the PN junction, as shown in~(\ref{eq:cds}). $C_{\mathrm{gd}}$ is modeled as a series connection of the constant gate oxide capacitance ${\bf COXD}$ and the bias-dependent depletion-layer MOS capacitance $C_{\rm dep}$. Here, $C_{\rm dep}$ is a MOS capacitance represented as a function of the surface potential $\phi_{\rm gd}$ of the channel formed on the drain region under the junction field effect transistor (JFET) region.
It can be written as shown in~(\ref{eq:cdep}), where $\phi_{\rm gd}$ is computed in a similar way to the calculation of $\phi_{\rm s}$ using \eqref{eq:spe} with ($V_{\rm gd}$, $V_{\rm ds}$, ${\bf ND}$, ${\bf VFBD}$). \begin{figure*}[!t] \begin{equation} C_{\rm dep} = {\bf AGD}\cdot\sqrt{2q\varepsilon_{\rm SiC}\cdot{\bf ND}} \, \cfrac{1 - e^{-\phi_{\rm gd}/\phi_{\rm t}} + e^{-(2\phi_{\rm F}+ V_{\rm ds})/ \phi_{\rm t}} (e^{\phi_{\rm gd}/\phi_{\rm t}} - 1) } {2\sqrt{\phi_{\rm t}e^{-\phi_{\rm gd}/\phi_{\rm t}} + \phi_{\rm gd} - \phi_{\rm t} + e^{-(2\phi_{\rm F}+V_{\rm ds})/\phi_{\rm t}} (\phi_{\rm t}e^{\phi_{\rm gd}/\phi_{\rm t}} - \phi_{\rm gd} - \phi_{\rm t}) }} \label{eq:cdep} \end{equation} \end{figure*} \subsubsection{Initial Parameter Determination}\label{sec:initial} \begin{figure}[!t] \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=.9\columnwidth]{./lambda_rd.pdf} \end{center} \caption{Extraction of {\bf LAMBDA} and {\bf RD}.} \label{fig:ext_idvd} \end{minipage} \hspace{2mm} \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=.98\columnwidth]{./k_theta.pdf} \end{center} \caption{Extraction of {\bf K} and {\bf THETA}.} \label{fig:ext_idvg} \end{minipage} \hspace{2mm} \begin{minipage}{0.48\hsize} \vspace{2mm} \begin{center} \includegraphics[width=.98\columnwidth]{./cgs.pdf} \end{center} \caption{Extraction of {\bf VFBC}.} \label{fig:ext_vfbc} \end{minipage} \hspace{2mm} \begin{minipage}{0.48\hsize} \begin{center} \includegraphics[width=.9\columnwidth]{./coxd_vfbd.pdf} \end{center} \caption{Extraction of {\bf COXD} and {\bf VFBD}.} \label{fig:coxd_vfbd} \end{minipage} \end{figure} {\textcolor{black}{The initial parameter determination is one of the important steps in the model development because the model parameters are {\textcolor{black}{repeatedly}} updated to minimize the discrepancy with the measured device characteristics, starting {\textcolor{black}{from the}} initial values. 
When the choice of the initial parameters is inappropriate, the model parameters may converge to a local optimum that can be far from the ground truth. The standardized compact models provide the initial parameter determination as well as the model descriptions~\cite{BSIM48,HiSIM-HV}.}} In the experiment, we apply an initial parameter determination for the surface-potential-based model proposed in~\cite{WIPDA2019-Shintani}. In this work, we assume {\bf TOX} is 50\,nm. The default value of {\bf DELTA}, which is introduced to model the gradual transition between the linear and saturation regions, is set to 0.8~\cite{HiSIM-HV}. The slope of the $I_{\rm d}$-$V_{\rm ds}$ curve in the saturation region is represented by {\bf LAMBDA}, while the slope at $V_{\rm ds} \simeq 0$\,V for a high $V_{\rm gs}$ yields {\bf RD}, as shown in Fig.~\ref{fig:ext_idvd}. {\bf K} and {\bf THETA} are extracted from the $I_{\rm d}$-$V_{\rm gs}$ curve as shown in Fig.~\ref{fig:ext_idvg}. {\bf K} is approximated by the slope of the saturation-region current, and {\bf THETA} is extracted from the linear-region current. {\bf VFBC} is the flat-band voltage of the MOS interface at the channel region. Beyond that voltage, the depletion capacitance becomes apparent in the MOS capacitance characteristics. Hence, {\bf VFBC} can be estimated as the $V_{\rm gs}$ at which the gate-source capacitance $C_{\rm gs}$ starts to bend, as shown in Fig.~\ref{fig:ext_vfbc}. As a good estimate, we approximate $C_{\rm dep}$ by a PN junction capacitance that depends on the gate-drain voltage $V_{\mathrm{gd}}$~\cite{EPE2011_Phankong}: \begin{eqnarray} C_{\rm dep} = {\bf AGD} \sqrt{\frac{q \cdot \varepsilon_{\rm SiC}\cdot {\bf ND}}{2({\bf VFBD}-V_{\mathrm{gd}})}}.
\label{eq:approximated_cgd} \end{eqnarray} As shown in Fig.~\ref{fig:coxd_vfbd}, {\bf VFBD} and {\bf COXD} are obtained from the capacitance in the accumulation mode. {\bf AGD} is estimated as $\frac{{\bf COXD}\cdot{\bf TOX}}{\varepsilon_{\rm ox}}$. Then, by substituting {\bf VFBD}, {\bf COXD}, and {\bf AGD} into~(\ref{eq:approximated_cgd}), {\bf ND} can be calculated. Finally, {\bf NA} is obtained from \begin{equation} {\bf VBI} = \frac{kT}{q}\ln \left(\frac{{\bf NA} \cdot {\bf ND}}{n_i^2} \right), \label{eq:vbi} \end{equation} where $n_i$ is the intrinsic carrier concentration. {\bf VBI} is derived as the forward voltage at which the body-diode current starts to flow. {\bf ADS} is calculated by substituting {\bf VBI} and {\bf ND} into~(\ref{eq:cds}). \subsubsection{Computational Graph} \label{sec:graph} {\textcolor{black}{The computational graphs of the two models were constructed based on the respective model equations. Graph manipulations are implemented using a Python package, NetworkX~\cite{SCIPY2008_Hagberg}. As shown in Table~\ref{tab:generation_time}, the execution time was much less than 0.1\,second on the single-thread PC for both models. Table~\ref{tab:graph_size} summarizes the sizes of the two computational graphs from the input parameters to $I_{\rm sim}$ for a particular bias voltage. The subgraph for calculating $I_{\rm sim}$ is reused for different bias voltages. The graph size of the surface-potential-based model is approximately ten times larger than that of the $N$-th-power-law model. Thus, only the computational graph of the $N$-th-power-law model is presented in Fig.~\ref{fig:graph_n-th-power}.
Through defining the forward and backward functions of each node and incorporating the computational graph into Algorithm~\ref{alg:ad}, the AD-based parameter extraction is carried out according to Algorithm~\ref{alg:gd}.}} \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{./level2.pdf} \caption{Computational graph of the $N$-th-power-law model for a particular bias voltage.} \label{fig:graph_n-th-power} \end{figure} \begin{table}[!t] \caption{Graph generation time}\label{tab:generation_time} \centering \begin{tabular}{l|l||r} \hline \multicolumn{2}{l||}{Model} & Time {[}s{]} \\ \hline \multicolumn{2}{l||}{$N$-th-power-law model} & 0.0384 \\ \hline \multirow{3}{*}{Surface-potential-based model} & $I_{\rm sim}$ & 0.0571 \\ \cline{2-3} & $C_{\mathrm{ds}}$ & 0.0039 \\ \cline{2-3} & $C_{\mathrm{gd}}$ & 0.0288 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \caption{Size of the computational graphs}\label{tab:graph_size} \centering \begin{tabular}{l|l||r|r} \hline \multicolumn{2}{l||}{Model} & No. of Edges & No. of Vertices \\ \hline \multicolumn{2}{l||}{$N$-th-power-law model} & 39 & 48 \\ \hline \multirow{3}{*}{Surface-potential-based model} & $I_{\rm sim}$ & 384 & 540 \\ \cline{2-4} & $C_{\mathrm{ds}}$ & 20 & 20 \\ \cline{2-4} & $C_{\mathrm{gd}}$ & 318 & 228 \\ \hline \end{tabular} \end{table} \subsection{Simulation Setup} I-V curves of the SiC MOSFET~\cite{sct2450ke} were measured at room temperature using a dedicated curve tracer~\cite{ICMTS2016_Nakamura} while sweeping $V_{\rm gs}$ from 6\,V to 14\,V in 2\,V steps and $V_{\rm ds}$ from 0\,V to 50\,V in 2\,V steps (the total number of bias voltages tested, $m$, was 125). C-V curves were measured by a commercial curve tracer~\cite{b1505a} at 1\,MHz. The number of data points is 300 for each of the $C_{\mathrm{ds}}$ and $C_{\mathrm{gd}}$ measurements.
The parameter updating process was conducted by using AdaGrad~\cite{JMLR2011_Duchi}: \begin{eqnarray} h_i &=& h_i + \left(\frac{\partial E}{\partial p_i}\right)^2 \qquad {\mathrm{and}}\label{eq:adagrad1} \\ p_i &=& p_i - \eta_i \frac{1}{\sqrt{h_i}} \frac{\partial E}{\partial p_i}. \label{eq:adagrad2} \end{eqnarray} The parameter update is performed to decrease the RMSE between the measured drain current and that obtained through the MOSFET model with the latest model parameters. In AdaGrad, the parameter vector $\bm{h}=(h_0,...,h_{n-1})$ is introduced. Based on the accumulated gradient, the effective step size $\eta_i \frac{1}{\sqrt{h_i}}$ is gradually reduced, as shown in~(\ref{eq:adagrad1}) and~(\ref{eq:adagrad2}). The initial values of all elements of $\bm{h}$ were set to zero, and each element of $\bm{\eta}$ was set to 1/100 of the respective parameter value. \subsection{Results} \subsubsection{Parameter extraction for current characteristics}\label{sec:c-1} The parameters extracted by the proposed AD-based approach were compared with those extracted by the ND-based method using each of the two MOSFET models. In this experiment, the maximum number of iterations, $N_{\rm max}$, was set to 1,000 for both models. Also, $E_{\rm target}$ was set to 0.04\,A and 0.16\,A for the two models, respectively. The initial values were randomly determined for the $N$-th-power-law model, while those of the surface-potential-based model were determined by the initial parameter determination procedure described in Section~\ref{sec:initial}. Fig.~\ref{fig:result1} shows the RMSE as a function of the computation time. The proposed method accelerated the parameter extraction by 4.03$\times$ and 4.34$\times$ for the $N$-th-power-law model and the surface-potential-based model, respectively. This improvement in the computation time is close to the theoretical value of 4.5 ($= (8+1)/2$) for a model with eight parameters. \begin{figure}[t!]
\centering \subfigure[$N$-th-power-law model \label{fig:npower}]{ \includegraphics[width=0.55\linewidth]{./RMSE_thr.pdf}}\\ \subfigure[Surface-potential-based model\label{fig:sp}]{ \includegraphics[width=0.55\linewidth]{./RMSE_pot.pdf}} \caption{RMSE as a function of the computation time.} \label{fig:result1} \end{figure} \begin{figure}[t!] \centering \subfigure[$N$-th-power-law model \label{fig:sakurai}]{% \includegraphics[width=.62\columnwidth]{./thr_model_I-V.pdf} } \\ \centering \subfigure[Surface-potential-based model \label{fig:iv_sp}]{% \includegraphics[width=.65\columnwidth]{./Ids.pdf} } \caption{Simulated and measured I-V curves.} \label{fig:iv} \end{figure} The measured and simulated I-V characteristics of the two models are presented in Fig.~\ref{fig:iv}. The final model parameters extracted using the AD and ND methods were used to generate the plots. These results show that both MOSFET models accurately reproduced the I-V characteristics of the SiC MOSFET\@. The extracted parameter values at $N=N_{\rm max}$ are summarized in Tables~\ref{tab:sakurai_error} and~\ref{tab:sp_error}. The initial values are also listed in Table~\ref{tab:sp_error} for the surface-potential-based model. The relative error was smaller than 2\% in all cases. In addition, it can be seen from Table~\ref{tab:sp_error} that the initial and optimized parameters are close to each other, suggesting that good initial parameters were obtained through the procedure. Hence, it was concluded that the proposed method accelerates parameter extraction without affecting the accuracy.
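For reference, the AdaGrad update of~(\ref{eq:adagrad1}) and~(\ref{eq:adagrad2}) can be sketched as below. The quadratic toy objective merely stands in for the RMSE, and all constants are illustrative:

```python
import math

def adagrad_step(p, h, grad, eta, eps=1e-12):
    """One AdaGrad update; eps guards the division while h is still zero."""
    for i in range(len(p)):
        h[i] += grad[i] ** 2                                # accumulate squared gradient
        p[i] -= eta[i] * grad[i] / (math.sqrt(h[i]) + eps)  # scaled parameter update
    return p, h

# Toy objective E(p) = (p0 - 3)^2 + (p1 + 1)^2 in place of the RMSE.
p, h, eta = [0.0, 0.0], [0.0, 0.0], [0.5, 0.5]
for _ in range(200):
    grad = [2.0 * (p[0] - 3.0), 2.0 * (p[1] + 1.0)]
    adagrad_step(p, h, grad, eta)
# p approaches the minimizer (3, -1) as the effective step size decays
```

In the actual flow, the gradient passed to the update is the one returned by the AD routine of Algorithm~\ref{alg:ad}.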
\begin{table}[!t] \centering \caption{Extracted values for the $N$-th-power-law model} \label{tab:sakurai_error} \begin{tabular}{l||r|r|r} \hline & & & Relative \\ Parameter & AD & ND & error [\%] \\ \hline {\bf VTH} & 2.600 & 2.600 & 0.116 \\ {\bf K} & $ 2.691 \times 10^{-3}$& $2.681 \times 10^{-3}$& 0.372 \\ {\bf N} & 3.284 & 3.286 & 0.061\\ {\bf LAMBDA} & $2.606 \times 10^{-3}$ & $2.259\times10^{-3}$ & 0.576 \\ {\bf THETA} & $3.440 \times 10^{-4}$ & $3.464 \times 10^{-4}$ &0.610 \\ {\bf M} & 1.743 & 1.744 & 0.057 \\ {\bf J} &0.119 & 0.119 & 0.000 \\ {\bf DELTA} & 1.269 & 1.267 & 0.158 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \centering \caption{Extracted values for the surface-potential-based model} \label{tab:sp_error} \begin{tabular}{l||r|r|r} \hline Parameter& & & Relative \\ (Initial value) & AD & ND & error [\%] \\ \hline {\bf SCALE} & & & \\ (5166360) & 5403054 & 5398779 & 0.079 \\ {\bf TOX} & && \\ ($5.0\times 10^{-8}$) & $4.788\times10^{-08}$ & $4.790\times10^{-08}$ & 0.035 \\ {\bf NA} & &&\\ ($1.31\times 10^{17}$) & $1.313\times10^{17}$ &$1.313\times10^{17}$& 0.000 \\ {\bf LAMBDA} & &&\\ ($8.69\times 10^{-3}$) & $6.110\times10^{-3}$ & $6.086\times10^{-3}$& 0.387 \\ {\bf VFBC} & &&\\ ($-4.90$) & $-1.812\times10^{-3}$ &$-1.780\times10^{-3}$ & 1.869 \\ {\bf THETA} & &&\\ ($5.91\times10^{-3}$) & $5.912\times10^{-8}$ &$5.941\times10^{-8}$ & 0.492 \\ {\bf DELTA} & &&\\ ($0.80$) & 0.6170 & 0.6150 & 0.329 \\ {\bf RD} & &&\\ ($2.90\times10^{-3}$) & $2.7178\times10^{-3}$ &$2.737\times10^{-3}$ & 0.715 \\ \hline \end{tabular} \end{table} \subsubsection{Parameter extraction for multiple objectives}\label{sec:multi} The proposed method can handle model parameters used in different model equations. Since {\bf TOX} and {\bf NA} are contained in the current and capacitance characteristics, $C_{\mathrm{ds}}$ and $C_{\mathrm{gd}}$, in the surface-potential-based model, these parameters have to be consistent for both characteristics. 
In this experiment, the computational graphs were constructed so that the parameters are simultaneously optimized for all the model equations. The cost function is set as the normalized sum of the residuals to balance the weights of the individual characteristics. The initial values of the parameters were determined through the determination procedure. In this experiment, $E_{\rm target} = 0.02$ was added as a termination condition, in addition to the iteration count $N_{\rm iter} = 1000$. Since {\bf VBI} can be represented using {\bf NA} and {\bf ND} as shown in (\ref{eq:vbi}), the total number of model parameters is thus 13, excluding {\bf VBI}. \begin{figure}[!t] \centering \includegraphics[width=0.55\linewidth]{./RMSE_sum_2.pdf} \caption{RMSE as a function of the computation time for the simultaneous parameter extraction.} \label{fig:multi_rmse} \end{figure} \begin{figure}[t!] \centering \subfigure[I-V curve]{% \includegraphics[width=.65\columnwidth]{./Ids2.pdf} } \\ \centering \subfigure[C-V curve]{% \includegraphics[width=.65\columnwidth]{./sum_Cds_Cgd.pdf} } \caption{Simulated and measured curves.} \label{fig:multi_fit} \end{figure} Fig.~\ref{fig:multi_rmse} shows the RMSE as a function of the computation time for the simultaneous parameter extraction. The proposed method calculates the model parameters 3.50$\times$ faster than the ND-based parameter extraction. The measured and simulated I-V and C-V characteristics of the models are presented in Fig.~\ref{fig:multi_fit}. The graphs are plotted using the final model parameters extracted by the AD and ND methods. From the results, good agreement can be seen for both the I-V and C-V characteristics. The initial and extracted parameter values when the algorithm exits are summarized in Table~\ref{tab:sp_multi_error}. The relative error between the parameters extracted using the AD and ND methods was smaller than 3\% for all parameters.
From these results, we confirmed that the proposed method is applicable to the capacitance characteristics and to the simultaneous parameter extraction of different characteristics. \begin{table}[!t] \centering \caption{Extracted values for $I_{\rm sim}$, $C_{\mathrm{ds}}$, and $C_{\mathrm{gd}}$ by the simultaneous extraction}\label{tab:sp_multi_error} \begin{tabular}{l||r|r|r} \hline Model parameter & & & Relative \\ (Initial value) & AD & ND & error [\%] \\ \hline {\bf SCALE} &&&\\ (5166360) & 5644684 & 5574780 & 1.238\\ {\bf TOX} &&& \\ ($5.00\times10^{-08}$) & $4.933\times10^{-08}$ & $4.947\times10^{-08}$ & 0.286 \\ {\bf NA} &&& \\ ($1.31\times10^{17}$) & $1.313\times10^{17}$ & $1.313\times10^{17}$ & 0.000 \\ {\bf LAMBDA} &&& \\ ($8.69\times10^{-3}$) & $6.119\times10^{-3}$ & $6.083\times10^{-3}$ & 0.589 \\ {\bf VFBC} &&&\\ ($-4.90$) & $-1.943$ & $-1.985$ & 2.168 \\ {\bf THETA} &&&\\ ($5.910\times10^{-3}$) & $5.927\times10^{-3}$ & $5.910\times10^{-3}$ & 0.287 \\ {\bf DELTA} &&&\\ (0.80) & 0.6073 & 0.6130 & 0.938 \\ {\bf RD} &&&\\ ($2.90\times10^{-3}$) & $2.021\times10^{-3}$ & $2.057\times10^{-3}$ & 1.744 \\ \hline {\bf ADS} &&&\\ (0.00776) & 0.0250 & 0.0250 & 0.000 \\ {\bf ND} &&& \\ ($5.27\times10^{15}$) & $5.266\times10^{15}$ & $5.266\times10^{15}$ & 0.000 \\ {\bf COXD} &&& \\ ($4.36\times10^{-10}$) & $4.360\times10^{-10}$ & $4.360\times10^{-10}$ & 0.000 \\ {\bf VFBD} &&& \\ ($1.00$) & 0.1055 & 0.1055 & 0.000 \\ {\bf AGD} &&& \\ ($6.31\times10^{-5}$) & $5.549\times10^{-3}$ & $5.549\times10^{-3}$ & 0.000 \\\hline \end{tabular} \end{table} \subsubsection{Parameter extraction using LM method} In addition to the gradient-descent-based method, the proposed method can be applied to various optimization algorithms that rely on derivatives. Here, we show the result of the parameter extraction obtained by incorporating the proposed method into the LM method, which is adopted in a widely used industrial device modeling flow~\cite{iccap}.
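The damped LM update solves $(J^{\top}J + \lambda\,\mathrm{diag}(J^{\top}J))\,\delta = -J^{\top}r$ at each step. A hedged Python sketch on an illustrative two-parameter cubic model (not the industrial implementation, and not the full surface-potential model) looks as follows; model form, names, and data are assumptions:

```python
import numpy as np

# Hedged LM-style extraction sketch for the illustrative cubic model
# I = K * (V - VTH)**3; the real flow uses the surface-potential model.
def residuals(theta, v, i_meas):
    vth, k = theta
    return k * (v - vth) ** 3 - i_meas

def jacobian(theta, v):
    # Columns: d r / d VTH and d r / d K. With AD these derivatives come
    # from the computational graph; here they are written analytically.
    vth, k = theta
    return np.stack([-3.0 * k * (v - vth) ** 2, (v - vth) ** 3], axis=1)

def lm_fit(theta, v, i_meas, lam=1e-3, iters=50):
    for _ in range(iters):
        r = residuals(theta, v, i_meas)
        J = jacobian(theta, v)
        # Damped normal equations:
        #   (J^T J + lam * diag(J^T J)) delta = -J^T r
        A = J.T @ J
        A += lam * np.diag(np.diag(A))
        delta = np.linalg.solve(A, -J.T @ r)
        trial = theta + delta
        if np.sum(residuals(trial, v, i_meas) ** 2) < np.sum(r ** 2):
            theta, lam = trial, lam * 0.5     # accept step, relax damping
        else:
            lam *= 10.0                       # reject step, damp harder
    return theta

v = np.linspace(3.0, 8.0, 40)
i_meas = 2.7e-3 * (v - 2.6) ** 3              # synthetic "measurement"
theta_fit = lm_fit(np.array([2.0, 1.0e-3]), v, i_meas)
```

Scaling the damping by $\mathrm{diag}(J^{\top}J)$ makes the step tolerant of the orders-of-magnitude spread between parameters, which matters for device models.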
Fig.~\ref{fig:lm} shows the RMSE reduction in the LM-based parameter extraction. In this experiment, the model parameters of the surface-potential-based model are extracted by using the same initial parameters as those in Sec.~\ref{sec:c-1}. Owing to the quadratic convergence property of the LM algorithm~\cite{Computing2005-Fan}, the number of iterations is reduced to 30 for both the AD- and ND-based calculations, while the gradient-descent-based method requires more than six hundred iterations as shown in Fig.~\ref{fig:lm_iter}. As expected, an acceleration close to the ideal 3.89$\times$ was obtained, as shown in Fig.~\ref{fig:lm_time}. In addition, the parameters extracted by the AD- and ND-based calculations, which are shown in Table~\ref{tab:lm_param}, are in perfect agreement and quite close to those in Table~\ref{tab:sp_error}. These results show that the proposed method can be successfully applied to other derivative-based optimization algorithms and that the acceleration, which scales with the number of parameters, can still be achieved. \begin{figure}[t!]
\centering \subfigure[Number of iterations\label{fig:lm_iter}]{ \includegraphics[width=0.55\linewidth]{./lm_iter.pdf}}\\ \subfigure[CPU time\label{fig:lm_time}]{ \includegraphics[width=0.55\linewidth]{./lm_time.pdf}} \caption{RMSE reduction in the LM-based parameter extraction.} \label{fig:lm} \end{figure} \begin{table}[!t] \centering \caption{Extracted values for the surface-potential-based model in the LM-based optimization} \label{tab:lm_param} \begin{tabular}{l||r|r|r} \hline Parameter& & & Relative \\ (Initial value) & AD & ND & error [\%] \\ \hline {\bf SCALE} & & & \\ (5166360) & 516636 & 516636 & 0.000 \\ {\bf TOX} & && \\ ($5.0\times 10^{-8}$) & $4.626\times10^{-08}$ & $4.626\times10^{-08}$ & 0.000 \\ {\bf NA} & &&\\ ($1.31\times 10^{17}$) & $1.273\times10^{17}$ &$1.273\times10^{17}$& 0.000 \\ {\bf LAMBDA} & &&\\ ($8.69\times 10^{-3}$) & $5.312\times10^{-3}$ & $5.313\times10^{-3}$& 0.000 \\ {\bf VFBC} & &&\\ ($-4.90$) & $-1.597\times10^{-3}$ &$-1.597\times10^{-3}$ & 0.000 \\ {\bf THETA} & &&\\ ($5.91\times10^{-3}$) & $5.910\times10^{-8}$ &$5.910\times10^{-8}$ & 0.000 \\ {\bf DELTA} & &&\\ ($0.80$) & 0.614 & 0.614 & 0.000 \\ {\bf RD} & &&\\ ($2.90\times10^{-3}$) & $2.902\times10^{-3}$ &$2.902\times10^{-3}$ & 0.000 \\ \hline \end{tabular} \end{table} {\textcolor{black}{ \subsection{Manufacturing variability analysis} Recently, the application of artificial intelligence has been gaining popularity in the area of power electronics~\cite{TPLE2021_Shuai}. As an example, a neural network has been applied to model the {\textcolor{black}{device}} current characteristics in SPICE simulators~\cite{TPLE2019_Chiozzi} {\textcolor{black}{treating the parameter fitting as a black-box regression with a neural network.}} However, {\textcolor{black}{as opposed to the physics-based modeling,}} neural networks carry out purely mathematical fitting of the measured device characteristics to a nonlinear equation, ignoring the physical behavior of the device of interest.
In order to fully understand various device characteristics such as aging-induced threshold voltage shift~\cite{TED2014_Rescher} and manufacturing process variation~\cite{APEC2014_Wang,JESTPM2019_Borghese}, which have been critical issues in the design of power converters using wide-bandgap semiconductors, the use of a set of physically meaningful model parameters is of enormous importance. {\textcolor{black}{Although}} the proposed method incorporates a {\textcolor{black}{technique similar to those}} used in neural networks for model-parameter extraction, our method {\textcolor{black}{preserves}} the physical meanings of the model parameters. }} \begin{figure}[!t] \centering \includegraphics[width=0.55\linewidth]{./idvd_var.pdf} \caption{{\textcolor{black}{Measured I-V characteristics of 35 SiC MOSFETs.}}} \label{fig:var} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.88\linewidth]{./hist.pdf} \caption{{\textcolor{black}{Histograms of the extracted model parameters of 35 SiC MOSFETs.}}} \label{fig:hist} \end{figure} {\textcolor{black}{ In order to demonstrate the aforementioned {\textcolor{black}{advantages}}, we apply the proposed method to the {\textcolor{black}{modeling of}} I-V characteristics of 35 SiC MOSFETs, which {\textcolor{black}{are presented}} in Fig.~\ref{fig:var}. For instance, in the parallel operation of SiC MOSFETs in a power module, the {\textcolor{black}{characteristics mismatch}} causes current imbalances, resulting in a significant degradation of reliability~\cite{APEC2014_Wang}. By applying the proposed method with the surface-potential-based model, the distribution of its model parameters can be obtained as shown in Fig.~\ref{fig:hist}.
We would like to note that the extracted model parameters {\textcolor{black}{directly represent the device-to-device variation}}{\textcolor{black}{, while the neural-network-based method}}~\cite{TPLE2019_Chiozzi} {\textcolor{black}{only gives 35 black-box models with slightly different parameters}}. According to~\cite{JESTPM2019_Borghese}, the current imbalance can be mitigated by pairing MOSFETs with similar threshold voltage (${\bf VFBC}$) and/or current gain factor (${\bf SCALE}$) in parallel-connected SiC MOSFETs. {\textcolor{black}{Choosing such pairs is straightforward with the parameters obtained by the proposed method.}} Moreover, the distribution of the physically meaningful parameters, such as {\bf TOX} and {\bf NA}, can {\textcolor{black}{also}} be used {\textcolor{black}{by manufacturers}} to improve manufacturing yields.}}
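The per-device extraction loop behind such histograms can be sketched as follows; the cubic model, the synthetic process variation, and the closed-form fit are all illustrative assumptions standing in for the surface-potential-based extraction on measured curves:

```python
import numpy as np

# Sketch of a per-device extraction loop on synthetic data: each
# "device" follows an illustrative cubic model I = K * (V - VTH)**3
# with process variation on VTH and K. (The paper instead fits the
# surface-potential-based model to 35 measured SiC MOSFET curves.)
rng = np.random.default_rng(0)
v = np.linspace(3.0, 8.0, 40)

def extract(v, i_meas):
    # For the cubic model, I**(1/3) is linear in V, so a least-squares
    # line gives VTH and K in closed form.
    a, b = np.polyfit(v, i_meas ** (1.0 / 3.0), 1)
    return -b / a, a ** 3                    # (VTH, K)

vth_fit, k_fit = [], []
for _ in range(35):                          # one synthetic device each
    vth = rng.normal(2.6, 0.05)              # threshold-voltage variation
    k = rng.normal(2.7e-3, 1.0e-4)           # gain-factor variation
    p = extract(v, k * (v - vth) ** 3)
    vth_fit.append(p[0])
    k_fit.append(p[1])

# The collected vth_fit / k_fit arrays give the parameter distributions
# used, e.g., to pair devices with similar VTH for parallel operation.
```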
\section{Introduction} In the last decade, ideas and tools from quantum information and computation have found an increasing number of applications in the efforts to understand the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence~\cite{maldacena1999large} as a holographic quantum theory of gravity. Notable examples include the ER=EPR~\cite{maldacena2013cool} conjecture and the proposed resolutions of: the black hole information paradox \cite{almheiri2020page, penington2020entanglement}, the firewall paradox~\cite{harlow2013quantum}, and the wormhole growth paradox in terms of the complexity=volume~\cite{susskind2016computational,aaronson2016complexity, haferkamp2021linear} and complexity=action \cite{brown2016holographic} conjectures. Central to the connection between quantum gravity and quantum information is the Ryu-Takayanagi (RT) formula. The RT formula conjectures that the entanglement entropy of a boundary CFT state is dual to the area of a bulk region in AdS~\cite{ryu2006holographic}. The study of the entanglement properties of the AdS/CFT holographic duality~\cite{almheiri2013black}, spurred by the result of Ryu and Takayanagi, has led to a reformulation of the AdS/CFT correspondence in terms of quantum error-correcting codes~\cite{verlinde2013black, almheiri2015bulk, mintun2015bulk}. This framework has helped to clarify the relationship between bulk and boundary and proved to be an effective and simple toy model of the AdS/CFT correspondence. Based on these early results, researchers built toy models that reproduce key features of the correspondence (such as subregion duality, radial commutativity and the RT formula) using quantum error-correcting codes based on tensor networks~\cite{pastawski2015holographic, donnelly2017living}, random tensor networks~\cite{hayden2016holographic}, and approximate Bacon-Shor codes \cite{cao2020approximate}. 
All these models (which have been recently reviewed in~\cite{jahn2021holographic}) give an explicit bulk-boundary mapping for states and observables. Using techniques from Hamiltonian simulation, \cite{kohler2019toy} showed how the mapping can be extended to local Hamiltonians. In parallel with the development of increasingly-advanced toy models, Harlow initiated a systematic study of holographic quantum error correction~\cite{harlow2017ryu,Akers:2021fut} (for a pedagogical introduction to these ideas see~\cite{harlow2016jerusalem, harlow2018tasi, rath2020aspects}). Leveraging the operator algebra quantum error correction framework developed in~\cite{beny2007quantum, beny2007generalization, kribs2005unified, kribs2006operator},~\cite{harlow2017ryu} identified the conditions that make a quantum error-correcting code a good holographic code (that is, a code that reproduces the key features of the AdS/CFT correspondence). In particular,~\cite{harlow2017ryu} showed that standard quantum error-correcting codes such as stabiliser codes~\cite{gottesman1997stabilizer, gottesman2010introduction} or subsystem codes~\cite{poulin2005stabilizer, bravyi2011subsystem}, correct errors ``too well'' to give rise to good holographic codes. This statement can be made precise using the language of finite-dimensional von Neumann algebras, which we review in Section~\ref{sec:vonNeumann}. Consider the algebra of operators that can be reconstructed after the erasure of a region of the boundary: for a good holographic code this algebra is not a factor algebra. In this paper, we build on the formalism of Harlow to derive new properties and examples of holographic quantum error-correcting codes. Our contributions further sharpen our understanding of the relationship between bulk and boundary and give even simpler examples of holographic codes reproducing key features of the AdS/CFT correspondence. In particular: \begin{itemize} \item We give new ``atomic'' examples of holographic codes. 
The key feature of these examples is that they are based on quantum circuits with a minimal number of qubits rather than on the large tensor networks that have appeared in the literature \cite{pastawski2015holographic, donnelly2017living, hayden2016holographic, cao2020approximate}. By significantly reducing the complexity of the toy models, we hope to introduce a new tool to identify what features of error correcting codes enable the emergence of holographic states. \item We prove new properties of holographic quantum error-correcting codes. More specifically, we show that the code algebra is the unique von Neumann algebra satisfying complementary recovery (defined below). The proof is obtained by leveraging a connection between quantum error correction and quantum privacy~\cite{crann2016private, kribs2018quantum} which we believe is entirely novel in the context of holographic quantum error correction. The uniqueness of the algebra shows that error correcting codes which satisfy complementary recovery are ``rigid'' in the sense that they are uniquely determined by the requirements of holography. \item We give a reformulation of key results in holographic quantum error correction which is aimed at experts in quantum information. This might be a desirable feature for researchers with a quantum information background who are venturing into the field and could give people already familiar with these ideas a new angle to think about related problems. \end{itemize} We give a brief presentation of key results from holographic quantum error correction in Section~\ref{sec:overview_HQEC} and a detailed overview of our contributions in Section~\ref{sec:overview_our_contributions}. The remainder of this paper is organised as follows. Section~\ref{sec:holography_background} gives an informal presentation of some of the central concepts in holography for a reader with no prior background on the subject.
In Section~\ref{sec:vonNeumann}, we review some key facts about finite-dimensional von Neumann algebras. Section~\ref{sec:complementary_recovery} and Section~\ref{sec:examples} contain the bulk of our contributions. In Section~\ref{sec:complementary_recovery}, we give a reformulation of holographic quantum error correction and prove new properties of the code algebra, while in Section~\ref{sec:examples}, we construct several ``atomic'' examples of holographic codes using quantum circuits. We conclude in Section~\ref{sec:discussion} with some remarks on the main differences between our work and \cite{harlow2017ryu} and a list of open questions. This paper has three Appendices. In Appendix~\ref{app:privacy}, we review some key notions and results on quantum private systems. In Appendix~\ref{app:structure_lemma}, we give a new proof of the main theorem in~\cite{harlow2017ryu}. In Appendix~\ref{app:2x2bacon-shor}, we give a full analysis of the holographic properties of the 2x2 Bacon-Shor code. \subsection{Overview of holographic quantum error correction} \label{sec:overview_HQEC} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{holographic_quantities_gamma.pdf} \end{center} \caption{\label{fig:notation} Sketch of a holographic quantum error-correcting code in $2+1$ dimensions using our notation, indicating some of the terms in Table~\ref{table:Rosetta}. } \end{figure} In holography, we consider a bulk asymptotically-AdS space described by a Hilbert space $\mathcal{H}_L$, surrounded by a boundary CFT with a Hilbert space $\mathcal{H}$. The correspondence manifests via a holographic dictionary $V : \mathcal{H}_L \to \mathcal{H}$, which maps the state from the bulk into the boundary. See Figure~\ref{fig:notation}. The same setup cleanly maps to a quantum error-correcting code. We let $\mathcal{H}_L$ be the logical space, and $\mathcal{H}$ be the physical space. 
Then $V$ is an isometry that takes the data in $\mathcal{H}_L$ and encodes it in the physical space $\mathcal{H}$. Our goal is to concretely define what we mean for such a setup to exhibit ``holographic quantum error correction''. We will do this by taking an RT formula and writing it in the notation of quantum error correction. Then we can proceed to derive general properties of such an RT formula, and to build specific examples of codes that exhibit one. Having these concrete examples can illuminate the relationship between bulk and boundary, and generally make AdS/CFT easier to understand. We begin with a classical RT formula. Say $A$ is a subregion of the boundary space, splitting the boundary into a bipartition $A$-$\bar A$. Then, the classical RT formula states\footnote{This is the version of the formula that holds in a \emph{static} geometry, i.e.\ one that can be described by a time-independent metric. In a time-dependent geometry the extremization is more subtle, and is described by the maximin prescription \cite{Engelhardt:2014gca} for the HRT formula \cite{Hubeny:2007xt}. In particular, the geodesics are not confined to a fixed spatial slice of the bulk but instead live inside the \emph{entanglement wedge} of the boundary region; see Section \ref{sub:wedges} for further discussion. Furthermore, the minimization over geodesics should include only the geodesics homologous to $A$; see Footnote \ref{fn:homology} for a discussion.} that, in a holographic state corresponding to a (2+1)-dimensional classical bulk geometry: \begin{align} \text{entanglement across } A\text{-}\bar A \propto \min_{\gamma_{A}} \text{Area}(\gamma_{A}), \end{align} where $\gamma_A$ is a geodesic in the (negatively curved, gravitating) bulk whose endpoints are the same as those of $A$ on the boundary.
In $d+1$ dimensions, the geodesic is replaced by a $(d-1)$-dimensional extremal surface ending on (and homologous to) the boundary subregion; `area' denotes a codimension 2 quantity, and hence it is actually a length for a (2+1)-dimensional bulk. The RT formula connects a geometrical quantity, an area, with a quantum-mechanical quantity: the entanglement entropy. (Readers who find the RT formula unfamiliar are invited to consult Section~\ref{sec:holography_background} for a more detailed exposition of the quantum-gravitational setting where the formula arises.) If the bulk is itself a quantum system that can be in a mixed state, then we must be more careful in defining the left-hand side of the equation: we only care about the entanglement entropy stemming from the holographic dictionary $V$, but not any entropy from the bulk degrees of freedom. Thus, we must subtract off the entropy from the bulk state. Say $\rho_L$ is a state in the bulk $\mathcal{H}_L$, and $\rho = V \rho_L V^\dagger$ is its encoded state on the boundary $\mathcal{H}$. The subregion $A$ induces a factorization\footnote{In a conformal field theory this factorization may be subtle: to ensure the theory factorizes we can introduce edge modes \cite{Donnelly:2016auv}. Throughout this paper we will follow convention and assume without comment that the boundary theory does indeed factorize, which is already necessary to define the left-hand side of the RT formula.} of the boundary into $\mathcal{H} = \mathcal{H}_{A} \otimes \mathcal{H}_{\bar A}$. We can then say $\rho_{A}$ is the reduced state obtained by taking $\rho$ and tracing out $\mathcal{H}_{\bar A}$. Now we can phrase a quantum RT formula as: \begin{align} \text{entropy of } \rho_A - \text{entropy of } \rho_L \text{ visible from }A = \min_{\gamma_{A}} \text{Area}(\gamma_{A}). \end{align} We can define the entropy of $\rho_A$ via the von Neumann entropy $S(\rho_A)$. The other quantities are more challenging to define. 
The geometry itself may be a superposition, so that the area actually corresponds to an observable $L_A$ on the bulk Hilbert space $\mathcal{H}_L$. The area contribution to the RT formula is then the expectation $\langle L_A \rangle_{\rho_L} = \text{Tr}(\rho_L L_A)$. It can be reconstructed by a bulk observer given access to either subregion. The state $\rho_L$ describes the state of the bulk, and thus also captures the superposition over geometries. We are left with: \begin{align} S(\rho_A) = \text{entropy of } \rho_L \text{ visible from }A + \text{Tr}(\rho_L L_A). \end{align} To make the ``entropy of $\rho_L$ visible from $A$'' rigorous, we will need some tools from the quantum error correction literature. Our goal is to identify a collection of operators $\mathcal{M}$ on $\mathcal{H}_L$ that exactly capture what we can see given only access to the boundary subregion $A$. Then we can use this family of operators to define the entropy. Some language developed in the quantum error correction framework from \cite{beny2007quantum, beny2007generalization, kribs2005unified, kribs2006operator} is particularly useful for this purpose. These papers are concerned with what kinds of observables in $\mathcal{H}_L$ are affected by a general quantum error channel. Here we restrict their language to erasure errors, since we just want to erase the subregion $\bar A$. \begin{definition} Say $V:\mathcal{H}_L \to \mathcal{H}$ is an encoding isometry, and $A$ is a subregion that induces the factorization $\mathcal{H} = \mathcal{H}_{A} \otimes \mathcal{H}_{\bar A}$. We say a bulk operator $O_L \in \mathcal{L}(\H_L)$ is \textbf{correctable from $A$} if there is some boundary operator $O$ with support only on $A$ that lets us access $O_L$ via $V^\dagger O V = O_L$. If all the operators with support only on $A$ commute with $O_L$ after projection with $V^\dagger$, then the observable corresponding to $O_L$ cannot be measured from $A$. 
In this case we say $O_L$ is \textbf{private from $A$}. \end{definition} We are looking for a collection of operators $\mathcal{M}$ that exactly capture what degrees of freedom are visible from $A$. These are all correctable from $A$. To make sure we are not missing any operators, we want the `mirror image' of this condition to be true from $\bar A$. If $\mathcal{M}'$ is the set of all operators in the bulk that commute with $\mathcal{M}$ (also known as the `commutant' or the `normalizer of $\mathcal{M}$ in $\mathcal{L}(\H_L)$'), then we want $\mathcal{M}'$ to be correctable from $\bar A$. The center $Z_\mathcal{M} := \mathcal{M}\cap\mathcal{M'}$, which contains the area operator, is correctable from both regions. We call this condition `complementary recovery'\footnote{Holography experts: note that in classical holographic states, the equivalent of this condition is that access to a boundary subregion $A$ allows the reconstruction of bulk operators in (at least) the causal wedge of $A$, while access to the complement $\bar A$ allows the reconstruction of operators in the causal wedge of $\bar A$. See Figure \ref{fig:subregion}; although the union of these two causal wedges does not cover the entire spacetime, when $A$ is a \emph{spatial} subregion and the boundary state is pure it \emph{does} contain the entirety of a spatial slice of the bulk. (If the boundary state is mixed, the union won't cover an entire spatial slice; for example, there could be a black hole horizon beyond which the boundary-anchored geodesics will not penetrate.)}: \begin{definition} An encoding isometry $V$, a subregion $A$, and collection of operators $\mathcal{M}$ satisfy \textbf{complementary recovery} if $\mathcal{M}$ is correctable from $A$, and $\mathcal{M}'$ is correctable from $\bar A$. \end{definition} If we can find such a collection of operators $\mathcal{M}$, then we have exactly captured the degrees of freedom in $\mathcal{H}_L$ that are visible from $A$. 
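These two definitions can be checked numerically on a minimal example of our own: the classical-copy isometry $V|i,a\rangle = |i\rangle|a\rangle|a\rangle$, with the first two qubits in $A$ and the third in $\bar A$ (a toy stand-in for the circuit codes of Section~\ref{sec:examples}):

```python
import numpy as np

# Toy numerical check of "correctable" vs "private" for the
# classical-copy isometry V|i, a> = |i>_q0 |a>_q1 |a>_q2, with
# A = {q0, q1} and Abar = {q2}. This is our own minimal example.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
kron = np.kron

# Columns of V: the bulk basis state |i, a> maps to |i, a, a>.
V = np.zeros((8, 4))
for i in range(2):
    for a in range(2):
        V[4 * i + 2 * a + a, 2 * i + a] = 1.0
assert np.allclose(V.T @ V, np.eye(4))          # V is an isometry

# X on the bulk qubit i is correctable from A (X_{q0} reconstructs it),
# and Z on alpha is correctable from A (via Z_{q1}).
assert np.allclose(V.T @ kron(X, kron(I2, I2)) @ V, kron(X, I2))
assert np.allclose(V.T @ kron(I2, kron(Z, I2)) @ V, kron(I2, Z))

# Privacy from Abar: any operator supported only on q2, projected back
# to the bulk with V, commutes with the operators correctable from A.
rng = np.random.default_rng(1)
O = rng.standard_normal((2, 2))
proj = V.T @ kron(I2, kron(I2, O)) @ V
for M in (kron(X, I2), kron(I2, Z)):
    assert np.allclose(proj @ M, M @ proj)
```

Note that $X$ on $\alpha$ is correctable from neither side in this example: the CNOT copy makes $\alpha$ effectively classical, which is exactly the central degree of freedom discussed below.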
Now all that is left to do is to use $\mathcal{M}$ to define an entropy on $\rho_L$. It turns out that there is a very natural way of doing this if $\mathcal{M}$ is closed under multiplication, in which case $\mathcal{M}$ is a von Neumann algebra. In this case there is a natural generalization of the entanglement entropy called the `algebraic entropy' $S(\mathcal{M},\rho_L)$ (which we review in Section~\ref{sec:vonNeumann}). This finally lets us define what we mean by `entropy of $\rho_L$ visible from $A$', and state an RT formula in quantum mechanical language: \begin{align} S(\rho_A) = S(\mathcal{M},\rho_L) + \text{Tr}(\rho_L L_A). \label{eqn:intro_rt_formula} \end{align} In fact, a key result of \cite{harlow2017ryu} is that the existence of a von Neumann algebra $\mathcal{M}$ implies that the code satisfies an RT formula. \begin{theorem}\label{thm:simple_complementaritytoRT}\textbf{From Theorem~5 of \cite{harlow2017ryu}.} Say an encoding isometry $V$, a subregion $A$, and a von Neumann algebra $\mathcal{M}$ satisfy complementary recovery. Then there is an area operator $L_A$ such that (\ref{eqn:intro_rt_formula}) holds. \end{theorem} A summary of the notation from this discussion is to be found in Table~\ref{table:Rosetta}. In section~\ref{sec:vonNeumann} we give an introduction to von Neumann algebras and their properties. Then, in Section~\ref{sec:complementary_recovery} we present the above discussion in more mathematical detail and also outline some of our main results. 
\begin{table} \small \begin{center} \begin{tabular}{p{3cm}|p{4.2cm}|p{4.2cm}} \textbf{Symbol} & \textbf{Quantum Error Correction \mbox{Interpretation}} & \textbf{Holographic \mbox{Interpretation}} \\ \hline \hline $\mathcal{H}$ & physical space & boundary CFT \\ \hline $\mathcal{H}_{L}$ & logical space & bulk asymptotically-AdS space \\ \hline $V :\mathcal{H}_{L} \rightarrow \mathcal{H}$ & encoding isometry & AdS/CFT dictionary \\ \hline $\ket{\psi}_L$, $\rho_L$, $O_L$ & state, operator in the logical space $\mathcal{H}_L$ & state, operator in the bulk \\ \hline $\mathcal{H}_A$, where $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_{\bar A}$ & the region of $\mathcal{H}$ that remains after the erasure of $\bar A$ & a subregion of the boundary complementary to $\bar A$ \\ \hline $\ket{\psi}_A$, $\rho_A$, $O_A$ & state, operator in $\mathcal{H}_A$ & boundary state, operator in the $A$ subregion \\ \hline $\mathcal{M} $ & algebra of operators protected from the erasure of $\bar A$ & algebra of bulk operators in the entanglement wedge of $A$ \\ \hline $Z_\mathcal{M}$ & algebra of operators protected from the erasure of either $A$ or $\bar A$ & bulk operators that can be reconstructed from either $A$ or $\bar A$ \\ \end{tabular} \end{center} \caption{\textbf{A Rosetta stone for symbols and their quantum error correction and holographic interpretations}. First column: main symbols used throughout the paper. Second column: interpretation of the symbol in the language of quantum error correction. Third column: holographic interpretation of the symbol.} \label{table:Rosetta} \end{table} \subsection{Overview of our contributions} \label{sec:overview_our_contributions} The central goal of our work is to build concrete examples and analysis techniques using the work of \cite{harlow2017ryu} as a starting point. We begin with some general results that aid in the analysis of holographic codes.
First, we show that the von Neumann algebra of interest to holography is unique, and that there is a direct way of computing it. Then, we give several `atomic' examples of quantum error correction codes that manifest holographic features despite only possessing very few qubits. \begin{theorem} \label{thm:simple_vn_algebra}\textbf{What is the von Neumann algebra?} Say $V$ is an encoding isometry and $A$ is a subregion. If there exists a von Neumann algebra $\mathcal{M}$ such that $V,A,\mathcal{M}$ obey complementary recovery, then it is unique. Furthermore, let $\mathcal{M}$ be exactly the set of operators in the bulk that are correctable from $A$. If it is closed under multiplication, then it is the unique von Neumann algebra with complementary recovery. Otherwise, no such algebra exists. \end{theorem} This theorem is a direct consequence of a result from the quantum error correction literature: \begin{lemma} A von Neumann algebra $\mathcal{M}$ is correctable from $A$ if and only if it is private from $\bar A$. \end{lemma} The main idea is that complementary recovery restricts $\mathcal{M}$ from both sides: on the one hand $\mathcal{M}$ must be correctable from $A$ so it cannot contain too many operators. But on the other hand, since $\mathcal{M}'$ is correctable from $\bar A$, we must have that $\mathcal{M}'$ is private from $A$. So $\mathcal{M}$ must be large enough so that its commutant remains small enough to be private. In particular, when $\mathcal{M}$ is correctable from $A$ there is then no proper subalgebra of $\mathcal{M}$ whose commutant is correctable from $\bar A$. We comment on the implications of this fact for bulk reconstruction in the Discussion. The uniqueness theorem implies a concrete `recipe' for analyzing quantum error-correcting codes and determining their RT formulae. 
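As a numerical illustration of such an RT formula, the following sketch (a simplified variant of the encoding in Figure~\ref{fig:full_example}, not its exact circuit) encodes a bulk qubit $i$ and a classical degree of freedom $\alpha$ into five boundary qubits: $\alpha$ is copied to both sides of the cut, and $\alpha=1$ conditionally creates one Bell pair across it, so the area operator is $L_A = \ket{1}\bra{1}_\alpha$ with eigenvalues $0$ and $1$ bit:

```python
import numpy as np
from functools import reduce

# Numerical sanity check of S(rho_A) = S_c + S_q + Tr(rho_L L_A) for a
# 5-boundary-qubit toy encoding (our own simplified construction).
def entropy_bits(rho):
    # von Neumann entropy in bits
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

kron = lambda *ops: reduce(np.kron, ops)

p1 = 0.75                                    # Pr[alpha = 1]
rho_i = np.diag([0.9, 0.1])                  # mixed bulk qubit i
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # projectors |a><a|
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
chi = [np.outer([1.0, 0, 0, 0], [1.0, 0, 0, 0]),  # |00><00| if alpha = 0
       np.outer(bell, bell)]                      # Bell pair if alpha = 1

# Boundary order (i, alpha_A, pair_A | pair_Abar, alpha_Abar): the chi
# factor spans the two middle "pair" qubits straddling the A-Abar cut.
rho = sum(pa * kron(rho_i, P[a], chi[a], P[a])
          for a, pa in ((0, 1 - p1), (1, p1)))
rho_A = np.einsum('ajbj->ab', rho.reshape(8, 4, 8, 4))   # trace out Abar

S_c = entropy_bits(np.diag([1 - p1, p1]))    # classical (center) term
S_q = entropy_bits(rho_i)                    # quantum term visible from A
area = p1 * 1.0                              # <L_A> with L_A = |1><1|_alpha
assert np.isclose(entropy_bits(rho_A), S_c + S_q + area)
```

The three blocks of the decomposition mirror the discussion below: the copied $\alpha$ contributes the classical entropy, the unmeasured qubit $i$ the quantum entropy, and the conditional Bell pair the area term.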
In section~\ref{sec:complementary_recovery} we present Theorem~\ref{thm:simple_vn_algebra}, as well as a mostly self-contained derivation of Theorem~\ref{thm:simple_complementaritytoRT}. With all these mathematical tools together, the section culminates in a series of step-by-step instructions for analyzing a quantum error-correcting code. In section~\ref{sec:examples} we then practice this recipe on several examples. We construct these examples by building quantum circuits for the encoding isometry $V$. These examples are designed to flesh out the different terms of (\ref{eqn:intro_rt_formula}) to varying degrees of completeness. The section culminates in the example sketched in Figure~\ref{fig:full_example}. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{holographic_circuit.pdf} \end{center} \caption{\label{fig:full_example} A sketch of a quantum circuit with an RT formula. On the left side, $i, \alpha, j$ label three qubits in the bulk. The central degree of freedom $\alpha$ looks classical since it is visible from both $A$ and $\bar A$ via the CNOT. $\alpha$ also conditionally creates some of the entanglement across the bipartition via a Toffoli gate, which determines the area operator's eigenvalues. The bulk degrees of freedom $i,j$ are not encoded at all and are plainly visible from the boundary. A full technical explanation of the encoding can be found in Sections~4 and 5. See also Example~\ref{ex:cql} in particular. } \end{figure} Here we give a brief non-technical summary of how the code in Figure~\ref{fig:full_example} generates an RT formula. Since the full explanation is fairly involved, we focus only on the intuition behind the features and leave the technical explanation for Sections~4 and~5. We will review in Section~3 that the algebraic entropy $S(\mathcal{M},\rho_L)$ naturally divides into two terms: a classical term $S_c$ and a quantum term $S_q$. \begin{align} S(\rho_A) = S_c + S_q + \text{Tr}(\rho_L L_A). 
\end{align} The degree of freedom $\alpha$ is copied via CNOT onto an extra qubit, so that $\alpha$ can be seen from both $A$ and $\bar A$. This essentially `measures' the $\alpha$ degree of freedom in the computational basis, and makes it look entirely classical. This is the classical part of the entropy $S_c$. On the other hand, the $i$ and $j$ degrees of freedom are not measured, and thus retain their coherence. Subregion $A$ cannot see the $j$ qubit, but it can see the $i$ qubit. Thus, the von Neumann entropy of the $i$ qubit forms the quantum part of the entropy $S_q$. Finally, some of the entanglement across $A$-$\bar A$ stems from the subcircuit involving the Toffoli gate, which connects $\alpha$ and the two boundary regions. This entropy forms the area term $\text{Tr}(\rho_L L_A) $ since it is completely independent of entanglement between bulk degrees of freedom. However, the generation of entanglement is conditional on the value of $\alpha$, so the area operator $L_A$ is actually $\ket{1}\bra{1}$ on the $\alpha$ subsystem. In this sense $\alpha$ is a bulk degree of freedom that indexes which geometry we are in. The examples in section~\ref{sec:examples} are in some sense the simplest possible quantum error-correcting codes with non-trivial holographic properties. Their purpose is primarily to serve as pedagogical examples for understanding holographic quantum error correction. However, there are many possible future directions, some of which we elaborate in the Discussion below. An obvious direction is to try to use these examples as building blocks for a tensor network that supports superpositions of geometries. This is an idea that tensor network models seem to struggle to capture. Other possibilities include trying to find an example so that $\alpha$ is not a separate degree of freedom, or to add dynamics. \section{The holographic principle} \label{sec:holography_background} What characterizes a (quantum) gravitational/holographic theory? 
Which of these characteristics can be captured by low-dimensional discrete toy models? The goal of this section is to provide the interested reader with enough intuition about gravity and holography to answer these questions. In particular, reading this section is meant to establish the concepts and intuition necessary to understand the following key claim: \begin{itemize} \item The entanglement structure of holographic states is special. The entanglement entropy of a mixed state is not in general an observable. However, the entropy of the reduced density matrix in the spatial subregion $A$ of a \emph{classical} holographic state $\rho$ obeys an area law known as the Ryu-Takayanagi formula: \begin{align}\label{eq:classical_RT} S_A(\rho) = \frac{1}{4G_N} \min_{\gamma_{A}} \text{Area}(\gamma_{A}). \end{align} That is, entropies of subregions are proportional to the value of a geometric observable, the minimal area operator, which is obtainable by following a well-defined procedure (detailed later in this section) to construct the classical geometry corresponding to $\rho$, and the constant of proportionality involves the gravitational constant $G_N$. Furthermore, holographic states in a much larger class, where we allow entanglement of bulk degrees of freedom as well as superpositions of different geometries, nevertheless have entropies described by a more general RT formula: \begin{align}\label{eq:full_RT} S_A(\rho) = S_{\text{bulk},A}(\rho) + \text{Tr}(L \rho), \end{align} where $L$ is again a bulk observable whose eigenvalues are the minimal areas of the different geometries in the superposition. An RT formula also holds for the reduced state in the complementary subregion $\bar A$, with the \emph{same} operator $L$. \end{itemize} The following sections of the paper will be devoted to understanding the features of, and building examples of, quantum error-correcting codes which obey Eq.\ \eqref{eq:full_RT}.
In particular, we will show that an operator $L$ exists for codes which obey a \emph{complementary recovery} property relating errors correctable in a region $A$ to errors correctable in the complement of the region. These codes are themselves \emph{holographic}: even though the codes are not gravitational, encoded states have the same special entanglement structure. However, before moving to the error-correction setting, in this section we give more details on what the RT formula \emph{means}. We discuss how to describe a spacetime geometry which obeys the equations of general relativity: first using a metric, then using the more invariant data of geodesic distances and extremal areas. We then sketch how to think of a \emph{quantum state} which describes (a spatial slice of) a given geometry, and then the larger Hilbert space which allows for entangled degrees of freedom living on these geometries, as well as states describing superpositions of geometries. Finally, we move from this abstract discussion to the more concrete setting of holographic theories which give dual descriptions of some of these quantum-gravitational Hilbert spaces, allowing us to measure bulk observables via a holographic operator dictionary and relating the bulk geometry to the boundary entropic structure. \subsection{Some general relativity} We begin by explaining the objects that appear on the right-hand side of \eqref{eq:classical_RT}; in later subsections we'll discuss the meaning of the left-hand side and of the generalization \eqref{eq:full_RT}. These are geometric quantities, so we'll first need to explain what we mean by a geometry, by introducing the notion of a metric to define the distance between points and along curves, and then the special curves called geodesics which define the causal structure of a spacetime. We'll then pass from mathematics to physics by discussing how the Einstein field equations relate the geometry to the energy and matter living on top of it. 
Finally, we'll give a (reasonably) careful discussion of the symmetries of spacetime and of the Einstein equations. Many metrics can describe the same spacetime, so if we want to work with physical quantities we need objects which don't change when we alter the metric but leave the spacetime unchanged. We'll see that, when a spacetime has a boundary, one of these quantities is precisely the minimal area operator appearing in \eqref{eq:classical_RT}. We recommend that the interested reader looking for a more complete but still concise introduction to GR consult \cite{carroll2001no}. \subsubsection{Metrics and distances}\label{sub:metrics} The full machinery of quantum gravity won't be necessary for this review, but it will be useful to establish some intuitions and terminology. We begin with classical general relativity. The space in this theory is a particular non-flat $D$-dimensional geometry. Formally, what we mean by a ``geometry'' is some smooth (differentiable) manifold, which we can describe by some set of coordinates $\{x^\mu\}_{\mu=0}^{D-1}$, where the index $\mu$ ranges over the $D$ dimensions of the manifold. What we mean by ``curved'' is that distances between points in this manifold aren't given by the Euclidean distance. Instead, we use a more general notion of distance, a \emph{pseudo-Riemannian metric} $g_{\mu \nu}$. Using the metric, we can define the \emph{line element} \begin{equation} ds^2 = g_{\mu \nu} dx^\mu dx^\nu, \end{equation} where we are adopting the convention that repeated indices are summed over. That is, at every point of space the metric is a \emph{matrix} (or, more formally, a two-index tensor): if we specify a particular point $x$ and pick two coordinate directions $\mu,\nu$ we can find the matrix element $g_{\mu \nu}$.
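As a simple concrete example (our illustration, not part of the development above), consider the round two-sphere of radius $R$, described by the usual angular coordinates $(\theta, \phi)$: \begin{equation} ds^2 = R^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right), \end{equation} so that $g_{\theta \theta} = R^2$, $g_{\phi \phi} = R^2 \sin^2\theta$, and the off-diagonal components vanish. The fact that $g_{\phi \phi}$ depends on $\theta$ (circles of constant $\theta$ shrink near the poles) is a coordinate-level hint that this metric describes a genuinely curved geometry.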
When $g_{\mu \nu}=\delta_{\mu \nu}$ at all points in space, we recover the special case of the Euclidean metric in $D$ dimensions: the line element is \begin{equation} ds^2_\text{Euc} = (dx^0)^2 + \ldots + (dx^{D-1})^2 \equiv dx^\mu dx^\mu. \end{equation} However, we more often have cases in which some of the coordinates are \emph{timelike}, $g_{aa}<0$. In particular, when exactly one of the coordinates (by convention, $x^0$) is timelike (or, more precisely, when the metric has one negative eigenvalue everywhere on the manifold), we say that the metric is \emph{Lorentzian}. A simple example is given by taking the Euclidean metric and putting a minus sign in front of the $00$ (``time-time'') component: \begin{equation} ds^2_\text{SR} = -(dx^0)^2 + \ldots + (dx^{D-1})^2 \equiv \eta_{\mu \nu} dx^\mu dx^\nu, \end{equation} where $\eta_{\mu \nu} = \mathrm{diag}(-1,+1,\ldots,+1)$. This is a metric which describes the situation of \emph{special relativity}: we can see that, for any fixed value of $x^0$, the \emph{spatial} part of the metric is still flat. The line element in turn allows us to compute the length of a curve $\gamma$, which we can parametrize as a choice of coordinates at each point on the curve: $\gamma(\lambda)=x^\mu(\lambda)$. The length of the curve is given by adding up the infinitesimal displacements along the curve, i.e. the arc length integral \begin{equation}\label{eq:arc_length} \left|\gamma\right| \equiv \int_0^1 d\lambda \sqrt{ds^2} = \int_0^1 d\lambda \sqrt{g_{\mu \nu} \frac{dx^\mu(\lambda)}{d\lambda} \frac{dx^\nu(\lambda)}{d\lambda}}. \end{equation} In a given geometry, we can construct the set of \emph{all possible} (smooth) curves which connect two points (in the equation above, the points are $x^\mu(0)$ and $x^\mu(1)$).
Individual curves in this set depend on some choice of coordinates, but, crucially, the entire set depends only on the geometry and the choice of points\footnote{Admittedly, so far we've labelled these points in a particular coordinate system, but we could just call them $A$ and $B$, or alternatively consider any coordinate system that preserves the locations of the two points but allow the coordinates to vary on the rest of the geometry. Below we're going to consider the whole \emph{space} of geodesics, and that will remove even this dependence.}. So any quantity we can compute given access to the entire set is coordinate-independent. In particular, we'd like to use the set to come up with a coordinate-independent distance between two points. In Euclidean (or more generally Riemannian) metrics, one such quantity is \emph{the length of the shortest curve connecting two points}. If there's a timelike direction, this isn't true anymore: we can take a curve and add zig-zags in the timelike direction which will make the curve steadily shorter and shorter. So we can't simply take this as our distance measure. The right generalization turns out to be to consider the lengths not of \emph{minimal} curves, but \emph{extremal} ones. To find these curves, we take the arc length integral \eqref{eq:arc_length}, consider it as a functional of the curve $\gamma$, vary the curve $\gamma\rightarrow\gamma + \delta\gamma$, and look for stationary points of the variation. This is the fundamental problem of the calculus of variations, and its solution is given by solving the Euler-Lagrange equations with the action taken to be the line element $ds=\sqrt{ds^2}$. We won't write this down explicitly, but the equation to be solved is known as the \emph{geodesic equation} and the extremal curves are called \emph{geodesics}; in GR, they're the curves traced out by non-accelerating (``freely-falling'') observers. 
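To make the arc-length functional \eqref{eq:arc_length} concrete, here is a minimal numerical sketch (ours; the function and variable names are illustrative, not from any library) for the special case of the Euclidean metric in the plane, where the extremal curves are straight lines:

```python
import math

def arc_length(points):
    """Approximate the arc length of a piecewise-linear curve in the
    Euclidean plane (metric g_{mu nu} = delta_{mu nu}) by summing the
    lengths of its segments, a discrete version of the integral."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Two candidate curves from (0, 0) to (1, 0):
n = 200
straight = [(i / n, 0.0) for i in range(n + 1)]
# A "bent" curve with the same endpoints, bulging upward.
bent = [(i / n, 0.25 * math.sin(math.pi * i / n)) for i in range(n + 1)]

print(arc_length(straight))  # ~1.0: the extremal (here, minimal) length
print(arc_length(bent))      # strictly larger
```

Deforming the bent curve toward the straight one drives its length down to the extremal value; solving the geodesic equation is the continuum version of this minimization.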
If all of the geodesics connecting two points have the same length, we call this length the \emph{geodesic distance} between two points; if there are multiple geodesics with different lengths, we take the geodesic distance to be the shortest such length. In geometries with one timelike direction, the geodesic distance between pairs of points gives a \emph{causal structure} for the geometry: for all pairs of points, we can tell whether they are spacelike, timelike, or null separated by checking whether the geodesic distance between them is, respectively, positive, negative, or zero. When two points are timelike separated, we often call the negative of the geodesic distance the \emph{proper time}; with appropriate units, it measures the time elapsed by a clock carried by an observer freely falling between the two points. Crucially, as we'll discuss below, the causal structure is really a property \emph{of the geometry itself}: the set of all geodesics on a manifold is independent of how the metric is parameterized, and two metrics describe the same geometry precisely when they produce the same causal structure. It should be clear that we can generalize this entire discussion by passing from curves to higher-dimensional objects (surfaces, volumes, etc.). Instead of the arc length integral \eqref{eq:arc_length} we have some higher-dimensional integral, which we vary to find stationary solutions: extremal surfaces, volumes, etc. Like the geodesics, these are coordinate-invariant objects. For ease of drawing figures, we'll typically work in two space and one time dimension.
This hopefully explains our choice of notation in \eqref{eq:classical_RT}: the $A$ is a subregion of a spatial slice of the boundary, that is, a dimension $D-2$ (``codimension 2'') object, and the minimization is over the areas of extremal dimension $D-2$ objects in the bulk of the spacetime that touch the boundary at the edge of $A$ (actually a subclass of these objects, as we'll discuss towards the end of this section). If, like our universe, $D=4$, $A$ would be two-dimensional. But in three spacetime dimensions $D-2=1$, so the boundary is equivalent to a circle, $A$ is some portion of that circle, and the relevant extremal objects are curves which we've accordingly labeled as $\gamma_A$. We nevertheless call the operator which computes their length an \emph{area} operator because in general spacetime dimension the invariant objects have \emph{codimension} 2. \subsubsection{The Einstein field equations} So far we have just done mathematics (differential geometry, to be precise). General relativity is a physical theory which relates the geometry of a manifold to the matter distribution living on it. More precisely, the Einstein field equations read \begin{equation}\label{eq:einstein} G_{\mu \nu} = 8 \pi G_N T_{\mu \nu}. \end{equation} It won't be necessary for us to precisely define the objects in this equation, but we mention several features of it: \begin{itemize} \item $G_{\mu \nu}$ is the ``Einstein curvature tensor'', a geometric object which is a function of the metric and its derivatives. \item $T_{\mu \nu}$ is the ``stress-energy tensor.'' In a particular coordinate frame in a Lorentzian spacetime, we can identify $T_{00}$ as the energy density, $T_{0k}$ as momentum density in the $x^k$ direction, and the mixed components as pressures and stresses. \item \eqref{eq:einstein} is $D^2$ equations given by different choices of $\mu,\nu$, but both the left- and right-hand sides of the equation are symmetric under exchange of $\mu$ and $\nu$, e.g. 
$G_{\mu \nu}=G_{\nu \mu}$, so there are only $D(D+1)/2$ independent equations, 10 in 4 spacetime dimensions. For a fixed choice of stress-energy tensor, this is a set of (second-order) nonlinear coupled differential equations which determine the metric. \item Although it isn't manifest in this form, we can often rewrite the Einstein equations as an \emph{initial value problem}: if we know the metric and its derivatives and the stress-energy, \emph{at one particular moment in time}, i.e. everywhere in space for one particular value of a timelike coordinate, we can use the field equations to tell us what the metric will be at some later time\footnote{When we're trying to solve Einstein's equations on a manifold with a boundary, we need to give (spatial) boundary conditions in addition to initial data. This is the case, in particular, for the asymptotically-AdS spacetimes that are of interest in holography.}. Now we can study, for example, backreaction---given some particular matter configuration, how does the geometry evolve? \end{itemize} \subsubsection{Diffeomorphism invariance}\label{sub:diff} We said above that a particular geometry is described by a manifold specified by a metric. In general, however, there is not a one-to-one correspondence between metrics and geometries---there are many metrics which describe the same geometry. We're already used to being able to use different coordinate choices to describe the same physics: in Newtonian physics we're free to choose different origins and choices of axis for our coordinate system, or, with a little more work, to make one coordinate system move and rotate with respect to another. We can tell that two seemingly different situations are actually the same thing in different coordinate systems when the laws of physics are the same in both situations.
In Newtonian physics, for example, acceleration is the same in all inertial frames, and Newton's laws of motion depend only on the acceleration: they have the property of ``Galilean invariance." Similarly, in special relativity, the laws of physics are invariant under Lorentz transformations; two observers may differ, for example, on what the strength of an electric or magnetic field is but they will agree on the appropriate invariant combinations. So, to answer the question of whether two metrics describe the same geometry, we should compute invariant quantities and check whether they are the same in both situations. In GR, the invariant quantities are related to the causal structure: they are the proper distances \eqref{eq:arc_length} along geodesics. Two metrics which share the same causal structure are related by a \emph{diffeomorphism}. That is, in general a given spacetime can be described by multiple distinct metrics. We emphasize that the \emph{metrics} really are distinct; it's only by computing ``diff-invariant" quantities that we can check that they describe the same spacetime. Another way to phrase this is that the metric doesn't only contain physical information about a spacetime, it \emph{also} contains extra redundant information that doesn't matter to an observer. (Think of the choice of the origin and axes for a flat metric, for example.) In high-energy physics one often refers to this information as ``gauge'' degrees of freedom. When we go from a metric to the physical quantities, i.e.\ the geodesics, we ``gauge out" these degrees of freedom so that only the physical ones remain. We say that two metrics are ``gauge-equivalent" if they're related by a gauge transformation, i.e.\ there is a diffeomorphism which takes one metric to the other. These two metrics are members of a ``gauge orbit", the equivalence class of all gauge-equivalent metrics. 
We ``gauge-fix" by specifying enough information to restrict to a subset of metrics in the equivalence class; if we specify so much that only one metric remains, we've totally fixed the gauge. A ``gauge'' is just another word for a measuring device; think of gauge-fixing as specifying the properties of this measuring device, i.e. giving enough information that two observers can agree on how to perform a measurement. In Newtonian physics, for example, we'd gauge-fix by fixing the direction of each coordinate axis, and a position and velocity for the origin of the coordinate system. None of that affects the physics, but if you want to check someone else's measurements you'll need that information. However, it's important to point out that not all gauge transformations preserve all of the information we might call physical. The issue arises when we consider metrics for manifolds with \emph{boundaries}. It's useful to gain some intuition by first thinking about the equivalent case in electrodynamics. Recall that the behavior of charged particles and electromagnetic fields is governed by Maxwell's equations. However, just like the metric, electric and magnetic fields are not gauge-invariant; only certain combinations of them (like $E^2 + B^2$) are. Just like Einstein's equations, the Maxwell equations are also differential equations, and hence their solution in a region with a boundary depends on a choice of boundary conditions. We could solve for the behavior of the electromagnetic fields inside a conducting sphere, for example, or with a charged surface. The point is that these boundary conditions represent an additional set of physical information. In general, then, the set of gauge transformations which preserve all physical quantities will be \emph{smaller} than if we didn't worry about boundary conditions at all. This same issue arises even when we're not placing boundary conditions at some particular region of space, but instead placing them ``at infinity."
In this case there is a precise language used to talk about these types of boundary conditions. We ask whether the gauge transformation has any effect at infinity, or, equivalently, if it can be distinguished from the identity transformation in the limit that we go very far away from the origin of our coordinates. If it can't, we call the gauge transformation a ``small gauge transformation". If it can, we call the gauge transformation a ``large gauge transformation." And we have in mind that large gauge transformations are \emph{physical} while small gauge transformations are not. A large gauge transformation in electrodynamics can, for example, change the total charge a distant observer measures enclosed within some radius. A similar story holds in general relativity, but now the gauge transformations are diffeomorphisms applied to metrics. A small gauge transformation of the metric is one that takes $g_{\mu \nu}\rightarrow g_{\mu \nu}+\delta g_{\mu \nu}$, with $\lim_{x^\mu\rightarrow\infty} \delta g_{\mu \nu}(x^\mu)=0.$ A large gauge transformation can, for example, change the total invariant mass enclosed within some region. One convenient way to gauge-fix in general relativity is to fix a direction of time everywhere on the spacetime, or equivalently identify points which are on ``the same spatial slice" at a given time. In four spacetime dimensions, this is referred to as a \emph{3 + 1 decomposition}. Geometrically, we can think of this as a foliation\footnote{It turns out that there are some manifolds where it isn't actually possible to do such a foliation, but none of these exotic spacetimes will be relevant for our purposes. For a review of this formalism, which is most important when solving Einstein's equations numerically, see \cite{gourgoulhon20123+}.} of the spacetime into spatial slices. Again, when the spacetime has a boundary (in a spacelike direction), some foliations will coincide on the boundary and others will not.
It's only the foliations which look the same at the boundary which we think of as describing the same physics. As we'll discuss below, the RT formula applies to spacetimes that have a (spatial) boundary. Invariant quantities are those which are left unchanged by ``small diffeomorphisms'', i.e.\ diffeomorphisms which leave the boundary unchanged. In particular, the invariant quantities of interest for the RT formula are the areas of extremal surfaces which end on the boundary. In $2+1$ dimensions, these are geodesics which extend between points on the boundary; in higher dimensions there are also extremal surfaces, volumes, etc. which touch the boundary. In subsequent sections we will talk not about spacetimes, but about \emph{Hilbert spaces} with gauge symmetries and redundancies. Although this type of gauging can be described independently of anything having to do with gravity, we will always have in mind that holographic error-correcting codes should indeed exhibit some version of the gauge symmetries we see in gravity. In particular, our codes will manifest a particular version of the observation that large gauge transformations describe physically distinct spacetimes: we will see that changing the gauge in the holographic code yields a different result when measuring with an ``area operator." See Appendix \ref{app:2x2bacon-shor} for a discussion of the Bacon-Shor code which is phrased in the language of gauges. \subsection{Towards quantum gravity} General relativity is a \emph{classical} theory. Just like Newton's laws govern the behavior of massive particles and extended objects moving, accelerating, and applying forces to each other, and Maxwell's electrodynamics govern the coupled behavior of charged objects and electromagnetic fields, Einstein's equations \eqref{eq:einstein} govern the coupled behavior of energy distributions and geometry.
By ``govern the behavior", we really mean that given enough data to describe things at an initial time (the position and velocity of particles, the electric and magnetic fields everywhere, the stress-energy tensor and metric), we can use the theory to find a description at a later (or earlier) time. Quantum mechanics is also a theory in this sense: given a Hamiltonian and an initial wave function, evolution is governed by the Schr\"odinger equation. If we arrived at the theory by quantizing an initial theory, we can get back the classical quantities by applying the appropriate observables (i.e., Hermitian operators) to the wave function: for example, for the quantum mechanics of a point particle in a potential, position or momentum operators. For reasonable choices of Hamiltonian, the \emph{expectation value} of these observables will evolve smoothly--but when we measure the observable we project onto one of its eigenstates according to the Born rule, and only by repeatedly resetting the system, evolving, and measuring can we actually get access to the expectation value. Finding a fully consistent theory of quantum gravity is beyond the scope of this review, to put it mildly, but we \emph{can} say some things confidently. In the quantum theory of a point particle, the classical observables (i.e., the observables which reduce to classical quantities in the classical limit) are the operators which measure position, velocity, etc. The classical observables of a field theory are, similarly, the operators which measure field value and its derivatives. The classical observables of a gravitational theory, then, when applied to states corresponding to classical geometries, measure the metric, stress tensor, etc. So, at minimum, we expect the Einstein equations to hold in some classical limit. 
That is, the Einstein equations suggest a schematic operator equation \begin{equation} \hat G_{\mu \nu}\: ``="\:8 \pi G_N \hat T_{\mu \nu}, \end{equation} where the hat indicates that this is an operator expression, and we've put quote marks around the equality to emphasize that it's not really precise. What we really mean by this expression is that, for classical states $\ket{\Psi}$ which are simultaneous eigenstates of both operators, \begin{equation} \hat G_{\mu \nu} \ket{\Psi} = G_{\mu \nu} \ket{\Psi} = 8 \pi G_N \hat T_{\mu \nu} \ket{\Psi} = 8 \pi G_N T_{\mu \nu} \ket{\Psi}, \end{equation} which automatically implies as well that \begin{equation} \left\langle\hat G_{\mu \nu}\right\rangle_\Psi = 8 \pi G_N \left\langle\hat T_{\mu \nu}\right\rangle_\Psi. \end{equation} Again, we emphasize that making these expressions precise is complicated in full quantum gravity. The abundant gauge freedoms in GR which we discussed in the previous subsection mean that the notion of a local operator is itself subtle, for example. Keeping this in mind, we can proceed to move gingerly away from exactly classical states (i.e., exact eigenstates of these operators), in two ways: \begin{itemize} \item We can use perturbation theory to understand the result of measuring operators in states close to classical states. For example, if we have a massive system in some superposition of locations, we can see that the expectation value of the metric is that sourced by the average position of the mass, but that measuring quantities sensitive to the metric (for example, the motion of a test mass passing near the system) will project the wave function onto a state of definite metric (and so the particle will be seen (experimentally! \cite{PhysRevLett.47.979}) to follow a geodesic of this metric, emit gravitational waves quantized as gravitons, etc.).
For a given classical geometry, we can use these sorts of techniques to work our way all the way up to the full machinery of quantum field theory in curved space. At the linear, perturbative level, the graviton enters as just another type of field. (To be clear, though, perturbation theory has its limits! We can't use this machinery to fully quantize gravity, which is famously impossible using just the machinery of quantum field theory.) \item We can use the linearity of quantum mechanics to discuss not only states close to particular geometries but \emph{superpositions} of distinct geometries. \end{itemize} It's important to emphasize that there's a major caveat with this second point: the linearity of quantum mechanics applies to states \emph{in a fixed Hilbert space}. Let's return to the basic example of the quantum-mechanical theory of a single particle in a potential. There's a position operator on this Hilbert space, and we understand how it acts both on eigenstates of position and on general states (because we can write a general state in the position basis). But this doesn't tell us how to act on states of a single particle in a different potential. Actually, there are ways to give a sensible answer to this question. We could arrange for the particle to move in a given potential by coupling it to another set of degrees of freedom, an ``external field'', so that the potential is recovered for a fixed state of the fields. Then, in this new, larger Hilbert space, we now have a way to talk about a superposition of a particle in one potential with a particle in a different one. But doing so requires us to understand how to embed the original Hilbert space into the larger one. This (finally) is where holography comes in. You might sensibly worry that we could only ever measure the metric, stress-energy, geodesic distance, etc., on a Hilbert space describing states perturbatively close to a single geometry. 
But holography, as we'll describe next, does much better than this. And, to be clear, we have very good reasons to expect that quantum gravity does in fact require us to deal with superpositions of different geometries! As we discussed in the first bullet point above, we can imagine coupling the metric to a quantum degree of freedom--for example, arranging to move a test mass into one of two locations depending on the result of a projective measurement. Even without explicitly arranging for this ourselves, though, there are (at least) two places where nature as we understand it naturally creates superpositions. One is cosmology: in the early universe, quantum variance of an inflating field \cite{Guth:1980zm,Linde:1981mu,Albrecht:1982wi} could have been converted \cite{Polarski:1995jg,Lombardo:2005iz} during the Big Bang into superpositions of different classical configurations of matter, radiation, etc. which ultimately seeded the large-scale structure of the universe. Another is black hole evaporation: according to Hawking's famous calculation \cite{Hawking:1975vcx}, a geometry with a black hole in it can ultimately evolve into a superposition of many possible states which each contain no black hole but rather some collection of matter and radiation, which in turn can source distinct spatial geometries. So, if we want our theory of quantum gravity to describe any of these scenarios, we're going to have to be able to work in a Hilbert space that allows for superpositions of geometries. \subsection{Holographic theories} Holographic theories are ones in which a gravitational theory can be described using a different non-gravitational theory ``at the boundary.'' These theories implement the desired feature of the last subsection: we can use them to describe superpositions of states which describe distinct geometries. 
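As a toy numerical illustration (ours, not a construction from the holography literature) of how a mixture over distinct geometries contributes to subregion entropies, consider the standard entropy identity for block-diagonal density matrices, which has the same structure as the classical and area terms in \eqref{eq:full_RT}:

```python
import math

def shannon(ps):
    """Shannon entropy -sum p log p (in nats), skipping zero entries."""
    return -sum(p * math.log(p) for p in ps if p > 0)

# Toy model: a mixture over "geometries" alpha, occurring with
# probability p[alpha]; within each geometry the reduced state on a
# region is diagonal with spectrum spec[alpha].
p = [0.5, 0.5]
spec = [[1.0], [0.5, 0.5]]  # geometry 0: pure; geometry 1: maximally mixed qubit

# The full reduced density matrix is block diagonal, with block alpha
# equal to p[alpha] times the state in that geometry, so its spectrum is:
full_spec = [pa * q for pa, block in zip(p, spec) for q in block]

S_total = shannon(full_spec)          # total entropy of the region
S_which_geometry = shannon(p)         # classical "which geometry" piece
S_within = sum(pa * shannon(block) for pa, block in zip(p, spec))
# Identity for block-diagonal states: S_total = H(p) + sum_a p_a * S_a
```

The ``which geometry'' piece is the analogue of the entropy carried by a classical geometry label, on top of the average of the entropies within each geometry.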
Unfortunately, in the best-understood examples of holography none of these geometries look anything like our universe: in particular, they have \emph{negative spacetime curvature}, meaning that at large distances the spacetime metric becomes hyperbolic (``asymptotically anti-de Sitter''). Our universe, as best as we can tell, looks like it has \emph{positive} spacetime curvature. So we can't just immediately interpret our universe as a particular state in a holographic Hilbert space. However, holographic theories are nevertheless worth studying not only because they have a nice, well-understood Hilbert space, but also because states in these theories that describe geometries, or superpositions of geometries, have a number of nice properties, not least the RT formula itself. Like in the previous subsections, but perhaps even more so, our discussion here will only scratch the surface of what is by now a vast literature. Our goal will be to reach the RT formula and its interpretation, in particular, and along the way we will sometimes be heuristic (and we note in a few footnotes places where the main discussion has been imprecise). We recommend that interested readers looking for more comprehensive reviews on the aspects of holography most closely related to quantum information consult, for example, \cite{harlow2016jerusalem,harlow2018tasi} and references therein. First, we'll say a bit more about how the known examples of holographic theories actually work. Then, finally, we'll be in a position to present the RT formula once and for all. After we do that, we'll take the time to introduce a few last concepts to which it will be useful to refer later in the paper: the geometric notions of the causal and entanglement wedge, and the properties of complementary recovery and radial commutativity. \subsubsection{The holographic dictionary} Let's be a little more precise about what it means to describe one theory using another.
The fundamental objects in any quantum theory are states and observables. In the last subsection we described how to think about states describing quantum fields on top of a spacetime geometry, or a superposition of spacetime geometries. In particular, when a state describes (a spatial slice of) a classical spacetime geometry satisfying the Einstein equations, it is an eigenstate of certain operators, with eigenvalues given by diffeomorphism-invariant quantities like the length of a geodesic, area of an extremal surface, etc. Then we can compute the expectation values of these operators on states close to these classical ones, and the linearity of quantum mechanics then allows us to compute the expectations on superpositions of near-geometric states. It was realized in the 1990s by string theorists \cite{maldacena1999large,Gubser:1998bc,Witten:1998qj,Aharony:1999ti} that, for states describing asymptotically anti-de Sitter geometries in $D$ dimensions, the expectation values of \emph{all} of these operators could instead be computed using operators in a non-gravitational $(D-1)$-dimensional theory. In particular, one major result of the known holographic correspondences is that there is a precise dictionary for matching operators in the gravitational theory inserted at points near the AdS boundary to operators in the ``boundary theory'', and a precise prescription \cite{Hamilton:2005ju,Hamilton:2006az,Skenderis:2008dg,Christodoulou:2016nej} for integrating over points on the boundary to reconstruct operators deeper into the bulk of the spacetime. For the purposes of this paper, we won't really need to know about the details of the boundary theory: just the entropies of reduced density matrices constructed from (some of) the states in the theory. However, it's worth mentioning two of their properties. 
First, this type of correspondence could only make sense if the boundary theory at least had the same symmetries as the symmetries of the gravitational theory \emph{at its boundary}. In the language of Subsection \ref{sub:diff} above, these are the small gauge symmetries of diffeomorphisms that leave the boundary at spatial infinity unchanged. With a little bit of work, you could stare at a metric that describes hyperbolic space and figure this out---it turns out that the group of transformations that do this is the \emph{conformal group} of angle-preserving transformations. And, accordingly, the boundary theories are \emph{conformal} field theories. Now you know the reason why another name for the holographic correspondence is ``the AdS/CFT correspondence''! Second, hyperbolic (and spherical) metrics have a length scale, the ``anti-de Sitter length''. Einstein's equations \eqref{eq:einstein} \emph{also} have a length scale, the Planck scale, which can be derived (in a dimension-dependent way) from the Newton gravitational constant $G_N$. The ratio of these two length scales is a dimensionless number, which also appears in the conformal field theory on the boundary. In spacetimes that look classical, this ratio needs to be very large: it's the ratio between ``cosmological'' scales and ``quantum gravitational'' scales. Accordingly, boundary CFTs that can describe classical-looking geometries have a very large number of fields---they're often referred to as ``large-$N$ CFTs.'' And it's a fact about (non-free) conformal theories that the larger the number of fields, the more strongly the fields couple to each other. So, when used to describe classical geometries, the holographic correspondence relates gravity in asymptotically AdS spacetimes to the behavior of strongly-coupled conformal field theories.
For the purposes of this article we'll only care about fundamentally \emph{gravitational} observables like the lengths of geodesics, etc., whose expectation values on classical states we can compute knowing only the metric. But it's important to understand that this dictionary doesn't apply only to these, it also applies to any other diff-invariant observable built from the \emph{fields} in the theory--for example, the stress tensor on the right-hand side of the Einstein equations, or just the expectation value of a field at some point. So far, this might just seem like an interesting coincidence, but no more than that. After all, as discussed in the previous subsection, we already in principle know how to compute these observables for nearly-classical states: we write down a metric describing the geometry on the state, solve the Einstein equations to get the field configurations on top of the geometry, then perturb these field configurations slightly and see how this backreacts on the geometry using the operator form of the Einstein equations. However, there are a few reasons the existence of a holographic description is exciting: \begin{itemize} \item Sometimes we can compute quantities easily in the gravitational theory but not the non-gravitational theory, or vice-versa. The RT formula itself is an example of this: it's a straightforward mechanical task to compute the areas of extremal curves given the metric, but computing the entropy of a CFT state requires first writing down what the state is, already a nontrivial task, and then doing all the integrals to trace out the state of the fields outside the region of interest. \item As we discussed in the last subsection, we're free to compute the expectation values of states that \emph{aren't} nearly-classical. These states need not come anywhere near solving the classical Einstein equations, i.e. they might not be easily described as geometric at all! 
Yet they live in the same Hilbert space as all the nearly-classical states. We can think of these as \emph{bona fide} quantum-gravitational states! (In fact, historically the logic worked almost the other way around: the holographic dualities were used within string theory to exhibit examples of states that weren't ``stringy'' but had nearly-classical descriptions.) \end{itemize} \subsubsection{The RT formula}\label{sub:RT} Now, at last, let's return to the versions of the RT formula we presented at the start of this section. First, the version \eqref{eq:classical_RT} that applies to a holographic state dual to a classical geometry: \begin{align} S_A(\rho) = \frac{1}{4G_N} \min_{\gamma_{A}} Area(\gamma_{A}). \end{align} We've already understood the meaning of the right-hand side from previous sections. If we have a metric already, we can find the geodesics (or extremal surfaces) which hit the boundary at precisely the edge of the boundary region $A$. If there are multiple such geodesics, we choose the one with the smallest area\footnote{\label{fn:homology}We mention one caveat for experts: if $A$ is the entire boundary, and $\rho$ is mixed, then it might seem like the RT formula leads us astray, because the boundary of $A$ is the empty set and so any geodesic which is a closed circle hits the boundary at the empty set, i.e. doesn't touch it at all. This puzzle was resolved by realizing that the spacetime dual to a thermal state is \emph{AdS-Schwarzschild}, i.e. hyperbolic spacetime with a black hole of the appropriate temperature sitting in the bulk. Then we get the correct result if we take the minimal surface to be the one which wraps around the horizon of the black hole.
To get this, we need to impose a ``homology constraint'': the only geodesics which we consider in the minimization in \eqref{eq:classical_RT} are those which not only meet the boundary at the appropriate place, but can also be continuously extended through the bulk to touch $A$ without crossing any holes or horizons in spacetime, i.e. those that are ``homologous to $A$.''}. On the left-hand side, the state $\rho$ lives in the Hilbert space of a large-$N$ CFT, with $N$ related to the Newton constant $G_N$ as discussed above. In principle, we can perform the field-theoretic equivalent of tracing out a subregion, which involves integrating out the values of fields outside the region with appropriately chosen boundary conditions, then take a logarithm to find the entropy. In practice, the computation is usually done by holographers using slicker mathematical techniques, for example computing $\text{Tr}(\rho_A^n)$ and then obtaining the entropy by taking a limit in $n$. Even so, it's very hard to carry out this procedure except in states close to certain highly symmetric states like the vacuum or a thermal state. Now let's consider the more general RT formula \eqref{eq:full_RT} which applies to states with entanglement in the bulk and superpositions of geometries: \begin{align} S_A(\rho) = S_{\text{bulk},A}(\rho) + \text{Tr}(L \rho). \end{align} $L$ is an operator in the quantum-gravitational Hilbert space. Its eigenstates are the states dual to classical geometries (with the AdS length appropriate to the particular CFT under consideration), and its eigenvalues are the areas of the minimal surfaces that meet the boundary subregion.
However, roughly speaking\footnote{For a discussion of the limitations of this approach, and the circumstances under which the RT formula breaks down (essentially, when there are very many, exponential in $N^2$, terms in the superposition), see \cite{Almheiri:2016blp}.} there are many different ``field-theoretic'' states that live on the same curved spacetime. The bulk entanglement term identifies which of these states (or rather, which equivalence class of states with the same bulk entanglement inside the extremal surface) is described by $\rho$. Hence neither the left-hand nor the right-hand side of \eqref{eq:full_RT} is the expectation value of an observable, but the difference between the boundary and bulk entropies \emph{is}. Moreover, as we will discuss below, $L$ is an operator which can be obtained given access to the reduced state on either $A$ or its complement $\bar A$. \subsubsection{The causal and entanglement wedges}\label{sub:wedges} Which bulk operators can we reconstruct given access to a particular boundary region? As discussed in Subsection \ref{sub:metrics} above, Lorentzian metrics have \emph{causal structure}, so we know that only those parts of the boundary that can send a signal to, or receive a signal from, a given point or region can affect the value of an operator there. This is formalized by the notion of the \emph{causal wedge}, depicted in Figure \ref{fig:subregion}: \begin{figure}[htb!] \centering \includegraphics[height=8cm]{bulk_reconstruction.pdf} \caption{(This is a slightly modified version of Figure 11 in~\cite{pastawski2015holographic} and its caption.) Bulk field reconstruction in the causal wedge. On the left is a spacetime diagram, showing the full spacetime extent of the causal wedge $\mathcal{C}[A]$ associated with a boundary subregion $A$ that lies within a boundary time slice $\Sigma$. The point $x$ lies within $\mathcal{C}[A]$ and thus any operator at $x$ can be reconstructed on $A$.
On the right is a bulk time slice containing $x$ and $\Sigma$, which has a geometry similar to that of our tensor networks. The point $x$ can simultaneously lie in distinct causal wedges, so $\phi(x)$ has multiple representations in the CFT.}\label{fig:subregion} \end{figure} On a time-slice of the boundary theory, choose a spatial sub-region $A$. The \emph{causal wedge} $\mathcal{C}[A]$ of $A$ is the bulk region bounded by (1) the boundary domain of dependence of $A$ (dark grey curve in Fig.~\ref{fig:subregion}) and (2) the set of bulk geodesics which start and end on (1). The causal wedge is determined by the domain of dependence of $A$, hence ``causal.'' Often, especially in static spacetimes, it's convenient not to work with the full causal wedge, but instead with some particular spatial slice within it. If the RT surface is spacelike, then every spatial slice in the causal wedge ends on the RT surface itself, but different slices hit the boundary at different times. It's usually most convenient to choose a spatial slice which intersects the boundary at the region $A$ itself. In nice situations, for example if the spacetime is static, we can pick a spatial slice that extends between $A$ and its RT surface, which by causality contains all of the information necessary to reproduce the entire causal wedge. In this case, as shown on the right diagram in Figure \ref{fig:subregion}, we're free to draw diagrams which suppress the time direction entirely: compare to the figures in the Introduction, which similarly depict the situation at one particular time. \begin{figure}[htb!] \centering \includegraphics[height=7cm]{causal_vs_entanglement.pdf} \caption{(This is a slightly modified version of Figure 14 in~\cite{pastawski2015holographic} and its caption.) The intersection of the entanglement wedge $\mathcal{E}[A]$ with a bulk time-slice, in the case where $A$ has two connected components. Minimal geodesics in the bulk are solid lines.
When $A$ is smaller than $A^c$, we have the situation on the left and the causal wedge agrees with the entanglement wedge. When $A$ is bigger, however, the minimal geodesics switch and the entanglement wedge becomes larger. In particular the point in the center lies in $\mathcal{E}[A]$ but not $\mathcal{C}[A]$.}\label{fig:wedges} \end{figure} However, we also know that given access to the entropy of a CFT subregion we can use the RT formula to compute the area of the relevant extremal surface. In fact, we expect that if we know not just the entropy but the full reduced density matrix, we can construct, e.g., the RT surface itself. And we can also use this information to construct the RT surfaces of smaller parts of the subregion, so we should be able to read off the metric everywhere in the portion of the bulk between the boundary region and the RT surface. This is formalized by the notion of the \emph{entanglement wedge}: The \emph{entanglement wedge} $\mathcal{E}[A]$ is the domain of dependence of the bulk region bounded by (1) $A$ (dark grey curve in Fig.~\ref{fig:wedges}) and (2) the minimal extremal bulk surface homologous to $\partial A$ (i.e. the RT surface of $A$) (black curve in Fig.~\ref{fig:wedges}). The entanglement wedge is determined by the RT surface, which has area proportional to the von Neumann entropy of the part of the boundary theory contained in $A$, hence ``entanglement.'' One can show, under the assumption that the bulk theory describes a sensible gravitational spacetime, that the causal wedge is contained in the entanglement wedge \cite{Wall:2012uf,Headrick:2014cta}. Figure \ref{fig:wedges} depicts a situation in which the two wedges do not coincide. Note that, for a pure boundary state, the RT surface of $A$ can clearly be seen to be the same as the RT surface of $\bar A$.
That is, the operator $L$ which gives its area is both in the set of operators acting only on region $A$ of the boundary theory, and the set of operators acting only on region $\bar A$. But causality dictates that all operators in one of these two sets commute with all operators in the other of these sets. So, $L$ commutes with every operator acting on $A$: we say it's in the \emph{center} of the operators acting on $A$. One such operator is the identity. But, in general, when there is some gauge symmetry in the bulk theory, there will be elements in the center which are \emph{not} trivial, and the area operator will be one of these. So the nontriviality of the area operator tells us about the fact that the bulk is gravitational, and thus has diffeomorphism invariance! Thinking about the algebraic properties of bulk and boundary operators will be key to our approach in the rest of the paper; we'll review the concepts of operator algebras, centers, etc.\ in the next section. \subsubsection{Complementary recovery and radial commutativity}\label{sub:radial} Recall that the causal wedge tells us which bulk operators can be reconstructed given access to a boundary region $A$. As Figure \ref{fig:subregion} makes clear, if we divide the boundary into two regions, an operator acting at a particular bulk point must lie in the causal wedge of at least one of the regions, and it only lies in the causal wedge of both when the point is part of the RT surface. This is \emph{complementary recovery}: given a subregion we can reconstruct all operators inside its causal wedge but none of the operators outside it. Now consider, instead of fixing the region $A$, what happens when we allow it to vary but still keep the bulk operator fixed. In general, we see that an operator lies inside the causal wedge of \emph{many} regions. 
So, if we have access to a boundary region large enough that many of its subregions can alone reconstruct the operator, knowledge of the operator is \emph{redundantly} encoded in the state: we don't need the full state on $A$ to reconstruct it, and there are many possible ways to reconstruct it. We say the state exhibits \emph{subregion duality}, in which many subregions can be used to reconstruct the same operator. Furthermore, if we erase a piece of the boundary that is much smaller than $A$, almost every bulk operator can still be exactly reconstructed. Historically, it was this sort of code-like redundancy which led to the consideration of holographic error correction. The flip side of subregion duality is radial commutativity. We can see that, at least for non-pathological spacetimes, the RT surfaces of \emph{small} subregions don't extend deep into the bulk: we need large subregions to penetrate deep into the interior. If a bulk operator is outside of the causal wedge of a subregion, that means, by causality, that it commutes with every operator acting in the causal wedge of that subregion. In particular, it commutes with the operators that act on the boundary region itself. But every boundary operator acting at a point in the boundary lives inside the causal wedge of any boundary subregion containing that point; in particular, it lives inside the causal wedge of an arbitrarily small subregion around the point. Hence any bulk operator which lives away from the boundary must commute with \emph{all} boundary operators acting at single points in the boundary: this is the property of \emph{radial commutativity}. This might not seem to be a problem yet: an arbitrary operator in the boundary theory \emph{doesn't} act at a single point in the boundary, but at many points.
However, field theories, and conformal field theories in particular, have an \emph{operator product expansion}: the product of operators acting at multiple points can be written as a sum of local operators acting only at a single point. And each of these local operators, by the argument above, commutes with the bulk operator! If we take this argument seriously, then, a bulk operator commutes with \emph{every} operator in the boundary theory. This seeming paradox was another motivation behind the introduction of error correction in holography---we only reached this conclusion because we treated bulk operators and boundary operators as acting on the same Hilbert space, but in fact the bulk Hilbert space of a given geometry, as we have seen, is much smaller and redundantly encoded into the CFT Hilbert space. So, in the language of this paper, the resolution can be stated simply: the bulk doesn't live ``inside the boundary,'' i.e. in the same space. Rather, as depicted in Figure \ref{fig:notation}, we must map the bulk into the boundary using an isometry $V$. \section{Finite-dimensional von Neumann algebras} \label{sec:vonNeumann} In Section~\ref{sec:overview_HQEC}, we discussed how the appropriate language to analyse the entropy contributions arising from the bulk degrees of freedom is the one of von Neumann algebras. In this section, we review some basic notions from the theory of von Neumann algebras (a special case of the more general $C^*$-algebras). Although it is common to study von Neumann algebras over infinite-dimensional Hilbert spaces (and it is in this case that they have proved most useful) we only consider the finite-dimensional case, which is the most relevant for our purposes. Unless otherwise specified, when we use the term von Neumann algebra we always refer to a von Neumann algebra over a finite-dimensional Hilbert space. 
The content of this section is mostly based on the presentation given in~\cite{harlow2017ryu}, which in turn draws from the lecture notes of Jones~\cite{jones2003neumann}. An \emph{algebra} over a field is a set which is closed under scalar multiplication, addition and multiplication, and for which there exists a unit element. Von Neumann algebras are algebras of linear operators acting on a complex Hilbert space with the additional property of closure under Hermitian conjugation. More specifically, we have that \begin{definition}[von Neumann algebra] Let $\mathcal{L}(\mathcal{H})$ be the set of linear operators over a finite-dimensional complex Hilbert space $\mathcal{H}$. A von Neumann algebra is a subset $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ which is closed under: \begin{itemize} \item (addition) if $A, \, B \in \mathcal{M}$ then $A + B \in \mathcal{M}$; \item (multiplication) if $A, \, B \in \mathcal{M}$ then $AB \in \mathcal{M}$; \item (scalar multiplication) if $A \in \mathcal{M}$ and $c \in \mathbb{C}$ then $cA \in \mathcal{M}$; \item (Hermitian conjugation) if $A \in \mathcal{M}$ then $A^\dagger \in \mathcal{M}$; \end{itemize} and for which there exists an element $I \in \mathcal{M}$ such that for every $A \in \mathcal{M}$ we have $IA = A$. \end{definition} From now on, whenever we use the term algebra we always assume that the algebra is a finite-dimensional von Neumann algebra (sometimes, when extra care is required, we still write the full name explicitly). We often define a von Neumann algebra through its generators using the notation $\mathcal{M} = \langle A, B, \dots \rangle_{\mathrm{vN}}$, where the angle brackets denote the algebra generated by the operators $A, B, \dots$. Note that the von Neumann algebra generated by a set of operators is different from the group generated by the same set, as the latter has no addition or scalar multiplication operation.
Because in quantum error correction it is customary to use the angle bracket notation for the group generated by a set of operators, we adopt the bulkier notation $\langle \dots \rangle_{\mathrm{vN}}$ for von Neumann algebras. So, for example, $\langle Z \rangle_{\mathrm{vN}}$ is the set of all $2 \times 2$ diagonal matrices ($X,Y,Z$ denote the Pauli matrices) and $\langle Z,X \rangle_{\mathrm{vN}} = \mathcal{L}(\mathbb{C}^2)$. \begin{example} The von Neumann algebra $\mathcal{M} = \langle ZII, IXI, IZI \rangle_{\mathrm{vN}}$ over $\mathcal{H}= \mathbb{C}^{8}$, where $X,Z$ are Pauli operators. \end{example} There are three fundamental notions in the study of von Neumann algebras: commutant, center, and factor. The commutant $\mathcal{M}^{\prime}$ is the set of operators which commute with every element of $\mathcal{M}$; the commutant is itself a von Neumann algebra. \begin{definition}[commutant] Given a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ the commutant is the set \begin{equation} \mathcal{M}^{\prime} \equiv \left \{ B \in \mathcal{L}(\mathcal{H}) \mid \forall A \in \mathcal{M}: AB = BA \right \}. \end{equation} \end{definition} Taking the commutant twice (the bicommutant) leaves a von Neumann algebra unchanged. This important property is known as the bicommutant theorem. \begin{theorem}[bicommutant] For every von Neumann algebra $\mathcal{M}\subseteq \mathcal{L}(\mathcal{H})$ we have that \begin{equation} \mathcal{M}^{\prime \prime} \equiv (\mathcal{M}^{\prime})^\prime = \mathcal{M}. \end{equation} \end{theorem} The center is the set of elements of an algebra that commute with every element of the algebra. \begin{definition}[center] Given a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ the center $Z_{\mathcal{M}}$ is the set \begin{equation} Z_{\mathcal{M}}\equiv \mathcal{M} \cap \mathcal{M}^{\prime}. \end{equation} \end{definition} Algebras whose center is trivial (i.e. consists only of multiples of the identity) are known as factors.
\begin{definition}[factor] A von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ is a factor if its center satisfies \begin{equation} Z_{\mathcal{M}} = \langle I\rangle_{\mathrm{vN}} \equiv \{z I\hspace{1mm}|\hspace{1mm} z \in \mathbb{C}\}. \end{equation} \end{definition} \subsection{Classification of von Neumann algebras} \label{thm:Wedderburn} The classification theorem shows that any von Neumann algebra can be decomposed as a direct sum of factors. \begin{theorem}[classification theorem] For every von Neumann algebra $\mathcal{M}$ on a finite-dimensional Hilbert space $\mathcal{H}$ there exists a block decomposition of the Hilbert space \begin{equation} \mathcal{H} = \left[ \oplus_{\alpha} \left( \mathcal{H}_{A_\alpha} \otimes \mathcal{H}_{\bar{A}_\alpha} \right) \right] \oplus \mathcal{H}_0 \end{equation} such that \begin{align} \label{eq:Wedderburn} \mathcal{M} = \left[ \oplus_{\alpha} \left( \mathcal{L}(\mathcal{H}_{A_\alpha}) \otimes I_{\bar{A}_\alpha} \right) \right] \oplus 0, \\ \mathcal{M}^{\prime} = \left[ \oplus_{\alpha} \left( I_{A_\alpha} \otimes \mathcal{L}(\mathcal{H}_{\bar{A}_\alpha} ) \right) \right] \oplus 0, \\ Z_\mathcal{M} = \oplus_{\alpha} \left( c_{\alpha} I_{A_\alpha} \otimes I_{\bar{A}_\alpha} \right), \qquad c_\alpha \in \mathbb{C}, \end{align} where $\mathcal{H}_0$ is the null space and $0$ is the zero operator on $\mathcal{H}_0$. For simplicity, whenever we write a decomposition of an algebra (Hilbert space), we no longer write the direct sum with the null space (zero operator). The decomposition in \eqref{eq:Wedderburn} is known as the Wedderburn decomposition. \end{theorem} Note that, in order to denote the different blocks in the sum, we adopt the heavier notation $\H_{A_\alpha}$---and not the simpler $\H_{\alpha}$---to ensure consistency with the notation of Lemma~\ref{lemma:factorization}. In that case, the letter $A$ is used to denote a partition of the Hilbert space.
In this section the letter $A$ has no other meaning but to denote one of the two factors of a block. We now proceed to give a series of examples, of increasing generality, of the classification theorem. We begin with the special case of a factor algebra over $\mathcal{H}$ that is equal to all of $\mathcal{L}(\mathcal{H})$. \begin{example} The von Neumann algebra over $\mathcal{H} = \mathbb{C}^2$ with Wedderburn decomposition \begin{equation} \mathcal{M} = \mathcal{L}(\mathbb{C}^2) \otimes 1 = \mathcal{L}(\mathbb{C}^2) = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \end{equation} where $a, \dots, d \in \mathbb{C}$, is a factor. The commutant of the algebra is $\mathcal{M}^\prime = \langle I\rangle_{\mathrm{vN}}$. \end{example} The following is an example of a factor algebra over $\mathcal{H}$ that is strictly contained in $\mathcal{L}(\mathcal{H})$. \begin{example} The von Neumann algebra over $\mathcal{H} = \mathbb{C}^4$ with Wedderburn decomposition \begin{equation} \mathcal{M} = \mathcal{L}(\mathbb{C}^2) \otimes I = \begin{bmatrix} a & 0 & b & 0 \\ 0 & a & 0 & b \\ c & 0 & d & 0 \\ 0 & c & 0 & d \end{bmatrix}, \end{equation} where $a, \dots, d \in \mathbb{C}$, is a factor. The commutant of the algebra is $\mathcal{M}^\prime = I \otimes \mathcal{L}(\mathbb{C}^2)$. \end{example} Finally, we give two examples of algebras that are not factors. The first is a fully diagonal algebra (which, in the language of quantum mechanics, can be thought of as describing a classical algebra of observables) while the second has a block-diagonal structure (thus describing a quantum algebra of observables). For many more examples of von Neumann algebras, Wedderburn decompositions, and their relationship to coarse-graining and decoherence the interested reader can consult \cite{Kabernik:2019jko}.
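Commutants of algebras on small Hilbert spaces, like the ones in the examples above, can also be checked numerically. The following is a minimal sketch of our own (the function name \texttt{commutant\_dimension} is our invention, not notation from the text): commuting with a set of generators is equivalent to commuting with the whole algebra they generate, so vectorizing the constraint $[G,B]=0$ turns the commutant into the nullspace of a stacked linear system.

```python
import numpy as np

def commutant_dimension(generators):
    """Dimension of the commutant {B : [G, B] = 0 for every generator G}.

    Commuting with the generators is equivalent to commuting with the
    von Neumann algebra they generate.  Vectorizing GB - BG = 0 as
    (I (x) G - G^T (x) I) vec(B) = 0, the commutant is the nullspace
    of the stacked constraint matrix.
    """
    d = generators[0].shape[0]
    eye = np.eye(d)
    constraints = np.vstack(
        [np.kron(eye, G) - np.kron(G.T, eye) for G in generators]
    )
    return d * d - np.linalg.matrix_rank(constraints)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# <Z>_vN on C^2: the commutant is the diagonal matrices (dimension 2).
print(commutant_dimension([Z]))  # -> 2

# L(C^2) (x) I on C^4, generated by X (x) I and Z (x) I: the commutant
# is I (x) L(C^2) (dimension 4), as in the second example above.
print(commutant_dimension([np.kron(X, I2), np.kron(Z, I2)]))  # -> 4
```

The same routine applied to the generators of the commutant recovers the original dimension, in line with the bicommutant theorem.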
\begin{example} The von Neumann algebra over $\mathcal{H} = \mathbb{C}^2$ generated by the Pauli $Z$ operator has the following Wedderburn decomposition \begin{equation} \mathcal{M} = \langle Z \rangle_{\mathrm{vN}} = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} = \left[ \mathcal{L}(\mathbb{C})\otimes 1 \right] \oplus \left[ \mathcal{L}(\mathbb{C})\otimes 1 \right], \end{equation} where $a,b \in \mathbb{C}$. \end{example} \begin{example} \label{example:Wedderburn} The von Neumann algebra $\mathcal{M} = \langle ZII, IXI, IZI \rangle_{\mathrm{vN}}$ over $\mathcal{H}= \mathbb{C}^{8}$ has the following Wedderburn decomposition \begin{equation} \mathcal{M} = \bigoplus_{\alpha=0}^{1} \left( \mathcal{L}(\mathbb{C}^2) \otimes I \right) = \begin{pmatrix} \begin{matrix} a & 0 & b & 0 \\ 0 & a & 0 & b \\ c & 0 & d & 0 \\ 0 & c & 0 & d \end{matrix} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \mbox{\normalfont\Large\bfseries 0} \\ \hline \mbox{\normalfont\Large\bfseries 0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \begin{matrix} e & 0 & f & 0\\ 0 & e & 0 & f \\ g & 0 & h & 0\\ 0 & g & 0 & h \end{matrix} \end{pmatrix}, \end{equation} where $a, \dots, h \in \mathbb{C}$. \end{example} \subsection{Algebraic states and entropies} \label{sec:algebraic_states} A quantum state on a Hilbert space $\mathcal{H}$ is a Hermitian positive semi-definite operator $\rho \in \mathcal{L}(\mathcal{H})$ with $\operatorname{Tr}(\rho) =1$. Given a state $\rho$ and a Hermitian operator $O$ we can define the expectation value of the operator $O$ on $\rho$ as \begin{equation} \mathbb{E}_\rho (O) = \operatorname{Tr} (O \rho). \end{equation} It is often the case that one is interested in computing expectation values of operators that form an algebra $\mathcal{M}$. A generic state $\rho$ is not necessarily an element of $\mathcal{M}$ and could contain more information than is needed to compute expectation values of operators in $\mathcal{M}$.
It is therefore useful to define the notion of an algebraic state---that is, the state that is ``visible'' from an algebra $\mathcal{M}$. For an algebra $\mathcal{M}$ and quantum state $\rho$ we denote the respective algebraic state by $\rho_\mathcal{M}$. The following theorem shows that algebraic states are unique and that, for the purpose of computing expectation values of operators in $\mathcal{M}$, we can always replace $\rho$ by $\rho_\mathcal{M}$. That is, the algebraic state is a generalization of the reduced density matrix to an algebra which need not be a factor. \begin{theorem} \label{thm:expectations_vN} Let $\mathcal{M}$ be a von Neumann algebra on $\mathcal{H}$ and let $\rho \in \mathcal{L}(\mathcal{H})$ be a quantum state. Then, there exists a unique state $\rho_\mathcal{M} \in \mathcal{M}$ such that \begin{equation} \operatorname{Tr}(O \rho_\mathcal{M}) = \operatorname{Tr}(O \rho) \end{equation} for all $O \in \mathcal{M}$. \end{theorem} For an algebra $\mathcal{M}$ and state $\rho$ it is possible to write an explicit formula for the algebraic state $\rho_\mathcal{M}$. Recall that by Theorem~\ref{thm:Wedderburn} there exists a decomposition of the Hilbert space \begin{equation} \label{eq:Hdec} \mathcal{H} = \oplus_{\alpha} \left( \mathcal{H}_{A_\alpha} \otimes \mathcal{H}_{\bar{A}_\alpha} \right), \end{equation} in terms of which we can write the Wedderburn decomposition of the algebra \begin{equation} \mathcal{M} = \oplus_{\alpha} \left( \mathcal{L}(\mathcal{H}_{A_\alpha}) \otimes I_{\bar{A}_\alpha} \right).
\end{equation} Let $\{\ket{\alpha,i,j}\}$ be an orthonormal basis for $\H_{A_\alpha} \otimes \H_{\bar{A}_\alpha}$ (a block in the decomposition) that is ``compatible with $\mathcal{M}$'', that is, $\alpha$ enumerates the diagonal blocks and within each block we have $\ket{\alpha,i,j} = \ket{i_\alpha}_{A_{\alpha}} \otimes \ket{j_\alpha}_{{\bar A_\alpha}}$ where $\{\ket{i_\alpha}_{A_{\alpha}} \}$ and $\{ \ket{j_\alpha}_{\bar A_\alpha} \}$ are orthonormal bases for $\H_{A_\alpha}$ and $\H_{\bar A_\alpha}$ respectively. Any state $\rho$ can be written in terms of the Hilbert space decomposition of \eqref{eq:Hdec} as \begin{equation} \rho = \sum_{\alpha,\alpha'}\sum_{i,j}\sum_{i',j'} \rho[\alpha,\alpha']_{i,j,i',j'} \ket{\alpha,i,j}\bra{\alpha',i',j'}. \end{equation} Because, for the purpose of computing expectation values of elements of $\mathcal{M}$, only the blocks that are diagonal in $\alpha$ give non-zero contributions, we may set $\rho[\alpha,\alpha'] = 0$ for all $\alpha \neq \alpha'$ without changing any such expectation value. For computational purposes, it is then useful to define the normalised diagonal blocks of $\rho$ as \begin{equation} \rho_{A_\alpha} \equiv \frac{1}{p_\alpha} \operatorname{Tr}_{\bar A_\alpha} (\rho[\alpha]), \end{equation} where $p_\alpha \equiv \sum_{i,j} \rho [\alpha, \alpha] _{i,j,i,j}$ is a positive normalisation constant such that $\operatorname{Tr}_{ A_\alpha} (\rho_{A_\alpha}) = 1$ and $\rho[\alpha] \equiv \rho[\alpha, \alpha]$ is the part of $\rho$ which is in the $\alpha$-block. Using this notation we can write the algebraic state $\rho_\mathcal{M}$ as \begin{equation} \label{eq:algebraic_state} \rho_\mathcal{M} \equiv \oplus_\alpha \left( p_\alpha \rho_{A_\alpha} \otimes \frac{I_{\bar{A}_\alpha}}{\dim \mathcal{H}_{\bar{A}_\alpha}}\right).
\end{equation} From \eqref{eq:algebraic_state} we can see that when $\mathcal{M}$ is a factor the algebraic state is simply the reduced state $\rho_A = \operatorname{Tr}_{\bar A} (\rho)$ tensored with a maximally mixed state on $\mathcal{H}_{\bar A}$, and so carries exactly the same information as $\rho_A$. This suggests the following generalisation of the von Neumann entropy for a general quantum state $\rho$ and an arbitrary algebra $\mathcal{M}$: \begin{definition}[algebraic entropy] Let $\rho$ be a state on $\mathcal{H}$ and let $\mathcal{M}$ be an arbitrary von Neumann algebra over $\mathcal{H}$. The algebraic entropy of $\rho$ with respect to $\mathcal{M}$ is \begin{equation} \label{eq:algebraic_entropy} S(\rho, \mathcal{M}) \equiv-\sum_{\alpha} \operatorname{Tr}_{A_{\alpha}}\left(p_{\alpha} \rho_{A_{\alpha}} \log \left(p_{\alpha} \rho_{A_{\alpha}}\right)\right)=-\sum_{\alpha} p_{\alpha} \log p_{\alpha}+\sum_{\alpha} p_{\alpha} S\left(\rho_{A_{\alpha}}\right), \end{equation} where $S\left(\rho_{A_{\alpha}}\right) \equiv - \operatorname{Tr}_{A_\alpha} (\rho_{A_\alpha} \log \rho_{A_\alpha})$ is the von Neumann entropy of the reduced state $\rho_{A_\alpha}$. \end{definition} Note that when $\mathcal{M}$ is a factor the algebraic entropy reduces to the standard von Neumann entropy (i.e. the classical term $-\sum_{\alpha} p_{\alpha} \log p_{\alpha}$ vanishes). The definition in \eqref{eq:algebraic_entropy} has two terms: a \emph{classical} term arising from the uncertainty over which block of the Wedderburn decomposition the state is in, and a \emph{quantum} term given by the average of the standard von Neumann entropies of the blocks. \begin{example} Consider the von Neumann algebra of Example~\ref{example:Wedderburn}. The algebra has two diagonal blocks denoted by $\alpha = 0,1$. Consider the $3$-qubit GHZ state $\ket{\Psi} = 2^{-1/2} (\ket{000} + \ket{111})$. We have that \begin{equation} \rho_{A_0} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad \rho_{A_1} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \end{equation} and $p_0 = 1/2$, $p_1 = 1/2$.
The algebraic entropy of the state is \begin{equation} S(\mathcal{M}, \ket{\Psi}\bra{\Psi}) = 1. \end{equation} \end{example} \section{Complementary recovery and the RT formula} \label{sec:complementary_recovery} In this section we define several holographic properties of quantum error-correcting codes and establish some relationships between them. The main goal is the RT formula, a remarkable relationship between the entropy $S_A$ of a boundary subregion $A$ and the entropy $S_{\text{bulk},A}$ of the bulk degrees of freedom visible from $A$: \begin{align} S_A(\rho) = S_{\text{bulk},A}(\rho) + \text{Tr}(L \rho). \end{align} For a holographic quantum error-correcting code, the above holds for any encoded state $\rho$. Such a relationship imposes a lot of structure on the family of states in the code: entropies are non-linear functions of $\rho$, whereas the rightmost term $\text{Tr}(L \rho)$ is a linear function. If $\rho$ is pure, we can intuitively think of $S_A$ as the entanglement entropy of the state encoded into physical qubits, whereas $S_{\text{bulk},A}$ is like the entanglement entropy of the underlying logical state. Rearranging the equation to $S_A(\rho) - S_{\text{bulk},A}(\rho) = \text{Tr}(L \rho)$, we can see that this is essentially saying that the extra entropy added by the encoding process is linear in $\rho$. Furthermore, the amount of entropy added is an observable: a hermitian `area operator' $L$. While such a structured relationship might seem very rare, we find that there is actually a fairly simple and natural property that implies it: complementary recovery. This property demands a certain symmetry of the error-correcting code across a bipartition $A$-$\bar A$ of the physical Hilbert space. The operators correctable given only access to $A$ are exactly those that commute with the ones correctable only from the subregion $\bar A$.
This symmetry is present in many quantum error-correcting codes, such as stabilizer codes (see Lemma~\ref{lemma:stabm}). Surprisingly, it immediately implies an RT formula! \subsection{Complementary recovery} We begin with a discussion of complementary recovery and its relationship to quantum error correction. A quantum error-correcting code can be thought of as a subspace $\H_\text{code}$ of the physical Hilbert space $\H$. However, in this discussion as well as in the next section we will find it more convenient to work with a `logical space' $\H_L$ with the same dimension as $\H_\text{code}$, which is thought of as separate from $\H$. Then, an `encoding isometry' $V: \H_L \to \H$ takes logical states and encodes them in the physical Hilbert space. The image of $V$ is $\H_\text{code}$. Intuitively one can think of this as fixing a basis for the code space, since $\H_\text{code}$ is invariant under a basis change $V \to VU_L$ for some unitary $U_L$ on $\H_L$. While this differs from the approach taken elsewhere in the literature, this view has two advantages. First, it makes the notion of a commutant of a von Neumann algebra in $\H_L$ a little easier to understand. Second, we find that when giving explicit examples in the next section it is easier to write down $V$ rather than $\H_\text{code}$. Above we have been speaking of holographic properties of a code, defined by its encoding isometry $V$. However, two other quantities are important for an RT formula: the subregion $A$ that determines the entropy $S_A$, and the visible bulk degrees of freedom that determine the entropy $S_{\text{bulk},A}$. Which degrees of freedom are visible is specified by a von Neumann algebra $\mathcal{M}$. Clearly, $(V,A,\mathcal{M})$ are interrelated, so we establish the following vocabulary: \begin{definition} Say $V : \H_L \to \H$ is an encoding isometry for some quantum error-correcting code, and $A$ is a subregion of $\H$ inducing the factorization $\H = \H_A \otimes \H_{\bar A}$.
A von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\H_L)$ is said to be: \begin{itemize} \item \textbf{correctable} from $A$ with respect to $V$ if $\mathcal{M} \subseteq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$. That is: for every $O_L \in \mathcal{M}$ there exists an $O_A \in \mathcal{L}(\H_A)$ such that $O_L = V^\dagger (O_A \otimes I_{\bar A}) V$. \item \textbf{private} from $A$ with respect to $V$ if $V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{M}'$. That is: for every $O_A \in \mathcal{L}(\H_A)$ it is the case that $ V^\dagger (O_A \otimes I_{\bar A}) V$ commutes with every operator in $\mathcal{M}$. \end{itemize} \end{definition} If $\mathcal{M}$ is correctable, then it is a set of logical operators that can be performed on the encoded state given access to only the subregion $A$. A hermitian element in $\mathcal{M}$ then corresponds to an observable on the logical Hilbert space that could be measured from $A$, so, intuitively, $\mathcal{M}$ tells us about what parts of the logical state are recoverable from $A$. Conversely, if $\mathcal{M}$ is private then the observables in $\mathcal{M}$ tell us what parts of $\rho$ are invisible from $A$. The notion of correctability is central to complementary recovery: a von Neumann algebra $\mathcal{M}$ exhibits complementary recovery if it can be corrected from $A$, and its commutant $\mathcal{M}'$ can be corrected from $\bar A$. \begin{definition} A code with encoding isometry $V:\H_L\to \H$, a subregion of the physical Hilbert space $A$ and a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\H_L)$, together $(V,A,\mathcal{M})$, exhibit \textbf{complementary recovery} if: \begin{itemize} \item $\mathcal{M}$ is correctable from $A$ with respect to $V$: $\mathcal{M} \subseteq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$, \item $\mathcal{M}'$ is correctable from $\bar A$ with respect to $V$: $\mathcal{M}' \subseteq V^\dagger (I_A \otimes \mathcal{L}(\H_{\bar A}))V$. 
\end{itemize} \end{definition} So far, there do not appear to be very many restrictions on the von Neumann algebra $\mathcal{M}$. In particular if $\mathcal{N}$ is a subalgebra of $\mathcal{M}$, and $\mathcal{M}$ is correctable, then $\mathcal{N}$ is correctable as well. It would thus seem plausible that if $(V,A,\mathcal{M})$ has complementary recovery, then so does $(V,A,\mathcal{N})$, so there are multiple von Neumann algebras with complementary recovery. However, we will find that complementary recovery is actually so restrictive on $\mathcal{M}$ that it determines it uniquely, and subalgebras of $\mathcal{M}$ do not have complementary recovery. This is important because the von Neumann algebra plays a key role in the RT formula: it tells us how to concretely define `the entropy of bulk degrees of freedom visible from $A$' via an algebraic entropy $S(\mathcal{M},\rho)$. For this to make sense, $\mathcal{M}$ must be completely determined by the isometry $V$ and the subregion $A$. To prove this result we cite a helpful lemma from the quantum error correction literature. \begin{lemma} \label{lemma:corrpriv} \textbf{Correctable from $A$ $\leftrightarrow$ private from $\bar A$.} A von Neumann algebra $\mathcal{M}$ is correctable from $A$ with respect to $V$ if and only if $\mathcal{M}$ is private from $\bar A$ with respect to $V$. \end{lemma} This lemma establishes the complementarity of correctable and private algebras for the case of erasure errors (correctability from $A$ concerns the situation in which $\bar A$ has been erased). Informally, a subsystem $B$ of a Hilbert space $\H = \H_A \otimes \H_B$ is private if it completely decoheres after the action of a channel.
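As a quick sanity check of the lemma, the commutation can be verified numerically on a small example. The two-qubit repetition encoding below is our own illustration, not a code from the text; $A$ is the first qubit:

```python
import numpy as np

# Repetition encoding |0> -> |00>, |1> -> |11| (a hypothetical example).
V = np.zeros((4, 2))
V[0b00, 0] = V[0b11, 1] = 1.0

def image(O, on_first_qubit):
    """V^dag (O x I) V if O acts on A (first qubit), else V^dag (I x O) V."""
    big = np.kron(O, np.eye(2)) if on_first_qubit else np.kron(np.eye(2), O)
    return V.conj().T @ big @ V

paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])]
corr_A    = [image(O, True)  for O in paulis]   # correctable from A
corr_Abar = [image(O, False) for O in paulis]   # correctable from Abar

# Correctable from A  <->  private from Abar: every logical operator reached
# through Abar must commute with everything correctable from A.
max_comm = max(np.abs(X @ Y - Y @ X).max() for X in corr_A for Y in corr_Abar)
```

Here both correctable sets turn out to be the diagonal (abelian) algebra on the logical qubit, so `max_comm` vanishes, as the lemma demands.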
The lemma follows from the more general Theorem~\ref{thm:correctable-private} (which we present in Appendix~\ref{app:privacy}) that applies to the case of general error channels (to get the lemma simply consider $\mathcal{E}$ to be the erasure channel for the subsystem $\bar A$ and $P$ a projection onto the code subspace). The complementarity theorem was first proven for the case of factor algebras \cite{kretschmann2008complementarity} and then extended to general, infinite dimensional, von Neumann algebras~\cite{crann2016private}. Now we are ready to demonstrate that $\mathcal{M}$ is unique, provided it exists at all. The theorem below also shows an easy way to calculate $\mathcal{M}$ as well as a simple criterion for its existence. The proof relies on the fact that privacy of $\mathcal{M}'$ is a statement that acts a bit like an `upper bound version' of correctability of $\mathcal{M}$: it demands that the set of correctable operators lies in $\mathcal{M}$, rather than that $\mathcal{M}$ lies in the set of correctable operators. By sandwiching together correctability of $\mathcal{M}$ and privacy of $\mathcal{M}'$ we fix what $\mathcal{M}$ must be. \begin{theorem}\label{thm:whatalgebra} \textbf{Uniqueness of the von Neumann algebra.} Say $V$ is an encoding isometry and say $A$ is a subregion. Let $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ be the image on $\H_L$ of the operators supported on $A$. If $\mathcal{M}$ is a von Neumann algebra (that is, it is closed under multiplication), then it is the unique von Neumann algebra satisfying complementary recovery with $V$ and $A$. If it is not, then no von Neumann algebra satisfying complementary recovery exists. \end{theorem} \begin{proof} We split this proof into two conditions: \begin{description} \item[Existence] If $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ is a von Neumann algebra, then $(V,A,\mathcal{M})$ have complementary recovery.
\item[Uniqueness] If $\mathcal{N} \subsetneq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ is a von Neumann algebra, then $(V,A,\mathcal{N})$ do not have complementary recovery. \end{description} We begin with existence: we assume that $ \mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ is a von Neumann algebra, so $\mathcal{M}'$ is well defined. The first condition of complementary recovery holds by definition of $\mathcal{M}$. Also by definition we have that: \begin{align} V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{M} = \mathcal{M}'', \end{align} where in the last part we used the bicommutant theorem. Thus, by definition of privacy, $\mathcal{M}'$ is private from $A$ with respect to $V$. By Lemma~\ref{lemma:corrpriv}, $\mathcal{M}'$ is thus correctable from $\bar A$ with respect to $V$, which is the second condition of complementary recovery. Next we show uniqueness. Let $\mathcal{N} \subsetneq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ be any von Neumann algebra that is correctable from $A$, but not equal to the full set of correctable operators. We assume that $(V,A,\mathcal{N})$ has complementary recovery and derive a contradiction. By the second condition of complementary recovery, $\mathcal{N}'$ is correctable from $\bar A$ with respect to $V$. By Lemma~\ref{lemma:corrpriv}, $\mathcal{N}'$ is thus private from $A$ with respect to $V$, that is: \begin{align} V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{N}'' = \mathcal{N}, \end{align} where in the last part we used the bicommutant theorem. So we have \begin{align} \mathcal{N} \subsetneq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{N}, \end{align} implying that $\mathcal{N}$ is a proper subset of itself, a contradiction.
\end{proof} While an RT formula seems like an extremely unlikely property, complementary recovery on the other hand seems like a property that is rather natural and that most, if perhaps all, quantum error-correcting codes should have. Thus, the fact that complementary recovery implies an RT formula is surprising. However, the fact that a von Neumann algebra with complementary recovery can fail to exist implies that complementary recovery is actually less trivial than it might seem. While still exhibited by many quantum error-correcting codes, it is worth giving an explicit example of a code without complementary recovery. \begin{example} \label{ex:badcode} \textbf{A code without complementary recovery.} \\ Let $\H = \text{span}(\ket{00},\ket{01},\ket{10},\ket{11})$ be two qubits and $\H_L = \text{span}(\ket{0},\ket{1},\ket{2})$ be a qutrit. Let $A$ be the first qubit of $\H$, and let: \begin{align} V = \ket{00}\bra{0} + \ket{10}\bra{1} + \ket{01}\bra{2}. \end{align} Then the set of correctable operators is: \begin{align} V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V = \begin{bmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & a\end{bmatrix} \text{ for all }a,b,c,d \in \mathbb{C}. \end{align} Notably, this set is not closed under multiplication and is not a von Neumann algebra. Let us pick $\mathcal{M}$ to be the largest von Neumann algebra in this set: \begin{align} \mathcal{M} = \begin{bmatrix} a & 0 & 0 \\ 0 & d & 0 \\ 0 & 0 & a\end{bmatrix} \text{ for all }a,d \in \mathbb{C}. \end{align} While $(V,A,\mathcal{M})$ satisfy the first condition of complementary recovery, they do not satisfy the second: \begin{align} \begin{bmatrix} a & 0 & b \\ 0 & d & 0 \\ c & 0 & e\end{bmatrix} = \mathcal{M}' \not\subseteq V^\dagger (I_{A} \otimes \mathcal{L}(\H_{\bar A}))V = \begin{bmatrix} a & 0 & b \\ 0 & a & 0 \\ c & 0 & e\end{bmatrix}. \end{align} Say we had chosen $\mathcal{M}$ to be some smaller subalgebra of $V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$.
Then $\mathcal{M}'$ would only be larger, containing the $\mathcal{M}'$ in the line above. But since that $\mathcal{M}'$ is already not contained in $V^\dagger (I_{A} \otimes \mathcal{L}(\H_{\bar A}))V$, there does not exist \emph{any} von Neumann algebra with complementary recovery with $V$ and $A$. \end{example} The fact that complementary recovery can fail to exist should illustrate that it actually imposes a non-trivial constraint on the quantum error-correcting code. This constraint is strong enough to imply an RT formula, which we will now define carefully. Note that it is not at all obvious how to obtain an $\mathcal{M}$ that makes the RT formula work from the definition of the formula itself: that is where complementary recovery comes in. \subsection{The RT formula and its properties} \begin{definition} Say $V$ is an encoding isometry, say $A$ is a subregion, and say $\mathcal{M}$ is a von Neumann algebra on $\H_L$. Then we say $(V,A,\mathcal{M})$ have an \textbf{RT formula} if there exists an \textbf{area operator} $L \in \mathcal{L}(\H_L)$ such that for any state $\rho$ on $\H_L$: \begin{align} S(\text{Tr}_{\bar A}(V\rho V^\dagger)) = S( \mathcal{M}, \rho) + \text{Tr}(\rho L). \end{align} If $L \propto I$ then we say $(V,A, \mathcal{M})$ have a trivial RT formula. \end{definition} Now we show the connection between complementary recovery and the existence of the RT formula. This is a highly non-trivial claim that makes use of an enormous amount of structure implied by complementary recovery. Recall from the previous section that a von Neumann algebra implies a Wedderburn decomposition on the Hilbert space that it acts on. We find that when a von Neumann algebra is correctable from $A$ with respect to $V:\H_L \to \H$, then not only does $\H_L$ decompose, but the Hilbert space associated with the subregion $A$ also decomposes. Furthermore, these decompositions are directly related.
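The closure failure in Example~\ref{ex:badcode} can be confirmed numerically. The sketch below fixes the codewords $V\ket{0}=\ket{00}$, $V\ket{1}=\ket{01}$, $V\ket{2}=\ket{10}$ and tests both single-qubit subregions; any relabelling of the two physical qubits or of the codewords leads to the same conclusion:

```python
import numpy as np

# Qutrit-into-two-qubits encoding, as in Example badcode (up to relabelling).
V = np.zeros((4, 3))
V[0b00, 0] = V[0b01, 1] = V[0b10, 2] = 1.0

def correctable_images(qubit):
    """Images V^dag (O x I) V (qubit 0) or V^dag (I x O) V (qubit 1)
    for a basis of single-qubit operators O."""
    imgs = []
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2)); E[i, j] = 1.0
            big = np.kron(E, np.eye(2)) if qubit == 0 else np.kron(np.eye(2), E)
            imgs.append(V.conj().T @ big @ V)
    return imgs

def in_span(X, imgs):
    """Least-squares membership test for the linear span of the images."""
    A = np.column_stack([O.reshape(-1) for O in imgs])
    c, *_ = np.linalg.lstsq(A, X.reshape(-1), rcond=None)
    return np.abs(A @ c - X.reshape(-1)).max() < 1e-9

# For either single-qubit subregion the correctable set is not closed under
# multiplication, so no von Neumann algebra with complementary recovery exists.
closed = [all(in_span(X @ Y, imgs) for X in imgs for Y in imgs)
          for imgs in (correctable_images(0), correctable_images(1))]
```

For instance, on the qubit distinguishing logical $\ket{0}$ from $\ket{1}$, the product of the images of $E_{01}$ and $E_{10}$ escapes the span, which is exactly the non-closure observed in the example.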
The following lemma formalizes this structure, even when complementary recovery is not present. Recall that complementary recovery really implies the correctability of both $\mathcal{M}$ and $\mathcal{M}'$, which allows us to invoke the lemma below not once but twice. We then exploit this to prove that an RT formula exists. \begin{lemma} \label{lemma:factorization} \textbf{Factorization of encoded states.} Say $V : \H_L \to \H$ is an encoding isometry, $A$ is a subregion inducing $\H = \H_A \otimes \H_{\bar A}$, and say $\mathcal{M}$ is a von Neumann algebra on $\H_L$ that is correctable from $A$ with respect to $V$. Say $\mathcal{M}$ induces the decomposition $\H_L = \bigoplus_{\alpha} \left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha}\right) $ so that \begin{equation} \label{eq:algebra_4_lemma} \mathcal{M} = \bigoplus_{\alpha} \left( \mathcal{L}(\H_{L_\alpha}) \otimes I_{\bar L_\alpha} \right), \end{equation} and let $\{\ket{\alpha,i,j}\}$ be an orthonormal basis for $\H_L$ that is ``compatible with $\mathcal{M}$'', that is, $\alpha$ enumerates the diagonal blocks and within each block we have $\ket{\alpha,i,j} = \ket{i_\alpha}_{L_{\alpha}} \otimes \ket{j_\alpha}_{{\bar L_\alpha}}$ where $\{\ket{i_\alpha}_{L_{\alpha}} \}$ and $\{ \ket{j_\alpha}_{\bar L_\alpha} \}$ are orthonormal bases for $\H_{L_\alpha}$ and $\H_{\bar L_\alpha}$ respectively. Then there exists a factorization $\H_A = \bigoplus_\alpha\left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3}$ and a unitary $U_A$ on $\H_A$ such that the state $(U_A \otimes I_{\bar A}) V \ket{\alpha,i,j}$ factors as follows: \begin{align} \label{eq:factorisation_encoded_state} (U_A \otimes I_{\bar A}) V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}, \end{align} where the state $\ket{\psi_{\alpha,i}}$ is independent of $j$, and $\ket{\chi_{\alpha,j}}$ is independent of $i$.
\end{lemma} \begin{proof} The full proof for the general case of the algebra in \eqref{eq:algebra_4_lemma} is given in~\cite[Section 5.1]{harlow2017ryu}. We give a proof for the simpler case of a factor algebra in Appendix~\ref{app:structure_lemma}. Both proofs follow a similar strategy---originally developed in~\cite{schumacher1996quantum}---that involves introducing a reference system $R$ which is maximally entangled with the region $A$. By analysing the von Neumann entropies of the reduced density matrices of the $RA\bar A$ system one can obtain a necessary and sufficient condition for quantum error correction. This condition and standard properties of the Schmidt decomposition give the proof of the lemma. We note that an alternative proof of this result can be obtained via a result---see~\cite[Section VI]{hayden2004structure}---that shows that states that saturate the strong subadditivity inequality for the von Neumann entropy can be decomposed as direct sums of tensor products. \end{proof} The above lemma already sets up an enormous amount of notation, and even more notation will be required to apply it to a complementary situation. Explicit expressions for these Hilbert space decompositions quickly become rather cumbersome, which is why much of the literature skips many steps in the derivations in order to focus on the intuitive interpretation. While intuition is key, an explicit calculation can also help make one's understanding more concrete. For this reason we give the following derivation in more detail. In the next section we will provide explicit examples of quantum error-correcting codes and analyse them in the same language established here. The reader may wish to skip the proof of the following theorem and read the examples in the next section first. The following derivation is inspired by proofs in \cite{harlow2017ryu} and \cite{almheiri2015bulk}.
\begin{theorem} \label{thm:complementaritytoRT} \textbf{Complementary recovery implies a two-sided RT formula.} Consider an encoding isometry $V$, a subregion $A$ and a von Neumann algebra $\mathcal{M}$ so that $(V,A,\mathcal{M})$ have complementary recovery. Then $(V,A,\mathcal{M})$ and $(V,\bar A,\mathcal{M}')$ both have an RT formula with the same area operator $L$ (that is, the RT formula is `two-sided'). Furthermore, $L$ is in the center $Z_\mathcal{M}$. \end{theorem} \begin{proof} Say $\mathcal{M}$ induces the decomposition $\H_L = \bigoplus_{\alpha} \left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha}\right) $. This way we can decompose $\mathcal{M}$ and $\mathcal{M}'$ together as: \begin{align} \mathcal{M} = \bigoplus_{\alpha} \left( \mathcal{L}(\H_{L_\alpha}) \otimes I_{\bar L_\alpha} \right), \hspace{1cm} \mathcal{M}' = \bigoplus_{\alpha} \left( I_{L_\alpha} \otimes \mathcal{L}(\H_{\bar L_\alpha}) \right) . \end{align} Let $\{\ket{\alpha,i,j}\}$ be a basis that is ``compatible with $\mathcal{M}$'' as in Lemma~\ref{lemma:factorization}. We observe that $\{\ket{\alpha,i,j}\}$ also `lines up with $\mathcal{M}'$' in the same sense, since really $\{\ket{\alpha,i,j}\}$ just lines up with the underlying decomposition of $\H_L$. Now, with two applications of Lemma~\ref{lemma:factorization} we know that there exist factorizations of $\H_{A}$ and $\H_{\bar A}$ of the form: \begin{align} \H_A = \bigoplus_\alpha\left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3}, \hspace{1cm} \H_{\bar A} = \bigoplus_\alpha\left( \H_{\bar A^\alpha_1} \otimes \H_{\bar A^\alpha_2} \right) \oplus \H_{\bar A_3}, \end{align} so that there are unitaries $U_A$ and $U_{\bar A}$ such that: \begin{align} (U_A \otimes I_{\bar A}) V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}\\ (I_A \otimes U_{\bar A}) V \ket{\alpha,i,j} = \ket{\bar \chi_{\alpha,i}}_{A \bar A^\alpha_2} \otimes \ket{\bar \psi_{\alpha,j}}_{\bar A^\alpha_1}. 
\end{align} If we consider applying $(U_A \otimes I_{\bar A})$ followed by $(I_A \otimes U_{\bar A})$: \begin{align} (I_A \otimes U_{\bar A})(U_A \otimes I_{\bar A}) V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes (I_{A^\alpha_2} \otimes U_{\bar A}) \ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}, \end{align} we see that $U_{\bar A}$ actually just acts on the state $\ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}$. Thus, in order for both decompositions to hold simultaneously, there must exist states $\ket{\bar \psi_{\alpha,j}}$ and $\ket{\chi_\alpha}$ such that $(I_{A^\alpha_2} \otimes U_{\bar A})\ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A} = \ket{\chi_{\alpha}}_{A^\alpha_2 \bar A^\alpha_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}$, implying: \begin{align} (U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\chi_{\alpha}}_{A^\alpha_2 \bar A^\alpha_2} \otimes \ket{\bar \psi_{\alpha,j}}_{\bar A^\alpha_1}. \label{eqn:twosideddecomp} \end{align} The above factorization will spell out the RT formula when a logical state is expressed in this basis. Say $\rho$ is a state on $\H_L$. To show that $(V,A,\mathcal{M})$ have an RT formula, we will proceed to compute $S(\mathcal{M},\rho)$ as well as $S(\text{Tr}_{\bar A}( V\rho V^\dagger ))$ and take the difference. We will observe that the difference will have the form $\text{Tr}(\rho L)$ for some $L$. To derive $S(\text{Tr}_{\bar A}( V\rho V^\dagger ))$ recall the discussion in Section~\ref{sec:algebraic_states} and observe that one might as well consider $S(\text{Tr}_{\bar A}( V\rho_\mathcal{M} V^\dagger ))$ instead: say $O_A \in \mathcal{L}(\H_A)$, and write: \begin{align} \text{Tr}( O_A \cdot \text{Tr}_{\bar A}( V\rho V^\dagger ) ) = \text{Tr}( (O_A \otimes I_{\bar A}) \cdot V\rho V^\dagger ) = \text{Tr}( V^\dagger (O_A \otimes I_{\bar A}) V \cdot \rho ). \end{align} But $V^\dagger (O_A \otimes I_{\bar A}) V$ is in $\mathcal{M}$.
Since for any $O \in \mathcal{M}$ we have $\text{Tr}(O \rho) = \text{Tr}(O \rho_\mathcal{M})$ (see Theorem~\ref{thm:expectations_vN}) we can just replace $\rho$ with $\rho_\mathcal{M}$ in the above. The states $\text{Tr}_{\bar A}( V\rho V^\dagger )$ and $\text{Tr}_{\bar A}( V\rho_\mathcal{M} V^\dagger )$ give the same expectations for all observables, so they must be the same state and have the same entropy. Furthermore, since acting with a unitary on $\H_{A}$ and $\H_{\bar A}$ separately does not change the entropy, we see: \begin{align} S(\text{Tr}_{\bar A}( V\rho V^\dagger )) = S(\text{Tr}_{\bar A}( (U_A \otimes U_{\bar A})V \rho_\mathcal{M} V^\dagger (U_A \otimes U_{\bar A})^\dagger )). \end{align} Next, we define isometries $\tilde V_\alpha: (\H_{L_\alpha} \otimes \H_{\bar L_\alpha}) \to (\H_{A^\alpha_1} \otimes \H_{\bar A^\alpha_1})$ using the states $\ket{\psi_{\alpha,i}}_{A^\alpha_1}$ and $\ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}$ from (\ref{eqn:twosideddecomp}): \begin{align} \tilde V_\alpha \ket{\alpha,i,j} := \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}. \end{align} We know that $\tilde V_\alpha$ is indeed an isometry because the states $\ket{\psi_{\alpha,i}}_{A^\alpha_1}$ and $\ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}$ are actually bases for $\H_{A^\alpha_1}$ and $\H_{\bar A^\alpha_1}$ respectively. This follows from (\ref{eqn:twosideddecomp}) and the fact that the $\{\ket{\alpha,i,j}\}$ for fixed $\alpha$ are a basis for $\H_{L_\alpha} \otimes \H_{\bar L_\alpha}$. The purpose of $\tilde V_\alpha$ is that it lets us simplify (\ref{eqn:twosideddecomp}) to: \begin{align} (U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \tilde V_\alpha \ket{\alpha,i,j} \otimes \ket{\chi_\alpha}.
\end{align} This lets us bring the cumbersome expression $(U_A \otimes U_{\bar A})V \rho_\mathcal{M} V^\dagger (U_A \otimes U_{\bar A})^\dagger $ into a much neater form: \begin{align} & \hspace{4mm} (U_A \otimes U_{\bar A})V \rho_\mathcal{M} V^\dagger (U_A \otimes U_{\bar A})^\dagger \\ &= \sum_\alpha p_\alpha \cdot (U_A \otimes U_{\bar A})V \rho_\alpha V^\dagger (U_A \otimes U_{\bar A})^\dagger \\ &= \sum_\alpha p_\alpha \cdot \frac{1}{p_\alpha} \sum_{i,j}\sum_{i',j'} \rho[\alpha]_{i,j,i',j'} (U_A \otimes U_{\bar A})V\ket{\alpha,i,j}\bra{\alpha,i',j'} V^\dagger (U_A \otimes U_{\bar A})^\dagger \\ &= \sum_\alpha p_\alpha \cdot \frac{1}{p_\alpha} \sum_{i,j}\sum_{i',j'} \rho[\alpha]_{i,j,i',j'} \tilde V_\alpha \ket{\alpha,i,j}\bra{\alpha,i',j'} \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha} \\ &= \sum_\alpha p_\alpha \cdot \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha}. \end{align} Since each of the states $\tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha}$ is normalized and they act on disjoint blocks, the entropy takes the form: \begin{align} S(\text{Tr}_{\bar A}( V\rho V^\dagger )) &= \sum_\alpha p_\alpha \log(p^{-1}_\alpha) + \sum_{\alpha} p_\alpha S( \text{Tr}_{\bar A}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha} ) )\\ &= \sum_\alpha p_\alpha \log(p^{-1}_\alpha) + \sum_{\alpha} p_\alpha S(\text{Tr}_{\bar A}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger )) + \sum_\alpha p_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )).
\label{eqn:entanglemententropy} \end{align} Finally, we observe that since $\ket{\bar\psi_{\alpha,j}}$ is independent of $i$, we have that: \begin{align} S(\text{Tr}_{\bar A}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger )) = S(\text{Tr}_{\bar A^\alpha_1}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger )) = S( \text{Tr}_{\bar L_\alpha}(\rho_\alpha)). \end{align} We observe that the first two terms of (\ref{eqn:entanglemententropy}) are exactly the same as the two terms of (\ref{eq:algebraic_entropy}), so their difference is just: \begin{align} S(\text{Tr}_{\bar A}( V\rho V^\dagger )) - S(\mathcal{M},\rho) = \sum_\alpha p_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )). \end{align} The right-hand side is linear in the $p_\alpha$, so it is linear in $\rho$, so there exists an area operator $L$ such that the right-hand side is $\text{Tr}(\rho L)$. We construct it explicitly below. \begin{align} I_\alpha &:= \sum_{i,j} \ket{\alpha,i,j}\bra{\alpha,i,j}\\ L &:= \sum_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )) \cdot I_\alpha. \end{align} Observe that $L \in \mathcal{M}$, so therefore $\text{Tr}(\rho L) = \text{Tr}(\rho_\mathcal{M} L)$. Then we write: \begin{align} \text{Tr}(\rho_\mathcal{M} L) &= \text{Tr}\left( \sum_\alpha p_\alpha \rho_\alpha \cdot \sum_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )) \cdot I_\alpha \right) \\ &= \sum_\alpha p_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )) \cdot \text{Tr}(\rho_\alpha I_\alpha) = S(\text{Tr}_{\bar A}( V\rho V^\dagger )) - S(\mathcal{M},\rho). \end{align} We have derived that $(V,A,\mathcal{M})$ satisfy an RT formula with area operator $L$ and furthermore that $L \in \mathcal{M}$. The derivation for $(V,\bar A,\mathcal{M}')$ is exactly the same, just with the roles of $i$ and $j$ swapped; it yields the same $L$, now with $L \in \mathcal{M}'$, so $L \in \mathcal{M} \cap \mathcal{M}' = Z_\mathcal{M}$, that is, $L$ is in the center $Z_\mathcal{M}$.
\end{proof} According to \cite{harlow2018tasi} the reverse direction also holds: if $(V, A, \mathcal{M} )$ and $(V,\bar A, \mathcal{M}')$ both satisfy an RT formula with the same $L$, then $(V,A,\mathcal{M})$ must have complementary recovery. So actually, complementary recovery is equivalent to the existence of a `two-sided RT formula' for both $(V, A, \mathcal{M} )$ and $(V,\bar A, \mathcal{M}')$. This suggests the possibility that complementary recovery is actually stronger than the existence of a one-sided RT formula. Is it possible for $(V,A,\mathcal{M})$ to exhibit an RT formula, but not $(V,\bar A,\mathcal{M}')$? \subsection{A recipe for analysing codes} \label{sec:recipe} The derivation in this section not only defines the holographic properties of an error-correcting code, but also gives a recipe for computing the area operator of the RT formula: \begin{enumerate} \item Follow Theorem~\ref{thm:whatalgebra} and compute $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$, and verify that it is indeed a von Neumann algebra. If so, we have complementary recovery. \begin{description} \item \emph{Shortcut}: If $\mathcal{M}$ has a trivial center ($Z_\mathcal{M} = \langle I\rangle_\text{vN}$), then since $L \in Z_\mathcal{M}$ we already know that the code must have a trivial RT formula. \end{description} \item Compute the Wedderburn decomposition on $\H_L$ that follows from $\mathcal{M}$. Follow Lemma~\ref{lemma:factorization} and define a basis $\ket{\alpha,i,j}$ that `lines up with $\mathcal{M}$'. \item Apply Lemma~\ref{lemma:factorization} twice to obtain unitaries $U_A$ and $U_{\bar A}$ such that: $(U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}} \otimes \ket{\chi_{\alpha}}\otimes \ket{\bar \psi_{\alpha,j}}$. \item Obtain the states $\ket{\chi_\alpha}$ and compute their entanglement entropies. These are the eigenvalues of the area operator. \end{enumerate} This is already a very complicated series of steps.
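Step 1, including the shortcut, is easy to automate. The helper below is our own sketch (it assumes, for simplicity, that $A$ is the first qubit of the physical system): it computes the correctable set, tests closure under multiplication, and returns the dimension of the center.

```python
import numpy as np

def step1(V):
    """Recipe step 1 (plus shortcut) for A = the first physical qubit:
    compute the images V^dag (O_A x I) V, test closure under multiplication,
    and return the dimension of the center of their span."""
    d_log, d_rest = V.shape[1], V.shape[0] // 2
    imgs = []
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2)); E[i, j] = 1.0
            imgs.append(V.conj().T @ np.kron(E, np.eye(d_rest)) @ V)
    # Orthonormal basis of the span of the images, via SVD.
    stack = np.stack([X.reshape(-1) for X in imgs])
    _, s, Vt = np.linalg.svd(stack, full_matrices=False)
    P = Vt[: int(np.sum(s > 1e-9))]        # rows: orthonormal basis vectors
    basis = [row.reshape(d_log, d_log) for row in P]

    def in_span(X):
        x = X.reshape(-1)
        return np.abs(P.conj().T @ (P @ x) - x).max() < 1e-9

    closed = all(in_span(X @ Y) for X in basis for Y in basis)
    # Center: coefficient vectors c with [sum_i c_i B_i, B_k] = 0 for all k.
    K = np.column_stack(
        [np.concatenate([(Bi @ Bk - Bk @ Bi).reshape(-1) for Bk in basis])
         for Bi in basis])
    center_dim = len(basis) - np.linalg.matrix_rank(K, tol=1e-9)
    return closed, center_dim

# Repetition encoding |0> -> |00>, |1> -> |11>: an abelian algebra, so the
# center is the full two-dimensional algebra (non-trivial).
V_rep = np.zeros((4, 2)); V_rep[0, 0] = V_rep[3, 1] = 1.0
closed_rep, zdim_rep = step1(V_rep)

# The qutrit-into-two-qubits code of Example badcode: closure fails.
V_bad = np.zeros((4, 3)); V_bad[0b00, 0] = V_bad[0b01, 1] = V_bad[0b10, 2] = 1.0
closed_bad, _ = step1(V_bad)
```

When `closed` is `False` no von Neumann algebra with complementary recovery exists (Theorem~\ref{thm:whatalgebra}); when it is `True` and `center_dim == 1` the shortcut applies and the RT formula is trivial.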
While computing $\mathcal{M}$ is not so difficult for small codes, the later steps where we explicitly construct the $\ket{\chi_\alpha}$ states can be cumbersome. For this reason we recall that Theorem~\ref{thm:complementaritytoRT} showed that $L \in Z_\mathcal{M}$. So a trivial center implies a trivial RT formula. The intuition is that an interesting holographic code features a variety of superselection sectors, each representing a different geometry with a different area. The center $Z_\mathcal{M}$ is the set of operators acting proportionally to the identity on each sector. Thus, if the center is trivial, there is only one superselection sector, so there can only be one area. This provides a convenient shortcut for analyzing the holographic properties of codes. In the next section we will practice this recipe on various examples. \section{Atomic examples} \label{sec:examples} In the previous section we discussed holographic properties of an isometry $V: \H_L \to \H$, a subregion $A$ and a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\H_L)$. Together $(V,A,\mathcal{M})$ can exhibit `complementary recovery' if $\mathcal{M}$ can be recovered from $A$ and its commutant $\mathcal{M}'$ can be recovered from $\bar A$. Furthermore, $(V,A,\mathcal{M})$ are said to exhibit an `RT formula' if the following equation holds for all states $\rho$ on $\H_L$: \begin{align} S(\text{Tr}_{\bar A}(V \rho V^\dagger)) = S(\mathcal{M},\rho) + \text{Tr}(\rho L). \end{align} We established two results: First, we showed that the isometry $V$ and the subregion $A$ together uniquely determine an $\mathcal{M}$ so that $(V,A,\mathcal{M})$ have complementary recovery, and gave a simple method for calculating $\mathcal{M}$ if it exists. Second, we showed that complementary recovery implies that an RT formula holds for both $(V,A,\mathcal{M})$ and $(V,\bar A,\mathcal{M}')$. In this section we give some examples of quantum error-correcting codes that exhibit an RT formula.
These examples aim to be non-trivial while using as few qubits as possible, motivating the name `atomic'. We begin with simple examples where the equation above holds in a trivial way, but then build our way up to an example that features an RT formula in which every single term is nonvanishing. These toy models are a useful stepping stone toward an intuitive understanding of holography and its connection to quantum error correction. In particular, the statement of Lemma~\ref{lemma:factorization} and the proof of Theorem~\ref{thm:complementaritytoRT} made heavy use of abstract decompositions of the Hilbert spaces as well as various intermediate states. These arguments are significantly easier to understand when keeping the examples in mind. In Theorem~\ref{thm:whatalgebra} we showed that $V,A$ together determine the algebra $\mathcal{M}$. In our examples, however, we only specify the encoding isometry $V$. This is because these examples actually exhibit RT formulae for all `contiguous subregions' $A$. That is, the physical Hilbert space is to be thought of as a ring of qubits, and $A$ can only contain adjacent sets of qubits. Moreover, the isometries $V$ are sufficiently symmetrical that the RT formulae for all these different subregions $A$ are identical, provided the subregions are large enough. Combined with the fact that $(V,A,\mathcal{M})$ and $(V,\bar A, \mathcal{M}')$ have the same area operator, the analysis is thus greatly simplified. Recall from the proof of Theorem~\ref{thm:complementaritytoRT} that for any state $\rho$ we can derive $\rho_\alpha$ and $p_\alpha$ so that the algebraic entropy can be written as: \begin{align} S(\mathcal{M},\rho) = \sum_{\alpha} p_\alpha \log(p_\alpha^{-1}) + \sum_\alpha p_\alpha S(\text{Tr}_{\bar L_\alpha}( \rho_\alpha)), \end{align} which intuitively splits the entropy into a `classical term' and a `quantum term'.
The classical term is indeed just the classical entropy corresponding to the probabilities $p_\alpha$, while the quantum term is a probabilistic mixture of various von Neumann entropies. Substituting this expansion into the RT formula, we obtain an equation with four terms. We name the first three $S_A$ after the subregion $A$, $S_c$ for `classical', and $S_q$ for `quantum': \begin{align} \underbrace{S(\text{Tr}_{\bar A}(V \rho V^\dagger))}_{S_A} = \underbrace{\sum_{\alpha} p_\alpha \log(p_\alpha^{-1})}_{S_c} + \underbrace{\sum_\alpha p_\alpha S(\text{Tr}_{\bar L_\alpha}( \rho_\alpha))}_{S_q} + \text{Tr}(\rho L). \label{eqn:expandedrt} \end{align} The structure of this section is as follows: we begin with three examples where only one of the terms $S_c, S_q$ and $\text{Tr}(\rho L)$ is nonzero. Then we give three examples where exactly two terms are nonvanishing. Then, finally, we give one example where all three terms appear. The definitions of all the isometries $V$ are summarized in Figure~\ref{fig:circuits}.
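Every term in this expansion is ultimately a von Neumann entropy of some reduced density matrix. For the few-qubit examples that follow, a small self-contained numpy helper (the function names are illustrative, not from any library) suffices to compute such quantities:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def reduced_state(psi, dims, keep):
    """Trace out all tensor factors of the pure state |psi> except `keep`."""
    psi = psi.reshape(dims)
    traced = [i for i in range(len(dims)) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

# Sanity check: a Bell state has exactly one bit of entanglement entropy.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(von_neumann_entropy(reduced_state(bell, (2, 2), keep=(0,))))  # 1.0
```

The one-bit Bell-state entropy computed here is exactly the area contribution that appears in several of the examples below.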
\begin{figure}[t] \centering \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=.8em { \lstick{\alpha} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw }$$ \caption{$S_A = S_\text{c} + \cancel{S_\text{q}} + \cancel{\text{Tr}(\rho L)} $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=.8em { \lstick{\ket{+}} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw }$$ \caption{$S_A = \cancel{S_\text{c}} + \cancel{S_\text{q}} + \text{Tr}(\rho L) $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.6em @R=1.3em { \lstick{i} & \qw\\ \lstick{j} & \qw }$$ \caption{$S_A = \cancel{S_\text{c}} + S_\text{q} + \cancel{\text{Tr}(\rho L)} $} \end{subfigure} \vspace{5mm} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.1em { \lstick{\alpha} & \ctrl{2} & \qw \\ \lstick{i} & \qw & \qw & \qw \\ \lstick{\ket{0}} & \targ & \qw \\ \lstick{j} & \qw & \qw }$$ \caption{$S_A = S_\text{c} + S_\text{q} + \cancel{\text{Tr}(\rho L)} $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.2em { \lstick{\alpha} & \ctrl{2} & \ctrl{1} & \qw \\ \lstick{\ket{+}} & \qw & \ctrl{2} & \qw \\ \lstick{\ket{0}} & \targ & \qw & \qw \\ \lstick{\ket{0}} & \qw & \targ & \qw }$$ \caption{$S_A = S_\text{c} + \cancel{S_\text{q}} + \text{Tr}(\rho L) $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.3em { \lstick{i} & \qw & \qw \\ \lstick{\ket{+}} & \ctrl{2} & \qw \\ \lstick{j} & \qw & \qw \\ \lstick{\ket{0}} & \targ & \qw }$$ \caption{$S_A = \cancel{S_\text{c}} + S_\text{q} + \text{Tr}(\rho L) $} \end{subfigure} \vspace{5mm} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.3em { \lstick{\alpha} & \ctrl{3} & \ctrl{2} & \qw \\ \lstick{i} & \qw & \qw & \qw \\ \lstick{\ket{+}} & \qw & \ctrl{3} & \qw \\ \lstick{\ket{0}} & \targ & \qw & \qw\\ \lstick{j} & \qw & \qw & \qw \\ \lstick{\ket{0}} & \qw & \targ & \qw }$$ \caption{$S_A = S_\text{c} + S_\text{q} + \text{Tr}(\rho L) $} 
\end{subfigure} \caption{\label{fig:circuits}Examples of encoding isometries $V$ considered in this section. All of these exhibit an RT formula as in (\ref{eqn:expandedrt}), but various terms vanish as shown. The logical Hilbert space $\H_L$ always factors into $\H_\alpha \otimes \H_i \otimes \H_{j}$, with the input qubits marked as such.} \end{figure} \subsection{Codes with one term} We specify all the isometries in terms of quantum circuits, which makes many of the non-trivial Hilbert space decompositions in Lemma~\ref{lemma:factorization} and Theorem~\ref{thm:complementaritytoRT} much simpler to understand. In particular, recall from Lemma~\ref{lemma:factorization} that $\mathcal{M}$ induces a decomposition on $\H_L$ of the form: \begin{align} \H_L = \bigoplus_\alpha \left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha}\right). \end{align} This very general form of a decomposition accounts for the fact that the dimensions of $\H_{L_\alpha}$ and $\H_{\bar L_\alpha}$ may vary depending on $\alpha$. This will not be the case for these examples, so we can simply remove the $\alpha$ dependence, relabeling $\H_{L_\alpha} \to \H_i$ and $\H_{\bar L_\alpha} \to \H_j$, and write: \begin{align} \H_L = \H_\alpha \otimes \H_i \otimes \H_{j}. \end{align} Each of the degrees of freedom $\alpha,i$ and $j$ is then simply encoded by the corresponding qubit, which is labeled as such on the left side of the circuit. Here the block index $\alpha$ of the decomposition carries its own Hilbert space factor $\H_\alpha$. \begin{atomicexample} \label{ex:c} We begin with an example where the RT formula is simply $S_A = S_c$: \begin{align} V_{(a)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=.8em { \lstick{\alpha} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw} \end{array} \end{align} Without loss of generality we pick $\H_A$ to be the first qubit and $\H_{\bar A}$ to be the second qubit.
Intuitively, when $\H_{\bar A}$ is traced out, then the qubit $\H_A$ acts like it has been measured in the computational basis. The probabilities of the two outcomes $p_0$ and $p_1$ are a classical probability distribution. Following Theorem~\ref{thm:whatalgebra}, we compute $V^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A}) V$ to obtain $\mathcal{M}$. A general element $O \in \mathcal{L}(\H_A) \otimes I_{\bar A}$ can be expanded into Pauli matrices: \begin{align} O &= \alpha (I\otimes I) + \beta (X \otimes I) + \gamma (Y \otimes I) + \delta( Z \otimes I)\\ V^\dagger OV &= \alpha I + \delta Z. \end{align} So $\mathcal{M}$ is indeed a von Neumann algebra: the set of diagonal operators on $\H_\alpha$. This means that observables in $\mathcal{M}$ cannot really distinguish superpositions over different $\alpha$ from classical probability distributions over $\alpha$, since $\rho_\mathcal{M}$ is also diagonal. So the algebraic entropy $S(\mathcal{M},\rho)$ is also entirely classical. Notice, however, that $\mathcal{M}$ is its own center, and is not trivial! So we see that a von Neumann algebra with a non-trivial center can still have a trivial area operator $L = 0$. \end{atomicexample} \begin{atomicexample} \label{ex:l}Next, we consider an isometry where $S_A = \text{Tr}(\rho L)$. In this case the logical Hilbert space $\H_L$ is one dimensional: there are no logical qubits. We can still define a density matrix, though: the $1 \times 1$ matrix $\rho = 1$. \begin{align} V_{(b)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=.8em { \lstick{\ket{+}} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw }\end{array} \end{align} $V_{(b)}$ prepares a Bell state, so $S_A$ is simply the constant 1. Furthermore, $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A}) V $ is just the set of scalars, so $S(\mathcal{M},\rho)$ vanishes. We can thus achieve $S_A = 1 = \text{Tr}(\rho L)$ by making the area operator the $1 \times 1$ matrix $L = 1$.
This is consistent with the fact that $\mathcal{M}$, being the set of scalars, has a trivial center. \end{atomicexample} \begin{atomicexample} \label{ex:q} Third, we consider an isometry with only a quantum part: $S_A = S_q$. In this case $\H_L $ and $\H$ are both two qubits, and $V$ is the identity. \begin{align} V_{(c)} := \hspace{3mm}\begin{array}{c}\Qcircuit @C=0.6em @R=1.3em { \lstick{i} & \qw\\ \lstick{j} & \qw }\end{array} \end{align} We see that $\H_i = \H_A$ and $\H_j = \H_{\bar A}$, so $S_A = S(\text{Tr}_j(\rho))$. We also have that $\mathcal{M} = \mathcal{L}(\H_i) \otimes I_j$, which is a factor, so the associated Hilbert space decomposition has only one big block with $\alpha = 0$ and no other values of $\alpha$. This makes the distribution over blocks trivial with $p_0 = 1$, so the classical part of $S(\mathcal{M},\rho)$ vanishes and only the quantum component remains. $\mathcal{M}$ is a factor, so it has a trivial center, consistent with $L = 0$. \end{atomicexample} Indeed, the above examples are rather trivial, since each features only one term in the RT formula. However, they are the fundamental building blocks for codes with more complicated RT formulae. \subsection{Codes with two terms} Now we move on to RT formulae with two non-trivial terms. These allow us to make some of the steps in the proof of Theorem~\ref{thm:complementaritytoRT} more explicit. In particular, the proof involved further decomposition of $\H_{A}$ and $\H_{\bar A}$ into: \begin{align} \H_{A} = \bigoplus_\alpha \left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3} \hspace{10mm} \H_{\bar A} = \bigoplus_\alpha \left( \H_{\bar A^\alpha_1} \otimes \H_{\bar A^\alpha_2} \right) \oplus \H_{\bar A_3}. \end{align} As with $ \H_L = \bigoplus_\alpha \left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha}\right)$, the $\alpha$ dependence allows the different blocks enumerated by $\alpha$ to have varying dimension. This will not be the case for our examples.
Furthermore, the extra $\H_{A_3}$ allows the decomposition to factorize only the image of $V$ in $\H_A$. In our case it is actually easier to just factor all of $\H_A$ and $\H_{\bar A}$ directly. \begin{align} \H_{A} = \H_{A_\alpha} \otimes \H_{A_1} \otimes \H_{A_2} \hspace{10mm} \H_{\bar A} = \H_{\bar A_\alpha} \otimes \H_{\bar A_1} \otimes \H_{\bar A_2}. \end{align} As with $\H_L$, the block index $\alpha$ of the decomposition factors out onto its own qubit $\H_{A_\alpha}$ or $\H_{\bar A_\alpha}$. The fact that $\alpha$ is visible from both sides of the bipartition is what lends it its classical behavior. In our circuits we now label the right side with the associated decomposition of $\H_{A}$ and $\H_{\bar A}$ as well. In Theorem~\ref{thm:complementaritytoRT}, the purpose of the decomposition of $\H_A$ and $\H_{\bar A}$ was to show that there exist unitaries $U_A$ and $U_{\bar A}$ that bring the states $V\ket{\alpha,i,j}$ into a particular form, specifically that of equation (\ref{eqn:twosideddecomp}): \begin{align} (U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A_\alpha A_1} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_\alpha\bar A_1}. \label{eqn:twosideddecomp_again} \end{align} Then, the entanglement entropies of the $\ket{\chi_\alpha}$ states across $A_2\bar A_2$ yield the eigenvalues of the area operator. In our examples $U_A$ and $U_{\bar A}$ will just be the identity. \begin{atomicexample} \label{ex:cq} The following code has the RT formula $S_A = S_c + S_q$. This is the first example where all three components of $\H_L = \H_\alpha \otimes \H_i \otimes \H_j$ are two-dimensional. Below we have selected $\H_A$ as the first two qubits and $\H_{\bar A}$ as the last two qubits. However, other choices of $A$ will have the same formula provided $A$ is a pair of adjacent qubits.
\begin{align} V_{(d)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.1em { \lstick{\alpha} & \ctrl{2} & \qw & \rstick{A_\alpha} \\ \lstick{i} & \qw & \qw & \qw & \rstick{A_1} \\ \lstick{\ket{0}} & \targ & \qw & \rstick{\bar A_\alpha} \\ \lstick{j} & \qw & \qw & \rstick{\bar A_1} }\end{array} \end{align} We begin by computing $\mathcal{M}$: just as in Example~\ref{ex:q}, $\H_A$ has full access to $\H_i$, and, by the same calculation as in Example~\ref{ex:c}, $\H_A$ has access to the diagonal operators on $\H_\alpha$. On the other hand, it must act like the identity on $\H_j$. $Z_\mathcal{M}$ acts like the identity on $\H_i,\H_j$, but can act non-trivially on $\H_\alpha$. So we cannot yet conclude that the RT formula is trivial. At this point we see that a basis $\{\ket{\alpha,i,j}\}$ for $\H_L$ that `lines up with $\mathcal{M}$' as in Lemma~\ref{lemma:factorization} is actually just the computational basis on $\H_L$. We can just write $\ket{\alpha,i,j} = \ket{\alpha}\ket{i}\ket{j}$. Considering such a state we see that: \begin{align} V_{(d)} \ket{\alpha,i,j} = \ket{\alpha}_{A_\alpha}\ket{i}_{A_1}\ket{\alpha}_{\bar A_\alpha} \ket{j}_{\bar A_1}. \end{align} Since this state already splits so cleanly into states $\ket{\psi_{\alpha,i}}_{A_\alpha A_1} = \ket{\alpha}_{A_\alpha}\ket{i}_{A_1}$ and $\ket{\bar \psi_{\alpha,j}}_{\bar A_\alpha \bar A_1} = \ket{\alpha}_{\bar A_\alpha}\ket{j}_{\bar A_1}$, we can simply select $U_A$ and $U_{\bar A}$ to be the identity. The only thing missing from equation (\ref{eqn:twosideddecomp_again}) is the $\ket{\chi_\alpha}$ on $A_2,\bar A_2$. However, both other contributions are present. The $\alpha$ degree of freedom is visible from $\bar A$, and therefore acts, from $A$'s perspective, like it has been measured. The $\H_i \otimes \H_j$ register might be entangled with $\H_\alpha$, so after the measurement it will collapse to one of the $\rho_\alpha$ states from the decomposition $\rho_\mathcal{M} = \sum_\alpha p_\alpha \rho_\alpha$.
The quantum term of the entropy is then the associated probabilistic mixture of the von Neumann entropy of $\rho_\alpha$ reduced to $\H_i$. Writing out the full formula: \begin{align} \underbrace{S(\text{Tr}_{\bar A}(V \rho V^\dagger))}_{S_A} = \underbrace{\sum_{\alpha} p_\alpha \log(p_\alpha^{-1})}_{S_c} + \underbrace{\sum_\alpha p_\alpha S(\text{Tr}_{j}( \rho_\alpha))}_{S_q}. \end{align} Just as in Example~\ref{ex:c}, $\mathcal{M}$ has a non-trivial center, but we still have $L = 0$. \end{atomicexample} \begin{atomicexample} \label{ex:cl} Next we consider a code with a classical term and an area term, but no quantum term: $S_A = S_c + \text{Tr}(\rho L)$. This is the first code where $L$ is not proportional to the identity. \begin{align} V_{(e)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.2em { \lstick{\alpha} & \ctrl{2} & \ctrl{1} & \qw & \rstick{A_\alpha} \\ \lstick{\ket{+}} & \qw & \ctrl{2} & \qw & \rstick{A_2} \\ \lstick{\ket{0}} & \targ & \qw & \qw & \rstick{\bar A_\alpha} \\ \lstick{\ket{0}} & \qw & \targ & \qw & \rstick{\bar A_2} }\end{array} \end{align} The von Neumann algebra $\mathcal{M}$, by a calculation similar to that in Example~\ref{ex:c}, is again just the set of diagonal operators on $\H_L = \H_\alpha$. The algebraic entropy $S(\mathcal{M},\rho)$ is then again the classical entropy of the probability distribution $\{p_\alpha\}$. Since there are multiple superselection sectors corresponding to different $\alpha$, we do not have a trivial center. For this example, the entropy $S_A$ can actually be computed explicitly for an arbitrary logical pure state $\beta_0\ket{0} + \beta_1\ket{1}$, where $p_\alpha = |\beta_\alpha|^2$.
The circuit conditionally prepares a Bell state depending on the value of $\alpha$: \begin{align} V_{(e)} ( \beta_0\ket{0} + \beta_1\ket{1}) &= \beta_0 \ket{0\text{+}00} + \beta_1 \frac{ \ket{1010} +\ket{1111}}{\sqrt{2}}\\ \text{Tr}_{\bar A}( V_{(e)}\rho V_{(e)}^\dagger ) &= |\beta_0|^2 \ket{0}\bra{0}_{A_\alpha} \otimes \ket{+}\bra{+}_{A_2} + |\beta_1|^2 \ket{1}\bra{1}_{A_\alpha} \otimes \frac{I_{A_2}}{2} \\ S_A = S(\text{Tr}_{\bar A}( V_{(e)}\rho V_{(e)}^\dagger )) &= \left[|\beta_0|^2\log(|\beta_0|^{-2}) + |\beta_1|^2\log(|\beta_1|^{-2})\right]\\ &+ \left[ |\beta_0|^2 S( \ket{+}\bra{+} ) + |\beta_1|^2 S( I/2) \right]\\ &= \sum_\alpha p_\alpha \log(p_\alpha^{-1}) + \text{Tr}\left( \rho \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix} \right). \end{align} So we have explicitly derived an area operator $L = \ket{1}\bra{1}$. It is also worth noting that equation (\ref{eqn:twosideddecomp_again}) is now almost fully realized: while $A_1$ and $\bar A_1$ are missing, we now have: \begin{align} V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A_\alpha} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_\alpha}, \end{align} where $\ket{\psi_{\alpha,i}} = \ket{\bar\psi_{\alpha,j}} = \ket{\alpha}$ and $\ket{\chi_0} = \ket{+}\ket{0}$ and $\ket{\chi_1}$ is a Bell state. We see that $L = \sum_\alpha S(\text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha})) \cdot I_\alpha$ matches what we derived above. \end{atomicexample} \begin{atomicexample} \label{ex:ql} Now we consider a code with a quantum term and an area term, but no classical term: $S_A = S_q + \text{Tr}(\rho L)$. This code actually features an area operator proportional to the identity again: the $\alpha$ degree of freedom determines $\ket{\chi_\alpha}$, whose entanglement in turn determines the area. But since there is only one $\alpha$, we have a trivial center and there can be no superposition over areas.
\begin{align} V_{(f)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.2em { \lstick{i} & \qw & \qw & \rstick{A_1} \\ \lstick{\ket{+}} & \ctrl{2} & \qw & \rstick{A_2} \\ \lstick{j} & \qw & \qw & \rstick{\bar A_1} \\ \lstick{\ket{0}} & \targ & \qw & \rstick{\bar A_2} }\end{array} \end{align} Similarly to Example~\ref{ex:q}, we have $\H_i = \H_{A_1}$ and $\H_j = \H_{\bar A_1}$, and $\mathcal{M} = \mathcal{L}(\H_i) \otimes I_j$. Since $\mathcal{M}$ is a factor, the only contribution to $S(\mathcal{M},\rho)$ is the entropy of the reduced state on $\H_i$, that is, $S(\text{Tr}_j \rho)$. There is only one value of $\alpha$. However, the entropy $S_A$ now features two contributions: the entropy of the state on $\H_i$ visible from $\H_{A_1}$, and the entropy of the Bell state across $\H_{A_2}\otimes \H_{\bar A_2}$. We can see this from the form of $V_{(f)}\ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A_1} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_1}$ where $\ket{\psi_{\alpha,i}} = \ket{i}$, $\ket{\bar\psi_{\alpha,j}} = \ket{j}$ and $\ket{\chi_\alpha}$ is a Bell state. As a result, $S_A - S_q = 1$, so we achieve $S_A = S_q + \text{Tr}(\rho L)$ by setting $L = I$. \end{atomicexample} As we have seen, combining two of the primitive circuits from Examples~\ref{ex:c}, \ref{ex:l}, and~\ref{ex:q} already produces non-trivial results, including states of the form $\ket{\alpha,i,j}$ in Example~\ref{ex:cq} and area operators not proportional to the identity in Example~\ref{ex:cl}. Of particular importance in Example~\ref{ex:cl} was the conditional preparation of a Bell state based on $\alpha$. This caused the different $\ket{\chi_\alpha}$ states to exhibit varying amounts of entanglement, each becoming a different eigenvalue of $L$.
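Before moving on, the RT formula of Example~\ref{ex:cl} can also be checked numerically. The following sketch (numpy, with illustrative helper names, not code from this paper) constructs the two columns of $V_{(e)}$ directly and verifies $S_A = S_c + \text{Tr}(\rho L)$ with $L = \ket{1}\bra{1}$ for a generic logical state:

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def ket(bits):
    """Computational basis state |bits> as a vector."""
    return np.eye(1 << len(bits))[int("".join(map(str, bits)), 2)]

# Columns of V_(e): images of the logical basis states, with qubit order
# A_alpha, A_2, Abar_alpha, Abar_2 as in the circuit.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
V0 = np.kron(np.kron(ket([0]), plus), ket([0, 0]))          # |0,+,0,0>
V1 = (ket([1, 0, 1, 0]) + ket([1, 1, 1, 1])) / np.sqrt(2)   # Bell branch
V = np.stack([V0, V1], axis=1)                              # 16 x 2 isometry

b0, b1 = np.sqrt(0.3), np.sqrt(0.7)       # an arbitrary logical pure state
psi = V @ np.array([b0, b1])

# Trace out Abar (last two qubits): rows index A, columns index Abar.
m = psi.reshape(4, 4)
rho_A = np.tensordot(m, m.conj(), axes=(1, 1))
S_A = entropy(rho_A)

p = np.array([b0**2, b1**2])
S_c = float(-(p * np.log2(p)).sum())      # classical term
area = p[1] * 1.0                         # Tr(rho L) with L = |1><1|
print(S_A, S_c + area)                    # the two sides agree
```

The agreement holds for any choice of $\beta_0,\beta_1$, reflecting that $\ket{\chi_0}$ is a product state (zero bits) while $\ket{\chi_1}$ is a Bell state (one bit).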
\subsection{A complete example} To finish the section, we give a final example that features all three terms of the RT formula, and makes both of the decompositions $\H_L = \H_\alpha \otimes \H_i \otimes \H_j$ and $\H_A = \H_{A_\alpha} \otimes \H_{A_1} \otimes \H_{A_2}$ completely non-trivial. \begin{atomicexample} \label{ex:cql} This six-qubit code's RT formula has all three terms on the right-hand side: $S_A = S_c + S_q + \text{Tr}(\rho L)$. We consider the subregion $A$ to be the first three qubits, but the same RT formula holds for any choice of $A$ that is three adjacent qubits. Smaller or larger $A$ will exhibit a simpler RT formula, similar to those from the previous examples. \begin{align} V_{(g)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.3em { \lstick{\alpha} & \ctrl{3} & \ctrl{2} & \qw & \rstick{A_\alpha}\\ \lstick{i} & \qw & \qw & \qw & \rstick{A_1} \\ \lstick{\ket{+}} & \qw & \ctrl{3} & \qw & \rstick{A_2} \\ \lstick{\ket{0}} & \targ & \qw & \qw & \rstick{\bar A_\alpha} \\ \lstick{j} & \qw & \qw & \qw & \rstick{\bar A_1} \\ \lstick{\ket{0}} & \qw & \targ & \qw & \rstick{\bar A_2} }\end{array} \end{align} Similarly to Example~\ref{ex:cq}, $\H_A$ has full access to $\H_i$ via $\H_{A_1}$, as well as access to the diagonal operators on $\H_{\alpha}$ via $\H_{A_\alpha}$ from the calculation in Example~\ref{ex:c}, and no access to $\H_j$. Therefore, the basis $\ket{\alpha,i,j}$ is just the computational basis on the three logical qubits with $\ket{\alpha,i,j} = \ket{\alpha}\ket{i}\ket{j}$. 
If we apply the isometry $V_{(g)}$ to such a basis state we get the full equation (\ref{eqn:twosideddecomp_again}): \begin{align} V_{(g)} \ket{\alpha,i,j} &= \ket{\psi_{\alpha,i}}_{A_\alpha A_1} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_\alpha \bar A_1}, \\[2mm] \ket{\psi_{\alpha,i}}_{A_\alpha A_1} &= \ket{\alpha}_{A_\alpha}\ket{i}_{A_1}, \hspace{15mm} \ket{\bar \psi_{\alpha,j}}_{\bar A_\alpha \bar A_1} = \ket{\alpha}_{\bar A_\alpha}\ket{j}_{\bar A_1}, \\ \ket{\chi_0}_{A_2 \bar A_2} &= \ket{+}_{A_2}\ket{0}_{\bar A_2}, \hspace{14mm} \ket{\chi_1}_{A_2 \bar A_2} = \frac{\ket{00}_{A_2\bar A_2}+\ket{11}_{A_2\bar A_2} }{\sqrt{2}}. \end{align} As in Example~\ref{ex:cl}, we conditionally prepare a Bell state on $\H_{A_2}\otimes \H_{\bar A_2}$, so following the same calculation we see that the area operator is $L = \ket{1}\bra{1}$. Additionally, this example features the $\H_{A_1}$ and $\H_{\bar A_1}$ spaces corresponding to $\H_i$ and $\H_j$, contributing a quantum term to the RT formula as in Example~\ref{ex:cq}. \end{atomicexample} These circuits seem to be the smallest examples of qubit quantum error-correcting codes to exhibit interesting holographic properties. However, there are some ideas that these circuits still oversimplify. First, the factorizations $\H_L = \H_\alpha \otimes \H_i \otimes \H_j$ and $\H_A = \H_{A_\alpha} \otimes \H_{A_1} \otimes \H_{A_2}$ are a significant simplification of the decompositions $\H_L = \bigoplus_\alpha\left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha} \right)$ and $\H_A = \bigoplus_\alpha \left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3}$ respectively. Not only are all the dimensions of $\H_{L_\alpha},\H_{\bar L_\alpha}, \H_{A^\alpha_1}, \H_{A^\alpha_2}$ independent of $\alpha$, but consequently the $\alpha$ degree of freedom neatly factors out onto a separate qubit.
This is a highly non-generic feature for von Neumann algebras: while $\alpha$ can be measured via a projective measurement, it usually does not factor out into its own degree of freedom like this. Second, none of these examples exhibit the `radial commutativity' discussed in Subsection \ref{sub:radial}. In holography, operators acting on a single point at the boundary do not have access to any bulk degrees of freedom and must therefore commute with all bulk operators. In a finite-dimensional analogy, \cite{harlow2017ryu} constructed a three-qutrit code where operators acting on any single qutrit must commute with the logical operators of the code. However, the codes presented here do not have this property. In Example~\ref{ex:cql}, access to the physical qubit labeled $A_1$ already gives full access to the $\H_i$ factor of the logical Hilbert space. One method for remedying this could be to encode each of the physical qubits of Example~\ref{ex:cql} into another quantum error-correcting code that protects against single-qubit erasures. \section{Discussion} \label{sec:discussion} Toy models for holographic quantum error correction serve as a microcosm for understanding AdS/CFT. In this work we have reformulated and extended the framework of \cite{harlow2017ryu} with a uniqueness result and several examples. These together serve to make holographic quantum error correction `more concrete' in the sense that they pave the road to more complex examples. In this discussion we briefly summarize the ways in which our construction differs from previous work, and also list some future directions. The construction of \cite{harlow2017ryu} is, of course, central to our work and to discussions of holographic quantum error correction in general. However, we made several changes to the formalism to facilitate our particular viewpoint. Here is a brief summary of these changes: \begin{description} \item[Code subspace vs encoding isometry.]
In holography, we can think of the bulk Hilbert space as `emanating from' the boundary space and physically place the bulk into the boundary. In this sense, we could consider the space of allowed bulk states a subspace $\H_\text{code} $ of the physical space $\H$, which could be defined via some set of constraints on the boundary qubits. This perspective might be effective for stabilizer codes. However, the codes we discuss in Section~\ref{sec:examples} are more naturally described via a quantum circuit, which is an active transformation. For this reason we explicitly think of the bulk space as a separate space $\H_L$, which is not emanating from the boundary in the same way, and is then mapped to the boundary space via an encoding isometry $V :\H_L \to \H$. Of course, we could still switch to the old picture by defining $\H_\text{code}$ to be the image of $V$. \item[Step by step vs general case.] A key result of \cite{harlow2017ryu} is that many seemingly disparate ideas are actually equivalent: subregion duality, the existence of an RT formula, and entropic properties of the holographic states. This is an illuminating observation about the general properties of holographic quantum error correction codes. However, in our work we are interested in the analysis of particular codes: we want to consider a particular encoding isometry $V$ and obtain its RT formula. To that end, we `unroll' the sequence of equivalences given in Theorem~5.1 of \cite{harlow2017ryu} and focus on the direction that yields a method for computing the area operator. Our derivation of this result in Theorem~\ref{thm:complementaritytoRT} goes into significantly more detail, and the resulting recipe from Section~\ref{sec:recipe} makes the analysis of codes more straightforward. \item[Uniqueness of the algebra.] Following \cite{harlow2017ryu}, we still consider holography to be a property that a code, a bipartition, and a von Neumann algebra can have together. 
But to some extent this is no longer really necessary: we can say that holography is merely a property of a code and a bipartition, because when these are fixed, the von Neumann algebra is unique if it exists. Ideally, we would like to go even further and say that it is a property of a quantum error correction code alone, asserting that every (reasonable) bipartition obeys an RT formula. \end{description} In particular, making `holography' a property of a code alone leaves a couple of open questions. Furthermore, there are several directions in which this framework could be expanded. \begin{description} \item[A `one-sided' RT formula without complementary recovery?] In our Theorem~\ref{thm:complementaritytoRT}, we demonstrate that complementary recovery of $(V,A,\mathcal{M})$ implies a `two-sided' RT formula, that is, an RT formula for both $(V,A,\mathcal{M})$ and $(V,\bar A,\mathcal{M}')$. Indeed, Theorem~5.1 of \cite{harlow2017ryu} shows that the existence of this `two-sided' RT formula is actually equivalent to complementary recovery. So why do we not simply remove $\mathcal{M}$, since it is uniquely determined by complementary recovery? We have not ruled out the possibility of a `one-sided' RT formula exhibited by just $(V,A,\mathcal{M})$ but not by $(V,\bar A,\mathcal{M}')$. Is this mathematically possible? Does a code with such an RT formula possess a sensible physical interpretation? \item[A non-trivial RT formula for all subregions?] The qubits in Example~\ref{ex:cql} are arranged such that every contiguous subregion $A$ of three qubits has a non-trivial RT formula. But when we consider three qubits that are non-adjacent, the RT formula becomes trivial. We would hope that larger holographic error correction codes have sensible and interesting area operators even when $A$ is not contiguous. But is it possible for \emph{every} subregion to have a non-trivial RT formula? We attempted to construct such a code without success.
It is possible that this difficulty is related to the difficulty of obtaining power-law correlations between generic subregions in holographic tensor networks, observed in e.g.\ \cite{Gesteau:2020hoz,Jahn:2020ukq,Cao:2021wrb}. Since the number of possible subregions grows very quickly, this requirement places many constraints on the code. Thus, we conjecture that this is not possible. Is it possible if we restrict the pieces of the subregions to be at least a certain size? \item[A tensor network with superposition of geometries?] Seminal work by Pastawski, Yoshida, Harlow, and Preskill (the `HaPPY' code) showed that holographic tensor networks can be constructed from a tessellation of hyperbolic space with a fundamental tensor, in their case a perfect tensor. There are many extensions of this construction: for instance, \cite{cao2020approximate} consider replacing the fundamental tensor with skewed Bacon-Shor codes, and \cite{taylor2021holography} consider higher-dimensional tessellations. What happens when we replace the fundamental tensor with one of our circuits? What does operator pushing look like in this scenario? Does the network possess a non-trivial area operator? \item[A holographic stabilizer code?] All the atomic examples that have a non-trivial area operator contain a Toffoli gate, so they are not stabilizer codes. Furthermore, the skewed codes considered by \cite{cao2020approximate}, although they are superpositions of stabilizer codes, are themselves also not stabilizer codes. It appears that the stabilizer formalism places strong limitations on the entanglement properties of the resulting codes, making the design of a stabilizer code with a non-trivial area operator challenging. Is it even possible? \item[Consequences of the uniqueness of $\mathcal{M}$ in quantum gravity?] It is often natural to consider only a subalgebra of the operators in the entanglement wedge of a particular boundary region $A$. For example, we might only be interested in local operators.
But an implication of Theorem~\ref{thm:whatalgebra} is that such a von Neumann algebra cannot exhibit complementary recovery. This is clear from the perspective of error correction, but can it be proved from the AdS perspective as well? It is also natural from the AdS perspective to consider sets of operators which are not subalgebras (such as low-point correlators of local bulk operators)---can anything be said about such cases? \item[Extensions of holographic quantum error correction?] The toy models considered in this work, just like the constructions of \cite{harlow2017ryu} and \cite{cao2020approximate}, are restricted to a single time slice. Can they be extended to exhibit dynamics (similarly to what has been proposed for tensor networks~\cite{kohler2019toy})? What about dynamics with decoherence and black hole formation/evaporation? Since the purpose of the toy models is to illuminate and provide more mathematically tractable examples of AdS/CFT, extending them towards the full capabilities of AdS/CFT is a very natural direction. For a fixed geometry, one expects bulk time evolution (for example, on a Rindler wedge) to be implemented approximately as a local operator in the code subspace, namely the (modular) Hamiltonian, but evolving with the full Hamiltonian of the boundary system should give corrections to this picture. See \cite{jahn2021holographic} for a recent review. \end{description} \section{Complementarity of private and correctable algebras} \label{app:privacy} Here we give a brief overview of the main result of~\cite{crann2016private}, closely following the simpler, finite-dimensional presentation given in~\cite{kribs2018quantum}. A quantum channel $\Phi: \mathcal{L}(\mathcal{H}_A) \rightarrow \mathcal{L}(\mathcal{H}_A)$ is a completely positive, trace-preserving map between spaces of linear operators.
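As a concrete finite-dimensional illustration (a sketch with assumed names, using the standard qubit depolarizing channel rather than any channel from the cited works), one can check trace preservation together with the trace-inner-product relation $\operatorname{Tr}(\Phi(\rho)X) = \operatorname{Tr}(\rho\,\Phi^{\dagger}(X))$ that defines the dual map:

```python
import numpy as np

# Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

# Kraus operators of the depolarizing channel with error probability p.
p = 0.25
kraus = [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P for P in (X, Y, Z)]

def phi(rho):
    """Schroedinger-picture channel: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

def phi_dag(A):
    """Heisenberg-picture dual map: A -> sum_k K^dagger A K (unital)."""
    return sum(K.conj().T @ A @ K for K in kraus)

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # a test state
lhs = np.trace(phi(rho) @ Z)
rhs = np.trace(rho @ phi_dag(Z))
print(np.isclose(lhs, rhs))   # True: the duality relation holds
```

Since $\sum_k K_k^{\dagger}K_k = I$, the channel is trace preserving and its dual is unital, which is the finite-dimensional content of complete positivity plus trace preservation used throughout this appendix.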
The dual map $\Phi^{\dagger}$ of a quantum channel $\Phi$ is defined via the trace inner product $\operatorname{Tr}(\Phi(\rho) X)=\operatorname{Tr}\left(\rho \Phi^{\dagger}(X)\right)$. Using the Stinespring dilation theorem we can express any quantum channel $\Phi$ in terms of its action on an auxiliary Hilbert space $\mathcal{H}_C$ (with $|\mathcal{H}_C|\leq |\mathcal{H}_A|^2$). In particular, there exist a state $\left|\psi_{C}\right\rangle \in \mathcal{H}_{C}$ and a unitary $U$ on $\mathcal{H}_{A} \otimes \mathcal{H}_{C}$ such that for all $\rho \in \mathcal{L}\left(\mathcal{H}_{A}\right)$, \begin{equation} \Phi(\rho)=\operatorname{Tr}_{C} \circ \, \mathcal{U}\left(\rho \otimes\left|\psi_{C}\right\rangle\left\langle\psi_{C}\right|\right)=\operatorname{Tr}_{C} \circ \mathcal{V}(\rho), \end{equation} where $\operatorname{Tr}_{C}$ denotes the partial trace map from $\mathcal{L}\left(\mathcal{H}_{A} \otimes \mathcal{H}_{C}\right)$ to $\mathcal{L}\left(\mathcal{H}_{A}\right)$, the map $\mathcal{U}(\cdot)=U(\cdot) U^{*}$, and $\mathcal{V}(\cdot)=V(\cdot) V^{*}$ is the map implemented by the isometry $V: \mathcal{H}_{A} \rightarrow \mathcal{H}_{A} \otimes \mathcal{H}_{C}$ defined by $V|\psi\rangle=U\left(|\psi\rangle \otimes\left|\psi_{C}\right\rangle\right)$. The Stinespring dilation theorem allows us to define a notion of complementarity for quantum channels. \begin{definition}[complementary map] Given a quantum channel $\Phi$ the complementary map from $\mathcal{L}\left(\mathcal{H}_{A}\right)$ to $\mathcal{L}\left(\mathcal{H}_{C}\right)$ is \begin{equation} \Phi^{C}(\rho)=\operatorname{Tr}_{A} \circ \mathcal{V}(\rho). \end{equation} \end{definition} Equipped with these notions we proceed to define correctable and private algebras. \begin{definition}[correctable algebra, Definition 2.1~\cite{kribs2018quantum}] Let $\mathcal{H}$ be a finite-dimensional Hilbert space (the physical space).
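To make the dilation concrete, the following numerical sketch (our own illustration, not taken from~\cite{crann2016private} or~\cite{kribs2018quantum}) dilates the completely dephasing qubit channel with a CNOT into an ancilla prepared in $\ket{0}$, and obtains both $\Phi$ and its complementary map $\Phi^C$ as the two partial traces of $\mathcal{V}(\rho)$:

```python
import numpy as np

# Sketch (assumed example): dilate the completely dephasing channel
# Phi(rho) = sum_i |i><i| rho |i><i| with a CNOT from A into an ancilla C = |0>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
psi_C = np.array([[1], [0]], dtype=complex)        # ancilla state |0>
V = CNOT @ np.kron(np.eye(2), psi_C)               # V|psi> = U(|psi> (x) |psi_C>)

def partial_trace(rho_AC, keep):
    """Trace out one qubit of a two-qubit state; keep=0 keeps A, keep=1 keeps C."""
    r = rho_AC.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def Phi(rho):                                      # Tr_C o V(.)V*
    return partial_trace(V @ rho @ V.conj().T, keep=0)

def Phi_comp(rho):                                 # complementary map Tr_A o V(.)V*
    return partial_trace(V @ rho @ V.conj().T, keep=1)

plus = np.full((2, 2), 0.5, dtype=complex)         # rho = |+><+|
print(np.round(Phi(plus).real, 3))      # diag(0.5, 0.5): coherence destroyed
print(np.round(Phi_comp(plus).real, 3)) # diag(0.5, 0.5): C copies the Z-basis value
```

On the input $\ket{+}\bra{+}$ both outputs are maximally mixed: the ancilla ends up perfectly correlated with the $Z$-basis value of $A$, which is precisely why the coherence of $A$ is destroyed.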
A quantum error-correcting code is defined by a projection $P$ on $\mathcal{H}$ such that $\mathcal{H}_{L} = P\mathcal{H} $. Given an error channel $\mathcal{E}: \mathcal{L}(\mathcal{H}) \rightarrow \mathcal{L}(\mathcal{H})$, we say that a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(P \mathcal{H})$ is correctable for $\mathcal{E}$ with respect to $P$ if there exists a channel $\mathcal{R}: \mathcal{L}(\mathcal{H}) \rightarrow \mathcal{L}(P\mathcal{H})$ such that \begin{equation} \Phi_{P} \circ \mathcal{E}^{\dagger} \circ \mathcal{R}^{\dagger}=\mathrm{id}_{\mathcal{M}}, \end{equation} where $\Phi_{P}$ is the channel associated with the projection into the code subspace $\Phi_{P}(\cdot)=P(\cdot) P$. \end{definition} \begin{definition}[private algebra, Definition 2.2~\cite{kribs2018quantum}] Let $\mathcal{H}$ be a (finite-dimensional) Hilbert space and let $P$ be a projection on $\mathcal{H}$. Given a channel $\mathcal{E}: \mathcal{L}(\mathcal{H}) \rightarrow \mathcal{L}(\mathcal{H})$, a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(P \mathcal{H})$ is private for $\mathcal{E}$ with respect to $P$ if \begin{equation} \quad \Phi_{P} \circ \mathcal{E}^{\dagger}(\mathcal{L}(\mathcal{H})) \subseteq \mathcal{M}^{\prime}=\{X \in \mathcal{L}(P \mathcal{H}) \mid[X, O]=0 \, \forall O \in \mathcal{M}\}. \end{equation} \end{definition} Correctable and private algebras are related by the following theorem. \begin{theorem}[Proposition 2.4~\cite{kribs2018quantum}] \label{thm:correctable-private} Let $\mathcal{M}$ be a subalgebra of $\mathcal{L}(P \mathcal{H})$ for some Hilbert space $\mathcal{H}$ and projection $P$. Let $\mathcal{E}$ be a channel on $\mathcal{H}$ with complementary channel $\mathcal{E}^{C}$. Then $\mathcal{M}$ is correctable for $\mathcal{E}$ with respect to $P$ if and only if $\mathcal{M}$ is private for $\mathcal{E}^{C}$ with respect to $P$.
\end{theorem} \section{A proof of a special case of the factorization lemma} \label{app:structure_lemma} We give a proof of Lemma~\ref{lemma:factorization} for the factor algebra $\mathcal{M}=\mathcal{L}(\mathcal{H}_L)$. Our proof is similar to the ones given in~\cite[Section 3.2]{almheiri2015bulk} and~\cite[Section 3.1]{harlow2017ryu} and, as the proofs given in these works, is based on a technique developed in~\cite{schumacher1996quantum} to prove that the presence of entanglement in a code is a necessary and sufficient condition for perfect quantum error correction. More specifically, \cite{schumacher1996quantum} considers a setting where the system $A$ is entangled to a reference system $R$ and shows that perfect quantum error correction (i.e. the ability to recover the logical information after the erasure of $\bar A$) is possible if and only if \begin{equation} \label{eq:entropic_correction_condition} I_{R\bar A} = S_R + S_{\bar A} - S_{R\bar A} = 0, \end{equation} where $I_{R\bar A}$ is the mutual information of the composite system ${R\bar A}$ and $S$ denotes the von Neumann entropy. Let $\ket{\phi}$ be a pure state on the $R A \bar A$ system. Then \eqref{eq:entropic_correction_condition} implies that \begin{equation} \label{eq:reduced_factorization} \rho_{R \bar A}[\phi]=\rho_{R}[\phi] \otimes \rho_{\bar A}[\phi], \end{equation} where $\rho_{R \bar A}[\phi] = \operatorname{Tr}_{A} (\ket{\phi} \bra{\phi})$, $\rho_{R}[\phi]=\operatorname{Tr}_{A \bar A} (\ket{\phi} \bra{\phi})$, and $\rho_{ \bar A}[\phi]=\operatorname{Tr}_{R A} (\ket{\phi} \bra{\phi})$ denote the reduced density matrices for the state $\ket{\phi}$, each obtained by tracing out the complementary subsystems.
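The entropic condition \eqref{eq:entropic_correction_condition} is straightforward to test numerically. The sketch below (our own illustration, with two assumed toy isometries) builds the Choi state of an encoding isometry and evaluates $I_{R\bar A}$: for $V\ket{i} = \ket{i}_1 \ket{\Phi^+}_{23}$ the erased qubit sits in a fixed Bell pair and the mutual information vanishes, while for the repetition encoding $\ket{i} \mapsto \ket{iii}$ the erased qubit carries logical $Z$ information and $I_{R\bar A} = 1$:

```python
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def ptrace(rho, dims, keep):
    """Partial trace over all subsystems not listed in `keep`."""
    n = len(dims)
    r = rho.reshape(dims + dims)
    for k, ax in enumerate(sorted(set(range(n)) - set(keep))):
        r = np.trace(r, axis1=ax - k, axis2=ax - k + r.ndim // 2)
    d = int(np.prod([dims[i] for i in keep]))
    return r.reshape(d, d)

def mutual_info(V, n_qubits, Abar):
    """I(R : Abar) for the Choi state of an isometry V: C^2 -> (C^2)^n."""
    phi = sum(np.kron(np.eye(2)[i], V[:, i]) for i in range(2)) / np.sqrt(2)
    rho = np.outer(phi, phi.conj())
    dims = [2] * (1 + n_qubits)               # register order: R, qubit 1..n
    R, Ab = [0], [1 + q for q in Abar]
    return (entropy(ptrace(rho, dims, R)) + entropy(ptrace(rho, dims, Ab))
            - entropy(ptrace(rho, dims, R + Ab)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
V1 = np.column_stack([np.kron(np.eye(2)[i], bell) for i in range(2)])  # |i>|Phi+>
V2 = np.zeros((8, 2)); V2[0, 0] = V2[7, 1] = 1.0                       # |i> -> |iii>

print(mutual_info(V1, 3, Abar=[2]))   # ~0: erasing the third qubit is correctable
print(mutual_info(V2, 3, Abar=[2]))   # ~1: the third qubit carries logical data
```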
For the proof of Lemma~\ref{lemma:factorization} when $\mathcal{M}$ is a factor algebra $\mathcal{M} = \mathcal{L}(\mathcal{H}_L)$, consider a reference system $R$ maximally entangled with $A$ (this is the Choi state for $V$) \begin{equation} \label{eq:Choi} \ket{\phi}=2^{-k / 2} \sum_{i}\ket{i}_{R} (V\ket{i})_{A\bar A}, \end{equation} where $|R| = |\mathcal{H}_L| = 2^k$. Observe that $\ket{\phi}$ is a purification of $\rho_{R \bar A}[\phi]$ on $A$. Because $\ket{\phi}$ is maximally entangled we have that $\rho_{R}[\phi] = \frac{I}{2^{k}}$ is the maximally mixed state. Therefore \eqref{eq:reduced_factorization} becomes \begin{equation} \label{eq:product_maxmixed} \rho_{R \bar A}[\phi]=\frac{I}{2^{k}} \otimes \rho_{\bar A}[\phi]. \end{equation} Let $m$ be the largest integer such that $|A| = m |R| + r$ with $0 \leq r < |R|$. Then there exists a factorization $\H_A = (\H_{A_1} \otimes \H_{A_2}) \oplus \H_{A_3}$ such that $|A_1| = |R|$, $|A_2| = m$ and $|A_3| = r$. Now define the following states: \begin{equation} \ket{\Psi}_{R A_1} = \frac{1}{2^{k / 2}} \sum_{i}\ket{i}_{R} \ket{i}_{A_1}, \quad \ket{\chi}_{A_2 \bar A} = \sum_j \sqrt{p_j} \ket{j}_{A_2} \ket{j}_{\bar A}, \end{equation} where the $p_j$ are the eigenvalues of $\rho_{\bar A}[\phi]$, and observe that the state \begin{equation} \ket{\phi ^\prime} = \ket{\Psi}_{R A_1} \otimes \ket{\chi}_{A_2 \bar A}, \end{equation} is a purification of $\rho_{R\bar A}[\phi]$ \begin{align} \operatorname{Tr}_{A_1 A_2} \left( \ket{\Psi}\bra{\Psi}_{R A_1} \otimes \ket{\chi} \bra{\chi}_{A_2 \bar A} \right) &= \operatorname{Tr}_{A_1} \left( \ket{\Psi} \bra{\Psi}_{R A_1} \right) \otimes \operatorname{Tr}_{A_2} \left( \ket{\chi} \bra{\chi}_{A_2 \bar A} \right) \\ &= \rho_{R}[\phi] \otimes \rho_{\bar A}[\phi], \end{align} where $\ket{\Psi}_{R A_1}$ purifies $\rho_R[\phi]$ in $A_1$ and $\ket{\chi}_{A_2 \bar A}$ purifies $\rho_{\bar A} [\phi]$ in $A_2$. Note that such a factorisation exists because the $R$ and $\bar A$ registers are unentangled in \eqref{eq:product_maxmixed}.
In a purification the dimension of the purifying system needs to be at least as big as the rank of the state to be purified, and therefore we have that $ |A_1| = |R| = 2^{k}$ (because $\rho_R [\phi]$ is maximally mixed) and $\operatorname{rank}(\rho_{\bar A}[\phi]) \leq |A_2|$. Because all purifications are equivalent up to unitaries performed on the purifying system ($A$, in our case) we know that there exists a unitary $U_A$ acting solely on the subsystem $A$ that maps $\ket{\phi^\prime}$ to $\ket{\phi}$. Therefore we have that \begin{equation} (U_A \otimes I_{\bar A} )V\ket{i}_{A\bar A} = \ket{i}_{A_1} \ket{\chi}_{A_2 \bar A}. \end{equation} \section{The 2x2 Bacon-Shor code} \label{app:2x2bacon-shor} \cite{cao2020approximate} presents a construction of holographic tensor networks using the 2x2 Bacon-Shor code. This four-qubit stabilizer subsystem code can be shown to have simple holographic properties. \cite{cao2020approximate} find that, via a notion of `skewing' which involves taking linear combinations of several encoding isometries, they can construct quantum error-correcting codes with a non-trivial RT formula. For ease of comparison to their work, we review some of their calculations in our language. We rederive that, while the 2x2 Bacon-Shor code is holographic, its area operator is proportional to the identity and its RT formula is trivial. This demonstrates that some notion of `skewing' is necessary to obtain non-trivial RT formulas from this code. Before we talk about the Bacon-Shor code, we derive a result about stabilizer codes in general. \begin{lemma} \label{lemma:stabm} Say $G$ is an abelian subgroup of the $n$-qubit Pauli group, defining a stabilizer code. Say $V_G : \H_L \to \H$ is an encoding isometry of this code. Say $A$ is any subset of the $n$ qubits, decomposing $\H = \H_A \otimes \H_{\bar A}$.
Then $\mathcal{M} := V_G^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A})V_G $ is a von Neumann algebra, and $(V_G,A,\mathcal{M})$ satisfy complementary recovery. \end{lemma} \begin{proof} All we need to show is that $V_G^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A})V_G $ is closed under multiplication, since the other properties of von Neumann algebras are always guaranteed. Then Theorem~\ref{thm:whatalgebra} implies complementary recovery. To do so, we observe that $\mathcal{L}(\mathbb{C}^2) = \langle X,Z \rangle_\text{vN}$, which lets us write: \begin{align} \mathcal{L}(\H_A) \otimes I_{\bar A} = \langle X_i, Z_i \text{ for } i \in A \rangle_\text{vN}. \end{align} Conjugating by $V_G^\dagger$ projects each Pauli matrix in the above set onto the code space, and then decodes it. Note that a Pauli matrix preserves the code space if and only if it commutes with the stabilizer $G$. We write: \begin{align} \mathcal{M} = V_G^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A})V_G = \left\{ V_G^\dagger P V_G \text{ for } P \in \langle X_i, Z_i \text{ for } i \in A \rangle_\text{vN} \text{ if } PG = GP \right\}. \end{align} Now all that remains to show is that the above set is closed under multiplication. Write the code space projector as $\Pi_G = V_G V_G^\dagger \in \mathcal{L}(\H)$, and consider two elements $V_G^\dagger P V_G$ and $V_G^\dagger Q V_G$ in the above set. Then, since $P$ and $\Pi_G$ commute: \begin{align} V_G^\dagger P V_G V_G^\dagger Q V_G = V_G^\dagger P \Pi_G Q V_G = V_G^\dagger\Pi_G P Q V_G = V_G^\dagger P Q V_G. \end{align} $PQ$ is in $\langle X_i, Z_i \text{ for } i \in A \rangle_\text{vN}$ since it is a von Neumann algebra by definition, and furthermore since $P$ and $Q$ both commute with $G$, so must $PQ$. So $ V_G^\dagger P Q V_G$ is also in $\mathcal{M}$. \end{proof} This establishes the fact that situations like Example~\ref{ex:badcode} cannot happen with stabilizer codes, but also gives a simple method for computing $\mathcal{M}$.
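This computation is mechanical enough to script. The following sketch (our own minimal illustration, on an assumed toy code) applies the method to the two-qubit repetition code $G = \langle Z_1 Z_2 \rangle$ with encoding $V_G\ket{i} = \ket{ii}$ and $A = \{1\}$: the only single-qubit Paulis on $A$ commuting with $G$ are $I$ and $Z_1$, which decode to the diagonal operators on $\H_L$.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1.0, -1.0])
Y = 1j * X @ Z
PAULI = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def pauli(label):
    """Tensor product of single-qubit Paulis, e.g. 'ZI' -> Z (x) I."""
    out = np.array([[1.0]])
    for c in label:
        out = np.kron(out, PAULI[c])
    return out

# Encoding isometry of the code stabilized by G = <Z1 Z2>: V|i> = |ii>.
V = np.zeros((4, 2)); V[0, 0] = V[3, 1] = 1.0
stabilizer_generators = [pauli('ZZ')]

# Keep the Paulis supported on A = {qubit 1} that commute with G, then decode.
algebra = {}
for p in 'IXYZ':
    P = pauli(p + 'I')
    if all(np.allclose(P @ S, S @ P) for S in stabilizer_generators):
        algebra[p] = V.conj().T @ P @ V

print(sorted(algebra))                 # ['I', 'Z']: the diagonal algebra survives
print(np.round(algebra['Z'].real, 3))  # decodes to logical Z = diag(1, -1)
```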
Now we discuss the 2x2 Bacon-Shor code. Recall from subsystem quantum error correction that a subsystem code is defined by a (generally non-abelian) gauge group of Pauli matrices $\mathcal{G}$. In this case: \begin{align} \mathcal{G} := \langle X_1X_2, X_3X_4, Z_1Z_3, Z_2Z_4 \rangle. \end{align} Non-abelian Pauli groups do not have a common $+1$ eigenspace. However, we can construct several abelian Pauli groups from $\mathcal{G}$. One of these is its center: \begin{align} Z_\mathcal{G} = \langle X_1X_2X_3X_4, Z_1Z_2Z_3Z_4 \rangle. \end{align} This is by definition abelian, and has an encoding isometry: \begin{align} V_{Z_\mathcal{G}} := \hspace{8mm} \begin{array}{c}\Qcircuit @C=.8em @R=.8em { & \ctrl{2} & \qw & \gate{H} & \ctrl{1} & \qw \\ & \qw & \ctrl{2} & \qw & \targ & \qw \\ \lstick{\ket{0}} & \targ & \qw & \gate{H} & \ctrl{1} & \qw \\ \lstick{\ket{0}} & \qw & \targ & \qw & \targ & \qw }\end{array} \end{align} Visibly, $Z_\mathcal{G}$ defines a quantum error-correcting code with two logical qubits. This code has a von Neumann algebra $\mathcal{M}^{Z_\mathcal{G}}$ corresponding to the subregion $A$. We can restrict this code further by selecting a `gauge': an operator $P \in \mathcal{G}$ that is not in the center, $P \not\in Z_\mathcal{G}$. Then, the abelian group $\langle P , Z_\mathcal{G} \rangle$ defines a stabilizer code with just one logical qubit. We can construct an encoding isometry for $\langle P , Z_\mathcal{G} \rangle$ by computing the two-qubit Pauli matrix $P_L := V_{Z_\mathcal{G}}^\dagger P V_{Z_\mathcal{G}}$ and then constructing a one-to-two-qubit isometry $V_{P_L}$ whose image is the $+1$ eigenspace of $P_L$, i.e. $P_L V_{P_L} = V_{P_L}$. Then the map $V_{Z_\mathcal{G}}V_{P_L}$ is an encoding isometry for $\langle P , Z_\mathcal{G} \rangle$. This code has a von Neumann algebra $\mathcal{M}^{P}$ corresponding to the bipartition $A$. \begin{example} \textbf{3-1 bipartitions of the 2x2 Bacon Shor Code.} Here we derive that for \emph{any} choice of $P$, if $A$ contains just one qubit then $L = I_L$.
We begin with analyzing the code defined by $Z_\mathcal{G}$. Since $Z_\mathcal{G}$ is symmetric to permutations of qubits, we assume without loss of generality that $A = \{1\}$, and use the method in Lemma~\ref{lemma:stabm} to compute $\mathcal{M}^{Z_\mathcal{G}}$: \begin{align} \mathcal{L}(\H_A) \otimes I_{\bar A} = \langle X_1 , Z_1\rangle_\text{vN} = \{I, X_1,Y_1,Z_1\}. \end{align} We find that none of $X_1$, $Y_1$, $Z_1$ commutes with $Z_\mathcal{G}$, so $\mathcal{M}^{Z_\mathcal{G}} = \langle I \rangle_\text{vN}$. Therefore $S(\mathcal{M}^{Z_\mathcal{G}},\rho) = 0$ for all $\rho$. Now all that remains to be done is to compute $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger))$. We define the following states for $a,b \in \{0,1\}$: \begin{align} \ket{X^a Z^b} := \left( Z^b \otimes X^a \right) \frac{\ket{00} + \ket{11}}{\sqrt{2}}. \end{align} Now we can inspect the action of $V_{Z_\mathcal{G}}$ on the computational basis for $\H_L$: \begin{align} V_{Z_\mathcal{G}} \ket{ab}_L = \ket{X^a Z^b}_{1,2} \ket{X^a Z^b}_{3,4}. \end{align} Tracing out qubits $3,4$ effectively measures $a,b$, so qubits $1,2$ are left in a probabilistic mixture of the states $\ket{X^a Z^b}$. But these states are all maximally entangled, so the reduced state on qubit $1$ is $I/2$. So $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger)) = 1$. We find that: \begin{align} \text{Tr}(\rho L) = S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger)) - S(\mathcal{M}^{Z_\mathcal{G}},\rho) = 1, \end{align} which is achieved by $L = I_L$. Now we consider any gauge $P$, defining an isometry $V_{P_L}$. Observe that $\mathcal{M}^P = V_{P_L}^\dagger \langle I \rangle_\text{vN}V_{P_L} = \langle I \rangle_\text{vN}$ remains unchanged. Furthermore, the reduced state on qubit $1$ is $I/2$ independently of $a,b$, so $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger)) = 1$ also remains unchanged if we restrict to a subspace of the code space.
Thus we also have $L = I_L$ in this situation. \end{example} We saw that we could perform an analysis of all gauges $P$ in a unified manner by instead analyzing the code stabilized by $Z_\mathcal{G}$. While for 3-1 bipartitions the RT formula was independent of $P$, it is actually dependent on $P$ for 2-2 bipartitions. Nonetheless it is helpful to consider the code stabilized by $Z_\mathcal{G}$. For the discussion below, we label the von Neumann algebras with their corresponding subregions in the subscript: for example, $\mathcal{M}^{Z_\mathcal{G}}_{1,2}$ is the algebra defined by $V_{Z_\mathcal{G}}$ and $A = \{1,2\}$. \begin{example} \textbf{2-2 bipartitions of the 2x2 Bacon Shor Code with no gauge.} We begin with $A = \{1,2\}$ and write: \begin{align} \mathcal{L}(\H_{1,2}) \otimes I_{3,4} = \langle X_1 , Z_1, X_2,Z_2\rangle_\text{vN}. \end{align} Of these, only the subalgebra $\langle X_1X_2, Z_1Z_2 \rangle_\text{vN}$ commutes with $Z_\mathcal{G}$, so: \begin{align} \mathcal{M}_{1,2}^{Z_\mathcal{G}} = \langle V_{Z_\mathcal{G}}^\dagger X_1X_2 V_{Z_\mathcal{G}}, V_{Z_\mathcal{G}}^\dagger Z_1Z_2 V_{Z_\mathcal{G}} \rangle_\text{vN} = \langle Z_1 , Z_2 \rangle_\text{vN}, \end{align} which is just the set of diagonal operators in the computational basis of $\H_L$. Now, while the code $Z_\mathcal{G}$ is symmetric with respect to qubit permutations, the encoding isometry $V_{Z_\mathcal{G}}$ is not. Since $\mathcal{M}^{Z_\mathcal{G}}_{1,2}$ is just the set of diagonal operators in the computational basis, $S(\mathcal{M}^{Z_\mathcal{G}}_{1,2},\rho)$ is the classical entropy after measuring in the computational basis. Recall the relation $ V_{Z_\mathcal{G}} \ket{ab}_L = \ket{X^a Z^b} \ket{X^a Z^b}$ from above. We see that discarding qubits $3,4$ essentially measures the first two qubits in the $\ket{X^a Z^b}$ basis, so $ S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger))$ is actually the same as $S(\mathcal{M}^{Z_\mathcal{G}}_{1,2},\rho)$.
So their difference vanishes, and $L = 0$. Now we consider $A = \{1,3\}$, which we can actually obtain from the above analysis by inserting the gate $S_{2,3}$ that swaps qubits 2 and 3. That is, $V_{Z_\mathcal{G}}$ under $A=\{1,3\}$ has the same von Neumann algebra as $S_{2,3}V_{Z_\mathcal{G}}$ under $A=\{1,2\}$. Observe that $S_{2,3}$ commutes with $Z_\mathcal{G}$, so it must implement a logical operator. With some calculation we see that $ V_{Z_\mathcal{G}}^\dagger S_{2,3} V_{Z_\mathcal{G}} = H^{\otimes 2} S_L $ where $S_L$ swaps the two logical qubits (this is done most easily by propagating $X_1,Z_1,X_2,Z_2$ through the Clifford circuit $V_{Z_\mathcal{G}}^\dagger S_{2,3} V_{Z_\mathcal{G}}$, and observing that it implements the same transformation as $H^{\otimes 2} S_L$). As a result, we see that $\mathcal{M}^{Z_\mathcal{G}}_{1,3} = H^{\otimes 2} S_L \mathcal{M}^{Z_\mathcal{G}}_{1,2} S_L H^{\otimes 2} = \langle X_1, X_2 \rangle_\text{vN} $, which is the set of diagonal operators in the $H^{\otimes 2}\ket{ab}_L$ basis. We also see that: \begin{align} V_{Z_\mathcal{G}} H^{\otimes 2}\ket{ab}_L = S_{2,3} V_{Z_\mathcal{G}}\ket{ba}_L = \ket{X^b Z^a}_{1,3} \ket{X^b Z^a}_{2,4}. \end{align} Tracing out qubits 2 and 4, just like before, measures qubits 1 and 3 in the $\ket{X^a Z^b}$ basis, and the resulting entropy is the same as $S(\mathcal{M}^{Z_\mathcal{G}}_{1,3},\rho)$, implying $L = 0$. \end{example} The code $Z_\mathcal{G}$ is symmetrical under $S_{2,3}$ so we expect the code to have the same entanglement properties for both $A = \{1,2\}$ and $A = \{1,3\}$. However, gauges will break this symmetry. To illustrate this, we analyze the gauges considered by \cite{cao2020approximate}. \begin{example} \textbf{2-2 bipartitions of the 2x2 Bacon Shor Code with fixed gauges}. First we consider $P = Z_1Z_2$ with $A = \{1,2\}$. We calculate $P_L = V_{Z_\mathcal{G}}^\dagger PV_{Z_\mathcal{G}} = Z_2 $.
This forces the second logical qubit of $V_{Z_\mathcal{G}}$ to be $\ket{0}$: we could write $V_{P_L} \ket{\psi} = \ket{\psi}\ket{0}$. The corresponding von Neumann algebra is $\mathcal{M}^{Z_1Z_2}_{1,2} = V_{P_L}^\dagger \mathcal{M}^{Z_\mathcal{G}}_{1,2} V_{P_L} = \langle Z_1 \rangle_\text{vN}$, which is again just the set of diagonal operators in the computational basis on the first qubit. The corresponding encoded states are $V_{Z_\mathcal{G}}\ket{a0}_L = \ket{X^a Z^0}_{1,2} \ket{X^a Z^0}_{3,4}$. We see that discarding qubits 3 and 4 measures qubits 1 and 2, so for the exact same reasoning as above, we have $ S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} V_{P_L} \rho V_{P_L}^\dagger V_{Z_\mathcal{G}}^\dagger )) = S( \mathcal{M}^{Z_1Z_2}_{1,2}, \rho)$ so $L = 0$. However, $A = \{1,3\}$ with $P = Z_1Z_2$ yields a different result. We find that $\mathcal{M}^{Z_1Z_2}_{1,3} = V_{P_L}^\dagger \mathcal{M}^{Z_\mathcal{G}}_{1,3} V_{P_L} = \langle X_1 \rangle_\text{vN} $, which is diagonal in the $H_1 \ket{a0}_L$ basis. $S(\mathcal{M}^{Z_1Z_2}_{1,3},\rho)$ corresponds to the entropy of the $a$ degree of freedom as measured in the computational basis. We find that: \begin{align} V_{Z_\mathcal{G}} H_1 \ket{a0}_L &= S_{2,3} V_{Z_\mathcal{G}} S_L H^{\otimes 2} H_1 \ket{a0}_L = S_{2,3} V_{Z_\mathcal{G}} H_1 S_L \ket{a0}_L = S_{2,3} V_{Z_\mathcal{G}} \ket{+a}_L\\ &= \frac{\ket{X^0 Z^a}_{1,3} \ket{X^0 Z^a}_{2,4} + \ket{X^1 Z^a}_{1,3} \ket{X^1 Z^a}_{2,4} }{\sqrt{2}}. \end{align} Now we see that tracing out qubits 2 and 4 yields two sources of entropy for the remaining qubits on $A$: one bit of entropy from the $X$ degree of freedom, and the other stemming from the measurement of $a$ in the computational basis. Thus, $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} V_{P_L} \rho V_{P_L}^\dagger V_{Z_\mathcal{G}}^\dagger)) - S(\mathcal{M}^{Z_1Z_2}_{1,3},\rho) = 1$. So $L = I_L$. We saw that for $P = Z_1 Z_2$, $A = \{1,2\}$ featured $L = 0$ and $A = \{1,3\}$ featured $L = I_L$.
Now we consider $P = X_1 X_3$: we will find that the opposite is the case, by just swapping $V_{Z_\mathcal{G}}$ with $S_{2,3}V_{Z_\mathcal{G}}$. We compute: \begin{align} V_{Z_\mathcal{G}}^\dagger S_{2,3} P S_{2,3} V_{Z_\mathcal{G}} = S_L H^{\otimes 2} V_{Z_\mathcal{G}}^\dagger P V_{Z_\mathcal{G}} H^{\otimes 2} S_L = S_L H^{\otimes 2} X_1 H^{\otimes 2} S_L = Z_2. \end{align} So we see that $P = X_1X_3$ behaves just like $Z_1 Z_2$ when qubits 2 and 3 are swapped. Thus, $A = \{1,2\}$ features $L = I_L$ and $A = \{1,3\}$ features $L = 0$. \end{example} \section{Atomic examples} \label{sec:examples} In the previous section we discussed holographic properties of an isometry $V: \H_L \to \H$, a subregion $A$ and a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\H_L)$. Together $(V,A,\mathcal{M})$ can exhibit `complementary recovery' if $\mathcal{M}$ can be recovered from $A$ and its commutant $\mathcal{M}'$ can be recovered from $\bar A$. Furthermore, $(V,A,\mathcal{M})$ are said to exhibit an `RT formula' if the following equation holds for all states $\rho$ on $\H_L$: \begin{align} S(\text{Tr}_{\bar A}(V \rho V^\dagger)) = S(\mathcal{M},\rho) + \text{Tr}(\rho L). \end{align} We established two results: First, we showed that the isometry $V$ and the subregion $A$ together uniquely determine an $\mathcal{M}$ so that $(V,A,\mathcal{M})$ have complementary recovery, and gave a simple method for calculating $\mathcal{M}$ if it exists. Second, we showed that complementary recovery implies that an RT formula holds for both $(V,A,\mathcal{M})$ and $(V,\bar A,\mathcal{M}')$. In this section we give some examples of quantum error-correcting codes that exhibit an RT formula. These examples aim to be non-trivial while using as few qubits as possible, motivating the name `atomic'.
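Before turning to the examples, complementary recovery itself can be seen at work in the smallest possible setting. The sketch below (our own illustration, using the assumed two-qubit code $V\ket{i} = \ket{ii}$ with $A$ the first qubit) has $\mathcal{M} = \mathcal{M}'$ equal to the diagonal algebra generated by logical $Z$, and checks that logical $Z$ admits a representative supported on $A$ alone as well as one supported on $\bar A$ alone:

```python
import numpy as np

Z = np.diag([1.0, -1.0])

# Two-qubit code V|i> = |ii>, with A = qubit 1 and Abar = qubit 2.
V = np.zeros((4, 2)); V[0, 0] = V[3, 1] = 1.0

Z1 = np.kron(Z, np.eye(2))   # supported on A only
Z2 = np.kron(np.eye(2), Z)   # supported on Abar only

# Logical Z is recoverable from A, and equally well from Abar:
print(np.allclose(Z1 @ V, V @ Z))   # True
print(np.allclose(Z2 @ V, V @ Z))   # True
```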
We begin with simple examples where the equation above holds in a trivial way, but then build our way up to an example that features an RT formula where every single term in the equation is nonvanishing. These toy models are a useful stepping stone toward an intuitive understanding of holography and its connection to quantum error correction. In particular, the statement of Lemma~\ref{lemma:factorization} and the proof of Theorem~\ref{thm:complementaritytoRT} made heavy use of abstract decompositions of the Hilbert spaces as well as various intermediate states. These arguments are significantly easier to understand when keeping the examples in mind. In Theorem~\ref{thm:whatalgebra} we showed that $V,A$ together determine the algebra $\mathcal{M}$. In our examples, however, we only specify the encoding isometry $V$. This is because these examples actually exhibit RT formulae for all `contiguous subregions' $A$. That is, the physical Hilbert space is to be thought of as a ring of qubits, and $A$ can only contain adjacent sets of qubits. Moreover, the isometries $V$ are sufficiently symmetrical that the RT formulae for all these different subregions $A$ are identical, provided the subregions are large enough. Combined with the fact that $(V,A,\mathcal{M})$ and $(V,\bar A, \mathcal{M}')$ have the same area operator, the analysis is thus greatly simplified. Recall from the proof of Theorem~\ref{thm:complementaritytoRT} that for any state $\rho$ we can derive $\rho_\alpha$ and $p_\alpha$ so that the algebraic entropy can be written as: \begin{align} S(\mathcal{M},\rho) = \sum_{\alpha} p_\alpha \log(p_\alpha^{-1}) + \sum_\alpha p_\alpha S(\text{Tr}_{\bar L_\alpha}( \rho_\alpha)), \end{align} which intuitively splits the entropy into a `classical term' and a `quantum term'. The classical term is indeed just the classical entropy corresponding to the probabilities $p_\alpha$, while the quantum term is a probabilistic mixture of various von Neumann entropies.
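The split into a classical and a quantum term is an identity for block-diagonal density matrices, and can be checked directly. In the sketch below (our own numerical illustration, with an assumed random pure state and $\H_L = \H_\alpha \otimes \H_i \otimes \H_j$ built from three qubits) we extract $p_\alpha$ and $\operatorname{Tr}_{\bar L_\alpha}(\rho_\alpha)$ from the state and compare the two-term expression against the entropy of the block-diagonal matrix $\bigoplus_\alpha p_\alpha \operatorname{Tr}_{\bar L_\alpha}(\rho_\alpha)$:

```python
import numpy as np

def S(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

rng = np.random.default_rng(0)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
# logical registers ordered (alpha, i, j); indices (a, i, j, a', i', j')
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)

p, rho_i = [], []
for a in range(2):
    block = rho[a, :, :, a, :, :]                        # unnormalized block a
    pa = np.trace(block.reshape(4, 4)).real
    p.append(pa)
    rho_i.append(np.trace(block, axis1=1, axis2=3) / pa) # Tr_j, renormalized

S_c = sum(-pa * np.log2(pa) for pa in p)                 # classical term
S_q = sum(pa * S(r) for pa, r in zip(p, rho_i))          # quantum term

# block-diagonal compression  rho_M = (+)_a  p_a * rho_i[a]
rho_M = np.zeros((4, 4), dtype=complex)
for a in range(2):
    rho_M[2*a:2*a+2, 2*a:2*a+2] = p[a] * rho_i[a]

print(np.isclose(S(rho_M), S_c + S_q))                   # True: the two sides agree
```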
Substituting this expansion into the RT Formula, we obtain an equation with four terms. We name the first three $S_A$ after the subregion $A$, $S_c$ for `classical' and $S_q$ for `quantum': \begin{align} \underbrace{S(\text{Tr}_{\bar A}(V \rho V^\dagger))}_{S_A} = \underbrace{\sum_{\alpha} p_\alpha \log(p_\alpha^{-1})}_{S_c} + \underbrace{\sum_\alpha p_\alpha S(\text{Tr}_{\bar L_\alpha}( \rho_\alpha))}_{S_q} + \text{Tr}(\rho L). \label{eqn:expandedrt} \end{align} The structure of this section is as follows: we begin with three examples where only one of the terms $S_c, S_q$ and $\text{Tr}(\rho L)$ is nonzero. Then we give three examples where exactly two terms are nonvanishing. Then, finally, we give one example where all three terms appear. The definitions of all the isometries $V$ are summarized in Figure~\ref{fig:circuits}. \begin{figure}[t] \centering \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=.8em { \lstick{\alpha} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw }$$ \caption{$S_A = S_\text{c} + \cancel{S_\text{q}} + \cancel{\text{Tr}(\rho L)} $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=.8em { \lstick{\ket{+}} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw }$$ \caption{$S_A = \cancel{S_\text{c}} + \cancel{S_\text{q}} + \text{Tr}(\rho L) $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.6em @R=1.3em { \lstick{i} & \qw\\ \lstick{j} & \qw }$$ \caption{$S_A = \cancel{S_\text{c}} + S_\text{q} + \cancel{\text{Tr}(\rho L)} $} \end{subfigure} \vspace{5mm} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.1em { \lstick{\alpha} & \ctrl{2} & \qw \\ \lstick{i} & \qw & \qw & \qw \\ \lstick{\ket{0}} & \targ & \qw \\ \lstick{j} & \qw & \qw }$$ \caption{$S_A = S_\text{c} + S_\text{q} + \cancel{\text{Tr}(\rho L)} $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.2em { \lstick{\alpha} & \ctrl{2} & \ctrl{1} & \qw \\ \lstick{\ket{+}} & \qw & \ctrl{2} & \qw 
\\ \lstick{\ket{0}} & \targ & \qw & \qw \\ \lstick{\ket{0}} & \qw & \targ & \qw }$$ \caption{$S_A = S_\text{c} + \cancel{S_\text{q}} + \text{Tr}(\rho L) $} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.3em { \lstick{i} & \qw & \qw \\ \lstick{\ket{+}} & \ctrl{2} & \qw \\ \lstick{j} & \qw & \qw \\ \lstick{\ket{0}} & \targ & \qw }$$ \caption{$S_A = \cancel{S_\text{c}} + S_\text{q} + \text{Tr}(\rho L) $} \end{subfigure} \vspace{5mm} \begin{subfigure}[b]{0.3\textwidth} $$ \Qcircuit @C=0.3em @R=1.3em { \lstick{\alpha} & \ctrl{3} & \ctrl{2} & \qw \\ \lstick{i} & \qw & \qw & \qw \\ \lstick{\ket{+}} & \qw & \ctrl{3} & \qw \\ \lstick{\ket{0}} & \targ & \qw & \qw\\ \lstick{j} & \qw & \qw & \qw \\ \lstick{\ket{0}} & \qw & \targ & \qw }$$ \caption{$S_A = S_\text{c} + S_\text{q} + \text{Tr}(\rho L) $} \end{subfigure} \caption{\label{fig:circuits}Examples of encoding isometries $V$ considered in this section. All of these exhibit an RT formula as in (\ref{eqn:expandedrt}), but various terms vanish as shown. The logical Hilbert space $\H_L$ always factors into $\H_\alpha \otimes \H_i \otimes \H_{j}$, with the input qubits marked as such.} \end{figure} \subsection{Codes with one term} We specify all the isometries in terms of quantum circuits, which makes many of the non-trivial Hilbert space decompositions in Lemma~\ref{lemma:factorization} and Theorem~\ref{thm:complementaritytoRT} much simpler to understand. In particular, recall from Lemma~\ref{lemma:factorization} that $\mathcal{M}$ induces a decomposition on $\H_L$ of the form: \begin{align} \H_L = \bigoplus_\alpha \left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha}\right). \end{align} This very general form of a decomposition accounts for the fact that the dimensions of $\H_{L_\alpha}$ and $\H_{\bar L_\alpha}$ may vary depending on $\alpha$. 
This will not be the case for these examples, so we can simply remove the $\alpha$ dependence, relabeling $\H_{L_\alpha} \to \H_i$ and $\H_{\bar L_\alpha} \to \H_j$, and write: \begin{align} \H_L = \H_\alpha \otimes \H_i \otimes \H_{j}. \end{align} Each of the degrees of freedom $\alpha,i$ and $j$ is then simply encoded by the corresponding qubit, which is labeled as such on the left side of the circuit. Here, the block label $\alpha$ of the decomposition is carried by its own Hilbert space factor $\H_\alpha$. \begin{atomicexample} \label{ex:c} We begin with an example where the RT formula is simply $S_A = S_c$: \begin{align} V_{(a)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=.8em { \lstick{\alpha} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw} \end{array} \end{align} Without loss of generality we pick $\H_A$ to be the first qubit and $\H_{\bar A}$ to be the second qubit. Intuitively, when $\H_{\bar A}$ is traced out, the qubit in $\H_A$ acts as if it had been measured in the computational basis. The probabilities of the two outcomes $p_0$ and $p_1$ are a classical probability distribution. Following Theorem~\ref{thm:whatalgebra}, we compute $V^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A}) V$ to obtain $\mathcal{M}$. A general element $O \in \mathcal{L}(\H_A) \otimes I_{\bar A}$ can be expanded in Pauli matrices: \begin{align} O &= \alpha (I\otimes I) + \beta (X \otimes I) + \gamma (Y \otimes I) + \delta( Z \otimes I)\\ V^\dagger OV &= \alpha I + \delta Z. \end{align} So $\mathcal{M}$ is indeed a von Neumann algebra: the set of diagonal operators on $\H_\alpha$. This means that observables in $\mathcal{M}$ cannot really distinguish superpositions over different $\alpha$ from classical probability distributions over $\alpha$, since $\rho_\mathcal{M}$ is also diagonal. So the algebraic entropy $S(\mathcal{M},\rho)$ is also entirely classical. Notice however that $\mathcal{M}$ is its own center, and is not trivial!
So we see that a von Neumann algebra with a non-trivial center can still have a trivial area operator $L = 0$. \end{atomicexample} \begin{atomicexample} \label{ex:l}Next, we consider an isometry where $S_A = \text{Tr}(\rho L)$. In this case the logical Hilbert space $\H_L$ is one dimensional: there are no logical qubits. We can still define a density matrix, though: the $1 \times 1$ matrix $\rho = 1$. \begin{align} V_{(b)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=.8em { \lstick{\ket{+}} & \ctrl{1} & \qw\\ \lstick{\ket{0}}& \targ & \qw }\end{array} \end{align} $V_{(b)}$ simply prepares a Bell state, so $S_A$ is simply the constant 1. Furthermore, $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A}) V $ is just the set of scalars, so $S(\mathcal{M},\rho)$ vanishes. We can thus achieve $S_A = 1= \text{Tr}(\rho L)$ by making the area operator the $1 \times 1$ matrix $L = 1$. This is consistent with the fact that $\mathcal{M}$, being the set of scalars, has a trivial center. \end{atomicexample} \begin{atomicexample} \label{ex:q} Third, we consider an isometry with only a quantum part: $S_A = S_q$. In this case $\H_L $ and $\H$ are both two qubits, and $V$ is the identity. \begin{align} V_{(c)} := \hspace{3mm}\begin{array}{c}\Qcircuit @C=0.6em @R=1.3em { \lstick{i} & \qw\\ \lstick{j} & \qw }\end{array} \end{align} We see that $\H_i = \H_A$ and $\H_j = \H_{\bar A}$, so $S_A = S(\text{Tr}_j(\rho))$. We also have that $\mathcal{M} = \mathcal{L}(\H_i) \otimes I_j$, which is a factor, so the associated Hilbert space decomposition has only one block, $\alpha = 0$. This makes the distribution over blocks trivial with $p_0 = 1$, so the classical part of $S(\mathcal{M},\rho)$ vanishes and only the quantum component remains. $\mathcal{M}$ is a factor, so it has a trivial center, consistent with $L = 0$. \end{atomicexample} Admittedly, the above examples are rather trivial, since each features only one term in the RT formula.
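These three cases are small enough to verify by direct computation. The following sketch (our own check, using assumed input states) computes $S_A = S(\operatorname{Tr}_{\bar A}(V\rho V^\dagger))$ for the isometries $V_{(a)}$, $V_{(b)}$, $V_{(c)}$ of Figure~\ref{fig:circuits} and reproduces $S_A = S_c$, $S_A = \operatorname{Tr}(\rho L) = 1$ and $S_A = S_q$ respectively:

```python
import numpy as np

def S(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def S_A(V, rho, dA, dAbar):
    """Entropy of Tr_Abar(V rho V^dagger), with H = H_A (x) H_Abar."""
    out = (V @ rho @ V.conj().T).reshape(dA, dAbar, dA, dAbar)
    return S(np.trace(out, axis1=1, axis2=3))

# circuit (a): V|alpha> = CNOT(|alpha>|0>) = |alpha, alpha>
Va = np.zeros((4, 2)); Va[0, 0] = Va[3, 1] = 1.0
# circuit (b): no logical input; V maps the scalar 1 to a Bell state
Vb = (np.array([1, 0, 0, 1]) / np.sqrt(2)).reshape(4, 1)
# circuit (c): the identity on two qubits
Vc = np.eye(4)

rho1 = np.array([[0.75, 0.25], [0.25, 0.25]])   # an assumed logical state
print(S_A(Va, rho1, 2, 2))        # S_c: entropy of the diagonal (0.75, 0.25)
print(S_A(Vb, np.eye(1), 2, 2))   # Tr(rho L) = 1 for the Bell pair
rho2 = np.kron(rho1, np.eye(2) / 2)
print(S_A(Vc, rho2, 2, 2))        # S_q = S(Tr_j rho) = S(rho1)
```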
However, they are the fundamental building blocks for codes with more complicated RT formulae. \subsection{Codes with two terms} Now we move on to RT formulae with two non-trivial terms. These allow us to make some of the steps in the proof of Theorem~\ref{thm:complementaritytoRT} more explicit. In particular, the proof involved further decomposition of $\H_{A}$ and $\H_{\bar A}$ into: \begin{align} \H_{A} = \bigoplus_\alpha \left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3} \hspace{10mm} \H_{\bar A} = \bigoplus_\alpha \left( \H_{\bar A^\alpha_1} \otimes \H_{\bar A^\alpha_2} \right) \oplus \H_{\bar A_3}. \end{align} As with $ \H_L = \bigoplus_\alpha \left( \H_{L_\alpha} \otimes \H_{\bar{L}_\alpha}\right)$, the $\alpha$ dependence allows the different blocks enumerated by $\alpha$ to have varying dimension. This will not be the case for our examples. Furthermore, the extra summand $\H_{A_3}$ allows the decomposition to factorize only the image of $V$ inside $\H_A$. In our case it is actually easier to just factor all of $\H_A$ and $\H_{\bar A}$ directly. \begin{align} \H_{A} = \H_{A_\alpha} \otimes \H_{A_1} \otimes \H_{A_2} \hspace{10mm} \H_{\bar A} = \H_{\bar A_\alpha} \otimes \H_{\bar A_1} \otimes \H_{\bar A_2}. \end{align} As with $\H_L$, which block $\alpha$ of the decomposition we are in actually factors out onto its own qubit $\H_{A_\alpha}$ or $\H_{\bar A_\alpha}$. The fact that $\alpha$ is visible from both sides of the bipartition is what lends it its classical behavior. In our circuits we now label the right side with the associated decomposition of $\H_{A}$ and $\H_{\bar A}$ as well.
In Theorem~\ref{thm:complementaritytoRT}, the purpose of the decomposition of $\H_A$ and $\H_{\bar A}$ was to show that there exist unitaries $U_A$ and $U_{\bar A}$ that bring the states $V\ket{\alpha,i,j}$ into a particular form, specifically that of equation (\ref{eqn:twosideddecomp}): \begin{align} (U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A_\alpha A_1} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_\alpha\bar A_1}. \label{eqn:twosideddecomp_again} \end{align} Then, the entanglement entropies of the $\ket{\chi_\alpha}$ states across $A_2\bar A_2$ yield the eigenvalues of the area operator. In our examples $U_A$ and $U_{\bar A}$ will just be the identity. \begin{atomicexample} \label{ex:cq} The following code has the RT formula $S_A = S_c + S_q$. This is the first example where all three components of $\H_L = \H_\alpha \otimes \H_i \otimes \H_j$ are two-dimensional. Below we have selected $\H_A$ as the first two qubits and $\H_{\bar A}$ as the last two qubits. However, other choices of $A$ will have the same formula provided $A$ is a pair of adjacent qubits. \begin{align} V_{(d)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.1em { \lstick{\alpha} & \ctrl{2} & \qw & \rstick{A_\alpha} \\ \lstick{i} & \qw & \qw & \qw & \rstick{A_1} \\ \lstick{\ket{0}} & \targ & \qw & \rstick{\bar A_\alpha} \\ \lstick{j} & \qw & \qw & \rstick{\bar A_1} }\end{array} \end{align} We begin by computing $\mathcal{M}$: we see how, just as in Example~\ref{ex:q}, $\H_A$ has full access to $\H_i$, and for the same calculation as in Example~\ref{ex:c}, $\H_A$ has access to diagonal operators on $\H_\alpha$. On the other hand, it must act like the identity on $\H_j$. $Z_\mathcal{M}$ acts like the identity on $\H_i,\H_j$, but can act non-trivially on $\H_\alpha$. So we cannot rule out a trivial RT formula yet. 
At this point we see that a basis $\{\ket{\alpha,i,j}\}$ for $\H_L$ that `lines up with $\mathcal{M}$' as in Lemma~\ref{lemma:factorization} is actually just the computational basis on $\H_L$. We can just write $\ket{\alpha,i,j} = \ket{\alpha}\ket{i}\ket{j}$. Considering such a state we see that: \begin{align} V_{(d)} \ket{\alpha,i,j} = \ket{\alpha}_{A_\alpha}\ket{i}_{A_1}\ket{\alpha}_{\bar A_\alpha} \ket{j}_{\bar A_1}. \end{align} Since this state already splits so cleanly into states $\ket{\psi_{\alpha,i}}_{A_\alpha A_1} = \ket{\alpha}_{A_\alpha}\ket{i}_{A_1}$ and $\ket{\bar \psi_{\alpha,j}}_{\bar A_\alpha \bar A_1} = \ket{\alpha}_{\bar A_\alpha}\ket{j}_{\bar A_1}$, we can simply select $U_A$ and $U_{\bar A}$ to be the identity. The only thing missing from equation (\ref{eqn:twosideddecomp_again}) is the $\ket{\chi_\alpha}$ on $A_2,\bar A_2$. Both of the other contributions, however, are present. The $\alpha$ degree of freedom is visible from $\bar A$, and therefore acts like it has been measured from $A$'s perspective. The $\H_i \otimes \H_j$ register might be entangled with $\H_\alpha$, so after the measurement it will collapse to one of the $\rho_\alpha$ states from the decomposition $\rho_\mathcal{M} = \sum_\alpha p_\alpha \rho_\alpha$. The quantum term of the entropy is then the associated probabilistic mixture of the von Neumann entropy of $\rho_\alpha$ reduced to $\H_i$. Writing out the full formula: \begin{align} \underbrace{S(\text{Tr}_{\bar A}(V \rho V^\dagger))}_{S_A} = \underbrace{\sum_{\alpha} p_\alpha \log(p_\alpha^{-1})}_{S_c} + \underbrace{\sum_\alpha p_\alpha S(\text{Tr}_{j}( \rho_\alpha))}_{S_q}. \end{align} Just as in Example~\ref{ex:c}, $\mathcal{M}$ has a non-trivial center $Z_\mathcal{M}$, but we still have $L = 0$. \end{atomicexample} \begin{atomicexample} \label{ex:cl} Next we consider a code with a classical term and an area term, but no quantum term: $S_A = S_c + \text{Tr}(\rho L)$. This is the first code where $L$ is not proportional to the identity.
\begin{align} V_{(e)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.2em { \lstick{\alpha} & \ctrl{2} & \ctrl{1} & \qw & \rstick{A_\alpha} \\ \lstick{\ket{+}} & \qw & \ctrl{2} & \qw & \rstick{A_2} \\ \lstick{\ket{0}} & \targ & \qw & \qw & \rstick{\bar A_\alpha} \\ \lstick{\ket{0}} & \qw & \targ & \qw & \rstick{\bar A_2} }\end{array} \end{align} The von Neumann algebra $\mathcal{M}$, due to a similar calculation as in Example~\ref{ex:c}, is again just the set of diagonal operators on $\H_L = \H_\alpha$. The algebraic entropy $S(\mathcal{M},\rho)$ is then again the classical entropy of the probability distribution $\{p_\alpha\}$. Since there are multiple superselection sectors corresponding to different $\alpha$, we do not have a trivial center. For this example, the entropy $S_A$ can actually be computed explicitly for a logical pure state $\beta_0\ket{0} + \beta_1\ket{1}$ where $p_\alpha = |\beta_\alpha|^2$. The circuit conditionally prepares a Bell state depending on the value of $\alpha$: \begin{align} V_{(e)} ( \beta_0\ket{0} + \beta_1\ket{1}) &= \beta_0 \ket{0\text{+}00} + \beta_1 \frac{ \ket{1010} +\ket{1111}}{\sqrt{2}}\\ \text{Tr}_{\bar A}( V_{(e)}\rho V_{(e)}^\dagger ) &= |\beta_0|^2 \ket{0}\bra{0}_{A_\alpha} \otimes \ket{+}\bra{+}_{A_2} + |\beta_1|^2 \ket{1}\bra{1}_{A_\alpha} \otimes \frac{I_{A_2}}{2} \\ S_A = S(\text{Tr}_{\bar A}( V_{(e)}\rho V_{(e)}^\dagger )) &= \left[|\beta_0|^2\log(|\beta_0|^{-2}) + |\beta_1|^2\log(|\beta_1|^{-2})\right]\\ &+ \left[ |\beta_0|^2 S( \ket{+}\bra{+} ) + |\beta_1|^2 S( I/2) \right]\\ &= \sum_\alpha p_\alpha \log(p_\alpha^{-1}) + \text{Tr}\left( \rho \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix} \right). \end{align} So we have explicitly derived an area operator $L = \ket{1}\bra{1}$.
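This derivation can be confirmed numerically. The sketch below (our own illustration, using NumPy) builds the encoded state directly from the expansion of $V_{(e)}(\beta_0\ket{0} + \beta_1\ket{1})$ above and compares $S_A$ against $S_c + \text{Tr}(\rho L)$:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def ket(bits):
    """Computational basis state; leftmost character is the top wire."""
    v = np.zeros(2 ** len(bits), dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

p0, p1 = 0.2, 0.8
b0, b1 = np.sqrt(p0), np.sqrt(p1)

# V_(e)(b0|0> + b1|1>) = b0|0+00> + b1 (|1010> + |1111>)/sqrt(2)
psi = (b0 * (ket('0000') + ket('0100')) / np.sqrt(2)
       + b1 * (ket('1010') + ket('1111')) / np.sqrt(2))

m = psi.reshape(4, 4)               # A = first two qubits, bar A = last two
S_A = entropy(m @ m.conj().T)

S_c = -(p0 * np.log2(p0) + p1 * np.log2(p1))
area = p1                           # Tr(rho L) with L = |1><1|
print(S_A, S_c + area)              # the two values agree
```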
Also worth noting is that equation (\ref{eqn:twosideddecomp_again}) is now almost fully rendered out: while $A_1$ and $\bar A_1$ are missing, we now have: \begin{align} V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A_\alpha} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_\alpha}, \end{align} where $\ket{\psi_{\alpha,i}} = \ket{\bar\psi_{\alpha,j}} = \ket{\alpha}$ and $\ket{\chi_0} = \ket{+}\ket{0}$ and $\ket{\chi_1}$ is a Bell state. We see that $L = \sum_\alpha S(\text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha})) \cdot I_\alpha$ matches what we derived above. \end{atomicexample} \begin{atomicexample} \label{ex:ql} Now we consider a code with a quantum term and an area term, but no classical term: $S_A = S_q + \text{Tr}(\rho L)$. This code actually features an area operator proportional to the identity again: the $\alpha$ degree of freedom determines $\ket{\chi_\alpha}$, whose entanglement in turn determines the area. But since there is only one $\alpha$, we have a trivial center and there can be no superposition over areas. \begin{align} V_{(f)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.2em { \lstick{i} & \qw & \qw & \rstick{A_1} \\ \lstick{\ket{+}} & \ctrl{2} & \qw & \rstick{A_2} \\ \lstick{j} & \qw & \qw & \rstick{\bar A_1} \\ \lstick{\ket{0}} & \targ & \qw & \rstick{\bar A_2} }\end{array} \end{align} Similarly to Example~\ref{ex:q}, we have $\H_i = \H_{A_1}$ and $\H_j = \H_{\bar A_1}$, and $\mathcal{M} = \mathcal{L}(\H_i) \otimes I_j$. Since $\mathcal{M}$ is a factor, the only contribution to $S(\mathcal{M},\rho)$ is the entropy of the reduced state on $\H_i$, that is, $S(\text{Tr}_j{\rho})$. There is only one value of $\alpha$. However, the entropy $S_A$ now features two contributions: the entropy of the state on $\H_i$ visible from $\H_{A_1}$, and the entropy of the Bell state across $\H_{A_2}\otimes \H_{\bar A_2}$. 
We can see this from the form of $V_{(f)}\ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A_1} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_1}$ where $\ket{\psi_{\alpha,i}} = \ket{i}$, $\ket{\bar\psi_{\alpha,j}} = \ket{j}$ and $\ket{\chi_\alpha}$ is a Bell state. As a result, $S_A - S_q = 1$, so we achieve $S_A = S_q + \text{Tr}(\rho L)$ by setting $L = I$. \end{atomicexample} As we have seen, combining two of the primitive circuits from Examples~\ref{ex:c}, \ref{ex:l}, and~\ref{ex:q} already produces non-trivial results, including states of the form $\ket{\alpha,i,j}$ in Example~\ref{ex:cq} and area operators not proportional to the identity in Example~\ref{ex:cl}. Of particular importance in Example~\ref{ex:cl} was the conditional preparation of a Bell state based on $\alpha$. This caused the different $\ket{\chi_\alpha}$ states to exhibit varying amounts of entanglement, each becoming a different eigenvalue of $L$. \subsection{A complete example} To finish the section, we give a final example that features all three terms of the RT formula, and makes both of the decompositions $\H_L = \H_\alpha \otimes \H_i \otimes \H_j$ and $\H_A = \H_{A_\alpha} \otimes \H_{A_1} \otimes \H_{A_2}$ completely non-trivial. \begin{atomicexample} \label{ex:cql} This six-qubit code's RT formula has all three terms on the right-hand side: $S_A = S_c + S_q + \text{Tr}(\rho L)$. We consider the subregion $A$ to be the first three qubits, but the same RT formula holds for any choice of $A$ that is three adjacent qubits. Smaller or larger $A$ will exhibit a simpler RT formula, similar to those from the previous examples.
\begin{align} V_{(g)} := \hspace{8mm}\begin{array}{c}\Qcircuit @C=0.3em @R=1.3em { \lstick{\alpha} & \ctrl{3} & \ctrl{2} & \qw & \rstick{A_\alpha}\\ \lstick{i} & \qw & \qw & \qw & \rstick{A_1} \\ \lstick{\ket{+}} & \qw & \ctrl{3} & \qw & \rstick{A_2} \\ \lstick{\ket{0}} & \targ & \qw & \qw & \rstick{\bar A_\alpha} \\ \lstick{j} & \qw & \qw & \qw & \rstick{\bar A_1} \\ \lstick{\ket{0}} & \qw & \targ & \qw & \rstick{\bar A_2} }\end{array} \end{align} Similarly to Example~\ref{ex:cq}, $\H_A$ has full access to $\H_i$ via $\H_{A_1}$, as well as access to the diagonal operators on $\H_{\alpha}$ via $\H_{A_\alpha}$ from the calculation in Example~\ref{ex:c}, and no access to $\H_j$. Therefore, the basis $\ket{\alpha,i,j}$ is just the computational basis on the three logical qubits with $\ket{\alpha,i,j} = \ket{\alpha}\ket{i}\ket{j}$. If we apply the isometry $V_{(g)}$ to such a basis state we get the full equation (\ref{eqn:twosideddecomp_again}): \begin{align} V_{(g)} \ket{\alpha,i,j} &= \ket{\psi_{\alpha,i}}_{A_\alpha A_1} \otimes \ket{\chi_{\alpha}}_{A_2 \bar A_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A_\alpha \bar A_1}, \\[2mm] \ket{\psi_{\alpha,i}}_{A_\alpha A_1} &= \ket{\alpha}_{A_\alpha}\ket{i}_{A_1}, \hspace{15mm} \ket{\bar \psi_{\alpha,j}}_{\bar A_\alpha \bar A_1} = \ket{\alpha}_{\bar A_\alpha}\ket{j}_{\bar A_1}, \\ \ket{\chi_0}_{A_2 \bar A_2} &= \ket{+}_{A_2}\ket{0}_{\bar A_2}, \hspace{14mm} \ket{\chi_1}_{A_2 \bar A_2} = \frac{\ket{00}_{A_2\bar A_2}+\ket{11}_{A_2\bar A_2} }{\sqrt{2}}. \end{align} As in Example~\ref{ex:cl}, we conditionally prepare a Bell state on $\H_{A_2}\otimes \H_{\bar A_2}$, so following the same calculation we see that the area operator is $L = \ket{1}\bra{1}$. Additionally, this example features the $\H_{A_1}$ and $\H_{\bar A_1}$ spaces corresponding to $\H_i$ and $\H_j$, contributing a quantum term to the RT formula as in Example~\ref{ex:cq}.
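All three terms can be checked at once numerically. The sketch below (our own illustration, using NumPy) applies the two gates of $V_{(g)}$ to an encoded state with non-trivial weights $p_\alpha$ and entanglement between $i$ and $j$:

```python
import numpy as np
from functools import reduce

N = 6  # physical qubits; wire 0 is the top wire (most significant bit)

def apply_cx(psi, controls, target):
    """(Multi-)controlled X on an N-qubit state vector."""
    out = psi.copy()
    for idx in range(len(psi)):
        if all((idx >> (N - 1 - c)) & 1 for c in controls):
            out[idx ^ (1 << (N - 1 - target))] = psi[idx]
    return out

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

H2 = lambda p: float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

k0 = np.array([1, 0], dtype=complex)
k1 = np.array([0, 1], dtype=complex)
plus = (k0 + k1) / np.sqrt(2)

def inp(a, i, j):
    """Circuit input |a, i, +, 0, j, 0> for logical basis state |a,i,j>."""
    return reduce(np.kron, [[k0, k1][a], [k0, k1][i], plus, k0, [k0, k1][j], k0])

p1, q1 = 0.8, 0.6   # weight of alpha = 1; weight of |11> on the ij register
# logical state (sqrt(1-p1)|0> + sqrt(p1)|1>)_alpha (x) (sqrt(1-q1)|00> + sqrt(q1)|11>)_ij
psi = sum(np.sqrt([1 - p1, p1][a]) * np.sqrt([1 - q1, q1][i]) * inp(a, i, i)
          for a in (0, 1) for i in (0, 1))

psi = apply_cx(psi, [0], 3)       # CNOT: copy alpha onto bar A_alpha
psi = apply_cx(psi, [0, 2], 5)    # Toffoli: conditionally prepare the Bell pair

m = psi.reshape(8, 8)             # A = first three qubits
S_A = entropy(m @ m.conj().T)
print(S_A, H2(p1) + H2(q1) + p1 * 1.0)   # S_c + S_q + Tr(rho L); equal
```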
\end{atomicexample} These circuits seem to be the smallest examples of qubit quantum error-correcting codes to exhibit interesting holographic properties. However, there are some ideas that these circuits still oversimplify. First, the factorizations $\H_L = \H_\alpha \otimes \H_i \otimes \H_j$ and $\H_A = \H_{A_\alpha} \otimes \H_{A_1} \otimes \H_{A_2}$ are a significant simplification of the decompositions $\H_L = \bigoplus_\alpha\left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha} \right)$ and $\H_A = \bigoplus_\alpha \left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3}$ respectively. Not only are all the dimensions of $\H_{L_\alpha},\H_{\bar L_\alpha}, \H_{A^\alpha_1}, \H_{A^\alpha_2}$ independent of $\alpha$, but consequently the $\alpha$ degree of freedom neatly factors out onto a separate qubit. This is a highly non-generic feature for von Neumann algebras: while $\alpha$ can be measured via a projective measurement, it usually does not factor to its own degree of freedom like this. Second, none of these examples exhibit the `radial commutativity' discussed in Subsection \ref{sub:radial}. In holography, operators acting on a single point at the boundary do not have access to any bulk degrees of freedom and must therefore commute with all bulk operators. In a finite-dimensional analogy, \cite{harlow2017ryu} constructed a three-qutrit code where operators acting on any single qutrit must commute with the logical operators of the code. However, the codes presented here do not have this property. In Example~\ref{ex:cql}, access to the physical qubit labeled $A_1$ already gives full access to the $\H_i$ factor of the logical Hilbert space. One method for remedying this could be to encode each of the physical qubits of Example~\ref{ex:cql} into another quantum error-correcting code that protects against single-qubit erasures.
\section{Complementarity of private and correctable algebras} \label{app:privacy} Here we give a brief overview of the main result of~\cite{crann2016private} closely following the simpler finite-dimensional presentation given in~\cite{kribs2018quantum}. A quantum channel $\Phi: \mathcal{L}(\mathcal{H}_A) \rightarrow \mathcal{L}(\mathcal{H}_A)$ is a completely positive, trace-preserving map between two spaces of linear operators. The dual map $\Phi^{\dagger}$ of a quantum channel $\Phi$ is defined via the trace inner product $\operatorname{Tr}(\Phi(\rho) X)=\operatorname{Tr}\left(\rho \Phi^{\dagger}(X)\right)$. Using the Stinespring dilation theorem we can express any quantum channel $\Phi$ in terms of its action on an auxiliary Hilbert space $\mathcal{H}_C$ (with $|\mathcal{H}_C|\leq |\mathcal{H}_A|^2$). In particular, there exist a state $\left|\psi_{C}\right\rangle \in \mathcal{H}_{C}$ and a unitary $U$ on $\mathcal{H}_{A} \otimes \mathcal{H}_{C}$ such that for all $\rho \in \mathcal{L}\left(\mathcal{H}_{A}\right)$, \begin{equation} \Phi(\rho)=\operatorname{Tr}_{C} \circ \, \mathcal{U}\left(\rho \otimes\left|\psi_{C}\right\rangle\left\langle\psi_{C}\right|\right)=\operatorname{Tr}_{C} \circ \mathcal{V}(\rho), \end{equation} where $\operatorname{Tr}_{C}$ denotes the partial trace map from $\mathcal{L}\left(\mathcal{H}_{A} \otimes \mathcal{H}_{C}\right)$ to $\mathcal{L}\left(\mathcal{H}_{A}\right)$, the map $\mathcal{U}(\cdot)=U(\cdot) U^{*}$, and $\mathcal{V}(\cdot)=V(\cdot) V^{*}$ is the map implemented by the isometry $V: \mathcal{H}_{A} \rightarrow \mathcal{H}_{A} \otimes \mathcal{H}_{C}$ defined by $V|\psi\rangle=U\left(|\psi\rangle \otimes\left|\psi_{C}\right\rangle\right)$. The Stinespring dilation theorem allows us to define a notion of complementarity for quantum channels.
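As a concrete illustration of the dilation (our own sketch, with an arbitrarily chosen dephasing channel), one can build an isometry from a set of Kraus operators as $V\ket{\psi} = \sum_k K_k\ket{\psi}\otimes\ket{k}_C$ and check that tracing out $\mathcal{H}_C$ recovers $\Phi$, while tracing out $\mathcal{H}_A$ instead gives the map onto the environment:

```python
import numpy as np

# Kraus operators of a qubit dephasing channel (an illustrative choice)
p = 0.25
K = [np.sqrt(1 - p) * np.eye(2, dtype=complex),
     np.sqrt(p) * np.diag([1.0 + 0j, -1.0])]

# Stinespring isometry V: H_A -> H_A (x) H_C, with V|psi> = sum_k K_k|psi> (x) |k>_C
dC = len(K)
V = np.zeros((2 * dC, 2), dtype=complex)
for k, Kk in enumerate(K):
    e_k = np.zeros((dC, 1)); e_k[k, 0] = 1.0
    V += np.kron(Kk, e_k)

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # a test state
big = (V @ rho @ V.conj().T).reshape(2, dC, 2, dC)

Phi_rho   = np.trace(big, axis1=1, axis2=3)   # Tr_C V rho V^dagger
Phi_C_rho = np.trace(big, axis1=0, axis2=2)   # Tr_A V rho V^dagger

direct = sum(Kk @ rho @ Kk.conj().T for Kk in K)
print(np.allclose(Phi_rho, direct), np.allclose(V.conj().T @ V, np.eye(2)))
```

The first check confirms that the dilation reproduces the Kraus form of the channel; the second confirms that $V$ is an isometry, $V^\dagger V = I$.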
\begin{definition}[complementary map] Given a quantum channel $\Phi$, the complementary map from $\mathcal{L}\left(\mathcal{H}_{A}\right)$ to $\mathcal{L}\left(\mathcal{H}_{C}\right)$ is \begin{equation} \Phi^{C}(\rho)=\operatorname{Tr}_{A} \circ \mathcal{V}(\rho). \end{equation} \end{definition} Equipped with these notions we proceed to define correctable and private algebras. \begin{definition}[correctable algebra, Definition 2.1~\cite{kribs2018quantum}] Let $\mathcal{H}$ be a finite-dimensional Hilbert space (the physical space). A quantum error-correcting code is defined by a projection $P$ on $\mathcal{H}$ such that $\mathcal{H}_{L} = P\mathcal{H}$. Given an error channel $\mathcal{E}: \mathcal{L}(\mathcal{H}) \rightarrow \mathcal{L}(\mathcal{H})$, we say that a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(P \mathcal{H})$ is correctable for $\mathcal{E}$ with respect to $P$ if there exists a channel $\mathcal{R}: \mathcal{L}\left(\mathcal{H}\right) \rightarrow \mathcal{M}$ such that \begin{equation} \Phi_{P} \circ \mathcal{E}^{\dagger} \circ \mathcal{R}^{\dagger}=\operatorname{id}_{\mathcal{M}}, \end{equation} where $\Phi_{P}$ is the channel associated with the projection into the code subspace $\Phi_{P}(\cdot)=P(\cdot) P$. \end{definition} \begin{definition}[private algebra, Definition 2.2~\cite{kribs2018quantum}] Let $\mathcal{H}$ be a (finite-dimensional) Hilbert space and let $P$ be a projection on $\mathcal{H}$. Given a channel $\mathcal{E}: \mathcal{L}(\mathcal{H}) \rightarrow \mathcal{L}(\mathcal{H})$, a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(P \mathcal{H})$ is private for $\mathcal{E}$ with respect to $P$ if \begin{equation} \Phi_{P} \circ \mathcal{E}^{\dagger}(\mathcal{L}(\mathcal{H})) \subseteq \mathcal{M}^{\prime}=\{X \in \mathcal{L}(P \mathcal{H}) \mid[X, O]=0 \, \forall O \in \mathcal{M}\}.
\end{equation} \end{definition} Correctable and private algebras are related by the following theorem: \begin{theorem}[Proposition 2.4~\cite{kribs2018quantum}] \label{thm:correctable-private} Let $\mathcal{M}$ be a subalgebra of $\mathcal{L}(P \mathcal{H})$ for some Hilbert space $\mathcal{H}$ and projection $P$. Let $\mathcal{E}$ be a channel on $\mathcal{H}$ with complementary channel $\mathcal{E}^{C}$. Then $\mathcal{M}$ is correctable for $\mathcal{E}$ with respect to $P$ if and only if $\mathcal{M}$ is private for $\mathcal{E}^{C}$ with respect to $P$. \end{theorem} \section{The holographic principle} \label{sec:holography_background} What characterizes a (quantum) gravitational/holographic theory? Which of these characteristics can be captured by low-dimensional discrete toy models? The goal of this section is to provide the interested reader with enough intuition about gravity and holography to answer these questions. In particular, reading this section is meant to establish the concepts and intuition necessary to understand the following key claim: \begin{itemize} \item The entanglement structure of holographic states is special. The entanglement entropy of a mixed state is not in general an observable. However, the entropy of the reduced density matrix in the spatial subregion $A$ of a \emph{classical} holographic state $\rho$ obeys an area law known as the Ryu-Takayanagi formula: \begin{align}\label{eq:classical_RT} S_A(\rho) = \frac{1}{4G_N} \min_{\gamma_{A}} \operatorname{Area}(\gamma_{A}). \end{align} That is, entropies of subregions are proportional to the value of a geometric observable, the minimal area operator, which is obtainable by following a well-defined procedure (detailed later in this section) to construct the classical geometry corresponding to $\rho$, and the constant of proportionality involves the gravitational constant $G_N$.
Furthermore, holographic states in a much larger class, where we allow entanglement of bulk degrees of freedom as well as superpositions of different geometries, nevertheless have entropies described by a more general RT formula: \begin{align}\label{eq:full_RT} S_A(\rho) = S_{\text{bulk},A}(\rho) + \text{Tr}(L \rho), \end{align} where $L$ is again a bulk observable whose eigenvalues are the minimal areas of the different geometries in the superposition. An RT formula also holds for the reduced state in the complementary subregion $\bar A$, with the \emph{same} operator $L$. \end{itemize} The following sections of the paper will be devoted to understanding the features of, and building examples of, quantum error-correcting codes which obey Eq.\ \eqref{eq:full_RT}. In particular, we will show that an operator $L$ exists for codes which obey a \emph{complementary recovery} property relating errors correctable in a region $A$ to errors correctable in the complement of the region. These codes are themselves \emph{holographic}: even though the codes are not gravitational, encoded states have the same special entanglement structure. However, before moving to the error-correction setting, in this section we give more details on what the RT formula \emph{means}. We discuss how to describe a spacetime geometry which obeys the equations of general relativity: first using a metric, then using the more invariant data of geodesic distances and extremal areas. We then sketch how to think of a \emph{quantum state} which describes (a spatial slice of) a given geometry, and then the larger Hilbert space which allows for entangled degrees of freedom living on these geometries, as well as states describing superpositions of geometries. 
Finally, we move from this abstract discussion to the more concrete setting of holographic theories which give dual descriptions of some of these quantum-gravitational Hilbert spaces, allowing us to measure bulk observables via a holographic operator dictionary and relating the bulk geometry to the boundary entropic structure. \subsection{Some general relativity} We begin by explaining the objects that appear on the right-hand side of \eqref{eq:classical_RT}; in later subsections we'll discuss the meaning of the left-hand side and of the generalization \eqref{eq:full_RT}. These are geometric quantities, so we'll first need to explain what we mean by a geometry, by introducing the notion of a metric to define the distance between points and along curves, and then the special curves called geodesics which define the causal structure of a spacetime. We'll then pass from mathematics to physics by discussing how the Einstein field equations relate the geometry to the energy and matter living on top of it. Finally, we'll give a (reasonably) careful discussion of the symmetries of spacetime and of the Einstein equations. Many metrics can describe the same spacetime, so if we want to work with physical quantities we need objects which don't change when we alter the metric but leave the spacetime unchanged. We'll see that, when a spacetime has a boundary, one of these quantities is precisely the minimal area operator appearing in \eqref{eq:classical_RT}. We recommend that the interested reader looking for a more complete but still concise introduction to GR consult \cite{carroll2001no}. \subsubsection{Metrics and distances}\label{sub:metrics} The full machinery of quantum gravity won't be necessary for this review, but it will be useful to establish some intuitions and terminology. We begin with classical general relativity. The space in this theory is a particular non-flat D-dimensional geometry. 
Formally, what we mean by a ``geometry'' is some smooth (differentiable) manifold, which we can describe by some set of coordinates $\{x^\mu\}_{\mu=0}^{D-1}$, where the index $\mu$ ranges over the $D$ dimensions of the manifold. What we mean by ``curved'' is that distances between points in this manifold aren't given by the Euclidean distance. Instead, we use a more general notion of distance, a \emph{pseudo-Riemannian metric} $g_{\mu \nu}$. Using the metric, we can define the \emph{line element} \begin{equation} ds^2 = g_{\mu \nu} dx^\mu dx^\nu, \end{equation} where we are adopting the convention that repeated indices are summed over. That is, at every point of space the metric is a \emph{matrix} (or, more formally, a two-index tensor): if we specify a particular point $x$ and pick two coordinate directions $\mu,\nu$ we can find the matrix element $g_{\mu \nu}$. When $g_{\mu \nu}=\delta_{\mu \nu}$ at all points in space, we recover the special case of the Euclidean metric in $D$ dimensions: the line element is \begin{equation} ds^2_\text{Euc} = (dx^0)^2 + \ldots + (dx^{D-1})^2 \equiv dx^\mu dx^\mu. \end{equation} However, we more often have cases in which some of the coordinates are \emph{timelike}, $g_{aa}<0$. In particular, when exactly one of the coordinates (by convention, $x^0$) is timelike (or, more precisely, when the metric has one negative eigenvalue everywhere on the manifold), we say that the metric is \emph{Lorentzian}. A simple example is given by taking the Euclidean metric and putting a minus sign in front of the {00} (``time-time'') component: \begin{equation} ds^2_\text{SR} = -(dx^0)^2 + \ldots + (dx^{D-1})^2. \end{equation} This is a metric which describes the situation of \emph{special relativity}: we can see that, for any fixed value of $x^0$, the \emph{spatial} part of the metric is still flat.
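For intuition, a standard example of a genuinely curved metric (our own illustrative aside) is the round two-sphere of radius $R$, with coordinates $(\theta,\phi)$:

```latex
\begin{equation}
ds^2_{S^2} = R^2 \left( d\theta^2 + \sin^2\!\theta \, d\phi^2 \right),
\qquad
g_{\mu\nu} = R^2 \begin{pmatrix} 1 & 0 \\ 0 & \sin^2\theta \end{pmatrix}.
\end{equation}
```

No choice of coordinates brings this line element to the Euclidean form everywhere on the sphere, which is precisely what it means for the geometry to be curved.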
The line element in turn allows us to compute the length of a curve $\gamma$, which we can parametrize as a choice of coordinates at each point on the curve: $\gamma(\lambda)=x^\mu(\lambda)$. The length of the curve is given by adding up the infinitesimal displacements along the curve, i.e. the arc length integral \begin{equation}\label{eq:arc_length} \left|\gamma\right| \equiv \int_0^1 d\lambda \sqrt{ds^2} = \int_0^1 d\lambda \sqrt{g_{\mu \nu} \frac{dx^\mu(\lambda)}{d\lambda} \frac{dx^\nu(\lambda)}{d\lambda}}. \end{equation} In a given geometry, we can construct the set of \emph{all possible} (smooth) curves which connect two points (in the equation above, the points are $x^\mu(0)$ and $x^\mu(1)$). Individual curves in this set depend on some choice of coordinates, but, crucially, the entire set depends only on the geometry and the choice of points\footnote{Admittedly, so far we've labelled these points in a particular coordinate system, but we could just call them $A$ and $B$, or alternatively consider any coordinate system that preserves the locations of the two points but allow the coordinates to vary on the rest of the geometry. Below we're going to consider the whole \emph{space} of geodesics, and that will remove even this dependence.}. So any quantity we can compute given access to the entire set is coordinate-independent. In particular, we'd like to use the set to come up with a coordinate-independent distance between two points. In Euclidean (or more generally Riemannian) metrics, one such quantity is \emph{the length of the shortest curve connecting two points}. If there's a timelike direction, this isn't true anymore: we can take a curve and add zig-zags in the timelike direction which will make the curve steadily shorter and shorter. So we can't simply take this as our distance measure. The right generalization turns out to be to consider the lengths not of \emph{minimal} curves, but \emph{extremal} ones. 
To find these curves, we take the arc length integral \eqref{eq:arc_length}, consider it as a functional of the curve $\gamma$, vary the curve $\gamma\rightarrow\gamma + \delta\gamma$, and look for stationary points of the variation. This is the fundamental problem of the calculus of variations, and its solution is given by solving the Euler-Lagrange equations with the action taken to be the line element $ds=\sqrt{ds^2}$. We won't write this down explicitly, but the equation to be solved is known as the \emph{geodesic equation} and the extremal curves are called \emph{geodesics}; in GR, they're the curves traced out by non-accelerating (``freely-falling'') observers. If all of the geodesics connecting two points have the same length, we call this length the \emph{geodesic distance} between the two points; if there are multiple geodesics with different lengths, we take the geodesic distance to be the shortest such length. In geometries with one timelike direction, the geodesic distance between pairs of points gives a \emph{causal structure} for the geometry: for all pairs of points, we can tell whether they are spacelike, timelike, or null separated by checking whether the geodesic distance between them is, respectively, positive, negative, or zero. When two points are timelike separated, we often call the negative of the geodesic distance the \emph{proper time}; with appropriate units, it measures the time elapsed by a clock carried by an observer freely falling between the two points. Crucially, as we'll discuss below, the causal structure is really a property \emph{of the geometry itself}: the set of all geodesics on a manifold is independent of how the metric is parameterized, and two metrics describe the same geometry precisely when they produce the same causal structure. It should be clear that we can generalize this entire discussion by passing from curves to higher-dimensional objects (surfaces, volumes, etc.).
Instead of the arc length integral \eqref{eq:arc_length} we have some higher-dimensional integral, which we vary to find stationary solutions: extremal surfaces, volumes, etc. Like the geodesics, these are similarly coordinate-invariant objects. For ease of drawing figures, we'll typically work in two space and one time dimension. This hopefully explains our choice of notation in \eqref{eq:classical_RT}: the $A$ is a subregion of a spatial slice of the boundary, that is, a dimension $D-2$ (``codimension 2'') object, and the minimization is over the areas of extremal dimension $D-2$ objects in the bulk of the spacetime that touch the boundary at the edge of $A$ (actually a subclass of these objects, as we'll discuss towards the end of this section). If, like our universe, $D=4$, $A$ would be two-dimensional. But in three spacetime dimensions $D-2=1$, so the boundary is equivalent to a circle, $A$ is some portion of that circle, and the relevant extremal objects are curves which we've accordingly labeled as $\gamma_A$. We nevertheless call the operator which computes their length an \emph{area} operator because in general spacetime dimension the invariant objects have \emph{codimension} 2. \subsubsection{The Einstein field equations} So far we have just done mathematics (differential geometry, to be precise). General relativity is a physical theory which relates the geometry of a manifold to the matter distribution living on it. More precisely, the Einstein field equations read \begin{equation}\label{eq:einstein} G_{\mu \nu} = 8 \pi G_N T_{\mu \nu}. \end{equation} It won't be necessary for us to precisely define the objects in this equation, but we mention several features of it: \begin{itemize} \item $G_{\mu \nu}$ is the ``Einstein curvature tensor'', a geometric object which is a function of the metric and its derivatives. 
\item $T_{\mu \nu}$ is the ``stress-energy tensor.'' In a particular coordinate frame in a Lorentzian spacetime, we can identify $T_{00}$ as the energy density, $T_{0k}$ as momentum density in the $x^k$ direction, and the mixed components as pressures and stresses. \item Equation \eqref{eq:einstein} comprises $D^2$ equations given by different choices of $\mu,\nu$, but both the left- and right-hand sides of the equation are symmetric under exchange of $\mu$ and $\nu$, e.g. $G_{\mu \nu}=G_{\nu \mu}$, so there are only $D(D+1)/2$ independent equations, 10 in 4 spacetime dimensions. For a fixed choice of stress-energy tensor, this is a set of (second-order) nonlinear coupled differential equations which determine the metric. \item Although it isn't manifest in this form, we can often rewrite the Einstein equations as an \emph{initial value problem}: if we know the metric and its derivatives and the stress-energy, \emph{at one particular moment in time}, i.e. everywhere in space for one particular value of a timelike coordinate, we can use the field equations to tell us what the metric will be at some later time\footnote{When we're trying to solve Einstein's equations on a manifold with a boundary, we need to give (spatial) boundary conditions in addition to initial data. This is the case, in particular, for the asymptotically-AdS spacetimes that are of interest in holography.}. Now we can study, for example, backreaction---given some particular matter configuration, how does the geometry evolve? \end{itemize} \subsubsection{Diffeomorphism invariance}\label{sub:diff} We said above that a particular geometry is described by a manifold specified by a metric. In general, however, there is not a one-to-one correspondence between metrics and geometries---there are many metrics which describe the same geometry.
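A familiar instance (our own illustrative aside): the flat plane admits both the Cartesian and the polar-coordinate metric,

```latex
\begin{equation}
ds^2 = dx^2 + dy^2
\qquad\text{and}\qquad
ds^2 = dr^2 + r^2 \, d\theta^2,
\end{equation}
```

related by the coordinate change $x = r\cos\theta$, $y = r\sin\theta$. The components $g_{\mu\nu}$ differ, but every curve has the same length in either description, so all invariant quantities agree.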
We're already used to being able to use different coordinate choices to describe the same physics: in Newtonian physics we're free to choose different origins and axes for our coordinate system, or, with a little more work, to make one coordinate system move and rotate with respect to another. We can tell that two seemingly different situations are actually the same thing in different coordinate systems when the laws of physics are the same in both situations. In Newtonian physics, for example, acceleration is the same in all inertial frames, and Newton's laws of motion depend only on the acceleration: they have the property of ``Galilean invariance.'' Similarly, in special relativity, the laws of physics are invariant under Lorentz transformations; two observers may differ, for example, on what the strength of an electric or magnetic field is but they will agree on the appropriate invariant combinations. So, to answer the question of whether two metrics describe the same geometry, we should compute invariant quantities and check whether they are the same in both situations. In GR, the invariant quantities are related to the causal structure: they are the proper distances \eqref{eq:arc_length} along geodesics. Two metrics which share the same causal structure are related by a \emph{diffeomorphism}. That is, in general a given spacetime can be described by multiple distinct metrics. We emphasize that the \emph{metrics} really are distinct; it's only by computing ``diff-invariant'' quantities that we can check that they describe the same spacetime. Another way to phrase this is that the metric doesn't only contain physical information about a spacetime; it \emph{also} contains extra redundant information that doesn't matter to an observer. (Think of the choice of the origin and axes for a flat metric, for example.) In high-energy physics one often refers to this information as ``gauge'' degrees of freedom.
When we go from a metric to the physical quantities, i.e.\ the geodesics, we ``gauge out'' these degrees of freedom so that only the physical ones remain. We say that two metrics are ``gauge-equivalent'' if they're related by a gauge transformation, i.e.\ there is a diffeomorphism which takes one metric to the other. These two metrics are members of a ``gauge orbit'', the equivalence class of all gauge-equivalent metrics. We ``gauge-fix'' by specifying enough information to restrict to a subset of metrics in the equivalence class; if we specify so much that only one metric remains, we've totally fixed the gauge. A ``gauge'' is just another word for a measuring device; think of gauge-fixing as specifying the properties of this measuring device, i.e. giving enough information that two observers can agree on how to perform a measurement. In Newtonian physics, for example, we'd gauge-fix by fixing the direction of each coordinate axis, and a position and velocity for the origin of the coordinate system. None of that affects the physics, but if you want to check someone else's measurements you'll need that information. However, it's important to point out that not all gauge transformations preserve all of the information we might call physical. The issue arises when we consider metrics for manifolds with \emph{boundaries}. It's useful to gain some intuition by first thinking about the equivalent case in electrodynamics. Recall that the behavior of charged particles and electromagnetic fields is governed by Maxwell's equations. However, just like the metric, the electromagnetic potentials are not gauge-invariant; only certain combinations of them (like the field strengths $\vec{E}$ and $\vec{B}$) are. Just like Einstein's equations, the Maxwell equations are also differential equations, and hence their solution in a region with a boundary depends on a choice of boundary conditions. We could solve for the behavior of the electromagnetic fields inside a conducting sphere, for example, or with a charged surface.
The point is that these boundary conditions represent an additional set of physical information. In general, then, the set of gauge transformations which preserve all physical quantities will be \emph{smaller} than if we didn't worry about boundary conditions at all. This same issue arises even when we're not placing boundary conditions at some particular region of space, but instead placing them ``at infinity.'' In this case there is a precise language used to talk about these types of boundary conditions. We ask whether the gauge transformation has any effect at infinity, or, equivalently, if it can be distinguished from the identity transformation in the limit that we go very far away from the origin of our coordinates. If it can't, we call the gauge transformation a ``small gauge transformation''. If it can, we call the gauge transformation a ``large gauge transformation.'' And we have in mind that large gauge transformations are \emph{physical} while small gauge transformations are not. A large gauge transformation in electrodynamics can, for example, change the total charge a distant observer measures enclosed within some radius. A similar story holds in general relativity, but now the gauge transformations are diffeomorphisms applied to metrics. A small gauge transformation of the metric is one that takes $g_{\mu \nu}\rightarrow g_{\mu \nu}+\delta g_{\mu \nu}$, with $\lim_{|x|\rightarrow\infty} \delta g_{\mu \nu}(x)=0.$ A large gauge transformation can, for example, change the total invariant mass enclosed within some region. One convenient way to gauge-fix in general relativity is to fix a direction of time everywhere on the spacetime, or equivalently identify points which are on ``the same spatial slice'' at a given time. In four spacetime dimensions, this is referred to as a \emph{3 + 1 decomposition}.
Geometrically, we can think of this as a foliation\footnote{It turns out that there are some manifolds where it isn't actually possible to do such a foliation, but none of these exotic spacetimes will be relevant for our purposes. For a review of this formalism, which is most important when solving Einstein's equations numerically, see \cite{gourgoulhon20123+}.} of the spacetime into spatial slices. Again, when the spacetime has a boundary (in a spacelike direction), some foliations will coincide on the boundary and others will not. It's only the foliations which look the same at the boundary which we think of as describing the same physics. As we'll discuss below, the RT formula applies to spacetimes that have a (spatial) boundary. Invariant quantities are those which are left unchanged by ``small diffeomorphisms'', i.e.\ diffeomorphisms which leave the boundary unchanged. In particular, the invariant quantities of interest for the RT formula are the areas of extremal surfaces which end on the boundary. In $2+1$ dimensions, these are geodesics which extend between points on the boundary; in higher dimensions there are also extremal surfaces, volumes, etc. which touch the boundary. In subsequent sections we will talk not about spacetimes, but about \emph{Hilbert spaces} with gauge symmetries and redundancies. Although this type of gauging can be described independently of anything having to do with gravity, we will always have in mind that holographic error-correcting codes should indeed exhibit some version of the gauge symmetries we see in gravity. In particular, our codes will manifest a particular version of the observation that large gauge transformations describe physically distinct spacetimes: we will see that changing the gauge in the holographic code yields a different result when measuring with an ``area operator.'' See Appendix \ref{app:2x2bacon-shor} for a discussion of the Bacon-Shor code which is phrased in the language of gauges.
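To make these invariant quantities concrete: on a constant-time slice of AdS$_3$ we can model the bulk as the hyperbolic upper half-plane, $ds^2 = (dx^2 + dz^2)/z^2$ (AdS length set to 1), with the boundary at $z \to 0$. The geodesic anchored at two boundary points separated by $\ell$ is a semicircle, and its length, regulated by cutting it off at $z = \epsilon$, is $2\log(\ell/\epsilon)$ up to terms that vanish as $\epsilon \to 0$; this is the standard computation behind the RT entropy of a boundary interval. A quick numerical check (our own illustration, with arbitrarily chosen values of $\ell$ and $\epsilon$):

```python
import math

def geodesic_length(ell, eps, n=200_000):
    """Regulated length of the boundary-anchored geodesic in the upper
    half-plane metric ds^2 = (dx^2 + dz^2)/z^2.

    The geodesic between boundary points separated by ell is a semicircle of
    radius R = ell/2; parametrized by the angle theta along the semicircle,
    the length element is d(theta)/sin(theta), integrated between the angles
    where the curve reaches the cutoff height z = eps."""
    R = ell / 2
    theta_min = math.asin(eps / R)   # cutoff angle where z = eps
    theta_max = math.pi - theta_min
    h = (theta_max - theta_min) / n
    # midpoint rule for the integral of 1/sin(theta)
    return h * sum(1.0 / math.sin(theta_min + (i + 0.5) * h) for i in range(n))

ell, eps = 2.0, 1e-3
numeric = geodesic_length(ell, eps)
leading = 2 * math.log(ell / eps)    # the universal 2 log(ell/eps) behavior
assert abs(numeric - leading) < 1e-3
```

Dividing such a length by $4G_N$ gives the familiar CFT$_2$ interval entropy $\frac{c}{3}\log(\ell/\epsilon)$, with the Brown--Henneaux central charge $c = 3/(2G_N)$ in these units; nothing in the code above depends on that identification, though.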
\subsection{Towards quantum gravity} General relativity is a \emph{classical} theory. Just like Newton's laws govern the behavior of massive particles and extended objects moving, accelerating, and applying forces to each other, and Maxwell's electrodynamics govern the coupled behavior of charged objects and electromagnetic fields, Einstein's equations \eqref{eq:einstein} govern the coupled behavior of energy distributions and geometry. By ``govern the behavior'', we really mean that given enough data to describe things at an initial time (the position and velocity of particles, the electric and magnetic fields everywhere, the stress-energy tensor and metric), we can use the theory to find a description at a later (or earlier) time. Quantum mechanics is also a theory in this sense: given a Hamiltonian and an initial wave function, evolution is governed by the Schr\"odinger equation. If we arrived at the quantum theory by quantizing a classical theory, we can get back the classical quantities by applying the appropriate observables (i.e., Hermitian operators) to the wave function: for example, for the quantum mechanics of a point particle in a potential, position or momentum operators. For reasonable choices of Hamiltonian, the \emph{expectation value} of these observables will evolve smoothly--but when we measure the observable we project onto one of its eigenstates according to the Born rule, and only by repeatedly resetting the system, evolving, and measuring can we actually get access to the expectation value. Finding a fully consistent theory of quantum gravity is beyond the scope of this review, to put it mildly, but we \emph{can} say some things confidently. In the quantum theory of a point particle, the classical observables (i.e., the observables which reduce to classical quantities in the classical limit) are the operators which measure position, velocity, etc.
The classical observables of a field theory are, similarly, the operators which measure field values and their derivatives. The classical observables of a gravitational theory, then, when applied to states corresponding to classical geometries, measure the metric, stress tensor, etc. So, at minimum, we expect the Einstein equations to hold in some classical limit. That is, the Einstein equations suggest a schematic operator equation \begin{equation} \hat G_{\mu \nu}\: ``="\:8 \pi G_N \hat T_{\mu \nu}, \end{equation} where the hat indicates that this is an operator expression, and we've put quote marks around the equality to emphasize that it's not really precise. What we really mean by this expression is that, for classical states $\ket{\Psi}$ which are simultaneous eigenstates of both operators, \begin{equation} \hat G_{\mu \nu} \ket{\Psi} = G_{\mu \nu} \ket{\Psi} = 8 \pi G_N \hat T_{\mu \nu} \ket{\Psi} = 8 \pi G_N T_{\mu \nu} \ket{\Psi}, \end{equation} which in turn implies that \begin{equation} \left\langle\hat G_{\mu \nu}\right\rangle_\Psi = 8 \pi G_N \left\langle\hat T_{\mu \nu}\right\rangle_\Psi. \end{equation} Again, we emphasize that making these expressions precise is complicated in full quantum gravity. The abundant gauge freedoms in GR which we discussed in the previous subsection mean that the notion of a local operator is itself subtle, for example. Keeping this in mind, we can proceed to move gingerly away from exactly classical states (i.e., exact eigenstates of these operators), in two ways: \begin{itemize} \item We can use perturbation theory to understand the result of measuring operators in states close to classical states.
For example, if we have a massive system in some superposition of locations, we can see that the expectation value of the metric is that sourced by the average position of the mass, but that measuring quantities sensitive to the metric (for example, the motion of a test mass passing near the system) will project the wave function onto a state of definite metric (and so the particle will be seen (experimentally! \cite{PhysRevLett.47.979}) to follow a geodesic of this metric, emit gravitational waves quantized as gravitons, etc.). For a given classical geometry, we can use these sorts of techniques to work our way all the way up to the full machinery of quantum field theory in curved space. At the linear, perturbative level, the graviton enters as just another type of field. (To be clear, though, perturbation theory has its limits! We can't use this machinery to fully quantize gravity, which is famously impossible using just the machinery of quantum field theory.) \item We can use the linearity of quantum mechanics to discuss not only states close to particular geometries but \emph{superpositions} of distinct geometries. \end{itemize} It's important to emphasize that there's a major caveat with this second point: the linearity of quantum mechanics applies to states \emph{in a fixed Hilbert space}. Let's return to the basic example of the quantum-mechanical theory of a single particle in a potential. There's a position operator on this Hilbert space, and we understand how it acts both on eigenstates of position and on general states (because we can write a general state in the position basis). But this doesn't tell us how to act on states of a single particle in a different potential. Actually, there are ways to give a sensible answer to this question. We could arrange for the particle to move in a given potential by coupling it to another set of degrees of freedom, an ``external field'', so that the potential is recovered for a fixed state of the fields. 
Then, in this new, larger Hilbert space, we now have a way to talk about a superposition of a particle in one potential with a particle in a different one. But doing so requires us to understand how to embed the original Hilbert space into the larger one. This (finally) is where holography comes in. You might sensibly worry that we could only ever measure the metric, stress-energy, geodesic distance, etc., on a Hilbert space describing states perturbatively close to a single geometry. But holography, as we'll describe next, does much better than this. And, to be clear, we have very good reasons to expect that quantum gravity does in fact require us to deal with superpositions of different geometries! As we discussed in the first bullet point above, we can imagine coupling the metric to a quantum degree of freedom--for example, arranging to move a test mass into one of two locations depending on the result of a projective measurement. Even without explicitly arranging for this ourselves, though, there are (at least) two places where nature as we understand it naturally creates superpositions. One is cosmology: in the early universe, quantum variance of an inflating field \cite{Guth:1980zm,Linde:1981mu,Albrecht:1982wi} could have been converted \cite{Polarski:1995jg,Lombardo:2005iz} during the Big Bang into superpositions of different classical configurations of matter, radiation, etc. which ultimately seeded the large-scale structure of the universe. Another is black hole evaporation: according to Hawking's famous calculation \cite{Hawking:1975vcx}, a geometry with a black hole in it can ultimately evolve into a superposition of many possible states which each contain no black hole but rather some collection of matter and radiation, which in turn can source distinct spatial geometries. 
So, if we want our theory of quantum gravity to describe any of these scenarios, we're going to have to be able to work in a Hilbert space that allows for superpositions of geometries. \subsection{Holographic theories} Holographic theories are ones in which a gravitational theory can be described using a different non-gravitational theory ``at the boundary.'' These theories implement the desired feature of the last subsection: we can use them to describe superpositions of states which describe distinct geometries. Unfortunately, in the best-understood examples of holography none of these geometries look anything like our universe: in particular, they have \emph{negative spacetime curvature}, meaning that at large distances the spacetime metric becomes hyperbolic (``asymptotically anti-de Sitter''). Our universe, as best as we can tell, looks like it has \emph{positive} spacetime curvature. So we can't just immediately interpret our universe as a particular state in a holographic Hilbert space. However, holographic theories are nevertheless worth studying not only because they have a nice, well-understood Hilbert space, but also because states in these theories that describe geometries, or superpositions of geometries, have a number of nice properties, not least the RT formula itself. Like in the previous subsections, but perhaps even more so, our discussion here will only scratch the surface of what is by now a vast literature. Our goal will be to reach the RT formula and its interpretation, in particular, and along the way we will sometimes be heuristic (and we note in a few footnotes places where the main discussion has been imprecise). We recommend that interested readers looking for more comprehensive reviews on the aspects of holography most closely related to quantum information consult, for example, \cite{harlow2016jerusalem,harlow2018tasi} and references therein. First, we'll say a bit more about how the known examples of holographic theories actually work. 
Then, finally, we'll be in a position to present the RT formula once and for all. After we do that, we'll take the time to introduce a few last concepts to which it will be useful to refer later in the paper: the geometric notions of the causal and entanglement wedge, and the properties of complementary recovery and radial commutativity. \subsubsection{The holographic dictionary} Let's be a little more precise about what it means to describe one theory using another. The fundamental objects in any quantum theory are states and observables. In the last subsection we described how to think about states describing quantum fields on top of a spacetime geometry, or a superposition of spacetime geometries. In particular, when a state describes (a spatial slice of) a classical spacetime geometry satisfying the Einstein equations, it is an eigenstate of certain operators, with eigenvalues given by diffeomorphism-invariant quantities like the length of a geodesic, area of an extremal surface, etc. Then we can compute the expectation values of these operators on states close to these classical ones, and the linearity of quantum mechanics allows us to compute the expectation values on superpositions of near-geometric states. It was realized in the 1990s by string theorists \cite{maldacena1999large,Gubser:1998bc,Witten:1998qj,Aharony:1999ti} that, for states describing asymptotically anti-de Sitter geometries in $D$ dimensions, the expectation values of \emph{all} of these operators could instead be computed using operators in a non-gravitational $(D-1)$-dimensional theory.
In particular, one major result of the known holographic correspondences is that there is a precise dictionary for matching operators in the gravitational theory inserted at points near the AdS boundary to operators in the ``boundary theory'', and a precise prescription \cite{Hamilton:2005ju,Hamilton:2006az,Skenderis:2008dg,Christodoulou:2016nej} for integrating over points on the boundary to reconstruct operators deeper into the bulk of the spacetime. For the purposes of this paper, we won't really need to know about the details of the boundary theory: just the entropies of reduced density matrices constructed from (some of) the states in the theory. However, it's worth mentioning two of their properties. First, this type of correspondence could only make sense if the boundary theory at least had the same symmetries as the symmetries of the gravitational theory \emph{at its boundary}. In the language of Subsection \ref{sub:diff} above, these are the large gauge transformations: the diffeomorphisms that preserve the asymptotic form of the metric but act nontrivially on the boundary at spatial infinity. With a little bit of work, you could stare at a metric that describes hyperbolic space and figure this out---it turns out that the group of transformations that do this is the \emph{conformal group} of angle-preserving transformations. And, accordingly, the boundary theories are \emph{conformal} field theories. Now you know the reason why another name for the holographic correspondence is ``the AdS/CFT correspondence''! Second, hyperbolic (and spherical) metrics have a length scale, the ``anti-de Sitter length''. Einstein's equations \eqref{eq:einstein} \emph{also} have a length scale, the Planck scale, which can be derived (in a dimension-dependent way) from the Newton gravitational constant $G_N$. The ratio of these two length scales is a dimensionless number, which also appears in the conformal field theory on the boundary.
In spacetimes that look classical, this ratio needs to be very large: it's the ratio between ``cosmological'' scales and ``quantum gravitational'' scales. Accordingly, boundary CFTs that can describe classical-looking geometries have a very large number of fields--they're often referred to as ``large-$N$ CFTs.'' And it turns out that, for the boundary theory to describe a classical bulk geometry, these many fields must also be strongly coupled to each other. So, when used to describe classical geometries, the holographic correspondence relates gravity in asymptotically AdS spacetimes to the behavior of strongly-coupled conformal field theories. For the purposes of this article we'll only care about fundamentally \emph{gravitational} observables like the lengths of geodesics, etc., whose expectation values on classical states we can compute knowing only the metric. But it's important to understand that this dictionary doesn't apply only to these; it also applies to any other diff-invariant observable built from the \emph{fields} in the theory--for example, the stress tensor on the right-hand side of the Einstein equations, or just the expectation value of a field at some point. So far, this might just seem like an interesting coincidence, but no more than that. After all, as discussed in the previous subsection, we already in principle know how to compute these observables for nearly-classical states: we write down a metric describing the geometry of the state, solve the Einstein equations to get the field configurations on top of the geometry, then perturb these field configurations slightly and see how this backreacts on the geometry using the operator form of the Einstein equations. However, there are a few reasons the existence of a holographic description is exciting: \begin{itemize} \item Sometimes we can compute quantities easily in the gravitational theory but not the non-gravitational theory, or vice-versa.
The RT formula itself is an example of this: it's a straightforward mechanical task to compute the areas of extremal curves given the metric, but computing the entropy of a CFT state requires first writing down what the state is, already a nontrivial task, and then doing all the integrals to trace out the state of the fields outside the region of interest. \item As we discussed in the last subsection, we're free to compute the expectation values of states that \emph{aren't} nearly-classical. These states need not come anywhere near solving the classical Einstein equations, i.e. they might not be easily described as geometric at all! Yet they live in the same Hilbert space as all the nearly-classical states. We can think of these as \emph{bona fide} quantum-gravitational states! (In fact, historically the logic worked almost the other way around: the holographic dualities were used within string theory to exhibit examples of states that weren't ``stringy'' but had nearly-classical descriptions.) \end{itemize} \subsubsection{The RT formula}\label{sub:RT} Now, at last, let's return to the versions of the RT formula we presented at the start of this section. First, the version \eqref{eq:classical_RT} that applies to a holographic state dual to a classical geometry: \begin{align} S_A(\rho) = \frac{1}{4G_N} \min_{\gamma_{A}} Area(\gamma_{A}). \end{align} We've already understood the meaning of the right-hand side from previous sections. If we have a metric already, we can find the geodesics (or extremal surfaces) which hit the boundary at precisely the edge of the boundary region $A$. If there are multiple such geodesics, we choose the one with smallest area\footnote{\label{fn:homology}We mention one caveat for experts: if $A$ is the entire boundary, and $\rho$ is mixed, then it might seem like the RT formula leads us astray, because the boundary of $A$ is the empty set and so any geodesic which is a closed circle hits the boundary at the empty set, i.e. 
doesn't touch it at all. This puzzle was resolved by realizing that the spacetime dual to a thermal state is \emph{AdS-Schwarzschild}, i.e. hyperbolic spacetime with a black hole of the appropriate temperature sitting in the middle. Then we get the correct result if we take the minimal surface to be the one which wraps around the horizon of the black hole. To get this, we need to impose a ``homology constraint'': the only geodesics which we consider in the minimization in \eqref{eq:classical_RT} are those which not only meet the boundary at the appropriate place, but also those which can be continuously deformed through the bulk to touch $A$ without crossing any holes or horizons in spacetime, i.e. those that are ``homologous to $A$.''}. On the left-hand side, the state $\rho$ lives in the Hilbert space of a large-$N$ CFT, with $N$ related to the Newton constant $G_N$ as discussed above. In principle, we can perform the field-theoretic equivalent of tracing out a subregion, which involves integrating out the values of fields outside the region with appropriately chosen boundary conditions, then take a logarithm to find the entropy. In practice, the computation is usually done by holographers using slicker mathematical techniques to compute the entropy, for example computing $\text{Tr}(\rho_A^n)$, where $\rho_A$ is the reduced state on $A$, and then obtaining the entropy by taking a limit in $n$. Even so, it's very hard to carry out this procedure except in states close to certain highly symmetric states like the vacuum or a thermal state.
Its eigenstates are the states dual to classical geometries (with the appropriate AdS length for the particular CFT under consideration), and its eigenvalues are the areas of the minimal surfaces that meet the boundary subregion. However, roughly speaking\footnote{For a discussion of the limitations of this approach, and the circumstances under which the RT formula breaks down (essentially, when there are very many, exponential in $N^2$, terms in the superposition), see \cite{Almheiri:2016blp}.} there are many different ``field-theoretic'' states that live on the same curved spacetime. The bulk entanglement term identifies which of these states (or rather, which equivalence class of states with the same bulk entanglement inside the extremal surface) is described by $\rho$. Hence neither the left-hand nor the right-hand side of \eqref{eq:full_RT} is the expectation value of an observable, but the difference between the boundary and bulk entropies \emph{is}. Moreover, as we will discuss below, $L$ is an operator which can be obtained given access to the reduced state on either $A$ or its complement $\bar A$.
On the left is a spacetime diagram, showing the full spacetime extent of the causal wedge $\mathcal{C}[A]$ associated with a boundary subregion $A$ that lies within a boundary time slice $\Sigma$. The point $x$ lies within $\mathcal{C}[A]$ and thus any operator at $x$ can be reconstructed on $A$. On the right is a bulk time slice containing $x$ and $\Sigma$, which has a geometry similar to that of our tensor networks. The point $x$ can simultaneously lie in distinct causal wedges, so $\phi(x)$ has multiple representations in the CFT.}\label{fig:subregion} \end{figure} On a time-slice of the boundary theory, choose a spatial sub-region $A$. The \emph{causal wedge} $\mathcal{C}[A]$ of $A$ is the bulk region bounded by (1) the boundary domain of dependence of $A$ (dark grey curve in Fig.~\ref{fig:subregion}) and (2) the set of bulk geodesics which start and end on (1). The causal wedge is determined by the domain of dependence of $A$, hence ``causal.'' Often, especially in static spacetimes, it's convenient not to work with the full causal wedge, but instead with some particular spatial slice within it. If the RT surface is spacelike, then every spatial slice in the causal wedge ends on the RT surface itself, but they hit the boundary at different times. It's usually most convenient to choose a spatial slice which intersects the boundary at the region $A$ itself. In nice situations, for example if the spacetime is static, we can pick a spatial slice that extends between $A$ and its RT surface, which by causality contains all of the information necessary to reproduce the entire causal wedge. In this case, as shown in the right diagram in Figure \ref{fig:subregion}, we're free to draw diagrams which suppress the time direction entirely: compare to the figures in the Introduction, which similarly depict the situation at one particular time. \begin{figure}[htb!]
\centering \includegraphics[height=7cm]{causal_vs_entanglement.pdf} \caption{(This is a slightly modified version of Figure 14 in~\cite{pastawski2015holographic} and its caption.) The intersection of the entanglement wedge $\mathcal{E}[A]$ with a bulk time-slice, in the case where $A$ has two connected components. Minimal geodesics in the bulk are solid lines. When $A$ is smaller than $\bar A$, we have the situation on the left and the causal wedge agrees with the entanglement wedge. When $A$ is bigger, however, the minimal geodesics switch and the entanglement wedge becomes larger. In particular, the point in the center lies in $\mathcal{E}[A]$ but not $\mathcal{C}[A]$.}\label{fig:wedges} \end{figure} However, we also know that given access to the entropy of a CFT subregion we can use the RT formula to compute the area of the relevant extremal surface. In fact, we expect that if we know not just the entropy but the full reduced density matrix, we can construct, e.g., the RT surface itself. And we can also use this information to construct the RT surfaces of smaller parts of the subregion, so we should be able to read off the metric everywhere in the portion of the bulk between the boundary region and the RT surface. This is formalized by the notion of the \emph{entanglement wedge}: The \emph{entanglement wedge} $\mathcal{E}[A]$ is the domain of dependence of the bulk region bounded by (1) $A$ (dark grey curve in Fig.~\ref{fig:wedges}) and (2) the minimal extremal bulk surface anchored on $\partial A$ and homologous to $A$ (i.e. the RT surface of $A$) (black curve in Fig.~\ref{fig:wedges}). The entanglement wedge is determined by the RT surface, whose area is (via the RT formula) proportional to the von Neumann entropy of the part of the boundary theory contained in $A$, hence ``entanglement.'' One can show, under the assumption that the bulk theory describes a sensible gravitational spacetime, that the causal wedge is contained in the entanglement wedge \cite{Wall:2012uf,Headrick:2014cta}.
Figure \ref{fig:wedges} depicts a situation in which the two wedges do not coincide. Note that, for a pure boundary state, the RT surface of $A$ can clearly be seen to be the same as the RT surface of $\bar A$. That is, the operator $L$ which gives its area is both in the set of operators acting only on region $A$ of the boundary theory, and the set of operators acting only on region $\bar A$. But causality dictates that all operators in one of these two sets commute with all operators in the other of these sets. So, $L$ commutes with every operator acting on $A$: we say it's in the \emph{center} of the operators acting on $A$. One such operator is the identity. But, in general, when there is some gauge symmetry in the bulk theory, there will be elements in the center which are \emph{not} trivial, and the area operator will be one of these. So the nontriviality of the area operator tells us about the fact that the bulk is gravitational, and thus has diffeomorphism invariance! Thinking about the algebraic properties of bulk and boundary operators will be key to our approach in the rest of the paper; we'll review the concepts of operator algebras, centers, etc.\ in the next section. \subsubsection{Complementary recovery and radial commutativity}\label{sub:radial} Recall that the causal wedge tells us which bulk operators can be reconstructed given access to a boundary region $A$. As Figure \ref{fig:subregion} makes clear, if we divide the boundary into two regions, an operator acting at a particular bulk point must lie in the causal wedge of at least one of the regions, and it only lies in the causal wedge of both when the point is part of the RT surface. This is \emph{complementary recovery}: given a subregion we can reconstruct all operators inside its causal wedge but none of the operators outside it. Now consider, instead of fixing the region $A$, what happens when we allow it to vary but still keep the bulk operator fixed.
In general, we see that an operator lies inside the causal wedge of \emph{many} regions. So, if we have access to a boundary region large enough that many of its subregions could alone reconstruct the operator, knowledge of the operator is \emph{redundantly} encoded in the state: we don't need the full state on $A$ to reconstruct it, and there are many possible ways to reconstruct it. We say the state exhibits \emph{subregion duality}, in which many subregions can be used to reconstruct the same operator. Furthermore, if we erase a piece of the boundary that is much smaller than $A$, almost every bulk operator can still be exactly reconstructed. Historically, it was this sort of code-like redundancy which led to the consideration of holographic error correction. The flip side of subregion duality is radial commutativity. We can see that, at least for non-pathological spacetimes, the RT surfaces of \emph{small} subregions don't extend deep into the bulk: we need large subregions to penetrate deep into the interior. If a bulk operator is outside of the causal wedge of a subregion, that means, by causality, that it commutes with every operator acting in the causal wedge of the subregion. In particular, it commutes with the operators that act on the boundary region itself. But every boundary operator acting on a point in the boundary lives inside the causal wedge of any boundary subregion containing that point; in particular, it lives inside the causal wedge of an arbitrarily small subregion around the point. Hence any bulk operator which lives away from the boundary must commute with \emph{all} boundary operators acting on single points in the boundary: this is the property of \emph{radial commutativity}. This might not seem to be a problem yet: an arbitrary operator in the boundary \emph{doesn't} act at a single point in the boundary, but at many points.
However, field theories, and conformal field theories in particular, have an \emph{operator product expansion}: the product of operators acting at multiple points can be written as a sum of local operators acting only at a single point. And each of these local operators, by the argument above, commutes with the bulk operator! If we take this argument seriously, then, a bulk operator commutes with \emph{every} operator in the boundary theory. This seeming paradox was another motivation behind the introduction of error correction in holography---we only reached this conclusion because we treated bulk operators and boundary operators as acting on the same Hilbert space, but in fact the bulk Hilbert space of a given geometry, as we have seen, is much smaller and redundantly encoded into the CFT Hilbert space. So, in the language of this paper, the resolution can be stated simply: the bulk doesn't live ``inside the boundary,'' i.e. in the same space. Rather, as depicted in Figure \ref{fig:notation}, we must map the bulk into the boundary using an isometry $V$. \section{Introduction} In the last decade, ideas and tools from quantum information and computation have found an increasing number of applications in the efforts to understand the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence~\cite{maldacena1999large} as a holographic quantum theory of gravity. Notable examples include the ER=EPR~\cite{maldacena2013cool} conjecture and the proposed resolutions of: the black hole information paradox \cite{almheiri2020page, penington2020entanglement}, the firewall paradox~\cite{harlow2013quantum}, and the wormhole growth paradox in terms of the complexity=volume~\cite{susskind2016computational,aaronson2016complexity, haferkamp2021linear} and complexity=action \cite{brown2016holographic} conjectures. Central to the connection between quantum gravity and quantum information is the Ryu-Takayanagi (RT) formula. 
The RT formula conjectures that the entanglement entropy of a boundary CFT state is dual to the area of a bulk surface in AdS~\cite{ryu2006holographic}. The study of the entanglement properties of the AdS/CFT holographic duality~\cite{almheiri2013black}, spurred by the result of Ryu and Takayanagi, has led to a reformulation of the AdS/CFT correspondence in terms of quantum error-correcting codes~\cite{verlinde2013black, almheiri2015bulk, mintun2015bulk}. This framework has helped to clarify the relationship between bulk and boundary and proved to be an effective and simple toy model of the AdS/CFT correspondence. Based on these early results, researchers built toy models that reproduce key features of the correspondence (such as subregion duality, radial commutativity and the RT formula) using quantum error-correcting codes based on tensor networks~\cite{pastawski2015holographic, donnelly2017living}, random tensor networks~\cite{hayden2016holographic}, and approximate Bacon-Shor codes \cite{cao2020approximate}. All these models (which have been recently reviewed in~\cite{jahn2021holographic}) give an explicit bulk-boundary mapping for states and observables. Using techniques from Hamiltonian simulation, \cite{kohler2019toy} showed how the mapping can be extended to local Hamiltonians. In parallel with the development of increasingly advanced toy models, Harlow initiated a systematic study of holographic quantum error correction~\cite{harlow2017ryu,Akers:2021fut} (for a pedagogical introduction to these ideas see~\cite{harlow2016jerusalem, harlow2018tasi, rath2020aspects}). Leveraging the operator algebra quantum error correction framework developed in~\cite{beny2007quantum, beny2007generalization, kribs2005unified, kribs2006operator},~\cite{harlow2017ryu} identified the conditions that make a quantum error-correcting code a good holographic code (that is, a code that reproduces the key features of the AdS/CFT correspondence).
In particular,~\cite{harlow2017ryu} showed that standard quantum error-correcting codes such as stabiliser codes~\cite{gottesman1997stabilizer, gottesman2010introduction} or subsystem codes~\cite{poulin2005stabilizer, bravyi2011subsystem} correct errors ``too well'' to give rise to good holographic codes. This statement can be made precise using the language of finite-dimensional von Neumann algebras, which we review in Section~\ref{sec:vonNeumann}. Consider the algebra of operators that can be reconstructed after the erasure of a region of the boundary: for a good holographic code this algebra is not a factor algebra. In this paper, we build on the formalism of Harlow to derive new properties and examples of holographic quantum error-correcting codes. Our contributions further sharpen our understanding of the relationship between bulk and boundary and give even simpler examples of holographic codes reproducing key features of the AdS/CFT correspondence. In particular: \begin{itemize} \item We give new ``atomic'' examples of holographic codes. The key feature of these examples is that they are based on quantum circuits with a minimal number of qubits rather than on the large tensor networks that have appeared in the literature \cite{pastawski2015holographic, donnelly2017living, hayden2016holographic, cao2020approximate}. By significantly reducing the complexity of the toy models, we hope to introduce a new tool to identify what features of error correcting codes enable the emergence of holographic states. \item We prove new properties of holographic quantum error-correcting codes. More specifically, we show that the code algebra is the unique von Neumann algebra satisfying complementary recovery (defined below). The proof is obtained by leveraging a connection between quantum error correction and quantum privacy~\cite{crann2016private, kribs2018quantum} which we believe is entirely novel in the context of holographic quantum error correction.
The uniqueness of the algebra shows that error correcting codes which satisfy complementary recovery are ``rigid'' in the sense that they are uniquely determined by the requirements of holography. \item We give a reformulation of key results in holographic quantum error correction which is aimed at experts in quantum information. This might be a desirable feature for researchers with a quantum information background who are venturing into the field and could give people already familiar with these ideas a new angle to think about related problems. \end{itemize} We give a brief presentation of key results from holographic quantum error correction in Section~\ref{sec:overview_HQEC} and a detailed overview of our contributions in Section~\ref{sec:overview_our_contributions}. The remainder of this paper is organised as follows. Section~\ref{sec:holography_background} gives an informal presentation of some of the central concepts in holography for a reader with no prior background on the subject. In Section~\ref{sec:vonNeumann}, we review some key facts about finite-dimensional von Neumann algebras. Section~\ref{sec:complementary_recovery} and Section~\ref{sec:examples} contain the bulk of our contributions. In Section~\ref{sec:complementary_recovery}, we give a reformulation of holographic quantum error correction and prove new properties of the code algebra, while in Section~\ref{sec:examples}, we construct several ``atomic'' examples of holographic codes using quantum circuits. We conclude in Section~\ref{sec:discussion} with some remarks on the main differences between our work and \cite{harlow2017ryu} and a list of open questions. This paper has three Appendices. In Appendix~\ref{app:privacy}, we review some key notions and results on quantum private systems. In Appendix~\ref{app:structure_lemma}, we give a new proof of the main theorem in~\cite{harlow2017ryu}.
In Appendix~\ref{app:2x2bacon-shor}, we give a full analysis of the holographic properties of the $2\times 2$ Bacon-Shor code. \subsection{Overview of holographic quantum error correction} \label{sec:overview_HQEC} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{holographic_quantities_gamma.pdf} \end{center} \caption{\label{fig:notation} Sketch of a holographic quantum error-correcting code in $2+1$ dimensions using our notation, indicating some of the terms in Table~\ref{table:Rosetta}. } \end{figure} In holography, we consider a bulk asymptotically-AdS space described by a Hilbert space $\mathcal{H}_L$, surrounded by a boundary CFT with a Hilbert space $\mathcal{H}$. The correspondence manifests via a holographic dictionary $V : \mathcal{H}_L \to \mathcal{H}$, which maps the state from the bulk into the boundary. See Figure~\ref{fig:notation}. The same setup cleanly maps to a quantum error-correcting code. We let $\mathcal{H}_L$ be the logical space, and $\mathcal{H}$ be the physical space. Then $V$ is an isometry that takes the data in $\mathcal{H}_L$ and encodes it in the physical space $\mathcal{H}$. Our goal is to concretely define what we mean for such a setup to exhibit ``holographic quantum error correction''. We will do this by taking an RT formula and writing it in the notation of quantum error correction. Then we can proceed to derive general properties of such an RT formula, and build specific examples of codes that exhibit one. Having these concrete examples can illuminate the relationship between bulk and boundary, and generally make AdS/CFT easier to understand. We begin with a classical RT formula. Say $A$ is a subregion of the boundary space, splitting the boundary into a bipartition $A$-$\bar A$. Then, the classical RT formula states\footnote{This is the version of the formula that holds in a \emph{static} geometry, i.e.\ one that can be described by a time-independent metric.
In a time-dependent geometry the extremization is more subtle, and is described by the maximin prescription \cite{Engelhardt:2014gca} for the HRT formula \cite{Hubeny:2007xt}. In particular, the geodesics are not confined to a fixed spatial slice of the bulk but instead live inside the \emph{entanglement wedge} of the boundary region; see Section \ref{sub:wedges} for further discussion. Furthermore, the minimization over geodesics should include only the geodesics homologous to $A$; see Footnote \ref{fn:homology} for a discussion.} that, in a holographic state corresponding to a (2+1)-dimensional classical bulk geometry: \begin{align} \text{entanglement across } A\text{-}\bar A \propto \min_{\gamma_{A}} \text{Area}(\gamma_{A}), \end{align} where $\gamma_A$ is a geodesic in the (negatively curved, gravitating) bulk whose endpoints are the same as those of $A$ on the boundary. In $d+1$ dimensions, the geodesic is replaced by a $(d-1)$-dimensional extremal surface ending on (and homologous to) the boundary subregion; `area' denotes a codimension-2 quantity, and hence it is actually a length for a (2+1)-dimensional bulk. The RT formula connects a geometrical quantity, an area, with a quantum-mechanical quantity: the entanglement entropy. (Readers who find the RT formula unfamiliar are invited to consult Section~\ref{sec:holography_background} for a more detailed exposition of the quantum-gravitational setting where the formula arises.) If the bulk is itself a quantum system that can be in a mixed state, then we must be more careful in defining the left-hand side of the equation: we only care about the entanglement entropy stemming from the holographic dictionary $V$, but not any entropy from the bulk degrees of freedom. Thus, we must subtract off the entropy from the bulk state. Say $\rho_L$ is a state in the bulk $\mathcal{H}_L$, and $\rho = V \rho_L V^\dagger$ is its encoded state on the boundary $\mathcal{H}$.
The subregion $A$ induces a factorization\footnote{In a conformal field theory this factorization may be subtle: to ensure the theory factorizes we can introduce edge modes \cite{Donnelly:2016auv}. Throughout this paper we will follow convention and assume without comment that the boundary theory does indeed factorize, which is already necessary to define the left-hand side of the RT formula.} of the boundary into $\mathcal{H} = \mathcal{H}_{A} \otimes \mathcal{H}_{\bar A}$. We can then say $\rho_{A}$ is the reduced state obtained by taking $\rho$ and tracing out $\mathcal{H}_{\bar A}$. Now we can phrase a quantum RT formula as: \begin{align} \text{entropy of } \rho_A - \text{entropy of } \rho_L \text{ visible from }A = \min_{\gamma_{A}} \text{Area}(\gamma_{A}). \end{align} We can define the entropy of $\rho_A$ via the von Neumann entropy $S(\rho_A)$. The other quantities are more challenging to define. The geometry itself may be a superposition, so that the area actually corresponds to an observable $L_A$ on the bulk Hilbert space $\mathcal{H}_L$. The area contribution to the RT formula is then the expectation $\langle L_A \rangle_{\rho_L} = \text{Tr}(\rho_L L_A)$. It can be reconstructed by a boundary observer given access to either subregion. The state $\rho_L$ describes the state of the bulk, and thus also captures the superposition over geometries. We are left with: \begin{align} S(\rho_A) = \text{entropy of } \rho_L \text{ visible from }A + \text{Tr}(\rho_L L_A). \end{align} To make the ``entropy of $\rho_L$ visible from $A$'' rigorous, we will need some tools from the quantum error correction literature. Our goal is to identify a collection of operators $\mathcal{M}$ on $\mathcal{H}_L$ that exactly capture what we can see given only access to the boundary subregion $A$. Then we can use this family of operators to define the entropy.
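In finite dimensions, the pipeline just described (encode a bulk state with $V$, then trace out $\mathcal{H}_{\bar A}$ to obtain $\rho_A$) is a short numerical computation. The following numpy sketch, with function names and a two-qubit toy dictionary of our own choosing (not a construction from the literature), illustrates the kind of calculation involved:

```python
import numpy as np

# A sketch of the encode-then-trace pipeline described above (function names
# and the toy dictionary below are our own, for illustration only).
def encode(V, rho_L):
    """rho = V rho_L V^dagger: the boundary state encoding the bulk state."""
    return V @ rho_L @ V.conj().T

def reduce_to_A(rho, dim_A, dim_Abar):
    """Partial trace over Abar, assuming the ordering H = H_A tensor H_Abar."""
    rho = rho.reshape(dim_A, dim_Abar, dim_A, dim_Abar)
    return np.einsum('ajbj->ab', rho)

# Toy dictionary V|i> = |i>_A |i>_Abar, applied to the pure bulk state |+>.
V = np.zeros((4, 2))
V[0, 0] = V[3, 1] = 1.0
rho_L = np.full((2, 2), 0.5)                 # |+><+|
rho_A = reduce_to_A(encode(V, rho_L), 2, 2)
# The bulk coherence (off-diagonals of rho_L) is invisible from A alone:
assert np.allclose(rho_A, np.eye(2) / 2)
```

The final assertion previews the role of the operators $\mathcal{M}$: only part of the bulk state is visible from $A$.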
Some language developed in the quantum error correction framework from \cite{beny2007quantum, beny2007generalization, kribs2005unified, kribs2006operator} is particularly useful for this purpose. These papers are concerned with what kinds of observables in $\mathcal{H}_L$ are affected by a general quantum error channel. Here we restrict their language to erasure errors, since we just want to erase the subregion $\bar A$. \begin{definition} Say $V:\mathcal{H}_L \to \mathcal{H}$ is an encoding isometry, and $A$ is a subregion that induces the factorization $\mathcal{H} = \mathcal{H}_{A} \otimes \mathcal{H}_{\bar A}$. We say a bulk operator $O_L \in \mathcal{L}(\H_L)$ is \textbf{correctable from $A$} if there is some boundary operator $O$ with support only on $A$ that lets us access $O_L$ via $V^\dagger O V = O_L$. If all the operators with support only on $A$ commute with $O_L$ after projection with $V^\dagger$, then the observable corresponding to $O_L$ cannot be measured from $A$. In this case we say $O_L$ is \textbf{private from $A$}. \end{definition} We are looking for a collection of operators $\mathcal{M}$ that exactly capture what degrees of freedom are visible from $A$. These are all correctable from $A$. To make sure we are not missing any operators, we want the `mirror image' of this condition to be true from $\bar A$. If $\mathcal{M}'$ is the set of all operators in the bulk that commute with $\mathcal{M}$ (also known as the `commutant' or the `centralizer of $\mathcal{M}$ in $\mathcal{L}(\H_L)$'), then we want $\mathcal{M}'$ to be correctable from $\bar A$. The center $Z_\mathcal{M} := \mathcal{M}\cap\mathcal{M'}$, which contains the area operator, is correctable from both regions.
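These definitions can be checked directly on a tiny example. The sketch below (our own illustration, not one of the codes constructed later in the paper) uses the two-qubit ``classical'' encoding $V\ket{i} = \ket{ii}$ with $A$ the first qubit: logical $Z$ is correctable from $A$, no operator supported on $A$ reconstructs logical $X$, and every projected $A$-operator lands in the diagonal algebra generated by $Z$, which is its own commutant in $\mathcal{L}(\H_L)$ and hence pure center:

```python
import numpy as np

# Minimal sketch (our own toy, not one of the paper's named examples): the
# two-qubit "classical" encoding V|i> = |ii>, with A = the first qubit.
V = np.zeros((4, 2))
V[0b00, 0] = V[0b11, 1] = 1.0

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def project(O):
    """V^dagger O V: a boundary operator as seen on the code subspace."""
    return V.conj().T @ O @ V

# Logical Z is correctable from A alone (and, by symmetry, from Abar):
assert np.allclose(project(np.kron(Z, I2)), Z)
# but no operator supported on A reconstructs logical X; indeed every
# projected A-operator is diagonal, so it commutes with Z_L:
assert np.allclose(project(np.kron(X, I2)), np.zeros((2, 2)))
rng = np.random.default_rng(0)
O_A = np.kron(rng.standard_normal((2, 2)), I2)
P = project(O_A)
assert np.allclose(P, np.diag(np.diag(P)))
# The operators correctable from A thus form the diagonal (abelian) algebra
# generated by Z: here M = M' = Z_M, the purely "classical" situation.
```

The same checks run with $A$ and $\bar A$ exchanged give the mirror-image statement, so this $\mathcal{M}$ satisfies the complementary recovery condition defined next.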
We call this condition `complementary recovery'\footnote{Holography experts: note that in classical holographic states, the equivalent of this condition is that access to a boundary subregion $A$ allows the reconstruction of bulk operators in (at least) the causal wedge of $A$, while access to the complement $\bar A$ allows the reconstruction of operators in the causal wedge of $\bar A$. See Figure \ref{fig:subregion}; although the union of these two causal wedges does not cover the entire spacetime, when $A$ is a \emph{spatial} subregion and the boundary state is pure it \emph{does} contain the entirety of a spatial slice of the bulk. (If the boundary state is mixed, the union won't cover an entire spatial slice; for example, there could be a black hole horizon beyond which the boundary-anchored geodesics will not penetrate.)}: \begin{definition} An encoding isometry $V$, a subregion $A$, and a collection of operators $\mathcal{M}$ satisfy \textbf{complementary recovery} if $\mathcal{M}$ is correctable from $A$, and $\mathcal{M}'$ is correctable from $\bar A$. \end{definition} If we can find such a collection of operators $\mathcal{M}$, then we have exactly captured the degrees of freedom in $\mathcal{H}_L$ that are visible from $A$. Now all that is left to do is to use $\mathcal{M}$ to define an entropy on $\rho_L$. It turns out that there is a very natural way of doing this if $\mathcal{M}$ is closed under multiplication, in which case $\mathcal{M}$ is a von Neumann algebra. In this case there is a natural generalization of the entanglement entropy called the `algebraic entropy' $S(\mathcal{M},\rho_L)$ (which we review in Section~\ref{sec:vonNeumann}). This finally lets us define what we mean by `entropy of $\rho_L$ visible from $A$', and state an RT formula in quantum mechanical language: \begin{align} S(\rho_A) = S(\mathcal{M},\rho_L) + \text{Tr}(\rho_L L_A).
\label{eqn:intro_rt_formula} \end{align} In fact, a key result of \cite{harlow2017ryu} is that the existence of a von Neumann algebra $\mathcal{M}$ satisfying complementary recovery implies that the code satisfies an RT formula. \begin{theorem}\label{thm:simple_complementaritytoRT}\textbf{From Theorem~5 of \cite{harlow2017ryu}.} Say an encoding isometry $V$, a subregion $A$, and a von Neumann algebra $\mathcal{M}$ satisfy complementary recovery. Then there is an area operator $L_A$ such that (\ref{eqn:intro_rt_formula}) holds. \end{theorem} A summary of the notation from this discussion is to be found in Table~\ref{table:Rosetta}. In Section~\ref{sec:vonNeumann} we give an introduction to von Neumann algebras and their properties. Then, in Section~\ref{sec:complementary_recovery} we present the above discussion in more mathematical detail and also outline some of our main results. \begin{table} \small \begin{center} \begin{tabular}{p{3cm}|p{4.2cm}|p{4.2cm}} \textbf{Symbol} & \textbf{Quantum Error Correction \mbox{Interpretation}} & \textbf{Holographic \mbox{Interpretation}} \\ \hline \hline $\mathcal{H}$ & physical space & boundary CFT \\ \hline $\mathcal{H}_{L}$ & logical space & bulk asymptotically-AdS space \\ \hline $V :\mathcal{H}_{L} \rightarrow \mathcal{H}$ & encoding isometry & AdS/CFT dictionary \\ \hline $\ket{\psi}_L$, $\rho_L$, $O_L$ & state, operator in the logical space $\mathcal{H}_L$ & state, operator in the bulk \\ \hline $\mathcal{H}_A$, where $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_{\bar A}$ & the region of $\mathcal{H}$ that remains after the erasure of $\bar A$ & a subregion of the boundary complementary to $\bar A$ \\ \hline $\ket{\psi}_A$, $\rho_A$, $O_A$ & state, operator in $\mathcal{H}_A$ & boundary state, operator in the $A$ subregion \\ \hline $\mathcal{M} $ & algebra of operators protected from the erasure of $\bar A$ & algebra of bulk operators in the entanglement wedge of $A$ \\ \hline $Z_\mathcal{M}$ & algebra of operators protected from the erasure of either
$A$ or $\bar A$ & bulk operators that can be reconstructed from either $A$ or $\bar A$ \\ \end{tabular} \end{center} \caption{\textbf{A Rosetta stone for symbols and their quantum error correction and holographic interpretations}. First column: main symbols used throughout the paper. Second column: interpretation of the symbol in the language of quantum error correction. Third column: holographic interpretation of the symbol.} \label{table:Rosetta} \end{table} \subsection{Overview of our contributions} \label{sec:overview_our_contributions} The central goal of our work is to build concrete examples and analysis techniques using the work of \cite{harlow2017ryu} as a starting point. We begin with some general results that aid in the analysis of holographic codes. First, we show that the von Neumann algebra of interest to holography is unique, and that there is a direct way of computing it. Then, we give several `atomic' examples of quantum error correction codes that manifest holographic features despite possessing only a few qubits. \begin{theorem} \label{thm:simple_vn_algebra}\textbf{What is the von Neumann algebra?} Say $V$ is an encoding isometry and $A$ is a subregion. If there exists a von Neumann algebra $\mathcal{M}$ such that $V,A,\mathcal{M}$ obey complementary recovery, then it is unique. Furthermore, let $\mathcal{M}$ be exactly the set of operators in the bulk that are correctable from $A$. If it is closed under multiplication, then it is the unique von Neumann algebra with complementary recovery. Otherwise, no such algebra exists. \end{theorem} This theorem is a direct consequence of a result from the quantum error correction literature: \begin{lemma} A von Neumann algebra $\mathcal{M}$ is correctable from $A$ if and only if it is private from $\bar A$. \end{lemma} The main idea is that complementary recovery restricts $\mathcal{M}$ from both sides: on the one hand $\mathcal{M}$ must be correctable from $A$ so it cannot contain too many operators.
But on the other hand, since $\mathcal{M}'$ is correctable from $\bar A$, we must have that $\mathcal{M}'$ is private from $A$. So $\mathcal{M}$ must be large enough that its commutant is small enough to be private from $A$. In particular, when $\mathcal{M}$ is correctable from $A$ there is then no proper subalgebra of $\mathcal{M}$ whose commutant is correctable from $\bar A$. We comment on the implications of this fact for bulk reconstruction in the Discussion. The uniqueness theorem implies a concrete `recipe' for analyzing quantum error-correcting codes and determining their RT formulae. In Section~\ref{sec:complementary_recovery} we present Theorem~\ref{thm:simple_vn_algebra}, as well as a mostly self-contained derivation of Theorem~\ref{thm:simple_complementaritytoRT}. With all these mathematical tools together, the section concludes with a series of step-by-step instructions for analyzing a quantum error-correcting code. In Section~\ref{sec:examples} we then practice this recipe on several examples. We construct these examples by building quantum circuits for the encoding isometry $V$. These examples are designed to flesh out the different terms of (\ref{eqn:intro_rt_formula}) to varying degrees of completeness. The section culminates in the example sketched in Figure~\ref{fig:full_example}. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{holographic_circuit.pdf} \end{center} \caption{\label{fig:full_example} A sketch of a quantum circuit with an RT formula. On the left side, $i, \alpha, j$ label three qubits in the bulk. The central degree of freedom $\alpha$ looks classical since it is visible from both $A$ and $\bar A$ via the CNOT. $\alpha$ also conditionally creates some of the entanglement across the bipartition via a Toffoli gate, which determines the area operator's eigenvalues. The bulk degrees of freedom $i,j$ are not encoded at all and are plainly visible from the boundary.
A full technical explanation of the encoding can be found in Sections~\ref{sec:complementary_recovery} and~\ref{sec:examples}. See also Example~\ref{ex:cql} in particular. } \end{figure} Here we give a brief non-technical summary of how the code in Figure~\ref{fig:full_example} generates an RT formula. Since the full explanation is fairly involved, we focus only on the intuition behind the features and leave the technical explanation for Sections~\ref{sec:complementary_recovery} and~\ref{sec:examples}. We will review in Section~\ref{sec:vonNeumann} that the algebraic entropy $S(\mathcal{M},\rho_L)$ naturally divides into two terms: a classical term $S_c$ and a quantum term $S_q$. \begin{align} S(\rho_A) = S_c + S_q + \text{Tr}(\rho_L L_A). \end{align} The degree of freedom $\alpha$ is copied via CNOT onto an extra qubit, so that $\alpha$ can be seen from both $A$ and $\bar A$. This essentially `measures' the $\alpha$ degree of freedom in the computational basis, and makes it look entirely classical. This is the classical part of the entropy $S_c$. On the other hand, the $i$ and $j$ degrees of freedom are not measured, and thus retain their coherence. Subregion $A$ cannot see the $j$ qubit, but it can see the $i$ qubit. Thus, the von Neumann entropy of the $i$ qubit forms the quantum part of the entropy $S_q$. Finally, some of the entanglement across $A$-$\bar A$ stems from the subcircuit involving the Toffoli gate, which connects $\alpha$ and the two boundary regions. This entropy forms the area term $\text{Tr}(\rho_L L_A)$ since it is completely independent of entanglement between bulk degrees of freedom. However, the generation of entanglement is conditional on the value of $\alpha$, so the area operator $L_A$ is actually $\ket{1}\bra{1}$ on the $\alpha$ subsystem. In this sense $\alpha$ is a bulk degree of freedom that indexes which geometry we are in. The examples in Section~\ref{sec:examples} are in some sense the simplest possible quantum error-correcting codes with non-trivial holographic properties.
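The mechanism of a conditional, $\alpha$-dependent area term can be checked numerically on an even smaller toy of our own devising (one bulk qubit, four boundary qubits; a simplification for illustration, not the circuit of Figure~\ref{fig:full_example}): $\alpha$ is copied to both sides of the cut, and a Bell pair is shared across $A$-$\bar A$ only when $\alpha = 1$, so $L_A = \ket{1}\bra{1}$ and there are no $i,j$ qubits, i.e. $S_q = 0$:

```python
import numpy as np

# A stripped-down analogue of the mechanism above (our own toy; the qubit
# layout and names are ours, not the paper's): one bulk qubit alpha, four
# boundary qubits with A = qubits (1,2) and Abar = qubits (3,4). alpha is
# copied classically to both sides, and when alpha = 1 a Bell pair is shared
# across the A-Abar cut, so the area operator is L_A = |1><1| on alpha.
V = np.zeros((16, 2))
V[0b0000, 0] = 1.0                        # V|0> = |0>_1 |0>_2 |0>_3 |0>_4
V[0b1010, 1] = V[0b1111, 1] = 2 ** -0.5   # V|1> = |1>_1 |1>_3 (|00>+|11>)_{24}/sqrt(2)

def S(rho):
    """von Neumann entropy in bits."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

p = 0.3                                   # bulk state sqrt(p)|0> + sqrt(1-p)|1>
psi = V @ np.array([np.sqrt(p), np.sqrt(1 - p)])
M = psi.reshape(4, 4)                     # rows: qubits in A, cols: qubits in Abar
rho_A = M @ M.conj().T                    # reduced state on A

S_c = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # classical (center) term
area = 1 - p                              # Tr(rho_L L_A), with L_A = |1><1|
assert np.isclose(S(rho_A), S_c + area)   # RT formula, with S_q = 0 in this toy
```

The copy of $\alpha$ across the cut decoheres the bulk superposition (giving $S_c$), while the conditional Bell pair contributes exactly $\operatorname{Tr}(\rho_L L_A)$ bits of entanglement, mirroring the decomposition above.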
Their purpose is primarily to serve as pedagogical examples for understanding holographic quantum error correction. However, there are many possible future directions, some of which we elaborate on in the Discussion below. An obvious direction is to try to use these examples as building blocks for a tensor network that supports superpositions of geometries. This is an idea that tensor network models seem to struggle to capture. Other possibilities include trying to find an example in which $\alpha$ is not a separate degree of freedom, or adding dynamics. \section*{Acknowledgements} We thank Scott Aaronson, Elena Caceres, Charles Cao, William Kretschmer, Kunal Marwaha, Alex May, Frank Schindler, Haoyu Sun, and Yuxuan Zhang for detailed comments on the manuscript. We thank Mario Martone for his comments and for his participation in an initial phase of this project. AR and JP are supported by the Simons Foundation through It from Qubit: Simons Collaboration on Quantum Fields, Gravity, and Information. PR is supported by Scott Aaronson's Vannevar Bush Faculty Fellowship. \section{A proof of a special case of the factorization lemma} \label{app:structure_lemma} We give a proof of Lemma~\ref{lemma:factorization} for the factor algebra $\mathcal{M}=\mathcal{L}(\mathcal{H}_L)$. Our proof is similar to the ones given in~\cite[Section 3.2]{almheiri2015bulk} and~\cite[Section 3.1]{harlow2017ryu} and, like the proofs given in these works, is based on a technique developed in~\cite{schumacher1996quantum} to prove that the presence of entanglement in a code is a necessary and sufficient condition for perfect quantum error correction. More specifically, \cite{schumacher1996quantum} considers a setting where the system $A$ is entangled with a reference system $R$ and shows that perfect quantum error correction (i.e.
the ability to recover the logical information after the erasure of $\bar A$) is possible if and only if \begin{equation} \label{eq:entropic_correction_condition} I_{R\bar A} = S_R + S_{\bar A} - S_{R\bar A} = 0, \end{equation} where $I_{R\bar A}$ is the mutual information of the composite system ${R\bar A}$ and $S$ denotes the von Neumann entropy. Let $\ket{\phi}$ be a pure state on the $R A \bar A$ system. Then \eqref{eq:entropic_correction_condition} implies that \begin{equation} \label{eq:reduced_factorization} \rho_{R \bar A}[\phi]=\rho_{R}[\phi] \otimes \rho_{\bar A}[\phi], \end{equation} where $\rho_{R \bar A}[\phi] = \operatorname{Tr}_{A} (\ket{\phi} \bra{\phi})$, $\rho_{R}[\phi]=\operatorname{Tr}_{A \bar A} (\ket{\phi} \bra{\phi})$, and $\rho_{ \bar A}[\phi]=\operatorname{Tr}_{R A} (\ket{\phi} \bra{\phi})$ denote the reduced density matrices for the state $\ket{\phi}$. For the proof of Lemma~\ref{lemma:factorization} when $\mathcal{M}$ is a factor algebra $\mathcal{M} = \mathcal{L}(\mathcal{H}_L)$, consider a reference system $R$ maximally entangled with the encoded logical system (this is the Choi state for $V$) \begin{equation} \label{eq:Choi} \ket{\phi}=2^{-k / 2} \sum_{i}\ket{i}_{R} (V\ket{i})_{A\bar A}, \end{equation} where $|R| = |\mathcal{H}_L| = 2^k$. Observe that $\ket{\phi}$ is a purification of $\rho_{R \bar A}[\phi]$ on $A$. Because $\ket{\phi}$ is maximally entangled we have that $\rho_{R}[\phi] = \frac{I}{2^{k}}$ is the maximally mixed state. Therefore \eqref{eq:reduced_factorization} becomes \begin{equation} \label{eq:product_maxmixed} \rho_{R \bar A}[\phi]=\frac{I}{2^{k}} \otimes \rho_{\bar A}[\phi]. \end{equation} Say $m$ is the largest integer such that $|A| = m |R| + r$ with $0 \leq r < |R|$. Then there exists a factorization $\H_A = (\H_{A_1} \otimes \H_{A_2}) \oplus \H_{A_3}$ such that $|A_1| = |R|$, $|A_2| = m$ and $|A_3| = r$.
Now define the following states: \begin{equation} \ket{\Psi}_{R A_1} = \frac{1}{2^{k / 2}} \sum_{i}\ket{i}_{R} \ket{i}_{A_1}, \quad \ket{\chi}_{A_2 \bar A} = \sum_j \sqrt{p_j} \ket{j}_{A_2} \ket{j}_{\bar A}, \end{equation} and observe that the state \begin{equation} \ket{\phi ^\prime} = \ket{\Psi}_{R A_1} \otimes \ket{\chi}_{A_2 \bar A}, \end{equation} is a purification of $\rho_{R\bar A}[\phi]$: \begin{align} \operatorname{Tr}_{A_1 A_2} \left( \ket{\Psi}\bra{\Psi}_{R A_1} \otimes \ket{\chi} \bra{\chi}_{A_2 \bar A} \right) &= \operatorname{Tr}_{A_1} \left( \ket{\Psi} \bra{\Psi}_{R A_1} \right) \otimes \operatorname{Tr}_{A_2} \left( \ket{\chi} \bra{\chi}_{A_2 \bar A} \right) \\ &= \rho_{R}[\phi] \otimes \rho_{\bar A}[\phi], \end{align} where $\ket{\Psi}_{R A_1}$ purifies $\rho_R[\phi]$ in $A_1$ and $\ket{\chi}_{A_2 \bar A}$ purifies $\rho_{\bar A} [\phi]$ in $A_2$. Note that such a factorization exists because the $R$ and $\bar A$ registers are unentangled in \eqref{eq:product_maxmixed}. In a purification the dimension of the purifying system needs to be at least as big as the rank of the state to be purified, and therefore we have that $ |A_1| = |R| = 2^{k}$ (because $\rho_R [\phi]$ is maximally mixed) and $\operatorname{rank}(\rho_{\bar A}[\phi]) \leq |A_2|$. Because all purifications are equivalent up to unitaries performed on the purifying system ($A$, in our case) we know that there exists a unitary $U_A$ acting solely on the subsystem $A$ that maps $\ket{\phi^\prime}$ to $\ket{\phi}$. Therefore we have that \begin{equation} (U_A \otimes I_{\bar A} )V\ket{i}_L = \ket{i}_{A_1} \ket{\chi}_{A_2 \bar A}. \end{equation} \section{Finite-dimensional von Neumann algebras} \label{sec:vonNeumann} In Section~\ref{sec:overview_HQEC}, we discussed how the appropriate language to analyze the entropy contributions arising from the bulk degrees of freedom is that of von Neumann algebras.
In this section, we review some basic notions from the theory of von Neumann algebras (a special case of the more general $C^*$-algebras). Although it is common to study von Neumann algebras over infinite-dimensional Hilbert spaces (and it is in this case that they have proved most useful) we only consider the finite-dimensional case, which is the most relevant for our purposes. Unless otherwise specified, when we use the term von Neumann algebra we always refer to a von Neumann algebra over a finite-dimensional Hilbert space. The content of this section is mostly based on the presentation given in~\cite{harlow2017ryu} which in turn draws from the lecture notes of Jones~\cite{jones2003neumann}. An \emph{algebra} over a field is a set which is closed under scalar multiplication, addition and multiplication, and for which there exists a unit element. Von Neumann algebras are algebras of linear operators acting over a complex Hilbert space with the additional property of closure under complex conjugation. More specifically, we have that \begin{definition}[von Neumann algebra] Let $\mathcal{L}(\mathcal{H})$ be the set of linear operators over a finite-dimensional complex Hilbert space $\mathcal{H}$. A von Neumann algebra is a subset $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ which is closed under: \begin{itemize} \item (addition) if $A, \, B \in \mathcal{M}$ then $A + B \in \mathcal{M}$; \item (multiplication) if $A, \, B \in \mathcal{M}$ then $AB \in \mathcal{M}$; \item (scalar multiplication) if $A \in \mathcal{M}$ and $c \in \mathbb{C}$ then $cA \in \mathcal{M}$; \item (complex conjugation) if $A \in \mathcal{M}$ then $A^\dagger \in \mathcal{M}$; \end{itemize} and for which there exists an element $I \in \mathcal{M}$ such that for every $A \in \mathcal{M}$ we have $IA = A$. 
\end{definition} From now on, whenever we use the term algebra we always assume that the algebra is a finite-dimensional von Neumann algebra (sometimes, when extra care is required, we still write the full name explicitly). We often define a von Neumann algebra through its generators using the following notation $\mathcal{M} = \langle A, B, \dots \rangle_{\mathrm{vN}}$, where the angle brackets denote the algebra generated by some operators $A, B, \dots$. Note that the von Neumann algebra generated by a set of operators is different from the group generated by a set of operators, as the latter does not have addition and scalar multiplication operations. Because in quantum error correction it is customary to use the angle bracket notation to define the group generated by a set of operators, we chose to adopt the bulkier notation $\langle \dots \rangle_{\mathrm{vN}}$ for von Neumann algebras. So, for example, $\langle Z \rangle_{\mathrm{vN}}$ is the set of all $2 \times 2$ diagonal matrices ($X,Y,Z$ denote the Pauli matrices) and $\langle Z,X \rangle_{\mathrm{vN}} = \mathcal{L}(\mathbb{C}^2)$. \begin{example} Consider the von Neumann algebra $\mathcal{M} = \langle ZII, IXI, IZI \rangle_{\mathrm{vN}}$ over $\mathcal{H}= \mathbb{C}^{8}$, where $X,Z$ are Pauli operators. \end{example} There are three fundamental notions in the study of von Neumann algebras: commutant, center, and factor. The commutant $\mathcal{M}^{\prime}$ is the set of operators which commute with every element of $\mathcal{M}$. The commutant itself forms a von Neumann algebra. \begin{definition}[commutant] Given a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ the commutant is the set \begin{equation} \mathcal{M}^{\prime} \equiv \left \{ B \in \mathcal{L}(\mathcal{H}) \mid \forall A \in \mathcal{M}: AB = BA \right \}. \end{equation} \end{definition} Double commutation (or bicommutation) leaves a von Neumann algebra unchanged. This important property is known as the bicommutant theorem.
\begin{theorem}[bicommutant] For every von Neumann algebra $\mathcal{M}\subseteq \mathcal{L}(\mathcal{H})$ we have that \begin{equation} \mathcal{M}^{\prime \prime} \equiv (\mathcal{M}^{\prime})^\prime = \mathcal{M}. \end{equation} \end{theorem} The center is the set of elements of an algebra that commute with every element of the algebra. \begin{definition}[center] Given a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ the center $Z_{\mathcal{M}}$ is the set \begin{equation} Z_{\mathcal{M}}\equiv \mathcal{M} \cap \mathcal{M}^{\prime}. \end{equation} \end{definition} An algebra whose center is trivial (i.e. consists only of multiples of the identity) is known as a factor. \begin{definition}[factor] A von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\mathcal{H})$ is a factor if its center satisfies \begin{equation} Z_{\mathcal{M}} = \langle I\rangle_\text{vN} \equiv \{z I\hspace{1mm}|\hspace{1mm} z \in \mathbb{C}\}. \end{equation} \end{definition} \subsection{Classification of von Neumann algebras} \label{thm:Wedderburn} The classification theorem shows that any von Neumann algebra can be decomposed as a direct sum of factors.
\begin{theorem}[classification theorem] For every von Neumann algebra $\mathcal{M}$ on a finite-dimensional Hilbert space $\mathcal{H}$ there exists a block decomposition of the Hilbert space \begin{equation} \mathcal{H} = \left[ \oplus_{\alpha} \left( \mathcal{H}_{A_\alpha} \otimes \mathcal{H}_{\bar{A}_\alpha} \right) \right] \oplus \mathcal{H}_0 \end{equation} such that \begin{align} \label{eq:Wedderburn} \mathcal{M} = \left[ \oplus_{\alpha} \left( \mathcal{L}(\mathcal{H}_{A_\alpha}) \otimes I_{\bar{A}_\alpha} \right) \right] \oplus 0, \\ \mathcal{M}^{\prime} = \left[ \oplus_{\alpha} \left( I_{A_\alpha} \otimes \mathcal{L}(\mathcal{H}_{\bar{A}_\alpha} ) \right) \right] \oplus 0, \\ Z_\mathcal{M} = \oplus_{\alpha} \left( c_{\alpha} I_{A_\alpha} \otimes I_{\bar{A}_\alpha} \right), \end{align} where $\mathcal{H}_0$ is the null space and $0$ is the zero operator on $\mathcal{H}_0$. For simplicity, whenever we write a decomposition of an algebra (Hilbert space), we no longer write the direct sum with the null space (zero operator). The decomposition in \eqref{eq:Wedderburn} is known as the Wedderburn decomposition. \end{theorem} Note that, in order to denote the different blocks in the sum, we adopt the heavy notation $\H_{A_\alpha}$---and not the simpler $\H_{\alpha}$---to ensure consistency with the notation of Lemma~\ref{lemma:factorization}. In that case, the letter $A$ is used to denote a partition of the Hilbert space. In this section the letter $A$ has no other meaning but to denote one of the two factors of a block. We now proceed to give a series of examples of increasing generality of the classification theorem. We begin with the special case of a factor algebra over $\mathcal{H}$ that is equivalent to $\mathcal{L}(\mathcal{H})$. 
\begin{example} The von Neumann algebra over $\mathcal{H} = \mathbb{C}^2$ with Wedderburn decomposition \begin{equation} \mathcal{M} = \mathcal{L}(\mathbb{C}^2) \otimes 1 = \mathcal{L}(\mathbb{C}^2) = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, \end{equation} where $a, \dots, d \in \mathbb{C}$, is a factor. The commutant of the algebra is $\mathcal{M}^\prime = \langle I\rangle_\text{vN}$. \end{example} The following is an example of a factor algebra over $\mathcal{H}$ that is strictly contained in $\mathcal{L}(\mathcal{H})$. \begin{example} The von Neumann algebra over $\mathcal{H} = \mathbb{C}^4$ with Wedderburn decomposition \begin{equation} \mathcal{M} = \mathcal{L}(\mathbb{C}^2) \otimes I = \begin{bmatrix} a & 0 & b & 0 \\ 0 & a & 0 & b \\ c & 0 & d & 0 \\ 0 & c & 0 & d \end{bmatrix}, \end{equation} where $a, \dots, d \in \mathbb{C}$, is a factor. The commutant of the algebra is $\mathcal{M}^\prime = I \otimes \mathcal{L}(\mathbb{C}^2)$. \end{example} Finally, we give two examples of algebras that are not factors. The first is a fully diagonal algebra (and, in the language of quantum mechanics, can be thought of as describing a classical algebra of observables) while the second has a block-diagonal structure (thus describing a quantum algebra of observables). For many more examples of von Neumann algebras, Wedderburn decompositions, and their relationship to coarse-graining and decoherence the interested reader can consult \cite{Kabernik:2019jko}. \begin{example} The von Neumann algebra over $\mathcal{H} = \mathbb{C}^2$ generated by the Pauli $Z$ operator has the following Wedderburn decomposition \begin{equation} \mathcal{M} = \langle Z \rangle_{\mathrm{vN}} = \begin{bmatrix} a & 0 \\ 0 & b \end{bmatrix} = \left[ \mathcal{L}(\mathbb{C})\otimes 1 \right] \oplus \left[ \mathcal{L}(\mathbb{C})\otimes 1 \right], \end{equation} where $a,b \in \mathbb{C}$.
\end{example} \begin{example} \label{example:Wedderburn} The von Neumann algebra $\mathcal{M} = \langle ZII, IXI, IZI \rangle_{\mathrm{vN}}$ over $\mathcal{H}= \mathbb{C}^{8}$ has the following Wedderburn decomposition \begin{equation} \mathcal{M} = \bigoplus_{\alpha=0}^{1} \left(\mathcal{L}(\mathbb{C}^2) \otimes I \right) = \begin{pmatrix} \begin{matrix} a & 0 & b & 0 \\ 0 & a & 0 & b \\ c & 0 & d & 0 \\ 0 & c & 0 & d \end{matrix} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \mbox{\normalfont\Large\bfseries 0} \\ \hline \mbox{\normalfont\Large\bfseries 0} & \hspace*{-\arraycolsep}\vline\hspace*{-\arraycolsep} & \begin{matrix} e & 0 & f & 0\\ 0 & e & 0 & f \\ g & 0 & h & 0\\ 0 & g & 0 & h \end{matrix} \end{pmatrix}, \end{equation} where $a, \dots, h \in \mathbb{C}$. \end{example} \subsection{Algebraic states and entropies} \label{sec:algebraic_states} A quantum state on a Hilbert space $\mathcal{H}$ is a Hermitian positive semi-definite operator $\rho \in \mathcal{L}(\mathcal{H})$ with $\operatorname{Tr}(\rho) =1$. Given a state $\rho$ and a Hermitian operator $O$ we can define the expectation value of the operator $O$ on $\rho$ as \begin{equation} \mathbb{E}_\rho (O) = \operatorname{Tr} (O \rho). \end{equation} It is often the case that one is interested in computing expectation values of operators that form an algebra $\mathcal{M}$. A generic state $\rho$ is not necessarily an element of $\mathcal{M}$ and could contain more information than is needed to compute expectation values of operators in $\mathcal{M}$. It is therefore useful to define the notion of an algebraic state---that is, the state that is ``visible'' from an algebra $\mathcal{M}$. For an algebra $\mathcal{M}$ and quantum state $\rho$ we denote the corresponding algebraic state by $\rho_\mathcal{M}$.
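Before turning to algebraic states, the commutant and bicommutant computations of the previous subsection can be checked numerically. The sketch below (Python with numpy is our own assumption; the paper itself contains no code) finds a basis of the commutant of a set of generators by solving the linear system $[B, G] = 0$, and verifies that $\langle Z \rangle_{\mathrm{vN}}$ has a two-dimensional commutant, that $\langle Z, X \rangle_{\mathrm{vN}}$ is a factor, and that the bicommutant returns a two-dimensional algebra, $\mathcal{M}'' = \mathcal{M} = \langle Z \rangle_{\mathrm{vN}}$.

```python
import numpy as np

def commutant_basis(gens):
    """Basis of {B : [B, G] = 0 for every generator G}."""
    n = gens[0].shape[0]
    # Column-major vec identity: vec(GB - BG) = (I (x) G - G^T (x) I) vec(B).
    K = np.vstack([np.kron(np.eye(n), G) - np.kron(G.T, np.eye(n))
                   for G in gens])
    _, s, vh = np.linalg.svd(K)
    rank = int(np.sum(s > 1e-10))
    # Rows of vh beyond the rank span the null space of K.
    return [vh[i].conj().reshape((n, n), order='F') for i in range(rank, n * n)]

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

print(len(commutant_basis([Z])))      # 2: the diagonal matrices
print(len(commutant_basis([Z, X])))   # 1: multiples of the identity (a factor)
print(len(commutant_basis(commutant_basis([Z]))))  # 2: bicommutant gives back <Z>_vN
```

The same routine applies to the three-qubit algebra of the example above by feeding it the $8\times 8$ generator matrices.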
The following theorem shows that algebraic states are unique and that, for the purpose of computing expectation values of operators in $\mathcal{M}$, we can always replace $\rho$ by $\rho_\mathcal{M}$. That is, the algebraic state is a generalization of the reduced density matrix for an algebra which need not be a factor. \begin{theorem} \label{thm:expectations_vN} Let $\mathcal{M}$ be a von Neumann algebra on $\mathcal{H}$ and let $\rho \in \mathcal{L}(\mathcal{H})$ be a quantum state. Then, there exists a unique state $\rho_\mathcal{M} \in \mathcal{M}$ such that \begin{equation} \operatorname{Tr}(O \rho_\mathcal{M}) = \operatorname{Tr}(O \rho) \end{equation} for all $O \in \mathcal{M}$. \end{theorem} For an algebra $\mathcal{M}$ and state $\rho$ it is possible to write an explicit formula for the algebraic state $\rho_\mathcal{M}$. Recall that by Theorem~\ref{thm:Wedderburn} there exists a decomposition of the Hilbert space \begin{equation} \label{eq:Hdec} \mathcal{H} = \oplus_{\alpha} \left( \mathcal{H}_{A_\alpha} \otimes \mathcal{H}_{\bar{A}_\alpha} \right), \end{equation} in terms of which we can write the Wedderburn decomposition of the algebra \begin{equation} \mathcal{M} = \oplus_{\alpha} \left( \mathcal{L}(\mathcal{H}_{A_\alpha}) \otimes I_{\bar{A}_\alpha} \right). \end{equation} Let $\{\ket{\alpha,i,j}\}$ be an orthonormal basis for $\mathcal{H}$ that is ``compatible with $\mathcal{M}$'', that is, $\alpha$ enumerates the diagonal blocks and within each block we have $\ket{\alpha,i,j} = \ket{i_\alpha}_{A_{\alpha}} \otimes \ket{j_\alpha}_{{\bar A_\alpha}}$ where $\{\ket{i_\alpha}_{A_{\alpha}} \}$ and $\{ \ket{j_\alpha}_{\bar A_\alpha} \}$ are orthonormal bases for $\H_{A_\alpha}$ and $\H_{\bar A_\alpha}$ respectively.
Any state $\rho$ can be written in terms of the Hilbert space decomposition of \eqref{eq:Hdec} as \begin{equation} \rho = \sum_{\alpha,\alpha'}\sum_{i,j}\sum_{i',j'} \rho[\alpha,\alpha']_{i,j,i',j'} \ket{\alpha,i,j}\bra{\alpha',i',j'}, \end{equation} where $\{\ket{\alpha,i,j} \}$ is the compatible basis introduced above. Because, for the purpose of computing expectation values of elements of $\mathcal{M}$, only the blocks that are diagonal in $\alpha$ give non-zero contributions, we may set $\rho[\alpha,\alpha'] = 0$ for all $\alpha \neq \alpha'$. For computational purposes, it is then useful to define the blocks of $\rho$ that are diagonal in $\alpha$ as \begin{equation} \rho_{A_\alpha} \equiv \frac{1}{p_\alpha} \operatorname{Tr}_{\bar A_\alpha} (\rho[\alpha]), \end{equation} where $p_\alpha \equiv \sum_{i,j} \rho [\alpha, \alpha] _{i,j,i,j}$ is a positive normalization constant such that $\operatorname{Tr}_{ A_\alpha} (\rho_{A_\alpha}) = 1$ and $\rho[\alpha] \equiv \rho[\alpha, \alpha]$ is the part of $\rho$ which is in the $\alpha$-block. Using this notation we can write the algebraic state $\rho_\mathcal{M}$ as \begin{equation} \label{eq:algebraic_state} \rho_\mathcal{M} \equiv \oplus_\alpha \left( p_\alpha \rho_{A_\alpha} \otimes \frac{I_{\bar{A}_\alpha}}{|\bar{A}_\alpha|}\right). \end{equation} From \eqref{eq:algebraic_state} we can see that when $\mathcal{M}$ is a factor the von Neumann entropy of the algebraic state is equivalent to the von Neumann entropy of the reduced state $\rho_A = \operatorname{Tr}_{\bar A} (\rho)$. This suggests the following generalization of the von Neumann entropy for a general quantum state $\rho$ and an arbitrary algebra $\mathcal{M}$: \begin{definition}\textbf{Algebraic entropy.} Let $\mathcal{M}$ be an arbitrary von Neumann algebra on $\mathcal{H}$ and let $\rho$ be a quantum state.
The algebraic entropy of $\rho$ with respect to $\mathcal{M}$ is \begin{equation} \label{eq:algebraic_entropy} S(\rho, \mathcal{M}) \equiv-\sum_{\alpha} \operatorname{Tr}_{A_{\alpha}}\left(p_{\alpha} \rho_{A_{\alpha}} \log \left(p_{\alpha} \rho_{A_{\alpha}}\right)\right)=-\sum_{\alpha} p_{\alpha} \log p_{\alpha}+\sum_{\alpha} p_{\alpha} S\left(\rho_{A_{\alpha}}\right), \end{equation} where $S\left(\rho_{A_{\alpha}}\right) \equiv - \operatorname{Tr}_{A_\alpha} (\rho_{A_\alpha} \log \rho_{A_\alpha})$ is the von Neumann entropy of the reduced state $\rho_{A_\alpha}$. \end{definition} Note that when $\mathcal{M}$ is a factor the algebraic entropy reduces to the standard von Neumann entropy (i.e. the classical term $-\sum_{\alpha} p_{\alpha} \log p_{\alpha}$ vanishes). The definition in \eqref{eq:algebraic_entropy} has two terms: a \emph{classical} term arising from the uncertainty over which block of the Wedderburn decomposition the state is in, and a \emph{quantum} term associated to the standard von Neumann entropies over the blocks. \begin{example} Consider the von Neumann algebra of Example~\ref{example:Wedderburn}. The algebra has two diagonal blocks denoted by $\alpha = 0,1$. Consider the $3$-qubit GHZ state $\ket{\Psi} = 2^{-1/2} (\ket{000} + \ket{111})$. We have that \begin{equation} \rho_{A_0} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad \rho_{A_1} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \end{equation} and $p_0 = 1/2$, $p_1 = 1/2$. The algebraic entropy of the state is \begin{equation} S(\ket{\Psi}\bra{\Psi}, \mathcal{M}) = 1. \end{equation} \end{example} \section{The 2x2 Bacon-Shor code} \label{app:2x2bacon-shor} Ref.~\cite{cao2020approximate} presents a construction of holographic tensor networks using the 2x2 Bacon-Shor code. This four-qubit stabilizer subsystem code can be shown to have simple holographic properties.
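The group-theoretic statements made below about the gauge group $\mathcal{G}$ of this code (the generators do not all commute, while $X_1X_2X_3X_4$ and $Z_1Z_2Z_3Z_4$ lie in the center) are easy to confirm numerically. The following Python sketch (our own illustrative check, not part of the construction, with qubits indexed $0$--$3$) does so with explicit $16\times 16$ matrices.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def pauli(ops):
    """Tensor product over 4 qubits; ops maps qubit index -> 1-qubit matrix."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(4)])

# Gauge-group generators of the 2x2 Bacon-Shor code: X1X2, X3X4, Z1Z3, Z2Z4.
gens = [pauli({0: X, 1: X}), pauli({2: X, 3: X}),
        pauli({0: Z, 2: Z}), pauli({1: Z, 3: Z})]

def commutes(A, B):
    return np.allclose(A @ B, B @ A)

# The group is non-abelian: X1X2 and Z1Z3 overlap on a single qubit.
print(commutes(gens[0], gens[2]))  # False

# XXXX and ZZZZ commute with every generator, so they lie in the center.
XXXX = pauli({q: X for q in range(4)})
ZZZZ = pauli({q: Z for q in range(4)})
print(all(commutes(XXXX, g) and commutes(ZZZZ, g) for g in gens))  # True
```

Note that $XXXX$ and $ZZZZ$ are themselves products of the generators, so they belong to the group and hence to its center.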
The authors of~\cite{cao2020approximate} find that, via a notion of `skewing', which involves taking linear combinations of several encoding isometries, they can construct quantum error-correcting codes with a non-trivial RT formula. For ease of comparison to their work, we review some of their calculations in our language. We rederive that, while the 2x2 Bacon-Shor code is holographic, its area operator is proportional to the identity and its RT formula is trivial. This demonstrates that some notion of `skewing' is necessary to obtain non-trivial RT formulas from this code. Before we talk about the Bacon-Shor code, we derive a result about stabilizer codes in general. \begin{lemma} \label{lemma:stabm} Say $G$ is an abelian subgroup of the $n$-qubit Pauli group, defining a stabilizer code. Say $V_G : \H_L \to \H$ is an encoding isometry of this code. Say $A$ is any subset of the $n$ qubits, decomposing $\H = \H_A \otimes \H_{\bar A}$. Then $\mathcal{M} := V_G^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A})V_G $ is a von Neumann algebra, and $(V_G,A,\mathcal{M})$ satisfy complementary recovery. \end{lemma} \begin{proof} All we need to show is that $V_G^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A})V_G $ is closed under multiplication, since the other properties of von Neumann algebras are always guaranteed. Then Theorem~\ref{thm:whatalgebra} implies complementary recovery. To do so, we observe that $\mathcal{L}(\mathbb{C}^2) = \langle X,Z \rangle_\text{vN}$, which lets us write: \begin{align} \mathcal{L}(\H_A) \otimes I_{\bar A} = \langle X_i, Z_i \text{ for } i \in A \rangle_\text{vN}. \end{align} Conjugating by $V_G^\dagger$ projects each Pauli matrix in the above set onto the code space, and then decodes it. Note that a Pauli matrix preserves the code space if and only if it commutes with the stabilizer $G$.
We write: \begin{align} \mathcal{M} = V_G^\dagger (\mathcal{L}(\H_A) \otimes I_{\bar A})V_G = \left\{ V_G^\dagger P V_G \text{ for } P \in \langle X_i, Z_i \text{ for } i \in A \rangle_\text{vN} \text{ if } PG = GP \right\}. \end{align} Now all that remains to show is that the above set is closed under multiplication. Write the code space projector as $\Pi_G = V_G V_G^\dagger \in \mathcal{L}(\H)$, and consider two elements $V_G^\dagger P V_G$ and $V_G^\dagger Q V_G$ in the above set. Then, since $P$ and $\Pi_G$ commute: \begin{align} V_G^\dagger P V_G V_G^\dagger Q V_G = V_G^\dagger P \Pi_G Q V_G = V_G^\dagger\Pi_G P Q V_G = V_G^\dagger P Q V_G. \end{align} $PQ$ is in $\langle X_i, Z_i \text{ for } i \in A \rangle_\text{vN}$ since it is a von Neumann algebra by definition, and furthermore since $P$ and $Q$ both commute with $G$, so must $PQ$. So $ V_G^\dagger P Q V_G$ is also in $\mathcal{M}$. \end{proof} This establishes the fact that situations like Example~\ref{ex:badcode} cannot happen with stabilizer codes, but also gives a simple method for computing $\mathcal{M}$. Now we discuss the 2x2 Bacon-Shor code. Recall from subsystem quantum error correction that a subsystem code is defined by a (generally non-abelian) gauge group of Pauli operators $\mathcal{G}$. In this case: \begin{align} \mathcal{G} := \langle X_1X_2, X_3X_4, Z_1Z_3, Z_2Z_4 \rangle. \end{align} Non-abelian Pauli groups do not have a simultaneous $+1$ eigenspace. However, we can construct several abelian Pauli groups from $\mathcal{G}$. One of these is its center: \begin{align} Z_\mathcal{G} = \langle X_1X_2X_3X_4, Z_1Z_2Z_3Z_4 \rangle.
\end{align} This is by definition abelian, and has an encoding isometry: \begin{align} V_{Z_\mathcal{G}} := \hspace{8mm} \begin{array}{c}\Qcircuit @C=.8em @R=.8em { & \ctrl{2} & \qw & \gate{H} & \ctrl{1} & \qw \\ & \qw & \ctrl{2} & \qw & \targ & \qw \\ \lstick{\ket{0}} & \targ & \qw & \gate{H} & \ctrl{1} & \qw \\ \lstick{\ket{0}} & \qw & \targ & \qw & \targ & \qw }\end{array} \end{align} Evidently, $Z_\mathcal{G}$ defines a quantum error-correcting code with two logical qubits. This code has a von Neumann algebra $\mathcal{M}^{Z_\mathcal{G}}$ corresponding to the subregion $A$. We can restrict this code further by selecting a `gauge': an operator $P \in \mathcal{G}$ that is not in the center, $P \not\in Z_\mathcal{G}$. Then, the abelian group $\langle P , Z_\mathcal{G} \rangle$ defines a stabilizer code with just one logical qubit. We can construct an encoding isometry for $\langle P , Z_\mathcal{G} \rangle$ by computing the two-qubit Pauli matrix $P_L := V_{Z_\mathcal{G}}^\dagger P V_{Z_\mathcal{G}}$ and then constructing a two-qubit isometry $V_{P_L}$ whose image is the $+1$ eigenspace of $P_L$ (so that $P_L V_{P_L} = V_{P_L}$). Then the map $V_{Z_\mathcal{G}}V_{P_L}$ is an encoding isometry for $\langle P , Z_\mathcal{G} \rangle$. This code has a von Neumann algebra $\mathcal{M}^{P}$ corresponding to the bipartition $A$. \begin{example} \textbf{3-1 bipartitions of the 2x2 Bacon-Shor code.} Here we derive that for \emph{any} choice of $P$, if $A$ contains just one qubit then $L = I_L$. We begin by analyzing the code defined by $Z_\mathcal{G}$. Since $Z_\mathcal{G}$ is symmetric under permutations of the qubits, we assume without loss of generality that $A = \{1\}$, and use the method in Lemma~\ref{lemma:stabm} to compute $\mathcal{M}^{Z_\mathcal{G}}$: \begin{align} \mathcal{L}(\H_A) \otimes I_{\bar A} = \langle X_1 , Z_1\rangle_\text{vN} = \operatorname{span}\{I, X_1,Y_1,Z_1\}. \end{align} We find that, of these Paulis, only the identity commutes with $Z_\mathcal{G}$, so $\mathcal{M}^{Z_\mathcal{G}} = \langle I \rangle_\text{vN}$.
Therefore $S(\mathcal{M}^{Z_\mathcal{G}},\rho) = 0$ for all $\rho$. Now all that remains to be done is to compute $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger))$. We define the following states for $a,b \in \{0,1\}$: \begin{align} \ket{X^a Z^b} := \left( Z^b \otimes X^a \right) \frac{\ket{00} + \ket{11}}{\sqrt{2}}. \end{align} Now we can inspect the action of $V_{Z_\mathcal{G}}$ on the computational basis for $\H_L$: \begin{align} V_{Z_\mathcal{G}} \ket{ab}_L = \ket{X^a Z^b}_{1,2} \ket{X^a Z^b}_{3,4}. \end{align} Tracing out qubits $3,4$ effectively measures $a,b$, so qubits $1,2$ are left in some probabilistic mixture of the $\ket{X^a Z^b}$. But these states are all maximally entangled, so the reduced state on qubit $1$ is $I/2$. So $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger)) = 1$. We find that: \begin{align} \text{Tr}(\rho L) = S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger)) - S(\mathcal{M}^{Z_\mathcal{G}},\rho) = 1, \end{align} which is achieved by $L = I_L$. Now we consider any gauge $P$, defining an isometry $V_{P_L}$. Observe that $\mathcal{M}^P = V_{P_L}^\dagger \langle I \rangle_\text{vN}V_{P_L} = \langle I \rangle_\text{vN}$ remains unchanged. Furthermore, the reduced state on qubit $1$ is $I/2$ independently of $a,b$, so $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger)) = 1$ also remains unchanged when we restrict to a subspace of the code space. Thus we also have $L = I_L$ in this situation. \end{example} We saw that we could perform an analysis of all gauges $P$ in a unified manner by instead analyzing the code stabilized by $Z_\mathcal{G}$. While for 3-1 bipartitions the RT formula was independent of $P$, it is actually dependent on $P$ for 2-2 bipartitions. Nonetheless it is helpful to consider the code stabilized by $Z_\mathcal{G}$.
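The conclusions of the example above can be checked directly by building $V_{Z_\mathcal{G}}$ from the encoding circuit. The Python sketch below (our own verification, with qubits indexed $0$--$3$) confirms that $V_{Z_\mathcal{G}}$ is an isometry, that its image is stabilized by $X_1X_2X_3X_4$ and $Z_1Z_2Z_3Z_4$, and that every codeword reduces to $I/2$ on qubit $1$, so that each encoded basis state contributes one bit of entropy for the $3$-$1$ bipartition.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])

def embed(ops):
    """Tensor product over 4 qubits; ops maps qubit index -> 1-qubit matrix."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(4)])

def cnot(c, t):
    return embed({c: P0}) + embed({c: P1, t: X})

# Encoding circuit for V_{Z_G}: CNOT 1->3, CNOT 2->4, H on 1 and 3,
# then CNOT 1->2 and CNOT 3->4 (rightmost factor acts first).
U = cnot(0, 1) @ cnot(2, 3) @ embed({0: H, 2: H}) @ cnot(1, 3) @ cnot(0, 2)
V = U[:, [0, 4, 8, 12]]  # columns U|ab00> for ab in {00, 01, 10, 11}

assert np.allclose(V.T @ V, np.eye(4))  # V is an isometry
XXXX = embed({q: X for q in range(4)})
ZZZZ = embed({q: Z for q in range(4)})
assert np.allclose(XXXX @ V, V) and np.allclose(ZZZZ @ V, V)  # stabilized

# Reduced state of every codeword on qubit 1 is I/2, hence entropy 1.
for col in range(4):
    psi = V[:, col].reshape(2, 8)
    assert np.allclose(psi @ psi.conj().T, np.eye(2) / 2)
print("all checks passed")
```

The same matrix $V$ can be reused to verify the $2$-$2$ bipartition claims of the next example by tracing out the appropriate pairs of qubits.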
For the discussion below, we label the von Neumann algebras with their corresponding subregions in the subscript: for example, $\mathcal{M}^{Z_\mathcal{G}}_{1,2}$ is the algebra defined by $V_{Z_\mathcal{G}}$ and $A = \{1,2\}$. \begin{example} \textbf{2-2 bipartitions of the 2x2 Bacon-Shor code with no gauge.} We begin with $A = \{1,2\}$ and write: \begin{align} \mathcal{L}(\H_{1,2}) \otimes I_{3,4} = \langle X_1 , Z_1, X_2,Z_2\rangle_\text{vN}. \end{align} Of these, only the subalgebra $\langle X_1X_2, Z_1Z_2 \rangle_\text{vN}$ commutes with $Z_\mathcal{G}$, so: \begin{align} \mathcal{M}_{1,2}^{Z_\mathcal{G}} = \langle V_{Z_\mathcal{G}}^\dagger X_1X_2 V_{Z_\mathcal{G}}, V_{Z_\mathcal{G}}^\dagger Z_1Z_2 V_{Z_\mathcal{G}} \rangle_\text{vN} = \langle Z_1 , Z_2 \rangle_\text{vN}, \end{align} which is just the set of diagonal operators in the computational basis of $\H_L$. Now, while the code $Z_\mathcal{G}$ is symmetric with respect to qubit permutations, the encoding isometry $V_{Z_\mathcal{G}}$ is not. Since $\mathcal{M}^{Z_\mathcal{G}}_{1,2}$ is just the set of diagonal operators in the computational basis, $S(\mathcal{M}^{Z_\mathcal{G}}_{1,2},\rho)$ is the classical entropy after measuring in the computational basis. Recall the relation $ V_{Z_\mathcal{G}} \ket{ab}_L = \ket{X^a Z^b} \ket{X^a Z^b}$ from above. We see that discarding qubits $3,4$ essentially measures the first two qubits in the $\ket{X^a Z^b}$ basis, so $ S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} \rho V_{Z_\mathcal{G}}^\dagger))$ is actually the same as $S(\mathcal{M}^{Z_\mathcal{G}}_{1,2},\rho)$. So their difference vanishes, and $L = 0$. Now we consider $A = \{1,3\}$, which we can actually obtain from the above analysis by inserting the gate $S_{2,3}$ that swaps qubits 2 and 3. That is, $V_{Z_\mathcal{G}}$ under $A=\{1,3\}$ should have the same von Neumann algebra as $S_{2,3}V_{Z_\mathcal{G}}$ under $A=\{1,2\}$. Observe that $S_{2,3}$ commutes with $Z_\mathcal{G}$, so it must implement a logical operator.
With some calculation we see that $ V_{Z_\mathcal{G}}^\dagger S_{2,3} V_{Z_\mathcal{G}} = H^{\otimes 2} S_L $ where $S_L$ swaps the two logical qubits (this is done most easily by propagating $X_1,Z_1,X_2,Z_2$ through the Clifford circuit $V_{Z_\mathcal{G}}^\dagger S_{2,3} V_{Z_\mathcal{G}}$, and observing that it implements the same transformation as $H^{\otimes 2} S_L$). As a result, we see that $\mathcal{M}^{Z_\mathcal{G}}_{1,3} = H^{\otimes 2} S_L \mathcal{M}^{Z_\mathcal{G}}_{1,2} S_L H^{\otimes 2} = \langle X_1, X_2 \rangle_\text{vN} $, which is the set of diagonal operators in the $H^{\otimes 2}\ket{ab}_L$ basis. We also see that: \begin{align} V_{Z_\mathcal{G}} H^{\otimes 2}\ket{ab}_L = S_{2,3} V_{Z_\mathcal{G}}\ket{ba}_L = \ket{X^a Z^b}_{1,3} \ket{X^a Z^b}_{2,4}. \end{align} Tracing out qubits 2 and 4, just like before, measures qubits 1 and 3 in the $\ket{X^a Z^b}$ basis, and the resulting entropy is the same as $S(\mathcal{M}^{Z_\mathcal{G}}_{1,3},\rho)$, implying $L = 0$. \end{example} The code $Z_\mathcal{G}$ is symmetric under $S_{2,3}$ so we expect the code to have the same entanglement properties for both $A = \{1,2\}$ and $A = \{1,3\}$. However, gauges will break this symmetry. To illustrate this, we analyze the gauges considered by \cite{cao2020approximate}. \begin{example} \textbf{2-2 bipartitions of the 2x2 Bacon-Shor code with fixed gauges}. First we consider $P = Z_1Z_2$ with $A = \{1,2\}$. We calculate $P_L = V_{Z_\mathcal{G}}^\dagger PV_{Z_\mathcal{G}} = Z_2 $. This forces the second logical input of $V_{Z_\mathcal{G}}$ to be $\ket{0}$: we could write $V_P \ket{\psi} = \ket{\psi}\ket{0}$. The corresponding von Neumann algebra is $\mathcal{M}^{Z_1Z_2}_{1,2} = V_P^\dagger \mathcal{M}^{Z_\mathcal{G}}_{1,2} V_P = \langle Z_1 \rangle_\text{vN}$, which is again just the set of diagonal operators in the computational basis on the first qubit.
The corresponding encoded states are $V_{Z_\mathcal{G}}\ket{a0}_L = \ket{X^a Z^0}_{1,2} \ket{X^a Z^0}_{3,4}$. We see that discarding qubits 3 and 4 measures qubits 1 and 2, so, by the exact same reasoning as above, we have $ S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} V_P \rho V_P^\dagger V_{Z_\mathcal{G}}^\dagger )) = S( \mathcal{M}^{Z_1Z_2}_{1,2}, \rho)$, so $L = 0$. However, $A = \{1,3\}$ with $P = Z_1Z_2$ yields a different result. We find that $\mathcal{M}^{Z_1Z_2}_{1,3} = V_P^\dagger \mathcal{M}^{Z_\mathcal{G}}_{1,3} V_P = \langle X_1 \rangle_\text{vN} $, which is diagonal in the $H_1 \ket{a0}_L$ basis. $S(\mathcal{M}^{Z_1Z_2}_{1,3},\rho)$ corresponds to the entropy of the $a$ degree of freedom. We find that: \begin{align} V_{Z_\mathcal{G}} H_1 \ket{a0}_L &= S_{2,3} V_{Z_\mathcal{G}} S_L H^{\otimes 2} H_1 \ket{a0}_L = S_{2,3} V_{Z_\mathcal{G}} H_1 S_L \ket{a0}_L = S_{2,3} V_{Z_\mathcal{G}} \ket{+a}_L\\ &= \frac{\ket{X^0 Z^a}_{1,3} \ket{X^0 Z^a}_{2,4} + \ket{X^1 Z^a}_{1,3} \ket{X^1 Z^a}_{2,4} }{\sqrt{2}}. \end{align} Now we see that tracing out qubits 2 and 4 yields two sources of entropy for the remaining qubits on $A$: one bit of entropy from the $X$ degree of freedom, and the other stemming from the measurement of $a$. Thus, $S(\text{Tr}_{\bar A}(V_{Z_\mathcal{G}} V_P \rho V_P^\dagger V_{Z_\mathcal{G}}^\dagger)) - S(\mathcal{M}^{Z_1Z_2}_{1,3},\rho) = 1$. So $L = I_L$. We saw that for $P = Z_1 Z_2$, $A = \{1,2\}$ featured $L = 0$ and $A = \{1,3\}$ featured $L = I_L$. Now we consider $P = X_1 X_3$: we will find that the opposite is the case by just swapping $V_{Z_\mathcal{G}}$ with $S_{2,3}V_{Z_\mathcal{G}}$. We compute: \begin{align} V_{Z_\mathcal{G}}^\dagger S_{2,3} P S_{2,3} V_{Z_\mathcal{G}} = S_L H^{\otimes 2} V_{Z_\mathcal{G}}^\dagger P V_{Z_\mathcal{G}} H^{\otimes 2} S_L = S_L H^{\otimes 2} X_1 H^{\otimes 2} S_L = Z_2.
\end{align} So we see that $P = X_1X_3$ behaves just like $Z_1 Z_2$ when qubits 2 and 3 are swapped. Thus, $A = \{1,2\}$ features $L = I_L$ and $A = \{1,3\}$ features $L = 0$. \end{example} \section{Complementary recovery and the RT formula} \label{sec:complementary_recovery} In this section we define several holographic properties of quantum error-correcting codes and establish some relationships between them. The main goal is the RT formula, which is a remarkable relationship between the entropy of a subregion of the boundary $A$, called $S_A$, and the entropy $S_{\text{bulk},A}$ of the bulk degrees of freedom visible from $A$: \begin{align} S_A(\rho) = S_{\text{bulk},A}(\rho) + \text{Tr}(L \rho). \end{align} For a holographic quantum error-correcting code, the above holds for any encoded state $\rho$. Such a relationship imposes a lot of structure on the family of states in the code: entropies are non-linear functions of $\rho$, whereas the rightmost term $\text{Tr}(L \rho)$ is a linear function. If $\rho$ is pure, we can intuitively think of $S_A$ as the entanglement entropy of the state encoded into physical qubits, whereas $S_{\text{bulk},A}$ is like the entanglement entropy of the underlying logical state. Rearranging the equation to $S_A(\rho) - S_{\text{bulk},A}(\rho) = \text{Tr}(L \rho)$, we can see that this is essentially saying that the extra entropy added by the encoding process is linear in $\rho$. Furthermore, the amount of entropy added is an observable: a Hermitian `area operator' $L$. While such a structured relationship might seem very rare, we find that there is actually a fairly simple and natural property that implies it: complementary recovery. This property demands a certain symmetry of the error-correcting code across a bipartition $A$-$\bar A$ of the physical Hilbert space. The logical operators correctable given only access to $A$ are exactly those that commute with the ones correctable only from the subregion $\bar A$.
This symmetry is present in many quantum error-correcting codes, such as stabilizer codes (See Lemma~\ref{lemma:stabm}). Surprisingly, it immediately implies an RT formula! \subsection{Complementary recovery} We begin with a discussion of complementary recovery and its relationship to quantum error correction. A quantum error-correcting code can be thought of as a subspace $\H_\text{code}$ of the physical Hilbert space $\H$. However, in this discussion as well as in the next section we will find it more convenient to work with a `logical space' $\H_L$ with the same dimension as $\H_\text{code}$, which is thought of as separate from $\H$. Then, an `encoding isometry' $V: \H_L \to \H$ takes logical states and encodes them in the physical Hilbert space. The image of $V$ is $\H_\text{code}$. Intuitively one can think of this as fixing a basis for the code space, since $\H_\text{code}$ is invariant under a basis change $V \to VU_L$ for some unitary $U_L$ on $\H_L$. While this differs from the approach of other literature, this view has two advantages. First, it makes the notion of a commutant of a von Neumann algebra in $\H_L$ a little easier to understand. Second, we find that when giving explicit examples in the next section it is easier to write down $V$ rather than $\H_\text{code}$. Above we have been speaking of holographic properties of a code, defined by its encoding isometry $V$. However, two other quantities are important for an RT formula: the subregion $A$ that determines the entropy $S_A$, and the visible bulk degrees of freedom that determine the entropy $S_{\text{bulk},A}$. Which degrees of freedom are visible is specified by a von Neumann algebra $\mathcal{M}$. Clearly, $(V,A,\mathcal{M})$ are interrelated, so we establish the following vocabulary: \begin{definition} Say $V : \H_L \to \H$ is an encoding isometry for some quantum error-correcting code, and $A$ is a subregion of $\H$ inducing the factorization $\H = \H_A \otimes \H_{\bar A}$.
A von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\H_L)$ is said to be: \begin{itemize} \item \textbf{correctable} from $A$ with respect to $V$ if $\mathcal{M} \subseteq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$. That is: for every $O_L \in \mathcal{M}$ there exists an $O_A \in \mathcal{L}(\H_A)$ such that $O_L = V^\dagger (O_A \otimes I_{\bar A}) V$. \item \textbf{private} from $A$ with respect to $V$ if $V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{M}'$. That is: for every $O_A \in \mathcal{L}(\H_A)$ it is the case that $ V^\dagger (O_A \otimes I_{\bar A}) V$ commutes with every operator in $\mathcal{M}$. \end{itemize} \end{definition} If $\mathcal{M}$ is correctable, then it is a set of logical operators that can be performed on the encoded state given access to only the subregion $A$. A hermitian element in $\mathcal{M}$ then corresponds to an observable on the logical Hilbert space that could be measured from $A$, so, intuitively, $\mathcal{M}$ tells us about what parts of the logical state are recoverable from $A$. Conversely, if $\mathcal{M}$ is private then the observables in $\mathcal{M}$ tell us what parts of $\rho$ are invisible from $A$. The notion of correctability is central to complementary recovery: a von Neumann algebra $\mathcal{M}$ exhibits complementary recovery if it can be corrected from $A$, and its commutant $\mathcal{M}'$ can be corrected from $\bar A$. \begin{definition} A code with encoding isometry $V:\H_L\to \H$, a subregion of the physical Hilbert space $A$ and a von Neumann algebra $\mathcal{M} \subseteq \mathcal{L}(\H_L)$, together $(V,A,\mathcal{M})$, exhibit \textbf{complementary recovery} if: \begin{itemize} \item $\mathcal{M}$ is correctable from $A$ with respect to $V$: $\mathcal{M} \subseteq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$, \item $\mathcal{M}'$ is correctable from $\bar A$ with respect to $V$: $\mathcal{M}' \subseteq V^\dagger (I_A \otimes \mathcal{L}(\H_{\bar A}))V$. 
\end{itemize} \end{definition} So far, there do not appear to be very many restrictions on the von Neumann algebra $\mathcal{M}$. In particular, if $\mathcal{N}$ is a subalgebra of $\mathcal{M}$, and $\mathcal{M}$ is correctable, then $\mathcal{N}$ is correctable as well. It would thus seem plausible that if $(V,A,\mathcal{M})$ has complementary recovery, then so does $(V,A,\mathcal{N})$, so that there are multiple von Neumann algebras with complementary recovery. However, we will find that complementary recovery is actually so restrictive on $\mathcal{M}$ that it determines it uniquely, and subalgebras of $\mathcal{M}$ do not have complementary recovery. This is important because the von Neumann algebra plays a key role in the RT formula: it tells us how to concretely define `the entropy of bulk degrees of freedom visible from $A$' via an algebraic entropy $S(\mathcal{M},\rho)$. For this to make sense, $\mathcal{M}$ must be completely determined by the isometry $V$ and the subregion $A$. To prove this result we cite a helpful lemma from the quantum error correction literature. \begin{lemma} \label{lemma:corrpriv} \textbf{Correctable from $A$ $\leftrightarrow$ private from $\bar A$.} A von Neumann algebra $\mathcal{M}$ is correctable from $A$ with respect to $V$ if and only if $\mathcal{M}$ is private from $\bar A$ with respect to $V$. \end{lemma} This lemma establishes the complementarity of correctable and private algebras for the case of erasure errors (correctability from $A$ means that $\bar A$ has been erased). Informally, a subsystem $B$ of a Hilbert space $\mathcal{H} = A \otimes B$ is private if it completely decoheres after the action of a channel.
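Lemma~\ref{lemma:corrpriv} can also be checked numerically on a small toy code. The sketch below is our own illustration (not an example from the text, and the helper name \texttt{project} is ours): for the two-qubit code $V\ket{0}_L = \ket{00}$, $V\ket{1}_L = \ket{11}$, every logical operator obtainable from qubit 1 commutes with every logical operator obtainable from qubit 2, i.e.\ the algebra correctable from $\bar A$ is private from $A$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoding isometry V : C^2 -> C^4 for the two-qubit code
# |0>_L -> |00>, |1>_L -> |11> (columns are the encoded basis states).
V = np.zeros((4, 2))
V[0b00, 0] = 1.0
V[0b11, 1] = 1.0

I2 = np.eye(2)

def project(O_phys):
    """Project a physical operator onto the logical space: V^dag O V."""
    return V.conj().T @ O_phys @ V

for _ in range(100):
    O_A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    O_B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    M_A = project(np.kron(O_A, I2))  # supported on qubit 1 (= A)
    M_B = project(np.kron(I2, O_B))  # supported on qubit 2 (= Abar)
    # Privacy in action: operators recoverable from A commute with
    # those recoverable from Abar (both land in the diagonal algebra).
    assert np.allclose(M_A @ M_B - M_B @ M_A, 0)

print("all commutators vanish")
```

For this code both single-qubit correctable sets are the abelian algebra $\langle Z_L\rangle_\text{vN}$ of diagonal logical operators, which is its own commutant, so $(V,A,\langle Z_L\rangle_\text{vN})$ does exhibit complementary recovery; the logical $X_L$, by contrast, is correctable from neither side.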
The lemma follows from the more general Theorem~\ref{thm:correctable-private} (which we present in Appendix~\ref{app:privacy}) that applies to the case of general error channels (to get the lemma, simply take $\mathcal{E}$ to be the erasure channel for the subsystem $\bar A$ and $P$ the projection onto the code subspace). The complementarity theorem was first proven for the case of factor algebras \cite{kretschmann2008complementarity} and then extended to general, infinite-dimensional von Neumann algebras~\cite{crann2016private}. Now we are ready to demonstrate that $\mathcal{M}$ is unique, provided it exists at all. The theorem below also shows an easy way to calculate $\mathcal{M}$ as well as a simple criterion for its existence. The proof relies on the fact that privacy of $\mathcal{M}'$ is defined as a statement that is a bit like an `upper bound version' of correctability of $\mathcal{M}$: it demands that the set of correctable operators lies in $\mathcal{M}$, rather than that $\mathcal{M}$ is correctable. By sandwiching together correctability of $\mathcal{M}$ and privacy of $\mathcal{M}'$ we fix what $\mathcal{M}$ must be. \begin{theorem}\label{thm:whatalgebra} \textbf{Uniqueness of the von Neumann algebra.} Say $V$ is an encoding isometry and say $A$ is a subregion. Let $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ be the image of operators on $\H_A$ projected onto $\H_L$. If $\mathcal{M}$ is a von Neumann algebra (that is, it is closed under multiplication), then it is the unique von Neumann algebra satisfying complementary recovery with $V$ and $A$. If it is not, then no von Neumann algebra satisfying complementary recovery exists. \end{theorem} \begin{proof} We split this proof into two claims: \begin{description} \item[Existence] If $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ is a von Neumann algebra, then $(V,A,\mathcal{M})$ have complementary recovery.
\item[Uniqueness] If $\mathcal{N} \subsetneq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ is a von Neumann algebra, then $(V,A,\mathcal{N})$ do not have complementary recovery. \end{description} We begin with existence: we assume that $ \mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ is a von Neumann algebra, so $\mathcal{M}'$ is well defined. The first condition of complementary recovery holds by definition of $\mathcal{M}$. Also by definition we have that: \begin{align} V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{M} = \mathcal{M}'', \end{align} where in the last step we used the bicommutant theorem. Thus, by definition of privacy, $\mathcal{M}'$ is private from $A$ with respect to $V$. By Lemma~\ref{lemma:corrpriv}, $\mathcal{M}'$ is thus correctable from $\bar A$ with respect to $V$, which is the second condition of complementary recovery. Next we show uniqueness. Let $\mathcal{N} \subsetneq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ be any von Neumann algebra that is correctable from $A$, but not equal to the full set of correctable operators. We assume that $(V,A,\mathcal{N})$ has complementary recovery and derive a contradiction. By the second condition of complementary recovery, $\mathcal{N}'$ is correctable from $\bar A$ with respect to $V$. By Lemma~\ref{lemma:corrpriv}, $\mathcal{N}'$ is thus private from $A$ with respect to $V$, that is: \begin{align} V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{N}'' = \mathcal{N}, \end{align} where in the last step we used the bicommutant theorem. So we have \begin{align} \mathcal{N} \subsetneq V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V \subseteq \mathcal{N}, \end{align} implying that $\mathcal{N}$ is a proper subset of itself, a contradiction.
\end{proof} While an RT formula seems like an extremely unlikely property, complementary recovery seems like a property that is rather natural and that most, if not all, quantum error-correcting codes should have. Thus, the fact that complementary recovery implies an RT formula is surprising. However, the fact that a von Neumann algebra with complementary recovery can fail to exist implies that complementary recovery is actually less trivial than it might seem. While it is still exhibited by many quantum error-correcting codes, it is worth giving an explicit example of a code without complementary recovery. \begin{example} \label{ex:badcode} \textbf{A code without complementary recovery.} \\ Let $\H = \text{span}(\ket{00},\ket{01},\ket{10},\ket{11})$ be two qubits and $\H_L = \text{span}(\ket{0},\ket{1},\ket{2})$ be a qutrit. Let $A$ be the second qubit of $\H$, and let: \begin{align} V = \ket{00}\bra{0} + \ket{01}\bra{1} + \ket{10}\bra{2}. \end{align} Then the set of correctable operators is: \begin{align} V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V = \begin{bmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & a\end{bmatrix} \text{ for all }a,b,c,d \in \mathbb{C}. \end{align} Notably, this set is not closed under multiplication and so is not a von Neumann algebra. Let us pick $\mathcal{M}$ to be the largest von Neumann algebra in this set: \begin{align} \mathcal{M} = \begin{bmatrix} a & 0 & 0 \\ 0 & d & 0 \\ 0 & 0 & a\end{bmatrix} \text{ for all }a,d \in \mathbb{C}. \end{align} While $(V,A,\mathcal{M})$ satisfy the first condition of complementary recovery, they do not satisfy the second: \begin{align} \begin{bmatrix} a & 0 & b \\ 0 & d & 0 \\ c & 0 & e\end{bmatrix} = \mathcal{M}' \not\subseteq V^\dagger (I_{A} \otimes \mathcal{L}(\H_{\bar A}))V = \begin{bmatrix} a & 0 & b \\ 0 & a & 0 \\ c & 0 & e\end{bmatrix}. \end{align} Say we had chosen $\mathcal{M}$ to be some smaller subalgebra of $V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$.
Then $\mathcal{M}'$ would only be larger, containing the $\mathcal{M}'$ in the line above. But since this $\mathcal{M}'$ is already not contained in $V^\dagger (I_{A} \otimes \mathcal{L}(\H_{\bar A}))V$, there does not exist \emph{any} von Neumann algebra exhibiting complementary recovery with $V$ and $A$. \end{example} The fact that a von Neumann algebra with complementary recovery can fail to exist should illustrate that complementary recovery actually imposes a non-trivial constraint on the quantum error-correcting code. This constraint is strong enough to imply an RT formula, which we will now define carefully. Note that it is not at all obvious how to obtain an $\mathcal{M}$ that makes the RT formula work from the definition of the formula itself; that is where complementary recovery comes in. \subsection{The RT formula and its properties} \begin{definition} Say $V$ is an encoding isometry, say $A$ is a subregion, and say $\mathcal{M}$ is a von Neumann algebra on $\H_L$. Then we say $(V,A,\mathcal{M})$ have an \textbf{RT formula} if there exists an \textbf{area operator} $L \in \mathcal{L}(\H_L)$ such that for any state $\rho$ on $\H_L$: \begin{align} S(\text{Tr}_{\bar A}(V\rho V^\dagger)) = S( \mathcal{M}, \rho) + \text{Tr}(\rho L). \end{align} If $L \propto I$ then we say $(V,A, \mathcal{M})$ have a trivial RT formula. \end{definition} Now we show the connection between complementary recovery and the existence of the RT formula. This is a highly non-trivial claim that makes use of an enormous amount of structure implied by complementary recovery. Recall from the previous section that a von Neumann algebra implies a Wedderburn decomposition of the Hilbert space that it acts on. We find that when a von Neumann algebra is correctable from $A$ with respect to $V:\H_L \to \H$, then not only does $\H_L$ decompose, but the Hilbert space associated with the subregion $A$ also decomposes. Furthermore, these decompositions are directly related.
The following lemma formalizes this structure, even when complementary recovery is not present. Recall that complementary recovery really implies the correctability of both $\mathcal{M}$ and $\mathcal{M}'$, which allows us to invoke the lemma below not once but twice. We then exploit this to prove that an RT formula exists. \begin{lemma} \label{lemma:factorization} \textbf{Factorization of encoded states.} Say $V : \H_L \to \H$ is an encoding isometry, $A$ is a subregion inducing $\H = \H_A \otimes \H_{\bar A}$, and say $\mathcal{M}$ is a von Neumann algebra on $\H_L$ that is correctable from $A$ with respect to $V$. Say $\mathcal{M}$ induces the decomposition $\H_L = \bigoplus_{\alpha} \left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha}\right) $ so that \begin{equation} \label{eq:algebra_4_lemma} \mathcal{M} = \bigoplus_{\alpha} \left( \mathcal{L}(\H_{L_\alpha}) \otimes I_{\bar L_\alpha} \right), \end{equation} and that $\{\ket{\alpha,i,j}\}$ is an orthonormal basis for $\H_L$ that is ``compatible with $\mathcal{M}$'', that is, the $\alpha$ enumerates the diagonal blocks and within each block we have $\ket{\alpha,i,j} = \ket{i_\alpha}_{L_{\alpha}} \otimes \ket{j_\alpha}_{{\bar L_\alpha}}$ where $\{\ket{i_\alpha}_{L_{\alpha}} \}$ and $\{ \ket{j_\alpha}_{\bar L_\alpha} \}$ are orthonormal bases for $\H_{L_\alpha}$ and $\H_{\bar L_\alpha}$ respectively. Then there exists a factorization $\H_A = \bigoplus_\alpha\left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3}$ and a unitary $U_A$ on $\H_A$ such that the state $(U_A \otimes I_{\bar A}) V \ket{\alpha,i,j}$ factors as follows: \begin{align} \label{eq:factorisation_encoded_state} (U_A \otimes I_{\bar A}) V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}, \end{align} where the state $\ket{\psi_{\alpha,i}}$ is independent of $j$, and $\ket{\chi_{\alpha,j}}$ is independent of $i$. 
\end{lemma} \begin{proof} The full proof for the general case of the algebra in Eq.~\ref{eq:algebra_4_lemma} is given in~\cite[Section 5.1]{harlow2017ryu}. We give a proof for the simpler case of a factor algebra in Appendix~\ref{app:structure_lemma}. Both proofs follow a similar strategy---originally developed in~\cite{schumacher1996quantum}---that involves introducing a reference system $R$ which is maximally entangled with the region $A$. By analysing the von Neumann entropies of the reduced density matrices of the $RA\bar A$ system one can obtain a necessary and sufficient condition for quantum error correction. This condition and standard properties of the Schmidt decomposition give the proof of the lemma. We note that an alternative proof of this result can be obtained via a result---see~\cite[Section VI]{hayden2004structure}---that shows that states that saturate the strong subadditivity inequality for the von Neumann entropy can be decomposed as direct sums of tensor products. \end{proof} The above lemma already sets up an enormous amount of notation, and even more notation will be required to apply it to a complementary situation. Explicit expressions for these Hilbert space decompositions quickly become rather cumbersome, which is why much of the literature skips many steps in the derivations in order to focus on the intuitive interpretation. While intuition is key, an explicit calculation can also help make one's understanding more concrete. For this reason we give the following derivation with more detail. In the next section we will provide explicit examples of quantum error-correcting codes and analyse them in the same language established here. The reader may wish to skip the proof of the following theorem and read the examples in the next section first. The following derivation is inspired by proofs in \cite{harlow2017ryu} and \cite{almheiri2015bulk}. 
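Before the general argument, the mechanism behind the area term can be seen numerically in the smallest possible setting. The sketch below is our own illustration (not a code from the text; the helper name \texttt{entropy\_bits} is ours, and we work in bits, i.e.\ $\log_2$): one logical qubit is encoded as $V\ket{a}_L = \ket{a}_1 \otimes \ket{\Phi^+}_{23}$ with $A = \{1,2\}$. Here there is a single block $\alpha$, the state $\ket{\chi}$ is a Bell pair straddling the cut, and the RT formula holds with the constant area operator $L = 1\cdot I$ (by the definition above this is a trivial RT formula, but it already exhibits where the area term comes from).

```python
import numpy as np

def entropy_bits(rho):
    """von Neumann entropy in bits."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log2(vals)).sum())

# Encoding V|a>_L = |a>_1 (x) |Phi+>_{23}: one logical qubit into three
# physical qubits, with a Bell pair straddling the cut A={1,2}, Abar={3}.
V = np.zeros((8, 2))
for a in range(2):
    V[4 * a + 0b00, a] = 1 / np.sqrt(2)   # |a00>
    V[4 * a + 0b11, a] = 1 / np.sqrt(2)   # |a11>

rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = G @ G.conj().T
rho /= np.trace(rho).real                 # random logical mixed state

enc = V @ rho @ V.conj().T                # encoded state on 3 qubits
# Partial trace over qubit 3 (Abar): group axes as (A, Abar, A', Abar').
rho_A = np.einsum('aibi->ab', enc.reshape(4, 2, 4, 2))

S_A = entropy_bits(rho_A)
S_bulk = entropy_bits(rho)  # here M = L(H_L), so S(M, rho) = S(rho)

# RT formula with constant area operator L = 1 * I (the Bell pair):
assert np.isclose(S_A, S_bulk + 1.0)
print(f"S_A = {S_A:.4f}, S_bulk = {S_bulk:.4f}, area term = 1")
```

For this code the correctable set from $A$ is all of $\mathcal{L}(\H_L)$ (a factor), so $S(\mathcal{M},\rho) = S(\rho)$, and the encoded reduced state is $\rho \otimes I/2$: the single bit of extra entropy is exactly the entanglement of $\ket{\chi}$ across the cut.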
\begin{theorem} \label{thm:complementaritytoRT} \textbf{Complementary recovery implies a two-sided RT formula.} Consider an encoding isometry $V$, a subregion $A$ and a von Neumann algebra $\mathcal{M}$ so that $(V,A,\mathcal{M})$ have complementary recovery. Then $(V,A,\mathcal{M})$ and $(V,\bar A,\mathcal{M}')$ both have an RT formula with the same area operator $L$ (that is, the RT formula is `two-sided'). Furthermore, $L$ is in the center $Z_\mathcal{M}$. \end{theorem} \begin{proof} Say $\mathcal{M}$ induces the decomposition $\H_L = \bigoplus_{\alpha} \left( \H_{L_\alpha} \otimes \H_{\bar L_\alpha}\right) $. This way we can decompose $\mathcal{M}$ and $\mathcal{M}'$ together as: \begin{align} \mathcal{M} = \bigoplus_{\alpha} \left( \mathcal{L}(\H_{L_\alpha}) \otimes I_{\bar L_\alpha} \right), \hspace{1cm} \mathcal{M}' = \bigoplus_{\alpha} \left( I_{L_\alpha} \otimes \mathcal{L}(\H_{\bar L_\alpha}) \right) . \end{align} Let $\{\ket{\alpha,i,j}\}$ be a basis that is ``compatible with $\mathcal{M}$'' as in Lemma~\ref{lemma:factorization}. We observe that $\{\ket{\alpha,i,j}\}$ also `lines up with $\mathcal{M}'$' in the same sense, since really $\{\ket{\alpha,i,j}\}$ just lines up with the underlying decomposition of $\H_L$. Now, with two applications of Lemma~\ref{lemma:factorization} we know that there exist factorizations of $\H_{A}$ and $\H_{\bar A}$ of the form: \begin{align} \H_A = \bigoplus_\alpha\left( \H_{A^\alpha_1} \otimes \H_{A^\alpha_2} \right) \oplus \H_{A_3}, \hspace{1cm} \H_{\bar A} = \bigoplus_\alpha\left( \H_{\bar A^\alpha_1} \otimes \H_{\bar A^\alpha_2} \right) \oplus \H_{\bar A_3}, \end{align} so that there are unitaries $U_A$ and $U_{\bar A}$ such that: \begin{align} (U_A \otimes I_{\bar A}) V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}\\ (I_A \otimes U_{\bar A}) V \ket{\alpha,i,j} = \ket{\bar \chi_{\alpha,i}}_{A \bar A^\alpha_2} \otimes \ket{\bar \psi_{\alpha,j}}_{\bar A^\alpha_1}. 
\end{align} If we consider applying $(U_A \otimes I_{\bar A})$ followed by $(I_A \otimes U_{\bar A})$: \begin{align} (I_A \otimes U_{\bar A})(U_A \otimes I_{\bar A}) V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes (I_{A^\alpha_2} \otimes U_{\bar A}) \ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}, \end{align} we see that $U_{\bar A}$ actually just acts on the state $\ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A}$. Thus, we see that in order for both decompositions to hold simultaneously, there must exist states $\ket{\bar \psi_{\alpha,j}}$ and $\ket{\chi_\alpha}$ such that $(I_{A^\alpha_2} \otimes U_{\bar A})\ket{\chi_{\alpha,j}}_{A^\alpha_2 \bar A} = \ket{\chi_{\alpha}}_{A^\alpha_2 \bar A^\alpha_2} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}$, implying: \begin{align} (U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\chi_{\alpha}}_{A^\alpha_2 \bar A^\alpha_2} \otimes \ket{\bar \psi_{\alpha,j}}_{\bar A^\alpha_1}. \label{eqn:twosideddecomp} \end{align} The above factorization will spell out the RT formula once a logical state is expressed in this basis. Say $\rho$ is a state on $\H_L$. To show that $(V,A,\mathcal{M})$ have an RT formula, we will proceed to compute $S(\mathcal{M},\rho)$ as well as $S(\text{Tr}_{\bar A}( V\rho V^\dagger ))$ and take the difference. We will observe that the difference will have the form $\text{Tr}(\rho L)$ for some $L$. To derive $S(\text{Tr}_{\bar A}( V\rho V^\dagger ))$ recall the discussion in Section~\ref{sec:algebraic_states} and observe that one might as well consider $S(\text{Tr}_{\bar A}( V\rho_\mathcal{M} V^\dagger ))$ instead: say $O_A \in \mathcal{L}(\H_A)$, and write: \begin{align} \text{Tr}( O_A \cdot \text{Tr}_{\bar A}( V\rho V^\dagger ) ) = \text{Tr}( (O_A \otimes I_{\bar A}) \cdot V\rho V^\dagger ) = \text{Tr}( V^\dagger (O_A \otimes I_{\bar A}) V \cdot \rho ). \end{align} But $V^\dagger (O_A \otimes I_{\bar A}) V$ is in $\mathcal{M}$.
Since for any $O \in \mathcal{M}$ we have $\text{Tr}(O \rho) = \text{Tr}(O \rho_\mathcal{M})$ (see Theorem~\ref{thm:expectations_vN}) we can simply replace $\rho$ with $\rho_\mathcal{M}$ in the above. The states $\text{Tr}_{\bar A}( V\rho V^\dagger )$ and $\text{Tr}_{\bar A}( V\rho_\mathcal{M} V^\dagger )$ give the same expectation values for all observables on $\H_A$, so they must be the same state and have the same entropy. Furthermore, since acting with a unitary on $\H_{A}$ and $\H_{\bar A}$ separately does not change the entropy, we see: \begin{align} S(\text{Tr}_{\bar A}( V\rho V^\dagger )) = S(\text{Tr}_{\bar A}( (U_A \otimes U_{\bar A})V \rho_\mathcal{M} V^\dagger (U_A \otimes U_{\bar A})^\dagger )). \end{align} Next, we define isometries $\tilde V_\alpha: (\H_{L_\alpha} \otimes \H_{\bar L_\alpha}) \to (\H_{A^\alpha_1} \otimes \H_{\bar A^\alpha_1})$ using the states $\ket{\psi_{\alpha,i}}_{A^\alpha_1}$ and $\ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}$ from (\ref{eqn:twosideddecomp}): \begin{align} \tilde V_\alpha \ket{\alpha,i,j} := \ket{\psi_{\alpha,i}}_{A^\alpha_1} \otimes \ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}. \end{align} We know that $\tilde V_\alpha$ is indeed an isometry because the states $\ket{\psi_{\alpha,i}}_{A^\alpha_1}$ and $\ket{\bar\psi_{\alpha,j}}_{\bar A^\alpha_1}$ are in fact bases for $\H_{A^\alpha_1}$ and $\H_{\bar A^\alpha_1}$ respectively. This follows from (\ref{eqn:twosideddecomp}) and the fact that the $\{\ket{\alpha,i,j}\}$ for fixed $\alpha$ are a basis for $\H_{L_\alpha} \otimes \H_{\bar L_\alpha}$. The purpose of $\tilde V_\alpha$ is that it lets us simplify (\ref{eqn:twosideddecomp}) to: \begin{align} (U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \tilde V_\alpha \ket{\alpha,i,j} \otimes \ket{\chi_\alpha}.
\end{align} This lets us bring the cumbersome expression $(U_A \otimes U_{\bar A})V \rho_\mathcal{M} V^\dagger (U_A \otimes U_{\bar A})^\dagger $ into a much neater form: \begin{align} & \hspace{4mm} (U_A \otimes U_{\bar A})V \rho_\mathcal{M} V^\dagger (U_A \otimes U_{\bar A})^\dagger \\ &= \sum_\alpha p_\alpha \cdot (U_A \otimes U_{\bar A})V \rho_\alpha V^\dagger (U_A \otimes U_{\bar A})^\dagger \\ &= \sum_\alpha p_\alpha \cdot \frac{1}{p_\alpha} \sum_{i,j}\sum_{i',j'} \rho[\alpha]_{i,j,i',j'} (U_A \otimes U_{\bar A})V\ket{\alpha,i,j}\bra{\alpha,i',j'} V^\dagger (U_A \otimes U_{\bar A})^\dagger \\ &= \sum_\alpha p_\alpha \cdot \frac{1}{p_\alpha} \sum_{i,j}\sum_{i',j'} \rho[\alpha]_{i,j,i',j'} \tilde V_\alpha \ket{\alpha,i,j}\bra{\alpha,i',j'} \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha} \\ &= \sum_\alpha p_\alpha \cdot \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha}. \end{align} Since each of the states $\tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha}$ are normalized and disjoint (act on different blocks), the entropy takes the form: \begin{align} S(\text{Tr}_{\bar A}( V\rho V^\dagger )) &= \sum_\alpha p_\alpha \log(p^{-1}_\alpha) + \sum_{\alpha} p_\alpha S( \text{Tr}_{\bar A}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger \otimes \ket{\chi_\alpha}\bra{\chi_\alpha} ) )\\ &= \sum_\alpha p_\alpha \log(p^{-1}_\alpha) + \sum_{\alpha} p_\alpha S(\text{Tr}_{\bar A}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger )) + \sum_\alpha p_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )). 
\label{eqn:entanglemententropy} \end{align} Finally, we observe that since $\ket{\bar\psi_{\alpha,j}}$ is independent of $i$, we have that: \begin{align} S(\text{Tr}_{\bar A}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger )) = S(\text{Tr}_{\bar A^\alpha_1}( \tilde V_\alpha \rho_\alpha \tilde V_\alpha^\dagger )) = S( \text{Tr}_{\bar L_\alpha}(\rho_\alpha)). \end{align} We observe that the first two terms of (\ref{eqn:entanglemententropy}) are exactly the same as the two terms of (\ref{eq:algebraic_entropy}), so their difference is just: \begin{align} S(\text{Tr}_{\bar A}( V\rho V^\dagger )) - S(\mathcal{M},\rho) = \sum_\alpha p_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )). \end{align} The right-hand side is linear in the $p_\alpha$, so it is linear in $\rho$, so there exists an area operator $L$ such that the right-hand side is $\text{Tr}(\rho L)$. We construct it explicitly below. \begin{align} I_\alpha &:= \sum_{i,j} \ket{\alpha,i,j}\bra{\alpha,i,j}\\ L &:= \sum_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )) \cdot I_\alpha. \end{align} Observe that $L \in \mathcal{M}$, so therefore $\text{Tr}(\rho L) = \text{Tr}(\rho_\mathcal{M} L)$. Then we write: \begin{align} \text{Tr}(\rho_\mathcal{M} L) &= \text{Tr}\left( \sum_\alpha p_\alpha \rho_\alpha \cdot \sum_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )) \cdot I_\alpha \right) \\ &= \sum_\alpha p_\alpha S( \text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha} )) \cdot \text{Tr}(\rho_\alpha I_\alpha) = S(\text{Tr}_{\bar A}( V\rho V^\dagger )) - S(\mathcal{M},\rho). \end{align} We have derived that $(V,A,\mathcal{M})$ satisfy an RT formula with area operator $L$, and furthermore that $L \in \mathcal{M}$. The derivation for $(V,\bar A,\mathcal{M}')$ is exactly the same with the roles of $i$ and $j$ swapped; since each $\ket{\chi_\alpha}$ is pure, $S(\text{Tr}_{A}(\ket{\chi_\alpha}\bra{\chi_\alpha})) = S(\text{Tr}_{\bar A}(\ket{\chi_\alpha}\bra{\chi_\alpha}))$, so it yields the same area operator $L$, now with $L \in \mathcal{M}'$. Hence $L \in \mathcal{M} \cap \mathcal{M}' = Z_\mathcal{M}$.
\end{proof} According to \cite{harlow2018tasi} the reverse direction also holds: if $(V, A, \mathcal{M} )$ and $(V,\bar A, \mathcal{M}')$ both satisfy an RT formula with the same $L$, then $(V,A,\mathcal{M})$ must have complementary recovery. So complementary recovery is actually equivalent to the existence of a `two-sided RT formula' for both $(V, A, \mathcal{M} )$ and $(V,\bar A, \mathcal{M}')$. This suggests the possibility that complementary recovery is strictly stronger than the existence of a one-sided RT formula. Is it possible for $(V,A,\mathcal{M})$ to exhibit an RT formula, but not $(V,\bar A,\mathcal{M}')$? \subsection{A recipe for analysing codes} \label{sec:recipe} The derivation in this section not only defines the holographic properties of an error-correcting code, but also gives a recipe for computing the area operator of the RT formula: \begin{enumerate} \item Follow Theorem~\ref{thm:whatalgebra} and compute $\mathcal{M} := V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$, and verify that it is indeed a von Neumann algebra. If so, we have complementary recovery. \begin{description} \item \emph{Shortcut}: If $\mathcal{M}$ has a trivial center ($Z_\mathcal{M} = \langle I\rangle_\text{vN}$), then since $L \in Z_\mathcal{M}$ we already know that the code must have a trivial RT formula. \end{description} \item Compute the Wedderburn decomposition of $\H_L$ that follows from $\mathcal{M}$. Follow Lemma~\ref{lemma:factorization} and define a basis $\ket{\alpha,i,j}$ that `lines up with $\mathcal{M}$'. \item Apply Lemma~\ref{lemma:factorization} twice to obtain unitaries $U_A$ and $U_{\bar A}$ such that: $(U_A \otimes U_{\bar A})V \ket{\alpha,i,j} = \ket{\psi_{\alpha,i}} \otimes \ket{\chi_{\alpha}}\otimes \ket{\bar \psi_{\alpha,j}}$. \item Obtain the states $\ket{\chi_\alpha}$ and compute their entanglement entropies. These are the eigenvalues of the area operator. \end{enumerate} This is already a very complicated series of steps.
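Step 1 of the recipe is easy to mechanize. The sketch below is our own illustration (the helper names \texttt{correctable\_set} and \texttt{closed\_under\_multiplication} are ours): for the qutrit code of Example~\ref{ex:badcode} it builds a basis of the correctable set $V^\dagger (\mathcal{L}(\H_A)\otimes I_{\bar A})V$ for each single-qubit subregion and checks closure under multiplication.

```python
import numpy as np

# The qutrit-into-two-qubits code of Example badcode:
# V|0> = |00>, V|1> = |01>, V|2> = |10>.
V = np.zeros((4, 3))
V[0b00, 0] = V[0b01, 1] = V[0b10, 2] = 1.0

I2 = np.eye(2)

def correctable_set(qubit):
    """Basis (as flattened 3x3 matrices) of V^dag (O on `qubit`) V."""
    basis = []
    for k in range(2):
        for l in range(2):
            E = np.zeros((2, 2))
            E[k, l] = 1.0  # matrix unit E_kl on the chosen qubit
            O = np.kron(E, I2) if qubit == 0 else np.kron(I2, E)
            basis.append((V.conj().T @ O @ V).flatten())
    return np.array(basis)

def closed_under_multiplication(basis):
    """Recipe step 1: is the span of `basis` closed under products?"""
    for x in basis:
        for y in basis:
            prod = (x.reshape(3, 3) @ y.reshape(3, 3)).flatten()
            coeffs, *_ = np.linalg.lstsq(basis.T, prod, rcond=None)
            if not np.allclose(basis.T @ coeffs, prod):
                return False  # a product left the span
    return True

for q in (0, 1):
    vn = closed_under_multiplication(correctable_set(q))
    print(f"correctable set of qubit {q + 1} is a von Neumann algebra: {vn}")
```

Both checks return \texttt{False}, matching the example: the recipe stops at step 1 for either single-qubit cut, so this code has no complementary recovery. For a code that does pass the check, one would proceed to steps 2--4 and extract the $\ket{\chi_\alpha}$ states.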
While computing $\mathcal{M}$ is not so difficult for small codes, the later steps, where we explicitly construct the $\ket{\chi_\alpha}$ states, can be cumbersome. For this reason we recall that Theorem~\ref{thm:complementaritytoRT} showed that $L \in Z_\mathcal{M}$, so a trivial center implies a trivial RT formula. The intuition is that an interesting holographic code features a variety of superselection sectors, each representing a different geometry with a different area. The center $Z_\mathcal{M}$ is the set of operators acting proportionally to the identity on each sector. Thus, if the center is trivial, there is only one superselection sector, so there can only be one area. This provides a convenient shortcut for analysing the holographic properties of codes. In the next section we will practice this recipe on various examples. \section{Discussion} \label{sec:discussion} Toy models for holographic quantum error correction serve as a microcosm for understanding AdS/CFT. In this work we have reformulated and extended the framework of \cite{harlow2017ryu} with a uniqueness result and several examples. Together these make holographic quantum error correction `more concrete' in the sense that they pave the road to more complex examples. In this discussion we briefly summarize the ways in which our construction differs from previous work, and also list some future directions. The construction of \cite{harlow2017ryu} is, of course, central to our work and to discussions of holographic quantum error correction in general. However, we made several changes to the formalism to facilitate our particular viewpoint. Here is a brief summary of these changes: \begin{description} \item[Code subspace vs encoding isometry.] In holography, we can think of the bulk Hilbert space as `emanating from' the boundary space and physically place the bulk into the boundary.
In this sense, we could consider the space of allowed bulk states a subspace $\H_\text{code} $ of the physical space $\H$, which could be defined via some set of constraints on the boundary qubits. This perspective might be effective for stabilizer codes. However, the codes we discuss in Section~\ref{sec:examples} are more naturally described via a quantum circuit, which is an active transformation. For this reason we explicitly think of the bulk space as a separate space $\H_L$, which is not emanating from the boundary in the same way, and is then mapped to the boundary space via an encoding isometry $V :\H_L \to \H$. Of course, we could still switch to the old picture by defining $\H_\text{code}$ to be the image of $V$. \item[Step by step vs general case.] A key result of \cite{harlow2017ryu} is that many seemingly disparate ideas are actually equivalent: subregion duality, the existence of an RT formula, and entropic properties of the holographic states. This is an illuminating observation about the general properties of holographic quantum error correction codes. However, in our work we are interested in the analysis of particular codes: we want to consider a particular encoding isometry $V$ and obtain its RT formula. To that end, we `unroll' the sequence of equivalences given in Theorem~5.1 of \cite{harlow2017ryu} and focus on the direction that yields a method for computing the area operator. Our derivation of this result in Theorem~\ref{thm:complementaritytoRT} goes into significantly more detail, and the resulting recipe from Section~\ref{sec:recipe} makes the analysis of codes more straightforward. \item[Uniqueness of the algebra.] Following \cite{harlow2017ryu}, we still consider holography to be a property that a code, a bipartition, and a von Neumann algebra can have together. 
But to some extent this is no longer really necessary: we can say that holography is merely a property of a code and a bipartition, because when these are fixed then the von Neumann algebra is unique if it exists. Ideally, we would like to go even further and say that it is a property of a quantum error correction code alone, asserting that every (reasonable) bipartition obeys an RT formula. \end{description} In particular, making `holography' a property of a code alone leaves a couple of open questions. Furthermore, there are several directions in which this framework could be expanded. \begin{description} \item[A `one-sided' RT formula without complementary recovery?] In our Theorem~\ref{thm:complementaritytoRT}, we demonstrate that complementary recovery of $(V,A,\mathcal{M})$ implies a `two-sided' RT formula, an RT formula for both $(V,A,\mathcal{M})$ and $(V,\bar A,\mathcal{M}')$. Indeed, Theorem~5.1 of \cite{harlow2017ryu} shows that the existence of this `two-sided' RT formula is actually equivalent to complementary recovery. So why do we not simply remove $\mathcal{M}$, since it is uniquely determined by complementary recovery? We have not ruled out the possibility of a `one-sided' RT formula exhibited by just $(V,A,\mathcal{M})$ but not by $(V,\bar A,\mathcal{M}')$. Is this mathematically possible? Does a code with such an RT formula possess a sensible physical interpretation? \item[A non-trivial RT formula for all subregions?] The qubits in Example~\ref{ex:cql} are arranged such that every contiguous subregion $A$ of three qubits has a non-trivial RT formula. But when we consider three qubits that are non-adjacent, the RT formula becomes trivial. We would hope that larger holographic error correction codes have sensible and interesting area operators even when $A$ is not contiguous. But is it possible for \emph{every} subregion to have a non-trivial RT formula? We attempted to construct such a code without success.
It is possible that this difficulty is related to the difficulty of obtaining power-law correlations between generic subregions in holographic tensor networks, observed in e.g.\ \cite{Gesteau:2020hoz,Jahn:2020ukq,Cao:2021wrb}. Since the number of possible subregions grows very quickly, this requirement places many constraints on the code. Thus, we conjecture that this is not possible. Is it possible if we restrict the pieces of the subregions to be at least a certain size? \item[A tensor network with superposition of geometries?] The seminal HaPPY construction of Pastawski, Yoshida, Harlow, and Preskill showed that holographic tensor networks can be constructed from a tessellation of hyperbolic space with a fundamental tensor, in their case a perfect tensor. There are many extensions of this construction: for instance, \cite{cao2020approximate} consider replacing the fundamental tensor with skewed Bacon-Shor codes, and \cite{taylor2021holography} consider higher-dimensional tessellations. What happens when we replace the fundamental tensor with one of our circuits? What does operator pushing look like in this scenario? Does the network possess a non-trivial area operator? \item[A holographic stabilizer code?] All the atomic examples that have a non-trivial area operator possess a Toffoli gate, so they are not stabilizer codes. Furthermore, the skewed codes considered by \cite{cao2020approximate}, although they are superpositions of stabilizer codes, are themselves also not stabilizer codes. It appears that the stabilizer formalism places strong limitations on the entanglement properties of the resulting codes, making the design of a stabilizer code with a non-trivial area operator challenging. Is it even possible? \item[Consequences of the uniqueness of $\mathcal{M}$ in quantum gravity?] It is often natural to consider only a subalgebra of the operators in the entanglement wedge of a particular boundary $A$. For example, we might only be interested in local operators.
But an implication of Theorem~\ref{thm:whatalgebra} is that such a von Neumann algebra cannot exhibit complementary recovery. This is clear from the perspective of error correction, but can it be proved from the AdS perspective as well? It is also natural from the AdS perspective to consider sets of operators which are not subalgebras (such as low-point correlators of local bulk operators)---can anything be said about such cases? \item[Extensions of holographic quantum error correction?] The toy models considered in this work, just like the constructions of \cite{harlow2017ryu} and \cite{cao2020approximate}, are restricted to a single time slice. Can they be extended to exhibit dynamics (similarly to what has been proposed for tensor networks~\cite{kohler2019toy})? What about dynamics with decoherence and black hole formation/evaporation? Since the purpose of the toy models is to illuminate and provide more mathematically tractable examples of AdS/CFT, extending them towards the full capabilities of AdS/CFT is a very natural direction. For a fixed geometry, one expects bulk time evolution (for example, on a Rindler wedge) to be implemented approximately as a local operator in the code subspace, the (modular) Hamiltonian--but evolving with the full Hamiltonian of the boundary system should give corrections to this picture. See \cite{jahn2021holographic} for a recent review. \end{description} \section*{Acknowledgements} We thank Scott Aaronson, Elena Caceres, Charles Cao, William Kretschmer, Kunal Marwaha, Alex May, Frank Schindler, Haoyu Sun, and Yuxuan Zhang for detailed comments on the manuscript. We thank Mario Martone for his comments and for his participation in an initial phase of this project. AR and JP are supported by the Simons Foundation through It from Qubit: Simons Collaboration on Quantum Fields, Gravity, and Information. PR is supported by Scott Aaronson's Vannevar Bush Faculty Fellowship.
\section{\label{sec:obs} OBSERVATIONS \& DATA REDUCTION} We observed G330.2+1.0/J1601 with the European Photon Imaging Camera (EPIC) on board {\it XMM-Newton} Observatory on 2008-03-20 (ObsID 0500300101). The pointing (RA[J2000] = 16$^h$ 01$^m$ 3$\hbox{$.\mkern-4mu^s$}$14, Dec[J2000] = --51$^{\circ}$ 33$^{\prime}$ 53$\farcs$6) is to J1601 which is positioned at the center of the nearly circular X-ray shell of SNR G330.2+1.0. We chose the small-window mode (4$\hbox{$.\mkern-4mu^\prime$}$4 $\times$ 4$\hbox{$.\mkern-4mu^\prime$}$3 field of view [FOV] and 6 ms time resolution) for the EPIC pn to search for pulsations of J1601. We chose the full-window mode ($\sim$30$'$ diameter FOV) for the EPIC MOS detectors to study the entire SNR. The medium filter was used for all detectors. We reduced the data using the Science Analysis System (SAS) software package v7.1.0. Our {\it XMM-Newton} observations of G330.2+1.0/J1601 were significantly contaminated by flaring particle background. We removed time bins in which the overall count rate is 2$\sigma$ (the pn) or 3$\sigma$ (the MOS1 and MOS2) above the mean value for time intervals unaffected by flaring background. Time intervals including a considerable contamination by the flaring background ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 50\% above the average quiescent rate) were eliminated by these time-filters. After the time filtering, 26, 31, and 33 ks exposures for the pn\footnote{The $\sim$30\% deadtime-corrected exposure for the small window mode of the pn is 18.3 ks.}, MOS1, and MOS2, respectively, are available for further data analysis, which is $\sim$40--45\% of the total exposure. We then reduced the data following the standard screening of event pattern (PATTERN $\leq$ 12 for the MOS1/2 and PATTERN $\leq$ 4 for the pn) and hot pixels (FLAG = 0). 
(For the timing analysis of J1601, we used a longer exposure while choosing a smaller aperture and more strict event pattern criteria as described in \S~\ref{sec:cco_time}.) There are stable components of instrumental background in the EPIC detectors. The primary components that could affect this work are Al-K ($E$ $\sim$ 1.5 keV) and Si-K ($E$ $\sim$ 1.7 keV) fluorescence lines due to the interactions of high energy particles with the structure surrounding the detectors and the detectors themselves\footnote{{\it XMM-Newton} Users' Handbook, \S~3.3.7.2.}. We removed these events from our image analysis by excluding narrow energy bands centered on these lines. Our background-subtracted source spectra show little evidence of these lines. Thus, we believe that the impact of contamination from this instrumental background on our EPIC data analysis is negligible. Because of the severe contamination by the flaring background, photon statistics of the filtered {\it XMM-Newton} data are significantly lower than originally intended. Thus, in addition to the {\it XMM-Newton} data, we use the {\it Chandra} data (ObsID 6687) for the spectral analysis to improve overall photon statistics. The high angular resolution of {\it Chandra} data is also essential to measure the widths of the thin X-ray filaments of G330.2+1.0. We performed the {\it Chandra} observation of G330.2+1.0 with the I-array of the Advanced CCD Imaging Spectrometer (ACIS) on 2006-05-22 as part of the Guaranteed Time Observations program. The effective exposure after the data screening is $\sim$50 ks, and thus photon statistics for the SNR and CCO in the ACIS data are similar to those obtained by the EPIC MOS1+MOS2 data. The details of the {\it Chandra} observation and data reduction are described by Park et al. (2006). 
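The flare screening described above (rejecting time bins whose count rate lies 2--3$\sigma$ above the quiescent mean) can be sketched in a few lines of NumPy. This is an illustrative stand-in for the SAS light-curve filtering, with invented bin rates; the quiescent level is re-estimated iteratively so that the flares themselves do not bias the clipping threshold:

```python
import numpy as np

def flare_filter(rates, nsigma=3.0, n_iter=5):
    """Boolean good-time mask for a binned light curve: keep bins whose
    count rate stays below (quiescent mean) + nsigma * (quiescent std),
    where mean and std are re-estimated iteratively on the kept bins."""
    mask = np.ones(rates.shape, dtype=bool)
    for _ in range(n_iter):
        mu, sd = rates[mask].mean(), rates[mask].std()
        mask = rates < mu + nsigma * sd
    return mask

# toy light curve: quiescent rate ~1 ct/s with two strong flare bins at the end
rates = np.array([1.1, 0.9] * 25 + [10.0, 12.0])
mask = flare_filter(rates, nsigma=3.0)
```

Applying the mask keeps the fifty quiescent bins and rejects both flare bins; the surviving bins define the good-time intervals used for spectral extraction.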
\section{\label{sec:cco_spec} X-Ray Spectrum of the Central Compact Object} We extracted the spectrum of J1601 ($\sim$530, 290, and 310 counts in the 0.5--10 keV band for the pn, MOS1, and MOS2, respectively) from a circular region with a radius of 15$^{\prime\prime}$. The background spectrum was estimated from two nearby source-free regions with a radius of 30$^{\prime\prime}$. The background counts contribute $\sim$15\% to the pn spectrum and $\sim$9\% to the MOS spectra. The background-subtracted, deadtime-corrected count rates (in the 15$^{\prime\prime}$ radius aperture) are $\sim$0.025 (pn) and $\sim$0.009 counts s$^{-1}$ (MOS1/2). The {\it Chandra} spectrum of J1601 was extracted from an $\sim$2$^{\prime\prime}$ circular region. The background spectrum was extracted from the surrounding annular region with the inner and outer radii of 4$^{\prime\prime}$ and 15$^{\prime\prime}$, respectively \citep{park06}. The background-subtracted ACIS count rate is $\sim$0.012 counts s$^{-1}$. The total source counts combining all the {\it XMM-Newton} EPIC and {\it Chandra} ACIS data are $\sim$1700 counts, which is about three times higher than those used in the previous work. Each source spectrum was binned to contain a minimum of 20 counts per energy bin. We simultaneously fit four spectra of J1601 obtained by the {\it XMM-Newton} pn, MOS1, MOS2, and the {\it Chandra} ACIS. Initially we fit the spectrum with a BB model. The best-fit BB temperature and the absorbing column ($T_{\rm BB}$ = 5.6$^{+0.3}_{-0.4}$ MK, $N_{\rm H}$ = 2.46$^{+0.38}_{-0.35}$ $\times$ 10$^{22}$ cm$^{-2}$, $\chi^{2}$/$\nu$ = 73.4/74, errors are at 90\% confidence level [C.L.], hereafter) are consistent with those by Park et al. (2006). The implied emitting area is small ($R$ $\sim$ 0.44 $d_5$ km, where $d_5$ is the distance to the CCO in units of 5 kpc), which is also in agreement with the previous work. 
The observed flux ($f_{\rm X}$ $\sim$ 1.22 $\times$ 10$^{-13}$ ergs cm$^{-2}$ s$^{-1}$ in the 1--10 keV band) is consistent with the previous {\it Chandra} results as well. Although a PL model may also fit the data, a very steep photon index ($\Gamma$ = 5.6$^{+0.5}_{-0.4}$) and a high $N_{\rm H}$ = 5.4$\pm$0.6 $\times$ 10$^{22}$ cm$^{-2}$ ($\chi^{2}$/${\nu}$ = 93.6/74) are implied (The PL fit is not acceptable with $\chi^2_{\nu}$ $>$ 2, when $N_{\rm H}$ is fixed at 2.5--3.0 $\times$ 10$^{22}$ cm$^{-2}$). This PL shape is too soft for typical synchrotron emission from the neutron star's magnetosphere, and the fit is statistically worse than that by the BB model. Thus, we conclude that X-ray emission of J1601 is consistent with a BB spectrum. Using only {\it XMM-Newton} data, we estimate the {\it same} flux ($f_{\rm 1-10 keV}$ $\sim$ 1.25 $\times$ 10$^{-13}$ ergs cm$^{-2}$ s$^{-1}$), which indicates that flux variations in the two years between the {\it Chandra} (2006-05-22) and {\it XMM-Newton} (2008-03-20) observations are negligible ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$5\%). While the BB model can fit the overall X-ray spectrum of J1601, it may not be physically adequate to describe thermal emission from a neutron star. A neutron star is not a perfect BB, and likely has an atmosphere whose emission from the outermost H-layer may dominate the observed spectrum (e.g., Pavlov \& Zavlin 2000). The observed spectrum of the thermal radiation from a neutron star's surface is substantially affected by the properties of its atmosphere such as chemical composition, magnetic field, gravity, and the energy-dependent opacities (Pavlov et al. 1995 and references therein). A typical observational effect may be a higher temperature and a smaller emitting area than the ``true'' values when the spectrum is fitted by a simple BB model (e.g., Zavlin et al. 1998; Pavlov et al. 2000). 
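The small emitting area quoted above follows directly from the blackbody normalization, $f = \sigma T^4 (R/d)^2$. A quick numerical check; the unabsorbed bolometric flux of $\sim$5 $\times$ 10$^{-14+1}$ ergs cm$^{-2}$ s$^{-1}$ used below is an illustrative assumption (a few times the observed 1--10 keV flux, given the heavy absorption), not a fitted value:

```python
import math

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
KPC_CM = 3.0857e21     # 1 kpc in cm

def bb_radius_km(flux_bol, t_kelvin, d_kpc):
    """Emitting radius implied by a blackbody fit:
    f = sigma * T^4 * (R/d)^2  =>  R = d * sqrt(f / (sigma * T^4))."""
    d_cm = d_kpc * KPC_CM
    r_cm = d_cm * math.sqrt(flux_bol / (SIGMA_SB * t_kelvin**4))
    return r_cm / 1e5   # cm -> km

# T_BB = 5.6 MK at d = 5 kpc; assumed unabsorbed bolometric flux of 5e-13 cgs
r_km = bb_radius_km(5e-13, 5.6e6, 5.0)   # ~0.46 km, far below a ~10 km star
```

Note that the inferred radius scales linearly with the assumed distance, which is why the text quotes it in units of $d_5$.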
Therefore, taking advantage of the improved photon statistics in the {\it XMM-Newton} + {\it Chandra} data, we fit the observed X-ray spectrum of J1601 with a hydrogen neutron star atmosphere model (NSA model in XSPEC, Pavlov et al. 1995; Zavlin et al. 1996). First, we fit the spectrum of J1601 with a single NSA model. The magnetic fields of CCOs may be significantly lower or higher than the ``canonical'' magnetic field of a pulsar, $B=10^{12}$ G (e.g., Pavlov \& Zavlin 2000; Bignami et al. 2003). Based on the lack of a strong absorption feature in the observed spectrum of J1601, which would be interpreted as an electron cyclotron line in an NSA spectrum, we can only exclude fields in the range of $\sim (2-8)\times 10^{11}$ G. We thus fit our spectrum using all three magnetic field values for which the NSA models are available in XSPEC: $B=0$ (applicable for low fields, $B\lesssim 10^{10}$ G), $10^{12}$, and $10^{13}$ G. We fix the neutron star mass and radius at the canonical values $M_{\rm ns} = 1.4 M_\odot$ and $R_{\rm ns}=10$ km, which correspond to the gravitational redshift parameter $g_{\rm r}=(1-2GM_{\rm ns}/R_{\rm ns}c^2)^{1/2}= 0.766$, and vary the effective temperature, distance, and $N_{\rm H}$. The fits are statistically acceptable for all three magnetic field values. For $B=0$ and $10^{13}$ G, the redshifted best-fit effective temperatures, $T_{\rm eff}^\infty = 2.6$ and 3.6 MK, respectively, are lower than the $T_{\rm BB}$, while $T_{\rm eff}^\infty=5.7$ MK for $B = 10^{12}$ G is about the same as the BB temperature\footnote{The higher $T_{\rm eff}$ and the correspondingly larger distance in the $B=10^{12}$ G NSA fit are caused by the fact that the high-energy part of the observed spectrum coincides with the low-energy wing of the gravitationally redshifted electron cyclotron feature, centered at $\approx 9$ keV.
Therefore, the high-energy tail of the X-ray spectrum is much softer than those in the models with very low or very high magnetic fields. Although we cannot exclude this field value based on the observational data available, we note that the models with slightly lower fields ($B$ $\sim$ 2--8 $\times$ 10$^{11}$ G) would not fit the data, while the models with higher fields (e.g., $B\gtrsim 2\times 10^{12}$ G) would yield essentially the same results as the $B=10^{13}$ G model.}. The best-fit $N_{\rm H}$ values, 3.1, 2.4, and 2.7 $\times 10^{22}$ cm$^{-2}$, for $B=0$, $10^{12}$, and $10^{13}$ G, respectively, are consistent with that for SNR G330.2+1.0 ($N_{\rm H}\sim 2.5$--$3 \times 10^{22}$ cm$^{-2}$). The best-fit distances, 24, 169, and 55 kpc, for $B=0$, $10^{12}$, and $10^{13}$ G, respectively, are unreasonably large for a Galactic object. To reconcile them with the distance to the SNR ($d\sim 5$ kpc; McClure-Griffiths et al.\ 2001), we have to assume that the sizes of the X-ray emitting areas are much smaller than $R_{\rm ns}=10$ km ($R\sim 2$, 0.3, and 0.9 km for $B$ = 0, 10$^{12}$, and 10$^{13}$ G, respectively). Thus, although the fits with the $B=0$ and $10^{13}$ G NSA models yield lower temperature and larger sizes than the BB fit, the observed emission cannot be interpreted as emitted from the entire neutron star surface. Based on these results, it is natural to assume that the X-ray emission in J1601 originates from small hot spots on the neutron star's surface, such as suggested for other CCOs (e.g., the CCO in Cassiopeia A, Pavlov et al. 2000). In this scenario, the observed thermal X-ray emission consists of two characteristic components: the hot component from a small region(s) and the cool emission from the rest of the stellar surface (e.g., Pavlov et al. 2000). Therefore, we fit the observed spectrum of J1601 with two-component NSA models, assuming $B$ = 0, 10$^{12}$, or 10$^{13}$ G, the same for both components. 
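The gravitational redshift parameter and the redshifted cyclotron energy quoted for these NSA fits can be verified numerically. The sketch below is a consistency check, not a new fit; the electron cyclotron relation $E_{\rm ce} \simeq 11.6\,(B/10^{12}~{\rm G})$ keV is the standard non-relativistic value:

```python
import math

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
C = 2.998e10        # speed of light, cm s^-1
M_SUN = 1.989e33    # solar mass, g

def g_redshift(m_solar=1.4, r_km=10.0):
    """Gravitational redshift parameter g_r = (1 - 2GM/(R c^2))^(1/2)."""
    m, r = m_solar * M_SUN, r_km * 1e5
    return math.sqrt(1.0 - 2.0 * G * m / (r * C * C))

g_r = g_redshift()            # ~0.766 for the canonical 1.4 M_sun, 10 km star
# redshifted electron cyclotron energy for B = 10^12 G:
e_cyc_kev = g_r * 11.6 * 1.0  # ~9 keV, matching the feature in the NSA model
```

The $\approx$9 keV line center quoted in the footnote is thus simply the 11.6 keV cyclotron energy for $B = 10^{12}$ G, reduced by the surface redshift factor.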
For the soft component, we fix the distance to the CCO and the size of the emitting region at $d$ = 5 kpc and $R$ = 10 km, respectively, while varying the surface temperature. For the hard component, both the distance and surface temperature are varied. The foreground column $N_{\rm H}$ is tied between the two components and fitted. The results are summarized in Table~\ref{tbl:tab1}. The X-ray spectrum of J1601 and the best-fit two-component NSA model with a high magnetic field ($B$ = 10$^{13}$ G) are presented in Fig.~\ref{fig:fig1}. We note that, although the additional soft component is statistically not required, implying only an upper limit on the observed flux (e.g., $f_{\rm 1-10 keV}$ $<$ 5 $\times$ 10$^{-14}$ ergs cm$^{-2}$ s$^{-1}$ at 90\% C.L.), the two-component model likely represents a physically more realistic picture than the one-component model to account for the small hot region implied by the single NSA model. Therefore, we hereafter discuss the spectral nature of J1601 based on the two-component NSA model fits. \section{\label{sec:cco_time} Search for Pulsations from the Central Compact Object} Park et al. (2006) searched for X-ray pulsations from J1601 in the 50 ks {\it Chandra} ACIS observation (3.24 s time resolution) and reported a marginally significant (at a $\approx2\sigma$ level) periodic signal ($P$ = 7.48 s, a pulsed fraction $f_{\rm p}$ $\sim$ 30\%). One of the goals of our follow-up {\it XMM-Newton} observation was to test the significance of the previously reported period candidate, and to search for periodicity outside the frequency range accessible with {\it Chandra} data (the EPIC pn in the small-window mode provides a much better time resolution of 6 ms). However, as discussed in \S~\ref{sec:obs}, the flaring background hampered our timing analysis of the EPIC pn data.
For the timing analysis, we performed the data reduction following the methods described in \S~\ref{sec:obs}, except that (1) we used 44 ks of uninterrupted pn data after removing major flares, (2) we extracted photons from a smaller circular aperture (8$\farcs$4 in radius) than the standard pointlike source extraction area for the EPIC (15$^{\prime\prime}$ in radius), and (3) we applied a stricter screening by selecting events with PATTERN = 0. After this data reduction, we obtained a total of 761 photons (including $\approx30\%$ background). The arrival times of these photons were recalculated to the solar system barycenter using the SAS {\tt barycen} tool. As with the {\it Chandra} data, we used the $Z_{m}^2$ test \citep{buc83} to search for periodicities in the $5\times10^{-5}-80$ Hz frequency range. We calculated $Z_{1}^2$ at $3.5~\times~10^{7}$ equally spaced frequencies, which corresponds to oversampling by a factor of 10 compared to the expected width $T_{\rm span}^{-1}\approx 22~\mu$Hz of the $Z_1^2$ peaks, and guarantees that we miss no peaks. The most significant peaks we find have $Z_{1}^{2} = 31.82, 31.68,$ and $30.56$ at $f=72.363177(5) {\rm Hz}$, $12.011894(5) {\rm Hz}$, and $3.006574(5) {\rm Hz}$. However, even for the maximum value of $Z_{1}^{2}=31.82$, the corresponding significance is low: only 56.5\%, for the number of independent trials $\mathcal{N}=f_{\rm max} T_{\rm{span}}\approx 3.5\times 10^{6}$. Therefore, most likely these peaks are due to noise. We also calculated $Z_{m}^{2}$ for $m=1,2$ around the tentative pulsation frequency ($f = 0.1336185$--$0.1336999$ Hz) suggested by the {\it Chandra} data \citep{park06}. We find a broad peak, $Z_1^2=10.06$ at $f=0.133661(5) {\rm Hz}$, overlapping with the {\it Chandra} peak.
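The $Z_m^2$ statistic itself is simple to compute. The toy simulation below borrows the photon count, time span, and candidate frequency from the text, but the 40\% sinusoidally pulsed light curve is invented for illustration; it shows why a genuine signal stands out at the true frequency while off-frequency trials follow the $\chi^2_{2m}$ noise distribution:

```python
import numpy as np

def z_squared(t, f, m=1):
    """Z_m^2 statistic (Buccheri et al. 1983): for photon arrival times t
    and trial frequency f, sum the Rayleigh powers of the first m harmonics.
    Under pure noise, Z_m^2 follows a chi-square distribution with 2m dof."""
    z = 0.0
    for k in range(1, m + 1):
        ph = 2.0 * np.pi * k * f * t
        z += np.cos(ph).sum() ** 2 + np.sin(ph).sum() ** 2
    return 2.0 * z / len(t)

# toy data: 761 photons over 44 ks, 40% pulsed at 0.1336 Hz (rejection sampling)
rng = np.random.default_rng(1)
f_true, span = 0.1336, 44_000.0
times = []
while len(times) < 761:
    ti = rng.uniform(0.0, span)
    if rng.uniform() < 0.5 * (1.0 + 0.4 * np.sin(2.0 * np.pi * f_true * ti)):
        times.append(ti)
times = np.asarray(times)

z_on = z_squared(times, f_true)    # large: periodic signal present
z_off = z_squared(times, 0.0777)   # off-frequency: noise level, ~chi^2 (2 dof)
```

For a blind search, the single-trial chance probability $\exp(-Z_1^2/2)$ must still be multiplied by the number of independent trials, which is why a peak of $Z_1^2 \approx 32$ remains insignificant over $\sim$3.5 $\times$ 10$^6$ frequencies.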
However, even a single trial significance (assuming that the periodicity found in the {\it Chandra} data is real and hence the pulsation frequency is known) for the peak in the pn data is only marginal ($2.7\sigma$ with the corresponding $f_{\rm p}=24\%$), while for a blind search the significance of this peak is negligible. Finally, we searched for periodicity in the combined {\it Chandra} and {\it XMM-Newton} data. Allowing for a non-zero period derivative typical for anomalous X-ray pulsars (AXPs), we calculate $Z_1^2$ on a two-dimensional grid: $f=5\times10^{-5}-0.15$ Hz and $\dot{P}=0-3\times10^{-13}$ s s$^{-1}$. The maximum value of $Z_{1}^2$ is 29.9 at $f = 0.1336615$ Hz and $\dot{P}=5\times10^{-14}$ s s$^{-1}$. Although the frequency corresponding to the maximum $Z_1^2$ is consistent with that of the peak found in the {\it Chandra} data alone, the significance of the peak found in the joint data is very low. Thus, we conclude that the tentative 7.48 s periodicity reported with {\it Chandra} is not confirmed by the {\it XMM-Newton} data. \section{\label{sec:snr} Spectral Analysis of the Supernova Remnant} {\it XMM-Newton} images of SNR G330.2+1.0 are presented in Fig.~\ref{fig:fig2}. Since the small-window mode was used for the pn, the SNR is detected only on the MOS detectors. The {\it Chandra} image has revealed that G330.2+1.0 is a shell-type SNR with enhanced emission in the thin SW and NE parts of the shell \citep{park06}. Our {\it XMM-Newton} images confirm this general morphology. We further reveal spectral variations across the SNR: i.e., the E region of the shell is softer than other regions (Fig.~\ref{fig:fig3}). Also, there is a faint hard extended feature at $\sim$2$'$ SW from the CCO (marked with an arrow in Figs.~\ref{fig:fig2} and \ref{fig:fig3}). These features were not clearly seen in the {\it Chandra} data because of their positions in the ACIS-I chip gaps.
We extracted the spectra from bright portions of the SNR shell in SW and NE (Fig.~\ref{fig:fig4}). The SW spectrum was extracted from the $\sim$1$'$ $\times$ 3$'$ brightest filament in the SW shell, which contains $\sim$1700 counts ($\sim$25\% of them are background) for the MOS1+MOS2 data. This SW shell contains $\sim$1300 counts (including $\sim$13\% background) for the ACIS-I data. The NE spectrum was extracted from a circular region ($\sim$30$^{\prime\prime}$ in radius) in the NE parts of the shell. This region contains $\sim$570 counts ($\sim$30\% of them are background) and $\sim$540 counts (including $\sim$30\% background) for the MOS1+MOS2 and the ACIS-I, respectively. We used the 0.5--10 keV band spectrum for the spectral analysis, and the source spectra were binned to contain a minimum of 20 counts per energy bin. Because of the non-uniform particle background across the MOS detectors, the background spectrum for the {\it XMM-Newton} data was carefully selected for faint extended sources. We chose a few background regions close to the source regions while avoiding any detected (by {\it Chandra}) point sources. We find generally consistent results between the background subtracted {\it XMM-Newton} spectra and the {\it Chandra} data. Thus, we believe that our background estimates for the {\it XMM-Newton} data are acceptable. The X-ray spectra of G330.2+1.0 extracted from the SW and NE regions are shown in Fig.~\ref{fig:fig5}. Our {\it Chandra} and {\it XMM-Newton} data show featureless continuum-dominated spectra for the bright SW and NE filaments. For each region, we performed a simultaneous PL model fit for all the three spectra obtained by the MOS1, MOS2, and ACIS-I (Fig.~\ref{fig:fig5}). The best-fit parameters are presented in Table~\ref{tbl:tab2}. The high absorbing column for the SNR shell is consistent with that for the CCO J1601, supporting the SNR-CCO association. 
The PL photon indices are typical for synchrotron emission from highly accelerated relativistic electrons. Thus, we fit these SW and NE shell spectra with the SRCUT model, which describes X-ray synchrotron emission from the shock-accelerated electrons that are also responsible for the observed radio counterpart \citep{rey98,rk99}. We assume the radio spectral index $\alpha$ = 0.3 (where $S_{\nu}$ $\propto$ $\nu$$^{-\alpha}$) as measured from the entire SNR \citep{green01}. The results of the SRCUT model fits are presented in Table~\ref{tbl:tab3}. The spectrally-hard, extended emission feature at $\sim$2$'$ SW of the CCO is faint: we obtain $\sim$490 counts from this feature (MOS1+MOS2) in which $\sim$60\% of the photons are background. There is no evidence for line features, and the X-ray spectrum may be fitted by a PL of $\Gamma$ $\sim$ 2 with $N_{\rm H}$ fixed at 2.6 $\times$ 10$^{22}$ cm$^{-2}$. This overall spectral shape, the filamentary morphology and size (about a few $d_5$ pc), and the proximity to J1601 raise an intriguing possibility that this feature might be related to the CCO (e.g., a pulsar wind nebula). Alternatively, it could be a part of the SNR shell. However, reliable spectral modeling of this faint feature is difficult because of the poor photon statistics. Thus, we do not attempt any further analysis or discussion of this feature. Follow-up deep X-ray observations are required to reveal the origin of this potentially intriguing feature. On the other hand, the E parts of the SNR shell are spectrally softer than other regions (Figs.~\ref{fig:fig2} and \ref{fig:fig3}). The E region spectrum is extracted from an $\sim$1$\hbox{$.\mkern-4mu^\prime$}$3 $\times$ 2$'$ region in the E parts of the shell (Figs.~\ref{fig:fig4} and \ref{fig:fig6}). This region contains $\sim$730 counts ($\sim$45\% of them are background) for the MOS1+MOS2 data.
Since the central part of this region falls in the ACIS-I chip gap, we use only the {\it XMM-Newton} data for the spectral analysis. The best-fit PL photon index for region E is significantly steeper ($\Gamma$ $\sim$ 4--5, $\chi^2_{\nu}$ $\sim$ 1.3--1.4, depending on the assumed $N_{\rm H}$) than those for the SW and NE regions. In fact, the PL of $\Gamma$ = 2.3 (as an average for the SW and NE shell) cannot fit the observed spectrum of region E ($\chi^2_{\nu}$ $\sim$ 1.8--3.2, depending on the assumed $N_{\rm H}$) because of soft excess emission at $E$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 2 keV. This suggests the presence of soft thermal emission in the E part of the shell. Thus, we fit the region E spectrum with a plane-shock (PSHOCK) model \citep{bor01}. Since the photon statistics are poor for this faint feature, we fixed the metal abundances at the solar values \citep{ag89}. Initially we fit the observed spectrum with a single PSHOCK model, assuming that X-ray emission in region E is entirely thermal in origin. Then, we used a two-component model (PSHOCK + PL) assuming that there is underlying nonthermal emission as seen in the SW and NE filaments. Results from these two model fits are statistically indistinguishable ($\chi^2_{\nu}$ $\sim$ 1.2 for either model fit). The main difference is that the best-fit electron temperature ($kT$ = 1.4$^{+0.9}_{-0.6}$ keV) for the single PSHOCK model appears to be somewhat higher than that for the PSHOCK + PL model ($kT$ = 0.7$^{+1.3}_{-0.3}$ keV). The best-fit volume emission measure ($EM$) for the two-component model is higher by a factor of $\sim$2 than that for the single PSHOCK model. Since the uncertainties of these measurements are large because of poor photon statistics, it is difficult to discriminate between these models.
Thus, we assume plausible ranges of the best-fit electron temperature and emission measure in the following discussion, based on these two models: i.e., $kT$ $\sim$ 0.7--1.4 keV and $EM$ $\sim$ 0.6--1.4 $\times$ 10$^{56}$ cm$^{-3}$. The results from the PSHOCK + PL model fit for the E region are summarized in Table~\ref{tbl:tab2}. \section{\label{sec:disc} Discussion} \subsection{\label{subsec:cco} Characteristics of the Central Compact Object} The previous {\it Chandra} data analysis revealed several characteristics of J1601: a thermal spectrum, a location at the center of SNR G330.2+1.0, a pointlike morphology without any extended nebulosity, the absence of counterparts at other wavelengths, and a large limit on the X-ray-to-optical flux ratio \citep{park06}. J1601 also shows no evidence for long-term variability, which is confirmed by our new {\it XMM-Newton} data showing a constant X-ray flux ($f_{\rm 1-10 keV}$ $\sim$ 1.2 $\times$ 10$^{-13}$ ergs cm$^{-2}$ s$^{-1}$) over the $\sim$2 yr period. Thus, J1601 is most likely the CCO associated with SNR G330.2+1.0. Park et al. (2006) noted that the BB temperature is higher than the surface temperature expected from the standard cooling of a young neutron star, and that the estimated emitting area is too small to be a neutron star. It was also noted by Park et al. (2006) that the suggested candidate pulsations with a long period, if confirmed, would have been typical for an AXP. Our results from the {\it XMM-Newton} and {\it Chandra} data analysis indicate that the hot component emission ($T^{\infty}_{\rm h}$ $\sim$ 2.5--5.5 MK, depending on the assumed $B$) must originate from a small region of $R_{\rm h}$ $\sim$ 0.4--2 $d_5$ km. The estimated size of the hot region varies depending on the assumed values of the magnetic field and the distance to the CCO.
Nonetheless, within the ranges of parameters that we consider in this work ($B$ = 0, 10$^{12}$, or 10$^{13}$ G, and $d$ = 5--10 kpc), the size of the hot region is significantly smaller than the canonical size of the neutron star; i.e., the largest emitting region would be $R_{\rm h}$ $\sim$ 4 km, for $B$ = 0 and $d$ = 10 kpc. A small hot region(s) has been suggested in other CCOs, probably indicating X-ray emission from a locally-heated region such as the hot polar cap (e.g., Pavlov et al. 2000). On the other hand, the estimated surface temperature of the neutron star is significantly lower ($T^{\infty}_{\rm s}$ $<$ 1.5 MK) than that of the hot region. According to the standard cooling curves of a neutron star (e.g., Tsuruta 1998; Yakovlev \& Pethick 2004), this temperature limit corresponds to a lower limit of $\sim$10$^{2}$--10$^4$ yr on the neutron star's age. This neutron star age is in plausible agreement with the estimated age of SNR G330.2+1.0 (see \S\S~\ref{subsec:nonthermal} and \ref{subsec:thermal}, and Torii et al. 2006). The overall characteristics such as the low $T^{\infty}_{\rm s}$, the high $T^{\infty}_{\rm h}$, and the small $R_{\rm h}$ are consistent with those found in the prototype CCOs in Galactic SNRs such as Cas A and Vela Jr. \citep{pav00,pav01}. Since the X-ray flux from the small hot region contributes a significant fraction of the observed flux ($>$ 50\% of the total flux in the 1--10 keV band), the observed X-ray emission from J1601 may be expected to pulsate. However, our {\it XMM-Newton} data do not show any conclusive evidence for pulsations, indicating that the previously suggested pulsations are unlikely to be real. We note that the low photon statistics in the EPIC-pn data are not sufficient to detect pulsations with an intrinsic pulsed-fraction $f_{\rm p}$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$40\%.
With the combined {\it Chandra} and {\it XMM-Newton} data, the detection of pulsations with $f_{\rm p}$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$25\% is not feasible. Thus, the presence of an X-ray pulsar for J1601 is not ruled out by the current data. The neutron star's magnetic field, which would provide critical information on the nature of the object, remains unknown. Deep X-ray observations are required to draw firm conclusions on the nature of J1601, including its pulsations, magnetic field, age, and the origin of its X-ray emission. \subsection{\label{subsec:nonthermal} Nonthermal X-Ray Emission of the Supernova Remnant} Our joint spectral analysis of the {\it XMM-Newton} and {\it Chandra} data of G330.2+1.0 shows that X-ray emission from the bright filaments of the SNR shell is dominated by a PL continuum. We find that this PL spectrum prevails over most parts of the SNR, which was also suggested by a previous study \citep{torii06}. The best-fit PL model for the bright SW and NE regions of the shell indicates photon indices of $\Gamma$ $\sim$ 2.1--2.5, which are typical for synchrotron emission from shock-accelerated relativistic electrons. Although thermal plasma models may also fit the observed spectra, the estimated electron temperatures are high ($kT$ $\sim$ 4--5 keV), and low metal abundances ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$ 0.1 solar, Anders \& Grevesse 1989) are required. While a thermal origin of X-ray emission from the SNR shell may not be completely ruled out by the current data, the estimated plasma temperature and abundances appear to be unusual for SNRs. Thus, except for region E (\S~\ref{subsec:thermal}), we discuss this SNR based on the nonthermal interpretations of X-ray emission.
According to our SRCUT model fits of the SW and NE filaments, the best-fit exponential roll-off frequency, $\nu_{\rm rolloff}$ $\sim$ 1.6--3.3 $\times$ 10$^{17}$ Hz, is relatively high among Galactic SNRs \citep{rk99}, while being similar to those for SN 1006 \citep{bamba03} and the bright TeV $\gamma$-ray emitting SNR G347.3--0.5 \citep{laz04}. If the particle (electron) acceleration is limited by synchrotron losses, the cutoff frequency corresponding to the maximum electron energy $E_{\rm max}$ is $\nu_{\rm m}$(loss) $\propto$ $B$ $E^2_{\rm max}$(loss). Since $E_{\rm max}$(loss) $\propto$ $B^{-{1\over2}}$, $\nu_{\rm m}$(loss) is independent of $B$, and depends only on the shock velocity: e.g., assuming a strong shock with a compression ratio of $>$ 4 and a shock normal perpendicular to $B$, the cutoff frequency is $\nu_{\rm m}$(loss) $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 3 $\times$ 10$^{16}$ $\eta$ $v^2_3$ Hz, where $v_3$ is the shock velocity in units of 10$^3$ km s$^{-1}$, and the ratio of the electron scattering mean free path to the gyroradius is $\eta$ $\geq$ 1 (e.g., Reynolds 1998; Lazendic et al. 2004). As discussed below and in \S~\ref{subsec:thermal}, the shock velocity appears to be roughly $v_s$ $\sim$ 4000 km s$^{-1}$ for G330.2+1.0, and thus we estimate $\nu_{\rm m}$(loss) $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 5 $\times$ 10$^{17}$ Hz. Unless the shock velocity is much higher and/or the particle acceleration is inefficient ($\eta$ $\gg$ 1), the estimated $\nu_{\rm m}$(loss) is comparable with the observed $\nu_{\rm rolloff}$, suggesting that the particle acceleration of electrons in G330.2+1.0 is likely limited by synchrotron losses rather than the age of the SNR. The peak frequency of a synchrotron-emitting electron is $\nu_{\rm p}$ = 1.8 $\times$ 10$^{18}$ $E^2_{\rm e} B$ Hz, where $B$ is the postshock magnetic field perpendicular to the shock normal, and $E_{\rm e}$ is the electron energy.
For $\nu_{\rm p}$ $\sim$ 7 $\times$ 10$^{17}$ Hz (or $\sim$3 keV) representing the typical X-ray photons based on the observed spectrum of the nonthermal filaments in G330.2+1.0, the corresponding electron energy is $E_{\rm e}$ = 0.62 $B^{-{1\over2}}$ ergs. The characteristic synchrotron loss time scale for such electrons can then be estimated to be $\tau_{\rm loss}$ = 630 $E^{-1}_{\rm e}$ $B^{-2}$ s = 1017 $B^{-{3\over2}}$ s. We estimate $\tau_{\rm loss}$ by measuring the widths of the bright nonthermal filaments of G330.2+1.0 using {\it Chandra} images (Fig.~\ref{fig:fig7}). We construct projected intensity profiles across the bright SW filaments by averaging the photon counts (in 4$^{\prime\prime}$ pixel bins) over the 40$^{\prime\prime}$ segments along the filaments. We fit these 1-D intensity profiles with a Gaussian to estimate the widths of the filaments. We note that high resolution {\it Chandra} images of bright X-ray synchrotron filaments in young SNRs show typical substructures of a broad exponential downstream region and a much steeper flux decay in the upstream (e.g., Bamba et al. 2003). G330.2+1.0 is more distant than other young SNRs (that show bright nonthermal filaments), and the X-ray shell is relatively faint, which does not allow us to resolve such a substructure. Since the downstream region is observed to dominate the width of the filaments, we assume a negligible contribution from the upstream emission in the widths of the filaments to measure the {\it downstream} widths of the filaments with a simple Gaussian model. The measured widths are $\sim$12$^{\prime\prime}$--16$^{\prime\prime}$ (FWHM), which correspond to physical sizes $D$ $\sim$ 0.3--0.4 $d_5$ pc. Because of the far distance and faint surface brightness of G330.2+1.0, our width measurements could be overestimates due to superpositions of thinner filaments.
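The synchrotron scaling relations quoted above can be checked directly; the short Python sketch below simply re-evaluates the quoted coefficients, with $\eta$ = 1, $v_3$ = 4, and $B$ = 1 G as illustrative inputs (the field dependence factors out as the stated power laws):

```python
# Check of the synchrotron scaling relations quoted in the text (CGS units).

def nu_m_loss(eta, v3):
    """Loss-limited cutoff frequency: nu_m(loss) >~ 3e16 * eta * v3^2 Hz."""
    return 3e16 * eta * v3**2

def electron_energy(nu_p, B):
    """Invert nu_p = 1.8e18 * E_e^2 * B Hz for E_e in ergs (B in gauss)."""
    return (nu_p / (1.8e18 * B))**0.5

def tau_loss(E_e, B):
    """Synchrotron loss time: tau_loss = 630 / (E_e B^2) s."""
    return 630.0 / (E_e * B**2)

nu_m = nu_m_loss(1.0, 4.0)          # ~4.8e17 Hz, i.e. the ~5e17 Hz in the text
E_e  = electron_energy(7e17, 1.0)   # ~0.62 ergs at B = 1 G, scaling as B^{-1/2}
tau  = tau_loss(E_e, 1.0)           # ~1.0e3 s at B = 1 G, scaling as B^{-3/2}
```

The loss-time coefficient evaluates to $\approx$1010 s, consistent with the quoted 1017 $B^{-3/2}$ s to within rounding of the 0.62 coefficient.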
Nonetheless, the estimated widths are comparable with an average value for the individual filaments in SN 1006 ($\sim$0.2 pc, Bamba et al. 2003). Therefore, we take our measurements as a first-order estimate, and certainly as an upper limit. The advection distance of the downstream electrons from the shock is $D_{\rm ad}$ = $v_s$ $\tau_{\rm loss}$ $r^{-1}$, where $r$ is the compression ratio in the shock. Since direct measurements of the shock velocity of G330.2+1.0 are not available, we consider some plausible estimates for the shock velocity based on several independent approaches. Assuming an electron-ion temperature equipartition in the postshock region, the detected thermal emission of G330.2+1.0 (region E) implies $v_s$ $\sim$ 1000 km s$^{-1}$ (\S~\ref{subsec:thermal}). This value may be considered as a lower limit for $v_s$, because the assumed temperature equilibration between electrons and ions may not have been established in relatively young SNRs with $v_s$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ several 10$^2$ km s$^{-1}$ \citep{ghav07}. G330.2+1.0 shows similar characteristics (e.g., the SNR age, $\nu_{\rm rolloff}$, and the physical width of the nonthermal filaments) to those of G347.3--0.5 and SN 1006, in which the shock velocities are high ($v_s$ $\sim$ 3000--4000 km s$^{-1}$, e.g., Parizot et al. 2006 and references therein). The ambient density for G330.2+1.0 ($n_0$ $\sim$ 0.1 cm$^{-3}$, \S~\ref{subsec:thermal}) is not unusually high compared with other SNRs (e.g., Bamba et al. 2003). Thus, the actual shock velocity of G330.2+1.0 is likely higher than $v_s$ $\sim$ 1000 km s$^{-1}$, perhaps close to $v_s$ $\sim$ 3000--4000 km s$^{-1}$. In fact, models predict high shock velocities of $v_s$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 2000 km s$^{-1}$ for an efficient particle acceleration (e.g., Ellison et al. 2000, 2004).
Although based on a small sample, an empirical relationship between $\nu_{\rm rolloff}$ and the physical width of the nonthermal filaments $D$ has been derived as $\nu_{\rm rolloff}$ $D^{-2}$ = 2.6 $\times$ 10$^{27}$ $\tau^{-2.96}_{\rm SNR}$ for several young historical SNRs \citep{bamba05}. This empirical relation suggests an SNR age $\tau_{\rm SNR}$ $\sim$ 1000--1200 yr for G330.2+1.0 for the measured $\nu_{\rm rolloff}$ $\sim$ 2--3 $\times$ 10$^{17}$ Hz. The inferred young age and the low ambient density suggest that the SNR may be in a free-expansion or an adiabatic phase, or could be in transition between the two. Assuming an adiabatic phase, the suggested SNR ages imply $v_s$ $\sim$ 2300--2800 km s$^{-1}$ for the SNR radius of $R$ = 7.3 pc (see \S~\ref{subsec:thermal} for the SNR radius). For a free-expansion phase, $v_s$ $\sim$ 5800--7000 km s$^{-1}$ is implied. These high velocities are consistent with those estimated for young SNRs showing an efficient particle acceleration (e.g., Parizot et al. 2006 and references therein). Thus, as a rough estimate obtained by {\it averaging} several values discussed above, we adopt, for simplicity, a shock velocity $v_s$ $\sim$ 4000 km s$^{-1}$ for G330.2+1.0. This shock velocity is admittedly not a measurement and thus only a crude first-order estimate. (We would allow a factor of $\sim$2 uncertainty in this velocity estimate, and within this range, our conclusions as discussed below are not affected.) Assuming $r$ $\sim$ 5--8 for an efficient particle acceleration (e.g., Ellison et al. 2007), we estimate $\tau_{\rm loss}$ $\sim$ 350--600 yr for the measured $D$ $\sim$ $D_{\rm ad}$ $\sim$ 0.3 $d_5$ pc, and thus $B$ $\sim$ 14--20 $\mu$G. The maximum electron energy can be estimated by $E_{\rm max}$ = 2.5 $\times$ 10$^{-7}$ $\nu_{\rm rolloff}^{1\over2}$ $B^{-{1\over2}}$ TeV = 100--144 $B^{-{1\over2}}_{{\mu}{\rm G}}$ TeV, where $B_{{\mu}{\rm G}}$ is the postshock magnetic field in units of $\mu$G \citep{rk99,laz04}.
Thus, $E_{\rm max}$ $\sim$ 22--38 TeV (depending on the measured $\nu_{\rm rolloff}$) is derived. In addition, if we consider a geometrical projection effect in measuring the widths of the nonthermal filaments (e.g., the observed width is $\sim$ 4.6 $\times$ the actual width assuming a spherical shock with an exponential emission profile, Ballet 2006), the estimated $B$ can be a few times higher ($\sim$50 $\mu$G). The estimated $E_{\rm max}$ for G330.2+1.0 suggests that this SNR is a candidate $\gamma$-ray source. For instance, the $\gamma$-ray emission by the inverse Compton (IC) scattering off interstellar photons can be estimated by $E_{\gamma}$ $\sim$ 5.1 $\times$ 10$^{-12}$ $E_{\star}$ $E^2_{\rm e}$ eV, where $E_{\gamma}$ is the average final energy of the up-scattered photons, and $E_{\star}$ is the typical energy for the seed photons \citep{tat08}. Using $E_{\star}$ $\sim$ 7 $\times$ 10$^{-4}$ eV for the cosmic microwave background (CMB) and $E_{\rm e}$ = $E_{\rm max}$ $\sim$ 30 TeV, we estimate $E_{\gamma}$ $\sim$ 3 TeV. However, G330.2+1.0 is not identified in the H.E.S.S. Galactic plane survey catalog \citep{aha06a}. This is probably because G330.2+1.0 is more distant and thus apparently fainter than other TeV-bright SNRs (e.g., G347.3--0.5 and G266.2--1.2). The IC to synchrotron flux ratio is $f_{\rm IC}$/$f_{\rm syn}$ = 8$\pi$$U_{\rm rad}$/$B^2$ $\sim$ 10 $B^{-2}_{{\mu}{\rm G}}$ $\sim$ 0.004--0.1 (where the energy density of the seed CMB photons is $U_{\rm rad}$ $\sim$ 0.25 eV cm$^{-3}$) for the plausible range of $B$ $\sim$ 10--50 $\mu$G in G330.2+1.0. These $f_{\rm IC}$/$f_{\rm syn}$ values are in fact similar to the observed $f_{\rm TeV}$/$f_{\rm X}$ for SNRs G347.3--0.5 and G266.2--1.2 (e.g., Matsumoto et al. 2007 and references therein). Then, the overall X-ray flux of $f_{\rm syn}$ $\sim$ 10$^{-11}$ ergs cm$^{-2}$ s$^{-1}$ for G330.2+1.0 \citep{torii06} implies $f_{\rm IC}$ $\sim$ 10$^{-13}$--10$^{-12}$ ergs cm$^{-2}$ s$^{-1}$.
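The chain of order-of-magnitude estimates above (advection time, postshock field, maximum electron energy, and the IC quantities) can be reproduced with the quoted coefficients; the inputs below ($v_s$ = 4000 km s$^{-1}$, $D$ = 0.3 $d_5$ pc with $d_5$ = 1, $r$ = 5--8, and $E_{\rm e}$ = 30 TeV) are the illustrative values adopted in the text:

```python
pc, yr = 3.086e18, 3.156e7               # cm per parsec, s per year

# Advection time from the downstream filament width: tau = D r / v_s
D, v_s = 0.3 * pc, 4.0e8                 # cm and cm/s (d5 = 1 assumed)
tau_r5 = D * 5 / v_s                     # ~370 yr for compression ratio r = 5
tau_r8 = D * 8 / v_s                     # ~590 yr for r = 8

# Invert tau_loss = 1017 B^{-3/2} s for the postshock field (in microgauss)
B_lo = (1017.0 / tau_r8)**(2.0 / 3.0) * 1e6   # ~14 uG
B_hi = (1017.0 / tau_r5)**(2.0 / 3.0) * 1e6   # ~20 uG

# Maximum electron energy: E_max = 2.5e-7 nu_rolloff^{1/2} B_uG^{-1/2} TeV
E_lo = 2.5e-7 * 1.6e17**0.5 / B_hi**0.5       # ~22 TeV
E_hi = 2.5e-7 * 3.3e17**0.5 / B_lo**0.5       # ~38 TeV

# IC off the CMB: E_gamma ~ 5.1e-12 E_star E_e^2 eV (all energies in eV)
E_gamma_TeV = 5.1e-12 * 7e-4 * (30e12)**2 / 1e12   # ~3 TeV

# IC-to-synchrotron flux ratio: f_IC/f_syn ~ 10 B_uG^{-2}
ratio_10uG, ratio_50uG = 10.0 / 10**2, 10.0 / 50**2  # 0.1 and 0.004
```

Each quantity lands in the range quoted in the text, confirming that the numbers follow self-consistently from the adopted shock velocity and filament width.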
The sky position of G330.2+1.0 was at the edge of the H.E.S.S. survey, in which the exposure was short ($<$5 hr). Considering the small angular size ($\sim$10$'$) of G330.2+1.0, which is close to the point spread function of H.E.S.S. (several arcminutes), and the short exposure in the survey, the estimated IC flux is likely close to or below the H.E.S.S. detection limit of $f$ $\sim$ 10$^{-12}$ ergs cm$^{-2}$ s$^{-1}$ at $E$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 1 TeV (e.g., Aharonian et al. 2005). Thus, if the $\gamma$-ray emission from G330.2+1.0 is dominated by the IC process of the same electrons that produce the X-ray synchrotron emission, the non-detection of G330.2+1.0 with the current H.E.S.S. survey data is not surprising. A deep search for $\gamma$-ray emission from G330.2+1.0 using ground-based TeV telescopes and {\it Fermi} (formerly {\it GLAST}) is warranted. It is notable that nonthermal X-ray emission in G330.2+1.0 is generally anti-correlated with the radio emission \citep{torii06}. Our high resolution {\it Chandra} and {\it XMM-Newton} images reveal that there actually exist radio counterparts for the bright X-ray filaments in SW and NE, but the radio emission is faint (Fig.~\ref{fig:fig3}). The brightest radio emission is in the E parts of the SNR, where X-ray emission is faint and spectrally soft (Fig.~\ref{fig:fig3}). Thus, the bright radio emission likely traces high density regions where soft (thermal) X-ray emission is enhanced. Based on our SRCUT model fits, X-ray emission in the SW and NE filaments implies a 1 GHz radio flux of 0.7--1.5 $\times$ 10$^{-4}$ Jy, while the MOST 843 MHz image of the SNR suggests $\sim$0.1 Jy for these regions (assuming that the total 1 GHz flux for the entire SNR is 5 Jy, Green 2001).
Although our radio flux estimates are crude and should be considered only as an order-of-magnitude approximation based on a simple ``normalization'' of the total image intensity to the area corresponding to the X-ray-bright SW and NE filaments, the discrepancy is substantial (nearly three orders of magnitude) and should thus be real. We do not have an immediate answer as to what causes the large difference between the modeled and observed radio fluxes corresponding to the X-ray bright filaments. One speculation is that the radio spectral index might not be uniform across the SNR. While the overall radio spectrum is fitted by $\alpha$ = 0.3, the faint radio filaments corresponding to the bright X-ray shell might have a steeper spectrum. For instance, if we assume a plausible range of the {\it observed} radio flux $\sim$0.01--0.1 Jy for the SW region and vary the radio spectral index in our SRCUT model fit, we obtain a best-fit $\alpha$ $\sim$ 0.53--0.66 ($\chi^2_{\nu}$ = 1.2). These radio spectral indices are not unusual for shell-type SNRs \citep{green01}. The best-fit roll-off frequencies are high, but are poorly constrained ($\nu_{\rm rolloff}$ = 13$^{+93}_{-9}$ $\times$ 10$^{17}$ Hz when the 1 GHz radio flux of 0.1 Jy is assumed, and $\nu_{\rm rolloff}$ = 8$^{+25}_{-6}$ $\times$ 10$^{17}$ Hz for the radio flux of 0.01 Jy). Although the high roll-off frequency, $\nu_{\rm rolloff}$ $\sim$ 10$^{18}$ Hz, implies somewhat higher estimates for the shock velocity and the maximum electron energy, these changes do not have a significant effect on the conclusions presented here. High resolution radio observations with a deep exposure would be essential to study the detailed relationship between the X-ray and the radio emission in this SNR. \subsection{\label{subsec:thermal} Thermal X-Ray Emission of the Supernova Remnant} In the E region, soft thermal emission is a significant component in the observed X-ray spectrum.
The best-fit electron temperature is $kT$ $\sim$ 0.7--1.4 keV, depending on models (\S~\ref{sec:snr}). The best-fit ionization timescale appears to be high ($n_{\rm e}t$ $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$ 10$^{13}$ cm$^{-3}$ s), suggesting that the plasma could be in collisional ionization equilibrium, but the $n_{\rm e}t$ parameter is not well-constrained because of the low photon statistics. Detecting thermal emission in SNRs in which nonthermal emission dominates is critical to reveal the environmental conditions (e.g., ambient density) and the supernova energetics that should have affected the SNR evolution and the particle acceleration. In fact, G330.2+1.0 is the only example to reveal thermal X-ray emission among the four Galactic SNRs which have been known to be dominated entirely by nonthermal X-rays (see \S~\ref{sec:intro}). Therefore, although it is difficult to perform a thorough spectral analysis of thermal emission and to draw firm conclusions on the nature of the SNR because of the poor photon statistics for the faint thermal component, we present a brief discussion on some fundamental SNR parameters based on our spectral analysis of region E. Based on the best-fit volume emission measure ($EM$ = $n_{\rm e}$$n_{\rm H}$$V$, where $n_{\rm e}$, $n_{\rm H}$, and $V$ are the postshock electron and proton densities and the X-ray emitting volume, respectively), we estimate $n_{\rm e}$ $\sim$ 0.4--0.5 $f^{-{1\over2}}$$d^{-{1\over2}}_5$ cm$^{-3}$ (where $f$ is the X-ray emitting volume filling factor). These postshock electron densities correspond to the preshock hydrogen density $n_{\rm 0}$ $\sim$ 0.1 $f^{-{1\over2}}$$d^{-{1\over2}}_5$ cm$^{-3}$. In these estimates, we assume $n_{\rm e}$ = 1.2 $n_{\rm H}$ for the mean charge state with normal composition, and $n_{\rm H}$ = 4$n_0$ for a strong shock.
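These density estimates, together with the shock velocity, age, and energy estimates derived next, can be reproduced numerically. In the sketch below, the emitting volume $V$ $\sim$ 4 $\times$ 10$^{56}$ cm$^3$ and radius $R$ = 7.3 pc follow the values adopted in this section; the mean molecular weight $\mu$ = 0.6, the Sedov coefficient 1.17, and the EM pick of 0.6 $\times$ 10$^{56}$ cm$^{-3}$ (within the fitted range) are our illustrative assumptions:

```python
import math

pc, yr = 3.086e18, 3.156e7
m_H, keV = 1.67e-24, 1.602e-9             # g, erg

# Postshock density from EM = n_e n_H V with n_e = 1.2 n_H
V = 4e56                                  # emitting volume, cm^3 (d = 5 kpc)
n_e = math.sqrt(1.2 * 0.6e56 / V)         # ~0.4 cm^-3 (EM = 0.6e56 cm^-3)
n_0 = n_e / (1.2 * 4)                     # preshock H density: ~0.1 cm^-3

# Shock velocity from kT = (3/16) mu m_H v_s^2 (full equilibration assumed)
def v_shock_kms(kT_keV, mu=0.6):
    return math.sqrt(16 * kT_keV * keV / (3 * mu * m_H)) / 1e5

v_lo, v_hi = v_shock_kms(0.7), v_shock_kms(1.4)   # ~800 and ~1100 km/s

# SNR age for R = 7.3 pc: Sedov (R = 2/5 v_s t) vs free expansion (R = v_s t)
R = 7.3 * pc
t_sedov = 2 * R / (5 * 2.5e8) / yr        # ~1100 yr at v_s = 2500 km/s
t_free  = R / 6.5e8 / yr                  # ~1100 yr at v_s = 6500 km/s

# Sedov explosion energy from R = 1.17 (E0 t^2 / rho)^{1/5}
rho = 1.4 * m_H * 0.1                     # ambient mass density for n_0 = 0.1
E0_1000 = (R / 1.17)**5 * rho / (1000 * yr)**2   # ~6e50 erg
E0_2000 = (R / 1.17)**5 * rho / (2000 * yr)**2   # ~1.6e50 erg
```

The explosion energy comes out at a few $\times$ 10$^{50}$ ergs, in order-of-magnitude agreement with the 2--9 $\times$ 10$^{50}$ $d^{5/2}_5$ ergs quoted below; the exact range depends on the Sedov coefficient adopted.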
We use the emission volume $V$ $\sim$ 4 $\times$ 10$^{56}$ cm$^{3}$ assuming that the path-length through region E is comparable to the physical size corresponding to the angular size ($\sim$2$'$) of region E at $d$ = 5 kpc. Assuming an ion-electron temperature equilibration, the measured electron temperature implies a shock velocity of $v_s$ $\sim$ 800 ($kT$ = 0.7 keV) -- 1100 ($kT$ = 1.4 keV) km s$^{-1}$. However, equipartition of the electron-ion temperatures may not have been reached, and thus the actual shock velocity could be higher than $v_s$ $\sim$ 1000 km s$^{-1}$, probably by a factor of a few (\S~\ref{subsec:nonthermal}). We estimate the SNR radius to be $R$ $\sim$ 5$'$ (half of the angular distance between the bright SW and NE filaments), which corresponds to a physical radius of $\sim$7.3 $d_5$ pc. Then, assuming an adiabatic phase for the SNR, we apply the Sedov solution to derive the SNR age $\tau_{\rm SNR}$ $\sim$ 1100 $d_5$ yr (e.g., for $v_s$ $\sim$ 2500 km s$^{-1}$, \S~\ref{subsec:nonthermal}). For a free-expansion phase, the SNR age is also derived to be $\tau_{\rm SNR}$ $\sim$ 1100 yr (e.g., for $v_s$ $\sim$ 6500 km s$^{-1}$, \S~\ref{subsec:nonthermal}). Using a Sedov solution, the explosion energy is estimated to be $E_0$ $\sim$ 2--9 $\times$ 10$^{50}$ $d^{5\over2}_5$ ergs for $\tau_{\rm SNR}$ $\sim$ 1000--2000 yr. \section{\label{sec:sum} Summary and Conclusions} Based on the {\it ASCA} data, the overall X-ray emission from SNR G330.2+1.0 was suggested to be continuum-dominated with no evidence for line features \citep{torii06}. The high resolution {\it Chandra} images subsequently revealed that X-ray emission from this SNR originates primarily from the thin shell with enhanced filaments in the SW and NE parts of the shell \citep{park06}. Park et al. (2006) have also discovered the CCO J1601 at the center of the SNR.
We performed follow-up observations of G330.2+1.0 with {\it XMM-Newton} to investigate the nature of the CCO and the SNR. Although our spectral and temporal analyses of J1601 and G330.2+1.0 are limited by poor photon statistics of the {\it XMM-Newton} data caused by significant contamination from flaring particle background, we find several important characteristics of these objects utilizing the {\it XMM-Newton} and {\it Chandra} data. The X-ray spectrum of J1601 can be described by two-component neutron star atmosphere models. X-ray emission primarily originates from a small hot region ($R$ $\sim$ 0.4--2 km, $T$ $\sim$ 2.5--5.5 MK). The rest of the neutron star's surface is cooler ($R$ $\sim$ 10 km, $T$ $<$ 1.5 MK), suggesting an $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$10$^{3-4}$ yr old neutron star based on the standard cooling models. The neutron star atmosphere models do not provide useful constraints on the magnetic field of J1601 with the current data. The previously suggested pulsations ($P$ $\sim$ 7.48 s) are not confirmed by the {\it XMM-Newton} data. These characteristics are similar to those found for CCOs in other Galactic SNRs such as Cas A and Vela Jr. The spectrally hard, faint nebulosity at $\sim$2$'$ SW from the CCO could be the associated PWN, but its true nature is uncertain with the current data because of the poor photon statistics. Follow-up deep X-ray observations are required to reveal the detailed nature of J1601. Assuming that X-ray emission in the shell of G330.2+1.0 is synchrotron radiation from the shock accelerated electrons, the roll-off frequency of $\nu_{\rm rolloff}$ $\sim$ 1.6--3.3 $\times$ 10$^{17}$ Hz is estimated. It is difficult to measure the shock velocity with the currently available data. Based on several independent approaches, we make a rough estimate of the shock velocity $v_s$ $\sim$ 4000 km s$^{-1}$ (with a factor of $\sim$2 uncertainty). 
Based on this shock velocity and the measured roll-off frequency, we find that the particle (electron) acceleration in G330.2+1.0 is likely limited by synchrotron losses rather than the SNR age. Using the {\it Chandra} images, we measure the widths of the bright nonthermal X-ray filaments ($D$ $\sim$ 0.3--0.4 pc). Using these widths and the shock velocity, we estimate the synchrotron loss time of $\tau_{\rm loss}$ $\sim$ 350--600 yr and the magnetic field of $B$ $\sim$ 10--50 $\mu$G. The maximum electron energy is derived to be $E_{\rm max}$ $\sim$ 22--38 TeV. These electron energies suggest that G330.2+1.0 is a candidate $\gamma$-ray source (up to $\sim$TeV) by the IC scattering of the CMB photons. The non-detection of G330.2+1.0 in the current H.E.S.S. survey with a short exposure is perhaps expected, because G330.2+1.0 is more distant and likely a fainter $\gamma$-ray source than the bright TeV SNRs like G347.3--0.5 and Vela Jr. G330.2+1.0 is particularly intriguing because this is the only SNR in which we detect a thermal component among the four Galactic SNRs known to be dominated by nonthermal X-ray emission. Although the uncertainties are large due to the poor photon statistics, the estimated density ($n_0$ $\sim$ 0.1 cm$^{-3}$) is low, suggesting that $\gamma$-ray emission, if it exists, would be dominated by the IC process. The detection of $\gamma$-ray emission as well as thermal X-ray emission with high photon statistics from G330.2+1.0 will be essential to test and constrain models for $\gamma$-ray production from shock-accelerated particles. Follow-up deep observations with X-ray detectors on board {\it XMM-Newton} and {\it Suzaku} are necessary for a thorough study of thermal X-ray emission. Deep $\gamma$-ray observations using {\it Fermi} and the ground-based TeV telescopes will be critical to reveal the nature of nonthermal radiation produced by shock accelerated particles. 
High resolution radio and X-ray observations of G330.2+1.0 with a deep exposure are essential to reveal the origin of the apparent inconsistency between the radio and nonthermal X-ray emission, such as the radio spectral index variation across the SNR. \acknowledgments The authors thank V. E. Zavlin for the helpful discussion on the hydrogen neutron star atmosphere models. This work was supported in part by NASA grant NNX08AW88G and SAO grant SV4-74018. POS acknowledges partial support from NASA contract NAS8-03060. KM was partially supported by the Grant-in-Aid for Young Scientists (B) of the MEXT (No. 18740108). This work makes use of the {\it Supernova Remnant Catalog} by the MOST, which is operated by the University of Sydney with support from the Australian Research Council and the Science Foundation for Physics within the University of Sydney.
\section{Introduction} High energy, single-cycle terahertz pulses are essential for both fundamental studies and applications, such as materials analysis \cite{hamm2017perspective,manceau2010direct,sharma2010time}, 6G communications \cite{yang20196g}, electron acceleration \cite{zhang2018segmented}, and high resolution spectroscopy \cite{cocker2013ultrafast}. A common approach to realize high-energy, few-cycle terahertz pulses from optical pulses is optical rectification, which exploits the second-order nonlinearity of materials such as $\text{LiNbO}_{\text{3}}$~(LN), GaAs, ZnTe, GaP etc. \cite{fulop2020laser}. In particular, LN is widely used to generate terahertz pulses in the range of $1~\mathrm{THz}$ due to its relatively strong nonlinearity \cite{hebling2004tunable}. Alternative platforms, such as graphene and gas plasma, have been studied for few-cycle terahertz pulse generation \cite{mikhailov2012theory,koulouklidis2020observation,sun2010coherent}. In all cases, strong nonlinearity is a key requirement for generating terahertz pulses of high energy. Three-dimensional Dirac semimetals (3D DSMs) \cite{borisenko2014experimental}-- a recently discovered class of topological materials-- have been shown to exhibit extremely large optical nonlinearities that originate from their linear and gapless energy-momentum dispersion in all three dimensions. For this reason, the 3D DSM $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~\cite{liu2014stable}, which possesses exceptionally high Fermi velocities and electron mobilities \cite{borisenko2014experimental,liu2014stable}, has been used to generate highly efficient terahertz high-order harmonics from input terahertz pulses \cite{cheng2020efficient,kovalev2020non,lim2020efficient,ullah2020third}, and studied as a platform for nonlinear plasmonics \cite{ooi2019nonlinear,ooi2020dirac}. Related materials like Weyl semimetals have been shown to support chiral terahertz emission and polarization manipulation \cite{gao2020chiral}. 
We show that the extreme optical nonlinearity of 3D DSMs can be leveraged for highly efficient optical-to-terahertz conversion over nanometer-scale propagation distances. Specifically, we predict a conversion efficiency enhancement of over 5000 times in $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~compared to LN, over a propagation distance of 300 \te{nm}. This is especially surprising given that we use the third-order nonlinearity in $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$, whereas the second-order nonlinearity is used for the corresponding process in LN. Our results reveal that tuning the Fermi energy allows us to leverage Pauli blocking to achieve a step-like conversion efficiency enhancement in terahertz generation. Our findings are crucial in the development of efficient ultrathin-film terahertz sources and of compact terahertz-driven technologies \cite{withayachumnankul2012sub,lu2020strong}. \begin{figure}[H] \centering \includegraphics[width=1\linewidth]{E0_compare_2.pdf} \caption{Highly efficient terahertz generation via third-order nonlinearities in 3D DSMs. \textbf{a} Terahertz conversion efficiencies as a function of the propagation distance for an input field strength $E_2=50~\mathrm{MV/m}$ (marked by dotted circles in \textbf{b}). The decrease of the efficiency is governed by both the phase-matching condition and the material absorption. 3D DSM $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~delivers over 5000 times the efficiency of $\text{LiNbO}_{\text{3}}$~(LN) over a propagation distance of 300 nm. \textbf{b} Terahertz peak conversion efficiency as a function of the input electric field strength $E_2$ centered at $\omega_{20}$ (dots; curves are visual guides). The output terahertz electric fields and spectra at $E_2=50~\mathrm{MV/m}$ (pink dotted circle in \textbf{b}) are presented in \textbf{c} and \textbf{d} respectively.
We consider $\lambda_1=1~\mathrm{\mu m}$, $\omega_{10}=2\pi c/\lambda_1$, and $\hbar\omega_{20}=\hbar\omega_{10}/2=0.62~\text{eV}$ at temperature $T=77~\mathrm{K}$. We fix the Fermi energy at $\mathcal{E}_\mathrm{f}=0.45~\mathrm{eV}$ and the scattering times for both the inter-band and intra-band processes to 150 \te{fs}. The electric field strength at $\omega_1$ is fixed at $E_1=5~\mathrm{MV/m}$. Both optical pulses have $150~\mathrm{fs}$ full-width-half-maximum pulse duration. Unless otherwise stated, we consider these parameters throughout our work.} \label{fig} \end{figure} \section{Model} \subsection{Physics of terahertz generation in 3D DSMs} Terahertz generation from optical pulses in DSMs, schematically illustrated in Fig. \ref{fig}\textbf{a} (inset), occurs when two co-propagating optical pulses of the same polarization ($\mathbf{\hat{x}}$ polarized), but with different central frequencies $\omega_{10}$ and $\omega_{20}$ (frequencies within the optical pulses centered at $\omega_{10}$ and $\omega_{20}$ are denoted by $\omega_{1}$ and $\omega_{2}$ respectively), impinge on the sample, generating an output terahertz pulse that travels in the same direction. In momentum space, the driving fields induce inter-band and intra-band carrier transitions, resulting in the absorption of two low-energy photons at $\omega_{2}\approx0.5\omega_1$ and the emission of one high-energy photon at $\omega_1$ and one low-energy terahertz photon of frequency $\Omega=2\omega_2-\omega_1$. In Fig. \ref{fig}, we study optical-to-terahertz conversion for the specific case of the 3D DSM $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$. We determine the linear and nonlinear material conductivities associated with the 3D Dirac cone band structure using perturbative quantum theory, and simulate the terahertz generation process by solving Maxwell's equations using these conductivities.
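The photon-energy bookkeeping for the pump wavelengths above is quickly verified with standard constants:

```python
h_eV, c = 4.1357e-15, 2.998e8       # Planck constant (eV s), speed of light (m/s)

E1 = h_eV * c / 1e-6                # photon energy at 1 um: ~1.24 eV
E2 = h_eV * c / 2e-6                # photon energy at 2 um: ~0.62 eV
E_thz = h_eV * 1e12                 # a 1 THz photon: ~4.1 meV
# Energy conservation in the mixing process: 2*hbar*w2 = hbar*w1 + hbar*Omega,
# so frequency components slightly detuned from w2 = w1/2 yield Omega in the
# terahertz range.
```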
Our model fully considers effects including the optical Kerr effect, finite temperatures, arbitrary Fermi energies, and carrier scattering, and captures both inter-band and intra-band dynamics, as well as the coupling between them (henceforth denoted by the term inter-intra-band). {The Hamiltonian that describes the carrier dynamics within a 3D DSM is given by} \begin{equation} {i} \hbar \partial_t \psi(t)= \left[H_0+H_\text{int} (t) \right]\psi(t)\label{ham}, \end{equation} {where $\psi(t)$ is the electron wave function, $H_0= \sum v_{j} \sigma_{j} {p}_{j}$} is the stationary Hamiltonian, $H_\text{int}= e \mathbf{r} \cdot \mathbf{E}(t) $ is the interaction Hamiltonian in the length gauge \cite{aversa1995nonlinear}, $\mathbf{r}$ is the position operator, $e\,(>0)$ is the elementary charge, $\mathbf{E}$ is the electric field, $\hbar$ is the reduced Planck constant, $\sigma_{j}$ is the Pauli matrix with $j \in \{x,y,z\}$, $v_{j}$ is the Fermi velocity along direction ${j}$ in Cartesian coordinates, and ${p_\mathrm{j}}$ is the momentum operator in the ${j}$ direction. The length gauge is chosen for $H_\mathrm{int}$ since the resulting nonlinear response is free of nonphysical zero-frequency divergences and a more transparent representation can be obtained \cite{aversa1995nonlinear,taghizadeh2017linear}. Due to the inversion symmetry of $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$, even-ordered nonlinear conductivities are zero in our configuration. We apply perturbative quantum theory to Eq. (\ref{ham}) and obtain, for the first time, linear and nonlinear conductivities corresponding to a general 3D DSM (see Methods and SM Sections III and IV). Our model fully considers finite temperatures, carrier scattering, and an anisotropic Dirac cone band structure with Fermi velocities corresponding to realistic 3D DSM materials.
We simulate the terahertz generation process by solving Maxwell's equations using a finite-difference split-step method, which captures linear and nonlinear propagation effects of paraxial pulses up to the third order in nonlinear conductivity, including the back conversion of the terahertz pulse on the optical pulses and the optical Kerr effect. In particular, by defining the electric field as \begin{align} & \tilde{\mathbf{E}}(z,t)=E\exp{(-t^2/\tau^2)}\exp{(-i\omega_0t)}\mathbf{\hat{x}}+\mathrm{c.c}., \label{et} \\ & \mathscr{F}[ E\exp{(-t^2/\tau^2)}\exp{(-i\omega_0t)}]=E(z,\omega)\exp{[ik(\omega)z]}, \label{ew} \end{align} the optical-to-terahertz conversion in DSMs is given by \begin{eqnarray} {\partial E( z,\Omega)}/{\partial z}&&=\frac{-3}{2 n(\Omega)c \varepsilon_0}\int_0^{\infty}\!\!\int_0^{\infty} \!\!\!\! \sigma^{(3)}(\omega_2,\omega_3,-\omega_1)E_2(z,\omega_2) \nonumber\\ && \times E_1^*(z,\omega_2+\omega_3-\Omega) E_2(z,\omega_3) \exp \left\{ i \left[ k(\omega_2)\right. \right.\nonumber\\ && \left. \left. +k(\omega_3) -k^*(\omega_1)-k(\Omega) \right] z\right\} d\omega_2 d\omega_3, \label{main_thz_gen} \end{eqnarray} where $\mathscr{F}$ is the Fourier transform, $^*$ represents complex conjugation, $z$ is the propagation distance, $\tau$ is the pulse duration, $\sigma^{(3)}(\omega_2,\omega_3,-\omega_1)$ is the third-order conductivity, $\Omega$ represents the terahertz frequency, $k(\omega_1)=n(\omega_1)\omega_1/c$ represents the angular wavenumber at frequency $\omega_1=\omega_2+\omega_3-\Omega$ (in our case both $\omega_2$ and $\omega_3$ are centered at $\omega_{20}$), $n(\omega)$ is the refractive index, $c$ is the speed of light, and $\varepsilon_0$ is the vacuum permittivity. The terahertz pulse amplitude is given by $E$, and the optical pulse amplitudes by $E_1$ and $E_2$.
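The structure of Eq. (\ref{main_thz_gen}) can be illustrated with a toy calculation: for a single terahertz frequency and undepleted pumps, it reduces to an ordinary differential equation of the form ${\rm d}E/{\rm d}z = ig E_2^2 E_1^* e^{i\Delta k z}$ with some effective coupling $g$, which a forward-Euler integration reproduces against the closed form. All constants below are arbitrary illustrative values, not the $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~conductivities used in our simulations:

```python
import cmath, math

# Single-frequency, undepleted-pump reduction of the propagation equation:
#   dE/dz = i g E2^2 E1* exp(i dk z)
g, dk = 1.0, 0.5            # effective coupling and phase mismatch (arb. units)
E1 = 1.0 + 0.0j             # pump amplitude at omega_1 (held fixed)
E2 = 1.0 + 0.0j             # pump amplitude at omega_2 (held fixed)
dz, N = 1e-3, 2000          # Euler step and step count (z from 0 to 2)

E = 0.0 + 0.0j              # terahertz amplitude grows from zero
for n in range(N):
    E += 1j * g * E2**2 * E1.conjugate() * cmath.exp(1j * dk * n * dz) * dz

# Closed form: |E(z)| = (2 g |E2|^2 |E1| / dk) |sin(dk z / 2)|
analytic = (2 * g / dk) * abs(math.sin(dk * N * dz / 2))
```

The oscillatory factor shows how the phase mismatch $\Delta k$ caps the coherent growth of the terahertz field; the full split-step simulation additionally propagates the pumps, their depletion, and the Kerr terms.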
\begin{figure}[H] \hspace*{-2cm} \centering \includegraphics[width=1.1\linewidth]{fermi_scan_eff.pdf} \caption{Optical-to-terahertz conversion efficiency as a function of Fermi energy. \textbf{a} shows the peak terahertz conversion efficiencies at different input field strengths $E_2$ as a function of the Fermi energy $\mathcal{E}_\mathrm{f}$. The simulation results are denoted by the filled circles. The curves are visual guides. The non-perturbative regime (gray shaded area) corresponds to $E_2>\sqrt{\abs{\sigma^{(1)}}/\abs{\sigma^{(3)}}}$ (see SM Section VII). The pink data point marked by the dashed circle indicates the same data as in Fig. \ref{fig}\textbf{b}. \textbf{b} shows how the third-order conductivity for terahertz generation at 1 $\mathrm{THz}$ ($\omega_{2}+\omega_{2}-\omega_{1}=1~\mathrm{THz}$) varies as a function of $\mathcal{E}_\mathrm{f}$. The "No Pauli blocking", "{Enhanced terahertz}" and "{Forbidden terahertz}" regions in \textbf{b} correspond to \textbf{c} ($\mathcal{E}_\mathrm{f}=0.05~\mathrm{eV}$), \textbf{d} ($\mathcal{E}_\mathrm{f}=0.45~\mathrm{eV}$) and \textbf{e} ($\mathcal{E}_\mathrm{f}=0.7~\mathrm{eV}$) respectively, where the conductivity density {as a function of the electron energy} is shown (${\sigma}^{(3)}(\omega_2,\omega_2,-\omega_1)=\int \widetilde{\sigma}^{(3)}(\mathcal{E},\omega_2,\omega_2,-\omega_1) d\mathcal{E}$). \textbf{c} shows two strong conductivity density peaks, which contribute with opposite signs to the overall conductivity. Consequently, an increase in the conductivity can be observed in \textbf{d}, where one of the peaks is suppressed while the other remains. \textbf{e} shows that all conductivity density peaks are suppressed and thus terahertz generation is forbidden. We consider the same simulation parameters as Fig. \ref{fig}. 
{It should be noted that the linear and gapless band structure of $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~extends up to 1 eV \mbox{\cite{jeon2014landau,cheng2020efficient}}, which justifies plotting up to Fermi energies of 0.8 eV in (a) and (b). }} \label{fig_fermi_scan} \end{figure} In this work, we consider two optical pulses of amplitudes $E_1$ and $E_2$ centered at wavelengths of $1~\mathrm{\mu m}~(\hbar\omega_{10}=1.24~\mathrm{eV})$ and $2~\mathrm{\mu m}~(\hbar\omega_{20}=0.62~\mathrm{eV})$ respectively, {with an intensity full-width-at-half-maximum pulse duration of $150~\mathrm{fs}$ ($\tau=150~\mathrm{fs}/\sqrt{2\log{(2)}}$). } \section{Results} \subsection{Enhanced optical-to-terahertz conversion efficiency in 3D DSMs} Figure \ref{fig}\textbf{a} shows the optical-to-terahertz conversion efficiency as a function of propagation distance for a collinear configuration (Fig. \ref{fig}\textbf{a} inset). At a propagation distance of about 300 nm, the conversion efficiency in the 3D DSM $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~exceeds that of LN by $>$ 5000 times. Additionally, we find that even when no restrictions are placed on propagation distance, $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~outperforms LN~in efficiency by over 10 times. Figure \ref{fig}\textbf{b} shows the terahertz conversion efficiency with respect to the optical field $E_2$, when the optical field $E_1$ is fixed at $5~\text{MV/m}$. Whereas LN~outperforms $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~at low field strengths $E_2<\text{5~MV/m}$ (Fig. \ref{fig}\textbf{b}, inset), $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~rapidly surpasses LN as the field strength increases. The output terahertz fields and spectra for $E_2=50~\text{MV/m}$ (dotted circles in Fig. \ref{fig}\textbf{b}) are shown in Fig. \ref{fig}\textbf{c} and Fig. \ref{fig}\textbf{d} respectively. Here, we consider the experimentally measured Fermi velocities $(v_\mathrm{x},v_\mathrm{y},v_\mathrm{z})=(1.28, 1.3, 0.33) \times 10^6 \mathrm{m/s}$. 
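The relation between the quoted intensity full width at half maximum and the envelope parameter $\tau$ in Eq.~(\ref{et}) is pure bookkeeping and can be checked numerically:

```python
import numpy as np

# The field envelope in Eq. (et) is exp(-t^2/tau^2), so the intensity envelope
# is exp(-2 t^2/tau^2) and its full width at half maximum is tau*sqrt(2 ln 2),
# i.e. tau = FWHM / sqrt(2 log 2) as used in the text.
fwhm = 150e-15                      # 150 fs intensity FWHM
tau = fwhm / np.sqrt(2*np.log(2))

t = np.linspace(-400e-15, 400e-15, 200001)
intensity = np.exp(-2*t**2/tau**2)
above_half = t[intensity >= 0.5]
measured_fwhm = above_half[-1] - above_half[0]
assert abs(measured_fwhm - fwhm) < 2e-17   # within the time-grid resolution
```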
It is possible that still higher conversion efficiencies exist at larger field strengths, but that would require a non-perturbative treatment for calculating the nonlinear conductivity that falls beyond the scope of this work. In SM Section VII, we present an analytical estimate for the input field strengths for which our conductivity calculations remain valid. \subsection{Optimizing efficiency by tuning the Fermi energy }\label{fermi_sec} Figure \ref{fig_fermi_scan} shows that an appropriate choice of the Fermi energy $\mathcal{E}_\mathrm{f}$ allows us to access a regime of enhanced terahertz generation. In Fig. \ref{fig_fermi_scan}\textbf{a}, we see that a broad range of Fermi energies and driving field strengths exist where substantial terahertz generation efficiencies can be accessed, even within the limits of perturbation theory. {The generation efficiency is defined as the ratio of the generated terahertz energy to the input pump pulse energy, ${\int I(z,\Omega) d\Omega}/{\int I(0,\omega)d\omega}$, where $I(z,\Omega)=c\varepsilon_0 n(\Omega)|E(z,\Omega)\exp{[ik(\Omega)z]}|^2/2$ is the spectral intensity at terahertz frequency $\Omega$, $n$ is the refractive index, and the same expression evaluated at the optical frequency $\omega$ enters the denominator.} The trend in conversion efficiency is partly explained through the third-order conductivity $\sigma^{(3)}(\omega_2,\omega_2,-\omega_1)$ in Fig. \ref{fig_fermi_scan}\textbf{b}, which follows a similar trend to the conversion efficiency as we increase the Fermi energy from the purple-shaded "No Pauli blocking" regime, through the unshaded "{Enhanced terahertz}" regime, to the yellow-shaded "{Forbidden terahertz}" regime. To further understand the step-like increase of the third-order conductivity, we plot in Figs. 
\ref{fig_fermi_scan}\textbf{c-e} the conductivity density $\widetilde{\sigma}^{(3)}(\mathcal{E},\omega_2,\omega_2,-\omega_1)$ as a function of the energy of the electronic states, defined by \begin{equation} {\sigma}^{(3)}(\omega_2,\omega_2,-\omega_1)=\int \widetilde{\sigma}^{(3)}(\mathcal{E},\omega_2,\omega_2,-\omega_1) d\mathcal{E}, \end{equation} at representative Fermi energy values from each regime ($0.05~\mathrm{eV},~ 0.45~\mathrm{eV},~0.7~\mathrm{eV}$). {Here $\mathcal{E}$ denotes the eigenenergy of an electron with a given wavevector $\mathbf{k}$}. In Fig. \ref{fig_fermi_scan}\textbf{c}, we see that at relatively low Fermi energies, the nonlinear conductivity density corresponding to terahertz generation contains two peaks, one at $\mathcal{E} \approx -0.62 ~\mathrm{eV}$ and another at $\mathcal{E} \approx 0.31 ~\mathrm{eV}$. As the Fermi energy increases, the peaks of the conductivity density that lie within the range $\mathcal{E}=[-\mathcal{E}_\mathrm{f}, \mathcal{E}_\mathrm{f}]$ are suppressed. We infer that this is related to Pauli blocking, which occurs when an electron cannot be excited from the valence band to the conduction band due to the lack of unoccupied states in the conduction band. This can be seen in Fig. \ref{fig_fermi_scan}\textbf{d}, where the peak at $\mathcal{E} \approx 0.31 ~\mathrm{eV}$ disappears. Since the contributions of the peaks at $\mathcal{E} \approx -0.62 ~\mathrm{eV}$ and $\mathcal{E} \approx 0.31 ~\mathrm{eV}$ add destructively, the disappearance of one peak has the effect of enhancing the nonlinear conductivity associated with terahertz generation, explaining the step-like increase in conductivity moving from the "No Pauli blocking" to "Enhanced terahertz" regime. At still larger Fermi energies -- exemplified by the scenario in Fig. 
\ref{fig_fermi_scan}\textbf{e} -- both conductivity density peaks are suppressed since all transitions required for terahertz generation are forbidden, leading to a plunge in the resulting nonlinear conductivity in the "Forbidden terahertz" regime. In Fig. \ref{fig_fermi_scan}\textbf{a}, we see that the optical-to-terahertz conversion efficiency generally follows the same trend as the nonlinear conductivity in Fig. \ref{fig_fermi_scan}\textbf{b}. However, at low Fermi energies, the low electron filling in the conduction band leads to a smaller first-order conductivity at terahertz frequencies, i.e., smaller terahertz absorption, as follows from Eq. (\ref{intra_1}). Consequently, a relatively high efficiency can be potentially attained due to lower terahertz absorption in the "No Pauli blocking" regime at very low Fermi energies. As Fig. \ref{fig_fermi_scan}\textbf{a} shows, the high conversion efficiency of $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~ holds over a broad range of field strengths and Fermi energies. Although we have focused on the case of $T = 77 ~\mathrm{K}$ here, our simulations at other temperatures (see Fig. 3 in SM Section V) reveal stable performance over temperatures ranging from 4 K to 200 K. The enhanced conversion efficiency in the "Enhanced terahertz" regime, as well as the need to stay within the validity of our perturbative conductivity calculations, motivated our choice of $\mathcal{E}_\mathrm{f} = 0.45~\mathrm{eV}$ in Fig. \ref{fig}. \section{Discussion} \begin{table}[h] \begin{center} \caption{Fermi velocities of different materials. The ZrTe$_5^*$ represents the averaged Fermi velocities $(v_\mathrm{xy},v_\mathrm{xz},v_\mathrm{zy})$. For ZrTe$_5$, the Fermi velocity along $\hat{\mathbf{z}}$ is calculated as $v_\mathrm{z}=\sqrt{v_\mathrm{xz}^2v_\mathrm{zy}^2\text{\cite{zheng2016transport}}/(v_\mathrm{x}v_\mathrm{y} \text{\cite{martino2019two}})}$. 
}\label{parameter} \begin{tabular}{ c |c } \hline Name & Fermi velocity $(v_\mathrm{x},v_\mathrm{y},v_\mathrm{z})~\mathrm{m/s}$ \\ \hline $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$ & $(1.28, 1.3, 0.33) \times 10^6 $ \cite{liu2014stable}\\ Na$_3$Bi &$(4.17,3.63,0.95)\times 10^5 $ \cite{liu2014discovery}\\ ZrTe$_5$ & $(7,4.6,1.94)\times 10^5$ \cite{zheng2016transport,martino2019two} \\ ZrTe$_5^*$ & $(4.89,4.03,1.94)\times 10^5$ \cite{zheng2016transport} \\ TlBiSSe & $(1.6,1.6,1.6)\times 10^5$ \cite{novak2015large}\\ \hline \end{tabular} \end{center} \end{table} Our model is readily extended to capture the physics of a general, anisotropic 3D Dirac cone band structure. Although $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~has been considered in Figs. \ref{fig} and \ref{fig_fermi_scan}, our model also applies to other 3D DSMs featuring different Fermi velocities. The significance of the material's Fermi velocities can be seen from Eqs. (\ref{intra_1}-\ref{sigma3_ei}) in Methods, where the linear conductivity $\sigma^{(1)}_\text{xx}$ and the nonlinear conductivity $\sigma^{(3)}_\text{xxxx}$ are directly proportional to $v_\mathrm{x}/(v_\mathrm{y} v_\mathrm{z})$ and $v_\mathrm{x}^3/(v_\mathrm{y} v_\mathrm{z})$ respectively. Figure \ref{fermi_v_raw} shows the peak conversion efficiency as a function of these prefactors, revealing that the combination of a small $\sigma^{(1)}_\text{xx}$ and a large $\sigma^{(3)}_\text{xxxx}$ can lead to efficient terahertz generation. This can also be understood intuitively since it implies low terahertz absorption and large nonlinearity for optical-to-terahertz conversion. We show that under the given conditions, $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~is close to an ideal choice for optimal conversion efficiency. Our findings suggest that even larger efficiencies can be obtained with field strengths and Fermi energies that require a non-perturbative treatment of the nonlinear conductivity. 
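The scaling argument above can be made concrete with the Table~\ref{parameter} velocities. The short sketch below (illustrative; it computes only the velocity prefactors, not the full conductivities) shows that $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$ indeed maximizes the $v_\mathrm{x}^3/(v_\mathrm{y} v_\mathrm{z})$ combination among the listed materials:

```python
# Fermi velocities (m/s) from Table 1; compute the velocity combinations that
# scale sigma^(1)_xx ~ v_x/(v_y v_z) and sigma^(3)_xxxx ~ v_x^3/(v_y v_z).
materials = {
    "Cd3As2":  (1.28e6, 1.30e6, 0.33e6),
    "Na3Bi":   (4.17e5, 3.63e5, 0.95e5),
    "ZrTe5":   (7.00e5, 4.60e5, 1.94e5),
    "TlBiSSe": (1.60e5, 1.60e5, 1.60e5),
}
pref1 = {m: vx/(vy*vz) for m, (vx, vy, vz) in materials.items()}
pref3 = {m: vx**3/(vy*vz) for m, (vx, vy, vz) in materials.items()}

# Cd3As2 has the largest third-order prefactor of the listed materials
assert max(pref3, key=pref3.get) == "Cd3As2"
```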
Larger conversion efficiencies for the same input fields could also potentially be obtained by considering a non-collinear interaction geometry. In particular, having obliquely incident input fields would lead to the existence of second-order nonlinearities in 3D DSMs, which could also be a promising avenue for efficient optical-to-terahertz conversion. \begin{figure}[H] \centering \includegraphics[width=0.6\linewidth]{fermi_v_scan.pdf} \caption{Optical-to-terahertz conversion peak efficiencies across the spectrum of possible anisotropic 3D DSMs. The peak conversion efficiency is presented as a function of $\sigma^{(1)}_\text{xx} \propto v_\mathrm{x}/(v_\mathrm{y}v_\mathrm{z})$ and $\sigma^{(3)}_\text{xxxx}\propto v_\mathrm{x}^3/(v_\mathrm{y}v_\mathrm{z})$. The input electric field strengths are $E_1=5~\mathrm{MV/m}$ and $E_2=50~\mathrm{MV/m}$. The non-perturbative regime corresponds to the grey shaded area at the top-left corner. The above results are calculated with $\mathcal{E}_\mathrm{f}=0.45~\mathrm{eV}$ at $T=77~\mathrm{K}$. The Fermi velocities for the corresponding materials are listed in Table \ref{parameter}. We consider a propagation distance of 400 nm. Our results show that under the given conditions, $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~is close to an ideal choice for optimal conversion efficiency.} \label{fermi_v_raw} \end{figure} \section{Methods} The input electric field and its positive frequency component $\exp{(-i\omega_0 t)}$ are shown in Eqs.~(\ref{et}) and (\ref{ew}) respectively. Without loss of generality, the input fields are chosen to be linearly polarized in the $\hat{\mathbf{x}}$ direction. Note that in the following calculations, only the positive-frequency parts of $\sigma(\omega)$ and $E(\omega)$ are presented; however, no approximation is made. 
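The positive-frequency bookkeeping rests on the reality conditions $\sigma(\omega)=\sigma(-\omega)^*$ and $E(\omega)=E(-\omega)^*$. A quick discrete check of the field relation (generic random samples, unrelated to any specific pulse):

```python
import numpy as np

# For any real time-domain signal, the DFT obeys X[k] = X[(N-k) mod N]^*,
# the discrete analogue of E(omega) = E(-omega)^*; the positive-frequency
# half therefore carries all of the information.
rng = np.random.default_rng(1)
N = 256
e_t = rng.normal(size=N)                 # generic real "field" samples
E_w = np.fft.fft(e_t)
mirror = np.conj(E_w[(-np.arange(N)) % N])
assert np.allclose(E_w, mirror)
```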
Since $\sigma(\omega)=\sigma(-\omega)^*$ \cite{sipe2000second} and $E(\omega)=E(-\omega)^*$, all the information is contained in the positive-frequency elements. The first-order conductivities are as follows \begin{eqnarray} &&\sigma^{(1)}_{\text{i},\mathrm{xx}}(\omega_1)=\frac{ig e^{2} v_{x}}{6 \pi^{2} \hbar^{3} v_{y} v_{z}(\omega_1+i\gamma_\mathrm{i})} \left(\mathcal{E}_\mathrm{f}^{2}+\frac{\pi^{2}}{3} \mathrm{k_{\mathrm{B}}}^{2} T^{2}\right), \label{intra_1}\\ &&\sigma^{(1)}_{\text{e},\mathrm{xx}}(\omega_1)=\frac{i g e^2v_\mathrm{x} (\omega_1+i\gamma_\mathrm{e})}{24v_\mathrm{y}v_\mathrm{z} \pi^2 \hbar}\int_{-\infty}^{\infty} \frac{n(\mathcal{E}) }{\mathcal{E}-\hbar(\omega_1+i\gamma_\mathrm{e})/2} d\mathcal{E}, \nonumber\\ \label{inter_1} \end{eqnarray} where "i" denotes intra-band, "e" denotes inter-band, $g=4$ is the combined valley and spin degeneracy, $n(\mathcal{E})=f(\mathcal{E})-f(-\mathcal{E})$, $f(\mathcal{E})=\left\{\exp{\left[\left(\mathcal{E}-\mathcal{E}_\mathrm{f}\right)/\mathrm{k_B} T\right]}+1\right\}^{-1}$ is the Fermi distribution, $\mathcal{E}$ is the energy, $\mathrm{k_B}$ is the Boltzmann constant, $T$ is the temperature, $\gamma_\mathrm{i}$ is the intra-band decay rate, and $\gamma_\mathrm{e}$ is the coherence decay rate (see SM). In our calculations, $\gamma_\mathrm{e}=\gamma_\mathrm{i}=1/(150~\mathrm{fs})$. Equation (\ref{inter_1}) agrees with the work of Kotov \cite{kotov2016dielectric} (see SM Eq. (39)). Equation (\ref{intra_1}) can also be obtained from the Boltzmann transport equation (SM Section VIII). Although Eq.~(\ref{inter_1}) may appear to be readily solved via complex analysis, one should note that $n(\mathcal{E})$ contains an infinite number of poles in the complex-$\mathcal{E}$ domain. 
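Equation~(\ref{intra_1}) is straightforward to evaluate numerically; the sketch below does so with the $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$ parameters quoted in the text (the specific frequency is an arbitrary example). Its real part is the Drude-like absorptive response, growing as $\mathcal{E}_\mathrm{f}^2$ at low temperature, which is why low Fermi energies reduce terahertz absorption.

```python
import numpy as np

# Direct numerical evaluation of the intra-band linear conductivity Eq. (intra_1).
e = 1.602176634e-19       # elementary charge (C)
hbar = 1.054571817e-34    # reduced Planck constant (J s)
kB = 1.380649e-23         # Boltzmann constant (J/K)
g = 4                                  # combined valley and spin degeneracy
vx, vy, vz = 1.28e6, 1.30e6, 0.33e6    # Cd3As2 Fermi velocities (m/s)
gamma_i = 1/150e-15                    # intra-band decay rate, 1/(150 fs)

def sigma1_intra(omega, Ef_eV, T):
    """sigma^(1)_{i,xx}(omega) of Eq. (intra_1), with Ef in eV and T in K."""
    Ef = Ef_eV * e                     # Fermi energy in joules
    thermal = Ef**2 + (np.pi**2/3) * (kB*T)**2
    return 1j*g*e**2*vx * thermal / (6*np.pi**2*hbar**3*vy*vz*(omega + 1j*gamma_i))

s = sigma1_intra(2*np.pi*1e12, 0.45, 77.0)   # evaluated at 1 THz, Ef = 0.45 eV
assert s.real > 0                            # Drude-like absorption
```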
By defining $\omega_\mathrm{cv}=2\mathcal{E}/\hbar$, the terms in the expression for the third order conductivity can be written as \begin{align} & \sigma^{(3)}_{\text{i},\mathrm{xxxx}}(\omega_1,\omega_2,\omega_{3})=\frac{ige^4v_\mathrm{x}^3}{5\pi^2\hbar^3v_\mathrm{y}v_\mathrm{z}(\omega_1+\omega_2+\omega_3+i\gamma_\mathrm{i})(\omega_1+\omega_2+i\gamma_\mathrm{i})(\omega_1+i\gamma_\mathrm{i})} ,\label{sigma3iii}\\ &\sigma^{(3)}_{\text{e},\mathrm{xxxx}}(\omega_1,\omega_2,\omega_{3})=\frac{-ie^4v_\mathrm{x}^3}{15\hbar^3\pi^2v_\mathrm{y}v_\mathrm{z}(\omega_1+\omega_2+i\gamma_\mathrm{i})}\int_{-\infty}^{\infty}\frac{2(\omega_1+i\gamma_\mathrm{e})n(\mathcal{E})/\mathcal{E}}{ \omega_\mathrm{cv}^2-(\omega_1+i\gamma_\mathrm{e})^2}\nonumber\\ &\times \frac{1}{\omega_\mathrm{cv}-(\omega_1+\omega_2+\omega_3+i\gamma_\mathrm{e})}d\mathcal{E}, \label{sigma3eee}\\ &\sigma^{(3)}_{\text{ie},\mathrm{xxxx}}(\omega_1,\omega_2,\omega_{3})=\frac{-ige^4v_\mathrm{x}^3}{30v_\mathrm{y}v_\mathrm{z}\pi^2\hbar^3(\omega_1+\omega_2+\omega_3+i\gamma_\mathrm{i})}\int_{-\infty}^{\infty} \left\{ \frac{4 n(\mathcal{E})/\mathcal{E}}{(\omega_1+\omega_2+i\gamma_\mathrm{i})} \frac{1}{(\omega_\mathrm{cv}-\omega_1-i\gamma_\mathrm{e})} \right.\nonumber\\ &\left. +\frac{\mathcal{M}_{-}(\omega_1,\mathcal{E})}{\omega_\mathrm{cv}-\omega_1-\omega_2-i\gamma_\mathrm{e}} \right\} d \mathcal{E}, \\ &\sigma^{(3)}_{\mathrm{ei},\mathrm{xxxx}}(\omega_1,\omega_2,\omega_{3})=\frac{-ige^4v_\mathrm{x}^3}{30v_\mathrm{y}v_\mathrm{z}\pi^2\hbar^3}\left\{ \frac{1}{(\omega_1+i\gamma_\mathrm{i})(\omega_1+\omega_2+i\gamma_\mathrm{i})}\int_{-\infty}^{\infty} \frac{4\partial_\mathcal{E} n(\mathcal{E})+\mathcal{E} \partial^2_\mathcal{E} n(\mathcal{E})}{(\omega_\mathrm{cv}-\omega_1-\omega_2-\omega_3-i\gamma_\mathrm{e})} \right. \nonumber \\ & \left. 
+\int_{-\infty}^{\infty} \frac{(2\omega_\mathrm{cv}-\omega_1-\omega_2-\omega_3-i\gamma_\mathrm{e})\mathcal{M}_{-}(\omega_1,\mathcal{E})}{(\omega_\mathrm{cv}-\omega_1-\omega_2-\omega_3-i\gamma_\mathrm{e})^2(\omega_\mathrm{cv}-\omega_1-\omega_2-i\gamma_\mathrm{e})} \right\} d\mathcal{E}, \label{sigma3_ei}\\ &\mathcal{M}_{-}(\omega_1,\mathcal{E})=\left[\frac{\partial}{\partial \mathcal{E}} \left(\frac{n}{\omega_\mathrm{cv}-\omega_1-i\gamma_\mathrm{e}}\right) -\frac{2n}{(\omega_\mathrm{cv}-\omega_1-i\gamma_\mathrm{e})\mathcal{E}}-\frac{\partial_\mathcal{E} n(\mathcal{E})}{\omega_1+i\gamma_\mathrm{i}}\right], \end{align} where $\sigma^{(3)}_{\text{i},\mathrm{xxxx}}$ represents the purely intra-band process, $\sigma^{(3)}_{\text{e},\mathrm{xxxx}}$ represents the inter-band process, and $\sigma^{(3)}_{\text{ie},\mathrm{xxxx}}$ and $\sigma^{(3)}_{\text{ei},\mathrm{xxxx}}$ arise from the diagonal terms and the coherence terms of the density matrix, respectively, and represent the inter-intra-band process (see SM). The frequency permutations remain to be included in Eqs.~(\ref{sigma3iii}-\ref{sigma3_ei}), as required by permutation symmetry \cite{boyd2020nonlinear}. The final $\sigma^{(n)}$ should be an average over all ($n!$) frequency permutations, i.e. $\sigma^{(3)}=\mathcal{P}\left[ \sigma^{(3)}_\text{i}+\sigma^{(3)}_\text{e}+\sigma^{(3)}_\text{ie}+\sigma^{(3)}_\text{ei}\right]/6$, where $\mathcal{P}$ represents the summation over the 6 possible permutations. As with Eq. (\ref{intra_1}), Eq. (\ref{sigma3iii}) can also be obtained from the Boltzmann transport equation (SM Section VIII). Surprisingly, $\sigma^{(3)}_\text{i}$ is independent of $\mathcal{E}_\mathrm{f}$ and $T$. The lack of dependence on $\mathcal{E}_\mathrm{f}$ for $\sigma^{(3)}_\text{i}$ has also been discussed in the work of Cheng et al. \cite{cheng2020third}, which presents linear and nonlinear conductivity expressions for isotropic 3D DSMs in the limit where $T=0$ and carrier scattering is negligible. 
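The symmetrization $\mathcal{P}$ over the $3!$ frequency orderings can be implemented generically. The sketch below uses a toy, non-symmetric stand-in for $\sigma^{(3)}$ rather than the expressions above:

```python
import itertools

def symmetrize(sig):
    """Average a three-argument response function over all 3! frequency
    orderings, as in sigma^(3) = P[...]/6."""
    def sym(w1, w2, w3):
        return sum(sig(*p) for p in itertools.permutations((w1, w2, w3))) / 6
    return sym

# toy, deliberately non-symmetric stand-in (NOT the paper's expressions)
toy = lambda a, b, c: a*a*b + 3*c
sym = symmetrize(toy)

assert toy(1, 2, 3) != toy(3, 1, 2)          # the raw kernel is not symmetric
assert sym(1, 2, 3) == sym(3, 1, 2) == sym(2, 3, 1)   # the average is
```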
\section{Conclusion} In conclusion, we find that $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~ is a promising platform for highly efficient terahertz generation. We predict an enhancement in efficiency of $>5000$ times in the 3D DSM $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$, compared to conventional materials like $\text{LiNbO}_{\text{3}}$, over a nanoscale propagation distance. Even when no restrictions are placed on propagation distance, $\text{Cd}_{\text{3}}\text{As}_{\text{2}}$~still outperforms $\text{LiNbO}_{\text{3}}$~in efficiency by over $10$ times. Furthermore, our results indicate that tuning the Fermi energy allows us to leverage Pauli blocking to realize a step-like efficiency increase in the optical-to-terahertz conversion process. We achieve these exciting results despite working within the perturbative regime, and we expect similarly promising results at non-perturbative field strengths, a regime which warrants future investigation. We also present closed-form expressions for linear and nonlinear conductivities that take into account the effect of finite temperatures, carrier scattering, and anisotropic Dirac cone band structures with Fermi velocities corresponding to realistic 3D DSM materials. Our findings should pave the way towards the development of efficient ultrathin-film terahertz sources for compact terahertz driven technologies. \begin{acknowledgement} This work is funded by Singapore Institute of Manufacturing Technology (Singapore Institute of Manufacturing Technology - A STAR)-A1984c0043. \end{acknowledgement} \begin{suppinfo} The data is available upon reasonable request. \end{suppinfo} \noindent Notes: The authors declare no competing financial interest.
\section{Introduction} \label{sec:intro} The last decade has seen the rise of effective field theory (EFT) to the forefront of particle physics as the mainstream general framework to process experimental data into theory. This shift was driven by experimental data and by the absence in it of long-heralded evidence, but at the same time EFT also brings changes to the theorist's perspective. The not-so-aptly named non-renormalizable theories possess a well-defined and computable perturbative expansion with a finite set of parameters at any given order in the coupling and loop expansions. Indeed the recent surge in activity has produced general~\cite{Jenkins:2013zja,Jenkins:2013wua,Alonso:2013hga,Alonso:2014zka,Elias-Miro:2013mua,Henning:2014wua,Drozd:2015rsp} and new~\cite{Alonso:2014rga,Cheung:2015aba,Bern:2019wie,Henning:2015alf,Henning:2019enq} results at the quantum level, as well as automatization~\cite{Criado:2017khh,Bakshi:2018ics,Celis:2017hod}. While these works are inspired by the reasons that mark the Standard Model (SM) as incomplete, there is another theory of nature which requires completion, gravity, and it fits the mold of EFT seamlessly~\cite{Donoghue:1994dn,Donoghue:1995cz,Donoghue:2012zc}. Here, in an effort to bring the two closer together, techniques developed in the context of the SM EFT will be generalized to dynamical gravity. To be specific, by means of a covariant derivative expansion~\cite{Gaillard:1985uh}, the one-loop UV divergences generated by gravitational interactions will be computed for Hilbert-Einstein gravity with a cosmological constant (CC) and scalars, fermions and vector bosons. 
A good deal of the final results for UV divergences obtained here have been in the literature for some time~\cite{tHooft:1974toh,Deser:1974cy,Deser:1974cz}, and with the heat-kernel method~\cite{Avramidi:2000bm} general results at the loop level are available~\cite{Fradkin_1977,Christensen:1984dv,Barvinsky:1985an,Vilkovisky:1992pb} by computation of DeWitt coefficients~\cite{DeWitt:1965jb}. The novel aspect of this work is therefore the technique for the computation, which we hope makes the derivation of results in quantum gravity more accessible to a particle physicist, while the quantum gravity practitioner might find that the reduced mathematical machinery makes some aspects of the quantum structure of gravity more pristine. Section~\ref{Del2S} lays out the functional formulation of one loop corrections and computes the field-covariant second order variation of the action while sec.~\ref{CDEGR} presents the transformation and the resulting covariant derivative for gravity. Sec.~\ref{TrLogEv} combines the previous results to compute the UV divergences at one loop. Our conventions are a flat metric as $\eta_{\mu\nu}=$Diag$(1,-1,-1,-1)$ and \begin{align} \nabla_\mu A^{\alpha}&=\partial_\mu A^\alpha+\Gamma_{\mu\nu}^\alpha A^\nu& [\nabla_\mu,\nabla_\nu] A^{\alpha}&\equiv R^{\alpha}_{\,\,\,\beta\mu\nu}A^\beta &R_{\mu\nu}&\equiv R^{\alpha}_{\,\,\,\mu\alpha\nu} \end{align} where we note that part of the literature uses an opposite-sign definition for $R_{\mu\nu}$~\cite{Donoghue:1994dn}. Given that in sec.~\ref{TrLogEv} dimensional regularization is used, we write our formulae in $d$ dimensions with $d$ in the vicinity of 4. 
\section{Second order covariant variation of the action\label{Del2S}} Functional methods have been applied to particle physics over the decades and the recent literature contains complete and accessible descriptions~\cite{Henning:2014wua,Drozd:2015rsp} to which we refer the reader for the detailed formulation; here rather we shall start from a number of results in the literature whose combination is required to tackle gravity. The one-loop corrections to the action can be synthesized into a Gaussian integral as, formally, \begin{align} e^{iS_{\rm eff}[\hat\phi]}=\int D\delta \phi e^{iS[\hat \phi]+i\delta \phi \delta S[\hat \phi]+\frac i2 (\delta \phi)^2\delta^2S[\hat\phi]+\mathcal O(\delta\phi^3)}\simeq e^{iS[\hat\phi]-\frac12 {\rm tr}({\rm log}(-\delta^2 S[\hat\phi])) }\,, \end{align} with $\hat \phi$ the background field, $S_{\rm eff}$ the effective action and the last equality valid to one loop. The one point to be underlined here is that, if one were to use a different variable for the field, related as $\phi=\phi(\varphi)$, the second variation $\delta^2 S$ does not transform as a true tensor: \begin{align} ( \delta \phi)^2 \frac{\delta ^2 S}{\delta \phi\delta \phi}=\left(\delta \varphi \frac{\delta \phi}{\delta \varphi}\right)^2 \frac{\delta ^2 S}{\delta \phi\delta \phi}=(\delta \varphi)^2 \frac{\delta ^2 S}{\delta \varphi\delta \varphi} -(\delta \varphi)^2\frac{\delta^2 \phi }{\delta\varphi\delta\varphi}\frac{\delta S}{\delta \phi}\,. \end{align} This one can remedy by making use of a (true) 2-tensor, the metric in field space: \begin{align} \partial_\mu \phi G(\phi)\partial^\mu\phi\to \partial_\mu \varphi \frac{\partial\phi}{\partial\varphi} G(\phi)\frac{\partial\phi}{\partial\varphi}\partial^\mu\varphi=\partial_\mu\varphi G' (\varphi)\partial^\mu\varphi\,, \end{align} and a covariant derivative {\it in field space}~\cite{Honerkamp:1971sh} $\mathcal D_i V^j= \delta_i V^j+\hat\Gamma^{j}_{ik} V^k$. 
In particular for the action (taken to be a scalar) we have: \begin{align}\label{ConFldS} \mathcal D S=\frac{\delta S}{\delta \phi}\,,&&\mathcal D^2S=\frac{\delta^2 S }{\delta\phi^i\delta\phi^j }-\hat \Gamma^k_{ij}\frac{\delta S}{\delta \phi^k}\,,&& \hat\Gamma^k_{ij}=\frac{(G^{-1})^{kl}}{2}\left(\frac{\delta G_{li}}{\delta \phi^j}+\frac{\delta G_{jl}}{\delta \phi^i} - \frac{\delta G_{ij}}{\delta \phi^l}\right)\,, \end{align} where we note that this applies even if one started with a constant metric $G$ and for some reason wanted to perform a non-linear change of field variable. In this way the covariant action result, including the invariant measure in field space $\sqrt{G}D\phi$, reads at the one-loop level \begin{align} iS_{\rm eff}[\hat \phi]={\rm log}\left(\int\! \!\sqrt{G} D\delta\phi \,e^{iS+i\delta \phi \mathcal D S+i\delta \phi^2\mathcal D^2 S/2}\right)=iS[\hat\phi]-\frac12 {\rm tr}({\rm log}(-(\mathcal D^2 S[\hat\phi]) G^{-1}))\,, \label{Origin} \end{align} where the product $(\mathcal D^2 S[\hat\phi]) G^{-1}$ defines an operator with one covariant and one contravariant field-space index and hence the trace is an `invariant' result, meaning an expression on which physicists who choose to describe a system with different field variables agree. This covariant description also preserves the (linear \& non-linear) symmetries of the original action at the loop level, which one can realise in this formalism as a specific change of variable. 
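The statement that $(\mathcal D^2 S)G^{-1}$ is reparametrization invariant can be illustrated in a one-variable toy model of eq.~(\ref{ConFldS}), where functional derivatives reduce to ordinary ones; the action, metric and change of variables below are arbitrary illustrative choices.

```python
# One-dimensional toy check of eq. (ConFldS): under a field redefinition
# phi = phi(varphi) with Jacobian J = dphi/dvarphi, the covariant Hessian
# D^2 S = S'' - Gamma S' rescales by J^2 only, so (D^2 S) G^{-1} is invariant.
# S(phi) = phi^3, G(phi) = 1 + phi^2 and phi(varphi) are toy choices.

def covariant_hessian(S1, S2, G, dG):
    """1D version of eq. (ConFldS): D^2 S = S'' - Gamma S', Gamma = dG/(2G)."""
    return S2 - 0.5 * (dG / G) * S1

varphi = 0.7
phi = varphi + 0.3*varphi**2           # toy field redefinition
J, dJ = 1 + 0.6*varphi, 0.6            # Jacobian and its varphi-derivative

S1, S2 = 3*phi**2, 6*phi               # dS/dphi, d2S/dphi2 for S = phi^3
G, dG = 1 + phi**2, 2*phi
H = covariant_hessian(S1, S2, G, dG)

# the same quantities in the varphi variable, via the chain rule
S1t = S1*J
S2t = S2*J**2 + S1*dJ                  # naive Hessian: NOT just J^2 * S2
Gt = G*J**2
dGt = dG*J**3 + 2*G*J*dJ               # d/dvarphi of Gt
Ht = covariant_hessian(S1t, S2t, Gt, dGt)

assert abs(Ht - J**2 * H) < 1e-9       # covariant Hessian is tensorial
assert abs(Ht/Gt - H/G) < 1e-9         # (D^2 S) G^{-1} is invariant
```

Note that the naive second derivative `S2t` contains the extra `S1*dJ` piece of the chain rule; it is precisely the Christoffel term in eq.~(\ref{ConFldS}) that removes it.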
Let us then turn to the action at hand to first determine $(\mathcal D^2 S[\hat\phi]) G^{-1}$; here considered is the Hilbert-Einstein action with a cosmological constant and spin 0, 1/2 and 1 matter, \begin{align}\label{OrAct} S=\int dV\left(\frac{1}{2\kappa^2} (2\Lambda-R)+\frac12\left( \nabla_\mu\phi\nabla^\mu\phi-m_\phi^2\phi^2\right)+ \psi^\dagger \sigma^\mu \frac{i\overleftrightarrow{\nabla}_\mu}{2} \psi+ \frac14 F_{\alpha\beta}F^{\beta\alpha}\right)\,, \end{align} with $dV=d^dx\sqrt{-g}$, $\kappa^2=8\pi G_N$ where $G_N$ is Newton's constant. This action describes the Standard Model (SM) plus gravity in the limit of vanishing SM couplings (gauge, Yukawa and quartic) and so with $\Lambda\sim 4\times 10^{-66}$eV$^2$ we believe it describes nature in said limit. For the covariant action the first variation of the action w.r.t. the metric is needed \begin{align}\nonumber \frac{\delta S}{\delta g_{\mu\nu}}=\int dV\Bigg(&-\frac{1}{2\kappa^2}\left(\frac{g^{\mu\nu}}{2}\left(R-2\Lambda \right)-R^{\mu\nu}\right)+\frac12\left(\frac{g^{\mu\nu}}{2}\left(\partial\phi^2-m_\phi^2\phi^2\right)-\partial^\mu\phi \partial^\nu\phi\right)\\&+\frac{i}{4}\psi^\dagger\left(g^{\mu\nu}\sigma\overleftrightarrow\nabla-\frac{\sigma^{\mu}\overleftrightarrow\nabla^{\nu}+\sigma^{\nu}\overleftrightarrow\nabla^{\mu}} 2\right)\psi +\frac{1}{8}g^{\mu\nu}(F F)-\frac12 (F F)^{\mu\nu}\Bigg)\,, \end{align} whereas for matter fields we have linear realizations, that is, with the chosen variables their `metrics' are flat and hence $\hat\Gamma[\phi,\psi,A]=0$. 
The metric itself ($g_{\mu\nu}$) in contrast does have a `metric' ($G^{\mu\nu,\rho\sigma}$); not to dwell on linguistics, let us anticipate results and simply give it here: \begin{align}\label{MetCon} G^{\alpha\beta,\sigma\rho}(g)=&\frac14\left(g^{\alpha(\sigma}g^{\rho)\beta}-g^{\alpha\beta}g^{\rho\sigma}\right)\,, && \hat\Gamma^{\alpha\beta,\rho\sigma}_{\mu\nu}=-\frac18\, g^{(\alpha}_{\,\,(\mu}g^{(\rho}_{\,\,\nu)} g^{\beta)\sigma)}\,, \end{align} where parentheses around indices denote symmetrization $V_{(\alpha} W_{\beta)}=V_\alpha W_\beta+V_\beta W_\alpha$, with the opposite placement of indices from the usual one, a convention that follows from our choice of component field $g_{\mu\nu}$. This somewhat unfamiliar language might be more accessible if we note that the graviton propagator, or `inverse' of the two-point action, contains the inverse of the metric $G$, $G^{-1}_{\alpha\beta,\rho\sigma}=g_{\alpha(\sigma}g_{\rho)\beta}-g_{\alpha\beta}g_{\rho\sigma}$. Otherwise this treatment for a covariant result is not new in gravity and is related to what is at times termed a Vilkovisky's action~\cite{Vilkovisky:1984st}. The covariant second order variation then reads \begin{align} \mathcal D^2S\equiv\frac12\delta g^2\mathcal D^2S+\frac12\delta\Phi^2\frac{\delta^2S}{\delta\Phi\delta\Phi}=\frac12\left((\delta g\frac{\delta^2 S}{\delta g\delta g}\delta g)+(\delta g\frac{\delta S}{\delta g}\delta g)\right)+\frac12\delta\Phi^2\frac{\delta^2S}{\delta\Phi\delta\Phi}\,. \end{align} Next the explicit expression for $(\mathcal D^2 S[\hat\phi]) G^{-1}$ arising from each piece of the action in~(\ref{OrAct}) is given, for which purpose we define: \begin{align} &S^{(2)}_{n}=\frac12\delta \phi^2 \mathcal D^2 S_{n}=\int dV \mathscr{L}^{(2)}_{n}\,,& &\{S_n\}=\{S_g\,,\,S_\phi\,,\,S_\psi\,,\,S_{A}\}\,. 
\end{align} \subsection{Hilbert-Einstein and cosmological constant} The covariant second order variation of the Hilbert-Einstein action with a cosmological constant reads (with an abuse of notation we compute variations from eq.~(\ref{OrAct}) with $g_{\mu\nu}\to g_{\mu\nu}+\delta g_{\mu\nu}$ so that the background field is $g$ which is also understood to raise and lower indices from now on) \begin{align}\nonumber S^{(2)}_g=\int \frac{-\sqrt{|g|}}{4\kappa^2}\Big(&(\delta g) \nabla^\alpha\nabla^\beta \delta g_{\alpha\beta}-\delta g_{\alpha\beta} \nabla^\beta\nabla^\rho\delta g_{\rho}^{\,\,\alpha}+\frac12\delta g_{\alpha\beta}\nabla^2\delta g^{\alpha\beta}-\frac{1}{2}(\delta g)\nabla^2(\delta g)\\ &+R^{\alpha\rho\beta\sigma}\delta g_{\alpha\beta}\delta g_{\rho\sigma}-(\delta g) R^{\alpha\beta}\delta g_{\alpha\beta}+\frac{R-2\Lambda}{4}(\delta g)^2 \Big)d^dx\,, \end{align} where a two-index object within parentheses means it is traced over, $(\delta g)=\delta g_{\mu\nu}g^{\mu\nu}$. As with other gauge theories, the path integral has a large redundant integration volume associated here to the linearised symmetry: \begin{align} (\delta g_{\epsilon})_{\mu\nu}=\delta g_{\mu\nu}+\nabla_{(\nu} \epsilon_{\mu)}\,, \end{align} which one disposes of with the Faddeev-Popov procedure. The function $\mathcal X_\mu(\delta g)=\nabla^\nu \delta g_{\nu\mu}-\frac12\nabla_\mu(\delta g)$ is used for gauge fixing and requires an extra term in the action \begin{align} 1&=\int\! D\epsilon \delta \left(\mathcal X(g_\epsilon)\right) {\rm det} \left(\frac{\delta \mathcal X(\delta g_{\epsilon})} {\delta \epsilon^\mu}\right)= \int \!D\epsilon \delta \left(\mathcal X(g_\epsilon)\right)\int\! 
D\bar c D c e^{-i\int dV \bar c^\mu\left(g_{\mu\nu}\nabla^2+R_{\mu\nu}\right)c^\nu}\,, \end{align} with $c_\mu$ the wrong-statistics auxiliary field, our ghosts, and adding the term \begin{align} S_\xi=\int\frac{1}{8\kappa^2\xi}\left(\nabla^\nu \delta g_{\nu\mu}-\frac12\nabla_\mu(\delta g)\right)^2dV\,, \end{align} leads to the harmonic gauge when $\xi=1$, which is selected here for computational simplicity. In this gauge the kinetic term reads: \begin{align} -\frac{1}{4\kappa^2}\left(\frac{\delta g_{\mu\nu}}{2}\nabla^2\delta g^{\mu\nu}-\frac 14 (\delta g) \nabla^2(\delta g)\right)=-\frac{\delta g_{\alpha\beta}}{4\kappa^2}\nabla^2\left(\frac14 g^{\alpha(\rho}g^{\sigma)\beta}-\frac14g^{\alpha\beta}g^{\rho\sigma}\right)\delta g_{\rho\sigma}\,, \end{align} from where the metric in eq.~(\ref{MetCon}) follows. Note that, as regards the overall normalization, this metric yields off-diagonal components as $\delta g\, G\,\delta g= \delta g_{i<j}^2+...$ for a flat metric. As a final step we raise the indices of one of the variations with the metric $G$, so that the resulting operator is ready to be traced over, which results in a remarkably simple expression: \begin{align} S_{g+\xi+c}^{(2)}=&-\int dV \bar c^\mu\left(g_{\mu\nu}\nabla^2+R_{\mu\nu}\right)c^\nu\\\nonumber &- \int \frac{1}{4\kappa^2} \delta g_{\alpha\beta}\left(g^{\alpha}_{\,\,(\rho}g^{\beta}_{\,\,\sigma)}\frac{\nabla^2}{2}+R^{\alpha\,\,\,\,\,\beta}_{\,\,\,(\rho\,\,\,\sigma)}-g^{\alpha\beta} R_{\sigma\rho}+\Lambda g^{\alpha\beta}g_{\rho\sigma}\right)(G\cdot\delta g)^{\rho\sigma}dV\,. 
\end{align} \subsection{Scalars} The addition of a scalar field brings an extra contribution to the graviton variation as well as mixed $\phi-g$ terms: \begin{align}\nonumber S^{(2)}_\phi=\int \Big(-&\frac12\delta \phi (\nabla^2+m_\phi^2)\delta \phi+\frac14\left((\partial \phi\delta g \delta g\partial\phi)-(\delta g)(\partial\phi\delta g\partial \phi)+\frac14 (\delta g)^2((\partial\phi)^2-m_\phi^2\phi^2) \right)\\ -&(\partial\phi\delta g\partial\delta \phi )+\frac{(\delta g)}{2}(\partial\phi\partial\delta \phi-m_\phi^2\phi\delta\phi)\Big)dV\,, \end{align} where again a two-index object within parentheses means it is traced over and $\delta g$ in between two $\partial\phi$ is taken as a vector-matrix scalar product, e.g.\ $(\partial\phi \delta g \partial\phi)=\partial^\mu\phi\delta g_{\mu\nu}\partial^\nu\phi$. The mixed terms are removed here by completing squares without modifying the measure~\cite{Henning:2016lyp}: \begin{align} \delta \phi\to \delta \phi-\frac{1}{\nabla^2+m_\phi^2}\left(\frac{(\nabla \partial\phi (\delta g))+m_\phi^2\phi(\delta g)}{2}-(\nabla \delta g \partial \phi)\right)\,. 
\end{align} This results in, after raising an index of the graviton variation, \begin{align}\label{ScUg} \mathscr{L}^{(2)}_\phi=& -\frac12\delta \phi (\nabla^2+m_\phi^2)\delta\phi\\ \nonumber &-\frac{\delta g_{\alpha\beta}}{4\kappa^2}\left(\kappa^2g^{\alpha\beta}\left(\phi_{;\rho}\phi_{;\sigma}-\frac{g_{\rho\sigma}(m_\phi\phi)^2}{2}\right)-\frac{\kappa^2}{2}\phi^{;(\alpha}\phi_{;(\rho}\, g^{\beta)}_{\,\,\,\sigma)} \right)(G\cdot \delta g)^{\rho\sigma}\\ \nonumber &-\frac{\delta g_{\alpha\beta}}{4\kappa^2}\left(\left(g^{\mu(\alpha}\phi^{;\beta)}-g^{\alpha\beta}\phi^{;\mu}\right)\nabla_\mu+m_\phi^2\phi g^{\alpha\beta}\right)\frac{\kappa^2}{\nabla^2+m_\phi^2}\left(\nabla_{(\rho} \phi_{;\sigma)}+g_{\rho\sigma}m_\phi^2\phi \right) (G\delta g)^{\rho\sigma} \end{align} where, to keep the equations at a manageable length, we have used the semi-colon notation $\phi_{;\alpha}=\nabla_{\alpha}\phi$, and the explicit $\nabla$'s are to be taken as acting on everything on their right, termed `open' derivatives. A global transformation $g_{\mu\nu}\to(1+ \alpha) g_{\mu\nu}$, $\delta \phi \to(1+ \frac{2-d}{4}\alpha )\delta \phi$ leaves the action invariant (for $m_{\phi}\to0$), whereas one can change the scalar action into \begin{align} \mathscr{L}_{\phi_{CFT}}=-\frac12\phi\left(\nabla^2-\frac{d-2}{4(d-1)}R\right)\phi\,, \end{align} for a locally scale-invariant action. \subsection{Fermions} The diffeomorphism-invariant Weyl-fermion kinetic term in eq.~(\ref{OrAct}) is, explicitly, \begin{align} \frac i2 \psi^\dagger \sigma^\mu\overleftrightarrow\nabla_\mu\psi =\frac i2 \psi^\dagger \sigma^c e^\mu_c\left(\partial_\mu +\frac{\bar\sigma^{[a}\sigma^{b]}}{8}e^a_{\nu}(\partial_\mu e^{b,\nu}+\Gamma_{\mu\rho}^\nu e^{b,\rho} ) \right)\psi+h.c. \end{align} where $e^{\mu}_ae_b^{\nu}\eta^{ab}=g^{\mu\nu}$, $\sigma^a=(1,\vec\sigma)$, $\bar\sigma^a=(1,-\vec\sigma)$, and $\psi$ is a RH fermion ($\psi^{\dot\alpha}$). 
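As a quick numerical sanity check on these conventions, the Clifford relation $\sigma^{(a}\bar\sigma^{b)}=2\eta^{ab}$ underlying the spin connection above can be verified with explicit Pauli matrices. The sketch below is illustrative only; it assumes the mostly-minus signature $\eta={\rm diag}(+,-,-,-)$, which is not stated explicitly here, and uses the text's unnormalized symmetrization $V_{(a}W_{b)}=V_aW_b+V_bW_a$:

```python
import numpy as np

# Pauli matrices and the (sigma^a, bar sigma^a) four-vectors of the text
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

sigma = [I2, s1, s2, s3]                 # sigma^a = (1, vec sigma)
sigbar = [I2, -s1, -s2, -s3]             # bar sigma^a = (1, -vec sigma)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus flat metric (assumed)

# Clifford relation sigma^(a bar sigma^b) = 2 eta^{ab}, unnormalized symmetrization
for a in range(4):
    for b in range(4):
        lhs = sigma[a] @ sigbar[b] + sigma[b] @ sigbar[a]
        assert np.allclose(lhs, 2 * eta[a, b] * I2)
```

The same algebra is what later permits trading products of sigma matrices for the metric when squaring the fermionic operator.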
In the following a Greek letter (or symbol) as index for the sigma matrices denotes contraction with the vierbein $\sigma\cdot e_\mu=\sigma_ae^a_\mu\equiv\sigma_{\mu}$. The second order covariant action is \begin{align}\nonumber S^{(2)}_\psi=\int\frac {i}{2}&\Big[\,\delta \psi^\dagger \sigma \nabla \delta \psi-h.c.+\frac{i(\nabla_\mu\delta g_{\alpha\beta})\delta g^\beta_{\,\,\,\rho}}{8}\psi^\dagger \varepsilon^{\mu\alpha\rho\nu}\sigma_{\nu}\psi\\ \nonumber &+\left( \frac{(\delta g)^2}{8}\psi^\dagger \sigma\nabla\psi+\frac 18 \psi^\dagger \sigma \delta g\delta g\nabla \psi-\frac{(\delta g)}{4}\psi^\dagger \sigma\delta g\nabla \psi\right)-h.c.\\ &+\left(\delta\psi^\dagger\frac{(\delta g)\sigma\nabla-(\sigma\delta g \nabla)}{2}\psi+\psi^\dagger\frac{(\delta g)\sigma\nabla-(\sigma\delta g \nabla)}{2}\delta\psi\right)-h.c. \Big]dV\,, \end{align} with $\varepsilon^{\mu\nu\rho\lambda}=e_a^\mu e_b^\nu e_c^\rho e_d^\lambda\epsilon^{abcd}$, $\epsilon^{0123}=1$. Here as well a redefinition of the integration field $\delta \psi$ can be used, \begin{align} \delta \psi\to \delta \psi -\frac{1}{\sigma\nabla}\frac{(\delta g)\sigma\nabla-(\sigma\delta g \nabla)}{2}\psi\,, \end{align} to reduce the action to diagonal form \begin{align}\nonumber \mathscr{L}^{(2)}_\psi= \frac {i}{2}&\Big[\,\delta \psi^\dagger \sigma \nabla \delta \psi-h.c. 
+\frac{i}{8}(\delta g \nabla_\mu\delta g)_{\rho\alpha}\psi^\dagger \varepsilon^{\mu\alpha\rho\nu}\sigma_\nu\psi\\ \nonumber &+\left( \frac{(\delta g)^2}{8}\psi^\dagger \sigma\nabla\psi+\frac 18 \psi^\dagger \sigma \delta g\delta g\nabla \psi-\frac{(\delta g)}{4}\psi^\dagger \sigma\delta g\nabla \psi\right)-h.c.\\ &-\left(\frac{1}{\sigma\nabla}\frac{(\delta g)\sigma\nabla-(\sigma\delta g \nabla)}{2}\psi\right)^\dagger\frac{(\delta g)\sigma\nabla-(\sigma\delta g \nabla)}{2}\psi-h.c.\quad\Big]\,, \end{align} this variation, modulo the equation-of-motion piece, agrees with the Feynman rule for a two-graviton two-fermion vertex as in~\cite{Bjerrum-Bohr:2014lea}. The raising of the rear index of the operator in metric space reads \begin{align}\label{S2psi} \mathscr L^{(2)}_\psi=\frac{i}{2}\delta \psi^\dagger\sigma&\overleftrightarrow\nabla\delta\psi\\ \nonumber -\frac{\delta g_{\alpha\beta}}{4}\Bigg[&\frac{g_{\rho\sigma}}{4}\left(g^{\alpha\beta}\psi^\dagger i\sigma_\mu\psi^{;\mu}-\frac{\psi^\dagger i\sigma^{(\alpha} \psi^{;\beta)}}{2}\right)+h.c.-\frac{1}{16}\left\{\psi^\dagger\varepsilon^{\mu(\alpha\,\,\,\,\nu}_{\,\,\,\,\,\,\,\,(\rho}\sigma_\nu\psi\, g^{\beta)}_{\sigma)},\nabla_\mu\right\}\\ \nonumber& +\frac{g^{\alpha\beta}\psi^\dagger i\sigma_{(\rho} \psi_{;\sigma)} }{4} -\frac{g^{(\beta}_{(\sigma}\psi^\dagger\left(i\sigma^{\alpha)} \psi_{;\rho)}+i\sigma_{\rho)}\psi^{;\alpha)}\right)}{16}+h.c.\\ \nonumber & +\frac12\left((\psi^{;\mu})^\dagger\sigma_\mu g^{\alpha\beta}-\frac{(\psi^{;(\alpha})^\dagger\sigma^{\beta)}}{2}\right)\frac{i}{\sigma\overleftrightarrow\nabla}\left(g_{\rho\sigma}\sigma^\nu\psi_{;\nu}+\sigma_{(\rho}\psi_{;\sigma)}\right)\Bigg](G\delta g)^{\rho\sigma} \end{align} where once more we resorted to semicolons for derivatives on background fields, whereas the remaining $\nabla$ act on everything in the direction of their arrow and $\{,\}$ is the anticommutator. Here, as in the scalar case, one has derivatives acting on the field variation, i.e. 
`open' derivatives, but as opposed to the spin 0,1 case the action is linear in $\nabla$, which is of relevance for the loop integral analysis as shown in sec.~\ref{CDEGR}. In addition we convert the Grassmannian Gaussian integral into an opposite-sign scalar integral as $e^{{\rm tr\,log}\mathcal O}= e^{1/2{\rm tr\,log}(\mathcal O\mathcal O^\dagger)}$, for which purpose the following relations are used \begin{align} &\nabla _{[\mu}\nabla_{\nu]}\psi=\frac{\sigma^{[a}\bar\sigma^{b]}}8e_{a,\rho}e^\lambda_{b} R^{\rho}_{\,\,\lambda\mu\nu}\psi\,, && \sigma^\mu\sigma^\nu\nabla_\mu\nabla_\nu=\nabla^2-\frac R4\,. \end{align} \subsection{Vector boson} For gauge vector bosons one has a kinetic term, in our matrix notation, \begin{align} S_{A}=-\int d^dx\frac{\sqrt{-g}}{4}F_{\mu\nu}F_{\alpha\beta}g^{\mu\alpha}g^{\nu\beta}=\int d^dx\frac{\sqrt{-g}}{4} (F\,F)\,, \end{align} whose second order covariant variation reads \begin{align}\nonumber S^{(2)}_A=\int dV\Bigg(&\frac14\left( \frac{(\delta g)^2}8(F F)+(F\delta g\delta g F)+(F\delta g F\delta g)-(\delta g) (F\delta g F )\right)\\ &+\frac14\left((\delta F \delta F)-2(F\delta F\delta g )-2(F\delta g\delta F )+(\delta g)( F \delta F )\right)\Bigg)\,. \end{align} The gauge symmetry acting on the variation of the vector boson field $\delta A_\mu$ is, in the limit of vanishing gauge coupling, \begin{align} (\delta A_\epsilon)_\mu=\delta A_\mu+\nabla_\mu \epsilon(x)\,. \end{align} The second order variation on gauge fields, explicitly, is \begin{align}\nonumber -&\frac{\sqrt{-g}}2\delta A_\lambda \left(g^{\lambda\alpha}g^{\sigma \beta} \nabla_\beta\nabla_\alpha-g^{\lambda\sigma}\nabla^2\right)\delta A_\sigma\\ =&-\frac{\sqrt{-g}}2\delta A_\lambda \left(g^{\lambda\alpha}g^{\sigma \beta} \nabla_\alpha\nabla_\beta+R^{\sigma\lambda}-g^{\lambda\sigma}\nabla^2\right)\delta A_\sigma\,, \end{align} which we supplement with gauge fixing via the function $\mathcal X(\delta A)=\nabla_\mu \delta A^\mu$. 
The ghost action is not innocuous even for a $U(1)$ symmetry, since the ghost Lagrangian depends on the metric background: \begin{align} 1&=\int D\epsilon \delta \left(\mathcal X( \delta A_\epsilon)\right) {\rm det} \left(\frac{\delta \mathcal X( \delta A_\epsilon)} {\delta \epsilon}\right)= \int D\epsilon \delta \left(\mathcal X( \delta A_\epsilon)\right)\int Dc D\bar c e^{-i\int dV \bar c\nabla^2c}\,. \end{align} The gauge fixing term $\mathscr L_\xi=-(\nabla \delta A)^2/(2\xi)$ is added to the action and the Feynman gauge is selected in the following, again for computational simplicity. As for the mixed terms, the redefinition that eliminates them is \begin{align} \delta A_\lambda\to \delta A_\lambda-\frac12 (\nabla^2-R)^{-1}_{\lambda\omega}\nabla_\mu\left((\delta g F+F\delta g)^{[\omega\mu]}-(\delta g)F^{\omega \mu}\right)\,, \end{align} which leaves behind the term \begin{align}\nonumber \mathscr{L}_A^{(2)}\supset-\frac{1}{8}\nabla_\mu\left((\delta g F+ F\delta g)^{[\lambda\mu]}-(\delta g)F^{\lambda\mu}\right)(\nabla^2-R)^{-1}_{\lambda\omega}\nabla_\nu\left((\delta g F+ F\delta g)^{[\omega\nu]}-(\delta g)F^{\omega\nu}\right)\,, \end{align} that combines with the remaining terms to give \begin{align}\label{S2Amu} \mathscr L^{(2)}_{A+\xi+c}=\frac{1}{2} \delta A_\rho& \left(g^{\rho\sigma}\nabla^2-R^{\rho\sigma}\right)\delta A_\sigma-\bar c \nabla^2 c\\ \nonumber -\frac{\delta g_{\alpha\beta}}{4\kappa^2}&\Bigg[g_{\rho\sigma}\left((FF)^{\alpha\beta}-\frac{g^{\alpha\beta}}{4}(FF)\right)+g^{\alpha\beta}(FF)_{\rho\sigma}-F^{\alpha}_{\,\,\,(\rho} F^{\,\,\,\,\beta}_{\sigma)}-\frac{(FF)^{(\alpha}_{(\rho} g^{\beta)}_{\sigma)}}{2}\\ \nonumber &-\left(g^{[\lambda(\alpha}F^{\beta) \mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)\nabla_\mu(\nabla^2-R)^{-1}_{\lambda\omega}\nabla_\nu\left(g^{[\omega}_{(\rho} F_{\sigma)}^{\,\,\nu]}-g_{\rho\sigma}F^{\omega\nu}\right)\Bigg](G\delta g)^{\rho\sigma}\,. 
\end{align} {\bf Collection of formulae} The one-loop action then is the sum of the tr log of the operators above as \begin{align}\nonumber S_{1\rm loop} =\frac i2\mbox{tr}\left[\log \mathcal O_{\delta g}\right]-i\mbox{tr}\left[\log \mathcal O_{c_\mu}\right]+\frac i2\mbox{tr}\left[\log \mathcal O_\phi\right]-\frac i 2\mbox{tr}\left[\log \mathcal O_\psi\right]+\frac i2\mbox{tr}\left[\log \mathcal O_A\right]-i\mbox{tr}\left[\log \mathcal O_c\right]\,, \end{align} where the operators are, for the different Lorentz representations considered here, \begin{align} \label{SmmVar} \mathcal O_\phi&=\nabla^2 +m_\phi^2\,,& \mathcal O_c&=\nabla^2\,, & \mathcal O_{\psi}&= \nabla^2-\frac{R}{4}\,, & \mathcal O_A&=g_{\mu\nu}\nabla^2-R_{\mu\nu}\,, \end{align} \begin{align} \mathcal O_{c_\mu}&= g_{\mu\nu}\nabla^2+R_{\mu\nu}\,,& \mathcal O_g&= \frac{g^\alpha_{(\rho} g^\beta_{\sigma)}}{2}\nabla^2 +R^{\alpha\,\,\,\,\,\beta}_{\,\,\,(\rho\,\,\,\sigma)}-g^{\alpha\beta} R_{\sigma\rho}+\Lambda g^{\alpha\beta}g_{\rho\sigma}+\mathcal O_T\,, \end{align} where the matter-field-dependent operator $\mathcal O_T$ can be written as \begin{align}\nonumber \mathcal O_T\cdot G=&\frac{-2\kappa^2}{\sqrt{|g|}}\left(\mathcal D^2(\sqrt{|g|} \mathscr L_T)+ \mathcal D\left(\frac{\delta \sqrt{|g|} \mathscr L_T }{ \delta \Phi}\right)\frac{1}{\sqrt{|g|}\mathcal O_\Phi}\mathcal D\left(\frac{\delta(\sqrt{|g|} \mathscr L_T)}{\delta \Phi}\right)\right)\\ =&\frac{\kappa^2}{\sqrt{|g|}} \mathcal D (\sqrt{|g|}T)-\frac{\kappa^2}{2}\frac{\delta T}{\delta\Phi}\frac{1}{\mathcal O_\Phi} \frac{\delta T}{\delta \Phi} \end{align} where $\mathscr L_T$ is the matter Lagrangian, $T$ is the stress-energy tensor, $-\sqrt{|g|}T=2\delta (\sqrt{|g|}\mathscr L_T)=\mathcal D (\sqrt{|g|}\mathscr L_T)$, and $\mathcal D$ is the covariant derivative in metric-field space. The first term above contains the connection $\hat \Gamma$ as in eq.~(\ref{MetCon}), whereas the second term does not, since it is made up of first derivatives only. 
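As an aside, the claim that $G^{-1}_{\alpha\beta,\rho\sigma}=g_{\alpha(\sigma}g_{\rho)\beta}-g_{\alpha\beta}g_{\rho\sigma}$ inverts the metric of eq.~(\ref{MetCon}) on symmetric tensors can be checked numerically on a flat background. The sketch below is illustrative only (plain numpy, $d=4$, mostly-minus signature assumed); note that this check singles out four dimensions, since for $d\neq4$ the trace part of $G\cdot G^{-1}$ no longer cancels:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # flat background metric (signature assumed)
gi = np.linalg.inv(g)

# G^{alpha beta, sigma rho} of eq. (MetCon) and its claimed inverse, with the
# text's unnormalized symmetrization V_(a W_b) = V_a W_b + V_b W_a
G = 0.25 * (np.einsum('as,rb->abrs', gi, gi) + np.einsum('ar,sb->abrs', gi, gi)
            - np.einsum('ab,rs->abrs', gi, gi))
Gi = (np.einsum('as,rb->abrs', g, g) + np.einsum('ar,sb->abrs', g, g)
      - np.einsum('ab,rs->abrs', g, g))

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 4)); h = h + h.T   # arbitrary symmetric variation
# G . G^{-1} must act as the identity on symmetric two-tensors
assert np.allclose(np.einsum('abrs,rsmn,mn->ab', G, Gi, h), h)
```

The same contraction pattern is what the text denotes by raising an index with $(G\cdot\delta g)^{\rho\sigma}$.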
The explicit form of $\mathcal O_T$ here is collected from eqs.~(\ref{ScUg},\ref{S2psi},\ref{S2Amu}). \section{Covariant derivative transformation \label{CDEGR}} All the operators obtained from the second order variation of the action have the structure \begin{align} \mathcal O_\Phi\equiv\mathbb {I}_{\Phi}\nabla^2+\{\nabla,V\}+U_\Phi(\nabla,x)\,,\label{OpDef} \end{align} with the `identity' $\mathbb {I}_\Phi$ being on whatever state we are considering, both in Lorentz representation and internal space, and $U$ a series in inverse powers of $\nabla$ starting at degree $0$. To evaluate the tr log of such an operator one can introduce momentum and position eigenstates as customary~\cite{Henning:2014wua} and write \begin{align} \frac i2 {\rm tr}({\rm log}(\mathcal O))=\frac i2\int d^dx\frac{ d^dq}{(2\pi)^d}\mbox{tr}(e^{iqx}({\rm log}\mathcal O)e^{-iqx})\,, \end{align} which specifically turns open derivatives into $ e^{-iqx} \nabla e^{iqx}=iq+\nabla$, where $q$ is taken to be covariant $q_\mu$ as opposed to the contravariant $x^\mu$ so that $d^dqd^dx$ is invariant. This representation turns spacetime derivatives $\partial_\mu$ acting on the `quantum' field one is integrating (tracing) over into $iq$, yet this is not a covariant description; in the present case there is in addition the connection $\Gamma$ in our covariant derivatives. A general and simple way of evaluating the operator in a covariant manner all throughout is to perform a unitary transformation which turns covariant derivatives into field strengths, i.e. commutators of $\nabla$~\cite{Gaillard:1985uh}. The naive application of this procedure to gravity nonetheless does not yield the desired outcome, \begin{align} e^{i\partial_q\nabla }e^{-iqx}\nabla_\mu e^{iqx}e^{-i\partial_q\nabla }=e^{i\partial_q\nabla }(iq_\mu+\nabla_\mu) e^{-i\partial_q\nabla } =iq_\mu+\partial_q^. 
[\nabla_.,q_\mu]+ \mathcal O(q^{-1})\,, \end{align} where $\partial_q=\partial/\partial q_\mu$, $\partial_q\nabla=\partial_q^\mu\nabla_\mu$ and $[\nabla_\mu,q_\nu]$ is $-\Gamma_{\mu\nu}^{\rho}q_\rho$. In addition this same non-commutativity means that the transformation as in the above is not unitary, since: \begin{align}\label{nonUn} \left(\partial_q\nabla\right)^\dagger=\overleftarrow{\nabla}\overleftarrow{\partial_q}=\nabla\partial_q =\partial_q\nabla+[\nabla,\partial_q]\,. \end{align} The transformation yielding a covariant description must therefore be extended; let us write a transformation $e^{iT}$ and an expansion in $q$ as \begin{align} &e^{iT}; &T&=\sum_n T_{(n)}\,,&T_{(n)}(\lambda q)&=\lambda^{-n}T_{(n)}(q)\,, \end{align} and so, using the Baker-Campbell-Hausdorff formula, one can expand the matrix product into a sum of nested commutators; for the first few terms \begin{align} e^{iT }e^{-iqx}\nabla_\mu e^{iqx}e^{-iT }=e^{iT }\left(iq_\mu+\nabla_\mu \right)e^{-iT }=iq_\mu-[T_{(1)},q_\mu]+\nabla_\mu +\mathcal O(q^{-1})\,, \end{align} and to first order \begin{align} T_{(1)}=\frac12\{\partial_q^\mu\,, \nabla_\mu \} +\frac{1}{4}\{[\partial_q\nabla ,\partial_q^\nu], q_\nu\}\,, \end{align} returns $e^{iT}(iq+\nabla) e^{-iT}=iq+\mathcal O(q^{-1})$. As in the case without gravity the field strengths appear at order $q^{-1}$, which reads \begin{align} e^{iT }\left(iq_\mu+\nabla_\mu \right)e^{-iT }=iq_\mu-[T_{(2)},q_\mu]-\frac12[T_{(1)},[T_{(1)},iq_\mu]]+i[T_{(1)},\nabla_\mu]+\mathcal O(q^{-2})\,. \end{align} Here, in contrast to the flat case and once more due to the non-commutativity of $\nabla$ and $q\,\&\,\partial_q$, one has that terms like $\{[\partial_q^\nu,\nabla_\mu],\nabla_\nu\}/2\subset[T_{(1)},\nabla]$ with open derivatives, together with non-covariant $\Gamma$ terms, appear. This is what complicates the procedure and means one has to iterate and determine $T_{(2)}$ by canceling these terms. 
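The nested-commutator expansion invoked above can be spot-checked in a finite-dimensional setting, where $e^{T}Be^{-T}=B+[T,B]+\frac12[T,[T,B]]+\dots$ holds for ordinary matrices. The sketch below is purely illustrative (plain numpy with a hand-rolled truncated matrix exponential, valid for a small generator):

```python
import numpy as np

def mexp(a, nterms=30):
    # truncated Taylor series for the matrix exponential (fine for small ||a||)
    out = np.eye(a.shape[0]); term = np.eye(a.shape[0])
    for k in range(1, nterms):
        term = term @ a / k
        out = out + term
    return out

def comm(x, y):
    return x @ y - y @ x

rng = np.random.default_rng(0)
T = 1e-3 * rng.standard_normal((4, 4))   # small generator: truncation error is O(T^4)
B = rng.standard_normal((4, 4))

# e^{T} B e^{-T} = B + [T,B] + [T,[T,B]]/2 + [T,[T,[T,B]]]/6 + ...
exact = mexp(T) @ B @ mexp(-T)
series = B + comm(T, B) + comm(T, comm(T, B)) / 2 + comm(T, comm(T, comm(T, B))) / 6
assert np.allclose(exact, series, atol=1e-8)
```

In the text the same expansion is organized order by order in $q^{-1}$, which is what makes the iterative determination of the $T_{(n)}$ tractable.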
Solving for $T_{(2)}$ results in \begin{align} T_{(2)}=&-\frac i8 \{[\partial_q\nabla,\partial_q^\mu],\nabla_\mu\}-\frac{i}{24}\{\left[\partial_q\nabla,[\partial_q\nabla,\partial_q^\mu]\right], q_\mu\}\,, \end{align} and \begin{align} e^{iT }\left(iq_\mu+\nabla_\mu \right)e^{-iT }=iq_\mu+\frac i 4\{\partial_q^\nu,[\nabla_\nu,\nabla_\mu]\}+\frac i{12}R^\nu_{\,\,..\mu}\{\partial_q^{.2},q_\nu\}+\mathcal O(q^{-2})\,. \end{align} After solving for $T_{(2)}$, nonetheless, the order $q^{-2}$ transformed covariant derivative still presents open-derivative and non-covariant terms, and one iterates the procedure to solve for $T_{(3)}$. An all-order solution for this transformation could not be found here, so the pertinent question is then how many orders in $q^{-1}$ are required to encompass the UV divergences which are the subject of this work; anticipating results from sec.~\ref{TrLogEv}, the answer, for four dimensions, is two more terms, \begin{align} T_{(3)}=&-\frac{1}{24}\{[\partial_q\nabla,\partial_q^\mu][\nabla_\mu,\partial_q^\nu] ,\nabla_\nu\} \\\nonumber&-\frac{1}{48}\{[\partial_q \nabla,\partial_q^\mu]\partial_q^\nu,[\nabla_\mu,\nabla_\nu] \}-\frac{1}{48}\{[\partial_q\nabla,\partial_q^\mu][\nabla_\mu,[\partial_q\nabla,\partial_q^\nu]],q_\nu\}+\mathcal O([\nabla,\partial_q]^2)\\ T_{(4)}=&-\frac{i}{288} \{\left[\partial_q\nabla,\left[\partial_q\nabla,\left[\partial_q\nabla,\partial_q^\mu\right]\right]\right],\nabla_\mu\} +\frac{i}{144}\{\left[\partial_q\nabla,\left[\partial_q\nabla,\partial_q^\mu\right]\partial_q^\nu\right],[\nabla_\mu,\nabla_\nu]\}\\ \nonumber &-\frac{i}{1440} \{\left[\partial_q\nabla,\left[\partial_q\nabla,\left[\partial_q\nabla,\left[\partial_q\nabla,\partial_q^\mu\right]\right]\right]\right],q_\mu\}+\frac{i}{240} \{\left[\partial_q\nabla,\left[\partial_q\nabla,\partial_q^\mu\right]\right]\left[\nabla_\mu,\left[\partial_q\nabla,\partial_q^\nu\right]\right],q_\nu\}\\ 
&-\frac{i}{1440}\{\left[\partial_q\nabla,\left[\partial_q\nabla,\partial_q^\mu\right]\right]\left[\partial_q\nabla,\left[\nabla_\mu,\partial_q^\nu\right]\right],q_\nu\}+\mathcal O([\nabla,\partial_q])\nonumber \end{align} where by $\mathcal O([\nabla,\partial_q]^n)$ we mean terms which are proportional to the connection $\Gamma$ to the $n$-th power (recall $[\nabla,\partial_q]\sim\Gamma\partial_q$) and vanish in an inertial frame $\Gamma\to0$, as opposed to derivative $\partial_x^n\Gamma$ terms. It is legitimate to drop these terms, since the final result for the covariant derivative $e^{iT}(iq+\nabla)e^{-iT}$ will be covariant at the order we are working to; e.g.\ we need to consider $[T_{(3)},\nabla]$, so orders $\mathcal O([\nabla,\partial_q])$ must be retained in $T_{(3)}$, but $\mathcal O([\nabla,\partial_q]^2)$ can be dropped, as we do. If one were, however, to descend one more order, these omitted terms would be needed. The transformation, to this order, turns the derivative $iq+\nabla$ into: \begin{align}\label{FistCDE} e^{iT}(iq_\mu+\nabla_\mu)e^{-iT}=& iq_\mu+\frac i 4\{\partial_q^\nu,[\nabla_\nu,\nabla_\mu]\}+\frac i{12}R^\nu_{\,\,..\mu}\{\partial_q^{.2},q_\nu\}\\ \nonumber &-\frac 16 \{\left[\partial_q\nabla,[\nabla_\nu,\nabla_\mu]\right],\partial_q^\nu\}-\frac1{24}[\nabla_. ,R^{\nu}_{\,\,\,..\mu}]\{\partial_q^{.3},q_\nu\}\\ \nonumber &-\frac{i}{16}\{\left[\partial_q\nabla,\left[\partial_q\nabla,[\nabla_\nu,\nabla_\mu]\right]\right] ,\partial_q^\nu \}-\frac{i}{80}[\nabla_.,[\nabla_.,R^{\nu}_{\,\,\,..\mu}]]\{\partial_q^{.4},q_\nu\} \\ \nonumber &+\frac{i}{48}\{R^\nu_{\,\,\,..\mu}\partial_q^{.3},[\nabla_.,\nabla_\nu]\}+\frac{7i}{720}R^{\nu}_{\,\,\,..\rho}R^{\rho}_{\,\,\,..\mu}\{\partial_q^{.4},q_\nu\}+\mathcal O(q^{-4})\\ \nonumber &\equiv i(q_\mu+\mathcal K_{\mu}) \end{align} where, given that $(\partial_q)^n$ is symmetric on its $n$ indices, for brevity we collapse them into `$.$', e.g. 
$R_{\alpha\beta}\partial_q^\alpha\partial_q^\beta=$ $R_{..}\partial_q^{.2}$, and we defined the `gravitational' covariant derivative $\mathcal K$. Obtaining this transformation is somewhat involved, but the process has built-in consistency checks. The term $T_{(i)}$ first enters $e^{iT}(iq+\nabla)e^{-iT}$ at order $i-1$ through $-[T_{(i)},q]$, and it is determined by cancellation of the open-derivative and non-covariant terms produced by lower order terms, e.g. $[T_{(i-1)},\nabla]$. The number of open-derivative and non-covariant terms to be canceled exceeds the number of possible structures in $[T_{(i)},q]$: the system of equations is over-constrained, which allows for checking a solution obtained with some minimal set of equations against the remaining conditions. The necessity of the anti-commutators $\{,\}$ follows from requiring a unitary transformation, as sketched in eq.~(\ref{nonUn}). It is useful to organize the expansion in inverse powers of $q$ as with $T$ via the definition: \begin{align}\label{DefCDvi} e^{iT}e^{-iqx}(\nabla)e^{iqx}e^{-iT}\equiv &i(q+\mathcal K) & \mathcal K&=\sum_n\mathcal K_{(n)} & \mathcal K_{(n)}(\lambda q)&= \lambda^{-n}\mathcal K_{(n)}(q)\\ e^{iT}e^{-iqx}Ue^{iqx}e^{-iT}\equiv &\,\mathcal U& \mathcal U&=\sum_n\mathcal U_{(n)} & \mathcal U_{(n)}(\lambda q)&= \lambda^{-n}\mathcal U_{(n)}(q)\label{DefcU} \end{align} The transformation on a background field function $\hat S(x)$ is, to this order: \begin{align} e^{iT}\hat S(x)e^{-iT}=&\hat S+i\partial_q[\nabla,\hat S]-\frac 12 \partial_q^{.2}[\nabla_.,[\nabla_.,\hat S]]-\frac i 6\partial_q^{.3} [\nabla_.,[\nabla_.,[\nabla_.,\hat S]]]+\mathcal O(q^{-4})\\=& \hat S+i\partial_q^.\hat S_{;.}-\frac 12 \partial_q^{.2}\hat S_{;..}-\frac i 6\partial_q^{.3}\hat S_{;...}+\mathcal O(q^{-4}) \end{align} with the `$.$' notation for $\partial_q$ of eq.~(\ref{FistCDE}). It is not always the case, however, that a term contains only derivatives or only background fields; at times both appear combined. 
Take for instance the following construction that appears in eq.~(\ref{ScUg}) \begin{align}\nonumber &e^{iT}\left((iq+\nabla)_{\rho} \phi_{;\sigma}+m_\phi^2g_{\rho\sigma}\phi\right)e^{-iT}=e^{iT}(iq+\nabla)_{\rho}e^{-iT}\, e^{iT}\phi_{;\sigma}e^{-iT}+m_\phi^2g_{\rho\sigma} e^{iT}\phi e^{-iT}\\ \nonumber &=\left(iq+i\mathcal K_{(1)} +\mathcal O (q^{-2})\right)_{\rho} \left(\phi_{;\sigma}+i\phi_{;\sigma\star}\partial_q^\star+\mathcal O (q^{-2})\right) +m_\phi^2g_{\rho\sigma}(\phi+i\phi_{;\star}\partial_q^\star+\mathcal O (q^{-2}))\\ &=iq_\rho \phi_{;\sigma}+m_\phi^2g_{\sigma\rho}\phi-q_\rho\phi_{;\sigma\star}\partial_q^\star+\mathcal O(q^{-1}) \end{align} This is the result for a piece of~(\ref{ScUg}), itself part of the operator $U$ in metric space. Lastly, let us address the linear term in derivatives in eq.~(\ref{OpDef}). One has, after the transformation, \begin{align} e^{iT}e^{-iqx}\mathcal Oe^{iqx}e^{-iT}= -(q+\mathcal K)^2 +i\{\mathcal V,q+\mathcal K\}+\mathcal U=(iq+i\mathcal K+\mathcal V)^2 +\mathcal U-\mathcal V^2 \end{align} As in conventional loop integrals a `shift' in our integration variable can remove the linear term, only now this `shift' is again a transformation of the operator (note that $\mathcal V$ is a matrix in whatever spin-space is under consideration). 
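The operator `shift' just mentioned is the analogue of completing the square in an ordinary Gaussian integral. A finite-dimensional sketch of that mechanism (illustrative only, with a generic positive matrix $A$ standing in for the kinetic operator and a vector $b$ for the mixed/linear term):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # positive-definite stand-in for the kinetic operator
b = rng.standard_normal(n)         # stand-in for the linear ("mixed") term
x = rng.standard_normal(n)         # arbitrary field configuration

quad = 0.5 * x @ A @ x + b @ x     # quadratic form with a linear term
y = x + np.linalg.solve(A, b)      # shifted integration variable
# after the shift only a diagonal quadratic form plus a field-independent piece remains
assert np.isclose(quad, 0.5 * y @ A @ y - 0.5 * b @ np.linalg.solve(A, b))
```

In the text the role of $A^{-1}b$ is played by operator-valued expressions such as $(\nabla^2+m_\phi^2)^{-1}(\dots)$, which is why the leftover pieces survive as the non-local mixed terms in $U_{\rm mx}$.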
The transformation $e^{i\mathcal V\partial_q}$ yields: \begin{align}\nonumber e^{i\mathcal V\partial_q}\left(iq+i\mathcal K+\mathcal V\right)e^{-i\mathcal V\partial_q}=&iq-[\mathcal V_\mu,q]\partial_q^\mu+\frac i2[\mathcal V_\mu\partial_q^\mu\,,\mathcal V]+i\mathcal K_{(1)}+\dots\\ =&iq+i\mathcal K_{(1)}-[[i\partial_q\nabla,V_\mu] ,q]\partial_q^\mu+\frac i2 [V_\mu\partial_q^\mu,V]+\mathcal O(q^{-2})\\ \nonumber =&iq+i\mathcal K_{(1)}+\frac i2 \partial_q^\nu (\nabla_{[\nu}V_{\mu]}+V_{[\nu}V_{\mu]}) -\frac i2\partial_q^\nu \nabla_{(\nu}V_{\mu)}+ \mathcal O(q^{-2}) \end{align} Higher orders will enter our computation as well, but as we shall see their contributions to the UV divergent action cancel and we need not make them explicit here. The final form of the operator is \begin{align} e^{i\mathcal V\partial_q}e^{iT}e^{-iqx}\mathcal O e^{iqx}e^{-iT}e^{-i\mathcal V\partial_q}\equiv -(q+\widetilde{\mathcal K})^2+\widetilde{\mathcal U} \end{align} with \begin{align} e^{i\mathcal V\partial_q}e^{iT}e^{-iqx}(\nabla +V) e^{iqx}e^{-iT}e^{-i\mathcal V\partial_q}&\equiv i(q+\widetilde{\mathcal K}) \\ e^{i\mathcal V\partial_q}e^{iT}e^{-iqx}(U-V^2)e^{iqx}e^{-iT}e^{-i\mathcal V\partial_q}&\equiv \widetilde{\mathcal U} \label{DeftcU} \end{align} and the action of the full transformation on a background field function is \begin{align}\nonumber e^{i\mathcal V\partial_q} e^{iT}\hat Se^{-iT}e^{-i\mathcal V\partial_q}=&\hat S+i\partial_q[\nabla,\hat S]-\frac{ \partial_q^2}{2}[\nabla,[\nabla,\hat S]]+\cdots\\&+i\partial_q[\mathcal V,\hat S+i\partial_q[\nabla,\hat S]+\dots]-\frac{\partial_q^2}2 [\mathcal V,[\mathcal V,\hat S+\dots]]+\dots\\ \nonumber =&\hat S+i\partial_q [\nabla+V,\hat S]-\frac{\partial_q^2}2 [\nabla,[\nabla,\hat S]]\\ &-\frac{\partial_q^2}2 [V,[V,\hat S]]-\partial_q^2[V,[\nabla,\hat S]]-\partial_q[\partial_q[\nabla,V],\hat S]+\mathcal O(q^{-3}) \end{align} To close this section the derived transformation is applied to the operators obtained from the second order action of eq.~(\ref{OrAct}) in 
sec.~\ref{Del2S} to second order in inverse loop momenta. {\bf Spin $< 2$}\\ The lower-spin ($<2$) cases in this work have simple operators: all of them have $V=0$, and $U=e^{-iqx}Ue^{iqx}$ has only the zeroth term in the large-momenta expansion, as follows \begin{align} \nonumber &{\rm Scalar}&& {\rm CFT\,\,scalar}& & {\rm Weyl\,\,Fermion}&& {\rm Gauge\,\,boson}\\[1pt] \hline \nonumber &&&&&&&\\[-10pt] U=\quad & m_\phi^2&&-\frac R6&&-\frac R 4\delta^{\dot\beta}_{\,\,\,\dot\alpha}&&-R_{\rho}^{\,\,\,\lambda} \end{align} with the ghost $c_\mu$ operator having $U=R_{\mu\nu}$ and the ghost $c$, $U=0$. The expansion of $\mathcal U$ in eq.~(\ref{DefcU}) is then \begin{align} \mathcal U_{(0)}&=U\,, & \mathcal U_{(1)}&=iU_{;.}\partial_q^.\,,& \mathcal U_{(2)}&=-\frac12 U_{;..}\partial_q^{.2}\,, \end{align} and $\widetilde{\mathcal U}=\mathcal U$. {\bf Graviton}\\ The case of the graviton has a linear term in $\nabla$, induced in our case by fermions; this is extracted from eq.~(\ref{S2psi}): \begin{align} \left(V^\mu\right)^{\alpha\beta}_{\rho\sigma}=-\frac{ \kappa^2} {16}\psi^\dagger\varepsilon^{\mu(\alpha\,\,\,\nu}_{\,\,\,\,\,\,\,\,(\rho}\sigma_{\nu}\psi g^{\beta)}_{\sigma)} \end{align} On the other hand, $U$ accommodates in this case the mixed graviton-matter terms produced by completing squares in the second order covariant action. 
These terms do depend on open derivatives $\nabla$, a fact that can be used to tell them apart through the definition \begin{align} &U=U_{\rm s}+U_{\rm mx}\,,& &e^{-iqx} U_{\rm s}e^{iqx}= U_{\rm s}\,, \end{align} where with the variation computed in sec.~\ref{Del2S} one has, for the single-species operator, \begin{align} \frac{\left[ U_{\rm s}\right]^{\alpha\beta}_{\rho\sigma}}{\kappa^2}=&\kappa^{-2}\left(R^{\alpha\,\,\,\,\,\beta}_{\,\,\,(\rho\,\,\,\sigma)}-g^{\alpha\beta} R_{\sigma\rho}+\Lambda g^{\alpha\beta}g_{\rho\sigma}\right)\\ \nonumber &+g^{\alpha\beta}\left(\phi_{,\rho}\phi_{,\sigma}-\frac{g_{\rho\sigma}m_\phi^2\phi^2}2\right)-\frac{1}{2}\phi^{,(\alpha}\phi_{,(\rho}\, g^{\beta)}_{\,\,\,\sigma)}-\frac{ig^{(\beta}_{(\sigma}\psi^\dagger(\sigma^{\alpha)}\psi_{;\rho)} +\sigma_{\rho)}\psi^{;\alpha)})}{16}+h.c.\\ \nonumber &+\frac{g_{\rho\sigma}}{4}\left(g^{\alpha\beta}\psi^\dagger i\sigma^\mu\psi_{;\mu}-\frac{\psi^\dagger i\sigma^{(\alpha} \psi^{;\beta)}}{2}\right) +\frac{g^{\alpha\beta}\psi^\dagger i \sigma_{(\rho} \psi_{;\sigma)}}{4}+h.c.\\ \nonumber &+g_{\rho\sigma}\left((FF)^{\alpha\beta}-\frac{g^{\alpha\beta}}{4}(FF)\right)+g^{\alpha\beta}(FF)_{\rho\sigma}-F^{\alpha}_{\,\,\,(\rho} F^{\,\,\,\,\beta}_{\sigma)}-\frac{(FF)^{(\alpha}_{(\rho} g^{\beta)}_{\sigma)}}{2}\,, \end{align} meanwhile the mixed term reads \begin{align} \frac{\left[U_{\rm mx}\right]^{\alpha\beta}_{\rho\sigma}}{\kappa^2}=& \left(\left(g^{\mu(\alpha}\phi^{;\beta)}-g^{\alpha\beta}\phi^{;\mu}\right)\nabla_\mu+m_\phi^2\phi g^{\alpha\beta}\right)\frac{1}{\nabla^2+m_\phi^2}\left(\nabla_{(\rho} \phi_{;\sigma)}+g_{\rho\sigma}m_\phi^2\phi \right)\\ \nonumber & +\frac12\left((\psi^{;\mu})^\dagger\sigma_\mu g^{\alpha\beta}-\frac{(\psi^{;(\alpha})^\dagger\sigma^{\beta)}}{2}\right)\frac{i}{\sigma\overleftrightarrow\nabla}\left(g_{\rho\sigma}\sigma^\nu\psi_{;\nu}+\sigma_{(\rho}\psi_{;\sigma)}\right)\\ \nonumber &-\left(g^{[\lambda(\alpha}F^{\beta) 
\mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)\nabla_\mu(\nabla^2-R)^{-1}_{\lambda\omega}\nabla_\nu\left(g^{[\omega}_{(\rho} F_{\sigma)}^{\,\,\nu]}-g_{\rho\sigma}F^{\omega\nu}\right)\,. \end{align} In the notation of sec.~\ref{Del2S}, the open derivatives in $U_{\rm mx}$ are $\nabla$'s, whereas for derivatives acting only on the background fields we have used the semicolon `$;$' notation. After the transformation $e^{iT}$ one has, to second order, for the single-species contribution \begin{align}\nonumber \frac{\left[ \mathcal{U}^{\rm s}_{(0)}-V^2\right]^{\alpha\beta}_{\rho\sigma}}{\kappa^2}=&\kappa^{-2}\left(R^{\alpha\,\,\,\,\,\beta}_{\,\,\,(\rho\,\,\,\sigma)}-g^{\alpha\beta} R_{\sigma\rho}+\Lambda g^{\alpha\beta}g_{\rho\sigma}\right)-\frac{\kappa^2}{16^2}\psi^\dagger\varepsilon^{\lambda(\alpha}_{\,\,\,\,(\mu}g^{\beta)}_{\nu)}\psi\, \psi^\dagger\varepsilon^{\omega(\mu}_{\,\,\,\,(\rho}g^{\nu)}_{\sigma)}\psi g_{\lambda\omega} \\ \nonumber &+g^{\alpha\beta}\left(\phi_{,\rho}\phi_{,\sigma}-\frac{g_{\rho\sigma}m_\phi^2\phi^2}2\right)-\frac{1}{2}\phi^{,(\alpha}\phi_{,(\rho}\, g^{\beta)}_{\,\,\,\sigma)}-\frac{ig^{(\beta}_{(\sigma}\psi^\dagger(\sigma^{\alpha)}\psi_{;\rho)} +\sigma_{\rho)}\psi^{;\alpha)})}{16}+h.c.\\ \label{ctU0} &+\frac{g_{\rho\sigma}}{4}\left(g^{\alpha\beta}\psi^\dagger i\sigma^\mu\psi_{;\mu}-\frac{\psi^\dagger i\sigma^{(\alpha} \psi^{;\beta)}}{2}\right) +\frac{g^{\alpha\beta}\psi^\dagger i \sigma_{(\rho} \psi_{;\sigma)}}{4}+h.c.\\ \nonumber &+g_{\rho\sigma}\left((FF)^{\alpha\beta}-\frac{g^{\alpha\beta}}{4}(FF)\right)+g^{\alpha\beta}(FF)_{\rho\sigma}-F^{\alpha}_{\,\,\,(\rho} F^{\,\,\,\,\beta}_{\sigma)}-\frac{(FF)^{(\alpha}_{(\rho} g^{\beta)}_{\sigma)}}{2}\,, \end{align} with higher orders being total derivatives $\mathcal U_{(1)}^{\rm s}=i[\partial_q\nabla,\mathcal U_{(0)}^{\rm s}]$, $\mathcal U_{(2)}^{\rm s}=-[\partial_q\nabla,[\partial_q\nabla,\mathcal U_{(0)}^{\rm s}]]/2$. 
The mixed part of $\mathcal U$ has a decomposition as \begin{align}\nonumber \frac{\left[\mathcal U_{(0)}^{\rm mx}\right]^{\alpha\beta}_{\rho\sigma}}{\kappa^2}&=\left(g^{\mu(\alpha}\phi^{,\beta)}-g^{\alpha\beta}\phi^{,\mu}\right)\frac{q_\mu q_{\nu}}{q^2}g^\nu_{(\rho}\phi_{,\sigma)}\\ &-\left(g^{[\lambda(\alpha}F^{\beta)\mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)\frac{q_\mu q^\nu}{q^2}\left(g_{[\lambda(\rho} F_{\sigma)\nu]}-g_{\rho\sigma}F_{\lambda\nu}\right)\,,\label{mxcU0} \end{align} at zeroth order, while at first order \begin{align}\label{mxcU1} \frac{\left[\mathcal {U}^{\rm mx}_{(1)}\right]^{\alpha\beta}_{\rho\sigma} }{\kappa^2}=& \left((\psi_{;\mu})^\dagger\sigma^\mu g^{\alpha\beta}-\frac{(\psi^{;(\alpha})^\dagger\sigma^{\beta)}}{2}\right)\frac{1}{\sigma\cdot q}(g_{\rho\sigma}\sigma^\nu\psi_{;\nu}+\sigma_{(\rho}\psi_{;\sigma)})\\ \nonumber &-\frac{im_\phi^2\phi}{q^2}\left(g^{\alpha\beta} q_{(\rho}\phi_{;\sigma)}+(g^{\mu(\alpha} \phi^{;\beta)}-g^{\alpha\beta}\phi^{;\mu})q_\mu g_{\rho\sigma} \right)\\\nonumber &+i\left(g^{\mu(\alpha}\phi^{,\beta)}-g^{\alpha\beta}\phi^{,\mu}\right)_{;\nu}\left[\partial_q^\nu,\frac{q_\mu q_{(\rho}}{q^2}\right]\phi_{;\sigma)}\\ \nonumber &-i\left(g^{[\lambda(\alpha}F^{\beta) \mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)_{;\omega}\left[\partial_q^\omega ,\frac{q_\mu q^\nu}{q^2}\right]\left(g_{[\lambda(\rho} F_{\sigma)\nu]}-g_{\rho\sigma}F_{\lambda\nu}\right)\\ \nonumber &+i\frac{q_\mu q_{\nu}}{q^2}\left(\left(g^{\mu(\alpha}\phi^{,\beta)}-g^{\alpha\beta}\phi^{,\mu}\right)g^\nu_{(\rho}\phi_{,\sigma)}\right)_{;\omega}\partial_q^\omega\\ \nonumber &-i\frac{q_\mu q^\nu}{q^2}\left(\left(g^{[\lambda(\alpha}F^{\beta) \mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)\left(g_{[\lambda(\rho} F_{\sigma)\nu]}-g_{\rho\sigma}F_{\lambda\nu}\right)\right)_{;\omega}\partial_q^\omega\,, \end{align} and at second order \begin{align}\label{mxcU2} \frac{\left[{\mathcal U}_{(2)}^{\rm 
mx}\right]^{\alpha\beta}_{\rho\sigma}}{\kappa^2}=&\left(g^{\mu(\alpha}\phi^{,\beta)}-g^{\alpha\beta}\phi^{,\mu}\right)\left(\mathcal K^{(1)}_\mu\frac{ q_{\nu}}{q^2}+ \frac{q_\mu}{q^2}\mathcal K^{(1)}_{\nu}-\frac{q_\mu}{q^2}\left\{q,\mathcal K^{(1)}\right\}\frac{q_{\nu}}{q^2}\right)g^\nu_{(\rho}\phi_{,\sigma)}\\ \nonumber &-\left(g^{[\lambda(\alpha}F^{\beta) \mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)\left(\mathcal K^{(1)}_\mu\frac{ q^{\nu}}{q^2}+ \frac{q_\mu}{q^2}\mathcal K_{(1)}^{\nu}-\frac{q_\mu}{q^2}\left\{q,\mathcal K_{(1)}\right\}\frac{q^{\nu}}{q^2}\right) \left(g_{[\lambda(\rho} F_{\sigma)\nu]}-g_{\rho\sigma}F_{\lambda\nu}\right)\\ \nonumber &+\left(g^{[\lambda(\alpha} F^{\beta)\mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)\frac{q_\mu R_{\lambda}^{\,\,\omega}q^\nu}{q^4}\left(g_{[\omega(\rho} F_{\sigma)\nu]}-g_{\rho\sigma}F_{\omega\nu}\right)\\ \nonumber &-\frac12\left(g^{\mu(\alpha}\phi^{,\beta)}-g^{\alpha\beta}\phi^{,\mu}\right)_{;..}\left[\partial_q^{.2},\frac{q_{\mu}q_{\nu}}{q^2}\right]g^\nu_{(\rho}\phi_{,\sigma)}\\ \nonumber &-\left(g^{\mu(\alpha}\phi^{,\beta)}-g^{\alpha\beta}\phi^{,\mu}\right)_{;.}\left[\partial_q^.,\frac{q_{\mu}q_{\nu}}{q^2}\right]g^\nu_{(\rho}\phi_{;\sigma)\omega}\partial_q^\omega-\frac{m_\phi^4\phi^2 g^{\alpha\beta}g_{\rho\sigma}}{q^2}\\\nonumber &+m_\phi^2\left((q^{(\alpha}\phi^{\beta)}-g^{\alpha\beta}(q\phi^;))\frac{1}{q^2}g_{\rho\sigma}\phi_{;\omega}\partial_q^\omega +g^{\alpha\beta}\phi_{;.} \partial_q^. 
\frac{1}{q^2} q_{(\rho}\phi_{;\sigma)}+(q^{(\alpha}\phi^{\beta)}-g^{\alpha\beta}(q\phi^{;}))\frac{1}{q^4}(q_{(\rho}\phi_{;\sigma)})\right)\\ \nonumber &+\frac12\left(g^{[\lambda(\alpha}F^{\beta)\mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)_{;..}\left[\partial_q^{.2},\frac{q_{\mu}q^{\nu}}{q^2}\right] \left(g_{[\lambda(\rho} F_{\sigma)\nu]}-g_{\rho\sigma}F_{\lambda\nu}\right)\\ \nonumber &+ \left(g^{[\lambda(\alpha}F^{\beta)\mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)_{;.}\left[\partial_q^.,\frac{q_{\mu}q^{\nu}}{q^2}\right] \left(g_{[\lambda(\rho} F_{\sigma)\nu]}-g_{\rho\sigma}F_{\lambda\nu}\right)_{;\omega}\partial_q^\omega\\\nonumber &+i\left[\partial_q^\mu,\frac{q^\nu}{q^2}\right]\left((\psi_{;\omega})^\dagger\sigma^\omega g^{\alpha\beta}-\frac{(\psi^{;(\alpha})^\dagger\sigma^{\beta)}}{2}\right)_{;\mu}\sigma_\nu(g_{\rho\sigma}\sigma^\lambda\psi_{;\lambda}+\sigma_{(\rho}\psi_{;\sigma)})+({\rm\,total\,derivative})\,. \end{align} The last transformation, $e^{i\mathcal V\partial_q}$, together with the definition in eq.~(\ref{DeftcU}) determines $\widetilde{\mathcal U}_{(0)}=\mathcal U_{(0)}-V^2$ where for convenience this combination has been given in eq.~(\ref{ctU0}). 
In particular, since $V$ is itself $q$-independent, we allocate $V^2$ to $\widetilde{\mathcal U}^{\rm s}$; being explicit, \begin{align} \widetilde{\mathcal U}_{(0)}^{\rm s}=&\mathcal U_{(0)}^{\rm s} -V^2\,, & \widetilde{\mathcal U}_{(1)}^{\rm s}=&i[\partial_q\nabla,\widetilde{\mathcal U}_{(0)}^{\rm s}]+i[\partial_qV,\widetilde{\mathcal U}_{(0)}^{\rm s}] \,, \end{align} and at second order \begin{align}\label{Ut2eq} \widetilde{\mathcal U}_{(2)}^{\rm s}=&i[V\partial_q,i[\partial_q\nabla,\widetilde{\mathcal U}_{(0)}^{\rm s}]] -\frac12[\partial_q\nabla,[\partial_q\nabla,\widetilde{\mathcal U}_{(0)}^{\rm s}]]-\frac12[V\partial_q,[V\partial_q,\widetilde{\mathcal U}_{(0)}^{\rm s}]]-[[\nabla,V]\partial_q^2,\widetilde{\mathcal U}_{(0)}^{\rm s}]\,. \end{align} Meanwhile, for the mixed term we have \begin{align} \widetilde{\mathcal U}_{(0)}^{\rm mx}&=\mathcal U^{\rm mx}_{(0)}\,, &\widetilde{\mathcal U}_{(1)}^{\rm mx}&=\mathcal U_{(1)}^{\rm mx}+i[V\partial_q,\mathcal U^{\rm mx}_{(0)}]\,, \end{align} and at second order \begin{align}\label{Ut2mxeq} \widetilde{\mathcal U}_{(2)}^{\rm mx}=&\mathcal U_{(2)}^{\rm mx}+i[V\partial_q,\mathcal U_{(1)}^{\rm mx}] -\frac12[V\partial_q,[V\partial_q,\mathcal U_{(0)}^{\rm mx}]]-[[\nabla,V]\partial_q^2,\widetilde{\mathcal U}_{(0)}^{\rm mx}]\,. \end{align} With these transformed operators one is in a position to evaluate the one-loop action. \section{Evaluation of the operator trace\label{TrLogEv}} The evaluation has now been cast into the trace of the log of the transformed operator \begin{align}\label{OpFin} e^{i\mathcal V\partial_q}e^{iT}e^{-iqx}\mathcal O e^{iqx}e^{-iT}e^{-i\mathcal V\partial_q} =-(q+\widetilde{\mathcal K})^2+\widetilde{\mathcal U}\,, \end{align} where the transformation $e^{iqx}e^{-iT}$ has turned open derivatives into functions of the commutator $[\nabla,\nabla]$ and $e^{i\mathcal V\partial_q}$ has removed a possible linear term in $\nabla$.
However, just as $\nabla$ did not commute with $\partial_q$ and $q$, neither does its commutator $[\nabla,\nabla]$. To illustrate the relevance of this fact, let us rearrange the first term in $\mathcal K$ as \begin{align} \mathcal K_{(1)}=& \frac 14 \left\{\partial_q^\nu,[\nabla_\nu,\nabla_\mu] \right\}+\frac1{12}R^\nu_{\,\,..\mu}\{q_\nu,\partial_q^{.2}\}\\ =&\frac12 \partial_q^\nu [\nabla_\nu,\nabla_\mu]+\frac 14 \left[[\nabla_\nu,\nabla_\mu],\partial_q^\nu \right]+\frac16 R^\nu_{\,\,..\mu}q_\nu\partial_q^{.2}+\frac1{12}R^\nu_{\,\,..\mu}[\partial_q^{.2},q_\nu]\\ =&\frac12 \partial_q^\nu [\nabla_\nu,\nabla_\mu]+\frac13\partial_q^\nu R_{\nu\mu}+\frac16 R^\nu_{\,\,..\mu}q_\nu\partial_q^{.2}\,. \label{CDvAllRH} \end{align} In this way the commutator acts solely on whatever lies to the right of $\mathcal K_{(1)}$. The case of $\widetilde{\mathcal K}_{(1)}$ is not qualitatively different, but for completeness it reads \begin{align} &\widetilde{\mathcal K}_{(1)}=\partial_q^\nu\left(\frac12([\nabla_\nu,\nabla_\mu]+\nabla_{[\nu} V_{\mu]}+V_{[\nu}V_{\mu]})-\frac12 \nabla_{(\nu} V_{\mu)}+\frac13 R_{\nu\mu} \right)+\frac{R^\rho_{\,\,\,..\mu}}{6}q_\rho\partial_q^{.2}\,. \end{align} When the commutator acts on the field we are integrating over, i.e. when $[\nabla_\nu,\nabla_\mu]$ sits rightmost in the operator of eq.~(\ref{OpFin}), one has, depending on the spin of the field, \begin{align} [\nabla_\alpha,\nabla_\beta] \phi &=0 &[\nabla _{\alpha},\nabla_{\beta}]\psi&=\frac{\sigma^{[a}\bar\sigma^{b]}}8e_{a,\rho}e^\lambda_{b} R^{\rho}_{\,\,\lambda\alpha\beta}\psi\\ [\nabla_\alpha,\nabla_\beta] A^\mu &=R^\mu_{\,\,\rho \alpha\beta}A^\rho & [\nabla_\alpha,\nabla_\beta] T^{\mu\nu}&= R^{\mu}_{\,\,\rho\alpha\beta}T^{\rho\nu}+R^{\nu}_{\,\,\rho\alpha\beta}T^{\mu\rho}\,, \end{align} so it is useful to define \begin{align} [\nabla_\alpha,\nabla_\beta]({\rm Field\,}\Phi)\equiv\mathscr{R}_{\alpha\beta}({\rm Field\,}\Phi)\,.
\end{align} In a way analogous to the rearrangement of creation and annihilation operators, one can put all terms in the expansion in the form of eq.~(\ref{CDvAllRH}), i.e. with the commutator $[\nabla,\nabla]$ in its rightmost position and all $\partial_q$'s to the right of the $q$'s; e.g. the first-order term in $\{q,\mathcal K\}$ takes the form \begin{align} \mathcal O\supset \left\{q,\mathcal K_{(1)}\right\}=&-\frac R 6+q^\mu\partial_q^\nu\left(\mathscr R_{\nu\mu}+\frac 13 R_{\nu\mu} \right)+\frac 13 R^{\star\,\,\,\,\star}_{\,\,\,..}q_\star^2 \partial_q^{.2}\,, \end{align} where, as for $\partial_q$, the notation is $R^{\star\,\,\,\,\star}_{\,\,\,..}q_\star^2=R^{\mu\,\,\,\,\nu}_{\,\,\,..}q_\mu q_\nu$, whereas for the tilded case \begin{align} \mathcal O\supset \left\{q,\widetilde{\mathcal K}_{(1)}\right\}=&-\frac R 6-\nabla V+q^\mu\partial_q^\nu\left(\tilde{\mathscr R}_{\nu\mu}+\frac 13 R_{\nu\mu}-\nabla_{(\nu} V_{\mu)} \right)+\frac 13 R^{\star\,\,\,\,\star}_{\,\,\,..}q_\star^2 \partial_q^{.2}\,, \label{AllLK} \end{align} where we have defined \begin{align} \tilde{\mathscr {R}}_{\mu\nu}=\mathscr R_{\mu\nu}+\nabla_{[\mu} V_{\nu]}+ V_{[\mu}V_{\nu]}\,. \end{align} The fact that this structure arranges itself as $[\nabla+V,\nabla+V]$ suggests that a combined transformation in place of $e^{iT}e^{i\partial_q \mathcal V}$ might simplify the algebra. Nevertheless, this option is not pursued here since, in contrast to the universal $\nabla$, the action of $V$ might be confined to a single operator. In the form of eq.~(\ref{AllLK}) hermiticity is not an obvious property, yet this form is better suited for computations since all commutators are `evaluated', as opposed to $\partial_q$, for which it is still left to specify what it acts on.
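The spin-dependent commutator relations above can be cross-checked symbolically on an explicit background; the following sketch verifies $[\nabla_\alpha,\nabla_\beta]A^\mu=R^\mu_{\,\,\rho\alpha\beta}A^\rho$ on the round two-sphere (an illustrative choice of metric, not one used in the text; both sides are built from the same Christoffel symbols, so the check is convention-independent):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # round two-sphere metric
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
             - sp.diff(g[b, c], x[d]))/2 for d in range(n))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor with the convention [nabla_c, nabla_d] A^a = R^a_{b c d} A^b
def riem(a, b, c, d):
    r = sp.diff(Gam[a][d][b], x[c]) - sp.diff(Gam[a][c][b], x[d])
    r += sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b] for e in range(n))
    return sp.simplify(r)

# a generic vector field A^a
f1, f2 = sp.Function('f1')(th, ph), sp.Function('f2')(th, ph)
A = [f1, f2]

def cov1(c, a):      # nabla_a A^c
    return sp.diff(A[c], x[a]) + sum(Gam[c][a][b]*A[b] for b in range(n))

def cov2(c, b, a):   # nabla_b nabla_a A^c, treating cov1 as a (1,1) tensor
    t = sp.diff(cov1(c, a), x[b])
    t += sum(Gam[c][b][d]*cov1(d, a) - Gam[d][b][a]*cov1(c, d) for d in range(n))
    return t

# [nabla_a, nabla_b] A^c - R^c_{d a b} A^d should vanish for all index values
ok = all(sp.simplify(cov2(c, a, b) - cov2(c, b, a)
         - sum(riem(c, d, a, b)*A[d] for d in range(n))) == 0
         for a in range(n) for b in range(n) for c in range(n))
```

The same machinery extends line by line to the fermion and two-index-tensor cases quoted above.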
For this purpose let us rewrite the one-loop action of eq.~(\ref{Origin}), introducing a mass parameter $m^2$ (not to be confused with the scalar mass $m_\phi^2$): \begin{align} \nonumber \frac{i}{2}\mbox{tr}\log(\mathcal O+m^2)=&\frac i2 \int \frac{d^dxd^d q}{(2\pi)^d}\int dm^2\mbox{tr}[(\mathcal O+m^2)^{-1}] \\ =& \frac{i}{2}\int \frac{d^dxd^d q}{(2\pi)^d}\int dm^2\mbox{tr}[(-q^2+m^2-\{\widetilde{\mathcal K},q\}-\widetilde{\mathcal K}^2+ \widetilde{\mathcal U})^{-1}]\\ \nonumber =&-\frac{i}{2}\int \frac{d^dxd^d q}{(2\pi)^d}\int dm^2\sum_n\mbox{tr}\left(\left[\frac{1}{q^2-m^2} (\widetilde{\mathcal U}-\{q,\widetilde{\mathcal K}\}-\widetilde{\mathcal K}^2)\right]^n \frac{1}{q^2-m^2}\right) \end{align} where $m^2$ will, at the end of the calculation, be taken to $0$; in general, however, it is useful to keep it as an IR regulator, since indeed not all terms converge for $m^2\to0$, and the order of integration shall be kept as above. Once all terms in $\widetilde{\mathcal K}, \widetilde{\mathcal U}$ are in the form of eq.~(\ref{AllLK}), only $\partial_q$ is left to act on propagators and on other terms in the expansion to its right. After allowing all $\partial_q$'s to make their way to the right, the result will be momenta $q$ contracted with Lorentz tensors made out of the background fields. The momentum dependence in $q$ after loop integration will yield tensors built out of the metric (recall $q$ is a covariant object $q_\mu$, $q^2=q_\mu q_\nu g^{\mu\nu}$).
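Both manipulations used above, trading $\mathrm{tr}\log$ for an $m^2$ integral of the resolvent and expanding the inverse in a geometric series, can be illustrated on finite-dimensional matrices (a toy stand-in, not the actual operator $\mathcal O$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
K = np.diag(rng.uniform(1.0, 2.0, n))      # positive "free" part, stand-in for -q^2
W = 1e-2 * rng.standard_normal((n, n))     # small perturbation, stand-in for U - {q,K} - K^2

# d/dm^2 tr log(K - W + m^2) = tr (K - W + m^2)^{-1}  (finite-difference check)
m2, eps = 0.5, 1e-6
trlog = lambda s: np.linalg.slogdet(K - W + s*np.eye(n))[1]
deriv_fd = (trlog(m2 + eps) - trlog(m2 - eps)) / (2*eps)
deriv_tr = np.trace(np.linalg.inv(K - W + m2*np.eye(n)))
err_deriv = abs(deriv_fd - deriv_tr)

# geometric series: (D - W)^{-1} = sum_k (D^{-1} W)^k D^{-1}, with D = K + m^2
D_inv = np.linalg.inv(K + m2*np.eye(n))
series = sum(np.linalg.matrix_power(D_inv @ W, k) @ D_inv for k in range(8))
err_series = np.max(np.abs(series - np.linalg.inv(K + m2*np.eye(n) - W)))
```

The second identity is the one that turns the single $\mathrm{tr}\log$ into the sum over $n$ insertions of $\widetilde{\mathcal U}-\{q,\widetilde{\mathcal K}\}-\widetilde{\mathcal K}^2$ between propagators.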
With our expansion of $\widetilde{\mathcal K},\widetilde{\mathcal U}$ in powers of loop momenta we can organize the effective action; the first order is ${\color{blue}\mathcal O(q^{d-2})}$: \begin{align} &\mbox{tr}\log(\mathcal O+m^2)\\\nonumber =&\int \frac{d^dxd^d q dm^2}{(2\pi)^d} \left(\frac{1}{q^2-m^2}\left(\widetilde{\mathcal U}_{(0)}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\frac{1}{q^2-m^2}\right)+\mathcal{O}(q^{d-4})\,. \end{align} Taking for demonstration a scalar field, and with the result in eq.~(\ref{AllLK}), \begin{align} &\int \frac{d^dxd^dq}{(2\pi)^d} \int dm^2\frac{1}{q^2-m^2}\left(U_{(0)}-\{q,\mathcal K_{(1)}\}\right)\frac{1}{q^2-m^2}\\ \nonumber =&\int \frac{d^dxd^dq}{(2\pi)^d} \int dm^2\frac{1}{q^2-m^2}\left(\frac 1 6 R -\left(\mathscr R_{.*}+\frac 13 R_{.*} \right)q^*\partial_q-\frac 13 R^{\star\,\,\,\star}_{\,\,\,..} q_\star^2\partial_q^{.2}\right)\frac{1}{q^2-m^2}\\ \nonumber=&\int d^dx\frac R 6\int \frac{d^dq}{(2\pi)^d(q^2-m^2)} \end{align} which in dimensional regularization is non-vanishing (when $m^2\to0$) only for $d= 2$ and contributes for scalars the well-known $N_S/(24\pi)$ to Weyl's anomaly (the `$-26/24\pi$' contribution of the bosonic string we cannot reproduce, since Weyl scaling was not taken as a local symmetry). The focus of this paper is however $d=4$ and the UV divergences contained in the next non-vanishing order ${\color{blue}\mathcal O(q^{d-4})}$: \begin{align} \label{allofthem} & \mbox{tr}\log(\mathcal O+m^2)=\mathcal O(q^{d-2})\\ \nonumber +&\boxed{\int \frac{d^dxd^dq}{(2\pi)^d} \int dm^2\left( \Delta\left(\widetilde{\mathcal U}_{(2)}-\{q,\widetilde{\mathcal K}_{(3)}\}-\widetilde{\mathcal K}_{(1)}^2\right)+\left(\Delta\left(\widetilde{\mathcal U}_{(0)}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\right)^2\right) \Delta}+\mathcal{O}(q^{d-6}) \end{align} where for brevity we introduced $\Delta=(q^2-m^2)^{-1}$; this is the integral at the core of our computation.
This expression, save for the term $\widetilde{\mathcal K}_{(3)}$, resembles the static flat-background case~\cite{Henning:2014wua}, taking, loosely speaking, $\mathcal K$ as our (field strength)$\times\partial_q$. Given the main novel result of this work, i.e. the covariant derivative in eq.~(\ref{FistCDE}), eq.~(\ref{allofthem}) can be evaluated in a straightforward way, as sketched above for the $\mathcal O(q^{d-2})$ term, and in particular the UV terms can be computed with the regularization of choice. The amount of algebra nonetheless makes it more digestible to split the computation into sections and to introduce some minimal notation. Here dimensional regularization will be employed, and the following definitions for an integral measure and a propagator, \begin{align}\label{DefmqInt} &\int \!d\bar Q\equiv \underset{d\to4}{\rm lim}\frac{8\pi^2(4-d)}{i\sqrt{|g|}}\int \frac{d^dq dm^2}{(2\pi)^d}\,, &&\Delta\equiv\frac{1}{q^2-m^2}\,, \end{align} cast the UV contributions that are the subject of this work as \begin{align} \mathscr L_{\rm UV}&=\frac{1}{(4\pi)^2(4-d)}\int \!d\bar Q \sum_n\mbox{tr}\left(\left(\Delta (\widetilde{\mathcal U}-\{q,\widetilde{\mathcal K}\}-\widetilde{\mathcal K}^2)\right)^n \Delta\right)\\ &\equiv \frac{1}{(4\pi)^2(4-d)} \int \!d\bar Q\left(\mathcal I_{\rm s}+\mathcal I_{\rm mx}\right)\,,\label{FinAct} \end{align} where \begin{align} \label{SingleSpecies} \mathcal I_{\rm s}=&\left( \Delta\left(\widetilde{\mathcal U}_{(2)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(3)}\}-\widetilde{\mathcal K}_{(1)}^2\right)+\left(\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\right)^2\right) \Delta\,,\\ \label{Mixed} \mathcal I_{\rm mx}=& \left(\Delta \widetilde{\mathcal U}_{(2)}^{\rm mx}+\left(\Delta \widetilde{\mathcal U}^{\rm mx}_{(0)}\right)^2+\left\{\Delta \widetilde{\mathcal U}^{\rm mx}_{(0)}\,,\Delta\left( \widetilde{\mathcal U}_{(0)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\right\}\right)\Delta\,, \end{align} encode the contributions
from single-spin species running in the loop and from mixed contributions, respectively. The following sections are concerned with the effective action computation for each of these two cases: single-species loops, $\mathcal I_{\rm s}$, in sec.~\ref{sec:GRren}, and mixed-species loops, $\mathcal I_{\rm mx}$, in sec.~\ref{sec:MatRen}. \subsection{Single species loops\label{sec:GRren}} \begin{figure}[h]\centering \begin{tikzpicture} \draw [style={decorate, decoration={snake}}] (0,0) node [anchor=east] {$R$} -- (1,0); \draw [style={decorate, decoration={snake}}] (.02,-.1) -- (1,-.1); \draw [style={decorate, decoration={snake}}] (2,0) -- (3,0); \draw [style={decorate, decoration={snake}}] (2.02,-.1) -- (3,-.1) node [anchor=west] {$R$}; \draw[thick] (1.5,0) circle (15pt); \end{tikzpicture}\quad \begin{tikzpicture} \draw [style={decorate, decoration={snake}}] (-1,0) node [anchor=east] {$R$} -- (0,0); \draw [style={decorate, decoration={snake}}] (-.98,-.1) -- (0,-.1); \draw[thick] (.5,0) circle (15pt); \filldraw[gray] (1.03,-0) circle (2pt) node [anchor=west, black] {$\Lambda\,,m_\phi^2$}; \end{tikzpicture}\quad \begin{tikzpicture} \filldraw[gray] (2.03,-0) circle (2pt) node [anchor=west, black] {$\Lambda\,,m_\phi^2$}; \draw[thick] (1.5,0) circle (15pt); \filldraw[gray] (.97,0) circle (2pt) node [anchor=east, black] {$\Lambda\,,m_\phi^2$}; \end{tikzpicture} \caption{Schematic of the UV divergent curvature terms at one loop\label{fig1}} \end{figure} The integration of a given spin field results in the UV divergent terms of eq.~(\ref{FinAct}) with \begin{align} \mathcal I_{\rm s}=\Delta\left(\widetilde{\mathcal U}_{(2)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(3)}\}-\widetilde{\mathcal K}_{(1)}^2\right)\Delta+\left(\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\right)^2\Delta\,. \label{MstSP} \end{align} This subsection carries out the loop integrals and yields the one-loop corrections.
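As a warm-up for the integrals below, note that after Wick rotation the scalar $\Delta$-integrals reduce to one-dimensional radial integrals over $|q_E|$; e.g. the convergent cubed propagator gives the standard textbook result $\int d^4q_E/(2\pi)^4\,(q_E^2+m^2)^{-3}=1/(32\pi^2m^2)$ (a generic cross-check, not a formula quoted in the text), verified here symbolically:

```python
import sympy as sp

q, m = sp.symbols('q m', positive=True)

# d^4 q_E = (surface of unit 3-sphere) * q^3 dq, with S^3 area = 2 pi^2
angular = 2*sp.pi**2
integrand = angular * q**3 / (q**2 + m**2)**3 / (2*sp.pi)**4
result = sp.integrate(integrand, (q, 0, sp.oo))

# compare with the known closed form 1/(32 pi^2 m^2)
closed = 1/(32*sp.pi**2*m**2)
diff = sp.simplify(result - closed)
```

The higher-rank integrals of the following pages differ only by factors of $q_\mu$ in the numerator, handled by symmetric tensor reduction.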
Let us start with \begin{align} \int \!d\bar Q\, \Delta\,{\rm tr}( \widetilde{\mathcal U}_{(2)}^{\rm s})\Delta\,. \end{align} Here total derivatives are neglected, and hence the $-[\nabla,[\nabla, \widetilde{\mathcal U}^{\rm s}_{(0)}]]\partial_q^2/2$ (with ${\widetilde{\mathcal U}}^{\rm s}_{(0)}=\mathcal U^{\rm s}_{(0)}-V^2$) piece in $\widetilde{\mathcal U}_{(2)}^{\rm s}$ as per eq.~(\ref{Ut2eq}) can be ignored, whereas for the remainder of $\widetilde{\mathcal U}_{(2)}^{\rm s}$ \begin{align}\nonumber {\widetilde{\mathcal U}}^{\rm s}_{(2)}+[\nabla,[\nabla, \widetilde{\mathcal U}^{\rm s}_{(0)}]]\frac{\partial_q^2}{2}&= i[V\partial_q, i[\nabla,{\widetilde{\mathcal U}}^{\rm s}_{(0)}]\partial_q]-\frac12[V\partial_q,[V\partial_q,{\widetilde{\mathcal U}}^{\rm s}_{(0)}]]-[[\nabla,V]\partial_q^2,{\widetilde{\mathcal U}}^{\rm s}_{(0)}]\\ &= i[V, i[\nabla,{\widetilde{\mathcal U}}^{\rm s}_{(0)}]]\partial_q^2-\frac12[V,[V,{\widetilde{\mathcal U}}^{\rm s}_{(0)}]]\partial_q^2-[[\nabla,V],{\widetilde{\mathcal U}}^{\rm s}_{(0)}]\partial_q^2\,, \end{align} where we used that $[\partial_q,\widetilde{\mathcal U}^{\rm s}_{(0)}]=[\partial_q,V]=0$ in the second line. This form makes clear that these are commutators of matrices, which yield zero when traced over. For the mixed pieces, on the other hand, $[\partial_q,\mathcal U^{\rm mx}_{(i)}]\neq 0$ and these terms do contribute, as made explicit in sec.~\ref{sec:MatRen}.
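The vanishing of traced commutators invoked here is just the cyclicity of the trace, ${\rm tr}[A,B]=0$ for any matrices; a one-line numerical illustration with random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((6, 6)), rng.standard_normal((6, 6))
tr_comm = np.trace(A @ B - B @ A)   # tr[A,B] = 0 by cyclicity of the trace
```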
On the other hand, tilding $\mathcal K_{(3)}$ produces terms which either vanish when traced over or are of the form \begin{align} \int \!d\bar Q \Delta\{ q, (\widetilde{\mathcal K}_{(3)}-\mathcal K_{(3)})\}\Delta\supset \int \!d\bar Q \Delta \{q,i[V\partial_q,\mathcal K_{(2)}]\}\Delta \end{align} where \begin{align} [V\partial_q,\mathcal K^{(2)}_\mu]=-\frac 16 \{\left[V\partial_q,\left[\partial_q\nabla,[\nabla_\nu,\nabla_\mu]\right]\right],\partial_q^\nu\}-\frac1{24}R^{\nu}_{\,\,\,..\mu;.}\{\partial_q^{.3},V_\rho\left[\partial_q^\rho,q_\nu\right]\} \end{align} which, regardless of the matrix structure contained, involve the vanishing integral \begin{align} \int \!d\bar Q \Delta \{q,\partial_q^3\}\Delta=0\,, \end{align} and so one can drop the tilde and consider $\mathcal K_{(3)}$ only. Given these cancellations and total derivative terms, the relevant part of eq.~(\ref{MstSP}) is: \begin{align}\nonumber &\Delta\left(-\{q,\mathcal K_{(3)}\}-\widetilde{\mathcal K}_{(1)}^2\right)\Delta+\left(\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\right)^2\Delta\\ \label{CDv3term} = &-\Delta\left\{q^\mu,\frac{1}{48}\{R^{\nu}_{\,\,..\mu}\partial_q^{.3},[\nabla_.,\nabla_\nu]\}+\frac{7}{720}\{R^\nu_{\,\,..\rho}R^{\rho}_{\,\,..\mu}\partial_q^{.4},q_\nu\}\right\}\Delta\\ \nonumber &-\Delta\left(\partial_q^\nu\left(\frac12([\nabla_\nu,\nabla_\mu]+\nabla_{[\nu} V_{\mu]}+V_{[\nu}V_{\mu]})-\frac12 \nabla_{(\nu} V_{\mu)}+\frac13 R_{\nu\mu} \right)+\frac{R^\nu_{\,\,..\mu}}{6}q_\nu\partial_q^{.2} \right)^2\Delta\\ \nonumber &+\left(\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}+\frac 1 6 R +\nabla V-\left(\widetilde{\mathscr R}_{\nu\mu}+\frac 13 R_{\nu\mu}-\nabla_{(\nu} V_{\mu)} \right)q^\mu\partial_q^\nu-\frac {R^{*\,\,\,*}_{\,\,..}}3 q^2_*\partial_q^{.2}\right)\right)^2\Delta\,.
\end{align} Here the detailed loop integral computation is not made explicit for all terms; rather, it is carried out for the first term of eq.~(\ref{CDv3term}), since this is the novel term that differs from the flat-metric case. First, via the relation \begin{align} \{A,\{B,C\}\}=\{\{A,B\},C\}+[B,[C,A]]=2\{A,B\}C+[C,\{A,B\}]+[B,[C,A]] \end{align} one has, making all $q$ dependence explicit, \begin{align} \frac7{720}R^\mu_{\,\,..\rho}R^{\rho\,\,\,\nu}_{\,\,..}\int\frac{d^dq dm^2}{(2\pi)^d} &\frac{1}{q^2-m^2}\left(4 q_\mu q_\nu \partial_q^{.4}+2g_{(\nu.}q_{\mu)}\partial_q^{.3}+g^{.}_\nu g^._\mu\partial_q^{.2}\right)\frac{1}{q^2-m^2}\\ \nonumber = \frac7{720}R^\mu_{\,\,..\rho}R^{\rho\,\,\,\nu}_{\,\,..}\int\frac{d^dq dm^2}{(2\pi)^d}\Bigg(&-2\frac{g_{\nu}^.g_{\mu}^.g^{..}_{(\times \rm \color{purple} 12)}}{(q^2-m^2)^3}\\ \nonumber &+\frac{8}{(q^2-m^2)^4}\left(g_{\nu}^.g_{\mu}^.q^{.2}_{(\times \rm \color{purple} 12)}+2g_{(\nu .}q_{\mu)}q^.g^{..}_{(\times \rm \color{purple} 12)}+4q_\nu q_\mu g^{..}g^{..}_{(\times \rm \color{purple} 3)}\right)\\ \nonumber &-\frac{96}{(q^2-m^2)^5}\left(g_{(\nu}^.q_{\mu)}q^{.3}_{(\times \rm \color{purple} 4)}+2q_\nu q_\mu q^{.2}g^{..}_{(\times \rm \color{purple} 12)}\right)+1536\frac{q_\nu q_\mu q^{.4}}{(q^2-m^2)^6}\Bigg)\\ =\frac7{720}\frac13\left(R_{..}^2+\frac32R_{....}^2\right)&\int\left(\frac{d^dq}{(2\pi)^dq^4}+\mathcal O\left(\frac{m^2}{q^6}\right)\right) \end{align} where $R_{....}^2=R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}$, the purple subscript indicates the multiplicity of terms from symmetrizing in `$.$' indices, and we used $R_{\alpha\beta\gamma\delta}R^{\alpha\gamma\beta\delta}=R_{....}^2/2$. Even if somewhat involved, in contrast with conventional Feynman-diagram techniques this integral, the basic element of the computation, is a relatively simple exercise, and no knowledge of the heat-kernel method or DeWitt coefficients was required.
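The nested-anticommutator identity quoted at the start of this computation holds for arbitrary matrices and can be checked directly (random matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = (rng.standard_normal((5, 5)) for _ in range(3))
ac = lambda X, Y: X @ Y + Y @ X   # anticommutator {X,Y}
co = lambda X, Y: X @ Y - Y @ X   # commutator [X,Y]

lhs = ac(A, ac(B, C))
rhs1 = ac(ac(A, B), C) + co(B, co(C, A))                  # first form of the identity
rhs2 = 2*ac(A, B) @ C + co(C, ac(A, B)) + co(B, co(C, A)) # second form
err = max(np.max(np.abs(lhs - rhs1)), np.max(np.abs(lhs - rhs2)))
```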
The other term in $\mathcal K_{(3)}$ adds up with the above to yield: \begin{align} \int\frac{d^dq dm^2}{(2\pi)^d}\Delta\left\{q,\mathcal K_{(3)} \right\}\Delta=\left(\frac7{720}-\frac1{48}\right)\frac13\left(R_{..}^2+\frac32R_{....}^2\right)&\int\frac{d^dq}{(2\pi)^dq^4}+\mathcal O\left(\frac{m^2}{q^2}\right)\,. \end{align} The loop integration for the left-over terms in~(\ref{CDv3term}) follows the lines above and results, with the abbreviated notation of~(\ref{DefmqInt}), in one of the main results derived here: \begin{align}\label{KurRes} \int \!d\bar Q\, \mathcal I_{\rm s}=&\int \!d\bar Q\left( \Delta\left(-\{q,\widetilde{\mathcal K}_{(3)}\}-\widetilde{\mathcal K}_{(1)}^2\right)\Delta+\left(\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\right)^2 \Delta\right) \\ \nonumber &= \left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right)\mbox{tr}(\mathbb I)+\frac{1}{12}\mbox{tr}\left(\tilde{\mathscr R}_{\mu\nu}\tilde{\mathscr R}^{\mu\nu}\right)+\frac{1}{2}\mbox{tr}\left(\widetilde{\mathcal U}_{(0)}^{\rm s}+\frac R6\right)^2+{(\rm total\,\, der.)}\,. \end{align} This one-loop result has long been available in the literature, see~\cite{Fradkin_1977,Barvinsky:1985an,Buchbinder:1992rb}; the emphasis here is on the new computational technique. In this regard the universal formulae for the flat case, taking $[F_{\mu\nu}]^a_{\,\,b}\to R^{\alpha}_{\,\,\beta \mu\nu}$, reproduce all terms except the first one, which `counts' the degrees of freedom, is connected to the $a$-theorem, and has been explicitly computed here.
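The rational coefficient assembled from the two $\mathcal K_{(3)}$ pieces is a small but error-prone step that can be tracked with exact arithmetic; a minimal check of the prefactor multiplying $(R_{..}^2+\frac32R_{....}^2)$:

```python
from fractions import Fraction as F

# (7/720 - 1/48) * 1/3, the prefactor of (R_..^2 + 3/2 R_....^2)
coeff = (F(7, 720) - F(1, 48)) * F(1, 3)
```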
If one splits the contribution by the dimension of the operators, for the action of eq.~(\ref{OrAct}) and according to eq.~(\ref{ctU0}), the sum runs from a cosmological-constant term to dimension twelve (see~\cite{Ruhdorfer:2019qmk} for a study of the operator basis), which here we organize as \begin{align} \mathcal I_{\rm s}= \sum_{n=0}^6\kappa^{2n-4}\mathcal I_{\rm s}^{2n}(\alpha_{m_\phi},\alpha_\Lambda,R,\phi,\psi,F) \end{align} where the action is taken as a function of a single dimensionful parameter, $\kappa^{-1}= M_{\rm pl}/\sqrt{8\pi}$, and the ratios $\alpha_{m_\phi}\equiv m_\phi^2\kappa^2$, $\alpha_\Lambda\equiv\Lambda\kappa^2$. A set of diagrams which, although incomplete, represents all the possible external fields is given in figs.~\ref{fig1}-\ref{fig3}. Let us look at the curvature-squared terms explicitly, including the ghost contributions as well, in the structure of eq.~(\ref{KurRes}): \begin{align} \label{R2UV} {\rm Field} && &{\rm tr}(\mathbb{I})(R_{....}^2-R_{..}^2)/180&&{\rm tr}(\mathscr R^2)/12&&{\rm tr}(\widetilde{\mathcal U}_{(0)}^2)/2\\[1pt] \hline \nonumber &&&&&&&\\[-10pt] \nonumber {\rm Ghost}(c^\mu) & &(-2)\Bigg[&4\left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right) &+&\frac{1}{12}(-R_{....}^2)&+&\frac12\left(R_{..}^2+\frac49R^2\right)\Bigg]\\ \nonumber {\rm Metric} & & &10\left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right)&+&\frac{1}{12}(-6R_{....}^2)&+&\frac12\left(3R_{....}^2-4R_{..}^2+\frac{22}{36}R^2\right)\\ \nonumber {\rm Scalar} & &&\left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right)&&&+&\frac12\left(\frac{R}{6}\right)^2\\\nonumber {\rm CFT~Scalar} && &\left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right) &&&& \\\nonumber {\rm Weyl\,Fermion} & &(-1)\Bigg[&2\left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right)&+&\frac{1}{12}\frac{(-1)}4 R_{....}^2 &+&\frac12\frac {R^2}{72}\,\,\Bigg]\\ \nonumber {\rm Vector\,boson}& &&
4\left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right)&+&\frac1{12}(-R_{....}^2)&+&\frac12\left(R_{..}^2-\frac29R^2\right)\\ \nonumber {\rm Ghost} (c)&&(-2)&\Bigg[\left(\frac{R_{....}^2}{180}-\frac{R_{..}^2}{180}\right)&&&+&\frac12\left(\frac{R}{6}\right)^2\,\,\Bigg] \end{align} If there are $N_\phi$ scalars, $N_\psi$ fermions and $N_A$ (spin 1) gauge bosons, the contribution reads \begin{align}\nonumber R_{....}^2\left(\frac{N_\phi-13N_A}{180}+\frac{7N_\psi}{720} \right)+R_{..}^2\left(\frac{2N_\psi-N_\phi+88N_A}{180}\right)+R^2\left(\frac{2N_\phi-N_\psi-20N_A}{144}\right) \end{align} and so for the SM one has the input $N_i=\{4,45,12\}$. One can also project onto the basis of the Euler number density ($\tilde R_{....}^2=R_{....}^2-4R_{..}^2+R^2$), the Weyl tensor squared ($C_{....}^2=R_{....}^2-2R_{..}^2+R^2/3$) and a total derivative ($\nabla J=R_{....}^2+R_{..}^2+3R^2$) with the transformation \begin{align}\left(\begin{array}{c}c_{\tilde R}\\ c_{C}\\c_{\nabla J}\end{array}\right)= \frac1{22}\left(\begin{array}{ccc} -19&-8&9\\ 39&6&-15\\ 2&2&6 \end{array}\right)\left(\begin{array}{c} c_{R_{....}}\\c_{R_{..}}\\c_R\end{array}\right) \end{align} for the coefficients of each operator, to check that the trace anomaly is reproduced as in e.g.~\cite{Duff:1993wm}.
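The change of basis is fixed by linear algebra alone: writing the three operators above in terms of $(R_{....}^2, R_{..}^2, R^2)$ as rows of a matrix, the coefficients transform with its inverse transpose, which can be sanity-checked against the textbook conformal-scalar anomaly ($C^2/120$ and $-\tilde R^2/360$). A sympy sketch, also evaluating the counting formula above at the SM input:

```python
import sympy as sp

# rows: Euler density, Weyl^2, total derivative, in the (R....^2, R..^2, R^2) basis
A = sp.Matrix([[1, -4, 1],
               [1, -2, sp.Rational(1, 3)],
               [1,  1, 3]])
# coefficients transform with the inverse transpose: c_new = (A^T)^{-1} c_old
M = (A.T).inv()

# sanity check on a conformal scalar, c_old = (1/180, -1/180, 0)
c_old = sp.Matrix([sp.Rational(1, 180), sp.Rational(-1, 180), 0])
c_new = M * c_old    # (c_Euler, c_Weyl, c_totalder)

# R^2-term counting for N_phi scalars, N_psi Weyl fermions, N_A vectors (SM input)
Nphi, Npsi, NA = 4, 45, 12
c_R4 = sp.Rational(Nphi - 13*NA, 180) + sp.Rational(7*Npsi, 720)
c_R2 = sp.Rational(2*Npsi - Nphi + 88*NA, 180)
c_R  = sp.Rational(2*Nphi - Npsi - 20*NA, 144)
```

The conformal scalar comes out with Weyl-squared coefficient $1/120$ and Euler coefficient $-1/360$, as expected, and the same map can be applied to the SM coefficients computed in the last three lines.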
\begin{figure}\centering \begin{tikzpicture} \draw (0.22,0.33) -- (1,0); \draw (0.22,-0.33) node [anchor=south east] {$T$} -- (1,0); \draw (2,0) -- (2.78,0.33) node [anchor=north west]{$T$}; \draw (2,0) -- (2.78,-0.33); \draw[style={decorate, decoration={snake}}] (1.530,0) circle (15pt); \draw[style={decorate, decoration={snake}}] (1.470,0) circle (15pt); \draw (0,-.8) node {$.$}; \end{tikzpicture}\qquad \begin{tikzpicture} \draw (0.5,.75) node [anchor=south east] {$T$} -- (1.133,1/2); \draw (1,1.15) -- (1.133,1/2); \draw (2,1.15) -- (1.866,1/2); \draw (2.6,.75) node [anchor=south west] {$T$} -- (1.866,1/2); \draw (1.533,-.41) -- (1.2,-1); \draw (1.5,-.41) -- (1.8,-1) node [anchor=south west] {$T$}; \draw[style={decorate, decoration={snake}}] (1.530,0) circle (15pt); \draw[style={decorate, decoration={snake}}] (1.470,0) circle (15pt); \end{tikzpicture}\qquad \begin{tikzpicture} \draw (0.5,.75) node [anchor= north east] {$T$}-- (1.1465,.3535); \draw (1,1.1) -- (1.1465,.3535); \draw (2,1.1) -- (1.8535,.3535); \draw (2.5,.75) node [anchor=north west] {$T$} -- (1.8535,.3535); \draw (0.5,-.75) node [anchor=south east] {$T$}-- (1.1465,-.3535); \draw (1,-1.1) -- (1.1465,-.3535); \draw (2.1,-1.1) -- (1.8535,-.3535); \draw (2.5,-.75) node [anchor=south west] {$T$}-- (1.8535,-.3535); \draw[style={decorate, decoration={snake}}] (1.530,-.1) circle (15pt); \draw[style={decorate, decoration={snake}}] (1.470,-.1) circle (15pt); \end{tikzpicture} \caption{Schematic of UV divergent matter terms at one loop where T stands for the stress energy tensor\label{fig2} so schematically $T\sim \phi^2+\psi^2+F^2$.} \end{figure} \begin{figure}[h]\centering \begin{tikzpicture} \draw (0.22,0.4) node [anchor=north east] {$T$} -- (1,0); \draw (0.22,-0.4) -- (1,0); \draw [style={decorate, decoration={snake}}] (2,0) -- (3,0); \draw [style={decorate, decoration={snake}}] (2.02,-.075) -- (3,-.075) node [anchor=west] {$R$}; \draw[style={decorate, decoration={snake}}] (1.530,0) circle (15pt); 
\draw[style={decorate, decoration={snake}}] (1.470,0) circle (15pt); \draw (0,-1) node {\,}; \end{tikzpicture}\qquad \begin{tikzpicture} \draw (0.22,0.4) node [anchor=north east] {$T$} -- (1,0); \draw (0.22,-0.4) -- (1,0); \draw[style={decorate, decoration={snake}}] (1.530,0) circle (15pt); \draw[style={decorate, decoration={snake}}] (1.470,0) circle (15pt); \filldraw[gray] (2,0) circle (2pt) node [anchor=west, black]{$\,\,\,\Lambda$}; \draw (0,-1) node {\,}; \end{tikzpicture}\qquad \begin{tikzpicture} \draw (0.5,.75) node [anchor=south east] {$T$} -- (1.133,1/2); \draw (1,1.15) -- (1.133,1/2); \draw (2,1.15) -- (1.866,1/2); \draw (2.6,.75) node [anchor=south west] {$T$} -- (1.866,1/2); \draw [style={decorate, decoration={snake}}] (1.47,-.40) -- (1.45,-1); \draw [style={decorate, decoration={snake}}] (1.53,-.42) -- (1.55,-1.02) node [anchor=west] {$R$}; \draw[style={decorate, decoration={snake}}] (1.530,0) circle (15pt); \draw[style={decorate, decoration={snake}}] (1.470,0) circle (15pt); \end{tikzpicture} \caption{Schematic of the UV divergent terms at one loop\label{fig3}} \end{figure} The remaining terms are contained in $\mathscr R^2$ or $(\widetilde{\mathcal U}+R/6)^2$ and are straightforward to obtain. Here we do not reproduce them all but give, to indicate the scope, the lowest-dimensional operators generated, \begin{align} \int \!d\bar Q \,(\mathcal I_{\rm s}^{0}+\mathcal I_{\rm s}^{2})=\frac 1 2 m_\phi^4{\rm tr}(\mathbb{I}_\phi)+5\Lambda^2+\left(\frac{m_\phi^2}{6}{\rm tr}(\mathbb{I}_\phi)+\frac43\Lambda\right) R+8\Lambda m_{\phi}^2\kappa^2\phi^2\label{RpLUV} \end{align} where this contribution together with those in eq.~(\ref{R2UV}) encapsulates all spin $\leq 1$ contributions; at the other end, the highest-dimensional term generated is \begin{align} \int \!d\bar Q \,\mathcal I_{\rm s}^{12}=\frac{45\kappa^8}{2048}\left(\psi^\dagger \sigma \psi\right)^4\,, \end{align} which produces an 8-point amplitude that grows with energy $E$ as $\kappa^8E^4$.
\subsection{Mixed contributions in the loop \label{sec:MatRen}} \begin{figure}[h]\centering \begin{tikzpicture} \draw (0,0) node [anchor=south west] {$\sqrt{T}$} -- (1,0); \draw (2,0) -- (3,0) node [anchor=north east]{$\sqrt{T}$}; \draw[thick] (2,0) arc (0:180:15pt); \draw[style={decorate, decoration={snake}}] (2.025,0) arc (0:-180:15pt); \draw[style={decorate, decoration={snake}}] (1.975,0) arc (0:-180:15pt); \draw (0,-.8) node {$.$}; \end{tikzpicture}\quad \begin{tikzpicture} \draw (-1.7,1) node [anchor=north east] {$\sqrt{T}$} -- (-0.866,1/2); \draw (0.,1/2) -- (.86,1); \draw [style={decorate, decoration={snake}}] (-0.44,-.37) -- (-.44,-1); \draw [style={decorate, decoration={snake}}] (-0.39,-.40) -- (-.39,-1) node [anchor=west] {$R$}; \draw[style={decorate, decoration={snake}}] (-0.83,1/2) arc (150:390:15pt); \draw[style={decorate, decoration={snake}}] (-0.89,1/2) arc (150:390:15pt); \draw[style=thick] (0.05,1/2) arc (30:150:15pt); \end{tikzpicture}\quad \begin{tikzpicture} \draw[style={decorate, decoration={snake}}] (.77,0) arc (0:45:.75); \draw[style={decorate, decoration={snake}}] (.77,0) arc (0:-45:.75); \draw[style={decorate, decoration={snake}}] (.72,0) arc (0:45:.75); \draw[style={decorate, decoration={snake}}] (.72,0) arc (0:-45:.75); \draw[style=thick] (0.53,.53) arc (45:135:.75); \draw[style={decorate, decoration={snake}}] (-.77,0) arc (180:225:.75); \draw[style={decorate, decoration={snake}}] (-.77,0) arc (180:135:.75); \draw[style={decorate, decoration={snake}}] (-.72,0) arc (180:225:.75); \draw[style={decorate, decoration={snake}}] (-.72,0) arc (180:135:.75); \draw[style=thick] (-0.53,-.53) arc (225:315:.75); \draw (-1,1) node [anchor= north east] {$\sqrt{T}$}-- (-0.53,0.53); \draw (-1,-1) node [anchor= south east] {$\sqrt{T}$}-- (-0.53,-0.53); \draw (1,1) node [anchor= north west] {$\sqrt{T}$}-- (.53,.53); \draw (1,-1) node [anchor= south west] {$\sqrt{T}$}-- (.53,-.53); \end{tikzpicture} \caption{Non-exhaustive set of diagrams for mixed 
contributions\label{FigMx}} \end{figure} Diagrams with internal particles of different spin contribute terms like those in fig.~\ref{FigMx}, and the UV divergences that they give rise to in the effective action read \begin{align} \int \!d\bar Q\mathcal I_{\rm mx}=\int \!d\bar Q \left(\Delta \widetilde{\mathcal U}^{\rm mx}_{(2)}+\left\{\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}-\left\{q,\widetilde{\mathcal K}_{(1)}\right\}\right)\,,\Delta \widetilde{\mathcal U}^{\rm mx}_{(0)}\right\}+\left(\Delta \widetilde{\mathcal U}_{(0)}^{\rm mx}\right)^2 \right)\Delta\,. \end{align} Let us first address the $\widetilde{\mathcal U}_{(2)}^{\rm mx}$ term, which is given in terms of $\mathcal U$ in eqs.~(\ref{mxcU0}-\ref{mxcU2}), \begin{align} \widetilde{\mathcal U}_{(2)}^{\rm mx}=\mathcal U_{(2)}^{\rm mx}+ i[V\partial_q, \mathcal U^{\rm mx}_{(1)}]-\frac12[V\partial_q,[V\partial_q,\mathcal U^{\rm mx}_{(0)}]]-[[\nabla,V]\partial_q^2,\mathcal U^{\rm mx}_{(0)}]\,. \end{align} Tracing over these operators one can simplify to \begin{align}\label{mxtcU2} {\rm tr}(\widetilde{\mathcal U}_{(2)}^{\rm mx}-\mathcal U_{(2)}^{\rm mx})={\rm tr}\left( iV[\partial_q, \mathcal U^{\rm mx}_{(1)}]-\frac12[V\partial_q,[V\partial_q,\mathcal U^{\rm mx}_{(0)}]]-[\nabla,V][\partial_q^2,\mathcal U^{\rm mx}_{(0)}]\right)\,, \end{align} since for algebraic commutators like $[V,\widetilde{\mathcal U}_{(0)}^{\rm mx}]$ one has a vanishing trace. Given the structure in eq.~(\ref{mxcU0}) and the result \begin{align} \int \!d\bar Q\, \Delta \left[\partial_q^{.2},\frac{q_{\alpha}q_{\beta}}{q^2}\right]\Delta=0\,, \label{LoopInt0} \end{align} the last term in eq.~(\ref{mxtcU2}) cancels.
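Several cancellations in this subsection boil down to contracting the antisymmetric $[V_\mu,V_\nu]$ with a prefactor symmetric in $(\mu\nu)$, of the kind built from the metric structure $\frac1{12}g^{\mu\nu}g_{\alpha\beta}-\frac16 g^{(\mu}_{\alpha}g^{\nu)}_{\beta}$ appearing below, which vanishes identically; a quick numerical illustration (flat placeholder metric and a random antisymmetric array, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
eta = np.eye(4)   # placeholder metric; the argument is signature-independent

# prefactor P^{mu nu}_{a b} = g^{mu nu} g_{ab}/12 - g^{(mu}_a g^{nu)}_b / 6,
# with the symmetrization convention X^{(mu nu)} = (X^{mu nu} + X^{nu mu})/2
P = (np.einsum('mn,ab->mnab', eta, eta) / 12
     - (np.einsum('ma,nb->mnab', eta, eta)
        + np.einsum('mb,na->mnab', eta, eta)) / 12)

V = rng.standard_normal((4, 4))
comm = V - V.T                       # stand-in for the antisymmetric [V_mu, V_nu]
contracted = np.einsum('mn,mnab->ab', comm, P)
max_abs = np.max(np.abs(contracted))
```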
The first term on the RHS of eq.~(\ref{mxtcU2}) contains the integrals \begin{align}\label{loopInt1} &\int \!d\bar Q \Delta \left[\partial_q^\nu,\frac{q_\mu}{q^2}\right]\Delta=\frac{g^\nu_\mu}{2}& &\int \!d\bar Q\Delta \left[\partial_q^\mu,\frac{q_{\alpha}q_{\beta}}{q^2}\right]\partial_q^\nu\Delta=\frac1{12}g^{\mu\nu}g_{\alpha\beta}-\frac16 g^{\mu}_{(\alpha } g^\nu_{\beta)} \end{align} so that \begin{align} \int \!d\bar Q \, (i \Delta [\partial_q^\mu,\mathcal U^{\rm mx}_{(1)}]\Delta)&=\frac i2\left((\psi_{;\nu})^\dagger\sigma^\nu g^{\alpha\beta}-\frac{(\psi^{;(\alpha})^\dagger\sigma^{\beta)}}{2}\right)\sigma^\mu (g_{\rho\sigma}\sigma^.\psi_{;.}+\sigma_{(\rho}\psi_{;\sigma)})\\ \nonumber &+\frac{m_\phi^2\phi}{2}\left(g^{\alpha\beta} g_{(\rho}^\mu\phi_{;\sigma)}+(g^{\nu(\alpha} \phi^{;\beta)}-g^{\alpha\beta}\phi^{;\nu})g_{\nu}^\mu g_{\rho\sigma} \right)\\ \nonumber &-\left(\frac1{12}g^{\mu\omega}g_{\gamma\nu}-\frac16 g^{\mu}_{(\gamma } g^\omega_{\nu)}\right)\Bigg(\left(g^{\gamma(\alpha}\phi^{,\beta)}-g^{\alpha\beta}\phi^{,\gamma}\right)g^\nu_{(\rho}\phi_{,\sigma)}\\ \nonumber &\qquad-\left(g^{[\lambda(\alpha}F^{\beta) \gamma]}-g^{\alpha\beta}F^{\lambda\gamma}\right)\left(g_{[\lambda(\rho} F_{\sigma)\delta]}-g_{\rho\sigma}F_{\lambda\delta}\right)g^{\delta\nu}\Bigg)_{;\omega} \end{align} However, when tracing the above multiplied by $V\propto \varepsilon^{.\alpha\rho.}g^{\beta\sigma}$, all terms but the fermionic one cancel: \begin{align}\nonumber &\int \!d\bar Q {\rm tr}(\Delta i V[\partial_q,\mathcal U_{(1)}^{\rm mx}])\Delta\\ =&\frac{\kappa^4}{16}\left(10\psi^{\dagger;\mu}\sigma^\alpha\psi_{;\mu}\psi^\dagger\sigma_\alpha\psi-2\psi^{\dagger ;(\alpha}\sigma_{\alpha}\psi^{;\mu)}\psi^\dagger\sigma_{\mu}\psi-6i(\psi^{\dagger;\alpha}\sigma^\mu\psi^{;\rho})(\psi^\dagger\sigma^\nu\psi)\varepsilon_{\alpha\mu\rho\nu}\right)\,.
\end{align} The remaining term in eq.~(\ref{mxtcU2}) cancels, as can be seen by introducing the notation $\mathcal U^{\rm mx}_{(0)}=\mathcal U^{\rm mx,\mu\nu}_{(0)}q_\mu q_\nu/q^2=\mathcal U^{\rm mx,\star\star}_{(0)}q_\star^2/q^2$: \begin{align}\nonumber {\rm tr}(\left[V\partial_q,\left[V\partial_q,\mathcal U^{\rm mx,\star\star}_{(0)}\frac{q_\star^2}{q^2}\right]\right])=&{\rm tr}(\left[V\partial_q,V_.\mathcal U^{\rm mx,\star\star}_{(0)}\left[\partial_q^., \frac{q_{\star}^2}{q^2}\right] +\left[V,\mathcal U^{\rm mx,\star\star}_{(0)}\right]\frac{q_\star^2}{q^2}\partial_q^.\right])\\ =&{\rm tr}(V_.V_. \mathcal U^{\rm mx,\star\star}_{(0)} [\partial_q^{.2},\frac{q_\star^2}{q^2}]+V_.\left[V_.,\mathcal U^{\rm mx,\star\star}_{(0)}\right]\left[\partial_q^.,\frac{q_\star^2}{q^2}\right]\partial_q^.) \end{align} Again, given that the integral in eq.~(\ref{LoopInt0}) cancels, one has \begin{align} \int \!d\bar Q {\rm tr} (\Delta [V\partial_q,[V\partial_q,\mathcal U^{\rm mx}_{(0)}]]\Delta )=&\left( \frac{1}{12}g^{\mu\nu}g_{\alpha\beta}-\frac16 g^{(\mu}_{\alpha } g^{\nu)}_{\beta}\right){\rm tr} (V_\mu\left[V_\nu,\mathcal U^{\rm mx,\alpha\beta}_{(0)}\right])\\ \nonumber =&\left( \frac{1}{12}g^{\mu\nu}g_{\alpha\beta}-\frac16 g^{(\mu}_{\alpha } g^{\nu)}_{\beta}\right){\rm tr} ([V_\mu,V_\nu]\,\mathcal U^{\rm mx,\alpha\beta}_{(0)})=0\,.\label{mxtU2cancels} \end{align} The terms in eq.~(\ref{mxtcU2}) then reduce to: \begin{align} \nonumber &\int \!d\bar Q \Delta\left( \widetilde{\mathcal U}^{\rm mx}_{(2)}-\mathcal U_{(2)}^{\rm mx}\right)\Delta\\ =&\frac{\kappa^4}{16}\left(10\psi^{\dagger;\mu}\sigma^\alpha\psi_{;\mu}\psi^\dagger\sigma_\alpha\psi-2\psi^{\dagger ;(\alpha}\sigma_{\alpha}\psi^{;\mu)}\psi^\dagger\sigma_{\mu}\psi-6i(\psi^{\dagger;\alpha}\sigma^\mu\psi^{;\rho})(\psi^\dagger\sigma^\nu\psi)\varepsilon_{\alpha\mu\rho\nu}\right)\,. \end{align} Now we turn to the term $\mathcal U_{(2)}^{\rm mx}$, given in eq.~(\ref{mxcU2}).
Useful relations for the trace of the operator are \begin{align} (g^{\mu(\alpha}\phi^{;\beta) \gamma}-g^{\alpha\beta} \phi^{;\mu \gamma})g^{\nu}_{(\alpha}\phi_{;\beta) \delta}=&2\phi^{;\alpha \gamma} \phi_{;\alpha \delta}g^{\mu\nu}\,,\\ \left(g^{[\lambda(\alpha}F^{\beta)\mu]}-g^{\alpha\beta}F^{\lambda\mu}\right)^{;\gamma}\left(g_{[\lambda (\alpha}F_{\beta ) \nu]}-g_{\alpha\beta}F_{\lambda\nu}\right)_{;\delta}=&4 F^{\alpha \mu;\gamma}F_{\alpha\nu;\delta}+2g^\mu_\nu F^{\alpha\beta;\gamma}F_{\alpha\beta;\delta}\,,\\ (q^{(\alpha}\phi^{\beta)}-g^{\alpha\beta}q\phi^;)\frac{1}{q^2}g_{\alpha\beta}\phi_{;.}\partial_q^. +g^{\alpha\beta}\phi_{;.} \partial_q^. \frac{1}{q^2} q_{(\alpha}\phi_{;\beta)}=&2\phi_{;\mu}\phi^{;\nu}\left[\partial_q^\mu\,,\frac{q_{\nu}}{q^2}\right] \,, \end{align} and the possible integrals reduce to those in eqs.~(\ref{loopInt1},\ref{LoopInt0}) plus the following \begin{align} \int \!d\bar Q\Delta\left(\mathcal K_{(1)}^\mu\frac{ q^{\nu}}{q^2}+ \frac{q^\mu}{q^2}\mathcal K_{(1)}^{\nu}-\frac{q^\mu}{q^2}\left\{q,\mathcal K_{(1)}\right\}\frac{q^{\nu}}{q^2}\right)\Delta&=\frac{g^{\mu\nu}R+2R^{\mu\nu}}{24} \, \end{align} so that the result is \begin{align} \nonumber \int \!d\bar Q \Delta\left(\mathcal U_{(2)}^{\rm mx}\right)\Delta \label{U2T} =&\frac{\kappa^2}{3}\left(F^{\alpha\beta;\lambda}F_{\alpha\beta;\lambda}-2F^{\alpha\beta;\lambda}F_{\alpha\lambda;\beta}-2F^{\alpha\mu}_{\,\,\,\,\,\,\,;\mu}F_{\alpha\nu}^{\,\,\,\,\,\,;\nu}\right)+3m_\phi^2\kappa^2\phi_;^2-4m_\phi^4\kappa^2\phi^2\\ &+\frac {i\kappa^2}2\left((\psi_{;\nu})^\dagger\sigma^\nu g^{\alpha\beta}-\frac{(\psi^{;(\alpha})^\dagger\sigma^{\beta)}}{2}\right)_{;\mu}\sigma^\mu(g_{\alpha\beta}\sigma^\lambda\psi_{;\lambda}+\sigma_{(\alpha}\psi_{;\beta)})\\ &+\frac{\kappa^2}2 R\phi_{;.}^2+\frac {\kappa^2}6R (FF)-\frac{2\kappa^2}3 (FFR)\,, \end{align} where $(FFR)=F^{\mu\nu}F_{\nu\rho}R^{\rho\mu}$. 
The square of the $\widetilde{\mathcal U}_{(0)}^{\rm mx}$ term involves a trace and a simple integral; carrying on the notation of eq.~(\ref{mxtU2cancels}), they combine into \begin{align}\nonumber\int \!d\bar Q{\rm tr}\left(\Delta\widetilde{\mathcal U}_{(0)}^{\rm mx}\right)^2\Delta& ={\rm tr}(\mathcal U^{\rm mx,\mu\nu}_{(0)}\mathcal U^{\rm mx,\lambda\omega}_{(0)}) \frac{g_{\mu\nu}g_{\lambda\omega}+g_{\mu(\lambda} g_{\omega)\nu}}{48}\\ & = 2\kappa^{4}\phi_;^4+\frac{7\kappa^{4}}6 (FF)^2+\frac{4\kappa^{4}}3(FFFF)+3 \kappa^{4}(\phi_; FF\phi_;)\,, \end{align} where $(FF)=$tr$FF=F_{\alpha\beta}F^{\beta\alpha}$, $(FFFF)=F^{ab}F_{bc}F^{cd}F_{da}$. Lastly, the crossed term, given the integrals \begin{align} &\int \!d\bar Q \left\{\Delta ,\Delta\frac{q_\alpha q_\beta}{q^2}\right\}=\frac {g_{\alpha\beta}}4\,,&&\int \!d\bar Q \left\{\Delta\frac{q_\alpha q_\beta}{q^2} ,\Delta\{q,\mathcal K_{(1)}\}\right\}=-\frac R6\frac{g_{\alpha\beta}}{4}\, \end{align} results in \begin{align}\nonumber &\int \!d\bar Q\left\{\Delta \widetilde{\mathcal U}_{(0)}^{\rm mx}\,,\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}-\{q,\widetilde{\mathcal K}_{(1)}\}\right)\right\}\Delta =\frac14 {\rm tr}\left(\widetilde{\mathcal U}^{\rm mx,\alpha\beta}_{(0)}g_{\alpha\beta}(\widetilde{\mathcal U}^{\rm s}_{(0)}+R/6)\right)\\ \nonumber =&\frac{\kappa^2}4\left[\widetilde{\mathcal U}_{(0)}^{\rm s}\right]^{\alpha\beta}_{\rho\sigma}\left(g_{\nu(\alpha}\phi_{;\beta) }(g^{\nu(\rho}\phi^{;\sigma) }-g^{\rho\sigma} \phi^{;\nu })- \left(g_{[\lambda (\alpha}F_{\beta ) \nu]}-g_{\alpha\beta}F_{\lambda\nu}\right)\left(g^{[\lambda(\rho}F^{\sigma)\nu]}-g^{\rho\sigma}F^{\lambda\nu}\right)\right)\\ & +\frac{\kappa^2}3 R\phi_;^2+\frac{\kappa^2}2 (FF)R\,. \end{align} So to summarize, we have that \begin{align} \nonumber &\int \!d\bar Q\mathcal I_{\rm mx}=\int \!d\bar Q \left(\Delta \widetilde{\mathcal U}^{\rm mx}_{(2)}+\left\{\Delta\left(\widetilde{\mathcal U}_{(0)}^{\rm s}-\left\{q,\widetilde{\mathcal
K}_{(1)}\right\}\right)\,,\Delta \widetilde{\mathcal U}^{\rm mx}_{(0)}\right\}+\left(\Delta \widetilde{\mathcal U}_{(0)}^{\rm mx}\right)^2 \right)\Delta\\ \nonumber =&\frac{\kappa^2}4\left[\widetilde{\mathcal U}_{(0)}^{\rm s}\right]^{\alpha\beta}_{\rho\sigma}\left(g_{\nu(\alpha}\phi_{;\beta) }(g^{\nu(\rho}\phi^{;\sigma) }-g^{\rho\sigma} \phi^{;\nu })- \left(g_{[\lambda (\alpha}F_{\beta ) \nu]}-g_{\alpha\beta}F_{\lambda\nu}\right)\left(g^{[\lambda(\rho}F^{\sigma)\nu]}-g^{\rho\sigma}F^{\lambda\nu}\right)\right) \\ \nonumber &+\frac{\kappa^4}{16}\left(10\psi^{\dagger;\mu}\sigma^\alpha\psi_{;\mu}\psi^\dagger\sigma_\alpha\psi-2\psi^{\dagger ;(\alpha}\sigma_{\alpha}\psi^{;\mu)}\psi^\dagger\sigma_{\mu}\psi-6i(\psi^{\dagger;\alpha}\sigma^\mu\psi^{;\rho})(\psi^\dagger\sigma^\nu\psi)\varepsilon_{\alpha\mu\rho\nu}\right)\\ \nonumber &+\frac{\kappa^2}{3}\left(F^{\alpha\beta;.}F_{\alpha\beta;.}-2F^{\alpha\beta;.}F_{\alpha.;\beta}-2F^{\alpha\nu}_{\,\,\,\,\,\,\,;\nu}F_{\alpha.}^{\,\,\,\,\,\,;.}\right)+3m_\phi^2\kappa^2\phi_;^2-4m_\phi^4\kappa^2\phi^2\\ \nonumber &+\frac {i\kappa^2}2\left((\psi_{;*})^\dagger\sigma^* g^{\alpha\beta}-\frac{(\psi^{;(\alpha})^\dagger\sigma^{\beta)}}{2}\right)_{;\nu}\sigma^\nu(g_{\alpha\beta}\sigma^.\psi_{;.}+\sigma_{(\alpha}\psi_{;\beta)})\\ \nonumber &+2\kappa^{4}\phi_;^4+\frac{7\kappa^{4}}6 (FF)^2+\frac{4\kappa^{4}}3(FFFF)+3 \kappa^{4}(\phi_; FF\phi_;)\\ &+\frac{5\kappa^2}6 R\phi_{;.}^2+\frac {2\kappa^2}3\left(R (FF)- (FFR)\right)\,, \label{MxRst} \end{align} and the dimension of the operators generated ranges from 2 to 10. \section{Conclusions} A novel method for computing loop corrections in gravity was presented, based on a covariant derivative expansion. The generalization of the covariant derivative expansion to gravity was carried out explicitly to third order in inverse loop momenta and employed to compute the one-loop UV divergences in Einstein-Hilbert gravity with a cosmological constant $\Lambda$ and spin 0, 1/2 and 1 matter.
Our results are summarized in eqs.~(\ref{FinAct}-\ref{Mixed},\ref{R2UV},\ref{MxRst}). While the target here was the UV, this technique could be extended to obtain the full one-loop action in a universal formula akin to the flat-space case and, in doing so, to study the model-independent properties of gravity in the IR. This extension would require pushing to higher orders in inverse loop momenta in the covariant derivative expansion, which stands as a computational challenge. Inflation and the recent interest in the low-energy consequences of the UV completion of gravity are fields where this technique could be put to use. \acknowledgments The author acknowledges fruitful discussions with Enrique Alvarez, Diego Blas, Brian Henning and Hitoshi Murayama. This work was supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan.
\section{Introduction and Main Results} \ \ \ \ \ Many mathematical models of phenomena occurring in sociology, physics, biology and engineering involve random differential and integral equations. Theoretical and applied treatments of problems concerning random differential and integral equations can be found in many papers and monographs: Bharucha-Reid ([4]); Doob ([5]); Padgett and Tsokos ([6]); Tsokos and Padgett ([7]); Rao and Tsokos ([8]). \\ For stochastic differential equations, we must first prove the existence of solutions. In the filtering problem, the system $X_t$ satisfies $dX_t=b(t,X_t)dt+\sigma (t,X_t)dB(t)$ and the observations $Z_s$ satisfy $dZ_s=c(s,X_s)ds+d(s,X_s)dV(s),\ Z_0=0$ for $0\leq s\leq t$, where $V(s)$ is a Brownian motion; we want to find the best estimate $\hat{X} _t$ of the state $X_t$ of the system based on these observations. Before finding the estimate $\hat{X} _t$, we must impose some assumptions ensuring the existence of solutions of the corresponding stochastic integral equations. \\ Some mathematicians have used Picard's iteration or Banach's contraction mapping principle to prove the existence and uniqueness of solutions of stochastic integral equations. The goal of this paper is to give a new existence theorem for stochastic integral equations using Schauder's fixed point theorem; in order to apply Schauder's fixed point theorem we need to construct a compact operator $A$ and a convex, bounded, closed, nonempty subset $M$. Furthermore, compared with Banach's fixed point theorem, we weaken some conditions.
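As a small numerical illustration of the contraction-mapping idea just mentioned, the sketch below runs Picard iteration $u_{k+1}=Au_k$ for a deterministic integral equation of the type $u(x)=\lambda\int_a^b F(x,y,u(y))dy$. The kernel $F(x,y,u)=\cos (u)\,(x+y)$, the value $\lambda =0.1$ and the grid size are illustrative assumptions, not taken from the references; with these choices $(b-a)\,|\lambda |\sup |F_u|=0.2<1$ on $[0,1]$, so the map is contractive and the iteration converges to the unique fixed point.

```python
import numpy as np

# Hypothetical kernel F(x, y, u) = cos(u) * (x + y) with lambda = 0.1 on [0, 1]:
# |F_u| <= |x + y| <= 2, so (b - a) * |lambda| * L = 0.2 < 1 and A is contractive.
a, b, lam = 0.0, 1.0, 0.1
n = 201
x = np.linspace(a, b, n)
dx = (b - a) / (n - 1)
w = np.full(n, dx)            # trapezoidal quadrature weights
w[0] = w[-1] = dx / 2

def A(u):
    # (A u)(x) = lambda * \int_a^b cos(u(y)) * (x + y) dy, on the grid
    integrand = np.cos(u)[None, :] * (x[:, None] + x[None, :])
    return lam * integrand @ w

u = np.zeros(n)
for _ in range(60):           # Picard iteration u_{k+1} = A(u_k) from u_0 = 0
    u = A(u)

residual = np.max(np.abs(u - A(u)))   # how well u solves u = A(u)
```

In this contractive regime the computed fixed point is the unique solution (Banach); Schauder's theorem would instead only ask for a bound on $\sup |F|$, which is why its hypotheses are easier to satisfy.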
\vspace{0.3cm} \textbf{Theorem 1.1(\cite{1})}(Schauder's fixed point theorem).\ {\em The compact operator $$A: M\longrightarrow M$$ has at least one fixed point when M is a bounded, closed, convex, nonempty subset of a Banach space X over the real field.} \vspace{0.3cm} \textbf{Theorem 1.2(\cite{1})}(Banach's fixed point theorem).{\em We assume that:\\ (a) M is a closed nonempty subset in the Banach space X over field K, and\\ (b) the operator $A: M\longrightarrow M$ is k-contractive, i.e., there is $0\leq k<1$ such that: $$ {\| Au-Av\| \leq k\| u-v\| \ \ for \ all \ u,v\in M.} $$ Then, the operator A has exactly one fixed point $u$ on the set M.}\\ Schauder's fixed point theorem can be applied in many fields of mathematics, in particular to the integral equation: \begin{eqnarray} {u(x)=\lambda\int_a^bF(x,y,u(y))dy, \ \ \ a\leq x\leq b,\label{91.1}} \end{eqnarray} where $-\infty <a<b<+\infty$ and $\lambda\in R$. Let $$ Q={\{(x,y,u)\in R^3 ;\ x,y\in [a,b],\ |u|\leq r\}} \ \ for \ fixed \ r>0.$$ \vspace{0.3cm} \textbf{Theorem 1.3(\cite{1})}{\em Assume the following conditions:\\ (a) The function $F:Q\longrightarrow R $ is continuous.\\ (b) We define ${\mathcal {M}}:=\max _{(x,y,u)\in Q}|F(x,y,u)|$, and let the real number $\lambda $ be given such that \begin{eqnarray} (b-a)|\lambda | {\mathcal {M}}\leq r. \end{eqnarray} (c) We set $X: =C[a,b]$ and $ M:=\{u\in X;\| u\| \leq r\}$.\\ Then the original integral equation (1) has at least one solution $u\in M$.}\\ It is well known that Banach's fixed point theorem can be used to prove the existence and uniqueness of the solution of the integral equation (1). \vspace{0.3cm} \textbf{Theorem 1.4(\cite{1})}{\em Assume the following conditions:\\ (a) The function $F:[a,b]\times [a,b]\times R\longrightarrow R$ is continuous, and the partial derivative $$F_u:[a,b]\times [a,b]\times R\longrightarrow R$$ is also continuous.
\\ (b) There is a number $\mathcal {L}$ such that $$ |F_u(x,y,u)|\leq {\mathcal {L}} \ \ for \ all \ x,y\in [a,b],u\in R. $$ (c) Let the real number $\lambda $ be given such that \begin{eqnarray} (b-a)|\lambda |{\mathcal {L}}<1. \end{eqnarray} (d) Set $X:=C[a,b]$ and $\| u\| :=\max_{a\leq x\leq b}|u(x)|$.\\ Then the original equation (1) has a unique solution $u\in X$.}\\ From Theorem 1.1 and Theorem 1.2 we know that Schauder's fixed point theorem is an existence principle, while Banach's fixed point theorem is an existence and uniqueness theorem. The conditions in Theorem 1.3 are weaker than those in Theorem 1.4; that is, (2) is easier to satisfy than (3).\\ A natural question here is whether Schauder's and Banach's fixed point theorems can be applied to stochastic integral equations and, furthermore, whether the conditions coming from Banach's fixed point theorem are stronger than those coming from Schauder's fixed point theorem.\\ \vspace{0.3cm} {\bf Notation 2.1(\cite{2})}\ {\em For convenience, we will use $X:=L_{ad}^2([a,b]\times \Omega )$ to denote the space of all stochastic processes $f(t,\omega ), a\leq t\leq b,\omega \in \Omega $ satisfying the following conditions:\\ (1)$f(t,\omega )$ is adapted to the filtration $\mathcal {F}$$_t$; \\ (2)$\int _a^b E(|f(t)|^2)dt<+\infty .$} \\ Now we want to solve the stochastic integral equation: \begin{eqnarray} {x(t;w)=h(t_0;w)+\int_a^t \sigma (s,x(s;w))dB(s)+\int_a^t f(s,x(s;w))ds,\ a\leq t\leq b,} \end{eqnarray} where: (1)$\omega \in \Omega $, where $\Omega $ is the supporting set of the probability measure space $(\Omega ,{\mathcal {F}} ,P)$ with $\mathcal {F} $ being the $\sigma $-algebra and P the probability measure;\\ (2)$x(t;w)$ is the unknown random variable for each $t\in [a,b]$;\\ (3)$h(t_0;w)$ is the known random variable and $E|h|^2<+\infty $;\\ (4)$B(t)$ is a Brownian motion and $\{{\mathcal {F}} _t ; a\leq t\leq b\}$ is a filtration such that $B(t)$ is ${\mathcal {F}} _t$
-measurable for each $t$ and $B(t)-B(s)$ is independent of ${\mathcal {F}}_s$ for any $s<t$. Let $$ Q={\{(t,X_t)\in R^2 ;\ t\in [a,b], \ and \ \| X_t\| \leq r \ for \ fixed \ r>0\}}. $$ \vspace{0.3cm} \textbf{Theorem 1.5 }{\em Assume the following conditions: \\ (a) $f(s,X_s),\ \sigma (s,X_s)$ are measurable on $[a,b]\times \Omega $;\\ ~~~~ $f(s,X_s):Q\longrightarrow R$ is continuous;\\ ~~~~ $\sigma (s,X_s):Q\longrightarrow R$ is continuous;\\ (b) We define \begin{eqnarray} d=\sup _{(s,X_s)\in Q}\{ \| f(s,X_s)\| ,\| \sigma (s,X_s)\| \}, \end{eqnarray} and let the real numbers $a,\ b,\ d$ and the random variable $h$ be given such that \begin{eqnarray} 3E[h^2]+3(1+b-a)(b-a)d^2\leq r^2. \end{eqnarray} (c) We set $X:=L_{ad}^2([a,b]\times \Omega )$ (see Notation 2.1) and $M:=\{ X_t\in X;\| X_t\| \leq r\}$.\\ Then the stochastic integral equation (4) has at least one solution $X_t\in M$}. \vspace{0.3cm} \textbf{Theorem 1.6 }{\em Assume the following conditions: \\ (a) $f(s,X_s), \ \sigma (s,X_s)$ are measurable on $[a,b]\times \Omega $;\\ (b)\begin{eqnarray} \ |f(s,X_s)-f(s,Y_s)|&\leq& k_1 |X_s-Y_s|;\\ |\sigma (s,X_s)-\sigma (s,Y_s)|&\leq& k_2 |X_s-Y_s|; \end{eqnarray} (c) Let the real numbers $a,\ b$ and $c=\max \{k_1,k_2\}$ be given such that \begin{eqnarray} 0\leq 2c^2(1+b-a)(b-a)<1. \end{eqnarray} (d) We set $X:=L_{ad}^2([a,b]\times \Omega )$ (see Notation 2.1) and $M:=\{ X_t\in X;\| X_t\| \leq r\}.$ \\ Then the stochastic integral equation (4) has a unique solution $X_t\in M$}. \section{Some Lemmas } \ \ \ \ \ \ We require the following lemmas for proving the existence of solutions of the stochastic integral equation. \vspace{0.3cm} {\bf Lemma 2.1(\cite{1})}\ {\em Let X and Y be normed spaces over field K, and let $$A:M\subseteq X\longrightarrow Y$$ be a continuous operator on the compact nonempty subset M of X. Then, A is uniformly continuous on M. \vspace{0.3cm} {\bf Lemma 2.2(\cite{1})}\ Let $X:=L_{ad}^2([a,b]\times \Omega )$ with $\| X_t\|:=(E|X_t|^2)^\frac{1}{2}$ and $-\infty <a<b<+\infty $.
Suppose that we are given a set M in X such that \\ (1)M is bounded, i.e., $\| X_t\| \leq r$ for all $X_t\in M$ and fixed $r\geq 0$.\\ (2)M is equicontinuous, i.e., for each $\varepsilon >0$, there is a $\delta >0$ such that \\ $$|t_1-t_2|<\delta \ \ \ and \ \ X_t\in M \ \ \ imply \ \ \| X_{t_1}-X_{t_2}\| <\varepsilon .$$ Then, M is a relatively compact subset of X. \vspace{0.3cm} {\bf Definition 2.1(\cite{1})}\ Let X and Y be normed spaces over field K. The operator $$A:M\subseteq X\longrightarrow Y$$ is called compact iff\\ (1) A is continuous, and\\ (2) A transforms bounded sets into relatively compact sets. \vspace{0.3cm} {\bf Lemma 2.3(\cite{3})}\ (It\^{o} isometry)\\ For each $f\in L_{ad}^2([a,b]\times \Omega )$, we have\\ $$E[(\int _a^b f(t,w)dB(t))^2]=E[\int _a^b f^2(t,w)dt].$$} \section{The Proof of Theorem 1.5} \ \ \ \ \ \ \vspace{0.3cm} {\bf Proof}: We divide the proof into three steps: $Step$ 1: We prove that $M=\{ X_{t}\in X;\| X_t\| \leq r\}$ is a closed, convex subset of $L_{ad}^2([a,b]\times \Omega )$ (see Notation 2.1). \\ (A) We prove M is closed. \\ Let $X_t^{(n)}\in M$ for all $n$, i.e., $$\| X_t^{(n)}\| \leq r \ \ \ for \ all \ n.$$ If $X_t^{(n)}\longrightarrow X_t$ as $n\longrightarrow +\infty $, then $\|X_t\|\leq r$, and hence $X_t\in M$.\\ (B) We prove M is convex. \\ If $X_t,Y_t\in M$ and $0\leq \alpha \leq 1$, then \begin{eqnarray*} \ \|\alpha X_t+(1-\alpha )Y_t\| &\leq&\| \alpha X_t\| +\|(1-\alpha ) Y_t\| \\ &\leq& \mbox{} \alpha r+(1-\alpha )r\\ &=& \mbox{} r \end{eqnarray*} Hence $$\alpha X_t+(1-\alpha )Y_t\in M.$$ $Step$ 2: We prove that $A: M\longrightarrow M$ is a compact operator.\\ Define $A: M\longrightarrow M$ by \begin{eqnarray} {A(X_t)=h(t_0;w)+\int_a^t \sigma (s,x(s;w))dB(s)+\int_a^t f(s,x(s;w))ds, \ a\leq t\leq b.} \end{eqnarray} Then\\ (a) $A: M\longrightarrow M$ is a continuous operator.\\ By Lemma 2.1, we know that $f(s,X_s), \sigma (s,X_s)$ are uniformly continuous on the compact set Q.
This implies that, for any $\varepsilon _1,\varepsilon _2>0$, there is a number $\delta >0$ such that $$\| \sigma (s,X_s)-\sigma (s,Y_s)\| <\varepsilon _1$$ $$\| f(s,X_s)-f(s,Y_s)\| <\varepsilon _2$$ for all $(s,X_s),(s,Y_s)\in Q$ with $\| X_s-Y_s\| <\delta .$\\ For each $X_t,Y_t\in M$, we have \begin{eqnarray*} \ \| AX_t-AY_t \| ^2&=&E(\int _a^t (\sigma (s,X_s)-\sigma (s,Y_s))dB(s)\\ &+& \mbox{} \int _a^t (f(s,X_s)-f(s,Y_s))ds)^2 \end{eqnarray*} Using the inequality $(u+v)^2\leq 2(u^2+v^2)$, we get \begin{eqnarray} \ \| AX_t-AY_t \| ^2&\leq&2E(\int _a^t (\sigma (s,X_s)-\sigma (s,Y_s))dB(s))^2{\nonumber}\\ &+& \mbox{} 2E(\int _a^t (f(s,X_s)-f(s,Y_s))ds)^2 \end{eqnarray} Applying the It\^{o} isometry to $E(\int _a^t (\sigma (s,X_s)-\sigma (s,Y_s))dB(s))^2$ , we get: \begin{eqnarray} \ E(\int _a^t (\sigma (s,X_s)-\sigma (s,Y_s))dB(s))^2&=&E(\int _a^t (\sigma (s,X_s)-\sigma (s,Y_s))^2ds){\nonumber}\\ &=& \mbox{} \int _a^t E(\sigma (s,X_s)-\sigma (s,Y_s))^2ds{\nonumber}\\ &=& \mbox{} \int _a^t \| \sigma (s,X_s)-\sigma (s,Y_s)\| ^2ds{\nonumber}\\ &<& \mbox{} (b-a)\varepsilon _{1}^2 \end{eqnarray} For $E(\int _a^t (f(s,X_s)-f(s,Y_s))ds)^2$, we use Schwarz's inequality to get \begin{eqnarray} \ E(\int _a^t (f(s,X_s)-f(s,Y_s))ds)^2&\leq&E((t-a)\int _a^t (f(s,X_s)-f(s,Y_s))^2ds){\nonumber}\\ &\leq& \mbox{} (b-a)\int _a^t E(f(s,X_s)-f(s,Y_s))^2ds{\nonumber}\\ &=& \mbox{} (b-a)\int _a^t \| f(s,X_s)-f(s,Y_s)\| ^2ds{\nonumber}\\ &<& \mbox{} (b-a)^2\varepsilon _{2}^2 \end{eqnarray} Put equations (12) and (13) into equation (11) to get \begin{eqnarray*} \ \| AX_t-AY_t \| ^2&<&2(b-a)[(b-a)\varepsilon _{2}^2+\varepsilon _{1}^2]\\ &=& \mbox{} \varepsilon ^2 \end{eqnarray*} Therefore, for each $\varepsilon >0$ there exists $\delta >0$ such that $\| X_t-Y_t\| \leq \delta $ implies \begin{eqnarray*} \| AX_t-AY_t \| < \varepsilon . \end{eqnarray*} That is, A is a continuous operator.
\\ (b) A(M) is bounded.\\ For each $X_t\in M$ \begin{eqnarray} \ \| AX_t\| ^2&=&E(h(t_0;\omega )+\int _a^t \sigma (s,X_s)dB(s)+\int _a^t f(s,X_s)ds)^2{\nonumber}\\ &\leq& \mbox{} 3E[h^2]+3E(\int _a^t \sigma (s,X_s)dB(s))^2+3E(\int _a^t f(s,X_s)ds)^2{\nonumber}\\ &\leq& \mbox{} 3E[h^2]+3\int _a^t E|\sigma (s,X_s)|^2ds+3(b-a)\int _a^t E|f(s,X_s)|^2ds{\nonumber}\\ &=& \mbox{} 3E[h^2]+3\int _a^t \| \sigma (s,X_s)\| ^2ds+3(b-a)\int _a^t \| f(s,X_s)\| ^2ds{\nonumber}\\ &\leq& \mbox{} 3E[h^2]+3(b-a)(1+b-a)d^2{\nonumber}\\ &\leq& \mbox{} r^2 \end{eqnarray} Thus A(M) is bounded.\\ (c) A(M) is equicontinuous. \\ For each $X_t\in M$, we have \begin{eqnarray*} \ \| AX_{t_1}-AX_{t_2} \| ^2&=&E(\int _{t_2}^{t_1} \sigma (s,X_s)dB(s)+\int _{t_2}^{t_1} f(s,X_s)ds)^2\\ &\leq& \mbox{} 2E(\int _{t_2}^{t_1} \sigma (s,X_s)dB(s))^2+2E(\int _{t_2}^{t_1} f(s,X_s)ds)^2\\ &\leq& \mbox{} 2\int _{t_2}^{t_1} E|\sigma (s,X_s)|^2ds+2(b-a)\int _{t_2}^{t_1} E|f(s,X_s)|^2ds\\ &\leq& \mbox{} 2(1+b-a)|t_1-t_2|d^2\\ \end{eqnarray*} Then for each $\varepsilon >0$, there exists $$\delta =\frac{\varepsilon ^2}{2(1+b-a)d^2}$$ such that $|t_1-t_2|<\delta $ implies $$\| AX_{t_1}-AX_{t_2}\| < \varepsilon.$$ Hence A(M) is equicontinuous.\\ Then by Lemma 2.2 and Definition 2.1, we know $A:M\longrightarrow M$ is a compact operator.\\ $Step$ 3: We prove that $A(M)\subseteq M$.\\ For each $X_t\in M$, we have \begin{eqnarray*} \ \| AX_t\| ^2&=&E(h+\int _a^t \sigma (s,X_s)dB(s)+\int _a^t f(s,X_s)ds)^2\\ &\leq& \mbox{} 3E[h^2]+3E(\int _a^t \sigma (s,X_s)dB(s))^2+3E(\int _a^t f(s,X_s)ds)^2\\ &\leq& \mbox{} 3E[h^2]+3\int _a^t E|\sigma (s,X_s)|^2ds+3(b-a)\int _a^t E|f(s,X_s)|^2ds \end{eqnarray*} where $f(s,X_s),\sigma (s,X_s)\in L_{ad}^2([a,b]\times \Omega )$. So $$\int _a^b \| AX_t\| ^2dt<+\infty ,$$\\ that is $$ AX_t\in L_{ad}^2([a,b]\times \Omega ).$$ Meanwhile, we have proved $\| AX_t\| \leq r$ in (14); therefore $AX_t\in M$, that is, $A(M)\subseteq M$.
\\ Thus Schauder's fixed point theorem tells us that equation (4) has at least one solution $X_t\in M$.\\ \section{The Proof of Theorem 1.6} \ \ \ \ \ \ \vspace{0.3cm} {\bf Proof}: \ In the proof of Theorem 1.5, we have proved that M is closed.\\ We now show that A is a contractive mapping:\\ For each $X_t,Y_t\in M$, we have \begin{eqnarray*} \ \| AX_t-AY_t \| ^2&=&E(\int _a^t (\sigma (s,X_s)-\sigma (s,Y_s))dB(s) + \mbox{} \int _a^t (f(s,X_s)-f(s,Y_s))ds)^2\\ &\leq& \mbox{} 2E(\int_a^t (\sigma (s,X_s)-\sigma (s,Y_s))dB(s))^2 +2E(\int _a^t (f(s,X_s)-f(s,Y_s))ds)^2\\ &\leq& \mbox{} 2\int _a^t E(\sigma (s,X_s)-\sigma (s,Y_s))^2ds+2(b-a)\int _a^t E(f(s,X_s)-f(s,Y_s))^2ds\\ &\leq& \mbox{} 2k_2^2\int _a^t E|X_s-Y_s|^2ds+2k_1^2(b-a)\int _a^t E|X_s-Y_s|^2ds\\ &\leq& \mbox{} 2c^2(1+b-a)(b-a)\| X_t-Y_t\| ^2 \end{eqnarray*} Let $$k^2=2c^2(1+b-a)(b-a)<1,$$ then $$\| AX_t-AY_t\| \leq k\| X_t-Y_t\|, \ \ 0\leq k<1 .$$ Therefore A is k-contractive.\\ Then Banach's fixed point theorem tells us that the stochastic integral equation (4) has a unique solution $X_t\in M$. \section{Comparing Theorem 1.5 with Theorem 1.6} \ \ \ \ \ \ Comparing Theorem 1.5 with Theorem 1.6, we see that when we use Schauder's fixed point theorem to prove the existence of the solution of the stochastic integral equation, we need conditions (5) and (6). But when we use Banach's fixed point theorem to prove the existence and uniqueness of the solution of the stochastic integral equation, we need conditions (7), (8) and (9). Obviously, condition (6) is weaker than condition (9). \section{An Example for Theorem 1.5} \ \ \ \ \ \ We apply the above Theorem 1.5 to the linear stochastic integral equation: \begin{eqnarray} X_t=\int _0^t f(s)X_sds+\int _0^t g(s)X_sdB(s), \ \ 0\leq t\leq 1, \end{eqnarray} \vspace{0.3cm} {\bf Proof}: Define the operator: $$A(X_t)=\int _0^t f(s)X_sds+\int _0^t g(s)X_sdB(s), \ \ 0\leq t\leq 1.$$ Obviously, $f(s)X_s$ and $g(s)X_s$ are measurable and continuous.
We define $d=\sup E|X_s|^2$ and $c=\max \{f(s)^2,g(s)^2\}$, take $r^2=6cd$, and set $X:=L_{ad}^2([a,b]\times \Omega )$ and $M=\{ X_t\in X;\| X_t\| \leq r\}$.\\ Then all conditions of Theorem 1.5 hold, so (15) has at least one solution.\\ In particular, if we let $f(s)=u,\ g(s)=\sigma $, then the equation becomes the geometric Brownian motion equation \begin{eqnarray} X_t=\int _0^t uX_sds+\int _0^t \sigma X_sdB(s), \ \ 0\leq t\leq 1, \end{eqnarray} where $u$ is the expected return rate (a constant), $\sigma $ is the volatility (a constant), and $B(t)$ is a standard Brownian motion.\\ It follows that equation (16) has at least one solution. It is well known that the existence of the solution of equation (16) is important for the Black-Scholes model in finance. {\bf Acknowledgements.}\ \ We would like to thank our teacher Professor Zhang Shiqing for his lectures on functional analysis. We would also like to thank him for organizing the seminar on financial mathematics and for his many helpful discussions, suggestions and corrections concerning this paper. \vspace{0.3cm}
\section{Figure captions} Fig. 1 The spin wave dispersions obtained by (a) LMTO calculations and (b) the tight-binding approach (model A) along the symmetry directions $\Gamma$X, XM and MR, shown as a function of doping. Fig. 2 The doping dependence of the exchange couplings $J_1$, $J_2$, $J_4$ and $J_8$ between atoms at ($a$ 0 0), ($a$ $a$ 0), (2$a$ 0 0) and (3$a$ 0 0), where $a$ is the lattice parameter. Fig. 3 The (a) minority-spin and (c) majority-spin $d$ partial density of states within models A, B and C. The spin wave dispersions along $\Gamma$X as a function of doping within models (b) B and (d) C are shown along with (e) the combined contributions of models B and C. $y$ refers to the hole concentration in the majority-spin $e_g$ band with reference to its half-filled case. $z$ is the electron concentration in the minority-spin $t_{2g}$ band. $x$ is the net concentration of the doped holes and is given by $x=y-z$. Fig. 4 The dependence of the spin wave energies on the $e_g$ hole doping $y$ along $\Gamma$X within model D. The hopping between oxygen atoms and the $t_{2g}$ orbitals on the Mn atom have been left out of the model. (a) $pd\sigma$=-2.02~eV and (b) $pd\sigma$=-2.25~eV. Fig. 5 The variation of the exchange couplings $J_1$, $J_2$, $J_4$ and $J_8$ with the $e_g$ hole doping $y$. Open circles are for the case (model B) including the hopping between oxygen atoms and $pd\sigma$=-2.02~eV. Open and filled squares are for the cases (model D) without the hopping between oxygen atoms: $pd\sigma$=-2.02~eV (open squares) and $pd\sigma$=-2.25~eV (filled squares). The $t_{2g}$ orbitals on the Mn atom have been left out of the basis set. \end{document}
\section{Introduction} \label{sec:intro} Evolutionary game theory (EGT) is a suitable framework to model the dynamics of populations in which the success of one type depends on the actions of the others. In EGT, selection varies with the species densities and is thus ``frequency dependent''~\cite{EGT1,EGT2,EGT3,Nowak}. This means that in the realm of EGT the population composition and each species' fitness change continuously in time. The ensuing evolutionary dynamics is commonly modeled deterministically in terms of the celebrated ``replicator equations'', which are nonlinear differential equations~\cite{EGT1,EGT2,EGT3,Nowak,freqdepsel1,freqdepsel2} well-suited to describe very large populations. The size of a real population is however always finite and is more realistically described by stochastic models whose properties, such as demographic fluctuations, are known to often greatly influence the evolutionary dynamics~\cite{weaksel1,weaksel2,weaksel3}. In particular, due to randomness, individuals of one species can take over and fixate the entire population. This stochastic phenomenon, referred to as ``fixation'', is characterized by the fixation probability -- the probability that a ``mutant type'' takes over~\cite{Nowak,Kimura,Ewens} -- and the {\it mean fixation time} (MFT) which is the mean time for such an event to occur. Population dynamics is also known to depend on the individuals' spatial arrangement: while a large body of work has focused on EGT on spatially-homogeneous (well-mixed) populations~\cite{EGT1,EGT2,EGT3}, it is known that the outcome of the dynamics may be very different in spatial settings, see {\it e.g.}~\cite{RPS1,RPS2,RPS3,RPS4,RPS5,RPS6}. For instance, spatial degrees of freedom have been found to promote cooperation in the prisoner's dilemma game but to hinder coexistence in snowdrift games~\cite{Spatialcoop1,Spatialcoop2,Spatialcoop3,Xia15a,EGT1,EGT2,EGT3}. 
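The snowdrift game just mentioned is a prototypical anti-coordination game, and its deterministic (replicator) dynamics can be made concrete with a minimal numerical sketch. The payoff values below ($b=2$, $c=1$) are illustrative assumptions, not parameters used in this work; with them, the replicator equation $\dot x=x(1-x)(f_C-f_D)$ relaxes to the stable interior fixed point $x^*=(b-c)/(b-c/2)$, the deterministic counterpart of long-lived species coexistence in a finite population.

```python
# Forward-Euler integration of the replicator equation for a snowdrift-type
# anti-coordination game. Payoff entries (assumed for illustration):
#   C vs C: b - c/2,   C vs D: b - c,   D vs C: b,   D vs D: 0.
b, c = 2.0, 1.0
x = 0.1                        # initial fraction of cooperators
dt = 0.01
for _ in range(20000):         # integrate up to t = 200
    f_C = x * (b - c / 2) + (1 - x) * (b - c)   # expected payoff of a cooperator
    f_D = x * b                                  # expected payoff of a defector
    x += dt * x * (1 - x) * (f_C - f_D)          # replicator dynamics

x_star = (b - c) / (b - c / 2)   # stable interior fixed point (= 2/3 here)
```

In a finite stochastic population this interior fixed point becomes metastable: demographic fluctuations eventually drive the system to fixation of one type.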
EGT was first introduced to study ecological dynamics~\cite{EGT1} and is also particularly well suited to model the evolution of social behavior of interacting agents~\cite{EGT2,EGT3}. In this context, evolutionary games on graphs~\cite{NetRef,Nets1,Nets2,Nets3} provide a general unifying framework that is able to capture the dynamics of spatially structured populations: it models how individuals interact with their neighbors, to reproduce or die, as prescribed by the underlying game~\cite{EGTnets1,EGTnets2,EGTnets3,EGTgraphs,EGTgraphs-bis}. Of particular interest is the question of understanding how the network's topology affects the fixation properties of a given process, see {\it e.g.} Refs.~\cite{EGTgraphs,EGTgraphs-bis,Allen2016}. While it is difficult to give a general answer to this question~\cite{Ibsen15}, significant progress can be made when the selection pressure is weak, see {\it e.g.}~\cite{weaksel1,weaksel2,weaksel3}. In fact, most studies have focused on the biologically relevant and tractable limit of weak selection in which exact results have been obtained on regular graphs, see {\it e.g.}~\cite{EGTgraphs-bis,Taylor07,Chen13}. However, much less is known about the fixation properties of evolutionary games on degree-heterogeneous graphs such as scale-free networks~\cite{NetRef,Nets1,Nets2,Nets3}. Most investigations on this important class of graphs have been carried out by means of computer simulations~\cite{Pacheco1,Pacheco2,hubs,Macie14,Xia15b}, various approximation schemes~\cite{Tarnita09,Konno11,Allen2016}, and by considering special graphs~\cite{Broom11,Macie14}.
The prisoner's dilemma game has been extensively studied on scale-free networks and it has been found that the existence of nodes with high connectivity (hubs) can promote cooperation even under adverse conditions~\cite{Pacheco1,Pacheco2,EGTgraphs,hubs,Xia15b}, whereas the presence at the hubs of so-called facilitators~\cite{facil1,facil1bis}, which are agents that cooperate with cooperators and defect with defectors, tames the cooperation dilemma~\cite{facil2}. Recently, tools of statistical mechanics have been used to study simple evolutionary processes in which two types, say {\it cooperators} and {\it defectors}, interact on degree-heterogeneous graphs under constant fitness and fixed (weak) selection pressure~\cite{VotNets31,VotNets32,VotNets11,VotNets12,VotNets13,VotNets21,VotNets22,VotNets23}. While these models shed light on intriguing properties of evolution on complex graphs, such as the fact that it depends on the microscopic details of the update rules, they cannot describe evolutionary processes characterized by a {\it metastable} species coexistence prior to fixation, as in the paradigmatic {\it anti-coordination games} (ACGs)~\cite{EGT1,EGT2,EGT3,Gore09}, see {\it e.g.}~\cite{AM1,AM2,AM3,AM4,MA10,AM10,AM16}. In spite of the importance of systems like ACGs, their properties have been investigated mostly on regular lattices~\cite{Killingback96,Doebeli05,Spatialcoop2} and on small-world networks~\cite{Shang06,Qiu10}. In fact, there are very few results, mostly based on computer simulations~\cite{Pacheco1}, on how the scale-free topology affects the metastability and fixation properties of ACGs. Here, the analysis attempted in Refs.~\cite{PRL12,AM13} is critically revisited and generalized. In this work we study the joint effect of degree-heterogeneous topology and frequency-dependent selection~\cite{freqdepsel1,freqdepsel2} on the dynamics of ACGs on scale-free networks under weak selection.
These are well-known EGT models, of particular importance in biology and ecology~\cite{EGT1,EGT2,EGT3,Gore09}, characterized by a long-lived coexistence state (metastability), and in which fixation is driven by large fluctuations~\cite{MA10,AM10}. In particular, we investigate the influence of the microscopic update rule on the evolutionary dynamics. For the sake of concreteness, here the individual-based dynamics is implemented according to two common update rules, namely with the voter model (VM, death-first/birth-second process) and the link dynamics (LD, birth/death or death/birth at random)~\cite{VotNets31,VotNets32,VotNets11,VotNets12,VotNets13,VotNets21,VotNets22,VotNets23}. By combining analytical and simulation means, we consider systems of large but finite size and determine the effect of the complex topology on the evolutionary dynamics. In particular, with the VM we show that the typical fluctuations in the number of cooperators in the metastable state are anomalous (their variance grows superlinearly in the population size), and that the MFT has a stretched exponential dependence on the population size. We also show that with the LD these quantities coincide to leading order with the results on complete graphs. It is worth noting that our approach is not limited to scale-free graphs, and is valid for general degree-heterogeneous networks, see, {\it e.g.} Refs.~\cite{VotNets11,VotNets12,VotNets13,VotNets21,VotNets22,VotNets23}. This paper is organized as follows: The class of ACGs on networks that we consider is introduced in the next section. Section III is dedicated to a description of the implementation of our computer simulations, while the analytical approach in terms of a multivariate diffusion theory is presented in Sec.~IV. The various timescales that characterize the dynamics are presented in Sec.~V, where timescale separation is used to derive an effective single-variate diffusion theory.
Such an approach, corroborated by extensive simulations of the individual-based system, allows us to characterize the typical fluctuations in the metastable state by determining the variance in the number of cooperators in Sec.~VI, and to obtain the MFT in Sec.~VII. Finally, we summarize our findings and present our conclusions. Technical details about our computational methods are provided in Appendices A and B. \section{The Models: ACGs on complex networks} \label{sec:model} As in Ref.~\cite{PRL12}, we consider a scale-free network consisting of $N$ nodes on which population dynamics takes place between two types of agents (see Appendix~\ref{app:AppendixA}), here referred to as cooperators ($\textsf{C}$'s) and defectors ($\textsf{D}$'s). Each node is associated with a binary random variable: $\eta_i=1$ if the node $i$ is occupied by a $\textsf{C}$, whereas $\eta_i=0$ if it is occupied by a $\textsf{D}$. The topology of the network is defined by its adjacency matrix ${\bm A}=[A_{ij}]$~\cite{NetRef,Nets1,Nets2,Nets3}, whose elements are $1$ if nodes $i$ and $j$ are connected and $0$ otherwise, and its state (or population composition) is described by $\{\eta_i\}^N=\{\eta_1, \dots, \eta_N\}$. The underlying complex graph is characterized by a degree distribution $n_k=N_k/N$, where $N_k$ denotes the number of nodes of degree $k$, whose $m$-th moment is defined by \begin{eqnarray} \label{mu} \mu_m\equiv \sum_k k^m n_k=\sum_i k_i^m /N. \end{eqnarray} Here $k_i$ is the degree of the node $i$, $\mu_1$ is the graph's mean degree and $N\mu_1/2$ is the average number of links. There are standard methods to generate random networks whose nodes are distributed according to a prescribed degree distribution, see {\it e.g.} Refs.~\cite{Nets1,Nets2,Nets3}. Here we use the method outlined in Appendix~\ref{app:AppendixA} to generate the scale-free graphs that we will consider in this work.
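For concreteness, the degree moments of Eq.~(\ref{mu}) can be sketched in a few lines of Python. The truncated power-law sampler below is purely illustrative (its parameter values are ours) and is not the generation method of Appendix~\ref{app:AppendixA}:

```python
import random

def sample_powerlaw_degrees(n_nodes, nu=2.5, k_min=2, k_max=200, seed=1):
    """Draw n_nodes degrees from a truncated power law n_k ~ k^(-nu)."""
    rng = random.Random(seed)
    ks = list(range(k_min, k_max + 1))
    weights = [k ** (-nu) for k in ks]
    return rng.choices(ks, weights=weights, k=n_nodes)

def moment(degrees, m):
    """m-th moment of the degree distribution, mu_m = sum_i k_i^m / N."""
    return sum(k ** m for k in degrees) / len(degrees)

degrees = sample_powerlaw_degrees(10_000)
mu1, mu2 = moment(degrees, 1), moment(degrees, 2)
# degree heterogeneity shows up as mu2 > mu1^2 (equality holds on regular graphs)
```

The ratio $\mu_2/(\mu_1)^2$ computed this way quantifies the degree heterogeneity that plays a central role in the analysis below.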
To study how the population composition changes in time, we introduce the density $\rho$ of cooperators in the network, and the subgraph density $\rho_k$ of $\textsf{C}$'s on nodes of degree $k$. These quantities are defined by \begin{equation} \label{rho} \rho\equiv \sum_{i} \eta_i/N\,,\;\;\;\;\;\;\rho_k \equiv \sum_{i}' \eta_i/N_k, \end{equation} where $\sum_{i}'$ denotes a summation restricted to the nodes $i$ of {\it fixed degree $k$}. We note that $\rho=\sum_k n_k \rho_k$. It is also useful to introduce the degree-weighted density of cooperators~\cite{VotNets11,VotNets12,VotNets13}: \begin{eqnarray} \label{omega} \omega\equiv \frac{1}{N\mu_1}\sum_i k_i \eta_i=\sum_k \frac{k}{\mu_1} n_k \rho_k. \end{eqnarray} Here, we are interested in the class of anti-coordination games (ACGs) which are symmetric two-player two-strategy games. According to the tenets of EGT~\cite{EGT1,EGT2,EGT3}, connected cooperators and defectors compete (or ``play'') pairwise according to the payoff matrix \begin{eqnarray} \label{payoffM} \begin{tabular}{c|c c} vs & $\textsf{C}$ & $\textsf{D}$ \\ \hline $\textsf{C}$ & $a$ & $b$ \\ $\textsf{D}$ & $c$ & $d$ \\ \end{tabular} \end{eqnarray} This specifies that two interacting cooperators get a payoff $a$, whereas defectors playing against each other both get a payoff $d$. Moreover, when a $\textsf{C}$ plays against a $\textsf{D}$ the former's payoff is $b$ and the latter gets a payoff $c$. It is well known that various scenarios, including ``cooperation dilemma'', emerge depending on the various values of the entries of (\ref{payoffM})~\cite{EGT1,EGT2,EGT3,EGTnets1,EGTnets2,EGTnets3}. In this work, we focus on the important class of ACGs for which $c>a$ and $b>d$. The class of ACGs includes the snowdrift game, for which $c>a>b>d$, that is particularly relevant to biological applications, see {\it e.g.} Ref.~\cite{Gore09}. 
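The observables of Eqs.~(\ref{rho}) and (\ref{omega}) can be sketched in Python for a given configuration $\{\eta_i\}$ (function and variable names are ours, chosen for illustration):

```python
def densities(eta, deg):
    """Global density rho, subgraph densities rho_k and degree-weighted
    density omega (Eqs. (2)-(3)) for states eta (0/1) and degrees deg."""
    n_nodes = len(eta)
    rho = sum(eta) / n_nodes
    counts = {}                      # k -> (cooperators, nodes) of degree k
    for e, k in zip(eta, deg):
        n_c, n_tot = counts.get(k, (0, 0))
        counts[k] = (n_c + e, n_tot + 1)
    rho_k = {k: n_c / n_tot for k, (n_c, n_tot) in counts.items()}
    mu1 = sum(deg) / n_nodes
    omega = sum(k * e for e, k in zip(eta, deg)) / (n_nodes * mu1)
    return rho, rho_k, omega

# cooperators sit on the low-degree nodes, so omega < rho (hubs are defectors)
rho, rho_k, omega = densities([1, 1, 0, 0], [2, 2, 4, 4])
```

Note that the identity $\rho=\sum_k n_k\rho_k$ holds by construction, while $\omega$ weights each node by its degree.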
\subsection{Well-mixed setting} In the usual setting of EGT where the population is well-mixed, the expected payoffs (per individual) to cooperators and defectors are respectively $\Pi^C(\rho)=a\rho+b(1-\rho)$ and $\Pi^D(\rho)=c\rho+d(1-\rho)$, while the population average payoff is $\bar{\Pi}(\rho)=\rho \Pi^C(\rho) + (1-\rho)\Pi^D(\rho)$. In such a setting, one of the main features of ACGs is a coexistence state in which a fraction $\rho^*$ of $\textsf{C}$'s coexist with a density $1-\rho^*$ of $\textsf{D}$'s over a long period of time. In the game-theoretic language the strategy $(\rho^*, 1-\rho^*)$ corresponds to playing cooperation $\textsf{C}$ with a frequency $\rho^*$ and defection $\textsf{D}$ with frequency $1-\rho^*$. This {\it mixed strategy} is known to be evolutionary stable (but it is not a strict Nash equilibrium)~\cite{EGT1,EGT2,EGT3}. In order to find $\rho^*$, we write down the mean-field replicator equation (RE), which reads~\cite{EGT1,EGT2,EGT3,Nowak}: \begin{equation} \label{RE} \frac{d}{dt}\rho(t)=\rho(t)(1-\rho(t))[\Pi^{C}(\rho(t))-\Pi^{D}(\rho(t))]=(a+d-b-c)\rho(t)(1-\rho(t))(\rho(t)-\rho^*). \end{equation} This equation is characterized by a stable interior fixed point $\rho^*=(b-d)/(b+c-a-d)$ and unstable absorbing states $\rho=0$ (all-$\textsf{D}$) and $\rho=1$ (all-$\textsf{C}$). That is, in the deterministic picture, starting from any $0<\rho<1$ the system settles into the stable point $\rho^*$ and stays there forever. This picture, however, is altered when the population size is finite ($N< \infty$), due to demographic fluctuations which ultimately drive the system into one of its two absorbing states. As a result, the fixed point $\rho^*$, which is stable at the level of the RE, becomes \textit{metastable}, and the probability to be in its vicinity slowly decays while the probability to be absorbed in either of the absorbing states slowly grows.
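The relaxation to $\rho^*$ predicted by the RE~(\ref{RE}) is easy to check numerically. The sketch below uses a plain Euler integrator and illustrative ACG payoff values (this is not the integration scheme used in our simulations):

```python
def replicator_step(rho, a, b, c, d, dt):
    """One Euler step of the replicator equation (5)."""
    pi_c = a * rho + b * (1 - rho)       # expected payoff to a cooperator
    pi_d = c * rho + d * (1 - rho)       # expected payoff to a defector
    return rho + dt * rho * (1 - rho) * (pi_c - pi_d)

a, b, c, d = 1.0, 1.5, 1.75, 1.0         # an ACG: c > a and b > d
rho_star = (b - d) / (b + c - a - d)     # interior fixed point, here 0.4
rho = 0.9
for _ in range(50_000):                  # integrate up to t = 500
    rho = replicator_step(rho, a, b, c, d, dt=0.01)
# rho has relaxed to rho* from above; any 0 < rho(0) < 1 ends up there
```

Starting from any interior initial condition, the trajectory settles at $\rho^*$, in agreement with the deterministic picture described above.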
It is well known that the mean decay time of the metastable state, which approximately equals the MFT, grows exponentially with $N$ on complete graphs (well-mixed population, $A_{ij}=1,\forall ij$)~\cite{MA10,AM10}. \subsection{Spatially-structured setting} When the population is spatially-structured and occupies the vertices of a graph, the interactions are among nearest-neighbor agents. The corresponding expected payoffs are thus defined locally: If a node $j$ is occupied by a $\textsf{D}$ individual, its neighbor $i$ receives a payoff $\Pi_{i}^C=b$ if node $i$ is occupied by a $\textsf{C}$, and a payoff $\Pi_{i}^D=d$ if node $i$ is occupied by a $\textsf{D}$. The local reproductive potential, or fitness, of the agent at node $i$ whose neighbor $j$ is a $\textsf{D}$ individual is determined by the difference between its expected payoff and the population average payoff $\bar{\Pi}_{i}(t)$ as perceived by the agent at node $i$. For the latter, we make the choice to consider $\bar{\Pi}_{i}=\rho(t) \Pi_{i}^C +(1-\rho(t))\Pi_{i}^D$~\cite{PRL12}. This mean-field-like form reflects in a simple manner the fact that agents compare their payoffs with those of all others, leading to metastability via a natural and analytically amenable mechanism. As customary in EGT, we also introduce in the definition of the fitness a selection strength $s>0$, accounting for the interplay between demographic fluctuations and selection, as well as a baseline contribution, accounting for the chance contribution to the reproduction, which we set to $1$~\cite{Nowak,weaksel1,weaksel2,weaksel3,Kimura,Ewens}. In this setting, the fitnesses of a $\textsf{C}$ ($\textsf{D}$) player at node $i$ against a $\textsf{D}$ ($\textsf{C}$) player at a neighboring node $j$ are~\footnote{Naturally, other update rules, such as the Moran-like, the local-update or Fermi-like rules, may also be considered.
However, for $s\ll 1$, all these update rules approximately coincide, see, {\it e.g.}, Refs.~\cite{weaksel2,weaksel3,facil1}.} \begin{equation} \label{fitCD} f_{i}^C=1+s[\Pi_{i}^C-\bar{\Pi}_{i}]=1+s(b-d)(1-\rho)\,,\;\;\;\;f_{i}^D=1+s[\Pi_{i}^D-\bar{\Pi}_{i}]=1+s(c-a)\rho. \end{equation} We now proceed to specify the update rules by which, at an individual-based level, the population evolves. Various types of update rules are possible and, in the case of models without fitness-dependent selection, it has been found that the dynamics may depend crucially on the details of the underlying rules~\cite{VotNets11,VotNets12,VotNets13,VotNets21,VotNets22,VotNets23,VotNets31,VotNets32}. Here, we consider two important choices: (i) the so-called ``voter model'' (VM) rule~\cite{VotNets11,VotNets12,VotNets13}; and (ii) the ``link dynamics'' (LD)~\cite{VotNets31,VotNets32,PRL12,AM13}: \\ (i) In the VM dynamics, a focal agent $i$ is chosen at random with probability $1/N$, and then one of its neighbors is picked with a probability $1/k_i$. In this death-first/birth-second process, the focal agent dies and is replaced by the picked neighbor with a probability proportional to the fitness of the latter. Or, equivalently when $0<s\ll 1$, and as implemented in Refs.~\cite{VotNets11,VotNets12,VotNets13} for the biased VM and here in our simulations (see below), the focal agent dies and is replaced by an offspring of the picked neighbor with a probability proportional to the inverse of the focal agent's fitness, see Sec.~\ref{sec:Impl}. In practice, this means that with the VM at each time increment the population composition changes only when a neighboring \textsf{C}\textsf{D} or \textsf{D}\textsf{C} pair interacts.
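A useful property of the fitnesses (\ref{fitCD}) is that they coincide exactly at the coexistence density $\rho=\rho^*$, which is why selection is marginal there; below a short Python sketch with illustrative payoff values:

```python
def fitness(rho, s, a, b, c, d):
    """Frequency-dependent fitnesses f^C and f^D of Eq. (6)."""
    f_c = 1 + s * (b - d) * (1 - rho)
    f_d = 1 + s * (c - a) * rho
    return f_c, f_d

a, b, c, d, s = 1.0, 1.5, 1.75, 1.0, 0.01
rho_star = (b - d) / (b + c - a - d)
f_c, f_d = fitness(rho_star, s, a, b, c, d)
# f_c == f_d at rho*; for rho < rho* cooperators are fitter, for rho > rho*
# defectors are fitter, so selection pushes the density back toward rho*
```

This restoring tendency of the frequency-dependent fitnesses underlies the metastable coexistence discussed throughout the paper.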
The reactions $\textsf{C}\textsf{D} \to \textsf{D}\textsf{D}$ (death and replacement of a \textsf{C} at $i$ by a \textsf{D}) and $\textsf{D}\textsf{C} \to \textsf{C}\textsf{C}$ (death and replacement of a \textsf{D} at $i$ by a \textsf{C}) thus occur with rates given by the inverse of the individual \textsf{C}'s and \textsf{D}'s fitness respectively, i.e. with rates \begin{equation} \label{rates_VM} 1/f^{C}=1-s(b-d)(1-\rho)+{\cal O}(s^2)\,,\;\;\;\;\;1/f^D=1-s(c-a)\rho+{\cal O}(s^2). \end{equation} \\ (ii) In the LD, a link is randomly selected at each time step and if it connects a \textsf{C}\textsf{D} pair, one of the neighbors is randomly selected for reproduction with a rate proportional to its fitness, while the other is replaced by the newly produced offspring. In practice, this means that with the LD at each time increment the population composition changes only when a \textsf{C}\textsf{D} or \textsf{D}\textsf{C} pair at neighboring nodes interacts. Thus, the reactions $\textsf{D}\textsf{C} \to \textsf{C}\textsf{C}$ and $\textsf{C}\textsf{D} \to \textsf{D}\textsf{D}$ occur with rates given by the fitness $f^{C/D}$ of the agent \textsf{C} and \textsf{D} respectively, i.e. with rates \begin{equation} \label{rate_LD} f^{C}=1+s(b-d)(1-\rho)\,,\;\;\;\;\;f^D=1+s(c-a)\rho. \end{equation} In what follows, we consider the evolution of the ACGs under both VM and LD update rules and we show that markedly different behavior emerges. \section{Computer simulations of the evolutionary dynamics} \label{sec:Impl} Before theoretically analyzing the evolution of ACGs with the VM and LD, in this section we describe how the evolutionary dynamics with the VM and LD have been implemented in our computational individual-based simulations. We begin by outlining the simulation of the evolution with the LD which has been performed using the Gillespie algorithm~\cite{Gillespie}.
Our starting point is a scale-free network, see Appendix~\ref{app:AppendixA}, where each node is in one of two states - \textsf{C} (cooperator) or \textsf{D} (defector). The following steps are repeated until fixation of either \textsf{C} or \textsf{D} occurs (or until a prescribed maximal number of time steps has been performed): \begin{enumerate} \item Compute the density $\rho$ of \textsf{C} and the fitnesses of \textsf{C} and \textsf{D} according to Eqs.~(\ref{fitCD}). \item Draw random numbers $R_1$ and $R_2$ uniformly distributed between $0$ and $1$. \item Pick a random node $i$ and a random neighbor $j$ of $i$. If the states of $i$ and $j$ are different, then both nodes become \textsf{C} if $R_1<\frac{f^{C}}{f^{C}+f^{D}}$; otherwise, they both become \textsf{D}. \item Increment time by $\Delta t=-\frac{1}{N}\frac{\ln R_2}{f^{C}+f^{D}}$. \end{enumerate} The implementation of the VM case goes along the same lines as that of the LD model, except for the update of the randomly chosen node $i$ and its neighbor $j$ in step $3$. For the VM, if node $i$ is a \textsf{C} and node $j$ is a \textsf{D}, then $i$ becomes a \textsf{D} with probability $1/f^C$. Otherwise, if node $i$ is a \textsf{D} and node $j$ is a \textsf{C}, then $i$ becomes a \textsf{C} with probability $1/f^D$. \section{Diffusion theory} We are particularly interested in the biologically-relevant regime of weak selection strength. In such a limit where $0<s\ll 1$~\cite{weaksel1,weaksel2,weaksel3,Kimura,Ewens}, the evolutionary dynamics is generally well described in terms of the so-called diffusion theory using appropriate forward and backward Fokker-Planck equations (FPEs)~\cite{Gardiner}. Below, the multivariate FPEs for the subgraph densities are derived.
We introduce the following quantities for the evolution with the VM \begin{eqnarray} \label{psiVM} \Psi_{ij}^{{\rm VM}}=(1- \eta_i) \eta_j /f^D \quad \text{and}\quad \Psi_{ji}^{{\rm VM}}=(1-\eta_j)\eta_i /f^C, \end{eqnarray} while for the LD we use the quantities \begin{eqnarray} \label{psiLD} \Psi_{ij}^{{\rm LD}}=(1- \eta_i) \eta_j f^C \quad \text{and}\quad \Psi_{ji}^{{\rm LD}}=(1-\eta_j)\eta_i f^D, \end{eqnarray} where $(1- \eta_i)\eta_j$ and $(1- \eta_j)\eta_i$ are non-zero only when the nodes $ij$ are occupied by a pair $\textsf{D}\textsf{C}$ and $\textsf{C}\textsf{D}$, respectively. In an infinitesimal time increment $\delta t=1/N$ the subgraph density $\rho_k$ changes by $\pm \delta \rho_k=\pm 1/N_{k}$ according to a birth-death process~\cite{Gardiner} defined respectively by the transition rates \begin{eqnarray} \label{transition} T^{+}(\rho_k)=\sum_{i}'\sum_j \frac{A_{ij}}{N{\cal Q}} \Psi_{ij}^{{\rm VM}/{\rm LD}} \quad \text{and} \quad T^{-}(\rho_k)=\sum_{i}'\sum_j \frac{A_{ij}}{N{\cal Q}} \Psi_{ji}^{{\rm VM}/{\rm LD}}, \end{eqnarray} where, from the definition of the VM and LD, we have~\cite{VotNets11,VotNets12,VotNets13} \begin{eqnarray} \label{Q} {\cal Q}= \left\{ \begin{array}{ll} k_i & \mbox{for the VM},\\ \mu_1 & \mbox{for the LD}.\end{array} \right. \end{eqnarray} Given an agent at node $i$, the probability to pick one of its neighbors $j$ for an update is $A_{ij}/(N{\cal Q})$, and the transition $\eta_i \to 1-\eta_i$ occurs with probability $ \sum_j \frac{A_{ij}}{N{\cal Q}} \left[\Psi_{ij}^{{\rm VM}/{\rm LD}} +\Psi_{ji}^{{\rm VM}/{\rm LD}}\right]$ \cite{VotNets11,VotNets12,VotNets13,PRL12}. To make analytical progress, we assume that the degrees of the nodes of the underlying networks are uncorrelated, i.e. we consider degree-uncorrelated heterogeneous graphs. 
As explained in Appendix~\ref{app:AppendixA}, such an assumption, which is exact for Molloy-Reed networks~\cite{Molloy}, is here valid because degree correlations are essentially negligible for the scale-free graphs that we consider. Within this so-called ``heterogeneous mean-field'' or ``degree-based mean-field'' approximation, see {\it e.g.} Refs.~\cite{epidemicsRMP,Vespignani12} and references therein, we therefore write $A_{ij}=k_i k_j/(N{\cal Q})$. We now substitute (\ref{rates_VM}), (\ref{rate_LD}), (\ref{psiVM}), (\ref{psiLD}) and (\ref{Q}) into (\ref{transition}), use the above heterogeneous mean-field approximation and the identity $\sum_{i}' \eta_i =N n_k\rho_k$. In the limit of $s\ll 1$, the transition rates, $T^{+} (\rho_k) \equiv T^+_k$ and $T^{-} (\rho_k) \equiv T^-_k$, thus become \begin{eqnarray} \label{T_VM} T^+_k=n_k \omega (1-\rho_k)\left[1-s\rho (c-a) \right]\,,\;\;\;\;T^-_k=n_k (1-\omega)\rho_k \left[1-s(1-\rho)(b-d)\right], \end{eqnarray} for the VM, while for the LD the transition rates are \begin{equation} \label{T_LD} T^+_k= n_k \frac{k}{\mu_1} \omega (1-\rho_k)\left[1+s(1-\rho)(b-d)\right]\,,\;\;\;\;T^-_k= n_k \frac{k}{\mu_1} (1-\omega)\rho_k \left[1+s\rho (c-a)\right]. \end{equation} To remind the reader, $\omega$ is the degree-weighted density of cooperators, and is given by Eq.~(\ref{omega}).
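As a consistency check on the rates (\ref{T_VM}) and (\ref{T_LD}): at the homogeneous state $\rho_k=\omega=\rho=\rho^*$ the drift $T^+_k-T^-_k$ vanishes for both update rules. A minimal Python sketch (payoff values are illustrative):

```python
def rates_vm(n_k, rho_k, omega, rho, s, a, b, c, d):
    """VM transition rates T_k^+/- of Eq. (12), valid to O(s)."""
    t_plus = n_k * omega * (1 - rho_k) * (1 - s * rho * (c - a))
    t_minus = n_k * (1 - omega) * rho_k * (1 - s * (1 - rho) * (b - d))
    return t_plus, t_minus

def rates_ld(k, mu1, n_k, rho_k, omega, rho, s, a, b, c, d):
    """LD transition rates T_k^+/- of Eq. (13)."""
    pref = n_k * k / mu1
    t_plus = pref * omega * (1 - rho_k) * (1 + s * (1 - rho) * (b - d))
    t_minus = pref * (1 - omega) * rho_k * (1 + s * rho * (c - a))
    return t_plus, t_minus

a, b, c, d, s = 1.0, 1.5, 1.75, 1.0, 0.01
rho_star = (b - d) / (b + c - a - d)     # = 0.4 for these payoffs
tp, tm = rates_vm(0.3, rho_star, rho_star, rho_star, s, a, b, c, d)
# tp == tm: no net drift at the metastable density
```

The same cancellation occurs for the LD rates, since $(c-a)\rho^*=(b-d)(1-\rho^*)$ by definition of $\rho^*$.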
In the limit of weak selection intensity ($0<s\ll 1$), the birth-and-death process defined by transition rates (\ref{T_VM}) or (\ref{T_LD})~\cite{MA10,AM10} is well described in terms of the multivariate forward and backward FPEs whose generators are respectively~\cite{Gardiner,Kimura,Ewens} \begin{eqnarray} \label{Gf} {\cal G}_{\rm f}(\{\rho_k\})&=&\sum_k \left[-\frac{\partial}{\partial \rho_k}\frac{(T^+_k\!-\!T^-_k)}{n_k} + \frac{\partial^2}{\partial \rho_k^2} \frac{(T^+_k\!+\!T^-_k)}{2Nn_k^2} \right], \\ \label{Gb} {\cal G}_{\rm b}(\{\rho_k\})&=&\sum_k \left[\frac{(T^+_k\!-\!T^-_k)}{n_k} \frac{\partial}{\partial \rho_k} + \frac{(T^+_k\!+\!T^-_k)}{2Nn_k^2} \frac{\partial^2}{\partial \rho_k^2} \right]. \end{eqnarray} \section{Timescale separation and effective diffusion approximation} Solving the multivariate FPEs associated with the generators (\ref{Gf}) and (\ref{Gb}) is a formidable task. Fortunately, the analysis greatly simplifies in the weak selection limit ($0<s\ll 1$) on which we focus our attention. This simplification stems from a separation of timescales which allows us to reduce the dynamics to that of an effective single-variate process~\cite{VotNets11,VotNets12,VotNets13,VotNets21,VotNets22,VotNets23,PRL12}. Interestingly, timescale separation has also been used to simplify the analysis of the spread of epidemics on degree-heterogeneous networks, see {\it e.g.}~\cite{Parra16}. Here, we first examine the case of the VM update, and then critically revisit the case of the LD discussed in Refs.~\cite{PRL12,AM13}. \begin{figure} \includegraphics[width=0.9\linewidth]{fig1.pdf} \caption{(Color online). Timescale separation with the VM update rule for weak (upper panel) and strong (lower panel) selection.
Here, we illustrate the dynamics of different density quantities in the system: the total density of the cooperators, $\rho$, the subgraph densities of the cooperators of degree $2$ and $7$, $\rho_2$ and $\rho_7$, and the degree-weighted density of cooperators, $\omega$, calculated by summing up to degree $k = 200$. In both panels we see a system with $N = 100,000$ nodes and a power-law degree distribution with exponent $\nu = 2.5$. The dynamics in both panels is defined by the payoff matrix (\ref{payoffM}) with entries $a = 1, b = 4, c = 1.75, d = 1$, with weak selection, $s = 0.01$, in the upper panel and strong selection, $s = 1$, in the lower panel. The initial state of the system is such that only nodes with degree $k \leq 3$ are cooperators, the others being defectors (at $t = 0$: $\rho_2 = 1$ and $\rho_7= 0$). In both panels the total density $\rho$ and the subgraph densities $\rho_k$ converge to $\omega$ on a time scale of ${\cal O}(1)$, and then $\omega$ converges to the stable interior fixed point $\rho^* = 0.8$ (dashed line) on a timescale of order ${\cal O}(1/s)$. Fixation occurs on a much larger timescale $t\gg s^{-1}$ (not shown here), see text. } \label{Fig1} \end{figure} \subsection{Timescale separation \& effective diffusion theory for the VM update rule} For the dynamics with the VM update, the quantity $\omega$ is conserved on a timescale $t\ll s^{-1}$~\cite{VotNets11,VotNets12,VotNets13}. This result, illustrated in Fig.~\ref{Fig1}, can be understood by computing the rate of change of $\bar{\omega}$: a flip $\eta_i\to 1-\eta_i$ changes $\omega$ by $k_i (1-2\eta_i)/(N\mu_1)$, so that~\cite{VotNets11,VotNets12,VotNets13} $\dot{\bar{\omega}} =\sum_{ij} A_{ij}[\bar{\Psi}_{ij}-\bar{\Psi}_{ji}]/(N\mu_1)$, where the bar denotes the ensemble average. By using the mean-field approximation $A_{ij}=k_i k_j/(N{\cal Q})$, we find that $\dot{\bar{\omega}}=(a + d-b-c)s\bar{\omega}(1-\bar{\omega})(\bar{\rho}-\rho^*)$.
This means that on a timescale $t\ll s^{-1}$ the quantity $\omega$ is approximately constant. Furthermore, on such a timescale, the mean-field rate equation for the subgraph density $\bar{\rho}_k$ satisfies $\dot{\bar{\rho}}_k=(T_k^+ -T_k^-)/n_k=\bar{\omega}- \bar{\rho}_k + {\cal O}(s)$. Hence, after a timescale $t={\cal O}(1)$, all $\bar{\rho}_k$'s converge to $\omega$ and we have $\bar{\rho}_k \approx \bar{\omega}\approx$ constant. Thus, the subgraph densities become independent of $k$. Clearly, this implies that $\rho=\sum_k \rho_k n_k\approx \rho_k \approx \omega$. From the above, we therefore infer that when $t\gtrsim {\cal O}(1)$ we simply have $\rho_k\approx \rho \approx \omega$ and these quantities evolve together towards their common value $\rho^*$. As confirmed in Fig.~\ref{Fig1}, at timescales $t\gtrsim {\cal O}(1)$ all the densities $\rho$ and $\rho_k$ converge to $\omega$, and reach the vicinity of $\rho^*$ after a timescale of ${\cal O}(s^{-1})$. This scenario lasts until a chance fluctuation eventually causes the fixation of either $\textsf{C}$ or $\textsf{D}$ on a much longer time scale ($t\gg s^{-1}$). In summary, as illustrated by Fig.~\ref{Fig1}, with the VM update rule we distinguish three timescales: (i) at $t\gtrsim {\cal O}(1)$, $\omega$ is approximately constant and $\rho$ and $\rho_k$ converge to it; (ii) at $t\gtrsim {\cal O}(s^{-1})$, $\rho_k \approx \rho \approx \omega$ converge to $\rho^*$, and (iii) at $t\gg {\cal O}(s^{-1})$ fixation occurs. The comparison of the top and bottom panels of Fig.~\ref{Fig1} illustrates that the separation of the three timescales breaks down when $s$ becomes ${\cal O}(1)$. Overall, the effective diffusion approximation presented below is valid only in the limit of weak selection, i.e. for $0<s\ll 1$.
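The two-stage relaxation can be reproduced directly from the deterministic mean-field rate equations $\dot{\bar{\rho}}_k=(T_k^+-T_k^-)/n_k$ with the VM rates (\ref{T_VM}). Below a Python sketch for a toy two-degree network; all parameter values are illustrative and the integrator is a plain Euler scheme:

```python
def vm_mean_field(n_k, rho_k0, s, a, b, c, d, t_end, dt=0.05):
    """Integrate d(rho_k)/dt = (T_k^+ - T_k^-)/n_k with the VM rates (12)."""
    ks = sorted(n_k)
    mu1 = sum(k * n_k[k] for k in ks)
    rho_k = dict(rho_k0)
    for _ in range(int(t_end / dt)):
        omega = sum(k * n_k[k] * rho_k[k] for k in ks) / mu1
        rho = sum(n_k[k] * rho_k[k] for k in ks)
        for k in ks:
            tp = omega * (1 - rho_k[k]) * (1 - s * rho * (c - a))
            tm = (1 - omega) * rho_k[k] * (1 - s * (1 - rho) * (b - d))
            rho_k[k] += dt * (tp - tm)
    return rho_k

n_k = {2: 0.5, 7: 0.5}                   # toy degree distribution
start = {2: 1.0, 7: 0.0}                 # cooperators on low-degree nodes only
fast = vm_mean_field(n_k, start, 0.01, 1.0, 1.5, 1.75, 1.0, t_end=10)
slow = vm_mean_field(n_k, start, 0.01, 1.0, 1.5, 1.75, 1.0, t_end=3000)
# after t = O(1): rho_2 and rho_7 have merged (both close to omega), but are
# still far from rho* = 0.4; after t >> 1/s: both have relaxed to rho*
```

This reproduces, at the deterministic level, the two shorter timescales (i) and (ii) seen in Fig.~\ref{Fig1}; fixation, stage (iii), is of course absent from the noiseless equations.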
Since we are interested in metastability and fixation, which occur when $\rho_k \approx \rho \approx \omega$, we can legitimately approximate $\rho_k$ and $\rho$ by $\omega$~\cite{VotNets11,VotNets12,VotNets13}. We therefore substitute $\rho_k$ and $\rho$ by $\omega$ in the transition rates (\ref{T_VM}) and replace $\partial_{\rho_k}$ by $(k n_k/\mu_1)\partial_{\omega}$ in the generators (\ref{Gf}) and (\ref{Gb}). Under selection of weak intensity, $0<s\ll 1$, this yields the single-variate effective forward and backward generators: \begin{eqnarray} \label{Gf_eff} \widetilde{{\cal G}}_{\rm f}(\omega)&=& -\tilde{s}\frac{\partial}{\partial \omega} [\omega (1-\omega)(\rho^*-\omega)] +\frac{\mu_2}{N(\mu_1)^2} \frac{\partial^2}{\partial \omega^2}[\omega (1-\omega)] \\ \label{Gb_eff} \widetilde{{\cal G}}_{\rm b}(\omega)&=&\omega (1-\omega)\left[\tilde{s}(\rho^*- \omega)\frac{\partial}{\partial \omega} +\frac{\mu_2}{N(\mu_1)^2} \frac{\partial^2}{\partial \omega^2} \right], \end{eqnarray} where $\tilde{s}=(b+c-a-d)s$. Equations~(\ref{Gf_eff}) and (\ref{Gb_eff}) are among the main results of this paper. It has to be noted that in the diffusion term we have neglected subleading contributions of order ${\cal O}(s)$ since we are working in the weak selection limit. The effective diffusion theory therefore predicts that the main influence of the scale-free topology under the VM update rule is to renormalize the population size $N$ into $N_{{\rm eff}} =N (\mu_1)^2/\mu_2$~\cite{VotNets11,VotNets12,VotNets13,VotNets21,VotNets22,VotNets23}. Below we show that numerical simulations fully support this prediction, and we discuss how the effective population size $N_{{\rm eff}} $ affects the metastability and fixation properties of the ACGs.
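The renormalization $N\to N_{\rm eff}=N(\mu_1)^2/\mu_2$ is straightforward to evaluate for any degree sequence; a short Python sketch (the power-law sampler and its cutoff are illustrative):

```python
import random

def n_eff(degrees):
    """Effective population size N_eff = N * mu1^2 / mu2 under the VM."""
    n_nodes = len(degrees)
    mu1 = sum(degrees) / n_nodes
    mu2 = sum(k * k for k in degrees) / n_nodes
    return n_nodes * mu1 ** 2 / mu2

rng = random.Random(0)
ks = list(range(2, 465))
weights = [k ** (-2.5) for k in ks]
degrees = rng.choices(ks, weights=weights, k=10_000)
# Jensen's inequality gives mu2 >= mu1^2, hence N_eff <= N,
# with equality only on regular graphs (all degrees equal)
```

On strongly heterogeneous (scale-free) graphs $N_{\rm eff}$ is therefore substantially smaller than $N$.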
A relevant result for our analysis is the fact that for scale-free networks the degree distribution $n_k \sim k^{-\nu}$ with $\nu>2$, such that $\mu_1$ is finite, holds up to the maximum degree estimated to scale as $k_{{\rm max}}\sim N^{1/(\nu-1)}$\footnote{In principle, the algorithm that we have used generates scale-free graphs and allows for the existence of nodes of degree $k>k_{{\rm max}}$. Yet, as their number is negligible, these nodes play no statistical role, see Appendix A. It is also worth noting that we focus on scale-free graphs with exponent $\nu>2$ since these have a finite mean degree $\mu_1$.} (size of the largest hub)~\cite{KS02}. Therefore, the effective population size scales as~\cite{VotNets11,VotNets12,VotNets13} \begin{eqnarray} \label{Neff} N_{{\rm eff}} =N \frac{(\mu_1)^2}{\mu_2}\sim \left\{ \begin{array}{ll} N, & \nu>3,\\ N/\ln{N}, & \nu=3,\\ N^{\alpha}, & 2<\nu<3, \end{array} \right. \end{eqnarray} where we have introduced the exponent \begin{eqnarray} \label{alpha} \alpha=\frac{2(\nu-2)}{\nu-1}. \end{eqnarray} \begin{figure} \includegraphics[width=0.9\linewidth]{fig2.pdf} \caption{(Color online). Timescale separation with the LD update rule for weak (upper panel) and strong (lower panel) selection. Here, we illustrate the dynamics of different density quantities in the system: the total density of the cooperators, $\rho$, the subgraph densities of the cooperators of degree $2$ and $7$, $\rho_2$ and $\rho_7$, and the degree-weighted density of cooperators, $\omega$, calculated by summing up to degree $k=100$. In both panels we see a system with $N=100,000$ nodes and a power-law degree distribution with exponent $\nu=2.9$. The dynamics in both panels is defined by the payoff matrix (\ref{payoffM}) with entries $a=1,\,b=1.5,\,c=1.75,\,d=1$, and selection strength $s=0.01$ (upper panel) and $s=1$ (lower panel).
The initial state of the system was such that nodes of an even degree were cooperators, and nodes with an odd degree were defectors (at $t=0$: $\rho_{2}=1$ and $\rho_{7}=0$). We can see in both cases that $\omega$ and the subgraph densities $\rho_k$ converge to the total density $\rho$ on a time scale of ${\cal O}(1)$, and then $\rho$ converges to the stable interior fixed point $\rho^{*}=0.4$ (dashed line) on a time scale of ${\cal O}(1/s)$. Fixation occurs, on a much larger timescale $t\gg s^{-1}$ (not shown here), see text.} \label{Fig2} \end{figure} \subsection{Timescale separation \& effective diffusion theory for the LD update rule} A similar reasoning holds also for the LD update rule. Yet, here it is crucial to realize that the reference observable is the density of cooperators $\rho$. Indeed, under the LD $\rho$ is conserved on the time scale $t\ll s^{-1}$~\cite{VotNets11,VotNets12,VotNets13,VotNets31,VotNets32}, as one can check by noting that a flip $\eta_i\to 1-\eta_i$ changes $\rho$ by $(1-2\eta_i)/N$ which, proceeding as above, yields $\dot{\bar{\rho}} =\sum_{ij} A_{ij}[\bar{\Psi}_{ij}-\bar{\Psi}_{ji}]/(N\mu_1) =(a + d-b-c)s\bar{\rho}(1-\bar{\rho})(\bar{\rho}-\rho^*)$. This indicates that $\rho$ remains constant when $t\ll s^{-1}$ and then relaxes to its metastable value $\rho^*$ on a timescale $t ={\cal O}(s^{-1})\gg 1$; this is corroborated by Fig.~\ref{Fig2}. Furthermore, at the mean-field level one obtains $\dot{ \bar{\rho}}_k=(T^+_k(\bar{\rho}_k)-T^-_k(\bar{\rho}_k))/n_k \sim (\bar{\omega}- \bar{\rho}_k)k/\mu_1$, valid at times $t\gtrsim {\cal O}(1)$. Thus, as illustrated in Fig.~\ref{Fig2}, this means that on a time scale of $t\gtrsim{\cal O}(1)$, $\rho_k$ and $\omega$ approach $\rho$. Then, all these quantities evolve together towards their common metastable value $\rho^*$, which is reached at a timescale of $t ={\cal O}(s^{-1})\gg 1$, and fluctuate around it before fixation occurs at a much later stage, see below.
To study metastability and fixation in ACGs, we can therefore restrict our attention to the regime where $\rho_k \approx \omega \approx \rho$, and approximate $\rho_k$ and $\omega$ by $\rho$~\cite{VotNets11,VotNets12,VotNets13}. A one-body description of the dynamics is thus obtained by substituting $\rho_k$ and $\omega$ by $\rho$ in the transition rates (\ref{T_LD}) and by replacing $\partial_{\rho_k}$ by $n_k\partial_{\rho}$ in the generators (\ref{Gf}) and (\ref{Gb}). This yields the single-variate effective forward and backward generators, respectively: \begin{equation} \label{Gf_eff_LD} \hspace{-5mm} \widetilde{{\cal G}}_{\rm f}(\rho)= \frac{\partial}{\partial \rho} \left[\tilde{s}\rho (1\!-\!\rho)(\rho\!-\!\rho^*) +\frac{1}{N} \frac{\partial}{\partial \rho}\rho (1\!-\!\rho)\right], \;\; \widetilde{{\cal G}}_{\rm b}(\rho)=\rho (1\!-\!\rho)\left[\tilde{s}(\rho^*\!-\! \rho)\frac{\partial}{\partial \rho} +\frac{1}{N} \frac{\partial^2}{\partial \rho^2} \right]. \end{equation} It has to be noted that these generators are the same as those obtained in a well-mixed population (on a complete graph) of size $N$; here, as opposed to Eqs.~(\ref{Gf_eff}) and (\ref{Gb_eff}), the population size is not renormalized by the graph's topology. This is similar to what was found for models without selection and with frequency-\textit{independent} selection evolving with the LD~\cite{VotNets11,VotNets12,VotNets13,VotNets21,VotNets22,VotNets23,VotNets31,VotNets32}. As discussed in more detail below, here we show that the metastability and fixation properties of ACGs evolving with the LD are the same in the leading order as those on a complete graph of size $N$. Therefore, the fixation probability and MFT of the ACGs with the LD are not affected by the scale-free topology, contrary to what was reported in Refs.~\cite{PRL12,AM13}. Yet, a completely different scenario emerges with the VM update rule, where the scale-free topology strongly affects the long-time behavior.
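The effective LD generators (\ref{Gf_eff_LD}) correspond to the It\^o equation $d\rho=\tilde s\rho(1-\rho)(\rho^*-\rho)\,dt+\sqrt{2\rho(1-\rho)/N}\,dW$, which is easy to integrate; below a minimal Euler-Maruyama sketch (all parameter values are illustrative):

```python
import math
import random

def simulate_ld_diffusion(n_pop, s_tilde, rho_star, rho0, t_end,
                          dt=0.01, seed=3):
    """Euler-Maruyama integration of the effective LD diffusion (Eq. (16));
    note that the noise amplitude involves N itself, not a renormalized
    effective population size."""
    rng = random.Random(seed)
    rho = rho0
    for _ in range(int(t_end / dt)):
        drift = s_tilde * rho * (1 - rho) * (rho_star - rho)
        noise = math.sqrt(max(2 * rho * (1 - rho) / n_pop, 0.0) * dt)
        rho += drift * dt + noise * rng.gauss(0.0, 1.0)
        rho = min(max(rho, 0.0), 1.0)   # rho = 0 and rho = 1 are absorbing
        if rho in (0.0, 1.0):
            break
    return rho

rho_end = simulate_ld_diffusion(n_pop=2000, s_tilde=0.3, rho_star=0.4,
                                rho0=0.8, t_end=200.0)
# the trajectory relaxes to, and then fluctuates around, rho* = 0.4
```

Since the noise is ${\cal O}(N^{-1/2})$, for the parameters above the trajectory stays close to $\rho^*$ over this time window, and fixation only occurs on much longer (exponentially large) timescales.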
\section{Fluctuations in the metastable state} We can use the effective diffusion approximation to study the typical fluctuations in the long-lived metastable state at times $t\gtrsim s^{-1}\gg 1$. In this regime, the one-body forward FPE generators (\ref{Gf_eff}) and (\ref{Gf_eff_LD}) allow us to describe the dynamics in the metastable state in terms of an Ornstein-Uhlenbeck process (OUP), obtained by linearizing the drift term about the metastable state $\rho^*$ and by evaluating the diffusion term at $\rho^*$. \begin{figure} \includegraphics[width=0.90\linewidth]{fig3.pdf} \caption{(Color online). Mean number of cooperators, $\left\langle N\rho\right\rangle $, with the VM update rule as a function of the system size $N$: Results are shown for selection $s = 0.075$ (upper panel) and $s = 0.15$ (lower panel), and for different values of the exponent $\nu$ of the degree distribution $n_k$, see legend and Appendix \ref{app:AppendixB}. In the initial state of the system each node was a cooperator with a $50\%$ probability. The dynamics is defined by the payoff matrix (\ref{payoffM}) with entries $a=1,\,b=1.5,\,c=1.75,\,d=1$. Here, $\rho^*=0.4$ and the averaging was done from $t=500$ until $t=25000$ (where we have omitted those simulations where fixation occurred within such a time window). The errors are smaller than the size of the plotted data points.} \label{FigMean} \end{figure} \begin{figure} \includegraphics[width=1.0\linewidth]{fig4.pdf} \caption{(Color online). Variance of the number of cooperators, ${\rm var}\left(N\rho\right)$, with the VM update rule as a function of the system size $N$: Results are shown for $s=0.075$ and $s=0.15$, see legend, and different values of $\nu$ of the degree distribution $n_k$, see panel titles and Appendix \ref{app:AppendixB}. The initial conditions and payoff matrix parameters (\ref{payoffM}) are the same as in Fig.~\ref{FigMean}.
The results are compared with the theoretical prediction $\log\left({\rm var}\left(N\rho\right)\right)\sim 2/\left(\nu-1\right)\log N$. As seen in the plots, there is good agreement between the measured data and the theoretical prediction. For each subplot with a different value of $\nu$, the legend reports the theoretical prediction $2/(\nu-1)$ for the slope, and the measured slope (averaged over those obtained for different values of $s$). Here, to obtain the collapse of the lines with different $s$, the data with $s=0.15$ are shifted by $\log 2$, since ${\rm var}\left(N\rho\right)\sim s^{-1}$. In addition, averaging was done from $t=500$ until $t=25000$ (where we have omitted those simulations where fixation occurred within such a time window).} \label{FigVar} \end{figure} \subsection{Fluctuations in the metastable state with the VM update} For ACGs evolving with the VM, in the realm of the effective diffusion approximation, the (forward) generator of the OUP for $\xi=\omega-\rho^*\approx \rho-\rho^*$ is therefore \begin{eqnarray} \label{Gf_VM_OU} \widetilde{{\cal G}}_{\rm f}(\xi)= \rho^* (1-\rho^*)\left[-\tilde{s} \frac{\partial}{\partial \xi} \xi + \frac{1}{N_{\rm eff}} \frac{\partial^2}{\partial \xi^2}\right]. \end{eqnarray} Here, we readily recognize the generator of an OUP with drift and diffusion parameters $\tilde{s}\rho^* (1-\rho^*)$ and $2\rho^* (1-\rho^*)/N_{\rm eff}$, respectively. Using the well-known properties of the OUP~\cite{Gardiner}, we readily verify that on average the deviations from the metastable state vanish: $\langle \xi(t)\rangle= \xi(0)e^{-\tilde{s}\rho^* (1-\rho^*)t} \to 0$, where the angular brackets denote the ensemble average over the probability density $p(\xi,t)$ of the forward FPE $(\partial_t -\widetilde{{\cal G}}_{\rm f}(\xi))p(\xi,t)=0$ [with zero-flux boundary conditions]. This means that, on average, the deviations from $\rho^*$ vanish very quickly in the metastable state.
Since $\langle \xi(t)\rangle=0$, the mean number of cooperators in the metastable state is $\langle N\rho \rangle=N\rho^*$ and therefore grows linearly with the number of nodes $N$. This result is confirmed in Fig.~\ref{FigMean} (obtained from the analysis of stochastic simulations data, see Appendix \ref{app:AppendixB}), showing that the mean number of cooperators in the metastable state increases linearly in $N$ with a slope $\rho^*$. It is quite interesting to consider the variance of the variable $\xi$. Since initially ${\rm var}(\xi(0))=\langle \xi^2(0)\rangle=0$, we find at times $t\gtrsim s^{-1}\gg 1$ \begin{eqnarray} \label{var_VM} \langle \xi^2(t)\rangle= \frac{1}{\tilde{s}N_{\rm eff}}\left[1-e^{-2\tilde{s}\rho^* (1-\rho^*)t}\right]\to \langle \xi^2\rangle=\frac{1}{\tilde{s}N_{\rm eff}} \end{eqnarray} where $N_{\rm eff}$ is given by Eq.~(\ref{Neff}). Using~(\ref{var_VM}), one can find the variance of the number of cooperators in the metastable state: ${\rm var}(N\rho)=N^2(\langle \rho^2\rangle -\langle \rho\rangle^2)\approx N^2(\langle \omega^2\rangle -(\rho^*)^2)\approx N^2\langle \xi^2\rangle\sim N^2/(\tilde{s}N_{\rm eff})$. Hence, with the definition of $\alpha$ given by (\ref{alpha}), we find \begin{eqnarray} \label{var2} {\rm var}(N\rho)= N^2(\langle \rho^2\rangle - \langle \rho \rangle^2) \sim \left\{ \begin{array}{ll} N, & \nu>3,\\ N\ln{N}, & \nu=3,\\ N^{2-\alpha}=N^{2/(\nu-1)}, & 2<\nu<3. \end{array} \right. \end{eqnarray} In particular, we notice that for scale-free graphs with $2<\nu<3$, which are characterized by nodes of high degree~\cite{NetRef}, the variance increases faster than linearly in the system size since $2/(\nu-1)>1$. In other words, the effective diffusion theory predicts that $\log{({\rm var}(N\rho))}\sim 2/(\nu-1)\log N$ when $2<\nu<3$. This result is corroborated by extensive computer simulations outlined in Appendix \ref{app:AppendixB} and illustrated in Fig.~\ref{FigVar}.
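The superlinear scaling in (\ref{var2}) originates from the divergence of $\mu_2$ with the natural degree cutoff $k_{\max}\sim N^{1/(\nu-1)}$, which makes $N_{\rm eff}=N\mu_1^2/\mu_2$ grow sublinearly in $N$. This is easy to check numerically (a sketch; the sharp cutoff and the two values of $N$ used for the fit are illustrative assumptions):

```python
import math

def var_scaling(nu, N):
    """var(N*rho) ~ N^2/N_eff up to constant prefactors, with
    N_eff = N*mu1^2/mu2 for a degree distribution n_k ~ k^(-nu)
    truncated at the natural cutoff k_max ~ N^(1/(nu-1))."""
    kmax = int(round(N ** (1.0 / (nu - 1.0))))
    norm = sum(k ** -nu for k in range(1, kmax + 1))
    mu1 = sum(k ** (1.0 - nu) for k in range(1, kmax + 1)) / norm
    mu2 = sum(k ** (2.0 - nu) for k in range(1, kmax + 1)) / norm
    return N ** 2 / (N * mu1 ** 2 / mu2)

# log-log slope of var(N*rho) versus N approaches 2/(nu-1) for 2 < nu < 3
nu, N1, N2 = 2.5, 10**5, 10**7
slope = math.log(var_scaling(nu, N2) / var_scaling(nu, N1)) / math.log(N2 / N1)
print(slope, 2.0 / (nu - 1.0))
```

For $\nu=2.5$ the fitted slope is already within a fraction of a percent of the predicted $2/(\nu-1)=4/3$; for $\nu>3$ both moments converge and the slope tends to $1$.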
Here, the theoretical prediction (\ref{var2}) is in very good agreement (within error bars) with the results of stochastic simulations over a broad range of values $2<\nu<3$. These results mean that the typical fluctuations in the metastable state on scale-free graphs with exponent $2<\nu<3$ are much stronger than on complete graphs, where the cooperator variance scales as $N$, see below. In Section VII, we explore how the strong fluctuations arising on scale-free networks with exponent $2<\nu<3$ also dramatically affect the MFT. \subsection{Fluctuations in the metastable state with the LD} For ACGs evolving with the LD, in the realm of the effective diffusion approximation, the (forward) generator of the OUP for $\xi= \rho-\rho^*$ is again given by (\ref{Gf_VM_OU}) but with $N$ instead of $N_{\rm eff}$ on the right hand side. This crucial difference means that for the LD the diffusion parameter of (\ref{Gf_VM_OU}) reads $2\rho^* (1-\rho^*)/N$ and is therefore independent of the graph's structure. The OUP drift parameter is still $\tilde{s}\rho^* (1-\rho^*)$ and therefore coincides with that of the VM update rule. Proceeding as above, we readily find that on average the deviations from the metastable state still vanish exponentially in time, as $\langle \xi(t)\rangle =\xi(0)e^{-\tilde{s}\rho^* (1-\rho^*)t}$. We can also compute the second moment of $\xi$: $\langle \xi^2(t)\rangle= (1-e^{-2\tilde{s}\rho^* (1-\rho^*)t})/(\tilde{s}N)\sim 1/N$. Thus, the variance of the number of cooperators in the metastable state, ${\rm var}(N\rho)=N^2(\langle \rho^2(t)\rangle - \langle \rho(t)\rangle^2)\approx N^2\langle \xi^2(t)\rangle \sim N$, scales in the same way as on a complete graph. In stark contrast with the VM case on scale-free graphs with highly-connected nodes (when $2<\nu<3$), we thus find that the typical fluctuations in the LD case are not affected by the graph's structure.
\section{Fixation properties} \begin{figure} \includegraphics[width=0.9\linewidth]{fig5.pdf} \caption {(Color online). Stretched exponential dependence of the MFT, $\ln\left(T_{\rm fix}\right)\sim N^{\alpha}$, when the dynamics is implemented according to the VM. Here we plot the exponent $\alpha$ as a function of the exponent $\nu$ of the degree distribution. The solid line is the theoretical prediction (\ref{alpha}) giving $\alpha=2(\nu-2)/(\nu-1)$, while the symbols (with error bars) are obtained from the simulations. The ranges of the $N$ and $s$ parameters used in the simulations are $N\in [2\cdot 10^{4},1.6\cdot 10^{5}]$ and $s\in [5\cdot 10^{-3},10^{-2}]$, see Appendix~\ref{app:AppendixB}. The initial conditions and payoff matrix parameters (\ref{payoffM}) are the same as in Fig.~\ref{FigMean}. Inset: $\alpha$ obtained from simulations versus the theoretical prediction (\ref{alpha}), where the line $y=x$ is a guide for the eye.} \label{FigMFT} \end{figure} The Moran-like evolutionary processes that we are considering are absorbing Markov chains, and their fate is to reach one of the states corresponding to the entire network being populated only by cooperators or only by defectors. Two important quantities to characterize the underlying evolutionary dynamics are therefore the (unconditional) mean fixation time (MFT), $T_{\rm fix}(\rho)$, which is the mean time to reach either of the absorbing states from an initial density $\rho$ of cooperators, and the fixation probability $\phi^C(\rho)$ that cooperation prevails (the final state is all-$\textsf{C}$, with the extinction of all $\textsf{D}$'s) starting from a fraction $\rho$ of cooperators. It is well known that on complete graphs, for ACGs these quantities scale exponentially with the population size: $\ln{T_{\rm fix}(\rho)}\sim sN$ and $\ln{\phi^C(\rho)}\sim -sN$ or $\ln{(1-\phi^C(\rho))}\sim -sN$, see {\it e.g.}~\cite{Antal06,MA10,AM10}.
Here, we use the effective diffusion theory to compute $T_{\rm fix}(\rho)$ and $\phi^C(\rho)$ on scale-free graphs and, together with large-scale computer simulations, to uncover under which circumstances the complex topology affects the MFT. While we are mainly interested in the MFT, it is useful to start our discussion by outlining the calculation of the fixation probability. In the realm of the effective diffusion approximation, by assuming that fixation occurs after lingering for a long time in the metastable state (see Figs.~\ref{Fig1} and \ref{Fig2}), we use the backward operators (\ref{Gb_eff}) and (\ref{Gf_eff_LD}) to compute the fixation probability, which satisfies $\widetilde{{\cal G}}_{ \rm b}(\rho)\phi^C(\rho)=0$ with boundary conditions $\phi^C(0)=1-\phi^C(1)=0$~\cite{weaksel1,weaksel2,weaksel3,Gardiner}, now approximating $\omega\approx \rho$ in (\ref{Gb_eff}). This yields $\phi^C(\rho)=\frac{ {\rm erfi}\left[\rho^*\sqrt{\sigma}\,\right] - {\rm erfi}\left[(\rho^*-\rho)\sqrt{\sigma}\,\right] }{{\rm erfi}\left[\rho^*\sqrt{\sigma}\,\right]+{\rm erfi}\left[(1 -\rho^*)\sqrt{\sigma}\,\right]}$~\cite{PRL12}, where ${\rm erfi}(z)\equiv\frac{2}{\sqrt{\pi}}\int_{0}^z e^{u^2} du$ and \begin{eqnarray} \label{sigma} \sigma=\left\{ \begin{array}{ll} \tilde{s}N\frac{\mu_1^2}{\mu_2}, & \text{(for the VM)},\\ \tilde{s}N, & \text{(for the LD)}. \end{array} \right. \end{eqnarray} This clearly indicates that the VM and the LD lead to very different fixation properties on scale-free networks with nodes of high degree: For the VM, the topology yields an effective population size $N_{\rm eff}$ (\ref{Neff}) leading to $\sigma\ll \tilde{s}N$ when $2<\nu<3$. The dependence of $\phi^C$ on the system size when $2<\nu<3$ is thus a stretched exponential with exponent $-N^{\alpha}$ and $\alpha<1$, whereas the fixation probability with the LD (and for the VM with $\nu>3$) is the same as on a complete graph of size $N$.
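The closed-form fixation probability is straightforward to evaluate; `scipy.special.erfi` implements the imaginary error function, or one can compute it by simple quadrature as below (a sketch; the values $\rho^*=0.4$ and $\sigma=10$ are illustrative and not tied to a specific simulation):

```python
import math

def erfi(z, n=4000):
    """erfi(z) = 2/sqrt(pi) * integral_0^z exp(u^2) du, by the midpoint
    rule and oddness in z (scipy.special.erfi could be used instead)."""
    sgn, z = math.copysign(1.0, z), abs(z)
    h = z / n
    return sgn * (2.0 / math.sqrt(math.pi)) * h * sum(
        math.exp(((i + 0.5) * h) ** 2) for i in range(n))

def phi_C(rho, rho_star=0.4, sigma=10.0):
    """Fixation probability of cooperators starting from density rho."""
    r = math.sqrt(sigma)
    num = erfi(rho_star * r) - erfi((rho_star - rho) * r)
    den = erfi(rho_star * r) + erfi((1.0 - rho_star) * r)
    return num / den

# The boundary conditions phi_C(0) = 0 and phi_C(1) = 1 are built into
# the formula, and phi_C grows monotonically with the initial density.
print(phi_C(0.0), phi_C(0.5), phi_C(1.0))
```

Note that for the VM with $2<\nu<3$ one would insert $\sigma=\tilde{s}N\mu_1^2/\mu_2\sim \tilde{s}N^{\alpha}$, while for the LD $\sigma=\tilde{s}N$.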
\begin{figure} \includegraphics[width=0.9\linewidth]{fig6.pdf} \caption {(Color online). $\ln\left(d\ln\left(T_{{\rm fix}}\right)/ds\right)$ as a function of $\ln\left(N\right)$ when the dynamics is implemented according to the VM. Results are shown for different values of the exponent $\nu$ of the power-law degree distribution, see legend. The range of $s$ values used here was $s\in [5\cdot10^{-3},10^{-2}]$, see Appendix~\ref{app:AppendixB}. The initial conditions and payoff matrix parameters (\ref{payoffM}) are the same as in Fig.~\ref{FigMean}. Data were linearly fitted for each $\nu$, giving the slopes: $0.3497\pm 0.0632$ (for $\nu=2.2$), $0.6805\pm 0.0509$ (for $\nu=2.5$), $0.8651\pm 0.1867$ (for $\nu=2.8$). The corresponding theoretical slopes [Eq.~(\ref{alpha})] are $0.3333$, $0.6667$ and $0.8889$. } \label{FigMFTslope} \end{figure} In addition to the fixation probability, we can similarly compute the MFT, $T_{\rm fix}(\rho)$. This is done by solving $\widetilde{{\cal G}}_{ \rm b}(\rho)T_{\rm fix}(\rho)=-1$ with boundary conditions $T_{\rm fix}(0)=T_{\rm fix}(1)=0$~\cite{Gardiner}. Using standard methods~\cite{Kimura,Ewens,Gardiner}, the solution to this inhomogeneous backward FPE is given in Ref.~\cite{PRL12}, and to leading order we find: \begin{eqnarray} \label{MFT1} T_{\rm fix}(\rho)\sim \left\{ \begin{array}{ll} (1-\phi^C(\rho)) e^{(\rho^*)^2\,\sigma}, & \text{when $\rho > \rho^*$},\\ \phi^C(\rho) e^{(1-\rho^*)^2\,\sigma}, & \text{otherwise.} \end{array} \right. \end{eqnarray} Hence, with the expression of $\phi^C$, when $\rho^*<1/2$ and $\rho > \rho^*$ this gives \begin{equation} \label{MFTsmall} \ln{T_{\rm fix}(\rho)}\simeq (\rho^*)^2~\sigma. \end{equation} This shows that when the initial fraction of cooperators is not too low, metastability occurs prior to fixation and the leading contribution to the MFT~(\ref{MFTsmall}) is independent of the initial condition~\cite{AM1,AM2,AM3,AM4,MA10,AM10,AM16}.
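As a quick consistency check on the slopes quoted in the caption of Fig.~\ref{FigMFTslope}, the predicted stretched-exponential exponent is trivial to tabulate (a minimal sketch):

```python
def alpha(nu):
    """Predicted exponent alpha = 2(nu-2)/(nu-1) in ln(T_fix) ~ N^alpha
    for the VM on scale-free graphs with 2 < nu < 3."""
    return 2.0 * (nu - 2.0) / (nu - 1.0)

for nu in (2.2, 2.5, 2.8):
    print(nu, round(alpha(nu), 4))
```

This reproduces the theoretical slopes $0.3333$, $0.6667$ and $0.8889$ quoted above for $\nu=2.2$, $2.5$ and $2.8$.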
It stems from this result that fixation occurs much more rapidly for the VM on scale-free networks with $2<\nu<3$ than with the LD. In the VM case, the MFT grows as the stretched exponential $\ln{T_{\rm fix}}\sim N^{\alpha}\ll N$ with the exponent $\alpha$ given by (\ref{alpha}). This theoretical prediction is confirmed by our stochastic simulations (within error bars), from which we have computed the exponent $\alpha$ reported in Fig.~\ref{FigMFT} (see Appendix \ref{app:AppendixB} for technical details). The results reported in Fig.~\ref{FigMFTslope} corroborate, within error bars, the theoretical prediction (\ref{MFTsmall}), $\ln\left(d\ln\left(T_{{\rm fix}}\right)/ds\right)\sim [2(\nu-2)/(\nu-1)] \ln{N}$ when $2<\nu<3$. A similar analysis with the LD, reported in Fig.~\ref{FigMFTslopeLD}, see Appendix \ref{app:AppendixB}, confirms our theoretical result (\ref{MFTsmall}) that the MFT of ACGs evolving with the LD grows purely exponentially with $N$, as on complete graphs~\cite{Antal06,MA10,AM10}. We therefore emphasize that, contrary to what was stated in Refs.~\cite{PRL12,AM13}, $\phi^C$ and the MFT of ACGs evolving with the LD are not affected by the scale-free topology. Thus, in ACGs with the LD, $T_{\rm fix}$ always grows exponentially with $\tilde{s}N$ (within error bars), as illustrated in Fig.~\ref{FigMFTslopeLD}. Our analysis of the LD in Refs.~\cite{PRL12,AM13} was based on relatively small systems (up to $N=4,000$), which apparently were governed by finite-size effects leading to a nonlinear dependence on $N$ when $2<\nu<3$. Two of us tried to explain these effects, which turn out to disappear at large enough $N$, in terms of an effective diffusion theory derived by assuming that $\omega$ was approximately conserved~\footnote{In Ref.~\cite{AM13} the fixation probability of coordination games subject to the LD was considered similarly.}.
Even though the difference between $\omega$ and $\rho$ is negligible for $s\ll 1$ (see Fig.~\ref{Fig2}), considering $\omega$ to be constant instead of $\rho$ leads to a large error in the MFT. Here, by carrying out extensive simulations on large systems (up to two orders of magnitude larger than in \cite{PRL12,AM13}), and by correcting our theoretical analysis, we have confirmed that $\ln{T_{\rm fix}}$ of ACGs evolving with the LD exhibits a linear dependence on $N$ which, as seen above, is consistent with the (approximate) conservation of $\rho$ by the LD under weak selection (see Fig.~\ref{Fig2}). \begin{figure} \includegraphics[width=0.9\linewidth]{fig7.pdf} \caption {(Color online). $\ln\left(d\ln\left(T_{{\rm fix}}\right)/ds\right)$ as a function of $\ln\left(N\right)$ when the system evolves with the LD. Results are shown for different values of the exponent $\nu$ of the power-law degree distribution, see legend. The range of $s$ values used here was $s\in [1\cdot10^{-4},1.9\cdot10^{-3}]$, see Appendix~\ref{app:AppendixB}. The initial conditions and payoff matrix parameters (\ref{payoffM}) are the same as in Fig.~\ref{FigMean}. Data for each $\nu$ were linearly fitted to obtain the following slopes: $1.0316\pm0.0096$ (for $\nu=2.2$), $1.0094\pm0.0093$ (for $\nu=2.5$), $0.9876\pm 0.0136$ (for $\nu=3$). The line with a slope of $1$ is plotted as a guide for the eye.} \label{FigMFTslopeLD} \end{figure} \section{Summary \& Conclusion} In this work we have studied the interplay between scale-free topology, demographic fluctuations and frequency-dependent selection in the dynamics of anti-coordination games. This class of paradigmatic games between two competing types (cooperators and defectors), particularly relevant in theoretical biology, is characterized by the long-lived coexistence of the two types, and eventually by the fixation of one of them. These features are here investigated by combining analytical methods and extensive stochastic simulations.
Since evolutionary dynamics on heterogeneous graphs is known to depend on the underlying microscopic update rule, we have investigated two types of individual-based dynamics: according to the death-first/birth-second voter model (VM), or according to the link dynamics (LD) in which birth/death or death/birth events occur randomly between connected agents. While we here specifically focus on scale-free graphs, it is noteworthy that our approach is valid for any degree-heterogeneous network. Our analytical approach, valid in the weak selection limit, is based on a diffusion approximation for the density of cooperators at nodes of a prescribed degree. The resulting multi-variate Fokker-Planck equations are greatly simplified by exploiting a timescale separation: After a transient, when the selection pressure is weak, the multi-body dynamics towards fixation can be described in terms of the (approximately conserved) cooperator degree-weighted density (for the VM) or the cooperator density (for the LD). As a result, the typical fluctuations in the number of cooperators in the coexistence metastable state, and the fixation properties, can be determined by means of effective single-variate forward and backward Fokker-Planck equations. In particular, with the VM update rule the complex scale-free topology is responsible for an effective reduction of the population size when $2<\nu<3$, from $N$ to $N_{{\rm eff}}=N^{\alpha}$ with an exponent $0<\alpha<1$ that depends on the first two moments of the graph's degree distribution $n_k\sim k^{-\nu}$. This results in larger typical fluctuations and a superlinear dependence on $N$ of the variance of the number of cooperators in the metastable state. As an outcome, the MFT is exponentially decreased and displays a non-trivial stretched-exponential dependence on the population size.
In the absence of ``hubs'' ($\nu>3$), in the leading exponential order, the dynamics is independent of the scale-free topology and $\ln{T_{\rm fix}}$ scales linearly with $N$, as in the well-mixed dynamics. When the system evolves according to the LD, the scale-free topology does not effectively renormalize the population size and we find $ N_{{\rm eff}}=N$ (for $\nu>2$). With the LD, the variance of the number of cooperators in the metastable state grows linearly with $N$ and the MFT displays a pure exponential dependence on $N$, as shown in Fig.~\ref{FigMFTslopeLD}, similarly to the well-mixed case. These results are in contrast with what was reported in \cite{PRL12,AM13}. The correction here was made possible by a subtle amendment that we have made in the theoretical description, accompanied by an efficient numerical code that allowed us to attain very large system sizes and hence to carefully analyze the dependence of the fixation properties on $N$. \section{Acknowledgments} DS and MA were supported by Grant No. 300/14 of the Israel Science Foundation. MM is grateful for the hospitality of the LSFrey at the Arnold Sommerfeld Center (University of Munich) where part of this work was done, as well as for the financial support of the Alexander von Humboldt Foundation by Grant No. GBR/1119205 STP. \section*{References}
\section{Introduction} Exciton-polaritons are quasiparticles emerging due to the strong coupling between quantum well excitons and cavity photons in high quality factor cavities~\cite{Microcavities}. Due to their hybrid light-matter nature, polaritons exhibit a set of intriguing collective effects such as high temperature Bose-Einstein condensation (BEC)~\cite{BEC0}, superfluidity~\cite{SuperFluid} and macroscopic self-trapping \cite{SelfTrapp}. These phenomena originate from the very small effective mass of polaritons inherited from the photonic component, combined with strong polariton-polariton interactions provided by the excitonic component. Together with their intriguing fundamental properties, exciton-polaritons are attractive for prospective applications, mainly in the field of low-threshold bosonic lasers~\cite{Pol_Laser0} and all-optical integrated circuits~\cite{Pol_Circuits,Pol_Circuits2,Pol_Transistor, Log_gate1,Log_gate2,Log_gate3,Log_gate4,Log_gate5}. The advantage of the polaritonic platform over conventional nonlinear optical materials for the development of optical logic elements is its ultra-strong and ultra-fast nonlinear response~\cite{NLresp1,NLresp2,NLresp3}. One of the important features of non-equilibrium exciton-polariton BEC that can be used in practice relates to the so-called permanent Rabi oscillations occurring between lower and upper branch polaritons~\cite{Permanent1} or between photonic and excitonic components of the condensate~\cite{Permanent2,Permanent3}. An enhancement of the coherence time of polariton Rabi oscillations has been reported experimentally in Ref.~\cite{Perm_exp}. The oscillation pattern strongly depends on the mechanism of the pumping, but the role of different contributions to the nonlinear excitonic response and of different types of pumping in polariton Rabi oscillations in microcavities is still not fully clarified.
In this paper we examine the problem of the onset of permanent Rabi oscillations between excitonic and photonic components in the microcavity in the presence of strong nonlinearities caused by both exciton-exciton interaction and saturation of the excitonic absorption provided by the effects of Pauli blocking~\cite{exc-exc1,exc-exc2,sat1,sat2}. The latter plays a major role in the regime of very strong pumps. Indeed, as the exciton density $n_X$ approaches the critical concentration of $1/a_B^2$, where $a_B$ is the excitonic Bohr radius, the system undergoes the Mott transition, and the excitons dissolve forming an electron-hole plasma. We also take into account the possibility of having two types of pump: resonant pumping of the cavity mode by an external laser with a well defined frequency and non-resonant pumping of the excitonic component provided e.g. by electrical injection of electrons and holes into the active region \cite{Pol_Laser0,Sci_Rep_PolLas}. \begin{figure}[!h] \centerline{\includegraphics[width = 0.4\columnwidth]{fig1.eps}} \caption{Sketch of the system under consideration: a polaritonic microcavity is pumped simultaneously with an electrical pump $P_X$ and optical pump $P_C$. } \label{fig_1} \end{figure} It is well known that one of the direct consequences of the polariton nonlinear response is optical bistability~\cite{Bistable_pred,BS_exp_opt}. Importantly, bistable behavior was observed both for resonant optical pumping and for non-resonant pumping, when the excitonic subsystem is pumped via the electron-hole reservoir. The mechanisms of the bistability, however, are very different in these two cases. For the resonant optical pump it appears due to the interplay between the interaction-induced blueshift and the detuning between the energies of the cavity mode and the pumping laser.
On the contrary, for the non-resonant pump, bistable behavior was reported for the electrically pumped polaritonic diode as resulting from the transition to the weak coupling regime \cite{BS_exp_elec2} or from the dependence of the electron-hole tunneling lifetime on the carrier density provided by the effects of screening~\cite{BS_exp_elec}. In this work we consider the coupled exciton-photon system in a microcavity configuration shown in Fig.~\ref{fig_1} subject to \textit{simultaneous} electrical and optical pumping and account for the saturation of the exciton-photon coupling due to the Mott transition. We reveal that the system may exhibit multistable behaviour as well as undergo transitions from stationary solutions to stable periodic oscillations of the excitonic and photonic components, and then to chaotic behaviour, as the intensities of the pumps are tuned. Multistable behaviour has been previously observed in microcavities~\cite{Multi_1,Multi_2,Multi_3}; however, conventionally the observation of multistability requires accounting for the spin (polarization) degree of freedom. Contrary to this case, in this paper multistability emerges due to the interplay of the exciton-exciton interaction and the saturation of the exciton-photon coupling. The chaotic behaviour of exciton polaritons has been previously studied in optically pumped polariton Josephson junctions~\cite{chaos1} and planar microcavities with broken polarization symmetry~\cite{chaos2,chaos3}. In this work the transition to the chaotic behaviour can be achieved by tailoring the electrical pump, which could be advantageous for applications in chaos communication devices~\cite{chaos_comm}. The remaining paper is organized as follows. In Section~\ref{sec1} we introduce the Hamiltonian of the system and obtain the equations of motion for the excitonic and photonic amplitudes as functions of time. In Section~\ref{sec2} the analysis of the stationary solutions is performed.
Section~\ref{sec3} contains the stability analysis of the stationary solutions. Moreover, the regimes of limit cycles characterized by permanent oscillations of the excitonic and photonic field amplitudes, as well as Hopf bifurcations and the transition to chaos through a period-doubling cascade, are discussed. The conclusions are presented in Section~\ref{sec4}. \section{Model}\label{sec1} We consider the coupled exciton-photon semiclassical Hamiltonian written as \begin{align} &\mathcal{H}=\hbar\omega_{C}\phi^*\phi+\hbar\omega_{X}\chi^*\chi+\frac{g}{2}|\chi|^4+\hbar\Omega_R\left(1-\lambda a_B^2|\chi|^2\right)\left(\phi^*\chi+\chi^*\phi\right), \end{align} where $\phi,\chi$ are the complex amplitudes of the photonic and excitonic macroscopic wave functions, respectively, $\omega_C,\omega_X$ are the frequencies of the photonic and excitonic modes, $g$ defines the strength of the exciton-exciton interaction and can be approximated by~\cite{Yamamoto} $g\approx 6E_b a_B^2$, where $E_b$ is the exciton binding energy and $a_B$ is the exciton Bohr radius. The dimensionless constant $\lambda$ defines the efficiency of the saturation of the excitonic absorption. If $\lambda=0$ the saturation is absent and one regains the conventional Hamiltonian of coupled excitons and photons, with the only source of nonlinearity provided by exciton-exciton interactions, which was extensively used for the description of the polariton dynamics and leads to the standard bistable behavior in the regime of the resonant pump (see e.g. \cite{CiutiReview} and references therein). Our main goal will be to analyze the additional source of nonlinearity appearing for $\lambda\neq0$. Note that the Mott transition from excitons to an electron-hole plasma occurs roughly when the distance between individual excitons becomes comparable to their Bohr radius, and thus realistically $\lambda\approx1$. Throughout the paper we neglect the spatial degree of freedom, assuming the condensate to be in the zero-momentum state.
Equations of motion for the excitonic and photonic amplitudes are derived from the Hamiltonian as $i\hbar\dot{\xi}=\partial{\mathcal{H}}/\partial {\xi^*}, \quad \xi=\{\phi,\chi\}$. We will also phenomenologically add the terms corresponding to the exciton and photon damping $\gamma_X,\gamma_C$, as well as the optical and electrical pumping terms, $P_Ce^{-i\omega t}$ and $P_X\chi(1-\lambda a_B^2|\chi|^2)$, respectively. Note that a saturation of the incoherent pump, originating from the same effect of the Mott transition, is introduced. Making the substitution $\xi=\tilde{\xi}e^{-i\omega t}a_B^{-1}$, $\xi=\{\phi,\chi\}$ and introducing the dimensionless time $\tau=\Omega_R t$ leads to a pair of dimensionless differential equations: \begin{align} &i\dot{\tilde{\phi}}=(\tilde{\delta}_C-i\tilde{\gamma}_C)\tilde{\phi}+(1-\lambda|\tilde{\chi}|^2)\tilde{\chi}+\tilde{P}_C,\nonumber\\ &i\dot{\tilde{\chi}}=(\tilde{\delta}_X-i\tilde{\gamma}_X)\tilde{\chi}+(1-\lambda|\tilde{\chi}|^2)\tilde{\phi}-2\lambda\tilde{\chi}\mathrm{Re}\left(\tilde{\phi}^*\tilde{\chi}\right)+\tilde{g}|\tilde{\chi}|^2\tilde{\chi}+i\tilde{P}_X\tilde{\chi}(1-\lambda|\tilde{\chi}|^2), \end{align} where $\tilde{\delta}_{C(X)}=(\omega_{C(X)}-\omega)/\Omega_R$ are the detunings of the photonic and excitonic modes with respect to the laser frequency and all the parameters with the dimension of energy were normalized to $\Omega_R$. Hereafter we omit the tilde sign to simplify the notation. \begin{figure}[!h] \centerline{\includegraphics[width = 0.8\columnwidth]{fig2_new.eps}} \caption{(a) Exciton amplitude vs the optical pump power in the absence of electrical pumping, for the pumping detuning equal to -9 meV (green line), -2 meV (red line) and 5 meV (black line); all other parameters can be found in the text. (b) Exciton amplitude vs the optical pump power at $\delta_C=\delta_X=-9$meV at ${P}_X=0$ (black line), 1.12 (blue line) and 1.2 (green line).
(c) Exciton amplitude vs the electrical pump power at $\delta_C=\delta_X=-9$meV at ${P}_C=0.2$ (green line), 0.5 (black line), and 0.02 (blue line). In figures (a-c) solid lines correspond to the saturable case $\lambda=1$ and dotted lines to $\lambda=0$. (d) Three-dimensional plot of the exciton amplitude vs the electrical and optical pump for $\lambda=1$ and $ \delta_C=\delta_X=-9$~meV. } \label{fig_2} \end{figure} In the stationary case $\dot{{\phi}}=0,\dot{{\chi}}=0$, the photon amplitude can be expressed via the exciton amplitude as \begin{align} {\phi}=-\frac{e^{i\varphi_0}}{\sqrt{{\delta}_C^2+{\gamma}_C^2}}\left(\left[1-\lambda|{\chi}|^2\right]{\chi}+{P}_C\right), \label{phiexp} \end{align} where $\tan\varphi_0=\gamma_C/\delta_C$. Equation~\eqref{phiexp} can then be substituted into the equation for $\chi$. We then look for the solution for ${\chi}$ in the form ${\chi}=Xe^{i\varphi}$, $X,\varphi \in \mathbb{R}$. The equation for $\chi$ can then be decomposed into two equations with real coefficients yielding (here and throughout the manuscript we neglect the exciton damping $\gamma_X$, assuming it to be much smaller than all characteristic energies of the system): \begin{align} &{\delta}^{\prime}_X X+{g}^{\prime}X^3-(1-3\lambda X^2){P}_C\cos(\varphi-\varphi_0)-(X-4\lambda X^3+3\lambda X^5)\cos\varphi_0=0, \label{stat01} \\ &\sin(\varphi-\varphi_0)=\frac{X}{{P}_C}\left[(1-\lambda X^2)\sin\varphi_0-{{P}_X\alpha_0}\right] \label{stat02}, \end{align} where $\alpha_0=\sqrt{{\delta}_C^2+{\gamma}_C^2}$, $\{{g}^{\prime},{\delta}^{\prime}_X\}=\{{g},{\delta}_X\}\times\sqrt{{\delta}_C^2+{\gamma}_C^2}$. Equations~\eqref{stat01} and \eqref{stat02} are the subject of our analysis below. \section{Stationary solutions}\label{sec2} First, we analyze the stationary solutions of Eqs.~\eqref{stat01},\eqref{stat02}. We are looking for real solutions with $X \in [0;1]$ and $\varphi \in [0; 2\pi[$.
We start from the simplest case of purely optical pumping, setting ${P}_X=0$ in Eqs.~\eqref{stat01},\eqref{stat02}. Typical results for this limit are displayed in Fig.~\ref{fig_2}(a). In particular, Eq.~\eqref{stat02} has two real solutions for $\varphi$ for any value of $X$: \begin{align} \varphi=\varphi_0\pm \sin^{-1}\left[\frac{X}{{P}_C}(1-\lambda X^2)\sin\varphi_0\right]+\frac{\pi}{2}(1\mp 1). \end{align} In order to elucidate the multistability regime we adopt the assumption of off-resonant pumping and a high-Q cavity, ${\gamma}_C\ll |{\delta}_C|$. We also assume zero detuning between the exciton and photon modes, leading to $\delta_C=\delta_X=\delta$. Within this approximation, $\sin\varphi_0\approx 0$, $\cos\varphi_0\approx \mathrm{signum}(\delta)$, and $\cos (\varphi-\varphi_0)\approx \pm\cos\varphi_0$. Eq.~\eqref{stat01} thus reduces to \begin{align} 3\lambda X^5-({g}\delta+4\lambda)X^3-({\delta}^2-1) X\pm(1-3\lambda X^2){P}_C=0.\label{stat11} \end{align} Let us note that in the absence of losses, the threshold for the appearance of multistability should be reached at arbitrarily small pumping intensity. Thus, it is instructive to analyze the limiting case ${P}_C=0$. Then, in the non-saturable case the non-trivial solutions of Eq.~\eqref{stat11} satisfy the condition \begin{align} g\delta X^2+(\delta^2-1)=0, \label{nonsat} \end{align} and for the case with $\lambda=1$ we get \begin{align} 3X^4-({g}\delta+4)X^2-(\delta^2-1)=0. \label{stat2} \end{align} For the non-saturable case of Eq.~\eqref{nonsat} we can readily obtain the condition for the existence of bistability, since a positive root exists only if $\delta \in ]-\infty;-1[\cup ]0;1[$. Thus in the non-saturable system the bistability can arise only if the resonant pump is either above the upper polariton frequency or between the lower polariton frequency and the exciton frequency.
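The admissible-root counting in the limit ${P}_C\to 0$ is easy to verify numerically. The sketch below uses the illustrative values $g=1$ and $\delta=-0.9$ (so that $\delta\in\,]-1;0[$); the saturable quartic \eqref{stat2} is reduced to a quadratic in $Y=X^2$, while the non-saturable condition \eqref{nonsat} is solved directly:

```python
import numpy as np

# Illustrative values: delta in ]-1;0[ and g = 1 (the dimensionless interaction
# constant is an assumption, chosen here so that g + delta > 0)
g, delta = 1.0, -0.9

# Saturable case, Eq. (stat2): 3 X^4 - (g*delta + 4) X^2 - (delta^2 - 1) = 0;
# the substitution Y = X^2 reduces it to a quadratic.
Y = np.roots([3.0, -(g * delta + 4.0), -(delta ** 2 - 1.0)])
sat_roots = sorted(np.sqrt(y.real) for y in Y
                   if abs(y.imag) < 1e-12 and 0.0 < y.real < 1.0)

# Non-saturable case, Eq. (nonsat): X^2 = (1 - delta^2)/(g*delta)
Y0 = (1.0 - delta ** 2) / (g * delta)
nonsat_roots = [np.sqrt(Y0)] if 0.0 < Y0 < 1.0 else []

print(len(sat_roots), len(nonsat_roots))   # 2 0
```

For these values the saturable case yields two admissible roots $X\in\,]0;1[$, whereas the non-saturable case yields none.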
For the saturable case of Eq.~\eqref{stat2}, there are three distinct cases: if the equation has no roots in the interval $]0;1[$, there is no multistable behaviour in the system; if it has one root, the system is bistable; finally, if it has two roots, tri-stability may be observed. In stark contrast to the non-saturable case, the tri-stability is observed in the region $\delta\in ]-1;0[$, i.e. when the resonant pump lies between the exciton and upper polariton frequencies, where no bistable behaviour is observed for $\lambda=0$. We thus conclude that the multistable behaviour originates solely from the saturation nonlinearity. In order to support the approximate analytical results we have performed numerical calculations of the stationary solutions of the system. For the numerical calculations we use the following set of parameters: the Rabi splitting $\Omega_R$ is set to 10 meV, typical for GaAs-based structures; the photon lifetime is set to $2$ ps, which can be easily achieved in state-of-the-art high-quality microcavities and corresponds to $\gamma_C/\Omega_R=0.03$; the binding energy $E_B\approx 4$ meV, typical for GaAs structures; and the Bohr radius $a_B=10$ nm. The optical pumping intensity is bound by approximately $1$ kW/cm$^2$, which corresponds to the dimensionless ${P}_C\approx 1$, and the electrical pumping current is bound by approximately $100~\mu$A, which corresponds to ${P}_X\approx 1$. These values of optical pump intensities and currents are easily achievable in state-of-the-art microcavity setups~\cite{BS_exp_elec}. We take $\lambda=1$ and compare the results with the case $\lambda=0$. The results of the numerical calculations are shown in Fig.~\ref{fig_2}. In Fig.~\ref{fig_2}(a) one can see that for the detunings lying in the region $]-\Omega_R;0[$ there is no bistable behaviour in the non-saturable case (green dotted line), but we observe the tri-stability for $\lambda=1$ (green solid line).
Despite the presence of the multistability regime in the phase diagram of the system, it is problematic to construct an experimental protocol exhibiting the multistable behaviour. This is connected to the large-${P}_C$ behaviour of the system. Namely, as can be seen from Eq.~\eqref{stat11}, in this limit the amplitude of the exciton field approaches $1/\sqrt{3}$. In this limit the exciton and photon subsystems become decoupled, and increasing the photon pumping does not affect the exciton subsystem any further. This asymptote is shown with a dashed line in Fig.~\ref{fig_2}(a). Thus it is not possible to switch between different stable exciton states by an adiabatic change of the optical pump. We now consider the case of simultaneous electrical and optical pumping acting upon the system. One of the effects observed in this case is the shift of the $|{\chi}|$ vs ${P}_C$ dependence to larger values of ${P}_C$ as ${P}_X$ is increased. To illustrate this we can expand Eqs.~\eqref{stat01},\eqref{stat02} for small and large ${P}_C$, respectively. We again assume the limit of low radiative losses $\gamma_C\ll |\delta_C|$, i.e. $\sin\varphi_0\approx 0$. In this case, Eq.~\eqref{stat02} yields $\sin\varphi=-X{P}_X|{\delta}_C|/{P}_C$. The requirement of a real phase $\varphi$ leads to the condition $X<{P}_C/(|{\delta}_C|{P}_X)$. Thus, as the electrical pump increases, the exciton amplitude decreases more slowly with the optical pump intensity. This can also be observed in Fig.~\ref{fig_2}(c), where the dependence of the exciton amplitude on the electrical pump for different values of ${P}_C$ is shown. We can see that at the lower branch of the stationary solutions, lying below the asymptote $|{\chi}|=1/\sqrt{3}$, the exciton amplitude decays with increasing pump intensity. In order to better illustrate the exciton amplitude dependence on the electrical and optical pump, a three-dimensional plot of $|\chi|$ vs $P_C$ and $P_X$ is shown in Fig.~\ref{fig_2}(d).
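The decoupling asymptote $X\to1/\sqrt{3}$ at large ${P}_C$ can be illustrated by tracking the corresponding root of Eq.~\eqref{stat11} (upper sign, $\lambda=1$) as the pump grows; the parameter values are again illustrative:

```python
import numpy as np

g, delta, lam = 1.0, -0.9, 1.0   # illustrative values; g = 1 is an assumption

def root_near_asymptote(PC):
    # Eq. (stat11), upper sign, written as a polynomial in X:
    # 3 lam X^5 - (g d + 4 lam) X^3 - 3 lam PC X^2 - (d^2 - 1) X + PC = 0
    coeffs = [3.0 * lam, 0.0, -(g * delta + 4.0 * lam),
              -3.0 * lam * PC, -(delta ** 2 - 1.0), PC]
    roots = [r.real for r in np.roots(coeffs)
             if abs(r.imag) < 1e-9 and r.real > 0.0]
    return min(roots, key=lambda x: abs(x - 1.0 / np.sqrt(3.0)))

dev = [abs(root_near_asymptote(PC) - 1.0 / np.sqrt(3.0)) for PC in (1.0, 10.0, 100.0)]
print(dev)   # the deviation from 1/sqrt(3) shrinks as the pump grows
```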
\section{Stability Analysis}\label{sec3} \begin{figure*}[t] \centerline{\includegraphics[width = 1.0\textwidth]{fig3.eps}} \caption{Spectrum (left column) and phase space $(|{\phi}(t)|,|{\chi}(t)|)$ (middle column) and $(\mathrm{Re}{\phi}(t),\mathrm{Im}{\chi}(t))$ (right column) trajectories of the system at $\delta_C=-9$ meV and ${P}_C=0.5,\quad {P}_X=0.55$ (a,b,c), ${P}_C=0.5,\quad {P}_X=0.27$ (d,e,f), ${P}_C=0.2,\quad {P}_X=0.09$ (g,h,i). In all figures solid black lines correspond to the saturable case $\lambda=1$, and dashed green lines to $\lambda=0$. In figure (b) the red point corresponds to the stationary solution for the saturable case.} \label{fig_3} \end{figure*} To analyze the stability of the obtained stationary solutions we have computed the Jacobian of the system at the stationary points~\cite{DynamicalAnalysis}. The appearance of a positive real part of at least one eigenvalue of the Jacobian indicates unstable behaviour. The stable stationary points are shown in bold in Figs.~\ref{fig_2}(a-c). We have focused on the regime of the detuning $\delta$ lying in the region $[-\Omega_R;0]$ corresponding to the tri-stable behaviour in the saturable case. We have identified three types of non-stationary dynamics of the system, which are shown in Fig.~\ref{fig_3}. The first type is periodic oscillations of the excitonic and photonic components of the condensate. This behaviour corresponds to a limit cycle~\cite{DynamicalAnalysis} and is shown in Figs.~\ref{fig_3}(a,b,c). The corresponding time-dependent behaviour of the photonic $|\phi|$ and excitonic $|\chi|$ amplitudes is shown in Fig.~\ref{fig_4}. Physically, this regime occurs due to the balance between dissipation and pumping in the presence of the nonlinearity and can be recognized as permanent Rabi oscillations in the condensate~\cite{Permanent1,Permanent2}. For the non-saturable case $\lambda=0$ with the same parameters, the system is stable.
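A minimal sketch of this stability test is as follows: relax the system to a stationary point at weak optical pumping (a regime where, as discussed below, the solutions are expected to be stable), assemble the Jacobian of the four-dimensional real system by central finite differences, and inspect the real parts of its eigenvalues. The parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters in the expected stable regime (weak optical pump,
# no electrical pump); the interaction constant g = 1 is a guess.
dC = dX = -0.9
gC, gX = 0.03, 0.0
g, lam = 1.0, 1.0
PC, PX = 0.05, 0.0

def rhs(t, y):
    phi, chi = y[0] + 1j * y[1], y[2] + 1j * y[3]
    n = abs(chi) ** 2
    sat = 1.0 - lam * n
    dphi = -1j * ((dC - 1j * gC) * phi + sat * chi + PC)
    dchi = (-1j * ((dX - 1j * gX) * chi + sat * phi
                   - 2.0 * lam * chi * (phi.conjugate() * chi).real
                   + g * n * chi)
            + PX * chi * sat)
    return [dphi.real, dphi.imag, dchi.real, dchi.imag]

# Locate the stationary point by long-time relaxation ...
y0 = solve_ivp(rhs, (0.0, 3000.0), [0.0] * 4, rtol=1e-10, atol=1e-12).y[:, -1]

# ... and assemble the 4x4 Jacobian by central finite differences
h = 1e-6
J = np.zeros((4, 4))
for k in range(4):
    e = np.zeros(4)
    e[k] = h
    J[:, k] = (np.array(rhs(0.0, y0 + e)) - np.array(rhs(0.0, y0 - e))) / (2.0 * h)

ev = np.linalg.eigvals(J)
print(np.max(ev.real))   # negative maximal real part signals linear stability
```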
The transition from the stationary solutions to the limit cycle occurs as we decrease the electrical pump ${P}_X$, as shown in Fig.~\ref{fig_2}(c). The spectrum of the oscillations is shown in Fig.~\ref{fig_3}(a). It can be seen that the oscillations occur primarily at the frequency $\tilde{\Omega }\approx 0.95\Omega_R$. However, small peaks corresponding to the multiples of $\tilde{\Omega}$ are also present. The trajectories in the phase space projections $(|\phi|,|\chi|)$ and $(\mathrm{Re}\chi,\mathrm{Im}\chi)$ in this case are closed contours, as illustrated in Figs.~\ref{fig_3}(b,c). We also observed that the closed contour contains the stationary point of the dynamical system, which signifies the stability of the limit cycle. \begin{figure}[!h] \centerline{\includegraphics[width = 0.4\columnwidth]{fig4.eps}} \caption{ Time dependence of the exciton and photon fields at the same parameters as in Fig.~\ref{fig_3}(a) for $\lambda=1$.} \label{fig_4} \end{figure} Further decreasing ${P}_X$ leads to period multiplication in the system, which is shown in Figs.~\ref{fig_3}(d,e,f). The spectrum shown in Fig.~\ref{fig_3}(d) exhibits peaks at frequencies that are multiples of $\tilde{\Omega}/5$. We note that the transition to the fifth-order multiplication of the period is sharp, i.e.\ the system abruptly switches to the period multiplication by a factor of 5 rather than undergoing consecutive period doublings. In this case the trajectories shown in Figs.~\ref{fig_3}(e,f) are still closed contours, albeit with a far more complex topology. For the case of $\lambda=0$ with the same parameters, the transition to the permanent Rabi oscillations, i.e.\ the limit cycle, is observed. As the electrical pump is decreased further, the system undergoes a transition to chaotic behaviour through a series of consecutive period multiplications.
The spectrum in this case, shown in Fig.~\ref{fig_3}(g), becomes effectively continuous, and the trajectories shown in Figs.~\ref{fig_3}(h,i) are no longer closed. For the non-saturable case $\lambda=0$ at these parameters the system is stable again. We have also observed a threshold of the optical pump intensity of $0.06$, below which the solutions are stable regardless of the electrical pump intensity. For the non-saturable case the threshold corresponds to $0.11$. \section{Conclusions}\label{sec4} We analyzed the dynamics of the excitonic and photonic fields in a microcavity under simultaneous optical and electrical pumping, accounting for the exciton-photon coupling saturation provided by the Pauli exclusion principle. We have shown that in the limit of vanishing electrical pump the system may exhibit multistable behaviour at certain values of the detuning between the frequencies of the photon mode and the pump. Moreover, we observed the transition from stable solutions to chaotic behaviour through a cascade of period multiplications as the electrical pump in the system is tuned. The discovered regimes can be useful for the further development of polariton laser applications. \section{Acknowledgements}\label{sec5} A.P.A. acknowledges useful discussions with Yuri Rubo. This work is supported under the project No.~RFMEFI58715X0020 of the Federal Targeted Programme “Research and Development in Priority Areas of Development of the Russian Scientific and Technological Complex for 2014-2020” of the Ministry of Education and Science of Russia. I.A.S. acknowledges support from the Singapore Ministry of Education under AcRF Tier 2 grant MOE2015-T2-1-055.
\section{Kernel function} \subsection{The algebra $\mathcal{A}$} We briefly recall the definition and the basic facts about the commutative algebra $\mathcal{A}$ introduced in \cite{FHHSY:2009}. Let $q_1,q_2$ be two independent indeterminates and set $q_3\mathbin{:=} 1/q_1q_2$. We also use the symbols $\mathbb{F}\mathbin{:=} \mathbb{Q}(q_1,q_2)$, $\mathbb{N}\mathbin{:=}\{0,1,2,\ldots\}$ and $\mathbb{N}_+\mathbin{:=}\{1,2,\ldots\}$. For $n,k\in\mathbb{N}_+$, we define two operators $\partial^{(0,k)},\partial^{(\infty,k)}$ acting on the space of symmetric rational functions in $n$ variables $x_1,\ldots,x_n$ by \begin{align*} \begin{array}{l l l l l l l l l} \partial^{(0,k)}&: &f &\mapsto &\displaystyle \dfrac{n!}{(n-k)!} \lim_{\xi \to 0} f(x_1,\ldots,x_{n-k},\xi x_{n-k+1},\xi x_{n-k+2},\ldots,\xi x_n) \\ \partial^{(\infty,k)}&: &f &\mapsto &\displaystyle \dfrac{n!}{(n-k)!} \lim_{\xi \to \infty} f(x_1,\ldots,x_{n-k},\xi x_{n-k+1},\xi x_{n-k+2},\ldots,\xi x_n) \end{array} \end{align*} whenever the limits exist. We also set $\partial^{(0,k)} c=0, \partial^{(\infty,k)} c=0$ for $c\in \mathbb{F}$. Finally we define $\partial^{(0,0)}$ and $\partial^{(\infty,0)}$ to be the identity operator. \begin{dfn} For $n\in\mathbb{N}$, the vector space $\mathcal{A}_n=\mathcal{A}_n(q_1,q_2,q_3)$ is defined by the following conditions (i), (ii), (iii) and (iv). \\ (i) $\mathcal{A}_0 \mathbin{:=} \mathbb{F}$. For $n\in\mathbb{N}_+$, $f(x_1,\ldots,x_n)\in \mathcal{A}_n$ is a rational function with coefficients in $\mathbb{F}$, and symmetric with respect to the $x_i$'s. \\ (ii) For $n\in\mathbb{N}$, $0\leq k\leq n$ and $f \in \mathcal{A}_n$, the limits $\partial^{(\infty,k)}f$ and $\partial^{(0,k)}f$ both exist and coincide: $\partial^{(\infty,k)}f=\partial^{(0,k)}f$ (degenerate $\mathbb{C} \mathbb{P}^1$ condition).
\\ (iii) The poles of $f\in \mathcal{A}_n$ are located only on the diagonal $\{(x_1,\ldots,x_n) \mid \exists (i,j), i\neq j ,x_i=x_j\}$, and the orders of the poles are at most two. \\ (iv) For $n\geq 3$, $f\in \mathcal{A}_n$ satisfies the wheel conditions \begin{align*} f( x_1,q_1 x_1,q_1 q_2 x_1,x_4,\ldots)=0,\qquad f( x_1,q_2 x_1,q_1 q_2 x_1,x_4,\ldots)=0. \end{align*} We then define the graded vector space $\mathcal{A}=\mathcal{A}(q_1,q_2,q_3)\mathbin{:=}\bigoplus_{n\geq 0}\mathcal{A}_n$. \end{dfn} \begin{dfn} For an $m$-variable symmetric rational function $f$ and an $n$-variable symmetric rational function $g$, we define an $(m+n)$-variable symmetric rational function $f*g$ by \begin{align}\label{eq:star} (f*g)(x_1,\ldots,x_{m+n})&\mathbin{:=} \operatorname{Sym} \bigg[ f(x_1,\ldots,x_m) g(x_{m+1},\ldots,x_{m+n}) \prod_{\substack{1\le\alpha\le m\\m+1\le\beta\le m+n}} \omega(x_\alpha,x_\beta) \bigg]. \end{align} Here $\omega(x,y)$ is the rational function \begin{align}\label{eq:omega} \omega(x,y)=\omega(x,y;q_1,q_2,q_3) \mathbin{:=}\dfrac{(x-q_1 y)(x - q_2 y)(x-q_3 y)}{(x-y)^3}, \end{align} and the symbol $\operatorname{Sym}$ denotes the symmetrization $\operatorname{Sym} (f(x_1,\ldots,x_n))\mathbin{:=} (1/n!) \;\sum_{\sigma\in\mathfrak{S}_{n}} f(x_{\sigma(1)},\ldots,x_{\sigma(n)})$. \end{dfn} \begin{fct}[{\cite[Theorem 1.5]{FHHSY:2009}}]\label{thm:1} $\mathcal{A}$ is closed with respect to $*$, and the pair $(\mathcal{A},*)$ is a unital associative commutative algebra. The Poincar\'e series is $\sum_{n\ge 0} (\dim_\mathbb{F} \mathcal{A}_n) z^n=\prod_{m\ge 1}(1-z^m)^{-1}$. \end{fct} \subsection{The ring $\Lambda_{\mathbb{F}}$ of symmetric functions} For the notation and definitions concerning partitions, we basically follow \cite{M:1995:book}. A partition of $n\in \mathbb{N}$ is a sequence $\lambda=(\lambda_1,\lambda_2,\ldots)$ of non-negative integers satisfying $\lambda_1\geq\lambda_2\geq\cdots$.
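The commutativity of $(\mathcal{A},*)$ quoted in the Fact above is already non-trivial for three variables, since $\omega(x,y)\neq\omega(y,x)$. The following short numerical sketch evaluates $\epsilon_1*\epsilon_2$ and $\epsilon_2*\epsilon_1$ at a generic point, where $\epsilon_1:=1$ and $\epsilon_2(z_1,z_2;p):=(z_1-pz_2)(z_1-p^{-1}z_2)/(z_1-z_2)^2$ are the basis elements of $\mathcal{A}_1$ and $\mathcal{A}_2$ recalled below, specialized at $p=q_1$; the numeric values of $q_1,q_2$ and of the evaluation point are arbitrary choices:

```python
import itertools
import math

# Arbitrary generic numeric parameters, with q3 = 1/(q1 q2) as in the text
q1, q2 = 0.31, 1.73
q3 = 1.0 / (q1 * q2)

def omega(x, y):
    # Eq. (eq:omega)
    return (x - q1 * y) * (x - q2 * y) * (x - q3 * y) / (x - y) ** 3

def eps2(z1, z2, p=q1):
    # two-variable epsilon_2(z1, z2; p), specialized at p = q1
    return (z1 - p * z2) * (z1 - z2 / p) / (z1 - z2) ** 2

def star(f, m, g, n, xs):
    # (f*g)(x_1..x_{m+n}): symmetrized product with omega factors, Eq. (eq:star)
    total = 0.0
    for s in itertools.permutations(xs):
        w = 1.0
        for a in range(m):
            for b in range(m, m + n):
                w *= omega(s[a], s[b])
        total += f(*s[:m]) * g(*s[m:]) * w
    return total / math.factorial(m + n)

one = lambda z: 1.0            # epsilon_1 = 1 (empty product)
x = (0.9, 1.3, 2.1)            # arbitrary generic evaluation point
lhs = star(one, 1, eps2, 2, x)  # (epsilon_1 * epsilon_2)(x)
rhs = star(eps2, 2, one, 1, x)  # (epsilon_2 * epsilon_1)(x)
print(abs(lhs - rhs))           # should vanish up to rounding (commutativity)
```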
We define $|\lambda|\mathbin{:=}\lambda_1+\lambda_2+\cdots$, $\ell(\lambda)\mathbin{:=}\#\{i\mid\lambda_i\neq0\}$, and write $\lambda \vdash n$ if $|\lambda|=n$. We denote the conjugate (transpose) of a partition $\lambda$ by $\lambda'$. We work with the dominance partial ordering defined as: $\lambda\geq \mu \overset{\rm def}{\iff} |\lambda|=|\mu|,\ \lambda_1+\cdots+\lambda_i\geq \mu_1+\cdots+\mu_i \mbox{ for all } i\geq 1$. We recall some basic facts about the ring of symmetric functions. As in \cite{FHHSY:2009}, we set $q_1=q^{-1},q_2=t$ (hence $q_3=q t^{-1}$) and $\mathbb{F}=\mathbb{Q}(q_1,q_2)=\mathbb{Q}(q,t)$. Let $\Lambda_\mathbb{F}$ be the ring of symmetric functions over the base field $\mathbb{F}$, constructed in the category of graded rings with the projection operators $\rho_{m,n}:f(x_1,\ldots,x_m)\mapsto f(x_1,\ldots,x_n,0,\ldots,0)$. Let $p_n(x)\mathbin{:=} \sum_i x_i^n$ be the power sum function. For a partition $\lambda=(\lambda_1,\lambda_2,\ldots)$, the monomial symmetric function is defined by $m_\lambda(x)\mathbin{:=} \sum_{\alpha}x^\alpha$, where $\alpha$ runs over all the distinct permutations of $\lambda$. The elementary symmetric function $e_n(x)$ is defined by the generating function $E(y)\mathbin{:=}\prod_{i}(1+x_i y)=\sum_{n\geq 0} e_n(x) y^n$. Set $G(y)\mathbin{:=}\prod_i \{(t x_i y;q)_\infty / (x_i y;q)_\infty\} =\sum_{n\geq 0} g_n(x;q,t) y^n$, where $(x;q)_\infty\mathbin{:=}\prod_{i\geq 0}(1-q^i x)$. For a partition $\lambda=(\lambda_1,\lambda_2,\ldots)$ set $p_\lambda\mathbin{:=} p_{\lambda_1}p_{\lambda_2}\cdots$. Similarly we write $e_\lambda\mathbin{:=} e_{\lambda_1}e_{\lambda_2}\cdots$ and $g_\lambda\mathbin{:=} g_{\lambda_1}g_{\lambda_2}\cdots$. It is known that $\{p_\lambda\}$, $\{m_\lambda\}$, $\{e_\lambda\}$ and $\{g_\lambda\}$ form bases of $\Lambda_\mathbb{F}$. Recall Macdonald's scalar product $\qtpr{p_\lambda,p_\mu} \mathbin{:=} \delta_{\lambda,\mu} \prod_{i \ge 1}i^{m_i}m_i!
\prod_{j \ge 1}(1-q^{\lambda_j})/(1-t^{\lambda_j})$, where $m_i$ denotes the number of parts equal to $i$ in the partition $\lambda$. For any dual bases $\{u_\lambda\}$ and $\{v_\lambda\}$, we have \begin{align}\label{eq:kernel:original} \Pi(x,y;q,t)\mathbin{:=} \prod_{i,j} \dfrac{(t x_iy_j;q)_\infty}{(x_i y_j;q)_\infty} =\sum_\lambda u_\lambda(x)v_\lambda(y). \end{align} It is known that $\{m_\lambda\}$ and $\{g_\lambda\}$ form dual bases, namely we have $\qtpr{m_\lambda,g_\mu}=\delta_{\lambda,\mu}$. Macdonald polynomials $P_\lambda(x;q,t)$ are uniquely characterized by (a) the triangular expansion $P_\lambda=m_\lambda+\sum_{\mu<\lambda} a_{\lambda\mu} m_\mu$ ($a_{\lambda\mu}\in \mathbb{F}$), and (b) the orthogonality $\qtpr{P_\lambda,P_\mu}=0$ if $\lambda\neq \mu$. We set \begin{align} \label{eq:b_def} b_\lambda(q,t)\mathbin{:=}\qtpr{P_\lambda(z;q,t),P_\lambda(z;q,t)}^{-1}, \quad Q_\lambda(z;q,t)\mathbin{:=} b_\lambda(q,t)P_\lambda(z;q,t). \end{align} Then $\{Q_\lambda\}$ forms a dual basis to $\{P_\lambda\}$. \subsection{The isomorphism $\iota:\Lambda_\mathbb{F} \rightarrow \mathcal{A}$} Both $\Lambda_\mathbb{F}$ and $\mathcal{A}$ are commutative rings having the same Poincar\'e series $ \sum_{n\ge 0} (\dim_\mathbb{F} \Lambda_\mathbb{F}^n) z^n= \sum_{n\ge 0} (\dim_\mathbb{F} \mathcal{A}_n) z^n=\prod_{m\ge 1}(1-z^m)^{-1}$, where $\Lambda_\mathbb{F}^n$ denotes the space of symmetric functions of degree $n$. Moreover it was shown in \cite{FHHSY:2009} that there is a natural way to identify $\Lambda_\mathbb{F}$ and $\mathcal{A}$ from the point of view of the free field construction of the Macdonald operators. Based on the findings in \cite{FHHSY:2009} we give an isomorphism $\iota:\Lambda_\mathbb{F} \rightarrow \mathcal{A}$ as follows.
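In the smallest non-trivial degree, the characterization (a)--(b) and the constant \eqref{eq:b_def} can be checked by a short symbolic computation. The following Python/SymPy sketch works in the power-sum basis in degree two, fixes $P_{(2)}=m_{(2)}+a\,m_{(1,1)}$ by orthogonality against $P_{(1,1)}=m_{(1,1)}$, and recovers the classical values $a=(1+q)(1-t)/(1-qt)$ and $b_{(2)}=(1-qt)(1-t)/\big((1-q^2)(1-q)\big)$, the latter agreeing with the factorized arm-leg product recalled in \S\ref{subsec:tableau}:

```python
import sympy as sp

q, t = sp.symbols('q t')

# Power-sum norms <p_la, p_la> = z_la * prod_j (1-q^{la_j})/(1-t^{la_j}) in degree 2
N2 = 2 * (1 - q**2) / (1 - t**2)            # lambda = (2),   z_lambda = 2
N11 = 2 * ((1 - q) / (1 - t))**2            # lambda = (1,1), z_lambda = 2

# m_2 = p_2 and m_{11} = (p_1^2 - p_2)/2, as vectors (coeff of p_2, coeff of p_{11})
m2 = (sp.Integer(1), sp.Integer(0))
m11 = (sp.Rational(-1, 2), sp.Rational(1, 2))
dot = lambda u, v: sp.cancel(u[0] * v[0] * N2 + u[1] * v[1] * N11)

# P_{(1,1)} = m_{11}; orthogonality <P_{(2)}, P_{(1,1)}> = 0 fixes a
a = sp.cancel(-dot(m2, m11) / dot(m11, m11))

# b_{(2)} = <P_{(2)}, P_{(2)}>^{-1}, cf. Eq. (eq:b_def)
P2 = (1 - a / 2, a / 2)
b2 = sp.cancel(1 / dot(P2, P2))

print(sp.factor(a), sp.factor(b2))
```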
For $p \in \mathbb{F}$, let \begin{align} &\epsilon_n(z_1,z_2,\ldots,z_n;p)\mathbin{:=} \prod_{1\le i<j\le n}\dfrac{(z_i-p z_j)(z_i-p^{-1}z_j)}{(z_i-z_j)^2}, \label{eq:ep} \end{align} and set $\epsilon_{\lambda}(z;p):= (\epsilon_{\lambda_1}*\epsilon_{\lambda_2}* \cdots*\epsilon_{\lambda_l}) (z;p)$ for a multi-index $\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_l)$. \begin{fct}[{\cite[Propositions 2.20 \& 2.23]{FHHSY:2009}}] For $i=1,2,3$, $\{ \epsilon_{\lambda}(z;q_i) \}_{\lambda\vdash n}$ forms a basis of $\mathcal{A}_n$. \end{fct} Let us write the expansions of $P_\lambda$ in the bases $\{e_\mu\}$ and $\{g_\mu\}$ as \begin{align} &\label{eq:gtoP} P_\lambda(z;q,t)=\sum_{\mu\ge\lambda'}c_{\lambda\mu}^{e\to P}(q,t)e_\mu(z), \quad P_\lambda(x;q,t)=\sum_{\mu\ge\lambda}c_{\lambda\mu}^{g\to P}(q,t)g_\mu(x;q,t). \end{align} A detailed study of the algebra $\mathcal{A}$ with the help of the free field representation allowed us to establish the following equality. \begin{fct}[{\cite[\S 3 E]{FHHSY:2009}}] Define the following two elements of $\mathcal{A}$: \begin{align} &f^{(q^{-1})}_\lambda(z;q,t) \mathbin{:=} \dfrac{t^{-|\lambda|}}{(1-t^{-1})^{|\lambda|}|\lambda|!} \sum_{\mu\ge\lambda'}c_{\lambda\mu}^{e\to P}(q,t)\epsilon_{\mu}(z;q) \dfrac{|\mu|!}{\prod_{i=1}^{\ell(\mu)}\mu_i!}, \label{eq:fq} \\ &f^{(t)}_\lambda(z;q,t) \mathbin{:=} \dfrac{(-1)^{|\lambda|}}{(1-q)^{|\lambda|}|\lambda|!} \sum_{\mu\ge\lambda}c_{\lambda\mu}^{g\to P}(q,t)\epsilon_{\mu}(z;t) \dfrac{|\mu|!}{\prod_{i=1}^{\ell(\mu)}\mu_i!}. \label{eq:ft} \end{align} Then we have $f^{(q^{-1})}_\lambda(z;q,t)=f^{(t)}_\lambda(z;q,t)$\footnote{ Note that the first and second lines of Page 25 in \cite{FHHSY:2009} contain typos and should be read as \eqref{eq:fq} and \eqref{eq:ft}.}. \end{fct} \begin{dfn} Let $F_\lambda(z;q,t)\mathbin{:=} f^{(q^{-1})}_\lambda(z;q,t)=f^{(t)}_\lambda(z;q,t)$.
\end{dfn} \begin{dfn} Define the isomorphism $\iota : \Lambda_\mathbb{F} \rightarrow \mathcal{A}$ by \begin{align*} \iota(e_\lambda)= \dfrac{t^{-|\lambda|}}{(1-t^{-1})^{|\lambda|}} \dfrac{1}{\prod_{i=1}^{\ell(\lambda)}\lambda_i!}\epsilon_{\lambda}(z;q). \end{align*} \end{dfn} \begin{prp}\label{prp:iota} (1) We have \begin{align*} \iota(g_\lambda)= \dfrac{(-1)^{|\lambda|}}{(1-q)^{|\lambda|}} \dfrac{1}{\prod_{i=1}^{\ell(\lambda)}\lambda_i!}\epsilon_{\lambda}(z;t). \end{align*} (2) We have $\iota(P_\lambda)=F_\lambda(z;q,t)$. \end{prp} \begin{proof} (1) By the Wronski relation given in \cite[Proposition 3.11]{FHHSY:2009}. (2) By \eqref{eq:gtoP} and the definitions of $\iota$ and $F_\lambda$. \end{proof} \begin{rmk} To explain the importance of the element $F_\lambda(z;q,t)$, we recall the Gordon filtration on $\mathcal{A}$. For $p\in \mathbb{F}$ and $\lambda=(\lambda_1,\ldots,\lambda_l)\vdash n$, we define a linear map \begin{align}\label{eq:special} \begin{array}{l c c l} \varphi_\lambda^{(p)} : &\mathcal{A}_n &\longrightarrow&\mathbb{F}(y_1,\ldots,y_l)\\ &f(z_1,\ldots,z_n)&\mapsto &f(y_1,p y_1,\ldots,p^{\lambda_1-1}y_1,\\ & & &\phantom{f(} y_2,p y_2,\ldots,p^{\lambda_2-1}y_2,\\ & & &\phantom{f(} \ldots \\ & & &\phantom{f(} y_l,p y_l,\ldots,p^{\lambda_l-1}y_l), \end{array} \end{align} called the \emph{specialization map}. The Gordon filtration is given by $\mathcal{A}_{n,\lambda}^{(q_i)} \mathbin{:=}\bigcap_{\mu\not\le\lambda}\ker\varphi_\mu^{(q_i)}$ for $i=1,2,3$. Then by \cite[Theorem 1.19]{FHHSY:2009}, $\mathcal{A}_{n,\lambda}^{(q^{-1})} \cap \mathcal{A}_{n,{\lambda'}}^{(t)}$ is one-dimensional and is spanned by $F_\lambda(z;q,t)$. \end{rmk} \subsection{The kernel function} Now we are ready to study the kernel function from the point of view of the algebra $\mathcal{A}$. \begin{dfn} Introduce $K_n(x,z;q,t)\in \Lambda_\mathbb{F}^n \otimes \mathcal{A}_n$ as \begin{align*} K_n(x,z;q,t) \mathbin{:=} \sum_{\lambda\vdash n} Q_\lambda(x)F_\lambda(z;q,t).
\end{align*} \end{dfn} \begin{rmk} The name ``kernel function'' comes from $\Pi(x,y)$ in \eqref{eq:kernel:original}. By Proposition \ref{prp:iota} (2), we have, in a suitable completion of $\Lambda_\mathbb{F}\otimes \mathcal{A}$, \begin{align*} \sum_{n\geq 0} K_n(x,z;q,t)=\sum_\lambda Q_\lambda(x) \iota(P_\lambda), \end{align*} where $\lambda$ runs over all the partitions of every non-negative integer. Thus $K_n$ is a homogeneous component of the analogue of $\Pi(x,y)$. \end{rmk} \begin{prp}\label{prp:K2} In $\Lambda_\mathbb{F}\otimes \mathcal{A}$ we have \begin{align}\label{eq:K2} K_n(x,z;q,t)=\dfrac{(-1)^n}{(1-q)^n n!} \sum_{\lambda\vdash n}m_\lambda(x) \epsilon_\lambda(z;t) \dfrac{|\lambda|!}{\prod_{i=1}^{\ell(\lambda)}\lambda_i!}. \end{align} \end{prp} \begin{proof} First we show \begin{align}\label{eq:Qtom} m_\lambda(x)=\sum_{\mu\le\lambda}c_{\mu\lambda}^{g\to P}(q,t)Q_\mu(x;q,t). \end{align} Since $\{Q_\mu(x;q,t)\}$ is a basis of $\Lambda_\mathbb{F}$, we can expand $m_\lambda(x)=\sum_{\nu}c_{\nu\lambda} Q_\nu(x;q,t)$ with $c_{\nu\lambda}\in\mathbb{F}$. Then the pairing $\qtpr{m_\lambda,P_\mu}$ is calculated as $\qtpr{m_\lambda,P_\mu} =\qtpr{\sum_{\nu}c_{\nu\lambda} Q_\nu(z;q,t),P_\mu} =c_{\mu\lambda}$, where we used the fact that $\{P_\lambda\}$ and $\{Q_\lambda\}$ are dual. On the other hand, by \eqref{eq:gtoP}, we have $\qtpr{m_\lambda,P_\mu} =\qtpr{m_\lambda,\sum_{\nu\ge\mu}c_{\mu\nu}^{g\to P}(q,t)g_\nu} =c_{\mu\lambda}^{g\to P}(q,t)$. Comparing both expressions, we obtain \eqref{eq:Qtom}.
Then we have \begin{align*} \text{RHS of } \eqref{eq:K2} &=\dfrac{(-1)^n}{(1-q)^n n!}\sum_{\lambda\vdash n} \sum_{\mu\le\lambda}c_{\mu\lambda}^{g\to P}(q,t)Q_\mu(x;q,t) \epsilon_\lambda(z;t) \dfrac{|\lambda|!}{\prod_{i=1}^{\ell(\lambda)}\lambda_i!}\\ &=\dfrac{(-1)^n}{(1-q)^n n!}\sum_{\mu\vdash n}Q_\mu(x;q,t) \sum_{\lambda\ge\mu}c_{\mu\lambda}^{g\to P}(q,t) \epsilon_\lambda(z;t) \dfrac{|\lambda|!}{\prod_{i=1}^{\ell(\lambda)}\lambda_i!} =\sum_{\mu\vdash n}Q_\mu(x;q,t)F_\mu(z;q,t). \end{align*} \end{proof} Consider the case of finitely many variables and set $x=(x_1,x_2,\ldots,x_m)$. Also let $z=(z_1,z_2,\ldots,z_n)$ be the set of variables for the elements in $\mathcal{A}_n$. \begin{prp}\label{prp:K} We have \begin{align} \label{eq:K:dfn} K_n(x,z;q,t) = \dfrac{(-1)^{n}}{(1-q)^{n}n!} \sum_{i_1=1}^m \sum_{i_2=1}^m \cdots \sum_{i_n=1}^m x_{i_1}x_{i_2}\cdots x_{i_n} \prod_{1\le\alpha<\beta\le n}\gamma_{i_\alpha,i_\beta}(z_\alpha,z_\beta;q,t), \end{align} where the function $\gamma_{i,j}(z,w;q,t)$ is given by \begin{align}\label{eq:gamma} \gamma_{i,j}(z,w;q,t)\mathbin{:=} \begin{cases} \dfrac{(z-t w)(z-t^{-1}w)}{(z-w)^2} & i=j,\\ \dfrac{(z-q^{-1} w)(z-t w)(z-q t^{-1}w)}{(z-w)^3} & i<j,\\ \dfrac{(z-q w)(z-t^{-1} w)(z-q^{-1} t w)}{(z-w)^3}& i>j. \end{cases} \end{align} \end{prp} \begin{proof} Note that we have \begin{align}\label{eq:gamma-omega} \gamma_{i,j}(z,w;q,t) = \begin{cases} \epsilon_2(z,w;t) & i=j,\\ \omega(z,w;q^{-1},t,q t^{-1}) & i<j,\\ \omega(z,w;q,t^{-1},q^{-1} t)=\omega(w,z;q^{-1},t,q t^{-1})& i>j, \end{cases} \end{align} which is obtained from \eqref{eq:omega}, \eqref{eq:ep} and \eqref{eq:gamma}. 
Thus we have \begin{align*} &\sum_{i_1=1}^m \sum_{i_2=1}^m \cdots \sum_{i_n=1}^m x_{i_1}x_{i_2}\cdots x_{i_n} \prod_{1\le\alpha<\beta\le n}\gamma_{i_\alpha,i_\beta}(z_\alpha,z_\beta;q,t)\\ &= \sum_{I_1,\ldots,I_m}x_1^{a_1}x_2^{a_2}\cdots x_m^{a_m} \prod_{k=1}^m\epsilon_{a_k}(z_{I_k};t) \prod_{1\le i<j\le m}\prod_{\alpha\in I_i,\beta\in I_j} \omega(z_\alpha,z_\beta;q^{-1},t,q t^{-1}), \end{align*} where the $I_k$ ($k=1,2,\ldots,m$) are pairwise disjoint subsets of $\{1,2,\ldots,n\}$ such that $|I_k|=a_k$ and $I_1\cup I_2\cup\cdots\cup I_m=\{1,\ldots,n\}$. Using the multi-index notation $a=(a_1,\ldots,a_m)\in\mathbb{N}^m$, we have \begin{align*} =\sum_{a\in\mathbb{N}^m, |a|=n}x^{a} \dfrac{n!}{\prod_{k=1}^m a_k!} \epsilon_{a}(z;t) \end{align*} with $|a|\mathbin{:=} a_1+\cdots+a_m$. Applying $\mathfrak{S}_m$ to the running index $a$ and averaging, we have \begin{align*} =\dfrac{1}{m!}\sum_{\sigma\in\mathfrak{S}_m}\sum_{a\in\mathbb{N}^m,|a|=n} x^{\sigma(a)}\dfrac{n!}{\prod_{k=1}^m a_k!} \epsilon_{\sigma(a)}(z;t). \end{align*} Dividing $\mathfrak{S}_m$ by the stabilizer $\text{Stab}(a)$ of $a\in\mathbb{N}^m$ and using the commutativity of $\mathcal{A}$, we have \begin{align*} =\dfrac{1}{m!}\sum_{a\in\mathbb{N}^m,|a|=n} \#\text{Stab}(a) \dfrac{n!}{\prod_{k=1}^m a_k!} \Big( \sum_{\overline{\sigma}\in\mathfrak{S}_m/\text{Stab}(a)} x^{\overline{\sigma}(a)} \Big)\epsilon_{a}(z;t). \end{align*} Then we obtain the result by taking a partition $\lambda$ as the running index. \end{proof} \subsection{Macdonald's tableau sum formula}\label{subsec:tableau} We recall the tableau sum formula for the Macdonald polynomials. Let $\Tb{\lambda;m}$ denote the set of all the ways of writing the numbers $1,2,\ldots,m$ into the Young diagram of shape $\lambda$ \emph{without any conditions}. Reading the numbers from left to right and then from top to bottom, namely in the English reading order, we get a bijection between $\Tb{\lambda;m}$ and the set $ \{1,2,\ldots,m\}^n$.
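As a small concrete illustration of the three sets of tableaux, take $\lambda=(2,1)$ and $m=3$: then $\#\Tb{\lambda;m}=3^3=27$, the row-non-decreasing fillings number $18$, and the semi-standard tableaux number $8$. A brute-force enumeration confirms these counts:

```python
import itertools

lam = (2, 1)            # shape (2,1), entries drawn from {1,...,m}
m, n = 3, sum(lam)

def rows(T):
    # split a reading-order tuple into the rows of the shape lam
    out, pos = [], 0
    for r in lam:
        out.append(T[pos:pos + r])
        pos += r
    return out

Tb = list(itertools.product(range(1, m + 1), repeat=n))      # all fillings
Tbr = [T for T in Tb
       if all(all(r[i] <= r[i + 1] for i in range(len(r) - 1)) for r in rows(T))]
SSTb = [T for T in Tbr          # additionally: columns strictly increasing
        if all(rows(T)[i][j] < rows(T)[i + 1][j]
               for i in range(len(lam) - 1) for j in range(lam[i + 1]))]

print(len(Tb), len(Tbr), len(SSTb))   # 27 18 8
```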
Let $\Tbr{\lambda;m}$ denote the subset of $\Tb{\lambda;m}$ in which the numbers in each row are arranged in a non-decreasing manner. An element of $\Tbr{\lambda;m}$ is uniquely specified by the numbers $\theta_{i,j}$, where $\theta_{i,j}$ denotes the number of entries equal to $j$ in the $i$-th row. We have $\lambda_i = \sum_{k=1}^{m} \theta_{i,k}$ for $1 \leq i \leq \ell(\lambda)$. Next we introduce a sequence $\lambda^{(j)}=(\lambda^{(j)}_1,\lambda^{(j)}_2,\ldots)$ by setting $\lambda^{(j)}_i\mathbin{:=} \sum_{k=1}^j\theta_{i,k}$. It is clear that we have $\emptyset=\lambda^{(0)}\subset \lambda^{(1)}\subset\cdots \subset \lambda^{(m)}=\lambda$. Note that $\lambda^{(j)}$ may not be a partition. Let $\SSTb{\lambda;m}$ be the set of semi-standard tableaux. A semi-standard tableau $T$ is expressed as a sequence of partitions $\emptyset=\lambda^{(0)}\subset \lambda^{(1)}\subset\cdots \subset \lambda^{(m)}=\lambda$, where the skew diagrams $\lambda^{(k)}/\lambda^{(k-1)}$ ($k=1,2,\ldots,m$) are horizontal strips. In this case we have $\theta_{i,j}=0$ for $i>j$, $\lambda_i = \sum_{k=i}^{m} \theta_{i,k}$ for $1 \leq i \leq \ell(\lambda)$, and \begin{align*} 0\leq \theta_{i,j}\leq \lambda_i-\lambda_{i+1}- \sum_{k=j+1}^{\ell(\lambda)}(\theta_{i,k}-\theta_{i+1,k}) \end{align*} for $1\leq i<j\leq \ell(\lambda)$. It is known that $b_\lambda(q,t)$ in \eqref{eq:b_def} has the following factorized form: \begin{align} \label{eq:b_fact} Q_\lambda(x;q,t)=b_\lambda(q,t) P_\lambda(x;q,t),\quad b_\lambda(q,t) = \prod_{s\in\lambda}\dfrac{1-q^{a(s)}t^{\ell(s)+1}}{1-q^{a(s)+1}t^{\ell(s)}}, \end{align} where for a box $s=(i,j)$ of $\lambda$, $a(s)\mathbin{:=} \lambda_i-j$ is the arm-length and $\ell(s)\mathbin{:=} \lambda'_j-i$ is the leg-length. The polynomial $P_\lambda(x;q,t)$ admits the tableau sum formula: \begin{align*} P_\lambda(x;q,t)=\sum_{T\in\SSTb{\lambda;m}}x^T \psi_T(q,t).
\end{align*} Here the coefficient $\psi_T(q,t)\in\mathbb{F}$ is determined by \begin{align}\label{eq:tableau:formula} \begin{split} &\psi_T(q,t)\mathbin{:=} \prod_{k=1}^m \psi_{\lambda^{(k)}/\lambda^{(k-1)}}(q,t),\\ &\psi_{\lambda/\mu}(q,t)\mathbin{:=} \prod_{1\le i \le j\le \ell(\mu)} \dfrac{f(q^{\mu_i-\mu_j}t^{j-i})f(q^{\lambda_i-\lambda_{j+1}}t^{j-i})} {f(q^{\lambda_i-\mu_j}t^{j-i})f(q^{\mu_i-\lambda_{j+1}}t^{j-i})},\quad f(u)\mathbin{:=}\dfrac{(t u;q)_\infty}{(q u;q)_\infty}. \end{split} \end{align} The next proposition is obtained by simple combinatorics, and we omit the proof for lack of space. \begin{prp}\label{prp:tableau} Let $T\in \Tbr{\lambda;m}\setminus \SSTb{\lambda;m}$ and regard $T$ as a sequence $\lambda^{(j)}$ as explained above. Then $\psi_T(q,t)$ calculated from (\ref{eq:tableau:formula}) vanishes. \end{prp} \subsection{Tableau sum formula and $K_n(x,z;q,t)$} Now we investigate the relationship between the function $K_n(x,z;q,t)$ and the tableau sum formula for the Macdonald polynomials. We fix a natural number $m$ and consider the case $x=(x_1,\ldots,x_m)$. In order to state the main result, we need to consider the composition of the specialization maps $\varphi_\lambda^{(p)}$ of \eqref{eq:special}. For a partition $\lambda=(\lambda_1,\ldots,\lambda_l)$ of $n$ and $\zeta\in\mathbb{F}$, we define the map $\widetilde{\varphi}^{(\zeta)}_\lambda$ by \begin{align}\label{eq:zeta} \begin{array}{l c c l} \widetilde{\varphi}^{(\zeta)}_\lambda\mathbin{:=} \varphi_{(l)}^{(\zeta)}\circ\varphi_\lambda^{(q^{-1})} : &\mathbb{F}(z_1,\ldots,z_n) &\longrightarrow&\mathbb{F}(y)\\ &f(z_1,\ldots,z_n)&\mapsto&f(y,q^{-1} y,\ldots,q^{-(\lambda_1-1)}y,\\ & & &\hskip 1em \zeta y,q^{-1}\zeta y,\ldots,q^{-(\lambda_2-1)}\zeta y,\\ & & &\hskip 1em \ldots,\\ & & &\hskip 1em \zeta^{l-1}y,q^{-1}\zeta^{l-1}y, \ldots,q^{-(\lambda_l-1)}\zeta^{l-1}y).
\end{array} \end{align} Here the map $\varphi_{(l)}^{(\zeta)}$ denotes the substitution $\varphi_{(l)}^{(\zeta)}g(y_1,\ldots,y_{l})=g(y,\zeta y,\ldots,\zeta^{l-1} y)$. \begin{thm}\label{thm:tableau} For partitions $\mu,\lambda$ of $n$, $\widetilde{\varphi}^{(\zeta)}_\lambda(F_\mu / F_\lambda)$ is regular at $\zeta=t$ and its value is $\delta_{\lambda,\mu}$. \end{thm} Our proof uses the tableau sum formula of $P_\lambda(x;q,t)$. Let us express the statement as \begin{align*} \lim_{\zeta\to t}\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{F_\mu(z;q,t)}{F_\lambda(z;q,t)} =\delta_{\lambda,\mu}. \end{align*} Then by using Proposition~\ref{prp:K2}, it can be rewritten into the next equivalent form. \begin{align}\label{eq:thm:tableau} \lim_{\zeta\to t}\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{K_n(x,z;q,t)}{F_\lambda(z;q,t)} =Q_\lambda(x;q,t). \end{align} Regard $T=(i_1,i_2,\ldots,i_n)\in \{1,2,\ldots,m\}^n $ as an element of $\Tb{\lambda;m}$. For simplicity we set \begin{align*} \gamma_T(z)\mathbin{:=} \prod_{1\le\alpha<\beta\le n}\gamma_{i_\alpha,i_\beta}(z_\alpha,z_\beta;q,t). \end{align*} We also use the same symbol for the cases $T\in \Tbr{\lambda;m}$ and $T\in \SSTb{\lambda;m}$. By Proposition \ref{prp:K2}, \eqref{eq:thm:tableau} is equivalent to \begin{align* \dfrac{(-1)^{n}}{(1-q)^{n}n!} \sum_{T\in \Tb{\lambda;m}} x^{T} \lim_{\zeta\to t}\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_T(z)}{F_\lambda(z;q,t)} =Q_\lambda(x;q,t). \end{align*} It is easy to see from the definition of $\gamma_{i,j}$ that all the terms with $T\in \Tb{\lambda;m}\setminus\Tbr{\lambda;m}$ vanish after the first specialization $\varphi^{(q^{-1})}_\lambda$. Thus we may replace $\sum_{T\in \Tb{\lambda;m}}$ by $\sum_{T\in \Tbr{\lambda;m}}$. Hence it is enough to show that for $T\in \Tbr{\lambda;m}$ we have \begin{align*} \dfrac{(-1)^{n}}{(1-q)^{n}n!}\lim_{\zeta\to t}\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_T(z)}{F_\lambda(z;q,t)}=b_\lambda(q,t)\psi_T(q,t). 
\end{align*} We prove this in two steps. \begin{prp}\label{prp:b-psi} Let $D\in\SSTb{\lambda;m}$ be given by $\theta_{i,i}=\lambda_i$ and $\theta_{i,j}=0$ for $i\neq j$. Then we have \begin{align} \label{eq:b} &\dfrac{(-1)^{n}}{(1-q)^{n}n!} \lim_{\zeta\to t}\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_D(z)}{F_\lambda(z;q,t)}=b_\lambda(q,t),\\ \label{eq:psi} &\lim_{\zeta\to t}\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_T(z)}{\gamma_D(z)}=\psi_T(q,t). \end{align} \end{prp} \begin{proof} The proof is postponed until \S \ref{subsec:prf:b-psi}. \end{proof} \section{Ding-Iohara algebra and kernel function} In this section all objects are defined on $\widetilde{\mathbb{F}}\mathbin{:=}\mathbb{Q}(q^{1/2},t^{1/2})$. We will also use $p\mathbin{:=} q/t$. \subsection{Review of the Ding-Iohara algebra $\mathcal{U}(q,t)$} Recall that the Ding-Iohara algebra \cite{DI:1997} was introduced as a generalization of the quantum affine algebra, which respects the structure of the Drinfeld coproduct. In \cite[Appendix A]{FHHSY:2009}, the authors introduced a version $\mathcal{U}(q,t)$ of the Ding-Iohara algebra having two parameters $q$ and $t$. \begin{dfn}\label{dfn:uqt} Set \begin{align*} g(z)\mathbin{:=}\dfrac{G^+(z)}{G^-(z)},\qquad G^\pm(z)\mathbin{:=}(1-q^{\pm1}z)(1-t^{\mp 1}z)(1-q^{\mp1}t^{\pm 1}z).
\end{align*} Then we define $\mathcal{U}(q,t)$ to be a unital associative algebra generated by the Drinfeld currents \begin{align*} x^\pm(z)=\sum_{n\in \mathbb{Z}}x^\pm_n z^{-n},\qquad \psi^\pm(z)=\sum_{\pm n\in \mathbb{N}}\psi^\pm_n z^{-n}, \end{align*} and the central element $\gamma^{\pm 1/2}$, satisfying the defining relations \begin{align*} \begin{array}{ll} \psi^\pm(z) \psi^\pm(w)= \psi^\pm(w) \psi^\pm(z), &\psi^+(z)\psi^-(w)= \dfrac{g(\gamma^{+1} w/z)}{g(\gamma^{-1}w/z)}\psi^-(w)\psi^+(z),\\ \psi^+(z)x^\pm(w)=g(\gamma^{\mp 1/2}w/z)^{\mp1} x^\pm(w)\psi^+(z), &\psi^-(z)x^\pm(w)=g(\gamma^{\mp 1/2}z/w)^{\pm1} x^\pm(w)\psi^-(z),\\ \multicolumn{2}{c}{\parbox{15cm}{ $[x^+(z),x^-(w)]=\dfrac{(1-q)(1-1/t)}{1-q/t} \big( \delta(\gamma^{-1}z/w)\psi^+(\gamma^{1/2}w)- \delta(\gamma z/w)\psi^-(\gamma^{-1/2}w) \big)$,}}\\ G^{\mp}(z/w)x^\pm(z)x^\pm(w)=G^{\pm}(z/w)x^\pm(w)x^\pm(z). \end{array} \end{align*} \end{dfn} \begin{fct}[{\cite[Proposition A.2]{FHHSY:2009}}] The algebra $\mathcal{U}(q,t)$ has a Hopf algebra structure with \\ Coproduct $\Delta$: \begin{align*} \begin{array}{ll} \Delta(\gamma^{\pm 1/2})=\gamma^{\pm 1/2} \otimes \gamma^{\pm 1/2}, & \Delta (x^+(z))= x^+(z)\otimes 1+ \psi^-(\gamma_{(1)}^{1/2}z)\otimes x^+(\gamma_{(1)}z), \\ \Delta (\psi^\pm(z))= \psi^\pm (\gamma_{(2)}^{\pm 1/2}z)\otimes \psi^\pm (\gamma_{(1)}^{\mp 1/2}z), &\Delta (x^-(z))= x^-(\gamma_{(2)}z)\otimes \psi^+(\gamma_{(2)}^{1/2}z)+1 \otimes x^-(z), \end{array} \end{align*} where $\gamma_{(1)}^{\pm 1/2}=\gamma^{\pm 1/2}\otimes 1$ and $\gamma_{(2)}^{\pm 1/2}=1\otimes \gamma^{\pm 1/2}$.\\ Counit $\varepsilon$: \begin{align*} \varepsilon(\gamma^{\pm 1/2})=1,\qquad \varepsilon(\psi^\pm(z))=1, \qquad \varepsilon(x^\pm(z))=0. \end{align*} Antipode $a$: \begin{align*} \begin{array}{ll} a(\gamma^{\pm 1/2})=\gamma^{\mp 1/2}, &a(x^+(z))=-\psi^-(\gamma^{-1/2}z)^{-1}x^+(\gamma^{-1} z), \\ a(\psi^{\pm}(z))=\psi^{\pm}(z)^{-1}, &a(x^-(z))=-x^-(\gamma^{-1} z)\psi^+(\gamma^{-1/2}z)^{-1}. 
\end{array} \end{align*} \end{fct} \subsection{Level one representation of $\mathcal{U}(q,t)$} We say a representation of $\mathcal{U}(q,t)$ is of level $k$ if the central element $\gamma$ is realized by the constant $(t/q)^{k/2}=p^{-k/2}$. \begin{fct}[{\cite[Proposition A.6]{FHHSY:2009}}]\label{fct:level1rep} Consider the Heisenberg Lie algebra $\mathfrak{h}$ over $\mathbb{F}$ with the generators $a_n$ ($n\in\mathbb{Z}$) and the relations \begin{align}\label{eq:boson_macdonald} [a_m,a_n] = m\dfrac{1-q^{|m|}}{1-t^{|m|}}\delta_{m+n,0}\, a_0. \end{align} Let $\mathfrak{h}^{\ge0}$ (resp. $\mathfrak{h}^{<0}$) be the subalgebra generated by $a_n$ for $n\ge 0$ (resp. $n< 0$). Consider the one-dimensional representation $\widetilde{\mathcal{F}}$ of $\mathfrak{h}^{\ge0}$, where $a_n$ ($n>0$) acts trivially and $a_0$ acts by some fixed element of $\widetilde{\mathbb{F}}$. Then one has the induced Fock representation $\mathcal{F}\mathbin{:=}\operatorname{Ind}_{\mathfrak{h}^{\ge0}}^\mathfrak{h}\widetilde{\mathcal{F}}$ of $\mathfrak{h}$. Let us also introduce the following four vertex operators \cite[(1.7),(3.23),(3.27),(3.28)]{FHHSY:2009}. \begin{align*} &\eta(z)\mathbin{:=} \exp\Big( \sum_{n>0} \dfrac{1-t^{-n}}{n}a_{-n} z^{n} \Big) \exp\Big(-\sum_{n>0} \dfrac{1-t^{n} }{n}a_n z^{-n}\Big), \\ &\xi(z) \mathbin{:=} \exp\Big(-\sum_{n>0} \dfrac{1-t^{-n}}{n}p^{-n/2}a_{-n} z^{n}\Big) \exp\Big(\sum_{n>0} \dfrac{1-t^{n}}{n} p^{-n/2} a_n z^{-n}\Big), \\ &\varphi^+(z) \mathbin{:=} \exp\Big(-\sum_{n>0} \dfrac{1-t^{n}}{n} (1-p^{-n})p^{n/4} a_n z^{-n} \Big), \\ &\varphi^-(z) \mathbin{:=} \exp\Big(\sum_{n>0} \dfrac{1-t^{-n}}{n}(1-p^{-n})p^{n/4} a_{-n} z^{n} \Big).
\end{align*} Then for a fixed $c\in \widetilde{\mathbb{F}}^{\times}$, we have a level one representation $\rho_c(\cdot)$ of $\mathcal{U}(q,t)$ on $\mathcal{F}$ by setting \begin{align*} \rho_c(\gamma^{\pm 1/2})=p^{\mp 1/4},\quad \rho_c(\psi^\pm(z))=\varphi^\pm(z),\quad \rho_c(x^+(z))=c\, \eta(z),\quad \rho_c(x^-(z))=c^{-1} \xi(z). \end{align*} \end{fct} \begin{rmk}\label{rmk:level1rep} We can rephrase this fact as follows. Let us define $b_n$'s by the expansion of $\psi^{\pm}$: \begin{align}\label{eq:psi_boson} \psi^+(z)=\psi^+_0 \exp\left(+\sum_{n>0} b_n \gamma^{n/2} z^{-n} \right) , \quad \psi^-(z)=\psi^-_0 \exp\left(-\sum_{n>0} b_{-n} \gamma^{n/2}z^{n} \right). \end{align} Then we have \begin{align}\label{eq:boson_b} [b_m,b_n] =\dfrac{1}{m}(1-q^{-m})(1-t^m)(1-p^m)(\gamma^m-\gamma^{-m}) \gamma^{-|m|}\delta_{m+n,0}, \end{align} and the coproduct for $b_n$ reads \begin{align}\label{eq:Delta_boson} \Delta(b_n)=b_n\otimes \gamma^{-|n|}+1\otimes b_n. \end{align} Then the representation $\rho_c$ is given by $\gamma^{\pm 1/2} \mapsto p^{\mp 1/4}$ and \begin{align*} &b_n \mapsto -\dfrac{1-t^n}{|n|}(p^{|n|/2}-p^{-|n|/2})a_n, \quad \psi_0^{\pm}\mapsto 1,\quad x^+(z) \mapsto c\, \eta(z),\quad x^-(z)\mapsto c^{-1} \xi(z). \end{align*} \end{rmk} \begin{dfn} Consider the $m$-fold tensor representation $\rho_{y_1}\otimes\cdots \otimes\rho_{y_m}$ on $\mathcal{F}^{\otimes m}$ for $m\in\mathbb{Z}_{\geq 2}$. Define $\Delta^{(m)}$ inductively by \begin{align*} \Delta^{(2)}\mathbin{:=} \Delta,\quad \Delta^{(m)}\mathbin{:=}({\rm id}\otimes \cdots \otimes{\rm id}\otimes \Delta)\circ \Delta^{(m-1)}. \end{align*} Since we have $\rho_{y_1}\otimes\cdots \otimes\rho_{y_m}\Delta^{(m)}(\gamma)= \gamma_{(1)}\cdots \gamma_{(m)}=p^{-m/2}$, the level is $m$. We also define \begin{align}\label{eq:rhom} \rho_y^{(m)}\mathbin{:=}\rho_{y_1}\otimes\cdots \otimes\rho_{y_m} \circ\Delta^{(m)} : \mathcal{U}(q,t)\to\mathcal{F}^{\otimes m}. 
\end{align} \end{dfn} \begin{lem}\label{lem:tildeLambda} We have \begin{align*} \rho^{(m)}_y(x^+(z))=\sum_{i=1}^m y_i \widetilde{\Lambda}_i(z), \quad \rho^{(m)}_y(x^-(z))=\sum_{i=1}^m y_i^{-1} \widetilde{\Lambda}^*_i(z), \end{align*} where the $\widetilde{\Lambda}_i(z)$, $\widetilde{\Lambda}_i^*(z)$ are defined to be \begin{align} \label{eq:tildeL} \widetilde{\Lambda}_i(z) &\mathbin{:=} \varphi^-(p^{-1/4}z)\otimes\varphi^-(p^{-3/4}z)\otimes\cdots \otimes \varphi^-(p^{-(2i-3)/4}z) \otimes \eta(p^{-(i-1)/2}z)\otimes 1\otimes \cdots\otimes 1, \\ \label{eq:tildeL*} \widetilde{\Lambda}_i^*(z) &\mathbin{:=} 1\otimes \cdots \otimes 1 \otimes \xi(p^{-(m-i)/2}z) \otimes \varphi^+ (p^{-(2m-2i-1)/4}z)\otimes\cdots \otimes\varphi^+(p^{-1/4}z), \end{align} where $\eta(p^{-(i-1)/2}z)$ and $\xi(p^{-(m-i)/2}z)$ sit in the $i$-th tensor component. \end{lem} \begin{proof} By the definition \eqref{eq:rhom}, Fact \ref{fct:level1rep} and Remark \ref{rmk:level1rep}. \end{proof} \subsection{New currents $t(z)$ and $t^*(z)$} \begin{dfn}\label{dfn:tt*} We define \begin{align} \label{eq:tt*} t(z) \mathbin{:=} \alpha(z) x^{+}(z) \beta(z),\quad t^*(z) \mathbin{:=} \alpha(p^{-1} z)^{-1}x^{-}(p^{-1} \gamma^{-1}z) \beta(\gamma^{-2}p^{-1} z)^{-1}. \end{align} Here we used auxiliary vertex operators \begin{align} \label{eq:alpha_beta} \alpha(z) \mathbin{:=} \exp\Big(-\sum_{n=1}^\infty \dfrac{1}{\gamma^n-\gamma^{-n}} b_{-n} z^{n}\Big), \quad \beta(z) \mathbin{:=} \exp\Big(\sum_{n=1}^\infty \dfrac{1}{\gamma^n-\gamma^{-n}} b_{n}z^{-n}\Big). \end{align} Here the part $1/(\gamma^n-\gamma^{-n})$ is considered to be the formal power sum $\sum_{i=0}^\infty \gamma^{-(2i+1)n}$. \end{dfn} \begin{rmk}\label{rmk:t*} The definition of $t^*(z)$ can be read as \begin{align*} t^*(\gamma p z) = \alpha(\gamma z)^{-1}x^{-}(z) \beta(\gamma^{-1}z)^{-1}. \end{align*} This form is convenient in the actual calculations. 
\end{rmk} \begin{prp}\label{prp:tt*} (1) The elements $t(z)$ and $t^*(z)$ commute with $\alpha(w)$, $\beta(w)$ and $\psi^{\pm}(w)$: \begin{align*} &[t(z),\alpha(w)]=[t(z),\beta(w)]=[t^*(z),\alpha(w)]=[t^*(z),\beta(w)]=0,\\ &[t(z),\psi^{\pm}(w)]=[t^*(z),\psi^{\pm}(w)]=0. \end{align*} (2) Set \begin{align} \label{eq:A} A(z)\mathbin{:=} \exp\Big(\sum_{n=1}^\infty\dfrac{1}{n}\dfrac{(1-q^n)(1-t^{-n})(1-p^{-n} \gamma^{-2n})}{1-\gamma^{-2n}}z^n\Big), \end{align} where the part $1/(1-\gamma^{-2n})$ is considered to be the formal power sum $\sum_{i=0}^\infty \gamma^{-2i n}$. Then we have \begin{align} \label{eq:Att} &A(\tfrac{w}{z})t(z)t(w)-A(\tfrac{z}{w})t(w)t(z) =\dfrac{(1-q)(1-t^{-1})}{1-p} [\delta(p^{-1}\tfrac{w}{z})t^{(2)}(z)-\delta(p \tfrac{w}{z})t^{(2)}(w)], \\ \label{eq:Att*} &A(\tfrac{w}{z})t^*(z)t^*(w)-A(\tfrac{z}{w})t^*(w)t^*(z) =\dfrac{(1-q^{-1})(1-t)}{1-p^{-1}} [\delta(p \tfrac{w}{z})t^{*(2)}(z)-\delta(p^{-1}\tfrac{w}{z})t^{*(2)}(w)], \end{align} where $\delta(z)\mathbin{:=} \sum_{n\in\mathbb{N}}z^n + z^{-1}\sum_{n\in\mathbb{N}}z^{-n}$ is the formal delta function, and \begin{align*} &t^{(2)}(z)\mathbin{:=} \alpha(p z)\alpha(z)x^+(p z)x^+(z)\beta(p z)\beta(z), \\ &t^{*(2)}(z)\mathbin{:=} \alpha(\gamma p z)^{-1}\alpha(\gamma z)^{-1}x^-(p z)x^-(z) \beta(\gamma^{-1}p z)^{-1} \beta(\gamma^{-1} z)^{-1}. \end{align*} (3) As in (2), set \begin{align} \label{eq:B} B(z)\mathbin{:=} \exp\Big(\sum_{n=1}^\infty\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(p^{-2n} \gamma^{-2n} -p^{-n} \gamma^{-2n} )} {1-\gamma^{-2n}} z^n \Big). \end{align} Then \begin{align} \label{eq:Btt*} B(\tfrac{w}{z})t(z)t^*(w)-B(\gamma^2 p^2\tfrac{z}{w})t^*(w)t(z) =\dfrac{(1-q)(1-t^{-1})}{1-p} \Big(\delta(p^{-1}\tfrac{w}{z})\psi_0^+ -\delta(\gamma^{-2}p^{-1}\tfrac{w}{z})\psi_0^-\Big). \end{align} \end{prp} \begin{proof} See \S \ref{subsec:prf:tt*}.
\end{proof} In the next subsection we show that the currents $t(z)$, $t^*(z)$ are connected to the realization of deformed $\mathcal{W}$ algebra in the Fock representation of $\mathcal{U}(q,t)$. \subsection{Deformed algebra $\mathcal{W}_{q,p}(\mathfrak{sl}_m)$} We basically follow the description of $\mathcal{W}_{q,p}(\mathfrak{sl}_m)$ in \cite[\S 4]{FF:1996}. As for the connection between the singular vectors of the $\mathcal{W}_{q,p}(\mathfrak{sl}_m)$ and the Macdonald polynomials, see \cite{SKAO:1996,AKOS:1996}. \begin{dfn}Set \begin{align*} f_{k,l}(z)\mathbin{:=} \exp\Big(\sum_{n=1}^\infty \dfrac{(1-q^n)(1-t^{-n})(p^{(k-1)n}-p^{(l-1)n})}{1-p^{l n}}z^n \Big). \end{align*} \end{dfn} \begin{rmk}\label{rmk:ABf} Our functions $A(z)$ and $B(z)$ give special cases of this function under $\rho_y^{(m)}$, that is, \begin{align*} \rho_y^{(m)}(A(z))=f_{1,m}(z),\quad \rho_y^{(m)}(B(z))=f_{m-1,m}(z). \end{align*} \end{rmk} \begin{dfn} Set \begin{align} \label{eq:TT*} &T(z)=T_1(z)\mathbin{:=} \rho^{(m)}_y(t(z)),\quad T^*(z)=T_1^*(z)\mathbin{:=} \rho^{(m)}_y(t^*(z)). \end{align} Let us also define \begin{align} \label{eq:LL*} \Lambda_i(z) \mathbin{:=} \rho^{(m)}_y(\alpha(z)) \widetilde{\Lambda}_i(z) \rho^{(m)}_y(\beta(z)), \quad \Lambda_i^*(z) \mathbin{:=} \rho^{(m)}_y(\alpha(p^{-1} z)^{-1}) \widetilde{\Lambda}^*_i(p^{(m-2)/2}z) \rho^{(m)}_y(\beta(\gamma^{-2}p^{-1} z)^{-1}). \end{align} Then by Definition \ref{dfn:tt*} and Lemma \ref{lem:tildeLambda} we have \begin{align}\label{eq:TL} T_1(z)=\sum_{i=1}^m y_i \Lambda_i(z), \quad T^*_1(z)=\sum_{i=1}^m y_i^{-1} \Lambda_i^*(z). 
\end{align} For $i=2,\ldots,m$, we further define \begin{align} \label{eq:TiL} &T_i(z) \mathbin{:=} \sum_{1\le j_1<\cdots<j_i\le m} y_{j_1}y_{j_2}\cdots y_{j_i} :\Lambda_{j_1}(z)\Lambda_{j_2}(z p)\cdots\Lambda_{j_i}(z p^{i-1}):, \\ \label{eq:TiL*} &T_i^*(z) \mathbin{:=} \sum_{1\le j_1<\cdots<j_i\le m} y_{j_1}^{-1}y_{j_2}^{-1}\cdots y_{j_i}^{-1} :\Lambda_{j_1}^*(z)\Lambda_{j_2}^*(z p^{-1})\cdots\Lambda_{j_i}^*(z p^{-i+1}):. \end{align} \end{dfn} \begin{prp}\label{prp:Lambda} (1) The operator product of $\Lambda_i(z)$ and $\Lambda_j(w)$ is given by \begin{align} \label{eq:prp:Lambda:1} f_{1,m}(\tfrac{w}{z})\Lambda_i(z)\Lambda_j(w) =:\Lambda_i(z)\Lambda_j(w):\times \begin{cases} 1&i=j,\\ \gamma_+(z,w;q,p)&i<j,\\ \gamma_-(z,w;q,p)&i>j. \end{cases} \end{align} Here we used the symbol \begin{align}\label{eq:gamma+-} \gamma_+(z,w;q,t) \mathbin{:=} \dfrac{(z-q^{-1} w)(z-q t^{-1}w)}{(z-w)(z-t^{-1}w)},\quad \gamma_-(z,w;q,t) \mathbin{:=} \dfrac{(z-q w)(z-q^{-1} t w)}{(z-w)(z-t w)}. \end{align} (2) We have \begin{align*} :\Lambda_1(z)\Lambda_2(p z)\cdots\Lambda_m(p^{m-1}z): =1. \end{align*} Therefore $T_m(z)=y_1y_2\cdots y_m$. (3) The $\Lambda_i(z)$ and $\Lambda_j^*(z)$ are connected by the following equation. \begin{align} \label{eq:prp:Lambda:3} \Lambda_k^*(z)= :\prod_{i=1}^{k-1}\Lambda_i(p^{k-1}z)\prod_{l=k+1}^m \Lambda_l(p^{l-2}z):. \end{align} Thus we also have \begin{align} \label{eq:prp:Lambda:3:2} T^*_1(z)=y_1^{-1}y_2^{-1}\cdots y_m^{-1}T_{m-1}(z). \end{align} (4) The operator product of $\Lambda_i^*(z)$ and $\Lambda_j^*(w)$ is given by \begin{align} \label{eq:prp:Lambda:4} f_{1,m}(\tfrac{w}{z})\Lambda_i^*(z)\Lambda_j^*(w) =:\Lambda_i^*(z)\Lambda_j^*(w):\times \begin{cases} 1&i=j,\\ \gamma_-(z,w;q,p)&i<j,\\ \gamma_+(z,w;q,p)&i>j. \end{cases} \end{align} (5) We have \begin{align*} :\Lambda_1^*(z)\Lambda_2^*(p^{-1} z)\cdots\Lambda_m^*(p^{-m+1}z): =1. \end{align*} Therefore $T^*_m(z)=y_1^{-1}y_2^{-1}\cdots y_m^{-1}$. 
\end{prp} \begin{proof} See \S \ref{subsec:prf:Lambda}. \end{proof} \begin{prp}\label{prp:TT} We have \begin{align} \label{eq:TT:1i} &f_{1,m}(\tfrac{w}{z})T_1(z)T_i(w)-f_{1,m}(p^{1-i}\tfrac{z}{w})T_i(w)T_1(z) =\dfrac{(1-q)(1-t^{-1})}{1-p} [\delta(p^{-1}\tfrac{w}{z}) T_{i+1}(z)-\delta(p^i\tfrac{w}{z}) T_{i+1}(w)], \\ \label{eq:TT:mm} \begin{split} &f_{1,m}(\tfrac{w}{z})T_{m-1}(z)T_{m-1}(w) -f_{1,m}(\tfrac{z}{w})T_{m-1}(w)T_{m-1}(z) \\ &\hskip10em =\dfrac{(1-q^{-1})(1-t)}{1-p^{-1}} [\delta(p \tfrac{w}{z})T_2^*(z)-\delta(p^{-1}\tfrac{w}{z})T_2^*(w)]. \end{split} \end{align} \end{prp} \begin{proof} \eqref{eq:TT:1i} follows from \eqref{eq:TL}, \eqref{eq:TiL} and \eqref{eq:prp:Lambda:1}. See \cite[Theorem 2]{FF:1996} for details\footnote{ It seems that \cite{FF:1996} contains a typo. In (6.2) of that paper, the term $f_{m,N}(\frac{z}{w})$ should be $f_{m,N}(p^{1-m}\frac{z}{w})$.}. \eqref{eq:TT:mm} is also shown by the same method using \eqref{eq:prp:Lambda:3:2}, \eqref{eq:TiL*} and \eqref{eq:prp:Lambda:4}. \end{proof} \subsection{Deformed $\mathcal{W}$ algebra and kernel function} The final result of this paper relates the vacuum expectation values of the deformed algebra $\mathcal{W}_{q,p}$ to the finite kernel function. \begin{thm} Let $\left|0\right>$ be the vacuum of $\mathcal{F}$, that is, $a_0\left|0\right>=\left|0\right>$ and $a_n\left|0\right>=0$ for $n>0$. Let $\left<0\right|$ be the dual vacuum. We denote the tensor $\left|0\right>^{\otimes m}\in\mathcal{F}^{\otimes m}$ by the same symbol $\left|0\right>$. We use a similar abbreviation for the tensored dual vacuum. Then, denoting $y=(y_1,\ldots,y_m)$, we have \begin{align*} \dfrac{(-1)^n}{(1-q)^n n!} \prod_{i<j}f_{1,m}(z_i/z_j) \left<0\right| T_1(z_1)T_1(z_2)\cdots T_1(z_n)\left|0\right> =K_n(y,z;q,p). \end{align*} \end{thm} \begin{proof} This follows from \eqref{eq:TL}, the operator product \eqref{eq:prp:Lambda:1} and the definition \eqref{eq:K:dfn}.
\end{proof} \section{Proofs of the propositions} \subsection{Proof of Proposition \ref{prp:b-psi}}\label{subsec:prf:b-psi} Using the $\gamma_{\pm}$ defined in \eqref{eq:gamma+-}, we have \begin{align}\label{eq:oeg} &\dfrac{\omega(z,w)}{\epsilon_2^{(t)}(z,w)}=\gamma_+(z,w;q,t),\quad \dfrac{\omega(w,z)}{\epsilon_2^{(t)}(w,z)}=\gamma_-(z,w;q,t), \quad \dfrac{\omega(w,z)}{\omega(z,w)} =\dfrac{\gamma_-(z,w;q,t)}{\gamma_+(z,w;q,t)}. \end{align} For later purpose, we prepare the following formulae. Let $\theta$ and $\rho$ be natural numbers. Then \begin{align} \prod_{1\le i<j\le \theta} \gamma_+(q^{-i}z,q^{-j}w;q,t) &=\left(\dfrac{1-tz/w}{1-q z/w}\right)^{\theta} \dfrac{(q z/w)_\theta}{(t z/w)_\theta} \label{eq:+:tri} \\ \prod_{1\le i<j\le \theta} \gamma_-(q^{-i}z,q^{-j}w;q,t) &=\left(\dfrac{1-z/w}{1-q t^{-1}z/w}\right)^{\theta} \dfrac{(q t^{-1}z/w)_\theta}{(z/w)_\theta} \label{eq:-:tri} \\ \prod_{l=1}^\theta \prod_{k=1}^\rho \gamma_+(q^{-l}z,q^{-k}w;q,t) &=\dfrac{(q^{-\rho}w/z)_\theta}{(w/z)_\theta} \dfrac{(q t^{-1}w/z)_\theta}{(q^{-\rho+1}t^{-1}w/z)_\theta} \label{eq:+:1} \\ &=\dfrac{(q^{\rho-\theta+1} z/w)_\theta}{(q^{-\theta+1} z/w)_\theta} \dfrac{(q^{-\theta} t z/w)_\theta}{(q^{\rho-\theta}t z/w)_\theta}, \label{eq:+:2} \\ \prod_{l=1}^\theta \prod_{k=1}^\rho \gamma_-(q^{-l}z,q^{-k}w;q,t) &=\dfrac{(q w/z)_\theta}{(q^{-\rho+1}w/z)_\theta} \dfrac{(q^{-\rho} t w/z)_\theta}{(t w/z)_\theta} \label{eq:-:1} \\ &=\dfrac{(q^{-\theta} z/w)_\theta}{(q^{\rho-\theta}z/w)_\theta} \dfrac{(q^{\rho-\theta+1} t^{-1} z/w)_\theta} {(q^{-\theta+1}t^{-1} z/w)_\theta}. \label{eq:-:2} \end{align} Here we used $(u)_n\mathbin{:=}(u;q)_n=\prod_{i=1}^n(1-u q^{i-1})$. These equations are checked by simple calculations. \subsubsection{Proof of \eqref{eq:b}} By \eqref{eq:fq} we have \begin{align*} \dfrac{(1-q)^n n!}{(-1)^n}F_\lambda(z;q,t)= \left(\dfrac{1-q}{1-t}\right)^{|\lambda|} \sum_{\mu\ge\lambda'}c_{\lambda\mu}^{e\to P}(q,t)\epsilon_{\mu}(z;q) \dfrac{|\mu|!}{\prod_{i=1}^{\ell(\mu)}\mu_i!}. 
\end{align*} Recalling the argument of \cite[Proposition 2.19]{FHHSY:2009}, we find that under the specialization $\varphi^{(q^{-1})}_\lambda$ only the term $\epsilon_{\lambda'}$ in $F_\lambda(z;q,t)$ survives and the other terms $\epsilon_{\mu}$ vanish. The specialization result is \footnote{This expression is given at the last equation in the proof of \cite[Proposition 2.19]{FHHSY:2009}, although it contains a typo. The range ``$1\le j< k\le l$" of the third product should be ``$1\le j<k\le \ell(\lambda')$"} \begin{align*} \varphi^{(q^{-1})}_\lambda\epsilon_{\lambda'}(y) &= \dfrac{\prod_{h=1}^{\ell(\lambda)}\lambda'_h!}{n!} \prod_{i=1}^{\ell(\lambda)} \epsilon_{\lambda'_i}(y_1,\ldots,y_{\lambda'_i};q) \prod_{1\le j<k\le \ell(\lambda')} \prod_{\alpha=1}^{\lambda_j'} \prod_{\beta=1}^{\lambda_k'}\omega(q^{-j+1} y_\alpha,q^{-k+1} y_\beta)\\ &= \dfrac{\prod_{h=1}^{\ell(\lambda)}\lambda'_h!}{n!} \prod_{i=1}^{\ell(\lambda)} \epsilon_{\lambda'_i}(y_1,\ldots,y_{\lambda'_i};q) \times \prod_{\alpha=1}^{\ell(\lambda)} \prod_{1\le i<j\le \lambda_\alpha} \omega(q^{-i+1} y_\alpha,q^{-j+1} y_\alpha) \\ &\phantom{=}\times \prod_{1\le \alpha<\beta\le \ell(\lambda)} \Big[\prod_{1\le i<j\le \lambda_\beta} \omega(q^{-i+1} y_\alpha,q^{-j+1} y_\beta) \omega(q^{-i+1} y_\beta,q^{-j+1} y_\alpha) \\ &\phantom{=\times \prod_{1\le \alpha<\beta\le \ell(\lambda)}\Big[} \times \prod_{i=1}^{\lambda_\beta}\prod_{j=\lambda_\beta+1}^{\lambda_\alpha} \omega(q^{-i+1} y_\beta,q^{-j+1} y_\alpha) \Big]. \end{align*} We also note that $c_{\lambda'\lambda}^{e\to P}(q,t)=1$. 
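As a sanity check on the manipulations that follow, the product formulae \eqref{eq:+:tri}--\eqref{eq:-:2} can be verified by exact rational arithmetic for small $\theta$. The following sketch (the helper names `gamma_plus` and `poch` are ours, not from the text) checks \eqref{eq:+:tri} at generic sample values:

```python
import sympy as sp

# exact rational sample points; any generic values work
q, t = sp.Rational(3, 5), sp.Rational(2, 7)
z, w = sp.Rational(11, 4), sp.Rational(5, 3)

def gamma_plus(Z, W):
    # gamma_+(Z, W; q, t) of (eq:gamma+-)
    return (Z - W/q)*(Z - q*W/t)/((Z - W)*(Z - W/t))

def poch(a, n):
    # q-Pochhammer symbol (a; q)_n = prod_{i=0}^{n-1} (1 - a q^i)
    return sp.prod([1 - a*q**i for i in range(n)])

theta = 4
u = z/w
lhs = sp.prod([gamma_plus(z/q**i, w/q**j)
               for i in range(1, theta + 1)
               for j in range(i + 1, theta + 1)])
rhs = ((1 - t*u)/(1 - q*u))**theta * poch(q*u, theta)/poch(t*u, theta)
assert lhs == rhs  # checks (eq:+:tri) at these sample values
```

Since all quantities are exact rationals, the equality is tested with no numerical error.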
Recalling \eqref{eq:gamma-omega}, we can also calculate the first specialization $\varphi^{(q^{-1})}_\lambda$ of the numerator in \eqref{eq:b} as \begin{align*} &\varphi^{(q^{-1})}_\lambda \gamma_{D}(z) =\Big[\prod_{k=1}^{\ell(\lambda)} \prod_{1\le i<j\le \lambda_k}\epsilon_2(q^{-i},q^{-j};t)\Big] \Big[\prod_{\alpha=1}^{\ell(\lambda)}\prod_{\beta=\alpha}^{\ell(\lambda)} \prod_{i=1}^{\lambda_\alpha}\prod_{j=1}^{\lambda_\beta} \omega(q^{-i}y_\alpha,q^{-j}y_\beta)\Big] \\ &=\Big[\prod_{k=1}^{\ell(\lambda)} \prod_{1\le i<j\le \lambda_k}\epsilon_2(q^{-i},q^{-j};t)\Big] \prod_{1\le \alpha<\beta\le \ell(\lambda)}\Big[ \big( \prod_{1\le i<j\le \lambda_\beta} \omega(q^{-i}y_\alpha,q^{-j}y_\beta) \big) \big( \prod_{1\le j<i\le \lambda_\beta} \omega(q^{-i}y_\alpha,q^{-j}y_\beta) \big) \\ &\phantom{=\times\prod_{1\le \alpha<\beta\le \ell(\lambda)}\Big[} \big( \prod_{j=1}^{\lambda_\beta}\prod_{i=\lambda_\beta+1}^{\lambda_\alpha} \omega(q^{-i}y_\alpha,q^{-j}y_\beta) \big) \big( \prod_{i=1}^{\lambda_\alpha} \omega(q^{-i}y_\alpha,q^{-i}y_\beta) \big) \Big]. \end{align*} Thus we have \begin{align*} & \dfrac{(-1)^n}{(1-q)^n n!} \widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_{D}(z)}{F_\lambda(z;q,t)} = \left(\dfrac{1-t}{1-q}\right)^{|\lambda|} \prod_{\alpha=1}^{\ell(\lambda)} \prod_{1\le i<j\le\lambda_\alpha} \dfrac{\epsilon_2^{(t)}(q^{-i+1}y_\alpha,q^{-j+1}y_\beta)} {\omega(q^{-i+1}y_\alpha,q^{-j+1}y_\beta)}\times \\ \prod_{1\le\alpha<\beta\le\ell(\lambda)}\Big[ \big(\prod_{1\le i<j\le\lambda_\beta} \dfrac{\omega(q^{-i+1}y_\alpha,q^{-j+1}y_\beta)} {\omega(q^{-i+1}y_\beta,q^{-j+1}y_\alpha)}\big) \big(\prod_{i=1}^{\lambda_\beta}\prod_{j=\lambda_\beta+1}^{\lambda_\alpha} \dfrac{\omega(q^{-j+1}y_\alpha,q^{-i+1}y_\beta)} {\omega(q^{-i+1}y_\beta,q^{-j+1}y_\alpha)}\big) \big( \dfrac{\omega(y_\alpha,y_\beta)} {\epsilon_2^{(q)}(y_\beta, y_\alpha)}\big)^{\lambda_\beta} \Big]. 
\end{align*} Then recalling \eqref{eq:oeg} and using \eqref{eq:+:tri} and \eqref{eq:-:tri}, one has \begin{align*} & \prod_{1\le i<j\le\lambda_\alpha} \dfrac{\epsilon_2^{(t)}(q^{-i+1}y_\alpha,q^{-j+1}y_\beta)} {\omega(q^{-i+1}y_\alpha,q^{-j+1}y_\beta)} = \left(\dfrac{1-q}{1-t}\right)^{\lambda_\alpha} \dfrac{(t)_{\lambda_\alpha}}{(q)_{\lambda_\alpha}}, \\ & \prod_{1\le i<j\le\lambda_\beta} \dfrac{\omega(q^{-i+1}y_\alpha,q^{-j+1}y_\beta)} {\omega(q^{-i+1}y_\beta,q^{-j+1}y_\alpha)} \Big[\dfrac{\omega(y_\alpha,y_\beta)} {\epsilon_2^{(q)}(y_\beta, y_\alpha)}\Big]^{\lambda_\beta} = \dfrac{(t y_\beta/y_\alpha)_{\lambda_\alpha}} {(q y_\beta/y_\alpha)_{\lambda_\alpha}} \dfrac{(q t^{-1} y_\beta/y_\alpha)_{\lambda_\alpha}} {(y_\beta/y_\alpha)_{\lambda_\alpha}}, \\ & \prod_{i=1}^{\lambda_\beta}\prod_{j=\lambda_\beta+1}^{\lambda_\alpha} \dfrac{\omega(q^{-j+1}y_\alpha,q^{-i+1}y_\beta)} {\omega(q^{-i+1}y_\beta,q^{-j+1}y_\alpha)} \\ &= \dfrac{(q y_\beta/y_\alpha)_{\lambda_\beta}} {(t y_\beta/y_\alpha)_{\lambda_\beta}} \dfrac{( y_\beta/y_\alpha)_{\lambda_\beta}} {(q t^{-1} y_\beta/y_\alpha)_{\lambda_\beta}} \dfrac{(q^{\lambda_\alpha-\lambda_\beta}t y_\beta/y_\alpha)_{\lambda_\beta}} {(q^{\lambda_\alpha-\lambda_\beta} y_\beta/y_\alpha)_{\lambda_\beta}} \dfrac{(q^{\lambda_\alpha-\lambda_\beta+1}t^{-1} y_\beta/y_\alpha)_{\lambda_\beta}} {(q^{\lambda_\alpha-\lambda_\beta+1}y_\beta/y_\alpha)_{\lambda_\beta}}. 
\end{align*} Combining these factors, we obtain \begin{align*} \dfrac{(-1)^n}{(1-q)^n n!} \lim_{\zeta\to t}\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_{D}(z)}{F_\lambda(z;q,t)} &= \prod_{\alpha=1}^{\ell(\lambda)} \dfrac{(t)_{\lambda_\alpha}}{(q)_{\lambda_\alpha}} \prod_{1\le\alpha<\beta\le\ell(\lambda)} \dfrac{(q^{\lambda_\alpha-\lambda_\beta}t y_\beta/y_\alpha)_{\lambda_\beta}} {(q^{\lambda_\alpha-\lambda_\beta} y_\beta/y_\alpha)_{\lambda_\beta}} \dfrac{(q^{\lambda_\alpha-\lambda_\beta+1}t^{-1} y_\beta/y_\alpha)_{\lambda_\beta}} {(q^{\lambda_\alpha-\lambda_\beta+1}y_\beta/y_\alpha)_{\lambda_\beta}} \\ &= \prod_{1\le\alpha<\beta\le\ell(\lambda)} \dfrac{(q^{\lambda_\alpha-\lambda_\beta}t^{\beta-\alpha+1})_{\lambda_\beta-\lambda_{\beta+1}}} {(q^{\lambda_\alpha-\lambda_\beta+1}t^{\beta-\alpha})_{\lambda_\beta-\lambda_{\beta+1}}}. \end{align*} One can easily check that the last expression equals $b_\lambda(q,t)$ using the form \eqref{eq:b_fact}. \subsubsection{Proof of \eqref{eq:psi}} For a tableau $T\in\Tbr{\lambda;m}$, define $\theta_{\alpha,k}$ and $\lambda_\alpha^{(k)}$ as explained in \S \ref{subsec:tableau}.
Then by the direct calculation we have \begin{align} \widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_{T}(z)}{\gamma_{D}(z)} = & \prod_{1\le\alpha<\beta\le\ell(\lambda)} \prod_{i=1}^{\lambda_\alpha} \prod_{j=1}^{\lambda_\beta} \gamma_+(q^{-i}\zeta^\alpha,q^{-j}\zeta^\beta)^{-1} \label{eq:psi:1} \\ &\times \prod_{k=1}^m \prod_{\alpha=1}^{\ell(\lambda)} \prod_{\beta=\alpha+1}^{\ell(\lambda)} \prod_{i=1}^{\theta_{\alpha,k}} \prod_{j=1}^{\lambda_\beta^{(k-1)}} \gamma_-(q^{-i-\lambda_\alpha^{(k-1)}}\zeta^\alpha,q^{-j}\zeta^\beta) \label{eq:psi:2} \\ &\times \prod_{k=1}^m \prod_{\alpha=1}^{\ell(\lambda)} \prod_{\beta=\alpha}^{\ell(\lambda)} \prod_{i=1}^{\theta_{\alpha,k}} \prod_{j=1+\lambda_\beta^{(k)}}^{\lambda_\beta} \gamma_+(q^{-i-\lambda_\alpha^{(k-1)}}\zeta^\alpha,q^{-j}\zeta^\beta) \label{eq:psi:3} \end{align} By the formula \eqref{eq:+:1} we find that \begin{align}\label{eq:psi:1:2} \lim_{\zeta\to t}\eqref{eq:psi:1}= \prod_{1\le\alpha<\beta\le\ell(\lambda)} \dfrac{(t^{\beta-\alpha})_{\lambda_\alpha}} {(q^{-\lambda_\beta}t^{\beta-\alpha})_{\lambda_\alpha}} \dfrac{(q^{-\lambda_\beta+1}t^{\beta-\alpha-1})_{\lambda_\alpha}} {(q t^{\beta-\alpha-1})_{\lambda_\alpha}}. \end{align} Note that the regularity of \eqref{eq:psi:1} at $\zeta=t$ is included in this equation. Similarly by the formula \eqref{eq:-:1}, \eqref{eq:psi:2} is regular at $\zeta=t$ and its value is \begin{align}\label{eq:psi:2:2} \lim_{\zeta\to t}\eqref{eq:psi:2}= \prod_{k=1}^m \prod_{\alpha=1}^{\ell(\lambda)} \prod_{\beta=\alpha+1}^{\ell(\lambda)} \dfrac{(q^{\lambda_\alpha^{(k-1)}+1}t^{\beta-\alpha})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}} t^{\beta-\alpha+1})_{\theta_{\alpha,k}}} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}} t^{\beta-\alpha+1})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}+1} t^{\beta-\alpha})_{\theta_{\alpha,k}}}. 
\end{align} The rest term \eqref{eq:psi:3} is calculated by the formula \eqref{eq:+:1} and \eqref{eq:+:2}: \begin{align}\label{eq:psi:3:2} \begin{split} \lim_{\zeta\to t}\eqref{eq:psi:3}= \prod_{k=1}^m \prod_{\alpha=1}^{\ell(\lambda)}\Big[ &\dfrac{(t)_{\theta_{\alpha,k}}}{(q)_{\theta_{\alpha,k}}} \dfrac{(q^{\lambda_\alpha-\lambda_\alpha^{(k)}+1})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha-\lambda_\alpha^{(k)}})_{\theta_{\alpha,k}}} \\ &\prod_{\beta=\alpha+1}^{\ell(\lambda)} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}+1} t^{\beta-\alpha-1})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k)}} t^{\beta-\alpha})_{\theta_{\alpha,k}}} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta} t^{\beta-\alpha})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta+1} t^{\beta-\alpha-1})_{\theta_{\alpha,k}}} \Big]. \end{split} \end{align} Note that some parts of \eqref{eq:psi:2:2} and \eqref{eq:psi:3:2} are combined into the next form. \begin{align*} & \Big[\prod_{k=1}^m \prod_{\alpha=1}^{\ell(\lambda)} \prod_{\beta=\alpha+1}^{\ell(\lambda)} \dfrac{(q^{\lambda_\alpha^{(k-1)}+1}t^{\beta-\alpha})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}} t^{\beta-\alpha+1})_{\theta_{\alpha,k}}}\Big] \\ & \times \Big[ \prod_{k=1}^m \prod_{\alpha=1}^{\ell(\lambda)} \dfrac{(q^{\lambda_\alpha-\lambda_\alpha^{(k)}+1})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha-\lambda_\alpha^{(k)}})_{\theta_{\alpha,k}}} \prod_{\beta=\alpha+1}^{\ell(\lambda)} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta} t^{\beta-\alpha})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta+1} t^{\beta-\alpha-1})_{\theta_{\alpha,k}}}\Big] \\ &= \Big[\prod_{\alpha=1}^{\ell(\lambda)} \dfrac{(q)_{\lambda_\alpha}}{(t)_{\lambda_\alpha}}\Big] \Big[\prod_{1\le\alpha<\beta\le\ell(\lambda)} \dfrac{(q^{-\lambda_\beta}t^{\beta-\alpha})_{\lambda_\alpha}} {(q^{-\lambda_\beta+1}t^{\beta-\alpha-1})_{\lambda_\alpha}} \dfrac{(q t^{\beta-\alpha})_{\lambda_\alpha}} {( 
t^{\beta-\alpha+1})_{\lambda_\alpha}}\Big] \\ &= \Big[\prod_{1\le\alpha<\beta\le\ell(\lambda)} \dfrac{(q^{-\lambda_\beta}t^{\beta-\alpha})_{\lambda_\alpha}} {(q^{-\lambda_\beta+1}t^{\beta-\alpha-1})_{\lambda_\alpha}}\Big] \Big[\prod_{1\le\alpha<\beta\le\ell(\lambda)+1} \dfrac{(q t^{\beta-\alpha+1})_{\lambda_\alpha}} {( t^{\beta-\alpha})_{\lambda_\alpha}}\Big] =\eqref{eq:psi:3:2}^{-1}\times \prod_{\alpha=1}^{\ell(\lambda)} \dfrac{(q t^{\ell(\lambda)-\alpha})_{\lambda_\alpha}} {(t^{\ell(\lambda)-\alpha+1})_{\lambda_\alpha}}. \end{align*} Therefore we have \begin{align} \lim_{\zeta\to t} &\widetilde{\varphi}^{(\zeta)}_\lambda \dfrac{\gamma_{T}(z)}{\gamma_D(z)} =\Big[ \prod_{\alpha=1}^{\ell(\lambda)} \dfrac{(q t^{\ell(\lambda)-\alpha})_{\lambda_\alpha}} {( t^{\ell(\lambda)-\alpha+1})_{\lambda_\alpha}} \prod_{k=1}^m\dfrac{(t)_{\theta_{\alpha,k}}}{(q)_{\theta_{\alpha,k}}} \Big]\times \nonumber \\ &\phantom{==}\prod_{k=1}^m \prod_{1\le\alpha<\beta\le\ell(\lambda)} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}} t^{\beta-\alpha+1})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}+1} t^{\beta-\alpha})_{\theta_{\alpha,k}}} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}+1} t^{\beta-\alpha+1})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k)}} t^{\beta-\alpha})_{\theta_{\alpha,k}}} \nonumber \\ &=\prod_{k=1}^m \prod_{1\le\alpha\le\beta\le\ell(\lambda)} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}} t^{\beta-\alpha+1})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_\beta^{(k-1)}+1} t^{\beta-\alpha})_{\theta_{\alpha,k}}} \times \prod_{k=1}^m \prod_{1\le\alpha\le\beta\le\ell(\lambda)} \dfrac{(q^{\lambda_\alpha^{(k-1)}-\lambda_{\beta+1}^{(k)}+1} t^{\beta-\alpha})_{\theta_{\alpha,k}}} {(q^{\lambda_\alpha^{(k-1)}-\lambda_{\beta+1}^{(k)}+1} t^{\beta-\alpha+1})_{\theta_{\alpha,k}}}. 
\label{eq:psi:4} \end{align} Note that the function $f(u)\mathbin{:=} (t u)_\infty/(q u)_\infty$ satisfies $f(u)/f(q^{-\theta}u)=(q^{-\theta+1} u)_\infty/(q^{-\theta} t u)_\infty$. Then \eqref{eq:psi:4} can be rewritten into \begin{align}\label{eq:psi:5} \eqref{eq:psi:4} =\prod_{k=1}^m \prod_{1\le\alpha\le\beta\le\ell(\lambda)} \dfrac{f(q^{\lambda_\alpha^{(k-1)}-\lambda_{\beta}^{(k-1)}}t^{\beta-\alpha})} {f(q^{\lambda_\alpha^{(k)} -\lambda_{\beta}^{(k-1)}}t^{\beta-\alpha})} \dfrac{f(q^{\lambda_\alpha^{(k)}-\lambda_{\beta+1}^{(k)} }t^{\beta-\alpha})} {f(q^{\lambda_\alpha^{(k-1)}-\lambda_{\beta+1}^{(k)}}t^{\beta-\alpha})}. \end{align} Finally, if $T\in\SSTb{\lambda;m}$, we have $k\ge\ell(\lambda^{(k)})$. Therefore, if $\beta\ge k$ then $\lambda_{\beta+1}^{(k)}=\lambda_{\beta}^{(k-1)}=0$. Thus one can see that \begin{align*} \eqref{eq:psi:5} &=\prod_{k=1}^m \prod_{1\le\alpha\le\beta\le\ell(\lambda^{(k-1)})} \dfrac{f(q^{\lambda_\alpha^{(k-1)}-\lambda_{\beta}^{(k-1)}}t^{\beta-\alpha})} {f(q^{\lambda_\alpha^{(k)} -\lambda_{\beta}^{(k-1)}}t^{\beta-\alpha})} \dfrac{f(q^{\lambda_\alpha^{(k)}-\lambda_{\beta+1}^{(k)} }t^{\beta-\alpha})} {f(q^{\lambda_\alpha^{(k-1)}-\lambda_{\beta+1}^{(k)}}t^{\beta-\alpha})} =\prod_{k=1}^m \psi_{\lambda^{(k)}/\lambda^{(k-1)}}(q,t), \end{align*} which is $\psi_T(q,t)$. On the other hand, if $T\in\Tbr{\lambda;m}\setminus\SSTb{\lambda;m}$, one can see that $\eqref{eq:psi:5}=0$. Using Proposition \ref{prp:tableau}, we have the desired equality. \subsection{Proof of Proposition \ref{prp:tt*}} \label{subsec:prf:tt*} First we rewrite the relations of $\psi^{\pm}(z)$ and $x^{\pm}(w)$ given in Definition \ref{dfn:uqt} into the next adjoint form.
\begin{align*} &\exp\Big( \sum_{n>0} \operatorname{ad}_{b_n}\gamma^{n/2}z^{-n} \Big) x^{\pm}(w) =\exp\Big( \mp\sum_{n>0} \dfrac{1}{n}(1-q^n)(1-t^{-n})(1-p^{-n})\gamma^{\mp n/2} \big(\dfrac{w}{z}\big)^n \Big) x^{\pm}(w), \\ &\exp\Big( -\sum_{n>0} \operatorname{ad}_{b_{-n}}\gamma^{n/2}z^{n} \Big) x^{\pm}(w) =\exp\Big(\pm\sum_{n>0} \dfrac{1}{n}(1-q^n)(1-t^{-n})(1-p^{-n})\gamma^{\mp n/2} \big(\dfrac{w}{z}\big)^n \Big) x^{\pm}(w). \end{align*} Here we used the exponential form \eqref{eq:psi_boson} of $\psi^{\pm}$. Then we see that \begin{align} \nonumber \alpha(z) x^{\pm}(w) \alpha(z)^{-1} & =\exp\Big( -\sum_{n>0} \operatorname{ad}_{b_{-n}}\dfrac{z^{n}}{\gamma^{n}-\gamma^{-n}} \Big) x^{\pm}(w) \\ \label{eq:alpha_x} & =\exp\Big( \pm\sum_{n>0} \dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^{n}-\gamma^{-n}} \gamma^{-n/2\mp n/2} \big(\dfrac{z}{w}\big)^{n} \Big) x^{\pm}(w), \\ \nonumber \beta(z) x^{\pm}(w) \beta(z)^{-1} & =\exp\Big( \sum_{n>0} \operatorname{ad}_{b_{n}}\dfrac{z^{-n}}{\gamma^{n}-\gamma^{-n}} \Big) x^{\pm}(w) \\ \label{eq:beta_x} & =\exp\Big( \mp\sum_{n>0} \dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^{n}-\gamma^{-n}} \gamma^{-n/2\mp n/2} \big(\dfrac{w}{z}\big)^{n} \Big) x^{\pm}(w). \end{align} We also prepare the operator product of $\alpha(w)$ and $\beta(z)$, which is easily obtained from the definitions \eqref{eq:alpha_beta} and the commutation relations \eqref{eq:boson_b} of $b_n$'s: \begin{align} \label{eq:alpha_beta_ope} \beta(z)\alpha(w) =\alpha(w)\beta(z) \exp\Big(\sum_{n>0}\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^n-\gamma^{-n}} \gamma^{-n}\big(\dfrac{w}{z}\big)^{n} \Big). 
\end{align} \subsubsection{Proof of (1)} Using \eqref{eq:alpha_beta_ope} and \eqref{eq:alpha_x}, we see that \begin{align*} &\alpha(z) t(w) \alpha(z)^{-1} =\alpha(z) \alpha(w) x^+(w) \beta(w) \alpha(z)^{-1} \\ & =\alpha(w) \alpha(z) x^+(w) \alpha(z)^{-1} \beta(w) \times \exp\Big(-\sum_{n>0}\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^n-\gamma^{-n}} \gamma^{-n}\big(\dfrac{z}{w}\big)^{n} \Big) \\ &=\alpha(w) x^+(w) \beta(w) \times \exp\Big(-\sum_{n>0}\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^n-\gamma^{-n}} \gamma^{-n}\big(\dfrac{z}{w}\big)^{n} \\ &\phantom{=\alpha(w) x^+(w) \beta(w) \times \exp\Big(|} +\sum_{n>0} \dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^{n}-\gamma^{-n}} \gamma^{-n} \big(\dfrac{z}{w}\big)^{n} \Big) =t(w). \end{align*} Thus we have $[t(z),\alpha(w)]=0$. The other relations $[t(z),\beta(w)]=0$, $[t^*(z),\alpha(w)]=[t^*(z),\beta(w)]=0$, $[t(z),\psi^{\pm}(w)]=[t^*(z),\psi^{\pm}(w)]=0$ also follow from equations \eqref{eq:alpha_x}--\eqref{eq:alpha_beta_ope}; we omit the details. \subsubsection{Proof of (2)} Using the commutativity $[t(z),\alpha(w)]=0$ given in (1), we have \begin{align*} &A(w/z)t(z)t(w) =A(w/z)\alpha(z)x^+(z)\beta(z)\alpha(w)x^+(w)\beta(w) =\alpha(z)\alpha(w)x^+(z)x^+(w)\beta(z)\beta(w) \\ &\times \exp\Big(\sum_{n>0}\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n}\gamma^{-2n})}{1-\gamma^{-2n}} \big(\dfrac{w}{z}\big)^{n} -\sum_{n>0}\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^n-\gamma^{-n}} \gamma^{-n}\big(\dfrac{w}{z}\big)^{n} \Big). \end{align*} Here the first summation in the exponential comes from $A(w/z)$, and the second from the transposition of $\beta(w)$ and $x^+(z)$ using \eqref{eq:beta_x}.
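In more detail, the coefficient of $\frac{1}{n}\big(\frac{w}{z}\big)^n$ in this exponential simplifies; using $\gamma^{-n}/(\gamma^{n}-\gamma^{-n})=\gamma^{-2n}/(1-\gamma^{-2n})$ we compute

```latex
\begin{align*}
\frac{(1-q^n)(1-t^{-n})(1-p^{-n}\gamma^{-2n})}{1-\gamma^{-2n}}
 -\frac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^{n}-\gamma^{-n}}\,\gamma^{-n}
&=(1-q^n)(1-t^{-n})\,
  \frac{(1-p^{-n}\gamma^{-2n})-(1-p^{-n})\gamma^{-2n}}{1-\gamma^{-2n}} \\
&=(1-q^n)(1-t^{-n}),
\end{align*}
```

since the numerator collapses to $1-\gamma^{-2n}$.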
Thus we have \begin{align*} A(w/z)t(z)t(w) &=\alpha(z)\alpha(w)x^+(z)x^+(w)\beta(z)\beta(w) \times \exp\Big(\sum_{n>0}\dfrac{1}{n} (1-q^n)(1-t^{-n}) \big(\dfrac{w}{z}\big)^{n} \Big) \\ &=\dfrac{(1-q \tfrac{w}{z})(1-t^{-1}\tfrac{w}{z})} {(1-\tfrac{w}{z})(1-p \tfrac{w}{z})} \alpha(z)\alpha(w)x^+(z)x^+(w)\beta(z)\beta(w). \end{align*} Then \begin{align} \nonumber &A(w/z)t(z)t(w)-A(z/w)t(w)t(z) =\alpha(z)\alpha(w) \\ \label{eq:tt:1} &\phantom{=}\times \Big[ \dfrac{(1-q\tfrac{w}{z})(1-t^{-1}\tfrac{w}{z})} {(1-\tfrac{w}{z})(1-p \tfrac{w}{z})} x^+(z)x^+(w) -\dfrac{(1-q\tfrac{z}{w})(1-t^{-1}\tfrac{z}{w})} {(1-\tfrac{z}{w})(1-p\tfrac{z}{w})} x^+(w)x^+(z) \Big] \\ \nonumber &\phantom{=}\times \beta(z)\beta(w). \end{align} Now recall the relation of $x^+(z)$ and $x^+(w)$ given in Definition \ref{dfn:uqt}: \begin{align}\label{eq:Gxx} -(\tfrac{z}{w})^3 G^+(\tfrac{z}{w})x^+(z)x^+(w)=G^+(\tfrac{z}{w})x^+(w)x^+(z). \end{align} Using this equation, the line \eqref{eq:tt:1} is rewritten into \begin{align*} \eqref{eq:tt:1}&= \Big[ \dfrac{1}{(1-\tfrac{w}{z})(1-p \tfrac{w}{z})(1-p^{-1} \tfrac{w}{z})} +\dfrac{(\tfrac{z}{w})^3} {(1-\tfrac{z}{w})(1-p\tfrac{z}{w})(1-p^{-1}\tfrac{z}{w})} \Big] G^+(\tfrac{w}{z})x^+(w)x^+(z) \\ &= \Big[ \dfrac{\delta(\tfrac{w}{z})}{(1-p^{-1})(1-p)} +\dfrac{\delta(p\tfrac{w}{z})}{(1-p^{-1})(1-p^{-2})} +\dfrac{\delta(p^{-1}\tfrac{w}{z})}{(1-p)(1-p^2)} \Big] G^+(\tfrac{w}{z})x^+(w)x^+(z). \end{align*} Now from \eqref{eq:Gxx} and $G^+(1)\neq0$, we see that $\delta(w/z)G^+(w/z)x^+(w)x^+(z)=0$. We also find from \eqref{eq:Gxx} and $G^+(p^{-1})=0$ that $\delta(p \tfrac{w}{z})G^+(\tfrac{w}{z})x^+(w)x^+(z) =\delta(p \tfrac{w}{z})G^+(p^{-1})x^+(p w)x^+(w)$. Similarly from \eqref{eq:Gxx} and $G^+(p)\neq0$ we have $\delta(p^{-1} \tfrac{w}{z})G^+(\tfrac{w}{z})x^+(w)x^+(z) =\delta(p^{-1} \tfrac{w}{z})G^+(p)x^+(p z)x^+(z)$. 
Thus after a short calculation we have \begin{align*} \eqref{eq:tt:1}= \dfrac{(1-t^{-1})(1-q)}{1-p} \Big[ \delta(p^{-1}\tfrac{w}{z}) x^+(p z)x^+(z) -\delta(p \tfrac{w}{z}) x^+(p w)x^+(w) \Big]. \end{align*} Then we have the desired consequence \eqref{eq:Att}. The equation \eqref{eq:Att*} can be shown similarly, so we omit the details. \subsubsection{Proof of (3)} We apply the same method as in (2). Recalling Remark \ref{rmk:t*}, we calculate $B(\gamma p w/z)t(z)t^*(\gamma p w) -B(\gamma p z/w)t^*(\gamma p w)t(z)$. From the definition \eqref{eq:B} of $B(z)$, the commutativity $[t(z),\alpha(w)]=0$ given in (1) and the formula \eqref{eq:alpha_beta_ope}, we have \begin{align*} &B(\gamma p w/z)t(z)t^*(\gamma p w) =B(\gamma p \tfrac{w}{z}) \alpha(z)x^+(z)\beta(z) \alpha(\gamma w)^{-1}x^-(w)\beta(\gamma^{-1}w)^{-1} \\ &= \alpha(z)\alpha(\gamma w)^{-1} x^+(z) x^-(w) \beta(z)\beta(\gamma^{-1}w)^{-1} \\ &\phantom{=} \times \exp\Big(\sum_{n>0}\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{-n})}{\gamma^n-\gamma^{-n}} \big(\dfrac{w}{z}\big)^{n} \\ &\phantom{= \times \exp\Big(} +\sum_{n>0}\dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(\gamma^{-2n}p^{-2n}-\gamma^{-2n}p^{-n})} {1-\gamma^{-2n}} \gamma^n p^{-n} \big(\dfrac{w}{z}\big)^{n}\Big) \\ &= \alpha(z)\alpha(\gamma w)^{-1} x^+(z) x^-(w) \beta(z)\beta(\gamma^{-1}w)^{-1}. \end{align*} A similar calculation shows that $B(\gamma p \tfrac{z}{w})t^*(\gamma p w) t(z)= \alpha(z)\alpha(\gamma w)^{-1} x^-(w) x^+(z) \beta(z)\beta(\gamma^{-1}w)^{-1} $.
Thus we have \begin{align*} & B(\gamma p \tfrac{w}{z})t(z)t^*(\gamma p w) -B(\gamma p \tfrac{z}{w})t^*(\gamma p w) t(z) \\ &= \alpha(z)\alpha(\gamma w)^{-1} [ x^+(z) x^-(w)- x^-(w) x^+(z)] \beta(z)\beta(\gamma^{-1}w)^{-1}. \end{align*} Using the expression for $[x^+(z),x^-(w)]$ given in Definition \ref{dfn:uqt}, the expansion \eqref{eq:psi_boson} and the definition \eqref{eq:alpha_beta}, one may immediately find that \begin{align*} B(\gamma p \tfrac{w}{z})t(z)t^*(\gamma p w) -B(\gamma p \tfrac{z}{w})t^*(\gamma p w)t(z) =\dfrac{(1-q)(1-t^{-1})}{1-p} \Big(\delta(\gamma^{-1}\tfrac{z}{w})\psi_0^+ -\delta(\gamma\tfrac{z}{w})\psi_0^-\Big). \end{align*} Replacing $w$ in the above equation with $\gamma^{-1}p^{-1} w$, we have the desired equation \eqref{eq:Btt*}. \subsection{Proof of Proposition \ref{prp:Lambda}} \label{subsec:prf:Lambda} Let us define $a_{n,(i)}\mathbin{:=} 1\otimes \cdots \otimes 1\otimes a_{n}\otimes 1\otimes \cdots \otimes 1$, where $a_{n}$ sits in the $i$-th tensor component. Then from \eqref{eq:Delta_boson} and \eqref{eq:rhom} one finds that $\rho_y^{(m)}(b_{n}) =-\sum_{i=1}^m a_{n,(i)}(1-t^n)(1-p^{-|n|}) p^{(m-i+1)|n|/2}/|n|$. Thus we have \begin{align} \label{eq:rho_alpha} &\rho_y^{(m)}(\alpha(z))= \prod_{i=1}^m \alpha_{(i)}^m(z), \quad \alpha_{(i)}^m(z) \mathbin{:=} \exp\Big(\sum_{n>0}\dfrac{1}{n}\dfrac{p^{(m-i+1)n/2}(1-t^{-n})(1-p^{-n})} {p^{-m n/2}-p^{m n/2}}a_{-n,(i)}z^n\Big), \\ \label{eq:rho_beta} & \rho_y^{(m)}(\beta(z))= \prod_{i=1}^m \beta_{(i)}^m(z), \quad \beta_{(i)}^m(z)\mathbin{:=} \exp\Big(-\sum_{n>0}\dfrac{1}{n}\dfrac{p^{(m-i+1)n/2}(1-t^n)(1-p^{-n})} {p^{-m n/2}-p^{m n/2}}a_{n,(i)}z^{-n}\Big). \end{align} \subsubsection{Proof of (1)} We calculate each tensor component of $\Lambda_i(z)\Lambda_j(w)$. First assume $i=j$. If $k>i$, then the $k$-th tensor component comes from $\alpha_{(k)}^m(z) \beta_{(k)}^m(z) \alpha_{(k)}^m(w) \beta_{(k)}^m(w)$. Under the normal ordering, the following coefficient arises.
\begin{align} \label{eq:prf:L:1:i=j:k>i} \exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-q^n)(1-t^{-n}) \big(\dfrac{1-p^{-n}}{1-p^{m n}}\big)^2 p^{(2m-k+1)n} \big(\dfrac{w}{z}\big)^n\Big). \end{align} For $k=i$, the $i$-th tensor component comes from $\alpha_{(k)}^m(z)\eta(p^{-(i-1)/2}z) \beta_{(k)}^m(z) \alpha_{(k)}^m(w)\eta(p^{-(i-1)/2}w) \beta_{(k)}^m(w)$. Under the normal ordering, the following coefficient arises. \begin{align} \label{eq:prf:L:1:i=j:k=i} \begin{split} &\exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-q^n)(1-t^{-n})\big(\dfrac{w}{z}\big)^n \\ &\phantom{\exp\Big(-\sum_{n>0}} \big\{\big(\dfrac{1-p^{-n}}{1-p^{m n}}\big)^2 p^{(2m-i+1)n} +\dfrac{1-p^{-n}}{1-p^{m n}}p^{m n} +\dfrac{1-p^{-n}}{1-p^{m n}}p^{(m-i+1)n} +1 \big\}\Big). \end{split} \end{align} If $k<i$, then the $k$-th tensor component is $\alpha_{(k)}^m(z)\varphi^{-}(p^{-(2k-1)/4}z) \beta_{(k)}^m(z) \alpha_{(k)}^m(w)\varphi^{-}(p^{-(2k-1)/4}w)$ $\beta_{(k)}^m(w)$. The normal ordering coefficient is \begin{align} \label{eq:prf:L:1:i=j:k<i} &\exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-q^n)(1-t^{-n})\big(\dfrac{w}{z}\big)^n \big\{\big(\dfrac{1-p^{-n}}{1-p^{m n}}\big)^2 p^{(2m-k+1)n} +\dfrac{(1-p^{-n})^2}{1-p^{m n}} p^{(m-k+1)n} \big\}\Big). \end{align} By simple calculations, the product of \eqref{eq:prf:L:1:i=j:k>i}, \eqref{eq:prf:L:1:i=j:k=i} and \eqref{eq:prf:L:1:i=j:k<i} is shown to be $f_{1,m}(w/z)^{-1}$. Thus the statement holds. Next we consider the case $i<j$. If $k<i$, then the normal order coefficient is the same as \eqref{eq:prf:L:1:i=j:k<i}. For $k=i$, the normal order coefficient is \begin{align} \label{eq:prf:L:1:i<j:k=i} \begin{split} &\exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-q^n)(1-t^{-n})\big(\dfrac{w}{z}\big)^n \\ &\phantom{\exp\Big(-\sum_{n>0}} \big\{\big(\dfrac{1-p^{-n}}{1-p^{m n}}\big)^2 p^{(2m-i+1)n} +\dfrac{1-p^{-n}}{1-p^{m n}}p^{m n} +\dfrac{(1-p^{-n})^2}{1-p^{m n}}p^{(m-i+1)n} +1-p^{-n} \big\}\Big). 
\end{split} \end{align} If $i<k<j$, then the normal order coefficient is \begin{align} \label{eq:prf:L:1:i<j:i<k<j} &\exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-q^n)(1-t^{-n})\big(\dfrac{w}{z}\big)^n \big\{\big(\dfrac{1-p^{-n}}{1-p^{m n}}\big)^2 p^{(2m-k+1)n} +\dfrac{(1-p^{-n})^2}{1-p^{m n}}p^{(m-k+1)n} \big\}\Big). \end{align} If $k=j$, then the normal order coefficient is \begin{align} \label{eq:prf:L:1:i<j:k=j} &\exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-q^n)(1-t^{-n})\big(\dfrac{w}{z}\big)^n \big\{\big(\dfrac{1-p^{-n}}{1-p^{m n}}\big)^2 p^{(2m-j+1)n} +\dfrac{1-p^{-n}}{1-p^{m n}}p^{(m-j+1)n} \big\}\Big). \end{align} If $k>j$, then the normal order coefficient is \eqref{eq:prf:L:1:i=j:k>i}. The product of \eqref{eq:prf:L:1:i=j:k<i}, \eqref{eq:prf:L:1:i<j:k=i}, \eqref{eq:prf:L:1:i<j:i<k<j}, \eqref{eq:prf:L:1:i<j:k=j}, \eqref{eq:prf:L:1:i=j:k>i} is equal to $f_{1,m}(w/z)^{-1}\gamma_{+}(z,w;q,p)$. Thus we obtain the result. The case $i>j$ is similar, so we omit the details. \subsubsection{Proof of (2)} The desired equation is equivalent to \begin{align*} \rho_{y}^{(m)}(\alpha(z)\cdots\alpha(p^{m-1}z)) :\prod_{k=1}^{m}\widetilde{\Lambda}_k(p^{k-1}z): \rho_{y}^{(m)}(\beta(z)\cdots\beta(p^{m-1}z))=1. \end{align*} We will show this equation by comparing each tensor component. By \eqref{eq:rho_alpha}, the $k$-th tensor component of $\rho_{y}^{(m)}(\alpha(z)\cdots\alpha(p^{m-1}z))$ is equal to \begin{align} \exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-t^{-n})p^{(2m-k-1)n/2}a_{-n}z^n\Big). \label{eq:LL:2:k:alpha} \end{align} Similarly, the $k$-th tensor component of $\rho_{y}^{(m)}(\beta(z)\cdots\beta(p^{m-1}z))$ is equal to \begin{align} \exp\Big(\sum_{n>0}\dfrac{1}{n}(1-t^{n})p^{(-k+1)n/2}a_{n}z^{-n}\Big).
\label{eq:LL:2:k:beta} \end{align} The $k$-th tensor component of $:\prod_{k=1}^{m}\widetilde{\Lambda}_k(p^{k-1}z):$ is \begin{align} \nonumber &:\eta(p^{-(k-1)/2}p^{k-1}z) \varphi^-(p^{-(2k-1)/4}p^{k}z) \varphi^-(p^{-(2k-1)/4}p^{k+1}z)\cdots \varphi^-(p^{-(2k-1)/4}p^{m-1}z): \\ \label{eq:LL:2:k:L} & =\exp\Big( \sum_{n>0} \dfrac{1-t^{-n}}{n}p^{n(2m-k-1)/2}a_{-n}z^{n} \Big) \exp\Big(-\sum_{n>0} \dfrac{1-t^{n}}{n} p^{-n(k-1)/2}a_{n} z^{-n} \Big) \end{align} It is easy to see that \eqref{eq:LL:2:k:alpha}, \eqref{eq:LL:2:k:beta} and \eqref{eq:LL:2:k:L} cancel. Thus we have the consequence. \subsubsection{Proof of (3)} The desired equation is equivalent to \begin{align} \begin{split} \label{eq:LL:3} &\rho_{y}^{(m)}(\alpha(p^{-1}z)\cdots\alpha(p^{m-2}z)) :\prod_{k=1}^{i-1}\widetilde{\Lambda}_k(p^{k-1}z) \prod_{l=i+1}^{m}\widetilde{\Lambda}_l(p^{l-2}z): \rho_{y}^{(m)}(\beta(z)\cdots\beta(p^{m-1}z)) \\ &=\widetilde{\Lambda}_i^*(p^{(m-2)/2}z). \end{split} \end{align} We will show this equation by comparing each tensor component. As in \eqref{eq:LL:2:k:alpha}, the $k$-th tensor component of $\rho_{y}^{(m)}(\alpha(p^{-1}z)\cdots\alpha(p^{m-2}z))$ is equal to \begin{align} \exp\Big(-\sum_{n>0}\dfrac{1}{n}(1-t^{-n})p^{(2m-k-3)n/2}a_{-n}z^n\Big). \label{eq:LL:3:k:alpha} \end{align} The $k$-th tensor component of $\rho_{y}^{(m)}(\beta(z)\cdots\beta(p^{m-1}z))$ is given by \eqref{eq:LL:2:k:beta}. The $k$-th tensor component of $:\prod_{k=1}^{i-1}\widetilde{\Lambda}_k(p^{k-1}z) \prod_{l=i+1}^{m}\widetilde{\Lambda}_l(p^{l-2}z):$ depends on $k$. If $k=i$, then by Lemma \ref{lem:tildeLambda} and some simple calculations, the component turns out to be \begin{align} \nonumber &\varphi^-(p^{-(2i-1)/4}p^{i-1}z)\varphi^-(p^{-(2i-1)/4}p^{i}z)\cdots \varphi^-(p^{-(2i-1)/4}p^{m-2}z)\\ \label{eq:LL:3:L:i} &=\exp\Big(-\sum_{n>0}\dfrac{1-t^{-n}}{n}p^{n(i-3)/2}(1-p^{n(m-i)}) a_{-n}z^n\Big).
\end{align} Similarly, if $k<i$, then by Lemma \ref{lem:tildeLambda} the component is \begin{align} \nonumber &:\eta(p^{-(k-1)/2}p^{k-1}z) \varphi^-(p^{-(2k-1)/4}p^{k}z) \varphi^-(p^{-(2k-1)/4}p^{k+1}z)\cdots \varphi^-(p^{-(2k-1)/4}p^{m-2}z): \\ \label{eq:LL:3:L:<i} & =\exp\Big( \sum_{n>0} \dfrac{1-t^{-n}}{n}p^{n(2m-k-3)/2}a_{-n}z^{n} \Big) \exp\Big(-\sum_{n>0} \dfrac{1-t^{n}}{n} p^{-n(k-1)/2}a_{n} z^{-n} \Big) \end{align} If $k>i$, then by Lemma \ref{lem:tildeLambda} the component is \begin{align} \nonumber &:\eta(p^{-(k-1)/2}p^{k-2}z) \varphi^-(p^{-(2k-1)/4}p^{k-1}z) \varphi^-(p^{-(2k-1)/4}p^{k}z)\cdots \varphi^-(p^{-(2k-1)/4}p^{m-2}z): \\ \label{eq:LL:3:L:>i} & =\exp\Big( \sum_{n>0} \dfrac{1-t^{-n}}{n}p^{n(2m-k-3)/2}a_{-n}z^{n} \Big) \exp\Big(-\sum_{n>0} \dfrac{1-t^{n}}{n} p^{-n(k-3)/2}a_{n} z^{-n} \Big) \end{align} Then the $i$-th tensor component of \eqref{eq:LL:3} is the product of \eqref{eq:LL:3:k:alpha}, \eqref{eq:LL:2:k:beta} and \eqref{eq:LL:3:L:i}. After a short calculation, one finds that it is $\xi(p^{(i-2)/2}z)$, which is the $i$-th component of $\widetilde{\Lambda}_i^*(p^{(m-2)/2}z)$. If $k<i$, then the $k$-th tensor component of \eqref{eq:LL:3} is the product of \eqref{eq:LL:3:k:alpha}, \eqref{eq:LL:2:k:beta} and \eqref{eq:LL:3:L:<i}. It is $1$, that is, the $k$-th component of $\widetilde{\Lambda}_i^*(p^{(m-2)/2}z)$. Finally, for $k>i$, the $k$-th tensor component of \eqref{eq:LL:3} is the product of \eqref{eq:LL:3:k:alpha}, \eqref{eq:LL:2:k:beta} and \eqref{eq:LL:3:L:>i}. It turns out to be $\varphi^-(p^{(2k-5)/4}z)$, which is the $k$-th component of $\widetilde{\Lambda}_i^*(p^{(m-2)/2}z)$. \subsubsection{Proof of (4)} From the known identities \eqref{eq:prp:Lambda:1} and \eqref{eq:prp:Lambda:3}, it is not difficult to calculate $\Big[\prod_{k,l=1}^{m-1}f_{1,m}(p^{-k+l}w/z)\Big] \Lambda_i^*(z)\Lambda_j^*(w)$ in terms of $\Lambda_k$'s. First we consider the case $i=j$.
From the operator product \eqref{eq:prp:Lambda:1}, we have \begin{align*} &\Big[\prod_{k,l=1}^{m-1}f_{1,m}(p^{-k+l}\tfrac{w}{z})\Big] \Lambda_i^*(z)\Lambda_i^*(w) \\ &=\Big[\prod_{k=1}^{m-2}\prod_{l=k+1}^{m-1} \gamma_+(p^{-k+l}\tfrac{w}{z})\Big] \Big[\prod_{k=2}^{m-1}\prod_{l=1}^{k-1} \gamma_-(p^{-k+l}\tfrac{w}{z})\Big] :\Lambda_i^*(z)\Lambda_i^*(w): \\ &=\exp\Big(\sum_{n>0}\dfrac{1}{n}(1-q^n)(1-t^{-n}) \dfrac{1-p^{-n(m-2)}}{1-p^{-n}} \dfrac{1-p^{n(m-1)}}{1-p^{n}} \big(\dfrac{w}{z}\big)^n \Big) :\Lambda_i^*(z)\Lambda_i^*(w):. \end{align*} Here we used the abbreviation $\gamma_\pm(\tfrac{w}{z})\mathbin{:=}\gamma_\pm(z,w;q,p)$. Then we have \begin{align*} &\Big[\prod_{k,l=1}^{m-1}f_{1,m}(p^{-k+l}\tfrac{w}{z})\Big] \times \exp\Big(-\sum_{n>0} \dfrac{1}{n}(1-q^n)(1-t^{-n}) \dfrac{1-p^{-n(m-2)}}{1-p^{-n}} \dfrac{1-p^{n(m-1)}}{1-p^{n}} \big(\dfrac{w}{z}\big)^n \Big) \\ &=\exp\Big(-\sum_{n>0} \dfrac{1}{n} \dfrac{(1-q^n)(1-t^{-n})(1-p^{(m-1)n})}{1-p^{m n}} \big(\dfrac{w}{z}\big)^n \Big) =f_{1,m}(\tfrac{w}{z}). \end{align*} Thus the desired equation $f_{1,m}(\tfrac{w}{z})\Lambda_i^*(z)\Lambda_i^*(w) =:\Lambda_i^*(z)\Lambda_i^*(w):$ is proved. Next, note that the calculation of the case $i\neq j$ reduces to that of $k=i$. If $i<j$, then \begin{align*} f_{1,m}(\tfrac{w}{z})\Lambda_i^*(z)\Lambda_j^*(w) =:\Lambda_i^*(z)\Lambda_j^*(w): \dfrac{\gamma_-(\tfrac{w}{z})^{j-i}}{\gamma_+(p\tfrac{w}{z})^{j-i-1}} =:\Lambda_i^*(z)\Lambda_j^*(w): \gamma_-(\tfrac{w}{z}). \end{align*} In the last equality we used the formula $\gamma_-(z)/\gamma_+(p z)=1$. For the final case $i>j$, we have \begin{align*} f_{1,m}(\tfrac{w}{z})\Lambda_i^*(z)\Lambda_j^*(w) =:\Lambda_i^*(z)\Lambda_j^*(w): \dfrac{\gamma_+(\tfrac{w}{z})^{i-j}}{\gamma_-(p^{-1}\tfrac{w}{z})^{i-j-1}} =:\Lambda_i^*(z)\Lambda_j^*(w): \gamma_+(\tfrac{w}{z}). \end{align*} Thus all the cases are proved. \subsubsection{Proof of (5)} This is shown similarly to (2) and (3), so we omit the details. \begin{ack} S.Y.
is supported by JSPS Fellowships for Young Scientists (No.21-2241). \end{ack}
\section{Introduction and Background} The last two and a half decades have seen a renaissance in the study of the theory of factorization. The main focus of this research has been in the theater of integral domains, but much work has also been done in the more general setting of commutative rings with identity; for example, the interested reader should consult the papers \cite{DDAZ}, \cite{DDAZ2}, and \cite{DDAZ3}. Even for factorization in integral domains, rather surprising effects can occur. For example, Roitman has produced an example of an atomic domain, $R$, whose polynomial extension $R[t]$ is not atomic (\cite{R1}). Of course, for domains it is the case that if $R[t]$ is atomic, then $R$ must be atomic, but even now, the subtle interplay of atomicity between a domain and its polynomial extension is not completely understood. Perhaps at least as surprising is Roitman's result showing that for the conditions ``$R$ is atomic'' and ``$R[[x]]$ is atomic'' neither one implies the other (\cite{R2}). The intent of this note is to provide a companion to the Roitman papers \cite{R1} and \cite{R2}, and to provide a cautionary tale of the subtleties of factorization without the assumption of ``integral domain''. We will provide an example of a non-atomic (in fact, with no irreducibles whatsoever) commutative ring with identity, $R$, whose polynomial extension $R[t]$ {\it is} atomic (and is, in fact, strongly atomic). We first recall the distinction between ``atom'' and ``strong atom'' in a ring with zero divisors (we note that we will be using the terminology ``(strong) irreducible'' and ``(strong) atom'' interchangeably). \begin{defn} Let $R$ be a commutative ring with identity. We say that $a\in R$ is an atom if $a=bc$ implies that $a$ is associated to either $b$ or $c$ (in the sense that $(a)=(b)$ or $(a)=(c)$).
We say that $a\in R$ is a strong atom if $a=bc$ implies that $a$ is strongly associated to either $b$ or $c$ (in the sense that either $b$ or $c$ is a unit in $R$). \end{defn} We say that a ring is (strongly) atomic if every nonzero nonunit is a product of (strong) atoms. These two notions are distinct (see \cite{DDAZ} for example). But a simple example of the distinction occurs in the ring $\mathbb{Z}/6\mathbb{Z}$ where the element $\overline{3}$ is an atom, but not a strong atom (hence $\mathbb{Z}/6\mathbb{Z}$ is atomic, but not strongly atomic). \section{Preliminaries and the Example} We first outline the construction of the ring that we will consider throughout this paper. Let $\mathbb{F}$ be a perfect field of characteristic $p$ and $\{x_1, x_2,\cdots, x_n,\cdots\}$ be a countable collection of indeterminates. We first define the domain $T$ as follows: \[ T:=\mathbb{F}[x_1^{\alpha_1}, x_2^{\alpha_2},\cdots, x_n^{\alpha_n},\cdots] \] \noindent where the exponents $\alpha_i\in\mathbb{Q}^+\cup\{0\}$ range over the non-negative rationals for all $i\geq 1$. We now define the ideal \[ I:=\langle\{\prod_{i=1}^\infty x_i^{\beta_i}\}\rangle \] \noindent where $\beta_i=0$ for all but finitely many $i$, and $\sum_{i=1}^\infty\beta_i>1$ (essentially $I$ is the ideal generated by monomials of total degree greater than 1). The ring of our focus will be the ring \[ R:=T/I. \] We record some results concerning the properties of the ring $R$ for later use. We first remark that a typical element (coset) of $R$ can be represented in the form \[ \epsilon_0+\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n+I \] \noindent where each $\epsilon_i\in\mathbb{F}$ and each $\overline{X}_i$ is a monomial from $R$ of the form $\overline{X}_i=x_{i,1}^{a_{i,1}}x_{i,2}^{a_{i,2}}\cdots x_{i,t_i}^{a_{i,t_i}}$.
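As a quick illustration of the arithmetic in $R$ (not needed in the sequel): a product of monomials survives precisely when its total degree remains at most $1$, since $I$ is generated by the monomials of total degree strictly greater than $1$. For example,

```latex
\[
x_1^{1/2}\cdot x_1^{1/2}=x_1\neq 0,
\qquad
x_1^{1/2}\cdot x_1^{1/3}=x_1^{5/6}\neq 0,
\qquad
x_1^{1/2}\cdot x_1^{2/3}=0 \ \text{ in } R,
\]
```

the last product vanishing because $\tfrac{1}{2}+\tfrac{2}{3}=\tfrac{7}{6}>1$.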
Additionally if $\overline{X}_i=x_{i,1}^{a_{i,1}}x_{i,2}^{a_{i,2}}\cdots x_{i,t_i}^{a_{i,t_i}}$, we say that $\overline{X}_i$ is {\it composed} of the elements $\{x_{i,1}, x_{i,2},\cdots , x_{i, t_i}\}$, and has {\it potential} $\sum_{j=1}^{t_i} a_{i,j}$, and we will write $\text{pot}(\overline{X}_i)=\sum_{j=1}^{t_i} a_{i,j}$. If we want to specify a single $x_{i,j}$ we write $\text{pot}_{x_{i,j}}(\overline{X}_i)=a_{i,j}$. Also in the sequel, we will abuse notation and represent elements of $R$ as elements of $T$ and suppress the coset notation. \begin{lem}\label{0dim} $R$ is $0$-dimensional and quasi-local. In particular, every element of $R$ is either nilpotent or a unit. \end{lem} \begin{proof} Using the notation from above, we let $d_i=\sum_{j=1}^{t_i}a_{i, j}$ be the potential of the monomial $\overline{X}_i$. If $m=\min_{1\leq i\leq n}(d_i)$ then there is an $N\in\mathbb{N}$ such that $p^Nm> 1$ and hence $p^Nd_i> 1$ for all $1\leq i\leq n$. Note first that if $\epsilon_0=0$ then because the characteristic of $R$ is $p$, we have that \[ (\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n)^{p^N}=\epsilon_1^{p^N}\overline{X}_1^{p^N}+\cdots +\epsilon_n^{p^N}\overline{X}_n^{p^N}=0. \] \noindent Hence $\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n$ is nilpotent. This computation shows that every nonunit is nilpotent and the statements of the lemma follow. \end{proof} \begin{prop}\label{nonatomic} $R$ has no irreducible elements. In particular, $R$ is non-atomic. \end{prop} \begin{proof} If \[ \overline{X}=x_1^{a_1}x_2^{a_2}\cdots x_k^{a_k} \] \noindent then $\overline{X}^{\frac{1}{p}}=x_1^{\frac{a_1}{p}}x_2^{\frac{a_2}{p}}\cdots x_k^{\frac{a_k}{p}}\in R$.
Since any monomial has a nontrivial (nonassociate) $p^{\text{th}}$ root in $R$, an arbitrary nonzero, nonunit $\epsilon_1\overline{X}_1+\cdots +\epsilon_n\overline{X}_n$ has the nonassociate $p^{\text{th}}$ root $\epsilon_1^{\frac{1}{p}}\overline{X}_1^{\frac{1}{p}}+\cdots +\epsilon_n^{\frac{1}{p}}\overline{X}_n^{\frac{1}{p}}$ (since $\mathbb{F}$ is a perfect field). Hence $R$ contains no irreducibles and is therefore non-atomic. \end{proof} The following lemma is straightforward, but will be useful later. It asserts, roughly, that multiplying the two lowest-potential terms (of highest degree) yields a nonzero term in the product of two polynomials (assuming, of course, that the product is not identically $0$). \begin{lem}\label{survive} Let $f(t)=\sum_{i=0}^n f_it^i,\ g(t)=\sum_{i=0}^m g_it^i\in R[t]$ (with $f_i,g_i\in R$) be such that $f(t)g(t)\neq 0$. If $f_j$ contains a monomial that minimizes potential among all monomials in $f(t)$ (and $j$ is maximized in the case that there are multiple monomials of minimal potential) and $g_{j^{\prime}}$ is the analogous term for $g(t)$, then the coefficient of $t^{j+j^{\prime}}$ has a (surviving) monomial whose potential is the sum of these minimal potentials. \end{lem} \begin{proof} Suppose that $f_j$ contains a monomial of minimal potential in $f(t)$ (and in the case of multiple minimums, we assume that $j$ is maximal) and $g_{j^{\prime}}$ is the analogous term for $g(t)$. We will call these monomials (reordering if necessary), $z_1:=x_1^{a_1}x_2^{a_2}\cdots x_k^{a_k}$ and $z_2:=x_1^{b_1}x_2^{b_2}\cdots x_k^{b_k}$ respectively. Here each $a_i, b_i\geq 0$ and $a_i+b_i>0$ for all $1\leq i\leq k$. Since the potential of every term in $f(t)$ (resp. $g(t)$) at degree greater than $j$ (resp. $j^{\prime}$) strictly exceeds $\text{pot}(z_1)$ (resp. $\text{pot}(z_2)$), it remains only to show that there is a monomial of $\text{pot}(z_1)$ and one of $\text{pot}(z_2)$ whose product cannot be cancelled by the product of two other monomials.
To this end, we reselect $z_1$ and $z_2$ as follows. Among all monomials of minimal potential, select to maximize $a_1$ (resp. $b_1$). If there are multiple solutions in either case, select from among these to maximize $a_2$ (resp. $b_2$). The process terminates for either monomial if a unique maximum is found, and in any case it will terminate for both by the $k^{\text{th}}$ step. We now observe that if we can find two other monomials of $f_j$ and $g_{j^{\prime}}$ respectively, say $w_1, w_2$ such that $\text{pot}(w_i)=\text{pot}(z_i)$ for $i=1,2$ and $w_1w_2=z_1z_2$ then given our selection of $z_1$ and $z_2$, we can see that $\text{pot}_{x_i}(w_1)=\text{pot}_{x_i}(z_1), 1\leq i\leq k$. Hence $w_1=z_1$ and $w_2=z_2$, and this establishes the lemma. \end{proof} For simplicity, we consider a two-variable analog of the ring we constructed earlier. \begin{prop}\label{two} Let $A:=K[x^{\alpha},y^{\beta}]$ where $\alpha, \beta$ range over the non-negative rationals. If $I$ is the ideal generated by all monomials of degree strictly greater than 1, then in the ring $(A/I)[t]$, the polynomial $x+yt$ (abusing the notation) is strongly irreducible. \end{prop} \begin{comment} \begin{proof} We first observe that in $A/I$ if $z_1z_2\neq 0$ and if $x^ay^b$ has minimal potential in $z_1$ and $x^cy^d$ has minimum potential in $z_2$ then the product has a monomial of potential $a+b+c+d$. And, in fact, it will have one that maximizes the $x$ potential and one that maximizes the $y$ potential, but these need not be distinct. Now we suppose that $x+yt=f(t)g(t)$ for some $f(t), g(t)\in (A/I)[t]$. So there is a monomial of the form $x^a$ at the degree-0 level in $f(t)$ and $x^{1-a}$ at the degree-0 level in $g(t)$. By the same token (and without loss of generality) there is a monomial of the form $y^b$ at the degree-0 level in $f(t)$ and a monomial of the form $y^{1-b}$ at the degree-1 level in $g(t)$.
We observe that if $b\leq a$ then by the above remark, there must be a surviving constant term in the product of potential no more than $b+1-a\leq 1$ with $y$ potential at least $b$, contradicting that the constant term of the product is $x$. Hence, we record that $a<b$. Since $a<b$, we have that $a+1-b<1$ and hence the monomial $x^ay^{1-b}$ is nonzero. Once again, Lemma \ref{survive} shows that somewhere in the product there must be a surviving monomial of potential no more than $a+1-b<1$, which is our desired contradiction. \end{proof} \end{comment} \begin{proof} Let $K$ be a field and let $K[x,y;M]$ be the monoid domain with indeterminates $x$ and $y$ and exponents in the monoid $M$. In $R:=K[x,y;\mathbb{Q}^+]$ we impose the deglex order (see, for example, \cite{adams}) as follows. If $a,b,c,d\in\mathbb{Q}$ are positive we declare that $x^ay^b\prec x^cy^d$ if $a+b<c+d$. If $a+b=c+d$ we again say that $x^ay^b\prec x^cy^d$ if $a<c$. So this totally orders the subset of nonzero monomials. To simplify the argument, we argue from the standpoint of domains as follows. We now suppose that $x+yt=fg +h$ where $f,g\in R[t]$ are two nonunits $\text{mod}(IR[t])$, where $I$ is the ideal of $R$ generated by all monomials of total degree greater than 1 and $h\in IR[t]$. We denote by $\text{min}(f)$ the monomial(s) of least degree in $f$. Since we have that $x+yt=fg+h$, it must be the case that $1=\text{deg}(\text{min}(fg))=\text{deg}(\text{min}(f))+\text{deg}(\text{min}(g))$. Letting $a=\text{deg}(\text{min}(f))$ and $b=\text{deg}(\text{min}(g))$, we take $u,v$ to be two monomials occurring in $f,g$ respectively such that $uv\neq 0\ \text{mod}(IR[t])$. Note that $1\geq \text{deg}(u)+\text{deg}(v)\geq a+b=1$ forces $\text{deg}(u)=a$ and $\text{deg}(v)=b$.
We now throw out all monomials of $f$ of degree larger than $a$ and all monomials of $g$ with degree larger than $b$ and observe that this means that $\text{deg}(u)=a$ for all monomials occurring in $f$ (respectively $\text{deg}(v)=b$ for all monomials occurring in $g$). From this, we conclude that $x+yt=fg$ (hence this factorization is analogous to a factorization in an integral domain). Without loss of generality, we can assume that $g\in R$ and so if we write $f=f_0+f_1t$ then $x=\text{min}(f_0)\text{min}(g)$ and $y=\text{min}(f_1)\text{min}(g)$. Hence $g$ has minimal monomials of the form $x^b$ and $y^b$. We first assume that $0<a,b< 1$. So we consider a (minimal) monomial of $f_0$ (say $z$) that maximizes $\text{deg}_y(z)$ and write $z=x^\alpha y^\beta$ with $\alpha+\beta=a$ and note that the monomial $zy^b=x^\alpha y^{\beta+b}$ must survive in the product and this is our contradiction. Hence either $a=0$ or $b=0$. If $a=0$ then each coefficient of $f$ is a unit in which case the previous argument demonstrates that the degree $0$ term of $fg$ cannot be (just) $x$. Hence we conclude that $b=0$ and hence $g$ is a unit. So we see that $x+yt$ is a strong atom. \end{proof} \begin{prop}\label{unit} Any element $f(t)\in R[t]$ that has at least one unit coefficient is either a unit or has a factorization into no more than $n$ strong atoms, where $n$ is the highest degree in which $f(t)$ has a unit coefficient. \end{prop} \begin{proof} Let $\mathfrak{M}$ be the maximal ideal of $R$ generated by all the monomials $x_i$ and consider the image of $f(t)$ in the domain $R[t]/\mathfrak{M}[t]\cong\mathbb{F}[t]$, which we will denote by $\overline{f}(t)$. Any factorization of $f(t)\in R[t]$ must have the property that each factor has at least one unit coefficient.
Hence given any decomposition \[ f(t)=f_1(t)f_2(t)\cdots f_m(t) \] \noindent there is a corresponding factorization in $\mathbb{F}[t]$ \[ \overline{f}(t)=\overline{f}_1(t)\overline{f}_2(t)\cdots \overline{f}_m(t). \] Note that if $\text{deg}(\overline{f}_i(t))=0$, then $f_i(t)$ is a unit in $R[t]$ and so we will discount this possibility and assume that each $\text{deg}(\overline{f}_i(t))\geq 1$. Since $\mathbb{F}[t]$ is a UFD, this puts an upper bound (namely $\text{deg}(\overline{f}(t))$) on the length of the second decomposition. Since each factor of $f(t)\in R[t]$ must have a (positive degree) unit coefficient, we see that factoring $f(t)$ must terminate after no more than $n$ steps, where $n$ is the largest degree in which $f(t)$ has a unit coefficient. Note that this argument also demonstrates that each $f_i(t)$ must be strongly irreducible. Indeed, if $f_i(t)=g(t)h(t)$ then $\overline{f}_i(t)=\overline{g}(t)\overline{h}(t)$. Then one of these factors (say $\overline{g}(t)$) is a unit and hence its degree is $0$. So $g(t)$ is a unit (constant term) plus a sum of nilpotent elements (higher degree terms) and hence is a unit in $R[t]$. This establishes the proposition. \end{proof} \begin{thm} The ring $R$ is a non-atomic ring such that $R[t]$ is strongly atomic. What is more, given $f(t)\in R[t]$, a nonzero, nonunit polynomial, one of the following occurs. \begin{enumerate} \item If $f(t)$ has a unit coefficient, then $f(t)$ can be written as a product of no more than $n$ strong atoms, where $n$ is the highest degree in which $f(t)$ has a unit coefficient. \item If $f(t)\in\mathfrak{M}[t]$ has a factorization $f(t)=g(t)h(t)$ with both $g(t), h(t)\in\mathfrak{M}[t]$ then $f(t)$ can be factored into two strong atoms. \item If $f(t)\in\mathfrak{M}[t]$ does not have a factorization $f(t)=g(t)h(t)$ with both $g(t), h(t)\in\mathfrak{M}[t]$ then $f(t)$ has a factorization of length no more than $\text{deg}(f(t))+2$ strong atoms.
\end{enumerate} \end{thm} \begin{proof} The fact that $R$ is non-atomic is from Proposition \ref{nonatomic}. To verify that $R[t]$ is strongly atomic, it suffices to show that if $f(t)\in R[t]$ is a nonzero nonunit, then one of the three statements holds. As the first statement is immediate from Proposition \ref{unit}, we focus on the last two. To verify the last two statements, we proceed in tandem. First suppose that \[ f(t)=g(t)h(t) \] \noindent with both $g(t), h(t)\in\mathfrak{M}[t]$. Suppose also that $g(t)$ and $h(t)$ are composed of the elements $x_1,x_2,\cdots, x_m$ and let $y$ and $z$ be two other elements (homomorphic images of the original indeterminates $\{x_i\}$) that are distinct from the elements composing $g(t)$ and $h(t)$. Since $y, z$ annihilate all of $\mathfrak{M}$, we have the factorization \[ f(t)=(g(t)+yt+z)(h(t)+yt+z). \] It now suffices to show (without loss of generality) that $g(t)+yt+z$ is strongly irreducible. By way of contradiction, assume that $g(t)+yt+z=p(t)q(t)$ and consider the ideal $\mathfrak{N}[t]$ where $\mathfrak{N}$ is the ideal generated by all positive rational powers of the elements $x_i$ with the exception of $y$ and $z$. Passing to the homomorphic image $R[t]/\mathfrak{N}[t]\cong(R/\mathfrak{N})[t]$, we obtain the equation \[ yt+z=\overline{p}(t)\overline{q}(t). \] \noindent But Proposition \ref{two} assures us that $yt+z$ is strongly irreducible. Hence, without loss of generality, $\overline{p}(t)$ is a unit in $(R/\mathfrak{N})[t]$ and since $\mathfrak{N}$ is generated by nilpotents, $p(t)$ is a unit in $R[t]$. For the final case, we assume that $f(t)$ cannot be factored into a product of two elements from $\mathfrak{M}[t]$. If $f(t)$ is strongly irreducible, then we are done. If not, we can assume $f(t)=g(t)h(t)$ with $g(t)\in(\mathfrak{M}, t)$ and $h(t)\in\mathfrak{M}[t]$. Additionally, it must be the case that $g(t)$ has a term with unit coefficient.
Applying Lemma \ref{survive} to the product $g(t)h(t)$, we see that the highest degree term of $g(t)$ that can possibly have a unit coefficient must occur at degree $m\leq\text{deg}(f(t))$. Hence, Proposition \ref{unit} shows that $g(t)$ can be factored into no more than $m\leq\text{deg}(f(t))$ strongly irreducible factors. Once we have produced a $g(t)$ with the largest possible maximal degree term with a unit coefficient, we see that $h(t)$ cannot be decomposed with a factor that is not in $\mathfrak{M}[t]$ (lest we could produce a factor of $f(t)$ with a unit coefficient at a higher degree than the one in $g(t)$). So if $h(t)$ is not irreducible, we apply the previously proved statement of this theorem to see that $h(t)$ can be decomposed into two strong irreducibles. Putting it all together, $f(t)$ has a factorization into no more than $\text{deg}(f(t))+2$ strong irreducibles. \end{proof} It is interesting to point out that the fact that our example is ``full of'' nilpotents should not be too surprising, given the following result (prompted by an observation of the referee). \begin{lem} Let $R$ be reduced and $a\in R$ be a nonzero nonunit. If $a=fg$ with $f,g\in R[t]$, and $f$ is a strong atom in $R[t]$, then $g\in R$. \end{lem} \begin{proof} We assume that $g\notin R$ and let $c$ be the leading coefficient of $g$. If $P$ is any prime ideal of $R$, we consider the reduction to $(R/P)[t]\cong R[t]/PR[t]$. Since $(R/P)[t]$ is a domain, we must conclude that $cf\in PR[t]$. Hence $cf$ is in every prime ideal of $R[t]$ and so must be nilpotent. As $R$ (and hence $R[t]$) is reduced, it must be the case that $cf=0$. So $f=f(1+ct)$. Since $f$ is a strong atom, this means that $1+ct$ is a unit in $R[t]$; but $R$ is reduced, so we conclude that $c=0$. Hence $g\in R$. \end{proof} \begin{cor} If $R$ is a reduced ring and $R[t]$ is strongly atomic, then $R$ is (strongly) atomic.
\end{cor} \begin{proof} Suppose that $a=f_1f_2\cdots f_n$ with each $f_i\in R[t]$ being strong atoms. By the previous lemma, we can inductively see that each $f_i$ is a (strong) atom of $R$. \end{proof} \section*{Acknowledgement} The authors express gratitude to the referee whose careful reading facilitated an improved version of this paper. Additionally, we thank the referee for the observation that made the last result on strong atomicity possible.
\section{Introduction} The social web has become a common means for seeking romantic companionship, made evident by the wide assortment of online dating sites that are available on the Internet. As such, the notion of relationship recommendation systems is not only interesting but also highly applicable. This paper investigates the possibility and effectiveness of a deep learning based relationship recommendation system. An overarching research question is whether modern artificial intelligence (AI) techniques, given social profiles, can successfully approximate successful relationships and measure the relationship compatibility of two users. Prior works in this area \cite{Xia:2015:RRS:2808797.2809282,ICWSM148061,IAAI148187} have mainly considered the `online dating recommendation' problem, i.e., focusing on the reciprocal domain of dating social networks (DSN) such as Tinder and OKCupid. While the functionality and mechanics of dating sites differ across the spectrum, the main objective is usually to facilitate communication between users, who are explicitly seeking relationships. Another key characteristic of many DSNs is the functionality that enables a user to express interest in another user, e.g., swiping right on Tinder. Therefore, most prior work in this area focuses on reciprocal recommendation, i.e., predicting if two users will \textit{like} or \textit{text} each other. Intuitively, we note that likes and replies on DSNs are neither concrete statements of compatibility nor evidence of any long-term relationship. For instance, a user may have many reciprocal matches on Tinder but eventually form meaningful friendships or relationships with only a small fraction. Our work, however, focuses on a seemingly similar but vastly different problem.
Instead of relying on reciprocal signals from DSNs, our work proposes a novel distant supervision scheme, constructing a dataset of real-world couples from regular\footnote{We define regular social networks (RSN) as any social network that is not primarily a DSN, e.g., Facebook, Twitter.} social networks (RSN). Our distant supervision scheme is based on Twitter, searching for tweets such as \textit{`good night baby love you 😘'} and \textit{`darling i love you so much 💓'} to indicate that two users are in a stable and loving relationship (at least at that time). Using this labeled dataset, we train a distant supervision based learning to rank model to predict relationship compatibility between two users using their social profiles. The key idea is that social profiles contain cues pertaining to personality and interests that may be predictive of whether two people are romantically compatible. Moreover, unlike many prior works that operate on proprietary datasets \cite{ICWSM148061,IAAI148187,Xia:2015:RRS:2808797.2809282}, our dataset is publicly and legally obtainable via the official Twitter API. In this work, we construct the first public dataset of approximately 2 million tweets for the task of relationship recommendation. Another key advantage is that our method trains on regular social networks, which spares it from the inherent problems faced by DSNs, e.g., deceptive self-presentation, harassment, bots, etc. \cite{Masden:2015:URC:2702123.2702417}. More specifically, self-presented information on DSNs might be inaccurate with the sole motivation of appearing more attractive \cite{Toma:2010:RLL:1718918.1718921,hancock2007truth}. In our work, we argue that measuring the compatibility of two users on RSNs might be more suitable, eliminating any potential explicit self-presentation bias. Intuitively, social posts such as tweets can reveal information regarding personality, interests and attributes \cite{ICWSM1715681,Wei:2017:BWP:3018661.3018717}.
Finally, we propose \textsc{CoupleNet}, an end-to-end deep learning based architecture for estimating the compatibility of two users on RSNs. \textsc{CoupleNet} takes the social profiles of two users as an input and computes a compatibility score. This score can then be used to serve a ranked list to users and subsequently be embedded in some kind of `who to follow' service. \textsc{CoupleNet} is characterized by its Coupled Attention, which learns to pay attention to parts of a user's profile dynamically based on the current candidate user. \textsc{CoupleNet} also does not require any feature engineering and is a proof-of-concept of a completely text-based relationship recommender system. Additionally, \textsc{CoupleNet} is capable of providing explainable recommendations, which we elaborate on further in our qualitative experiments. \subsection{Our Contributions} This section provides an overview of the main contributions of this work. \begin{itemize} \item We propose a novel problem of \textit{relationship recommendation} (RSR). Different from the reciprocal recommendation problem on DSNs, our RSR task operates on regular social networks (RSN), estimating long-term and serious relationship compatibility based on social posts such as tweets. \item We propose a novel distant supervision scheme to construct the first publicly available (distributable in the form of tweet IDs) dataset for the RSR task. Our dataset, which we call the \textsc{LoveBirds2M} dataset, consists of approximately 2 million tweets. \item We propose a novel deep learning model for the task of RSR. Our model, \textsc{CoupleNet}, uses hierarchical Gated Recurrent Units (GRUs) and coupled attention layers to model the interactions between two users. To the best of our knowledge, this is the first deep learning model for both RSR and reciprocal recommendation problems. \item We evaluate several strong machine learning and neural baselines on the RSR task.
This includes the recently proposed DeepCoNN (\textit{Deep Co-operative Neural Networks}) \cite{zheng2017joint} for item recommendation. \textsc{CoupleNet} significantly outperforms DeepCoNN with a $200\%$ relative improvement in precision metrics such as Hit Ratio (HR@N). Overall findings show that a text-only deep learning system for the RSR task is plausible and reasonably effective. \item We show that \textsc{CoupleNet} produces explainable recommendations by analyzing the attention maps of the coupled attention layers. \end{itemize} \section{Related Work} In this section, we review existing literature that is related to our work. \subsection{Reciprocal and Dating Recommendation} Prior works on online dating recommendation \cite{Xia:2015:RRS:2808797.2809282,Tu:2014:ODR:2567948.2579240,IAAI148187,DBLP:conf/ijcai/AkehurstKYPKR11} mainly focus on designing systems for dating social networks (DSN), i.e., websites that users join for the specific purpose of finding a potential partner. Moreover, all existing works have primarily focused on the notion of reciprocal relationships, e.g., a successful signal implies a two-way signal (likes or replies) between two users. Tu et al. \cite{Tu:2014:ODR:2567948.2579240} proposed a recommendation system based on Latent Dirichlet Allocation (LDA) to match users based on messaging and conversational history between users. Xia et al. \cite{Xia:2015:RRS:2808797.2809282,ICWSM148061} cast the dating recommendation problem into a link prediction task, proposing a graph-based approach based on user interactions. The CCR (Content-Collaborative Reciprocal Recommender System) \cite{DBLP:conf/ijcai/AkehurstKYPKR11} was proposed by Akehurst et al. for the task of reciprocal recommendation, utilizing content-based features (user profile similarity) and collaborative filtering features (user-user interactions). However, all of these approaches operate on proprietary datasets obtained via collaboration with online dating sites.
This hinders research efforts in this domain. Our work proposes a different direction from the standard reciprocal recommendation (RR) models. The objective of our work is fundamentally different, i.e., instead of finding users who might reciprocate each other's interest, we learn to functionally approximate the essence of a good (possibly stable and serious) relationship, learning a compatibility score for two users given their regular social profiles (e.g., Twitter). To the best of our knowledge, our work is the first to build a relationship recommendation model based on a distant supervision signal on real-world relationships. Hence, we distinguish our work from all existing works on online dating recommendation. Moreover, our dataset is obtained legally via the official Twitter API and can be distributed for future research. Unlike prior work \cite{Xia:2015:RRS:2808797.2809282}, which might raise privacy concerns especially with the usage of conversation history, the users employed in our study have public Twitter feeds. We note that publicly available Twitter datasets have been the cornerstone of many scientific studies especially in the fields of social science and natural language processing (NLP). Across scientific literature, several other aspects of online dating have been extensively studied. Nagarajan and Hearst \cite{DBLP:conf/icwsm/NagarajanH09} studied self-presentation on online dating sites by specifically examining language on dating profiles. Hancock et al. presented an analysis on deception and lying on online dating profiles \cite{hancock2007truth}, reporting that at least $50\%$ of participants provide deceptive information pertaining to physical attributes such as height, weight or age. Toma et al. \cite{Toma:2010:RLL:1718918.1718921} investigated the correlation between linguistic cues and deception on online dating profiles. Maldeniya et al.
\cite{ICWSM1715634} studied how textual similarity between user profiles impacts the likelihood of reciprocal behavior. A recent work by Cobb and Kohno \cite{DBLP:conf/www/CobbK17} provided an extensive study which tries to understand users’ privacy preferences and practices in online dating. Finally, \cite{garimella2014love} studied the impacts of relationship breakups on Twitter, revealing many crucial insights pertaining to the social and linguistic behaviour of couples that have just broken up. In order to do so, they collected likely couple pairs and monitored them over a period of time. Notably, our data collection procedure is reminiscent of theirs, i.e., using keyword-based filters to find highly likely couple pairs. However, their work utilizes a second-stage crowdworker based evaluation to check for breakups. \subsection{User Profiling and Friend Recommendation} Our work is a cross between user profiling and user match-making systems. An earlier work \cite{DBLP:conf/sigir/DiazMA10} proposed a gradient-boosted learning-to-rank model for match-making users on a dating forum. While the authors ran experiments on a dating service website, they drew parallels with other match-making services such as job-seeking forums. The user profiling aspect in our work comes from the fact that we use social networks to learn user representations. As such, our approach performs both user profiling and match-making within an end-to-end framework. \cite{Wei:2017:BWP:3018661.3018717} proposed a deep learning personality detection system which is trained on social posts on Weibo and Twitter. \cite{ICWSM1715681} proposed a Twitter personality detection system based on machine learning models. \cite{DBLP:conf/acl/BentonAD16} learned multi-view embeddings of Twitter users using canonical correlation analysis for friend recommendation.
From an application perspective, our work is also highly related to `People you might know' or `who to follow' (WTF) services on RSNs \cite{Gupta:2013:WFS:2488388.2488433}, albeit with a romantic twist. In practical applications, our RSN based relationship recommender can either be deployed as part of a WTF service or used to increase the visibility of the content of users with high compatibility scores. \subsection{Deep Learning and Collaborative Ranking} One-class collaborative filtering (also known as collaborative ranking) \cite{hu2008collaborative} is a central research problem in IR. In general, deep learning \cite{He:2017:NCF:3038912.3052569,Tay:2018:LRM:3178876.3186154,zhang2018neurec} has recently become very popular for collaborative ranking problems. However, to the best of our knowledge, our work is the first deep learning based approach for the online dating domain. \cite{zhang2017deep} provides a comprehensive overview of deep learning methods for CF. Notably, our approach also follows the neural IR approach which is mainly concerned with modeling document-query pairs \cite{DBLP:conf/sigir/SeverynM15,DBLP:conf/sigir/TayPLH17,tay2017cross} or user-item pairs \cite{zheng2017joint,DBLP:journals/corr/abs-1801-09251} since we deal with the textual domain. Finally, our work leverages recent advances in deep learning, namely Gated Recurrent Units \cite{DBLP:journals/corr/ChoMGBSB14} and Neural Attention \cite{yang2016hierarchical,luong2015effective,bahdanau2014neural}. The key idea of neural attention is to learn to attend to various segments of a document, eliminating noise and emphasizing the important segments for prediction. \newtheorem{Definition}{Definition}[section] \section{Problem Definition and Notation} In this section, we introduce the formal problem definition of this work. \begin{Definition} Let $U$ be the set of users. Let $s_i$ be the social profile of user $u_i \in U$.
Each social profile $s_i \in S$ contains $\eta$ documents. Each document $d_i \in s_i$ contains a maximum of $L$ words. Given a user $u_i$ and his or her social profile $s_i$, the task of the Relationship Recommendation problem is to produce a ranked list of candidates based on a computed relevance score $F(s_i, s_j)$, where $s_j$ is the social profile of the candidate user $u_j$ and $F(\cdot)$ is a parameterized function. \end{Definition} There are mainly three types of learning to rank methods, namely pointwise, pairwise and list-wise. Pointwise considers each user pair individually, computing a relevance score solely based on the current sample, i.e., binary classification. Pairwise trains via noise contrastive estimation, which often minimizes a loss function like the margin-based hinge loss. List-wise considers an entire list of candidates and is seldom employed due to its cumbersome implementation. Our proposed \textsc{CoupleNet} employs a pairwise paradigm. The intuition for this is that relationship recommendation is very sparse and has very imbalanced classes (for each user, only one ground truth exists); hence, training binary classification models suffers from class imbalance. Moreover, the choice of pairwise learning to rank is also motivated by its good performance in our early experiments. \section{The Love Birds Dataset} Since there are no publicly available datasets for training relationship recommendation models, we construct our own. The goal is to construct a list of user pairs in which both users are in a relationship with each other. Our dataset is constructed via distant supervision from Twitter. We call this dataset the \textit{Love Birds} dataset. This not only references the metaphorical meaning of the phrase `love birds' but also deliberately references the fact that the Twitter icon is a bird.
This section describes the construction of our dataset\footnote{To facilitate further research, our dataset will be released at \url{https://github.com/vanzytay/ICWSM18_LB2M}. Distribution will come in the form of tweet IDs and labels, to adhere to the regulations of the Twitter public API. }. Figure \ref{overview} describes the overall process of our distant supervision framework. \begin{figure}[ht] \includegraphics[width=0.38\textwidth]{images/overview.pdf} \caption{Overview of our distant supervision and deep learning approach for relationship recommendation.}\label{overview} \end{figure}% \subsection{Distant Supervision} Using the Twitter public API, we collected tweets containing emojis whose descriptions contain the keyword \textit{`heart'}. The key is to find tweets where a user expresses love to another user. We observed that there are countless tweets such as \textit{`good night baby love you 😘'} and \textit{`darling i love you so much 💓'} on Twitter. As such, the initial list of tweets is crawled by watching heart and love-related emojis, e.g., 😘, 💓, 💖 etc. By collecting tweets containing these emojis, we form our initial candidate list of couple tweets (tweets that two people in a relationship send to each other). Through this process, we collected 10 million tweets over a span of a couple of days. Each tweet contains a sender and a target (the user mentioned and also the target of affection). \subsubsection{Keyword Filtering} We also noticed that love-related emojis do not necessarily imply a romantic relationship between two users. For instance, we noticed that a large percentage of such tweets express affection towards family members. Given the large corpus of candidates, we can apply a stricter filtering rule to obtain true couples. To this end, we use a ban list of words such as `bro', `sis', `dad', `mum' and apply regular expression based filtering on the candidates.
We also observed a huge amount of music-related tweets, e.g., `I love this song so much 💓!'. Hence, we also added music-related keywords such as `perform', `music', `official' and `song' to the ban list. Finally, we also noticed that people use the heart emoji frequently when asking for someone to follow them back. As such, we also ban the word `follow'. \subsubsection{User-based Filtering} We further restricted tweets to contain only a single mention. Intuitively, mentioning more than one person implies a group message rather than a couple tweet. We also checked whether one user has a much higher follower count than the other user; we found that such asymmetry typically arises because people send love messages to popular pop idols (a huge bulk of crawled tweets came from fangirls sending love messages to @harrystylesofficial). Any tweet involving a user with more than 5K followers is removed from the candidate list. \subsection{Forming Couple Pairs} Finally, we arrive at 12K tweets after aggressive filtering. Using the 12K `cleaned' couple tweets, we formed a list of couples. We sorted couples in alphabetical order, i.e., (clara, ben) becomes (ben, clara), and removed duplicate couples to ensure that there are no `bidirectional' pairs in the dataset. For each user on this list, we crawled their timeline and collected their 200 latest tweets. Subsequently, we applied further preprocessing to remove explicit couple information. Notably, we do not differentiate between male and female users (since the Twitter API does not provide this information either). The signal for distant supervision can be thought of as an explicit signal, which is commonplace in recommendation problems that are based on explicit feedback (user ratings, reviews, etc.). In this case, an act (tweet) of love or affection is the signal used. We call this explicit couple information.
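The filtering and pair-forming steps above can be sketched as follows. This is a minimal illustration rather than our actual pipeline: the ban list is a small excerpt of the one described in the text, the field names of the tweet records are hypothetical, and the 5K follower threshold is the one stated above.

```python
import re

# Illustrative excerpt of the ban list described in the text; the real list is larger.
BAN_WORDS = ["bro", "sis", "dad", "mum", "perform", "music", "official", "song", "follow"]
BAN_RE = re.compile(r"\b(" + "|".join(BAN_WORDS) + r")\b", re.IGNORECASE)
MAX_FOLLOWERS = 5000  # users above this are likely idols/celebrities, not partners

def is_couple_candidate(tweet):
    """Keyword- and user-based filtering for a candidate couple tweet."""
    if BAN_RE.search(tweet["text"]):
        return False                      # family / music / follow-back tweets
    if len(tweet["mentions"]) != 1:
        return False                      # group messages are not couple tweets
    if max(tweet["sender_followers"], tweet["target_followers"]) > MAX_FOLLOWERS:
        return False                      # fan tweets sent to popular idols
    return True

def form_couple_pairs(tweets):
    """Sort each (sender, target) pair alphabetically and deduplicate,
    so bidirectional pairs collapse into a single couple."""
    pairs = set()
    for t in tweets:
        if is_couple_candidate(t):
            pairs.add(tuple(sorted((t["sender"], t["mentions"][0]))))
    return pairs
```

Sorting each pair before inserting it into a set is what makes (clara, ben) and (ben, clara) count as the same couple.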
\subsubsection{Removing Additional Explicit Couple Information} To ensure that there is no \textit{additional} explicit couple information in each user's timeline, we removed all tweets with any words of affection (heart-related emojis, `love', `dear', etc.). We also masked all mentions with the @USER symbol. This is to ensure that there is no explicit leak of signals in the final dataset. Naturally, a more accurate method is to determine the date on which users got to know each other and then subsequently construct timelines based on tweets prior to that date. Unfortunately, there is no automatic and trivial way to determine this information. Consequently, a fraction of each timeline would possibly have been tweeted when the users were already together in a relationship. As such, in order to remove as many `couple' signals as possible, we try our best to mask such information. \subsection{Why Twitter?} Finally, we answer the question of why Twitter was chosen as our primary data source. One key desideratum was that the data should be public, differentiating ourselves from other works that use proprietary datasets \cite{Xia:2015:RRS:2808797.2809282,Tu:2014:ODR:2567948.2579240}. In designing our experiments, we considered two other popular social platforms, i.e., Facebook and Instagram. Firstly, while Facebook provides explicit relationship information, we found that there is a lack of personal, personality-revealing posts on Facebook. For a large majority of users, the only signals on Facebook mainly consist of shares and likes of articles. The amount of original content created per user is extremely low compared to Twitter, whereby it is trivial to obtain more than 200 tweets per user. Pertaining to Instagram, we found that posts are also generally much sparser, especially in regard to frequency, making it difficult to amass large amounts of data per user. Moreover, Instagram adds a layer of difficulty as it is primarily multi-modal.
In our Twitter dataset, we can easily mask explicit couple information by keyword filters. However, it is non-trivial to mask a user's face on an image. Nevertheless, we would like to consider Instagram as an interesting line of future work. \subsection{Dataset Statistics} Our final dataset consists of 1.858M tweets (200 tweets per user). The dataset contains 9290 users, forming 4645 couple pairs. The couple pairs are split into training, testing and development sets with an 80/10/10 split. The total vocabulary size (after lowercasing) is 2.33M. Ideally, more user pairs could be included in the dataset. However, we also note that the dataset is quite large (almost 2 million tweets) already, posing a challenge for standard hardware with mid-range graphics cards. Since this is the first dataset created for this novel problem, we leave the construction of a larger benchmark for future work. \section{Our Proposed Approach} In this section, we introduce our deep learning architecture, the \textsc{CoupleNet}. Overall, our neural architecture is a hierarchical recurrent model \cite{yang2016hierarchical}, utilizing multi-layered attentions at different hierarchical levels. An overview of the model architecture is illustrated in Figure \ref{modelarch}. There are two sides of the network, one for each user. Our network follows a `Siamese' architecture, with shared parameters for each side of the network. A single data input to our model comprises the user pairs ($U1, U2$) (couples) and ($U1, U3$) (negative samples). Each user has $K$ tweets, each with a maximum length of $L$. The values of $K$ and $L$ are tunable hyperparameters. \begin{figure}[ht] \includegraphics[width=0.46\textwidth]{images/couplenet2.pdf} \caption{Overview of \textsc{CoupleNet} model architecture illustrating the computation of similarity score for User 1 and User 2. Negative sampling side of the network is omitted due to lack of space.
}\label{modelarch} \end{figure}% \subsection{Embedding Layer} For each user, the input to our network is a matrix of indices, each corresponding to a specific word in the dictionary. The embedding matrix $\textbf{W} \in \mathbb{R}^{d \times |V|}$ acts as a look-up whereby each index selects a $d$ dimensional vector, i.e., the word representation. Thus, for each user, we have $K \times L$ vectors of dimension size $d$. The embedding layer is shared for all users and is initialized with pretrained word vectors. \subsection{Learning Tweet Representations} For each user, the output of the embedding layer is a tensor of shape $K \times L \times d$. We pass each tweet through a recurrent neural network. More specifically, we use Gated Recurrent Unit (GRU) encoders with attentional pooling to learn an $n$ dimensional vector for each tweet. \subsubsection{Gated Recurrent Units (GRU)} The GRU accepts a sequence of vectors and recursively composes each input vector into a hidden state. The recursive operation of the GRU is defined as follows: \begin{align*} z_t &= \sigma (W_z x_t + U_z h_{t-1} + b_z) \\ r_t &= \sigma (W_r x_t + U_r h_{t-1} + b_r) \\ \hat{h}_t &= \tanh (W_h \: x_t + U_h (r_t \odot h_{t-1}) + b_h) \\ h_t &= z_t \odot h_{t-1} + (1-z_t) \odot \hat{h}_t \end{align*} where $h_t$ is the hidden state at time step $t$, $z_t$ and $r_t$ are the update gate and reset gate at time step $t$ respectively, $\sigma$ is the sigmoid function and $\odot$ denotes the element-wise product. $x_t$ is the input to the GRU unit at time step $t$. Note that a time step corresponds to parsing a sequence of words sequentially in this context. $W_z, W_r, W_h \in \mathbb{R}^{n \times d}$ and $U_z, U_r, U_h \in \mathbb{R}^{n \times n}$ are parameters of the GRU layer. \subsubsection{Tweet-level Attention} The output of each GRU is a sequence of hidden vectors $h_1, h_2, \cdots, h_L$, collected as the columns of $\textbf{H} \in \mathbb{R}^{n \times L}$. Each hidden vector is $n$ dimensions, which corresponds to the parameter size of the GRU.
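For concreteness, the GRU recurrence above can be sketched in NumPy as follows. This is an illustrative sketch rather than our actual implementation (our models are built in TensorFlow); the parameters are randomly initialized here and the function names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, params):
    """One GRU step: x_t is a d-dim word vector, h_prev the n-dim hidden state."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x_t + Uz @ h_prev + bz)             # update gate z_t
    r = sigmoid(Wr @ x_t + Ur @ h_prev + br)             # reset gate r_t
    h_hat = np.tanh(Wh @ x_t + Uh @ (r * h_prev) + bh)   # candidate state
    return z * h_prev + (1.0 - z) * h_hat                # interpolated new state

def gru_encode(X, n, rng):
    """Run a GRU over one tweet X (an L x d matrix of word embeddings);
    returns the L x n matrix whose rows are the hidden states h_1..h_L."""
    d = X.shape[1]
    shapes = [(n, d), (n, n), (n,), (n, d), (n, n), (n,), (n, d), (n, n), (n,)]
    params = [rng.standard_normal(s) * 0.1 for s in shapes]
    h = np.zeros(n)
    H = []
    for x_t in X:
        h = gru_step(x_t, h, params)
        H.append(h)
    return np.stack(H)
```

Since each $h_t$ is an interpolation between $h_{t-1}$ and a $\tanh$ candidate, every hidden state stays inside $(-1, 1)$.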
To learn a single $n$ dimensional vector, the last hidden vector $h_L$ is typically considered. However, a variety of pooling functions such as the average pooling, max pooling or attentional pooling can be adopted to learn more informative representations. More specifically, neural attention mechanisms are applied across the matrix $\textbf{H}$, learning a weighted representation of all hidden vectors. Intuitively, this learns to select more informative words to be passed to subsequent layers, potentially reducing noise and improving model performance. \begin{align*} \textbf{Y} = \text{tanh}(W_y \: \textbf{H}) \:\:;\:\: a= \text{softmax}(w^{\top} \: \textbf{Y}) \:\:;\:\: r = \textbf{H}\: a^{\top} \end{align*} where $W_y \in \mathbb{R}^{n \times n}, w \in \mathbb{R}^{n}$ are the parameters of the attention pooling layer. The output $r \in \mathbb{R}^{n}$ is the final vector representation of the tweet. Note that the parameters of the attentional pooling layer are shared across all tweets and across both users. \subsection{Learning User Representations} Recall that each user is represented by $K$ tweets and for each tweet we have a $n$ dimensional vector. Let $t^i_1, t^i_2 \cdots t^i_K$ be all the tweets for a given user $i$. In order to learn a fixed $n$ dimensional vector for each user, we require a pooling function across each user's tweet embeddings. In order to do so, we use a Coupled Attention Layer that learns to attend to U1 based on U2 (and vice versa). Similarly, for the negative sample, coupled attention is applied to (U1, U3) instead. However, we only describe the operation of (U1, U2) for the sake of brevity. \subsubsection{Coupled Attention} The key intuition behind the coupled attention layer is to learn attentional representations of U1 with respect to U2 (and vice versa). Intuitively, this compares each tweet of U1 with each tweet of U2 and learns to weight each tweet based on this grid-wise comparison scheme. 
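Before turning to the grid computation, the tweet-level attentive pooling defined above can be sketched in NumPy. This assumes, as above, that the hidden states are collected column-wise in $\textbf{H}$; the parameter matrices here are randomly initialized for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pool(H, Wy, w):
    """Attentive pooling over GRU hidden states.
    H  : n x L matrix whose columns are the hidden vectors h_1..h_L.
    Wy : n x n projection, w : n-dim scoring vector.
    Returns the n-dim tweet representation r = H a^T."""
    Y = np.tanh(Wy @ H)   # n x L projected states
    a = softmax(w @ Y)    # L attention weights summing to 1
    return H @ a          # weighted combination of hidden states
```

The softmax ensures the weights form a distribution over the $L$ words, so the pooled vector is a convex combination of the hidden states.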
Let U1 and U2 be represented by a sequence of $K$ tweets (each of which is an $n$ dimensional vector) and let $T_1, T_2 \in \mathbb{R}^{K \times n}$ be the tweet matrices for U1 and U2 respectively. For each tweet pair ($t^{1}_i, t^{2}_j$), we utilize a feed-forward neural network to learn a similarity score between the two tweets. As such, each value of the similarity grid is computed as: \begin{equation} s_{ij} = W_{c} \: [t^{1}_i; t^{2}_j] + b_c \end{equation} where $W_c \in \mathbb{R}^{1 \times 2n}$ and $b_c \in \mathbb{R}$ are parameters of the feed-forward neural network. Note that these parameters are shared across all tweet pair comparisons. The score $s_{ij}$ is a scalar value indicating the similarity between tweet $i$ of U1 and tweet $j$ of U2. \subsubsection{Aggregating Strong Signals} Given the similarity matrix $\textbf{S} \in \mathbb{R}^{K \times K}$, the strongest signals across each dimension are aggregated using max pooling. For example, by taking a max over the columns of \textbf{S}, we regard the importance of tweet $i$ of U1 as the strongest influence it has over all tweets of U2. The result of this aggregation is two $K$ length vectors which are used to attend over the original sequence of tweets. The following operations describe the aggregation functions: \begin{align} a^{row} = \text{smax}(\max_{row} \textbf{S}) \:\:\:\text{and}\:\:\: a^{col} = \text{smax}(\max_{col} \textbf{S}) \end{align} where $a^{row}, a^{col} \in \mathbb{R}^{K}$ and smax is the softmax function. Subsequently, both of these vectors are used to attentively pool the tweet vectors of each user. \begin{align*} u_1 = T_1^{\top} \: a^{col} \:\:\text{and}\:\:u_2 = T_2^{\top} \: a^{row} \end{align*} where $u_1, u_2 \in \mathbb{R}^{n}$ are the final user representations for U1 and U2. \subsection{Learning to Rank and Training Procedure} Given embeddings $u_1, u_2, u_3$, we introduce our similarity modeling layer and learning to rank objective.
Given $u_1$ and $u_2$, the similarity between each user pair is modeled as follows: \begin{equation} s(u_1, u_2) = \frac{u_1 \cdot u_2}{|u_1| |u_2|} \end{equation} which is the cosine similarity function. Subsequently, the pairwise ranking loss is optimized. We use the margin-based hinge loss to optimize our model. \begin{equation} J = \max \{0, \lambda - s(u_1,u_2) + s(u_1, u_3) \} \end{equation} where $\lambda$ is the margin hyperparameter, $s(u_1, u_2)$ is the similarity score for the ground truth (true couples) and $s(u_1, u_3)$ is the similarity score for the negative sample. This function aims to discriminate between couples and non-couples by increasing the margin between the ranking scores of these user pairs. Parameters of the network can be optimized efficiently with stochastic gradient descent (SGD). \section{Empirical Evaluation} Our experiments are designed to answer the following Research Questions (\textbf{RQ}s). \begin{itemize} \item \textbf{RQ1} - How well are machine learning and deep learning methods able to learn, predict, and recommend relationships just based on linguistic information from social profiles? Is the romantic compatibility of two people predictable just based on textual information? \item \textbf{RQ2} - Does the amount of information (number of tweets per user) affect the ability to recommend relationships? \item \textbf{RQ3} - Are we able to derive any insight on how these models are learning to recommend relationships? Are attention models able to produce explainable relationship recommendations? \end{itemize} \subsection{Experimental Setup} All empirical evaluation is conducted on our LoveBirds dataset, which has been described earlier. This section describes the evaluation metrics used and the evaluation procedure. \subsubsection{Evaluation Metrics} Our problem is posed as a learning-to-rank problem.
As such, the evaluation metrics used are as follows: \begin{itemize} \item \textbf{Hit Ratio @N} is the ratio of test samples which are correctly retrieved within the top $N$ users. We evaluate on $N=10,5,3$. \item \textbf{Accuracy} is the percentage of test samples that have been correctly ranked in the top position. \item \textbf{Mean Reciprocal Rank (MRR)} is a commonly used information retrieval metric. The reciprocal rank of a single test sample is the multiplicative inverse of its rank. The MRR is computed by $\frac{1}{|Q|} \sum^{|Q|}_{i=1} \frac{1}{rank_i}$. \item \textbf{Mean Rank} is the average rank of all test samples. \end{itemize} \subsubsection{Evaluation Procedure} Our experimental procedure samples $100$ negative users per test sample and ranks the golden sample amongst these $100$ negative samples. \subsubsection{Algorithms Compared} In this section, we discuss the algorithms and baselines compared. Notably, there are no established benchmarks for this new problem. As such, we create 6 baselines to compare against our proposed \textsc{CoupleNet}. \begin{itemize} \item \textbf{RankSVM (Tf-idf)} - This model is a RankSVM (Support Vector Machine) trained on tf-idf vectors. This model is known to be a powerful vector space model (VSM) baseline. The feature vector of each user is a $k$-dimensional vector, representing the top-$k$ most common n-grams. The n-gram range is set to (1,3) and $k$ is set to 5000 in our experiments. Following the original implementation, the kernel of RankSVM is a linear kernel. \item \textbf{RankSVM (Embed)} - This model is a RankSVM model trained on pretrained (static, un-tuned) word embeddings. For each user pair, the feature vector is the sum of all word embeddings of both users. \item \textbf{MLP (Embed)} - This is a Multi-layered Perceptron (MLP) model that learns to non-linearly project static word embeddings. Each word embedding is projected using a 2-layered MLP with ReLU activations.
The user representation is the sum of all transformed word embeddings. \item \textbf{DeepCoNN (Deep Co-operative Neural Networks)} \cite{zheng2017joint} is a convolutional neural network (CNN). CNNs learn n-gram features by sliding weights across an input. In this model, all of a user's tweets are concatenated and encoded into a $d$-dimensional vector via a convolutional encoder. We use a fixed filter width of $3$. DeepCoNN was originally proposed for the item recommendation task using reviews. In our context, we adapt DeepCoNN for our RSR task (tweets are analogous to reviews). Given the different objectives (MSE vs ranking), we also swap\footnote{In our problem, we found that the FM layer significantly degraded performance.} the factorization machine (FM) layer for the cosine similarity. The number of filters is $100$. A max pooling layer is used to aggregate features. \item \textbf{Baseline Gated Recurrent Unit (GRU)} - We compare with a baseline GRU model. Similar to the DeepCoNN model, the baseline GRU considers a user to be a concatenation of all the user's tweets. The size of the recurrent cell is $100$ dimensions. \item \textbf{Hierarchical GRU (H-GRU)} - This model learns user representations by first encoding each tweet with a GRU encoder. The tweet embedding is the last hidden state of the GRU. Subsequently, all tweet embeddings are summed. This model serves as an ablation baseline of our model, i.e., removing all attentional pooling functions. \end{itemize} \subsubsection{Implementation Details} All models were implemented in TensorFlow on a Linux machine. For all neural network models, we follow a \textit{Siamese} architecture (shared parameters for both users) and mainly vary the neural encoder. The cosine ranking function and hinge loss are then used to optimize all models.
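For reference, the four evaluation metrics listed earlier can all be computed from the rank of the ground-truth partner in each test sample's candidate list; a minimal sketch (our own helper, assuming 1-indexed ranks):

```python
def ranking_metrics(ranks, n=10):
    """Compute Hit Ratio@n, Accuracy, MRR and Mean Rank from a list of
    1-indexed ranks of the ground-truth partner per test sample."""
    q = len(ranks)
    hit_at_n = sum(r <= n for r in ranks) / q   # ground truth within top n
    accuracy = sum(r == 1 for r in ranks) / q   # ground truth at position 1
    mrr = sum(1.0 / r for r in ranks) / q       # mean reciprocal rank
    mean_rank = sum(ranks) / q
    return hit_at_n, accuracy, mrr, mean_rank
```

Each test sample contributes one rank, obtained by scoring the golden partner against the 100 sampled negatives.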
We train all models with the Adam \cite{DBLP:journals/corr/KingmaB14} optimizer with a learning rate of $10^{-3}$, since this learning rate consistently produced the best results across all models. The batch size is tuned amongst $\{16,32,64\}$ and models are trained for $10$ epochs. We report the result based on the best performance on the development set. The margin is tuned amongst $\{0.1, 0.2, 0.5\}$. All model parameters are initialized with Gaussian distributions with a mean of 0 and a standard deviation of $0.1$. The L2 regularization is set to $10^{-8}$. We use a dropout of $0.5$ after the convolution or recurrent layers. A dropout of $0.8$ is set after the Coupled Attention layer in our model. Text is tokenized with NLTK's tweet tokenizer. We initialize the word embedding matrix with GloVe \cite{DBLP:conf/emnlp/PenningtonSM14} trained on the Twitter corpus. All words that do not appear more than $5$ times are assigned unknown tokens. All tweets are truncated at a fixed length of $10$ tokens. Early experiments found that raising the number of tokens per tweet does not improve performance. The number of tweets per user is tuned amongst $\{10,20,50,100,150,200\}$ and reported in our experimental results.
\begin{figure}[t] \center \begin{subfigure}[t]{0.22\textwidth} \includegraphics[width=\linewidth]{images/HR_10.pdf} \caption{HR@10 Results}\label{L} \end{subfigure}% \begin{subfigure}[t]{0.22\textwidth} \includegraphics[width=\linewidth]{images/HR_5.pdf} \caption{HR@5 Results} \end{subfigure}% \begin{subfigure}[t]{0.22\textwidth} \includegraphics[width=\linewidth]{images/HR_3.pdf} \caption{HR@3 Results} \end{subfigure} \begin{subfigure}[t]{0.22\textwidth} \includegraphics[width=\linewidth]{images/Accuracy.pdf} \caption{Accuracy Results} \end{subfigure}% \begin{subfigure}[t]{0.22\textwidth} \includegraphics[width=\linewidth]{images/MRR.pdf} \caption{MRR Results} \end{subfigure}% \begin{subfigure}[t]{0.22\textwidth} \includegraphics[width=\linewidth]{images/Mean_Rank.pdf} \caption{Mean Rank Results} \end{subfigure}% \caption{Experimental Results on the LoveBirds2M dataset. Results are plotted against number of tweets. \textit{Best viewed in color}. CoupleNet (\textit{red}) outperforms all baselines. }\label{exp_results} \end{figure} \subsection{Discussion and Analysis} Figure \ref{exp_results} reports the experimental results on the LoveBirds2M dataset. For all baselines and evaluation metrics, we compare across different settings of $\eta$, the number of tweets per user that is used to train the model. Firstly, we observe that \textsc{CoupleNet} significantly outperforms most of the baselines. Across most metrics, there is almost a $180\%-200\%$ relative improvement over DeepCoNN, the state-of-the-art model for item recommendation with text data. The performance improvement over the baseline GRU model is also extremely large, i.e., with a relative improvement of approximately $4$ times across all metrics. This shows that concatenating all of a user's tweets into a single document severely hurts performance. We believe that this is due to the inability of recurrent models to handle long sequences. 
Moreover, DeepCoNN performs about $2$ times better than the baseline GRU model. On the other hand, we observe that H-GRU significantly improves over the baseline GRU model. In the H-GRU model, sequences are only $L=10$ tokens long but are encoded $K$ times with shared parameters. On the other hand, the GRU model has to process $K \times L$ words, which inevitably causes performance to drop significantly. While the performance of the H-GRU model is reasonable, it is still significantly outperformed by our \textsc{CoupleNet}. We believe this is due to the incorporation of the attentional pooling layers in our model, which allows it to eliminate noise and focus on the important keywords. A surprisingly strong baseline is the MLP (Embed) model, which outperforms DeepCoNN but still performs much worse than \textsc{CoupleNet}. On the other hand, RankSVM (Embed) performs poorly. We believe that this is attributed to the insufficiency of the linear kernel of the SVM. Since RankSVM and MLP are trained on the same features, we believe that the nonlinear ReLU transformations of the MLP improve performance significantly. Moreover, the MLP model has 2 layers, which learn different levels of abstraction. Finally, the performance of RankSVM (Tf-idf) is also poor. However, we observe that RankSVM (Tf-idf) occasionally outperforms RankSVM (Embed) slightly. While other models display a clear trend in performance with respect to the number of tweets, the performance of RankSVM (Tf-idf) and RankSVM (Embed) seems to fluctuate across the number of user tweets. Finally, we observe a clear trend of performance gain with respect to the number of user tweets. This is intuitive because more tweets provide the model with greater insight into the user's interests and personality, allowing a better match to be made. The improvement seems to follow a logarithmic scale, which suggests diminishing returns beyond a certain number of tweets. Finally, we report the time cost of \textsc{CoupleNet}.
With $200$ tweets per user, the cost of training is approximately $2$ minutes per epoch on a medium-grade GPU. This is much faster than expected because the GRUs benefit from parallelism, as they can process multiple tweets simultaneously. \subsection{Ablation Study} In this section, we study the component-wise effectiveness of \textsc{CoupleNet}. We removed layers from \textsc{CoupleNet} in order to empirically motivate the design of each component. Firstly, we switched \textsc{CoupleNet} to a pointwise classification model, minimizing a cross entropy loss. We found that this halves the performance. As such, we observe the importance of pairwise ranking. Secondly, we swapped the cosine similarity for an MLP layer with scalar sigmoid activation (to ensure outputs lie within $[0,1]$). We also found that the performance drops significantly. Finally, we also observe that the attention layers of \textsc{CoupleNet} contribute substantially to the performance of the model. More specifically, removing both the GRU attention and coupled attention layers causes performance to drop by 13.9\%. Removing the couple attention causes a performance drop of $2.5\%$, while removing the GRU attention drops performance by $3.9\%$. It also seems that dropping both degrades performance more than expected (i.e., more than a straightforward summation of the individual degradations). \begin{table}[htbp] \centering \small \begin{tabular}{lc} \midrule Model & \multicolumn{1}{l}{HR@10} \\ \midrule \textsc{CoupleNet} & 64.1 \\ w/o couple attention & 61.6 (-2.5\%) \\ w/o GRU attention & 60.2 (-3.9\%) \\ w/o GRU attention and couple attention & 50.2 (-13.9\%)\\ w/o cosine similarity & 33.8 (-30.3\%) \\ w/o pairwise (using pointwise) & 36.1 (-28.0\%)\\ \midrule \end{tabular}% \caption{Component-wise ablation study with $\eta=200$. } \label{tab:addlabel}% \end{table}% \subsection{Overall Quantitative Findings} In this subsection, we describe the overall findings of our quantitative experiments.
\begin{itemize} \item Overall, the best HR@10 score for \textsc{CoupleNet} is about $64\%$, i.e., if an application were to recommend the top $10$ prospective partners to a user, then the ground truth would appear in this list $64\%$ of the time. Moreover, the accuracy is $25\%$ (ranking out of 100 candidates), which is also reasonably high. Given the intrinsic difficulty of the problem, we believe that the performance of \textsc{CoupleNet} on this new problem is encouraging and promising. To answer \textbf{RQ1}, we believe that text-based deep learning systems for relationship recommendation are plausible. However, special care has to be taken, i.e., model selection matters. \item The performance significantly improves when we include more tweets per user. This answers \textbf{RQ2}. This is intuitive since more tweets enable better and more informative user representations, leading to better matching performance. \end{itemize} \section{Qualitative Analysis} In this section, we describe several insights and observations based on real\footnote{We do not explicitly report the actual user accounts in this paper because this might violate their privacy. Actual tweets are slightly modified to protect identities from search.} examples from our LoveBirds2M dataset. One key advantage of \textsc{CoupleNet} is a greater extent of explainability due to the coupled attention mechanism. More specifically, we are able to obtain which of each user's tweets contributed the most to the user representation and the overall prediction. By analyzing the attention output of user pairs, we are able to derive qualitative insights. As an overall conclusion to answer \textbf{RQ3} (which will be elaborated on in the subsequent subsections), we found that \textsc{CoupleNet} is capable of explainable recommendations if there are explicit matching signals such as user interests and demographic similarity between user pairs. Finally, we discuss some caveats and limitations of our approach.
\subsection{Mutual Interest between Couples is Captured in \textsc{CoupleNet}} We observed that \textsc{CoupleNet} is able to capture the mutual interest between couples. Table \ref{tab:bts_table} shows an example from the LoveBirds2M dataset. In general, we found that most user pairs have noisy tweets. However, we also observed that whenever couple pairs have a mutual interest, \textsc{CoupleNet} is able to assign a high attention weight to the relevant tweets. For example, in Table \ref{tab:bts_table}, both users are fans of BTS\footnote{\url{https://en.wikipedia.org/wiki/BTS_(band)}}, a Korean pop idol group. As such, tweets related to BTS are surfaced to the top via coupled attention. The top-ranked tweet of User 1 mentions two entities, \textit{seokjin} and \textit{hoseok} (both entities are members of the pop idol group). This ascertains that \textsc{CoupleNet} is able to, to some extent, explain why two users are matched. This also validates the usage of our coupled attention mechanism. For instance, we could infer that User1 and User2 are matched because of their mutual interest in BTS. A limitation is that it is difficult to interpret why the other tweets (such as a \textit{thank you} without much context, or \textit{supporting your family}) were ranked highly. \begin{table}[htbp] \centering \small \begin{tabular}{cp{3.2cm}p{3.2cm}} \midrule \multicolumn{1}{l}{Rank} & User A & User B \\ \midrule 1 & i apologize to \hltwo{seokjin} and \hltwo{hoseok} 😂 & that's meant to say \hltwo{bts} but imma too tired to \\ 2 & thank you! & more sorry for making such a mess \\ 3 & \hltwo{bts} memes mayo & i'm not sure if I shld post this \\ 4 & @user @user support your family! 😊 & the last couple of days have been shitty for me \\ 5 & welcome hun paramore! & blur pic effects are the best 😊\\ \midrule \end{tabular}% \caption{Example of top-ranked tweets from user pair (ground truth is 1) in which mutual interests have the highest attention weight.
Interest-specific keywords are highlighted in red. \textsc{CoupleNet} successfully ranks this pair at the top position.} \label{tab:bts_table}% \end{table}% \subsection{\textsc{CoupleNet} Infers User Attributes and Demographics from Word Usage} We also discovered that \textsc{CoupleNet} learns to match users with similar attributes and demographics. For example, high school students will be recommended high school students with a higher probability. Note that location, age or any other profile information is not provided to \textsc{CoupleNet}. In other words, user attributes and demographics are inferred solely from a user's tweets. In Table \ref{tab:sch}, we report an example in which the top-ranked tweets (via coupled attention) are high-school-related tweets (homecoming, high school reception). This shows two things: (1) the coupled attention shows that these 3 tweets were the most important tweets for prediction and (2) \textsc{CoupleNet} learns to infer user attributes and demographics without being explicitly provided with such information. We also note that both users seem to have strongly positive tweets ranked highly in their attention scores, which might hint at the role of sentiment and mood in making predictions. \begin{table}[htbp] \centering \small \begin{tabular}{cp{3.2cm}p{3.2cm}} \midrule \multicolumn{1}{l}{Rank} & User C & User D \\ \midrule 1 & homecoming! 😁 & high school reception was a blast 😃 \\ 2 & taking meds for sports & preview will be out soon \\ 3 & so pumped for senior homecoming 😜 😜 😜& this is my life homie \\ \midrule \end{tabular}% \caption{Example of top-ranked tweets from user pair (ground truth is 1) which are ranked by the Coupled Attention layer. \textsc{CoupleNet} places school-related tweets at the top. } \label{tab:sch}% \end{table}% \subsection{\textsc{CoupleNet} Ranks Successfully Even Without Explicit Signals} It is intuitive that not every user will post interest- or demographic-revealing tweets.
For instance, some users might exclusively post about their emotions. When analyzing the ranking outputs of \textsc{CoupleNet}, we found that, interestingly, \textsc{CoupleNet} can successfully rank couple pairs even when there seems to be no explicit matching signal in the social profiles of both users. Table \ref{tab:noise} shows an example where two user profiles do not share any explicit matching signals. User E and User F are a ground truth couple pair, and \textsc{CoupleNet} ranks User F at the top position for User E. The top tweets of User E and User F are mostly emotional tweets that are non-matching. Through this case, we understand that \textsc{CoupleNet} does not simply match people with similar emotions together. Notably, relationship recommendation is also a problem that humans may struggle with. Many times, the reason why two people are in a relationship may be implicit or unclear (even to humans). As such, the fact that \textsc{CoupleNet} ranks couple pairs correctly even when there are no explicit matching signals hints at its ability to go beyond simple keyword matching. In this case, we believe `hidden' (latent) patterns (such as emotions and personality) of the users are being learned and modeled in order to make recommendations. This shows that \textsc{CoupleNet} is not simply acting as a text-matching algorithm and is learning features beyond that.
\begin{table}[htbp] \centering \small \begin{tabular}{cp{3.2cm}p{3.2cm}} \midrule \multicolumn{1}{l}{Rank} & User E & User F \\ \midrule{} 1 & wanna be treated like a princess 😊 & can't deal with this forever 😰\\ 2 & in bed with cosy clothes and fluffy socks & 😭 my diet is screwed \\ 3 & rt if you are currently in a mess & 😔 feel too sick \\ 4 & so much regret lmao & life is shit, home is shit \\ 5 & some girls are just so naturally pretty & still care about my grades \\ \midrule \end{tabular}% \caption{Example of top-ranked tweets (from attention) from user pair (ground truth is 1) in which there is no explicit signal. \textsc{CoupleNet} correctly ranks this user pair at the top position.} \label{tab:noise}% \end{table}% \section{Side Note, Caveats and Limitations} While we show that our approach is capable of producing interpretable results (especially when explicit signals exist), the usefulness of its explainability may still have limitations, e.g., consider Table \ref{tab:noise}, where it is clear that the results are not explainable. Firstly, there might be a complete absence of any interpretable content in two users' profiles in the first place. Secondly, explaining relationships is also challenging for humans. As such, we recommend that the outputs of \textsc{CoupleNet} be used only as a reference. Given that a user's profile may easily contain hundreds to thousands of tweets, one possible use is to use this ranked list to enable more efficient analysis by humans (such as social scientists or linguists). We believe our work provides a starting point for explainable relationship recommendation. \section{Conclusion} We introduced the new problem of relationship recommendation. In order to construct a dataset, we employed a novel distant supervision scheme to obtain real-world couples from social media. We proposed the first deep learning model for text-based relationship recommendation.
Our deep learning model, \textsc{CoupleNet}, is characterized by its usage of hierarchical attention-based GRUs and coupled attention layers. The overall performance evaluation is promising. Despite the huge class imbalance, our approach is able to recommend at a reasonable precision ($64\%$ HR@10 and $25\%$ accuracy while ranking against $100$ negative samples). Finally, our qualitative analysis shows three key findings: (1) \textsc{CoupleNet} finds mutual interests between users for match-making, (2) \textsc{CoupleNet} infers user attributes and demographics in order to make recommendations, and (3) \textsc{CoupleNet} can successfully match-make couples even when there are no explicit matching signals in their social profiles, possibly leveraging emotion- and personality-based latent features for prediction. \footnotesize{
\section{Introduction} Recent studies have shown that deep neural networks (DNNs) can achieve state-of-the-art performance in various applications such as image recognition\cite{IR01, IR02}, natural language processing\cite{NLP01}, speech recognition\cite{SR01, SR02} and complex video games\cite{GP01, GP02}. They have not only achieved exceptional accuracy in different tasks but also surpassed human-level performance in some of them\cite{GP01, reLU01}. DNNs have also been leveraged in health-related studies ranging from medical images\cite{medical01, medical02, medical03, medical04, medical05} to human genome analyses\cite{gen01, gen02, gen03}. Generative Adversarial Networks (GANs)\cite{gan} form a well-researched class of generative models\cite{Vision1, Vision2, Vision3, Vision4}. They can learn the distribution of the training data and generate synthetic data with a distribution very similar to that of the training data. GAN models are particularly used by research communities to generate synthetic datasets in cases where they cannot directly access sensitive datasets. However, using sensitive data to train GAN models raises privacy concerns for participating individuals. Indeed, recent works show that most machine learning models, including GAN models, are vulnerable to a slew of attacks (from model inversion attacks to membership inference attacks) that can expose significant information about the training data\cite{shokri01, LoGAN, Fredrikson, Zhang}. Differential Privacy (DP) \cite{Dwork1, Dwork2} is a common technique to protect the privacy of ML models trained on sensitive data. However, in spite of its popularity, there have been very few recent studies on training GANs in a differentially private way\cite{dpgan, dpgan2019, pate_gan, clinical, BoLi}.
The standard procedure leveraged by these recent studies to enforce DP is to first clip the $\ell_2$ norm of the gradients of the sum of the discriminator's loss on real and fake data and then add Gaussian noise to the clipped gradients. To keep track of the privacy budget, they typically use the Moments Accountant (MA) technique\cite{abadi}. One of the limitations of these recent works is that they focus exclusively on generating synthetic data (e.g., images) without corresponding labels -- an aspect that renders the synthetically generated data useless for supervised learning applications. More importantly, training high-quality GANs with a single-digit epsilon parameter (for differential privacy) has been absent so far, even for the simplest of all tasks: generating MNIST-like digits. In this work, we propose a Differentially Private Conditional GAN (DP-CGAN) training framework, which can preserve the privacy of conditional GAN models using DP\cite{Dwork1, Dwork2}. The main idea in DP-CGAN is that it clips the gradients of the discriminator loss on real and fake data separately, which allows the designer to better control the sensitivity of the model to real (sensitive) data. Moreover, DP-CGAN can generate not only synthetic data but also corresponding labels. Further, DP-CGAN employs the newly introduced R\'enyi Differential Privacy (RDP) Accountant\cite{ren} to track the privacy budget. In comparison to the classical MA technique, RDP accounting provides a tighter bound on the privacy budget, allowing for the addition of less noise without compromising the privacy guarantees. The DP-CGAN framework has three main components: a conditional generator network, a differentially private discriminator network, and a privacy accountant. At each step of the training process, the discriminator network is trained in a differentially private manner in which the gradients of the loss on real and fake data are clipped separately.
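A minimal NumPy sketch of this per-loss clipping, together with the subsequent summing and Gaussian noising of the clipped gradients; the function names and the flattened-gradient representation are our own simplifications of the framework-level implementation:

```python
import numpy as np

def clip_by_l2(grad, bound):
    """Scale a gradient vector so that its l2 norm is at most `bound`."""
    norm = np.linalg.norm(grad)
    return grad * min(1.0, bound / (norm + 1e-12))

def dp_discriminator_grad(grad_real, grad_fake, clip_bound, noise_mult, rng):
    """Clip the gradients of the real and fake losses separately, sum them,
    and add Gaussian noise calibrated to the clipping bound."""
    g = clip_by_l2(grad_real, clip_bound) + clip_by_l2(grad_fake, clip_bound)
    noise = rng.normal(0.0, noise_mult * clip_bound, size=g.shape)
    return g + noise
```

Clipping each loss's gradient separately bounds the contribution of the real (sensitive) data independently of the fake data, which is the sensitivity-control idea described above.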
Afterwards, the sum of these two sets of clipped gradients is computed and perturbed with Gaussian noise. Then, the privacy accountant, which is based on the RDP accountant\cite{ren}, is updated by accumulating the privacy budget spent at each step. Next, the generator network is trained with a non-private optimizer. At any given point in time, if the privacy budget exceeds the target one, the training process is halted and the conditional generator network is ready for the creation of synthetic data and labels. We make the following contributions in this work: \begin{itemize} \item We propose DP-CGAN based on a new gradient clipping and noising procedure, which improves the performance compared to the standard procedure to preserve privacy. To the best of our knowledge, DP-CGAN is the first differentially private GAN framework that can generate both synthetic data and corresponding labels with promising results. It leverages the recently introduced RDP accountant and the TensorFlow Privacy\footnote{https://github.com/tensorflow/privacy} package (by Google) to keep track of the privacy budget. \item We provide preliminary experimental results showing that DP-CGAN can generate good visual and empirical results on the MNIST dataset with a single-digit epsilon parameter. This suggests that our work can be viewed as a first stepping stone towards training high-quality GANs with strong DP guarantees. \item We use the differentially private conditional generative model to create synthetic data and labels which are used (together) in the training of machine learning models. We test the accuracy of the learned models on real data and show that they perform well. We get an area under the ROC curve (AUROC) of $87.57\%$ using DP-CGAN, compared to $92.17\%$ if we were to train the classifier directly on real data. \end{itemize} The remainder of the paper is organized as follows: Section \ref{sec:Pre} provides background on GAN, CGAN, and differential privacy.
Section \ref{sec:Rel} overviews previous related work in the area of preserving the privacy of deep learning models. Section \ref{sec:Our} describes the DP-CGAN framework in detail. Section \ref{sec:Exp} provides the experimental results and Section \ref{sec:Con} concludes the paper with a brief conclusion. \section{Preliminaries}\label{sec:Pre} In this section, we review Generative Adversarial Networks (GANs), Conditional Generative Adversarial Networks (CGANs) and the differential privacy concepts used in DP-CGAN. \subsection{GAN and CGAN} Nowadays, there is great interest in using generative models to create synthetic data that looks like the original data. The Generative Adversarial Network (GAN) proposed by Goodfellow et al.~\cite{gan} is one of the primary methods to learn generative models for images. GANs consist of two main components: a generator and a discriminator. The generator takes noise as input and generates synthetic data by capturing the original data distribution, while the discriminator takes the synthetic data (the generator's output) as well as the original data (training set) and learns to discriminate between the real (training) and fake (synthetic) data distributions. The discriminator outputs a score for a test sample representing whether it is real or fake. The generator and discriminator each try to be as accurate as possible, and the more the generator improves the quality of the fake data, the harder it gets for the discriminator to distinguish between the original and fake data. These two components play a game and are trained simultaneously. Suppose $p_{z}(z)$ is the probability distribution that the random noise $z$ is drawn from, $G(z)$ is the generator network that takes the random noise $z$ as input, and $D(x)$ is the discriminator network that takes the generator's output as well as the input data $x$ drawn from the distribution $p_{data}(x)$.
The game that the generator and discriminator play to achieve a trade-off is encapsulated in the following objective function, $V(D, G)$, of a minimax game: \begin{equation} \begin{split} \min_{G}\max_{D}V(D,G)= \\ \mathop{\displaystyle\mathbb{E}}_{x\sim p_{data}(x)}[log (D(x))] + \\ \mathop{\mathbb{E}}_{z \sim p_{z}(z)}[log(1-D(G(z)))] \end{split} \end{equation} Conditional GAN\cite{cgan} is an extension of GAN in which both the generator and discriminator are conditioned on some side information $y$, which can be any kind of extra information such as class labels or data from other modalities. The objective function of the minimax game for CGAN is the following: \begin{equation} \begin{split} \min_{G}\max_{D}V(D,G)= \\ \mathop{\displaystyle\mathbb{E}}_{x\sim p_{data}(x)}[log (D(x|y))]+ \\ \mathop{\mathbb{E}}_{z \sim p_{z}(z)}[log(1-D(G(z|y)))] \end{split} \end{equation} \subsection{Differential Privacy} Differential privacy\cite{Dwork1, Dwork2} is a mathematical framework to express the level of privacy preservation of individuals in statistical databases. It provides strong privacy guarantees for algorithms on aggregate databases. Intuitively, in differential privacy, the user should learn about the population as a whole but not about any particular individual. In other words, if we replace individual $I$ with another random member of the population, the user should learn the same thing about the dataset in the presence or absence of individual $I$.
Differential privacy has become a de facto standard in data protection in both academia and industry\cite{myers:gltr16} (Apple\cite{Apple}, Google\cite{Google} and the US Census\cite{USCensus}).\\ \textbf{Definition 1.} (\textit{differential privacy}) A randomized mechanism $M$ over a set of databases $D$ satisfies $(\epsilon,\delta)$-differential privacy if for any two adjacent databases $d , d^{'} \in D$, differing in only one sample, and for any subset of outputs $S \subseteq R$, the following inequality holds: \begin{equation*}\tag{3} Pr[M(d) \in S] \le e^{\epsilon}Pr[M(d') \in S]+\delta \end{equation*} In pure differential privacy, $\delta =0$ and the additive term $\delta$ does not exist, while in approximate differential privacy~\cite{Dwork1}, $\delta$ is used for approximation in the cases where pure differential privacy is broken. $\delta$ is the probability that the privacy loss is not bounded by $\epsilon$, and its value should be smaller than $\frac{1}{|d|}$ (the inverse of the database size). Differential privacy is resistant to post-processing. That is, any arbitrary randomized mapping of an $(\epsilon, \delta)$-differentially private algorithm is differentially private as well. \textbf{Theorem 1.} (\textit{post-processing}) Given a randomized algorithm $M : D \xrightarrow{}R$ that is $(\epsilon, \delta)$-differentially private and an arbitrary randomized mapping $f : R \xrightarrow{}R^{'}$, $f \circ M : D \xrightarrow{} R^{'}$ is $(\epsilon, \delta)$-differentially private. A routine approach to privatizing the output of a real-valued function $f: D \xrightarrow{} \mathbb R$ is to add noise with variance on the scale of $f$'s \textit{sensitivity}, $S_f$, to the output. The sensitivity of a function $f$ is defined as the maximum absolute distance $|f(d) - f(d^{'})|$ over adjacent databases $d$ and $d^{'}$.
In formal notation: \begin{equation*}\tag{4} S_f \equiv \max_{d \sim d^{'}} {|f(d) - f(d^{'})|}\,. \end{equation*} Gaussian noise is one of the popular kinds of noise employed in differential privacy, in which $f(d)$ is perturbed by Gaussian noise $N(0 , S_f^2 \cdot \sigma^2)$. That is: \begin{equation*}\tag{5} M(d) \equiv f(d) + N(0 , S_f^2 \cdot \sigma^2) \end{equation*} Composability is one of the useful properties of differential privacy, as it makes it possible to combine multiple differentially private mechanisms into one. A standard analysis implies that the composition of $k$ mechanisms, each of which is $(\epsilon, \delta)$-differentially private, is at least $(k\epsilon, k\delta)$-differentially private\cite{Dwork1, Dwork2, Dwork4}. One possible way of accounting for the privacy loss in a composition of additive-noise mechanisms is the Moment Accountant technique introduced by Abadi et al.\cite{abadi}, which provides stronger estimates of the privacy loss than various versions of the composition theorem~\cite{Dwork1, Dwork4, peter_composition, Dwork5, strong_composition}, including the strong composition theorem \cite{strong_composition}. The RDP accountant \cite{ren} is a newer approach based on a new definition of privacy, R\'enyi differential privacy, which provides a tighter bound on the privacy loss than the Moment Accountant. \section{Related Work}\label{sec:Rel} Several previous studies have proposed approaches to the problem of preserving privacy in deep learning. Shokri et al.\cite{shokri01} developed a distributed approach in which multiple parties train a model on their local training sets independently. Then, each party selects a set of key parameters and shares them with the other parties. Although this method achieves high training accuracy without sharing the input parameters, Abadi et al.\cite{abadi} showed that the overall privacy loss for each party exceeds several thousand on the MNIST dataset, as measured with the Moment Accountant technique they introduced.
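Two of the building blocks above fit in a few lines: the Gaussian mechanism of Eq. (5) applied to a query of known sensitivity, and the naive $(k\epsilon, k\delta)$ composition bound that the Moment Accountant and the RDP accountant improve upon. The sketch below is illustrative only; the toy database and predicate are made up.

```python
import random

def count_query(db, predicate):
    """f(d): number of records satisfying the predicate.  Adjacent
    databases differ in a single record, so the sensitivity S_f is 1."""
    return float(sum(1 for row in db if predicate(row)))

def gaussian_mechanism(db, predicate, sigma, sensitivity=1.0):
    """M(d) = f(d) + N(0, S_f^2 * sigma^2), as in Eq. (5)."""
    return count_query(db, predicate) + random.gauss(0.0, sensitivity * sigma)

def basic_composition(eps, delta, k):
    """Standard composition: k mechanisms, each (eps, delta)-DP,
    are jointly at least (k*eps, k*delta)-DP."""
    return k * eps, k * delta

# Hypothetical toy database of (name, age) records.
db = [("alice", 34), ("bob", 51), ("carol", 29)]
over_30 = lambda row: row[1] > 30
noisy_count = gaussian_mechanism(db, over_30, sigma=1.0)

# E.g., 100 noisy training steps, each (0.01, 1e-7)-DP:
total_eps, total_delta = basic_composition(0.01, 1e-7, 100)
```

The point of the advanced accountants is precisely that this linear growth of $\epsilon$ with $k$ is far too pessimistic for Gaussian mechanisms with subsampling.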
The Moment Accountant mechanism\cite{abadi} can be used to track the overall spent privacy budget, $(\epsilon, \delta)$, for the composition of Gaussian mechanisms with random sampling (e.g., the training process in Stochastic Gradient Descent). This method provides a much tighter estimate of the privacy loss than the standard composition theorem\cite{composition}. It computes the log moments of the random variable indicating the privacy loss and then calculates the tail bound using the moments bound and the standard Markov inequality. The result is a privacy-loss estimate in terms of differential privacy. In addition to the Moment Accountant technique, Abadi et al.\cite{abadi} proposed a method to make the Stochastic Gradient Descent (SGD) process differentially private. Private Aggregation of Teacher Ensembles (PATE)\cite{pate} is a framework that leverages the Moment Accountant mechanism to trace the privacy leakage of a knowledge-transfer task using differential privacy. It presents a differentially private semi-supervised learning method in which the training data is split into multiple disjoint sets and the teacher models are trained independently. The teacher ensemble predicts the labels after perturbing the counts of the teachers' votes with Laplace noise, while the student model is trained on public data as well as labeled data from the teacher model and can be published. Although this method outperforms the work of Shokri et al.\cite{shokri01} in terms of both accuracy and privacy, it assumes the model has access to public data, which may not be the case in practice. Moreover, the teacher ensemble only responds to queries for which the consensus among the teachers is sufficiently high. Other previous studies have focused on preserving the privacy of GANs in particular.
The DPGAN method\cite{dpgan} enforces differential privacy during the training process of the discriminator by adding Gaussian noise to the gradient of the Wasserstein distance in the WGAN algorithm and uses the post-processing theorem to guarantee differential privacy for the generator. However, it is unclear how the overall privacy budget is accounted for, the results do not look promising even on the MNIST dataset, and there is no methodology for creating labels for the synthetic images. Similar to the DPGAN method, the PATE-GAN approach\cite{pate_gan} enforces privacy by making the discriminator differentially private. In PATE-GAN, the discriminator is replaced with a modified version of PATE\cite{pate} in which the student model allows back-propagation to the generator and there is no need for access to public training data. It employs the generated data to train different classifiers and evaluates the quality of the generated data by testing these classifiers on real test data. The limitation of PATE-GAN is that it assigns binary labels to synthetic data, and therefore it is not applicable to multi-label datasets. Moreover, the datasets used to evaluate the model are small. Another work is a DPGAN framework for time-series, continuous, and discrete data\cite{dpgan2019}. This framework is similar to the previous DPGAN work\cite{dpgan}, except that it employs the Moment Accountant approach to account for the privacy budget and clips the discriminator gradients while reducing the clipping parameter over time (adaptive clipping). Unlike the DPGAN method\cite{dpgan}, our proposed method leverages the RDP accountant technique to track the consumed privacy budget, $(\epsilon, \delta)$, and generates not only synthetic data but also the labels, using a Conditional GAN model. In contrast to PATE-GAN\cite{pate_gan}, which generates only binary labels, our model generates multi-class labels.
Finally, in the DPGAN frameworks\cite{dpgan, dpgan2019} the discriminator gradients are clipped and perturbed by adding Gaussian noise to the gradients of the discriminator loss, while in our framework, Gaussian noise is added to the accumulation of the clipped gradients of the discriminator loss on real data and the clipped gradients of the discriminator loss on fake data. \section{Our Approach}\label{sec:Our} As mentioned before, DP-CGAN can generate synthetic data as well as the corresponding labels while preserving the privacy of the training samples. To this end, DP-CGAN makes the training process private by injecting random Gaussian noise into the optimization process of the discriminator network. By the post-processing theorem\cite{Dwork3}, making the discriminator network differentially private results in a differentially private generator as well. DP-CGAN tracks the spent privacy loss using the RDP accountant technique\cite{ren}, which provides a tighter estimate of the privacy loss than the Moment Accountant technique. The training procedure stops if the spent privacy budget $(\epsilon, \delta)$ exceeds the target values.
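The core privatization step, clipping per-example gradients for real and fake data separately, summing them, and perturbing the sum with Gaussian noise, can be sketched as follows. This is illustrative NumPy code under assumed array shapes, not the TensorFlow implementation used in the experiments.

```python
import numpy as np

def clip_by_l2_norm(grads, C):
    """Scale each per-example gradient g to g / max(1, ||g||_2 / C),
    so every row has L2 norm at most C."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    return grads / np.maximum(1.0, norms / C)

def private_discriminator_grads(grads_real, grads_fake, C, sigma, rng):
    """Clip real and fake per-example gradients separately, sum them,
    add N(0, sigma^2 C^2) noise, and average over the batch."""
    bs = grads_real.shape[0]
    total = clip_by_l2_norm(grads_real, C).sum(axis=0) \
          + clip_by_l2_norm(grads_fake, C).sum(axis=0)
    noise = rng.normal(0.0, sigma * C, size=total.shape)
    return (total + noise) / bs

rng = np.random.default_rng(0)
g_real = np.array([[3.0, 4.0], [0.1, 0.0]])   # per-example grads, norms 5.0, 0.1
g_fake = np.zeros((2, 2))
g = private_discriminator_grads(g_real, g_fake, C=1.0, sigma=0.0, rng=rng)
# With sigma = 0, the first gradient is clipped to norm 1: (0.6, 0.8)
```

The clipping bounds the contribution of any single training example, which is what ties the added noise to a differential-privacy guarantee for each discriminator update.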
\begin{algorithm} \caption{DP-CGAN} \DontPrintSemicolon Input: Examples $\{x_1 , x_2, ..., x_N\}$, labels $\{y_1, y_2, ..., y_N\}$, target epsilon $\epsilon$, target delta $\delta$, noise scale $\sigma$, clip norm bound $C$, learning rate $lr$, batch size $bs$ Output: Differentially private generator that generates synthetic data and labels should$\_$terminate = False \While{$(step \le max\_step \And !\ should\_terminate)$} { \normalfont{- Sample random batch $(X^t, Y^t)$ of size $bs$ with probability $bs/N$ from data distribution $p_{data}{(X)}$ } \newline \normalfont{- Sample noise batch $Z^t$ of size $bs$ from noise prior $p_{z}{(z)}$ } \tcc{Update the Discriminator Network} $d\_loss\_real \xleftarrow[]{} \log(D(X^{t}))$ $d\_loss\_fake \xleftarrow[]{} \log (1- D(G(Z^{t})))$ \textbf{Compute per-example gradients of discriminator loss on real data $X^t$ and clip them} \For{$i \in X^t$} { Compute ${grad_{d\_real}}^{t} \xleftarrow[]{} \nabla_{\theta_d}d\_loss\_real ({{\theta_d}^{t}}, X_{i})$ } ${grad_{d\_real}}^t = {grad_{d\_real}}^t / \max(1, \frac{||{grad_{d\_real}}^t||_{2}}{C}) $ \textbf{Compute per-example gradients of discriminator loss on fake data $Z^t$ and clip them} \For{$i \in Z^t$} { Compute ${grad_{d\_fake}}^{t} \xleftarrow[]{} \nabla_{\theta_d}d\_loss\_fake ({{\theta_d}^{t}}, Z_{i})$ } ${grad_{d\_fake}}^t = {grad_{d\_fake}}^t / \max(1, \frac{||{grad_{d\_fake}}^t||_{2}}{C}) $ \textbf{Compute the overall gradients of the discriminator and add Gaussian noise to them} ${grad_{d}}^t \xleftarrow[]{}{\frac{1}{bs}\left(\sum \left({grad_{d\_real}}^t + {grad_{d\_fake}}^t\right) + N (0, \sigma^2 C^2 I)\right)}$ \textbf{Take the gradient descent step for the discriminator} ${{\theta_{d}}^{t+1}} \xleftarrow[]{} SGD({grad_{d}}^t, {{\theta_d}^t}, lr) $ \tcc{Update RDP Accountant} \textbf{Accumulate the spent privacy budget using the RDP accountant} \tcc{Update the Generator Network} $g\_loss \xleftarrow[]{} \log (1- D(G(Z^{t})))$ \textbf{Compute gradients of generator loss} Compute ${grad\_g}^{t} \xleftarrow[]{} \nabla_{\theta_g}g\_loss ({{\theta_g}^t}, Z^{t})$ \textbf{Take the gradient descent step for the generator} ${{\theta_g}^{t+1}} \xleftarrow[]{} ADAM({grad\_g}^t, {\theta_g}^t)$ \If{ $spent\_epsilon > \epsilon$ OR $spent\_delta > \delta$}{ \tcc{Running out of privacy budget} should$\_$terminate = True }} \end{algorithm} DP-CGAN makes the optimization of the discriminator loss (discriminator training) differentially private by computing the per-example gradients of the discriminator loss on both real and fake data, clipping the per-example gradients on real and fake data separately, summing the two sets of clipped gradients, perturbing them with Gaussian noise $N(0, \sigma^2 C^2)$, where $\sigma$ is the noise multiplier and $C$ is the clipping value, and finally applying the perturbed gradients. Algorithm 1 outlines the training process of DP-CGAN. According to the algorithm, the model updates the discriminator network and the generator network as long as the number of iterations is less than the maximum iteration count and the spent privacy budget is less than the target $\epsilon$. At each step, it minimizes the discriminator loss function by computing the gradients of the discriminator loss on real data and clipping them by $L_2$-norm (lines 9-12), computing the gradients of the discriminator loss on fake data and clipping them by $L_2$-norm (lines 13-15), computing the overall clipped gradients of the discriminator by adding these two sets of clipped gradients, adding Gaussian noise to them, and averaging over all the perturbed clipped per-example gradients in the batch (lines 16-17), and finally applying the gradients (line 18). The model tracks the spent privacy budget by accumulating it and updating the RDP accountant every time noise is injected into the model (line 20). Then, the gradients of the generator loss are computed and applied so that the generator network gets trained (lines 21-25).
The last step is to check the overall spent privacy budget. If the spent $\epsilon$ or the spent $\delta$ has exceeded the target values, training is stopped; otherwise it continues (lines 26-27). \section{Experimental Results}\label{sec:Exp} We compare the performance of DP-CGAN to CGAN with no privacy and to CGAN trained with the standard differentially private approach. The CGAN architecture used in all models is a vanilla CGAN in which both the generator and the discriminator consist of two fully connected layers. The generator takes a random noise sample $z$ and the corresponding label $y$ as inputs, while the discriminator inputs are a real training sample $x$ and its label $y$. Figure \ref{figure1} depicts the generator and discriminator architecture of the vanilla CGAN. \begin{figure}[ht] \centering \includegraphics[width=1.03\columnwidth]{CGAN_arch.png} \caption{Vanilla CGAN Generator and Discriminator Architecture } \label{figure1} \end{figure} The differentially private CGAN models use the new TensorFlow Privacy package (by Google), a Python library that includes the implementation of a few differentially private optimizers as well as privacy accountants to keep track of the privacy loss. They leverage the differentially private Gradient Descent optimizer and the RDP accountant from this package. The dataset used in the evaluation is the MNIST handwritten digits dataset, containing 60k training samples and 10k test samples. In the experiments, the batch size is set to 600, $\delta = 10^{-5}$, and the learning rate is set by an adaptive approach in which the initial learning rate is 0.15; it is decreased to 0.052 at iteration 10K and fixed at 0.052 for the remaining iterations. We trained Logistic Regression and Multi-Layer Perceptron classifiers using the synthetic data and labels generated by the models and tested the classifiers on real test data.
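These classifiers are scored with the area under the ROC curve (AuROC), reported below. For the binary case, AuROC reduces to the Mann-Whitney rank statistic; the following self-contained sketch illustrates that reduction (it is not the multi-class evaluation code used in the experiments).

```python
import numpy as np

def binary_auroc(y_true, scores):
    """AuROC via the Mann-Whitney rank formulation: the probability that a
    randomly chosen positive example outscores a randomly chosen negative
    one, with ties counted as 1/2."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A classifier that perfectly separates the classes scores 1.0, a random one scores about 0.5, which is the sense in which the numbers in Table 1 should be read.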
The closer the performance of a classifier trained on synthetic data generated by the differentially private models is to that of a classifier trained on real data, the better the model has captured the real data distribution. We measured the performance of the classifiers using the area under the ROC curve metric (AuROC). In the evaluation process, the generative model takes the 60k MNIST training samples and labels as input and generates 60k synthetic labeled samples. Then, the classifier is trained on the generated data. Finally, the performance of the trained classifier is evaluated on the 10k test samples using the AuROC metric. Table 1 lists the AuROC results for the three models as well as for the case in which the classifiers are trained on real data. According to the table, the AuROC of DP-CGAN is higher than that of CGAN trained with the basic differentially private method, indicating that the new clipping and perturbation technique used in DP-CGAN improves the performance. On the other hand, the AuROC of DP-CGAN is about $5\%$ lower than that for real data; this is the price we pay for privacy. \begin{table}[ht!] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & Real & CGAN & \textbf{DP-CGAN} & \multicolumn{1}{|p{1.5cm}|}{\centering CGAN with \\ basic DP}\\ \hline\hline LR & $92.17\%$ & $91.10\%$ & $\bm{87.57\%}$ & $83.42\%$ \\ \hline MLP & $97.60\%$ & $91.06\%$ & $\bm{88.16\%}$ & $83.29\%$\\ \hline \end{tabular} \end{center} \caption{Comparing AuROC for Logistic Regression (LR) and Multi-Layer Perceptron (MLP), which are trained on real data, data generated by CGAN (non-private), DP-CGAN, and CGAN with the basic differentially private approach, using $\epsilon = 9.6$ and $\delta = {10^{-5}}$} \end{table} We also visualized the images generated by the models (Figure 2). In the figure, the left column shows the results for DP-CGAN, the middle column represents the results for CGAN with no privacy, and the right column depicts the synthetic images generated by CGAN with the basic differentially private approach.
According to the figure, the quality of the images generated by DP-CGAN is better than that of CGAN with the basic differentially private approach but worse than that of CGAN with no privacy. \begin{figure}[h!] \subfloat[]{\includegraphics[width=0.05\textwidth]{DP_CGAN.png}\label{subimg:dp-cgan}} \hfill \subfloat[]{\includegraphics[width=0.05\textwidth]{CGAN.png}\label{subimg:cgan}} \hfill \subfloat[]{\includegraphics[width=0.05\textwidth]{Standard.png}\label{subimg:base-dp}} \caption{(a) DP-CGAN, (b) CGAN with no privacy, (c) CGAN with basic differentially private approach} \end{figure} \section{Conclusion}\label{sec:Con} In this work, we proposed DP-CGAN, a differentially private GAN framework capable of generating both synthetic data and the corresponding labels. The main idea behind DP-CGAN is that it clips the gradients of the discriminator loss on real and fake data separately, sums the two sets of gradients, and adds Gaussian noise to the sum. DP-CGAN employs the RDP accountant technique to track the spent privacy budget. The experimental results showed that DP-CGAN improves the performance compared to the CGAN trained with the basic differentially private approach and generates promising results on the MNIST dataset. The architectures we used for the generator and discriminator are rather simple. We plan to consider deep CGAN architectures with multiple convolutional layers to improve the quality of the synthetic data while spending the same privacy budget as for the vanilla CGAN. Moreover, our results are still preliminary but promising, and we aim to demonstrate high-quality differentially private CGANs on more challenging datasets such as CIFAR100 and CelebA/B. {\small \bibliographystyle{ieeetr}
\section{Introduction} A cline describes a gradual change in genotypic or phenotypic frequency as a function of spatial location. Clines often occur in species distributed along an environmental gradient, for instance in temperature, where alternative phenotypes or genotypes are better adapted to the different extremes of the environment. They are frequently observed in natural populations and are important objects of research in evolutionary biology and ecology (e.g.\ \cite{Adrion_etal2015}, \cite{Bedford_etal2015}, \cite{Endler1977}). Measurements of their shape admit inferences about the relative strength of migration and selection. The mathematical theory of clines was initiated by Haldane \cite{Haldane1948}, who derived a reaction-diffusion equation for the equilibrium allele frequencies at a diallelic locus subject to spatially varying selection along a single spatial dimension. He computed the cline, the spatially non-constant solution, for special cases. The mathematical theory of clines became a very active research area in the 1970s, when the consequences of various assumptions about spatial variation in fitnesses and about migration patterns were investigated (Slatkin \cite{Slatkin1973}, Nagylaki \cite{N1975,N1976,N1978}). These authors derived parabolic partial differential equations to describe and study not only the allele frequencies at equilibrium, but also their evolution. At about the same time, and motivated by this work, Conley \cite{Conley1975}, Fleming \cite{Fleming1975}, Fife and Peletier \cite{Fife&Peletier1977,Fife&Peletier1981}, and Henry \cite{Henry1981} developed and employed advanced mathematical methods to investigate existence, uniqueness, and stability of clinal solutions under a variety of assumptions about fitnesses. We refer to spatially nonuniform stationary solutions of the parabolic PDE as clines. 
More recently, Lou, Nagylaki, and their collaborators \cite{LN2002,LN2004,LN2006,LNS2010,Nakashima2016,Nakashima2018,NNS2010} extended previous work in several directions by modeling migration by general elliptic operators on bounded domains in arbitrary dimensions, by admitting wide classes of fitness functions, by including dominance, and by studying multiallelic loci. Several of these extensions revealed qualitatively new features. The theory of one-locus clines has been reviewed in \cite{NL2008} and \cite{LNN2013}. In the present work, we study two-locus clines. Understanding their properties is of biological relevance because many traits are determined by multiple genetic loci which undergo recombination. The resulting mathematical models are much more complex than one-locus models, because the interaction of selection and migration generates probabilistic associations (correlations) among these loci, so called linkage disequilibria, which are eroded in turn by recombination. We shall focus on the simplest case of two diallelic loci with additive fitnesses. The first study of a two-locus cline model is due to Slatkin \cite{Slatkin1975}, who showed numerically that the linkage disequilibrium generated between the two loci tends to steepen the cline. Barton \cite{Barton1983,Barton1986} derived some general results about the consequences of linkage on the linkage disequilibria among multiple loci and provided numerical results that can guide intuition. Most recently, B\"urger \cite{RB2017} analysed a two-locus model in which, following Haldane \cite{Haldane1948}, simple step functions are used to describe the spatial dependence of fitnesses along the real line. Using a singular-perturbation approach, an explicit approximation of the two-locus cline was obtained for the case of strong recombination. The steepening of the cline by linkage could be proved and quantified. 
Our aim here is to develop a rigorous mathematical theory for the existence, uniqueness, and stability of two-locus clines on bounded domains in $\Reals^n$ for fitnesses depending on the spatial location in a general way. In Section 2, we introduce the basic model, which is formulated as a system of semilinear parabolic PDEs. In Section 3, we collect several preliminaries that will be used subsequently. Section 4 is devoted to the study of the boundary equilibria. These can be monomorphic equilibria, i.e., constant stationary solutions such that both loci are globally fixed for one allele, or clines at one locus with the second locus fixed for one or the other allele. For the monomorphic equilibria, stability and bifurcations are determined. In Section 5, we investigate the case of no recombination. The results follow from the theory of diallelic and multiallelic one-locus models \cite{LN2002,LN2004,LN2006} and provide the basis for the investigation of clines maintained under weak recombination, which is the topic of Section 6. There, existence of an asymptotically stable two-locus cline is proved based on a regular perturbation argument. Finally, in Section 7, we treat strong recombination. This may be the biologically most frequently realized case because it applies when the loci are located on different chromosomes or on the same chromosome, but not close together. We prove existence, uniqueness, and global stability of a two-locus cline. In addition to standard elliptic and parabolic PDE methods, our proofs invoke perturbation techniques, persistence, and dynamical systems theory. The article closes with a brief discussion and some open problems. \section{Model}\label{sec:model} We consider a monoecious, diploid population that occupies a bounded, open domain $\Om\subset\Reals^n$ with $C^2$ boundary $\partial\Om$. Fitness of individuals depends on location, but is independent of time, population density, or genotype frequencies.
It is determined by two diallelic loci, $\A$ and $\B$, which recombine at rate $r\ge0$. We model migration by diffusion and assume it is homogeneous, isotropic, and genotype-independent. If the migration variance is $\si^2$, the diffusion constant is $d=\tfrac12\si^2$ \cite{N1975,N1989}. If the alleles at locus $\A$ are denoted by $A$ and $a$, and those at $\B$ by $B$ and $b$, then there are the four possible gametes $AB$, $Ab$, $aB$, and $ab$, which we label as $i=1$, 2, 3, and 4, respectively. We write $I=\{1,2,3,4\}$ for the set of gametes. Let the frequency of gamete $i$ at position $x\in\Om$ and time $t$ be $p_i=p_i(x,t)$, where $p_i\ge0$ and $\sum_{i=1}^4 p_i=1$, and let $p=(p_1,p_2,p_3,p_4)^T$. We denote the usual measure of linkage disequilibrium by \begin{equation}\label{eq:D} D = D(p) = p_1p_4-p_2p_3\,. \end{equation} If $w_{ij}(x)$ is the fitness of the diploid genotype $ij$ at location $x\in\Om$, then \begin{equation}\label{w} w_i = w_i(x,p) = \sum_{j=1}^4 w_{ij}(x)p_j \;\text{ and }\; \wb = \wb(x,p) = \sum_{i=1}^4 w_ip_i \end{equation} are the marginal fitness of gamete $i$ and the population mean fitness, respectively. As is biologically reasonable and common, throughout we posit $w_{ij}=w_{ji}$ and $w_{14}=w_{23}$, i.e., absence of position effects, and assume that every $w_{ij}$ is real valued and H\"older continuous, i.e., $w_{ij}\in C^\ga(\bar\Omega)$ for some $\ga\in(0,1)$. \subsection{Evolutionary equations} We assume that (i) the three evolutionary forces selection, migration, and recombination are of the same order of magnitude and sufficiently weak, (ii) migration is genotype independent, spatially uniform, and isotropic, and (iii) individuals mate locally at random so that Hardy-Weinberg proportions are obtained locally. 
By approximating the exact discrete-space discrete-time model (\cite{RB2009}, \cite{N2009}) by a continuous-space continuous-time model as in \cite{N1989}, the evolution of the gamete frequencies $p_i$, $i\in I$, is described by the following system of partial differential equations: \begin{subequations}\label{dynamics_dsr_pi} \begin{alignat}{2} &\partial_t p_i = d\De p_i + s S_i(x,p) - \et_i r D &\quad&\text{ for } (x,t)\in\Om\times(0,\infty) \,, \label{dynamics_dsr_pi_a} \\ &\partial_\nu p_i = 0 &\quad&\text{ for } (x,t)\in\partial\Om\times(0,\infty)\,, \label{dynamics_dsr_pi_b} \\ &p_i(x,0)\ge0 \, \text{ and } \sum_{i=1}^4 p_i(x,0)=1 &\quad&\text{ for $x\in\bar\Om$} \label{dynamics_dsr_pi_c} \end{alignat} \end{subequations} (cf.\ \cite{Slatkin1975, LNN2013, RB2017}). Here, $\De$ is the Laplace operator in $\Reals^n$, $d>0$ the diffusion constant, $s>0$ a measure of the strength of selection, $r\ge0$ the recombination rate, \begin{equation}\label{et} \et_1=\et_4=-\et_2=-\et_3 = 1\,, \end{equation} and $\nu$ is the unit outer normal vector to the boundary $\partial\Om$. The terms $\et_i r D$ describe the effects of recombination (see Section \ref{sec:reco_LD}). The functions \begin{equation}\label{S_i2} S_i(x,p) = p_i(w_i-\wb) \end{equation} arise from selection (see Section \ref{sec:assumptions_selection}). In many situations, it will be more convenient to scale away $d$ because we focus on the role of recombination. 
Therefore, if we fix $d>0$ and set $\la=s/d$, $\rho=r/d$, rescale time according to $\ta=td$, and return to $t$ instead of $\ta$, we can rewrite \eqref{dynamics_dsr_pi} as \begin{subequations}\label{dynamics_pi} \begin{alignat}{2} &\partial_t p_i = \De p_i + \la S_i(x,p) - \et_i \rh D &\quad&\text{ for } (x,t)\in\Om\times(0,\infty) \,, \label{dynamics_pi_a} \\ &\partial_\nu p_i = 0 &\quad&\text{ for } (x,t)\in\partial\Om\times(0,\infty)\,, \label{dynamics_pi_b} \\ &p_i(x,0)\ge0 \, \text{ and } \sum_{i=1}^4 p_i(x,0)=1 &\quad&\text{ for $x\in\bar\Om$}\,. \label{dynamics_pi_c} \end{alignat} \end{subequations} \subsection{Basic properties of the dynamics}\label{sec:basic_dynamics} If the initial data $p_i(x,0)$ are continuous on $\bar\Om$, then \eqref{dynamics_pi} has a unique classical solution $p(x,t)$ for every $\rh\ge0$ that exists for all $t\ge0$. It satisfies \begin{equation}\label{constraint1} p_i(x,t)\ge0 \text{ and } \sum_{i=1}^4 p_i(x,t)=1 \; \text{ on } \bar\Om\times(0,\infty)\,. \end{equation} In addition, if for some $i\in I$, \begin{equation}\label{flow_into_interior1} p_i(x,0) \not\equiv0 \text{ on } \bar\Om, \text{ then } p_i(x,t) > 0 \text{ on } \bar\Om\times(0,\infty)\,. \end{equation} The first assertion in \eqref{constraint1} and \eqref{flow_into_interior1} follow from the strong maximum principle for parabolic equations \cite{PW}. For the second assertion in \eqref{constraint1}, we observe from \eqref{w}, \eqref{et}, \eqref{S_i2}, and \eqref{dynamics_pi_a} that \begin{equation}\label{PDE_sum_p_i} \partial_t \left(\sum_{i=1}^4 p_i\right) = \De \left(\sum_{i=1}^4 p_i\right) + \la \wb\left(1-\sum_{i=1}^4 p_i\right)\,. \end{equation} Therefore, uniqueness of solutions of \eqref{PDE_sum_p_i} yields $\sum_{i=1}^4 p_i(x,t)=1$ (see \cite{LNN2013}). 
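As a quick numerical illustration of \eqref{constraint1}, the following explicit-Euler sketch of \eqref{dynamics_pi} in one spatial dimension confirms that the flow preserves $\sum_{i} p_i \equiv 1$ and positivity. The grid, time step, initial perturbation, and the particular sign-changing choices of $\alpha$ and $\beta$ are assumptions made purely for illustration; they play no role in the analysis.

```python
import numpy as np

eta = np.array([1.0, -1.0, -1.0, 1.0])      # eta_1 = eta_4 = -eta_2 = -eta_3 = 1
lam, rho = 1.0, 1.0
nx = 51
dx = 1.0 / (nx - 1)
dt = 1e-4                                    # dt/dx^2 = 0.25 < 1/2 (stable)
x = np.linspace(0.0, 1.0, nx)
alpha = np.tanh(5.0 * (x - 0.5))             # changes sign, as in assumption (A)
beta = np.tanh(4.0 * (0.6 - x))

# Gametic fitness coefficients s_i(x); gamete order AB, Ab, aB, ab
s = 0.5 * np.stack([alpha + beta, alpha - beta, -alpha + beta, -(alpha + beta)])

def laplacian_neumann(u):
    """Second difference with reflecting (zero-flux) boundaries."""
    out = np.empty_like(u)
    out[:, 1:-1] = (u[:, :-2] - 2.0 * u[:, 1:-1] + u[:, 2:]) / dx**2
    out[:, 0] = 2.0 * (u[:, 1] - u[:, 0]) / dx**2
    out[:, -1] = 2.0 * (u[:, -2] - u[:, -1]) / dx**2
    return out

p = np.full((4, nx), 0.25)                   # start near the central point
p += 0.01 * np.stack([x - 0.5, 0.5 - x, x - 0.5, 0.5 - x])

for _ in range(200):
    sbar = (s * p).sum(axis=0)
    S = p * (s - sbar)                       # S_i = p_i (w_i - wbar), additive w
    D = p[0] * p[3] - p[1] * p[2]
    p = p + dt * (laplacian_neumann(p) + lam * S - rho * eta[:, None] * D)
```

Because $\sum_i S_i = 0$ and $\sum_i \et_i = 0$ pointwise, the column sums stay at one up to round-off, mirroring the argument via \eqref{PDE_sum_p_i}.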
We define \begin{equation}\label{X} \mathbf{X}=\biggl\{(u_1,u_2,u_3,u_4) \in C(\bar\Omega; [0,1]^4):\; \sum_{i=1}^4 u_i \equiv 1\biggr\}\footnote{We write $C(\bar\Omega;S)$ for the space of $S$-valued uniformly continuous functions on $\bar\Omega$ equipped with the supremum norm, and $C(\bar\Omega)=C(\bar\Omega;\Reals)$.} \end{equation} and \begin{equation}\label{eq:X0} \mathbf{X}_0=\{ (u_1,u_2,u_3,u_4) \in \mathbf{X}:\; u_1+u_2 \equiv 0 \text{ or } u_3+u_4 \equiv 0 \text{ or } u_1 + u_3 \equiv 0 \text{ or } u_2+u_4 \equiv 0\}\,, \end{equation} where $\mathbf{X}_0$ is the subset of $\mathbf{X}$ that corresponds to fixation (across the whole population) of at least one of the alleles at one of the loci. We define $\Psi$ to be the semiflow generated by \eqref{dynamics_pi} in $\mathbf{X}$, i.e., for initial data $U_0\in \mathbf{X}$ and every $t > 0$ we set $\Psi_t(U_0)=p(\cdot,t)$, where $p(\cdot,t)$ is the solution of \eqref{dynamics_pi} corresponding to $p(\cdot,0)=U_0(\cdot)$. The above considerations show that $\mathbf{X}$ is positively invariant under the flow $\Psi$. It is easily seen that each of the four `edges' in $\mathbf{X}_0$ is invariant. In addition, we have the following property. \begin{lemma}\label{flow_into_interior} If $\rh>0$, then $\Psi$ maps $\mathbf{X}\setminus \mathbf{X}_0$ into the interior of $\mathbf{X}$. \end{lemma} \begin{proof} It is sufficient to consider the flow on the boundary of $\mathbf{X}$. By \eqref{flow_into_interior1}, it is sufficient to assume $p_i(x,0) \equiv 0$ for some $i$. By symmetry, we need to consider only the case $p_1(x,0) \equiv 0$. Because $p(\cdot,0)\notin\mathbf{X}_0$, $p_1(x,0) \equiv 0$ implies the existence of $x_2, x_3\in\Omega$ such that $p_2(x_2,0)>0$ and $p_3(x_3,0)>0$. Then, again by the maximum principle for parabolic equations (and because of Neumann boundary conditions), $p_2(x,t)>0$ and $p_3(x,t)>0$ on $\bar\Omega\times(0,\infty)$. 
Now, we argue by contradiction to show that $p_1(x,t)>0$ on $\bar\Omega\times(0,\infty)$. Suppose that $p_1(x_1,t_1)=0$ for some $x_1\in\bar\Om$ and $t_1>0$. Then $S_1(x_1,p(x_1,t_1))=0$. If $x_1\in\Om$, then $\partial_t p_1(x_1,t_1)\le0$ and $\De p_1(x_1,t_1)\ge0$, which contradicts \begin{equation} \partial_t p_1(x_1,t_1) - \Delta p_1(x_1,t_1) = \rh\, p_2(x_1,t_1)p_3(x_1,t_1) >0\,. \end{equation} This leaves us with the case $x_1\in\partial\Om$ and $p_1(x,t)>0$ for all $(x,t)\in\Omega\times(0,\infty)$, for which the Hopf lemma shows that $\partial_\nu p_1(x_1,t_1)<0$. This contradicts \eqref{dynamics_pi_b}. Therefore, $p_1(x,t)$ is positive on $\bar\Om$ whenever $t>0$. \end{proof} \subsection{Properties of recombination and linkage disequilibrium}\label{sec:reco_LD} The measure $D$ of linkage disequilibrium can be interpreted as the covariance of the random variables indicating presence or absence of allele $A$ ($B$) at locus $\A$ ($\B$). Indeed, from \eqref{constraint1} we deduce \begin{equation}\label{D_cov} D = p_1p_4 - p_2 p_3 = p_1(p_1+p_2+p_3+p_4) - (p_1+p_2)(p_1+p_3) = p_{AB} - p_A p_B\,, \end{equation} where $p_{AB}=p_1$, and \begin{equation}\label{pApB} p_A=p_1+p_2 \text{ and } p_B=p_1+p_3 \end{equation} denote the frequencies of alleles $A$ and $B$, respectively. In particular, recombination erodes linkage disequilibrium because, in the absence of diffusion and selection, $\partial_t D = \et_i \partial_t p_i = -\rh D$ for every $i\in I$, as we easily derive from \eqref{D_cov} and \eqref{dynamics_pi_a}. Recombination also generates missing gametes. For instance, if $p_1(x,0)=0$, but $p_2(x,0)>0$ and $p_3(x,0)>0$, then recombination will generate gamete $AB$ immediately, i.e., $p_1(x,t)>0$ for $t>0$ (see also Lemma \ref{flow_into_interior}). Consult \cite{Geiringer1944} and \cite{LewontinKojima1960} for important early treatments of linkage disequilibrium, and \cite{Slatkin2008} for its applications in modern genetics.
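Both identities above, the covariance form \eqref{D_cov} and the decay $\partial_t D = -\rh D$ under recombination alone, are easy to confirm numerically; the gamete frequencies and $\rh$ below are an arbitrary illustrative choice with $p_1+p_2+p_3+p_4=1$.

```python
# Numerical check of D = p_AB - p_A p_B and of dD/dt = -rho * D when only
# recombination acts (no diffusion, no selection).
p1, p2, p3, p4 = 0.4, 0.1, 0.2, 0.3
eta = (1.0, -1.0, -1.0, 1.0)          # eta_1 = eta_4 = -eta_2 = -eta_3 = 1
rho = 0.7

D = p1 * p4 - p2 * p3
p_A, p_B = p1 + p2, p1 + p3
cov_form = p1 - p_A * p_B             # p_AB - p_A p_B with p_AB = p1

# Recombination alone: dp_i/dt = -eta_i * rho * D; the product rule on
# D = p1 p4 - p2 p3 then collapses to dD/dt = -rho * D.
dp = [-e * rho * D for e in eta]
dD = dp[0] * p4 + p1 * dp[3] - dp[1] * p3 - p2 * dp[2]
```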
If recombination is absent, i.e., $\rh=0$, then alleles on the same gamete are never separated and therefore each gamete $i\in I$ may be regarded as an allele at a single locus. Thus, the system \eqref{dynamics_pi} reduces to a one-locus system with four alleles. This case is treated in Section~\ref{sec:no_rec}. If recombination is strong relative to selection and diffusion, then rapid decay of linkage disequilibrium $D$ to values close to zero will occur. In the limiting case of $D\equiv0$, i.e., vanishing covariance, the loci become independent. In Section \ref{sec:strong_reco}, we treat the case $\rh\gg1$ as a perturbation of that of two independent loci. \subsection{Assumptions on selection}\label{sec:assumptions_selection} Concerning selection, which arises as a consequence of a spatially heterogeneous environment, we assume that both loci are subject to so-called additive selection, i.e., we ignore dominance and epistasis. Therefore, we can assign the Malthusian parameters $\tfrac12\alpha(x)$ and $-\tfrac12\alpha(x)$ to the alleles $A$ and $a$, and $\tfrac12\be(x)$ and $-\tfrac12\be(x)$ to $B$ and $b$, where $\alpha(x)$ and $\be(x)$ are real-valued functions on $\bar\Om$. They reflect the influence of environmental heterogeneity on the fitnesses of the alleles. Then the fitness coefficients of the gametes $AB$, $Ab$, $aB$, $ab$ are \begin{alignat}{2}\label{s1234} s_1(x) &=\tfrac12 [\alpha(x)+\be(x)] \,, \quad& s_2(x) &=\tfrac12 [\alpha(x)-\be(x)] \,, \notag \\ s_3(x) &=\tfrac12 [-\alpha(x)+\be(x)] \,, \quad& s_4(x) &=-\tfrac12 [\alpha(x)+\be(x)] \,, \end{alignat} respectively, and the genotypic fitnesses are $w_{ij}(x)=s_i(x)+s_j(x)$.
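These additive fitnesses can be checked numerically at a single spatial point: with $w_{ij}=s_i+s_j$, the general selection terms $S_i = p_i(w_i-\wb)$ of \eqref{S_i2} reduce to the expanded expressions derived below, and they sum to zero, so selection preserves the simplex constraint. The values of $\alpha$, $\beta$, and $p$ are arbitrary illustrative choices.

```python
import numpy as np

alpha, beta = 0.3, -0.5
p = np.array([0.35, 0.15, 0.30, 0.20])          # gametes AB, Ab, aB, ab

# Gametic coefficients s_1..s_4 and additive genotypic fitnesses w_ij
s = 0.5 * np.array([alpha + beta, alpha - beta, -alpha + beta, -(alpha + beta)])
w = s[:, None] + s[None, :]                      # w_ij = s_i + s_j
w_marg = w @ p                                   # marginal fitnesses w_i
w_bar = p @ w_marg                               # mean fitness

S_general = p * (w_marg - w_bar)                 # S_i = p_i (w_i - wbar)

S_explicit = np.array([                          # expanded form
    p[0] * ( alpha * (p[2] + p[3]) + beta * (p[1] + p[3])),
    p[1] * ( alpha * (p[2] + p[3]) - beta * (p[0] + p[2])),
    p[2] * (-alpha * (p[0] + p[1]) + beta * (p[1] + p[3])),
    p[3] * (-alpha * (p[0] + p[1]) - beta * (p[0] + p[2])),
])
```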
Using $\sum_i p_i(x,t)=1$, straightforward calculations yield \begin{subequations}\label{S_i} \begin{align} S_1(x,p) &= p_1[\alpha(x)(p_3+p_4)+\be(x)(p_2+p_4)]\,, \\ S_2(x,p) &= p_2[\alpha(x)(p_3+p_4)-\be(x)(p_1+p_3)]\,, \\ S_3(x,p) &= p_3[-\alpha(x)(p_1+p_2)+\be(x)(p_2+p_4)]\,, \\ S_4(x,p) &= p_4[-\alpha(x)(p_1+p_2)-\be(x)(p_1+p_3)]\,. \end{align} \end{subequations} Throughout this paper, we will study \eqref{dynamics_pi}, or the equivalent \eqref{dynamics_dsr_pi}, by assuming \eqref{S_i}. In addition, the following assumption will play an important role: \noindent {\bf (A)} The functions $\alpha(x)$ and $\beta(x)$ change sign in $\Omega$ and are of class $C^\ga(\bar\Omega)$ for some $\ga\in(0,1)$. \section{Preliminaries}\label{sec:prelim} \subsection{Eigenvalue problems with indefinite weight}\label{sec:ev-problems} The linearized problem of \eqref{dynamics_pi} at an equilibrium $\hat{p}=(\hat{p}_1, \hat{p}_2, \hat{p}_3, \hat{p}_4)^T$, $\hat{p}_i=\hat{p}_i(x)$, reads \begin{subequations}\label{1.4} \begin{alignat}{2} &\De \Phi+J|_{\hat{p}} \Phi +\mu \Phi=0 &\quad&\text{in } \Omega\,, \label{1.4a}\\ &\partial_\nu\Phi=0 &\quad&\text{on } \partial\Omega\,, \label{1.4b} \end{alignat} \end{subequations} where $\Phi=(\phi_1,\phi_2,\phi_3,\phi_4)^T$, $\phi_i=\phi_i(x)$, $\sum_{i=1}^{4}\phi_i=0$, and \begin{align} J= & \la \begin{pmatrix} 0 & \beta p_1 & \alpha p_1 & (\alpha+\beta) p_1\\ -\beta p_2 & 0 & (\alpha-\beta) p_2 & \alpha p_2\\ -\alpha p_3 & (\beta-\alpha)p_3 & 0 & \beta p_3\\ -(\alpha+\beta) p_4 & -\alpha p_4 & -\beta p_4 & 0 \end{pmatrix} \notag \\ &+ \rho \begin{pmatrix}-p_4 & p_3 & p_2 & -p_1 \\ p_4 & -p_3 & -p_2 & p_1 \\ p_4 & -p_3 & -p_2 & p_1 \\ -p_4 & p_3 & p_2 & -p_1 \end{pmatrix} \nonumber\\ & + \la~\mbox{diag} \{\alpha (p_3+p_4)+\beta (p_2+p_4),~\alpha (p_3+p_4)-\beta (p_1+p_3), \nonumber\\ & \qquad -\alpha (p_1+p_2)+\beta (p_2+p_4),~-\alpha (p_1+p_2)-\beta (p_1+p_3) \}\,. 
\label{Jacob} \end{align} Sometimes it is more convenient to study \eqref{1.4} with three linearly independent equations using the relation $\sum_{i=1}^{4}\phi_i=0$. For any function $u(x)\in C(\bar{\Omega})$, we define its spatial average \begin{equation}\label{1.9} \bar{u}=\frac{1}{|\Omega|}\int_{\Omega} u(x)~dx\,. \end{equation} The following eigenvalue problem will be helpful: \begin{subequations}\label{1.6} \begin{alignat}{2} &\De \varphi+\tilde{\la} h(x)\varphi=0 &\quad&\text{in } \Om\,, \label{1.6a}\\ &\varphi>0 &\quad&\text{in } \Om\,, \label{1.6b}\\ &\partial_\nu\varphi = 0 &\quad&\text{on } \partial\Om\,, \label{1.6c} \end{alignat} \end{subequations} where $\Omega$ and $\nu$ are as in \eqref{dynamics_pi} and $h(x)\in C(\bar{\Omega})$. Brown and Lin \cite{BL80} showed that \eqref{1.6} has a positive eigenvalue $\tilde{\la}$ if and only if $h(x)$ changes sign and $\bar{h}<0$. In addition, the positive eigenvalue (if it exists) is unique, and we denote it by $\la^*(h)$. For each fixed $\tilde{\la}>0$, we consider the eigenvalue problem \begin{subequations}\label{1.7} \begin{alignat}{2} &\De \psi+\tilde{\la} h(x)\psi+\mu\psi=0 &\quad&\text{in } \Om \,, \label{1.7a}\\ &\partial_\nu\psi = 0 &\quad&\text{on } \partial\Omega\,, \label{1.7b} \end{alignat} \end{subequations} where $\Omega$ and $\nu$ are as in \eqref{dynamics_pi} and $h(x)\in C(\bar{\Omega})$. The following results are well known (\cite{S83}, \cite{LNN2013}). \begin{lemma}\label{lm1.2} Suppose that $h(x)\in C(\bar{\Omega})$ is a nonconstant function and positive somewhere. Then the smallest eigenvalue $\mu_1(\tilde{\la})$ of \eqref{1.7} is strictly concave down in $\tilde{\la}$, \begin{equation}\label{mu1} \lim_{\tilde{\la}\to \infty}\mu_1(\tilde{\la})=-\infty\,, \end{equation} and has the following properties. \noindent{\rm (a)} If $\bar{h}\ge 0$, then $\mu_1(\tilde{\la})<0$ and $\mu_1(\tilde{\la})$ is strictly decreasing for $\tilde{\la}>0$. \noindent{\rm (b)} Assume that $\bar{h}<0$. 
Then \begin{equation}\label{1.8} \mu_1(\tilde{\la})\begin{cases} <0 &\mbox{if} \;\; \tilde{\la}>\la^*(h)\,,\\ =0 &\mbox{if} \;\; \tilde{\la}=\la^*(h)\,,\\ >0 &\mbox{if} \;\; 0<\tilde{\la}<\la^*(h)\,, \end{cases} \end{equation} and $\mu_1(\tilde{\la})$ is strictly decreasing for $\tilde{\la}>\la^*(h)$. \end{lemma} \begin{remark}\label{rm1.3} {\rm Because the eigenfunction corresponding to $\mu_1(\tilde{\la})$ can be chosen to be positive on $\Omega$, integration of \eqref{1.7a} over $\Omega$ shows that if $h(x)\le 0$ and $h(x)\not\equiv 0$, then $\mu_1(\tilde{\la})>0$ for every $\tilde{\la}>0$.} \end{remark} For a nonconstant function $h(x)\in C(\bar{\Omega})$, it is convenient to define \begin{equation}\label{lambda_0} \la_{0}(h) = \begin{cases} \la^*(h) &\mbox{if $h(x)$ changes sign and} \; \bar h <0\,,\\ 0 &\mbox{if} \; \bar h \ge 0\,,\\ \infty &\mbox{if} \; h(x)\le 0\mbox{~in $\bar\Omega$}\,. \end{cases} \end{equation} Then Lemma \ref{lm1.2} and Remark \ref{rm1.3} yield \begin{lemma}\label{rm1.4} Suppose that $h(x)$ is a nonconstant continuous function on $ \bar{\Omega}$. If $\tilde{\la}>\la_{0}(h)$, then $\mu_1(\tilde{\la})<0$ and $\mu_1(\tilde{\la})$ is strictly decreasing in $\tilde{\la}$. If $0<\tilde{\la}<\la_{0}(h)$, then $\mu_1(\tilde{\la})>0$. \end{lemma} \subsection{One-locus theory}\label{sec:SLClines} The diallelic one-locus equation with isotropic, homogeneous migration, and selection without dominance reads \begin{subequations} \begin{align}\label{eq:theta} &\partial_t \theta = \Delta \theta+\la h(x) \theta (1-\theta) &&\text{for } (x,t) \in \Omega\times(0,\infty)\,, \\ &\partial_\nu \theta =0 &&\text{for }(x,t) \in \partial\Omega\times(0,\infty)\,, \\ &\theta(x,0) = \theta_0(x) &&\text{for }x \in \Omega \text{ and }\theta_0\in C^0(\bar\Omega;[0,1])\setminus\{0,1\}\,. 
\end{align} \end{subequations} Recalling that $\la^\ast(h)$ designates the unique positive eigenvalue of \eqref{1.6}, for a sign-changing $h(x)$ we introduce \begin{equation}\label{lambda_h} \la_{h}:= \begin{cases} \la^*(h) &\mbox{if } \; \bar h <0\,,\\ 0 &\mbox{if } \; \bar h =0\,,\\ \la^*(-h) &\mbox{if } \; \bar h >0\,. \end{cases} \end{equation} \begin{theorem}[{\cite[Lemma\,10.1.5]{Henry1981}}, {\cite[Theorem\,2.1]{LN2002}}]\label{thm:singlelocus} Let $h(x)$ be a sign-changing function of class $C^\gamma(\bar\Omega)$ for some $0 < \gamma < 1$. Then for every $\la>0$, the problem \eqref{eq:theta} has a unique stable equilibrium solution $\theta_h$, and every solution $\theta(x,t)$ converges to $\theta_h(x)$ uniformly in $x$ as $t \to \infty$. More precisely: \noindent {\rm (a)} Suppose that $\bar h <0$. If $0 < \la \leq \la_h$, then $\theta_h \equiv 0$ in $\bar\Omega$; if $\la > \la_h$, then $0 <\theta_h < 1$ in $\bar\Omega$. \noindent {\rm (b)} Suppose that $\bar h >0$. If $0 < \la \leq \la_h$, then $\theta_h \equiv 1$ in $\bar\Omega$; if $\la > \la_h$, then $0 <\theta_h < 1$ in $\bar\Omega$. \noindent {\rm (c)} Suppose that $\bar h = 0$. Then for every $\la >0$, $0 < \theta_h < 1$ in $\bar\Omega$. \noindent In each case, $\theta_h$ is linearly stable whenever $\la\neq\la_h$. The proof of Theorem 2.1 in \cite{LN2002} shows that convergence occurs in $C^2(\bar\Om)$. \end{theorem} For convenience, we call the constant equilibria $\theta(x)\equiv 0$ and $\theta(x)\equiv 1$ in $\bar\Omega$ the {\it trivial equilibria}, and we call $\theta_h$ the {\it global attractor} of \eqref{eq:theta}. If $0<\theta_h<1$, then we call it a (one-locus) {\it cline}. \section{Boundary equilibria}\label{Section_boundary} \subsection{Existence}\label{sec:existence_boundary} The four monomorphic equilibria $M_i$, defined by $p_i\equiv1$, always exist. We also call them the vertices or vertex equilibria.
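Indeed, at every vertex all products $p_kp_l$ with $k\neq l$ vanish, so that, by \eqref{S_i} and the definition of $D$,
\begin{equation*}
S_i(x,M_j)=0 \quad\text{for all } i,j\in I \qquad\text{and}\qquad D(M_j)=0\,,
\end{equation*}
whence each $M_j$ is an equilibrium of \eqref{dynamics_pi} for every $\la>0$ and every $\rh\ge0$.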
In addition, \eqref{dynamics_pi} may have up to six equilibria on the edges connecting any pair of vertices. We define \begin{equation}\label{1.20} h_{ij}(x)=s_i(x)-s_j(x)\,. \end{equation} Let $\hat{p}^{(ij)} = \hat{p}^{(ij)}(x)$, $i<j$, be the edge equilibrium with gametes $i$ and $j$ present, i.e., \begin{equation}\label{2.4} \hat{p}^{(ij)}_k = \begin{cases} \theta_{ij} &\mbox{if} \;\; k=i\,,\\ 1-\theta_{ij} &\mbox{if} \;\; k=j\,,\\ 0 &\mbox{if} \;\; k\neq i,j\,, \end{cases} \end{equation} where $\theta_{ij}=\theta_{ij}(x)$ satisfies \begin{subequations}\label{2.5} \begin{alignat}{2} &\De \theta_{ij}+\la h_{ij}(x)\theta_{ij}(1-\theta_{ij})=0 &\quad&\text{in } \Om\,, \label{2.5a}\\ &0<\theta_{ij}<1 &\quad&\text{in } \Om\,, \label{2.5b}\\ &\partial_\nu\theta_{ij} = 0 &\quad&\text{on } \partial\Om\,. \label{2.5c} \end{alignat} \end{subequations} Theorem~\ref{thm:singlelocus} and the above-cited result of Brown and Lin \cite{BL80} for \eqref{1.6} inform us that \eqref{2.5} has a solution if and only if \begin{subequations}\label{2.6} \begin{equation}\label{2.6a} h_{ij}(x)~\mbox{changes sign in $\Omega$} \end{equation} and \begin{equation}\label{2.6b} \la >\la_{ij}:=\la_{h_{ij}}\,, \end{equation} \end{subequations} where $\la_{h_{ij}}$ is given by \eqref{lambda_h} with $h=h_{ij}$. Moreover, if a solution of \eqref{2.5} exists, it is unique and linearly stable. If $\rh=0$, then all six edge equilibria may exist. If $\rh>0$, then only $\hat p^{(12)}$, $\hat p^{(13)}$, $\hat p^{(34)}$, and $\hat p^{(24)}$ can exist (Lemma \ref{flow_into_interior}). These four edge equilibria are independent of $\rh$ because $D\equiv0$ at each of them; see also Section \ref{sec:single-locus_poly}. The biological reason for the non-existence of $\hat p^{(14)}$ and $\hat p^{(23)}$ if $\rh>0$ is that recombination generates the two other gametes immediately (cf.\ Section \ref{sec:reco_LD}). 
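The dichotomy between these two groups of edges can be seen directly from $D$. On the edge connecting $M_1$ and $M_2$, i.e., for $p=(\theta,1-\theta,0,0)^T$, we have $D=p_1p_4-p_2p_3=0$, so the recombination term vanishes identically and the dynamics on this edge do not depend on $\rh$; the same holds on the edges $\{1,3\}$, $\{2,4\}$, and $\{3,4\}$. By contrast, on the edge connecting $M_1$ and $M_4$,
\begin{equation*}
p=(\theta,0,0,1-\theta)^T \quad\text{yields}\quad D=\theta(1-\theta)>0 \quad\text{for } 0<\theta<1\,,
\end{equation*}
so that for $\rh>0$ recombination immediately generates the gametes $Ab$ and $aB$ and this edge is not invariant; analogously, $D=-\theta(1-\theta)<0$ on the edge $\{2,3\}$.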
\subsection{Stability of the monomorphic equilibria} Here we show that generically at most one monomorphic equilibrium can be linearly stable. In Theorem~\ref{thm:monos_stability}, we determine the range of parameters for which it is stable. For sufficiently strong migration (relative to selection and recombination), we establish global asymptotic stability in Theorem~\ref{thm:M1_globallystable}. We write \begin{equation}\label{2.17} I_i=I\setminus \{i\}\,, \end{equation} and define, for each fixed $j\in I$, \begin{equation}\label{3.2} \tilde{j}=5-j\,,\quad \tilde{I}_j=I\setminus\{j,\tilde{j}\}\,, \end{equation} i.e., $\tilde{I}_1=\tilde{I}_4=\{2,3\}$ and $\tilde{I}_2=\tilde{I}_3=\{1,4\}$. From \eqref{s1234} we observe that \begin{equation}\label{s14_s23} s_1(x)= -s_4(x) \quad\text{and}\quad s_2(x)= -s_3(x) \end{equation} for every $x\in \Omega$. Therefore, there are only two possibilities: \begin{subequations}\label{3.8} \begin{gather} \mbox{there exists $i\in I$ such that}\; \bar{s}_{\tilde{i}}<\bar{s}_{k}<\bar{s}_{i}\; \mbox{for each}\; k\in\tilde{I}_i\,; \label{3.8a}\\ \mbox{there exist $i\in I$ and $j\in \tilde{I_i}$ such that}\; \bar{s}_{i}=\bar{s}_{j}=-\bar{s}_{\tilde{i}}=-\bar{s}_{\tilde{j}}\,. 
\label{3.8b} \end{gather} \end{subequations} We note that \eqref{3.8a} is the generic case, which is equivalent to \begin{equation}\label{bar_si_max} \mbox{there exists an $i\in I$ such that $\bar{s}_i>\max_{j\in I_i}\{\bar{s}_j\}$\,.} \end{equation} To study the stability of the vertex equilibrium $M_j$, we have to investigate the eigenvalue problem \begin{subequations}\label{3.3} \begin{alignat}{2} &\De \phi_i+\la h_{ij}(x)\phi_i +\rho \phi_{\tilde{j}}+\mu \phi_i=0 &\quad&\text{in } \Omega\,, \label{3.3a}\\ &\De \phi_{\tilde{j}}+\la h_{\tilde{j}j}(x)\phi_{\tilde{j}} -\rho \phi_{\tilde{j}}+\mu \phi_{\tilde{j}}=0 &\quad&\text{in } \Omega\,,\label{3.3b} \\ &\partial_{\nu} \phi_i=\partial_{\nu}\phi_{\tilde{j}}=0 &\quad&\text{on }\partial\Omega\,,\label{3.3c} \end{alignat} \end{subequations} where $i\in \tilde{I}_j$ (cf.\ \cite[(2.23)]{LN2006} and \eqref{Jacob}). For each $k\in I$, we let $E_{k}$ be the set of all eigenvalues of the single-equation eigenvalue problem \begin{subequations}\label{3.4} \begin{alignat}{2} &\De \phi^{(k)}+\la h_{kj}(x)\phi^{(k)} +\mu^{(k)} \phi^{(k)}=0 &\quad&\text{in } \Omega\,, \label{3.4a} \\ &\partial_{\nu} \phi^{(k)}=0 &\quad&\text{on } \partial\Omega\,. \label{3.4b} \end{alignat} \end{subequations} Before formulating and proving our main results, we establish two lemmas. \begin{lemma}\label{lm3.1} For every $\rho\ge 0$ and every $j\in I$ fixed, the set of eigenvalues of system \eqref{3.3} consists of $\bigcup_{i\in \tilde{I}_j} E_i\, \bigcup \,\{\mu^{(\tilde{j})}+\rho: \mu^{(\tilde{j})}\in E_{\tilde{j}}\}$. \end{lemma} \begin{proof} First, we observe that for every $i\in \tilde{I}_j$, every $\mu^{(i)}\in E_i$ with an eigenfunction $\phi^{(i)}$ is also an eigenvalue of \eqref{3.3} and the corresponding eigenfunction has components $\phi_i=\phi^{(i)}$ and $\phi_k\equiv 0$ for $k\neq i$. Second, for every $\mu^{(\tilde{j})}\in E_{\tilde{j}}$ with an eigenfunction $\phi^{(\tilde{j})}$, there are two cases. 
If $\mu^{(\tilde{j})}+\rho\in E_i$ for some $i\in \tilde{I}_j$, then we already know it is an eigenvalue of \eqref{3.3} from the above discussion. If $\mu^{(\tilde{j})}+\rho\notin E_i$ for every $i\in \tilde{I}_j$, then the operator \begin{equation}\label{3.5} L_i:=\left\{\De +\la h_{ij}(x)+\mu^{(\tilde{j})}+\rho\right\} \end{equation} is invertible for every $i\in \tilde{I}_j$, whence $\mu^{(\tilde{j})}+\rho$ is an eigenvalue of \eqref{3.3} whose eigenfunction has components \begin{subequations}\label{3.6} \begin{alignat}{2} &\phi_i = L_i^{-1}[-\rho \phi^{(\tilde{j})}] &\quad&\text{for } i\in \tilde{I}_j\,, \label{3.6a}\\ &\phi_{\tilde{j}}=\phi^{(\tilde{j})}\,.&& \label{3.6b} \end{alignat} \end{subequations} Next, we show that if $\mu$ is an eigenvalue of \eqref{3.3}, then either $\mu\in E_{i}$ for some $i\in \tilde{I}_j$ or $\mu=\mu^{(\tilde{j})}+\rho$ for some $\mu^{(\tilde{j})}\in E_{\tilde{j}}$. We denote the components of the eigenfunction of $\mu$ by $\phi_{i}$ for $i\in I_j$. There are two possibilities. If $\phi_{\tilde{j}}\equiv 0$, then there exists at least one $\phi_{i}\not\equiv 0$, $i\in \tilde{I}_j$, whence in view of \eqref{3.3a} we conclude that $\mu\in E_i$ and the corresponding eigenfunction can be taken as $\phi^{(i)}=\phi_{i}$. If $\phi_{\tilde{j}}\not\equiv 0$, then from \eqref{3.3b} we see that $\mu=\mu^{(\tilde{j})}+\rho$ for some $\mu^{(\tilde{j})}\in E_{\tilde{j}}$ and the corresponding eigenfunction can be chosen as $\phi^{(\tilde{j})}=\phi_{\tilde{j}}$. This completes the proof of Lemma \ref{lm3.1}. \end{proof} For a fixed $j\in I$, let $\mu_1^{(\tilde{j})}(\la)$ be the smallest eigenvalue of \eqref{3.4} with $k=\tilde{j}$. From Lemma \ref{rm1.4} and \eqref{lambda_0}, we see that if $0\le \la_0(h_{\tilde{j}j})<\infty$, then for $\la>\la_0(h_{\tilde{j}j})$ we have $\mu_1^{(\tilde{j})}(\la)<0$ and $\mu_1^{(\tilde{j})}(\la)$ is strictly decreasing. 
Thus, for each $\rho>0$, there exists a unique $\la$, denoted by $\la_0(h_{\tilde{j}j},\rho)$, such that $\la>\la_0(h_{\tilde{j}j})$ and $\mu_1^{(\tilde{j})}(\la)+\rho=0$. If $\la_0(h_{\tilde{j}j})=\infty$, we define $\la_0(h_{\tilde{j}j},\rho)=\infty$. If $\rho=0$, we set $\la_0(h_{\tilde{j}j},0)=\la_0(h_{\tilde{j}j})$. Then, for $\rho\ge 0$, we have $\mu_1^{(\tilde{j})}(\la)+\rho<0$ if $\la>\la_0(h_{\tilde{j}j},\rho)$ and $\mu_1^{(\tilde{j})}(\la)+\rho>0$ if $0<\la<\la_0(h_{\tilde{j}j},\rho)$. Now, for every $j\in I$ and $\rho\ge 0$, we define \begin{subequations}\label{3.7} \begin{align} \la_j^*(\rho) &=\min_{i\in \tilde{I}_j}\{\la_0(h_{ij}),\la_0(h_{\tilde{j}j},\rho)\}\,, \label{3.7a}\\ \mu_j^* &=\min_{i\in \tilde{I}_j}\{\mu_1^{(i)}(\la),\mu_1^{(\tilde{j})}(\la)+\rho\}\,. \label{3.7b} \end{align} \end{subequations} The above discussion and Lemma \ref{rm1.4} inform us that $\mu_j^*>0$ if $0<\la<\la_j^*(\rho)$ and $\mu_j^*<0$ if $\la>\la_j^*(\rho)$. Since Lemma \ref{lm3.1} reveals that $M_j$ is stable if $\mu_j^*>0$ and unstable if $\mu_j^*<0$, we have proved the following. \begin{lemma}\label{lm3.2} Let $\rh\ge 0$. \noindent {\rm (a)} If $\la_j^*(\rho)=0$, then $M_j$ is linearly unstable for every $\la>0$. \noindent {\rm (b)} If $0<\la_j^*(\rho)<\infty$, then $M_j$ is linearly stable for $0<\la<\la_j^*(\rho)$ and linearly unstable for $\la>\la_j^*(\rho)$. \noindent {\rm (c)} If $\la_j^*(\rho)=\infty$, then $M_j$ is linearly stable for every $\la>0$. \end{lemma} Notice that if $\rho=0$, then the conclusions in Lemma \ref{lm3.2} are established in \cite[p.\,637]{LN2006}. \begin{remark}\label{rm3.3}\rm If $h_{\tilde{j}j}(x)\equiv 0$ and $\rho>0$, then from \eqref{3.4a} with $k=\tilde{j}$ we see that $\mu_1^{(\tilde{j})}(\la)=0$ for every $\la>0$ and thus $\mu_1^{(\tilde{j})}(\la)+\rho>0$ for every $\la>0$. Therefore, when $h_{\tilde{j}j}(x)\equiv 0$ and $\rho>0$, we set $\la_0(h_{\tilde{j}j},\rho)=\infty$ and the conclusions in Lemma \ref{lm3.2} still hold. 
\end{remark} \begin{theorem}\label{thm:monos_stability} Suppose (A) and that \eqref{bar_si_max} holds for some $i\in I$. Then we have for every $\rho\ge 0$: \noindent {\rm (a)} Every $M_j$ other than $M_i$ is linearly unstable. \noindent {\rm (b)} Let $\la_i^*(\rho)$ be given by \eqref{3.7a}. Then $0<\la_i^*(\rho)<\infty$ and $M_i$ is linearly stable if $0<\la<\la_i^*(\rho)$; $M_i$ is linearly unstable if $\la>\la_i^*(\rho)$. \end{theorem} If $\rho=0$, Theorem \ref{thm:monos_stability} follows directly from Theorem 1.5 in \cite{LN2006}. Its proof inspired the following proof. \begin{proof} (a) For each $j\neq i$, there are two cases. If $j\neq \tilde{i}$, i.e, $i\neq \tilde{j}$, by \eqref{bar_si_max} we have $\bar{s}_i>\bar{s}_j$, whence we obtain $\la_0(h_{ij})=0$ from \eqref{lambda_0} and \eqref{1.20}. Therefore, \eqref{3.7a} yields $\la_{j}^*(\rho)=0$. If $j=\tilde{i}$, by \eqref{3.8a} we have $\bar{s}_k>\bar{s}_j$ and hence $\la_0(h_{kj})=0$ for $k\in \tilde{I}_i=\tilde{I}_j$. Therefore, \eqref{3.7a} implies again that $\la_{j}^*(\rho)=0$. Now we deduce from Lemma \ref{lm3.2}(a) that $M_j$ is unstable for every $\la>0$, which proves part (a). (b) In view of \eqref{bar_si_max} and \eqref{lambda_0}, we have $\la_0(h_{ki})>0$ for every $k\in\tilde{I}_i$. From \eqref{s1234} we observe that \begin{equation}\label{3.9} s_m(x)-s_{l}(x)\in \{\pm\alpha(x),\pm\beta(x)\}\quad \mbox{for every } l\in I \mbox{ and every } m\in\tilde{I}_l\,. \end{equation} Since both $\alpha(x)$ and $\beta(x)$ change sign and $k\in\tilde{I}_i$, it follows from \eqref{3.9} and \eqref{lambda_0} that $\la_0(h_{ki})<\infty$. On account of the definition of $\la_0(h_{\tilde{i}i},\rho)$ we have $\la_0(h_{\tilde{i}i},\rho)>0$. Then \eqref{3.7a} implies that $0<\la_{i}^*(\rho)<\infty$ and part (b) follows immediately from Lemma \ref{lm3.2}(b). 
\end{proof} \begin{remark}\label{rm3.5}\rm Because $\mu_1^{(\tilde{j})}(\la)$ is strictly decreasing for $\la>\la_0(h_{\tilde{j}j})$ by Lemma \ref{lm1.2}(b), the critical value $\la_0(h_{\tilde{j}j},\rho)$ is strictly increasing in $\rho$ by its definition. Therefore, \eqref{3.7a} implies that $\la_j^*(\rho)$ is nondecreasing in $\rho$. Thus, Theorem \ref{thm:monos_stability}(b) shows that increasing the recombination rate facilitates stability of the monomorphic equilibrium with the highest spatially averaged fitness. \end{remark} \begin{theorem}\label{thm:M1_globallystable} Suppose (A) and that \eqref{bar_si_max} holds for some $i\in I$. Then, for every fixed $r\ge 0$ and $s>0$, there exists $d_0=d_0(r,s)\gg1$ such that $M_i$ is globally asymptotically stable for \eqref{dynamics_dsr_pi} if $d>d_0$. \end{theorem} \begin{proof} The proof is based on Theorem 2.1 in \cite{LN2006}. We set \begin{equation}\label{3.10} T_i(x,p)=s S_i(x,p)-\eta_i r D(p)\,,\quad i\in I\,. \end{equation} Then the spatially averaged system (2.3) of \cite{LN2006} becomes \begin{subequations}\label{3.11} \begin{equation} \frac{dq^*_i}{d\tau}=s \bar{S}_i(q^*)-\eta_i r D(q^*), \label{3.11a} \end{equation} \begin{equation} q^*(0)\in \mbox{int}\, \Delta_4\,, \label{3.11b} \end{equation} \end{subequations} where \begin{equation}\label{3.13} \Delta_4:=\{p\in \Reals^4: p_i\ge 0~\mbox{for every}~i\in I, ~\sum_{j=1}^{4}p_j=1\}\,, \end{equation} \begin{subequations}\label{3.12} \begin{align} \bar{S}_1(q^*) &= q^*_1[\bar{\alpha}(q^*_3+q^*_4)+\bar{\beta}(q^*_2+q^*_4)]\,, \label{3.12a} \\ \bar{S}_2(q^*) &= q^*_2[\bar{\alpha}(q^*_3+q^*_4)-\bar{\beta}(q^*_1+q^*_3)]\,, \label{3.12b} \\ \bar{S}_3(q^*) &= q^*_3[-\bar{\alpha}(q^*_1+q^*_2)+\bar{\beta}(q^*_2+q^*_4)]\,,\label{3.12c}\\ \bar{S}_4(q^*) &= q^*_4[-\bar{\alpha}(q^*_1+q^*_2)-\bar{\beta}(q^*_1+q^*_3)]\,. 
\label{3.12d} \end{align} \end{subequations} The system of ODEs \eqref{3.11} describes the dynamics in a simple two-locus model without migration, epistasis, or dominance. Therefore, mean fitness is a global Lyapunov function \cite{Ewens1969}. Hence, every solution of \eqref{3.11} converges to an equilibrium. In addition, every equilibrium $q^*$ of \eqref{3.11} is in linkage equilibrium, i.e., it satisfies $D(q^*)=0$ (\cite{Lyubich92}, \cite{NHB1999}). We are informed by \eqref{s1234}, \eqref{s14_s23}, and \eqref{bar_si_max} that $\bar{\alpha}\neq 0$ and $\bar{\beta}\neq 0$, whence it is clear from \eqref{3.12} that the only solutions to $\bar S_j(q^*)=0$ for every $j\in I$ are the monomorphic equilibria $M_j$. Simple analysis of the linearized problem of \eqref{3.11} at each $M_j$ shows that if \eqref{bar_si_max} holds for some $i\in I$, then $M_i$ is the only linearly stable monomorphic equilibrium. The other monomorphic equilibria are all unstable; they may have stable manifolds, but the stable manifolds are either invariant edges corresponding to a marginal one-locus system or connect to the vertices from the exterior of the state space $\Delta_4$. Therefore, every solution of \eqref{3.11} converges to $M_i$. Thus, we have shown that (A4) in \cite{LN2006} holds with $\hat{q}^*=M_i$. Therefore, Theorem 2.1 in \cite{LN2006} applies and, together with statement (b), yields the global asymptotic stability of $M_i$ with respect to the full system \eqref{dynamics_dsr_pi} provided $d\gg1$. \end{proof} \begin{remark}\label{rem:4.7}\rm Because the critical value $d_0$ originating from Theorem 2.1 in \cite{LN2006} may depend on $r$ and $s$, we cannot conclude that for every fixed $\rh\ge0$, there exists a $\la_0 \ll 1$ such that $M_i$ is globally asymptotically stable for \eqref{dynamics_pi} if $\la < \la_0$. However, we conjecture that it is true. \end{remark} In the nongeneric case \eqref{3.8b}, we obtain the following result. 
\begin{proposition}\label{th3.6} Suppose that (A) and \eqref{3.8b} hold. Then, for every $\rho\ge 0$, all monomorphic equilibria are linearly unstable. \end{proposition} \begin{proof} In view of \eqref{3.8b}, \eqref{3.9}, and \eqref{lambda_0}, for the $i,j$ in \eqref{3.8b}, we have \begin{equation} \la_0(h_{ji})=\la_0(h_{ij})=\la_0(h_{\tilde{j}\tilde{i}})=\la_0(h_{\tilde{i}\tilde{j}})=0\,. \end{equation} We conclude from \eqref{3.7a} that $\la_k^*(\rho)=0$ for every $\rho\ge 0$ and every $k\in I$. From Lemma \ref{lm3.2}(a), we infer that for every $\rho\ge 0$ each $M_k$ is unstable for every $\la>0$. \end{proof} \subsection{Equilibria with one polymorphic locus}\label{sec:single-locus_poly} From \eqref{s14_s23} we obtain $h_{12}=h_{34}$ and $h_{13}=h_{24}$. Therefore, the edge equilibria $\hat{p}^{(12)}$ and $\hat{p}^{(34)}$, as well as $\hat{p}^{(13)}$ and $\hat{p}^{(24)}$, exist only pairwise, i.e., if one member of a pair exists, then so does the other. We call them single-locus polymorphisms, or single-locus clines, because at each of these equilibria one locus maintains both alleles at positive frequency, whereas at the other locus one allele is fixed. For instance, $\hat{p}^{(12)}(x)$ describes a cline at locus $\B$ with allele $A$ fixed at locus $\A$. It is well known that a one-locus cline is globally asymptotically stable within its edge (Theorem \ref{thm:singlelocus}). However, determining the stability of these equilibria with respect to the full system \eqref{dynamics_pi} is a challenging task and has been resolved only for special cases (see below). \section{No recombination}\label{sec:no_rec} In this section, we treat the case $r=0$, i.e., $\rho=0$. Therefore, the results depend only on $s/d=\la$, and we use \eqref{dynamics_pi} throughout. Because $\rho=0$, we may regard each gamete $i\in I$ as an allele at one locus.
Therefore, the system \eqref{dynamics_pi} simplifies to a one-locus four-allele model, and the results of Lou and Nagylaki \cite{LN2002,LN2004,LN2006} on multiallelic one-locus models apply. We consider various assumptions on the functions $\alpha(x)$ and $\beta(x)$ and start with the most specific and simplest scenario that is of biological interest. \subsection{The functions $\alpha(x)$ and $\beta(x)$ have the same spatial dependence} We assume that \begin{subequations}\label{2.1} \begin{equation}\label{2.1a} \alpha (x)=a g(x)\,,\;\beta (x)=b g(x)\,, \end{equation} where \begin{equation}\label{2.1b} \mbox{the constants $a$ and $b$ are positive and the function $g(x)$ changes sign.} \end{equation} \end{subequations} Then \eqref{s1234} reduces to \begin{eqnarray}\label{2.2} s_1(x)=\tfrac{1}{2}(a+b)g(x)\,,\,&& s_2(x)=\tfrac{1}{2}(a-b)g(x)\,,\,\nonumber\\ s_3(x)=\tfrac{1}{2}(b-a)g(x)\,,\,&& s_4(x)=-\tfrac{1}{2}(a+b)g(x)\,. \end{eqnarray} By \eqref{2.1}, the conditions (A2) and (A3) in \cite{LN2002} hold with \begin{subequations} \begin{equation}\label{2.7} \sigma (x)= h_{14}(x)=(a+b)g(x)\,, \end{equation} \begin{equation}\label{2.8} \gamma_2=a(a+b)^{-1}\,,\quad \gamma_3=b(a+b)^{-1}\,. \end{equation} \end{subequations} Therefore, we obtain the following results directly from Theorems 3.2 and 3.3 in \cite{LN2002}. \begin{proposition}\label{prop2.1} If $\rho=0$ and \eqref{2.1} holds, system \eqref{dynamics_pi} always has a globally attracting equilibrium. \noindent {\rm (a)} Suppose that $\bar{g}<0$. Then $(0,0,0,1)^{T}$ is globally asymptotically stable if $0<\la\le\la^*(\sigma)$, and $\hat{p}^{(14)}$ is globally asymptotically stable if $\la>\la^*(\sigma)$. \noindent {\rm (b)} Suppose that $\bar{g}>0$. Then $(1,0,0,0)^{T}$ is globally asymptotically stable if $0<\la\le \la^*(-\sigma)$, and $\hat{p}^{(14)}$ is globally asymptotically stable if $\la>\la^*(-\sigma)$. \noindent {\rm (c)} Suppose that $\bar{g}=0$.
Then $\hat{p}^{(14)}$ is globally asymptotically stable for every $\la>0$. \end{proposition} \subsection{The functions $\alpha(x)$ and $\beta(x)$ have the same sign} We assume that \begin{equation}\label{2.9} \beta (x)=\alpha (x)\gamma(x)\,,\; \mbox{where $\gamma(x)>0$ for every $x\in \bar{\Omega}$}. \end{equation} Then \eqref{s1234} reduces to \begin{eqnarray}\label{2.10} s_1(x)=\tfrac{1}{2}(1+\gamma(x))\alpha(x)\,,&& s_2(x)=\tfrac{1}{2}(1-\gamma(x))\alpha(x)\,,\\ s_3(x)=\tfrac{1}{2}(\gamma(x)-1)\alpha(x)\,,&& s_4(x)=-\tfrac{1}{2}(1+\gamma(x))\alpha(x)\,.\nonumber \end{eqnarray} The following result follows directly from Remark 3.3 in \cite{LN2006}. We present a proof here using the idea mentioned there. \begin{proposition}\label{prop2.3} Assume that $\rho=0$, that the function $\alpha(x)$ changes sign, and that \eqref{2.9} holds. Then $\hat{p}^{(14)}$ is globally asymptotically stable for $\la\gg 1$. \end{proposition} \begin{proof} By \eqref{2.9} and \eqref{2.10}, we have \begin{subequations}\label{2.11} \begin{equation}\label{2.11a} s_2(x)<\max_{j\neq 2} s_j(x)\;\; \mbox{and} \;\; s_3(x)<\max_{j\neq 3} s_j(x)\quad \mbox{for every $x\in\bar{\Omega}$}\,, \end{equation} \begin{equation}\label{2.11b} s_1(x)>\max_{j\neq 1} s_j(x)\;\;\mbox{when $\alpha(x)>0$\;\; and}\;\; s_4(x)>\max_{j\neq 4} s_j(x)\;\; \mbox{when $\alpha(x)<0$}\,. \end{equation} \end{subequations} Let $p=(p_1, p_2, p_3, p_4)^T$ be any solution of \eqref{dynamics_pi}. 
Then, for $\la$ sufficiently large, \eqref{2.11a} and \cite[Corollary 4.7]{LN2004} imply that \begin{equation}\label{2.12} p_i(x,t)\to 0 \;\; \mbox{uniformly in $x$ as $t\to \infty$ for $i=2,3$\,.} \end{equation} By \eqref{2.11b} and \cite[Corollary 4.9]{LN2004}, for $i=1, 4$, there exists $\delta_i^*=\delta_i^*(\la)>0$ such that for all initial data that satisfy \eqref{dynamics_pi_c}, there exists $t_i^*$, which may depend on $\la$ and the initial data, such that \begin{equation}\label{2.15} p_i(x,t)\ge \delta_i^* \;\; \mbox{for every $x\in\bar{\Omega}$ and every $t\ge t_i^*$.} \end{equation} Now pick any sequence $\{t_k\}_{k=1}^{\infty}$ such that $t_k\to \infty$ as $k\to \infty$. The estimate \cite[(3.19)]{LN2002} shows that, passing to a subsequence if necessary, $p(x,t_k)\to \hat{p}(x)$ as $k\to\infty$ in $C^2(\bar{\Omega})$, where $\hat{p}$ is an equilibrium of system \eqref{dynamics_pi}. Then from \eqref{2.12} and \eqref{2.15} we conclude that $\hat{p}_i(x)=0$ for $i=2, 3$ and $\hat{p}_i(x)\ge\delta_i^*$ for $i=1, 4$, respectively. Since the only equilibrium with the gametes 1 and 4 present, and 2 and 3 absent, is $\hat{p}^{(14)}$ (see \eqref{2.4} -- \eqref{2.6}), we must have $\hat{p}=\hat{p}^{(14)}$. Therefore, the $\omega$-limit set of any solution whose initial data satisfy \eqref{dynamics_pi_c} is $\{\hat{p}^{(14)}\}$, and hence $p(x,t)\to \hat{p}^{(14)}(x)$ as $t\to\infty$. Finally, from \eqref{2.9} and \eqref{2.10} we observe that \begin{equation} \max [s_2(x),s_3(x)]<\max [s_1(x),s_4(x)]\quad\mbox{for every $x\in\bar{\Omega}$}\,, \end{equation} whence Theorem 1.6 in \cite{LN2006} informs us that $\hat{p}^{(14)}$ is asymptotically stable for $\la$ sufficiently large. This completes the proof. \end{proof} \subsection{Arbitrary functions $\alpha(x)$ and $\beta(x)$} We recall the definition of $I_i$ from \eqref{2.17} and make the generic assumption that \eqref{bar_si_max} holds for some $i\in I$.
Then \cite[Theorem 1.1]{LN2006} yields \begin{proposition}\label{prop2.4} Assume that $\rho=0$. Let $p=(p_1, p_2, p_3, p_4)^T$ denote an arbitrary solution of \eqref{dynamics_pi} with $p_i(x,0)\not\equiv 0$. Then for $0<\la \ll 1$, as $t\rightarrow\infty$, $p_i(x,t)\to 1$ uniformly in $x$. \end{proposition} \begin{remark}\rm From \eqref{s1234} we see that \eqref{bar_si_max} holds with \begin{equation}\label{2.13} i=\begin{cases} 1 &\mbox{if} \;\;\bar{\alpha}>0,~\bar{\beta}>0 \,,\\ 2 &\mbox{if} \;\; \bar{\alpha}>0,~\bar{\beta}<0\,,\\ 3 &\mbox{if} \;\; \bar{\alpha}<0,~\bar{\beta}>0\,,\\ 4 &\mbox{if} \;\; \bar{\alpha}<0,~\bar{\beta}<0\,. \end{cases} \end{equation} \end{remark} \begin{remark} \label{rm2.5}{\rm We observe that if neither $\bar{\alpha}$ nor $\bar{\beta}$ is zero (as in the four cases in \eqref{2.13}), then $\bar{s}_j\neq \bar{s}_k$ for every $j\neq k$. Therefore, if $\rho=0$, then according to \cite[Remark 1.3]{LN2006}, for sufficiently small $\la$, the vertices are the only equilibria of \eqref{dynamics_pi}.} \end{remark} As $\la$ increases, the edge equilibria will appear if (A) holds. The next result determines the stability of each of them immediately after its appearance \cite[Theorem 1.7]{LN2006}; the notation $\la_{ij}$ is as in \eqref{2.6b}. \begin{proposition}\label{prop:no_rec_general} Suppose that $\rho=0$, that each of the functions $\alpha(x)$, $\beta(x)$, $\alpha(x)+\beta(x)$, and $\alpha(x)-\beta(x)$ changes sign, and that assumption \eqref{bar_si_max} holds for some $i\in I$. \noindent {\rm (a)} There exists $\delta_1>0$ such that $\hat{p}^{(jk)}$ is linearly unstable if $j,k\in I_i$, $j<k$, and $\la_{jk}<\la<\la_{jk}+\delta_1$. \noindent {\rm (b)} Suppose further that $\la_{ik}<\min_{j\in I_i,j\neq k}\la_{ij}$ for some $k\in I_i$. Then there exists $\delta_2>0$ such that $\hat{p}^{(ik)}$ is linearly stable if $\la_{ik}<\la<\la_{ik}+\delta_2$, and $\hat{p}^{(il)}$ is linearly unstable if $l\neq k$ and $\la_{il}<\la<\la_{il}+\delta_2$. 
\end{proposition} \begin{remark}\label{rem:edge_equil}\rm Suppose that $i$ is the gamete with the highest spatially averaged fitness. Under the assumption in Proposition~\ref{prop:no_rec_general}(b), we infer from \eqref{3.7a} that $\la_{i}^*(\rho)=\la_{ik}$. Then Theorem~\ref{thm:monos_stability}(b) shows that $M_i$ is linearly stable if $0<\la<\la_{ik}$ and unstable if $\la>\la_{ik}$. Proposition~\ref{prop:no_rec_general}(b) informs us that as $\la$ increases from $0$, $\hat p^{(ik)}$ is the first of the edge equilibria bifurcating from $M_i$ to move into the state space, and initially it is linearly stable (by exchange of stability with $M_i$). All other edge equilibria that may move into the state space will be unstable immediately after their appearance. \end{remark} If there exists $x_i\in \Omega$ for $i=1,2,3,4$ such that \begin{equation}\label{2.14} \alpha(x_1)\,,\, \beta(x_1)>0\,;\;\alpha(x_2)>0\,,\, \beta(x_2)<0\,;\; \alpha(x_3)<0\,,\, \beta(x_3)>0\,;\;\alpha(x_4)\,,\, \beta(x_4)<0\,, \end{equation} then Corollary 4.10 in \cite{LN2004} guarantees the existence of an internal equilibrium for $\la\gg 1$. \begin{proposition}\label{prop2.7} Suppose that $\rho=0$ and that \eqref{2.14} holds. Then for $\la \gg 1$, system \eqref{dynamics_pi} has at least one equilibrium $\hat{p}=(\hat{p}_1, \hat{p}_2, \hat{p}_3, \hat{p}_4)^T$ such that $\hat{p}_i(x)>0$ in $\Omega$ for every $i$. \end{proposition} \section{Weak recombination}\label{sec:weak_rec} Here, we study \eqref{dynamics_dsr_pi} for weak recombination, i.e., $d$ and $s$ are fixed and $0<r\ll1$. This is equivalent to studying \eqref{dynamics_pi} with $\la>0$ fixed and $0<\rh\ll1$, which we use henceforth. From Section \ref{Section_boundary}, we already know that the four single-locus polymorphisms $\hat{p}^{(12)}$, $\hat{p}^{(34)}$, $\hat{p}^{(13)}$, and $\hat{p}^{(24)}$, defined by \eqref{2.4} and \eqref{2.5}, exist in pairs and neither their values nor their existence depends on $\rh$.
This is different for the edge equilibria $\hat{p}^{(14)}$ and $\hat{p}^{(23)}$, which can exist only if $\rh=0$. Suppose that $\hat{p}^{(14)}$ (or $\hat{p}^{(23)}$) exists when $\rh=0$. If we increase $\rho$ from $0$ slightly, will $\hat{p}^{(14)}$ ($\hat{p}^{(23)}$) move into the interior of the state space $\mathbf{X}$ and therefore become a full polymorphism? The investigation of this problem is the main purpose of this section. Throughout, we suppose assumption (A). Our main result is the following. \begin{theorem}\label{thm:weak_reco} \noindent {\rm (a)} If for $\rho=0$ the edge equilibrium $\hat{p}^{(14)}$ $(\hat{p}^{(23)})$ exists and is linearly stable, then for every sufficiently small $\rho>0$, problem \eqref{dynamics_pi} has an internal equilibrium $\hat{p}^{(\rho)}$ that is linearly stable, and $\hat{p}^{(\rho)}(x)\to \hat{p}^{(14)}(x)$ $(\hat{p}^{(23)}(x))$ uniformly as $\rho\to 0+$. \noindent {\rm (b)} Assume that each of $\alpha(x)$, $\beta(x)$, $\alpha(x)+\beta(x)$, and $\alpha(x)-\beta(x)$ changes sign, \eqref{bar_si_max} holds for $i=1$, and $\la_{14}<\min\{\la_{12},\la_{13}\}$. Then there exists $\delta>0$ such that for every $\la\in (\la_{14}, \la_{14}+\delta)$ and every sufficiently small $\rho>0$, problem \eqref{dynamics_pi} has an internal equilibrium $\hat{p}^{(\rho)}$, which is linearly stable. Moreover, for every fixed $\la\in (\la_{14}, \la_{14}+\delta)$, we have $\hat{p}^{(\rho)}(x)\to \hat{p}^{(14)}(x)$ uniformly as $\rho\to 0+$. \end{theorem} \begin{remark}\rm 1. Note that the assumption \eqref{bar_si_max} for $i=1$ can be imposed without loss of generality upon relabeling of gametes. \noindent {\rm 2.} Recall from Proposition \ref{prop:no_rec_general} and Remark \ref{rem:edge_equil} that for $\rh=0$, $\la_{14}$ is the critical value at which $\hat{p}^{(14)}$ appears by an exchange-of-stability bifurcation with $M_1$ as $\la$ increases above $\la_{14}$. 
Moreover, $\la_{14}<\min\{\la_{12},\la_{13}\}$ implies that $\hat{p}^{(14)}$ appears before the two pairs of edge equilibria ($\hat{p}^{(12)}$ and $\hat{p}^{(34)}$, $\hat{p}^{(13)}$ and $\hat{p}^{(24)}$) as $\la$ increases from 0. \end{remark} To prove Theorem \ref{thm:weak_reco}, we need some preparations. Recalling \eqref{1.4}, \eqref{Jacob}, \eqref{1.20}, \eqref{2.4}, \eqref{2.5}, and using $\sum_{i=1}^{4}\phi_i=0$, the linearized problem of \eqref{dynamics_pi} with $\rho=0$ at $\hat{p}^{(14)}(x)$ reads \begin{subequations}\label{4.1} \begin{alignat}{2} &\De\phi_1+\la h_{14}(1-2\theta_{14})\phi_1 -\la\theta_{14}[h_{24}\phi_2+h_{34}\phi_3]+\mu \phi_1=0 &\quad&\text{in } \Omega\,, \label{4.1a} \\ &\De\phi_2+\la (h_{24}-h_{14}\theta_{14})\phi_{2} +\mu\phi_2=0 &\quad&\text{in } \Omega\,, \label{4.1b} \\ &\De\phi_3+\la (h_{34}-h_{14}\theta_{14})\phi_{3} +\mu\phi_3=0 &\quad&\text{in } \Omega\,, \label{4.1c} \\ &\partial_\nu \phi_i = 0\,, \quad i=1,2,3, &\quad&\text{on } \partial\Omega\,. \label{4.1d} \end{alignat} \end{subequations} There are three single-equation linearized problems related to \eqref{4.1}: \begin{equation}\label{4.5} \De\phi^{(1)}+\la h_{14}(1-2\theta_{14})\phi^{(1)} +\mu^{(1)}\phi^{(1)}=0\quad \mbox{in $\Omega$\,,} \quad \partial_\nu \phi^{(1)}=0 \quad \mbox{on~} \partial\Omega\,. \end{equation} \begin{equation}\label{4.4} \De\phi^{(2)}+\la (h_{24}-h_{14}\theta_{14})\phi^{(2)} +\mu^{(2)}\phi^{(2)}=0\quad \mbox{in $\Omega$\,,} \quad \partial_\nu \phi^{(2)}=0 \quad \mbox{on~} \partial\Omega\,. \end{equation} \begin{equation}\label{4.8} \De\phi^{(3)}+\la (h_{34}-h_{14}\theta_{14})\phi^{(3)} +\mu^{(3)}\phi^{(3)}=0\quad \mbox{in $\Omega$\,,} \quad \partial_\nu \phi^{(3)}=0 \quad \mbox{on~} \partial\Omega\,. \end{equation} We denote the set of eigenvalues of \eqref{4.1}, \eqref{4.5}, \eqref{4.4}, and \eqref{4.8} by $E$, $E^{(1)}$, $E^{(2)}$, and $E^{(3)}$, respectively. 
\begin{lemma}\label{lm4.1} The set of eigenvalues of problem \eqref{4.1} consists of the eigenvalues of problems \eqref{4.5}, \eqref{4.4}, and \eqref{4.8}, namely, $\displaystyle E=\bigcup_{i=1}^{3} E^{(i)}$. \end{lemma} \begin{proof} First, we show that $\displaystyle E \supseteq \bigcup_{i=1}^{3} E^{(i)}$. Suppose that $\mu^{(1)}\in E^{(1)}$ with an eigenfunction $\phi^{(1)}$. Then it is clear that $\mu^{(1)}$ solves \eqref{4.1} with $\phi_1=\phi^{(1)}$, $\phi_2=0$, and $\phi_3=0$, and therefore $\mu^{(1)}\in E$. If $\mu^{(2)}\in E^{(2)} \setminus E^{(1)}$ with an eigenfunction $\phi^{(2)}$, we see that it is also an eigenvalue of \eqref{4.1} by taking $\phi_2=\phi^{(2)}$, $\phi_3=0$, and solving for $\phi_1$ from \eqref{4.1a}; this is possible because $\mu^{(2)}\notin E^{(1)}$. Similarly, if $\mu^{(3)}\in E^{(3)}\setminus E^{(1)}$ with an eigenfunction $\phi^{(3)}$, we see that it is also an eigenvalue of \eqref{4.1} by taking $\phi_2=0$, $\phi_3=\phi^{(3)}$, and solving for $\phi_1$ from \eqref{4.1a}. Second, we demonstrate the converse inclusion $\displaystyle E \subseteq \bigcup_{i=1}^{3} E^{(i)}$. If $\mu$ is an eigenvalue of \eqref{4.1} with $\phi_2=\phi_3=0$, then $\phi_1\neq 0$ and therefore $\mu$ is an eigenvalue of \eqref{4.5}; otherwise, if $\phi_2\neq 0$ or $\phi_3\neq 0$, then $\mu$ is an eigenvalue of \eqref{4.4} or \eqref{4.8}, respectively. Thus, the set of eigenvalues of \eqref{4.1} consists of the eigenvalues of \eqref{4.5}, \eqref{4.4}, and \eqref{4.8}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:weak_reco}] (a) We present the proof only for $\hat{p}^{(14)}$; for $\hat{p}^{(23)}$ it is similar. By the assumption that $\hat{p}^{(14)}$ is linearly stable when $\rho=0$, every $\mu$ that satisfies \eqref{4.1} has a positive real part unless $\phi_i\equiv 0$ for $i=1,2,3$. Therefore, by the implicit function theorem, there exists a family of equilibria $\hat{p}^{(\rho)}$ for sufficiently small $\rho>0$ such that $\hat{p}^{(\rho)}(x)\to \hat{p}^{(14)}(x)$ uniformly as $\rho\to 0+$. 
From \eqref{1.4} and \eqref{Jacob} we infer that the linearization of \eqref{dynamics_pi} at $\hat{p}^{(\rho)}$ is a small continuous perturbation of \eqref{4.1} for which every eigenvalue also has a positive real part, whence $\hat{p}^{(\rho)}$ is linearly stable. Next, we show that $\hat{p}^{(\rho)}$ is in the interior of $\mathbf{X}$. By the fact that $\hat{p}^{(14)}_1(x)>0$ and $\hat{p}^{(14)}_4(x)>0$ in $\bar{\Omega}$ and the uniform continuity of $\hat{p}^{(\rho)}(x)$ with respect to $\rho$, we obtain that $\hat{p}^{(\rho)}_1(x)>0$ and $\hat{p}^{(\rho)}_4(x)>0$ in $\bar{\Omega}$ for sufficiently small $\rho>0$. To see that $\hat{p}^{(\rho)}_2(x)>0$ and $\hat{p}^{(\rho)}_3(x)>0$ in $\bar{\Omega}$ for sufficiently small $\rho>0$, we consider \begin{equation}\label{4.2} u(x)=(u_1(x),u_2(x),u_3(x)):= \left(\frac{\partial \hat{p}^{(\rho)}_1}{\partial \rho}(x), \frac{\partial \hat{p}^{(\rho)}_2}{\partial \rho}(x), \frac{\partial \hat{p}^{(\rho)}_3}{\partial \rho}(x)\right) \biggl|_{\rho=0}\,. \end{equation} Differentiating the equilibrium problem that $\hat{p}^{(\rho)}$ satisfies with respect to $\rho$ and then substituting $\rho=0$, we obtain \begin{subequations}\label{4.3} \begin{align} &\De u_1+\la h_{14}(1-2\theta_{14})u_1 -\la\theta_{14}[h_{24}u_2+h_{34}u_3]-\theta_{14}(1-\theta_{14})=0 &\quad&\text{in } \Omega\,, \label{4.3a}\\ &\De u_2+\la (h_{24}-h_{14}\theta_{14})u_{2} +\theta_{14}(1-\theta_{14})=0 &\quad&\text{in } \Omega\,, \label{4.3b}\\ &\De u_3+\la (h_{34}-h_{14}\theta_{14})u_{3} +\theta_{14}(1-\theta_{14})=0 &\quad&\text{in } \Omega\,, \label{4.3c} \\ &\partial_\nu u_i=0\,, \quad i=1,2,3, &\quad&\text{on } \partial\Omega\,. \label{4.3d} \end{align} \end{subequations} By our assumption that every eigenvalue $\mu$ of \eqref{4.1} has positive real part, we infer from Lemma~\ref{lm4.1} that the smallest eigenvalue $\mu_{1}^{(2)}$ of \eqref{4.4} is positive. 
By an inverse positivity result, from \eqref{4.3b} and the facts $\mu_{1}^{(2)}>0$ and $\theta_{14}(1-\theta_{14})>0$ we conclude that $u_2(x)>0$ in $\bar{\Omega}$. (For the inverse positivity result, see, e.g., \cite[Theorem 7.3]{Hess91}, in which we take \begin{equation}\label{K} K=\left[-\De-\la (h_{24}-h_{14}\theta_{14})+c\right]^{-1} \end{equation} for some constant $c>0$ such that $-\la (h_{24}-h_{14}\theta_{14})+c>0$ in $\bar{\Omega}$, and associate it with the homogeneous Neumann boundary condition. Then \begin{equation}\label{sprK} \mbox{spr}(K)=1/(\mu_{1}^{(2)}+c)\,, \end{equation} and, upon writing \eqref{4.3b} as $[-\De-\la (h_{24}-h_{14}\theta_{14})+c]u_2=cu_2+\theta_{14}(1-\theta_{14})$, applying $K$, and dividing by $c$, we see that \eqref{4.3b} is equivalent to \begin{equation}\label{4.7} \frac{1}{c} u_2-Ku_2 = \frac{1}{c} K[\theta_{14}(1-\theta_{14})]\,. \end{equation} By standard elliptic regularity, embedding theory, and the strong maximum principle, $K$ is compact and strongly positive on $C^{1+\gamma}(\bar\Omega)$ for some $\gamma\in (0,1)$. Moreover, $\mu_{1}^{(2)}>0$ and \eqref{sprK} imply that $1/c>\mbox{spr}(K)$, whence the positivity of the right-hand side of \eqref{4.7} leads to $u_2(x)>0$ in $\bar\Omega$.) Similarly, $\mu_{1}^{(3)}>0$, and the same argument applied to \eqref{4.3c} yields $u_3(x)>0$ in $\bar{\Omega}$. Hence, we deduce from $u_i(x)>0$ in $\bar{\Omega}$ for $i=2,3$ and \eqref{4.2} that $\hat{p}^{(\rho)}_2(x)>0$ and $\hat{p}^{(\rho)}_3(x)>0$ for sufficiently small $\rho>0$. Thus, we have proved that $\hat{p}^{(\rho)}$ is a full polymorphism for sufficiently small $\rho>0$, and this completes the proof of (a). Part (b) follows directly from Proposition \ref{prop:no_rec_general} and part (a). \end{proof} \begin{remark}\rm Theorem \ref{thm:weak_reco} shows that a linearly stable equilibrium at either the 14-edge or the 23-edge moves into the interior of the state space if $\rh>0$. The following result shows that if such an equilibrium is unstable for $\rho=0$, it leaves the state space when $\rho>0$. 
\end{remark} \begin{proposition}\label{prop:weak_reco} Suppose that for $\rho=0$ the edge equilibrium $\hat{p}^{(14)}$ $(\hat{p}^{(23)})$ exists and is nondegenerate and linearly unstable. Then there exists a neighbourhood in $\mathbf{X}$ of $\hat{p}^{(14)}$ $(\hat{p}^{(23)})$ in which there is no equilibrium of \eqref{dynamics_pi} for sufficiently small $\rho>0$. However, there is a family of stationary states $\hat{p}^{(\rho)}\notin\mathbf{X}$ such that $\hat{p}^{(\rho)}(x)\to \hat{p}^{(14)}(x)$ $(\hat{p}^{(23)}(x))$ uniformly as $\rho\to 0+$. \end{proposition} \begin{proof} We prove this proposition for $\hat p^{(14)}$; the proof for $\hat p^{(23)}$ is analogous. Because we assume that $\hat p^{(14)}$ is nondegenerate, by the implicit function theorem, there exists a {\it unique} family of equilibria $\hat{p}^{(\rho)}$ of \eqref{dynamics_pi} for $\rho>0$ sufficiently small such that $\hat{p}^{(\rho)}(x)\to \hat{p}^{(14)}(x)$ uniformly as $\rho\to 0+$. From Section~\ref{sec:existence_boundary} we know that $\hat p^{(14)}$ is always linearly stable with respect to \eqref{2.5}, and therefore the smallest eigenvalue $\mu_1^{(1)}$ of \eqref{4.5} is positive. Then Lemma~\ref{lm4.1} and the instability of $\hat p^{(14)}$ with respect to the full system \eqref{dynamics_pi} with $\rho=0$ imply that either $\mu_{1}^{(2)}<0$ or $\mu_{1}^{(3)}<0$. If $\mu_{1}^{(2)}<0$, then by the same method we used in the proof of Theorem~\ref{thm:weak_reco}(a), we would have $1/c<\mbox{spr}(K)$ by \eqref{sprK}, whence the positivity of the right-hand side of \eqref{4.7} and \cite[Theorem 7.3]{Hess91} imply that $u_2$ cannot be a positive function in $\bar\Omega$. Thus, $\hat{p}^{(14)}$ leaves the state space when $\rho>0$. Similarly, if $\mu_{1}^{(3)}<0$, then $u_3$ cannot be a positive function in $\bar\Omega$, whence $\hat{p}^{(14)}$ again leaves the state space when $\rho>0$. 
In light of the uniqueness of the family of $\hat{p}^{(\rho)}$ which converges to $\hat{p}^{(14)}$ as $\rho\to 0+$, we conclude that there exists a neighbourhood in $\mathbf{X}$ of $\hat{p}^{(14)}$ in which there is no equilibrium of \eqref{dynamics_pi} for sufficiently small $\rho>0$. This completes the proof. \end{proof} \section{Strong recombination}\label{sec:strong_reco} Now we assume that recombination is sufficiently strong relative to diffusion and selection, i.e., $r\gg 1$. We fix $d>0$ and $s>0$, hence $\la>0$, work with \eqref{dynamics_pi}, and set $\ep=1/\rh>0$. We study existence, uniqueness, and stability of two-locus clines for sufficiently small $\ep$ under the assumption (A). It will be convenient to follow the evolution of the allele frequencies $p_A=p_1+p_2$ and $p_B=p_1+p_3$, and the linkage disequilibrium $D=p_1p_4-p_2p_3$, instead of the gamete frequencies $p_i$. The corresponding transformation is given by \begin{align}\label{trafo_T} &\mathcal{T} : (p_A,p_B,D) \mapsto (p_1,p_2,p_3,p_4)\nonumber\\ &\mathcal{T}(p_A,p_B,D) = (p_Ap_B+D,p_A(1-p_B)-D,(1-p_A)p_B-D,(1-p_A)(1-p_B)+D) \,. \end{align} It is easily shown that the system of differential equations \eqref{dynamics_pi_a} and \eqref{dynamics_pi_b} with the selection terms \eqref{S_i} is equivalent to \begin{subequations}\label{eq:ABD_add} \begin{align} \partial_t p_A &= \De p_A + \la\alpha(x) p_A(1-p_A) + \la\be(x)D \,, \label{eq:ABD_a} \\ \partial_t p_B &= \De p_B + \la\be(x) p_B(1-p_B) + \la\alpha(x)D \,, \label{eq:ABD_b} \\ \partial_t D &= \De D + 2\nabla p_A\cdot\nabla p_B+ \la[\alpha(x)(1-2p_A) + \be(x)(1-2p_B)]D - \frac{1}{\ep} D \label{eq:ABD_D} \end{align} in $\Om\times (0,\infty)$ and \begin{equation} \partial_\nu p_A = \partial_\nu p_B = \partial_\nu D = 0 \quad\text{on } \partial\Om\times (0,\infty)\,. \end{equation} \end{subequations} Here, $\nabla$ denotes the vector differential operator with derivatives with respect to $x\in\Reals^n$. 
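For the reader's convenience, we verify that $\mathcal{T}$ inverts the map $(p_1,p_2,p_3,p_4)\mapsto(p_1+p_2,\,p_1+p_3,\,p_1p_4-p_2p_3)$. Writing $(p_1,p_2,p_3,p_4)=\mathcal{T}(p_A,p_B,D)$, a direct computation yields
\begin{align*}
p_1+p_2 &= [p_Ap_B+D]+[p_A(1-p_B)-D]=p_A\,,\\
p_1+p_3 &= [p_Ap_B+D]+[(1-p_A)p_B-D]=p_B\,,\\
p_1p_4-p_2p_3 &= D\,\bigl[p_Ap_B+(1-p_A)(1-p_B)+p_A(1-p_B)+(1-p_A)p_B\bigr]=D\,,
\end{align*}
where in the last line the quartic terms and the terms quadratic in $D$ cancel, and the bracket equals $1$.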
The constraints \eqref{constraint1} on the $p_i$ are transformed to \begin{subequations}\label{constraints_twoloc} \begin{equation}\label{constraint_pA_pB} 0\le p_A\le1\,, \; 0\le p_B\le1\,, \end{equation} and \begin{equation}\label{constraint_D} -\min\{p_A p_B,(1-p_A)(1-p_B)\} \le D \le \min\{p_A (1-p_B),(1-p_A)p_B\}\,, \end{equation} \end{subequations} where these inequalities hold in $\Om\times[0,\infty)$ (e.g., \cite{RB2017}). In particular, the map $\mathcal{T}:\mathbf{Y}\to\mathbf{X}$, given by \eqref{trafo_T}, is a homeomorphism, where \begin{align} \mathbf{Y}:=\bigl\{&(v_1,v_2,v_3) \in C(\bar\Omega; [0,1]^2) \times C(\bar\Omega;[-\tfrac14,\tfrac14]): \notag\\ &-\min\{v_1 v_2, (1-v_1)(1-v_2)\} \leq v_3 \leq \min \{v_1(1-v_2), (1-v_1)v_2\}\bigr\}\,. \end{align} In addition, we define \begin{equation} \mathbf{Y}_0 = \left\{(v_1,v_2,v_3)\in \mathbf{Y}: v_1\equiv 0 \text{ or } v_1\equiv 1 \text{ or } v_2\equiv 0 \text{ or } v_2\equiv 1\right\} \end{equation} and recall that each of the four edges in $\mathbf{Y}_0=\mathcal{T}^{-1}(\mathbf{X}_0)$ is invariant (Section \ref{sec:basic_dynamics}). Because strong recombination erodes linkage disequilibrium rapidly, we expect that $D$ will be of order $\ep$ at stationarity (see \cite{RB2009,NHB1999} for related ODE models). If $D\equiv0$ then \eqref{eq:ABD_a} and \eqref{eq:ABD_b} describe two uncoupled one-locus systems, which are well understood (Section \ref{sec:SLClines}). We shall obtain the two-locus cline of \eqref{eq:ABD_add} as a perturbation of the Cartesian product of the two single-locus clines of \eqref{eq:ABD_a} and \eqref{eq:ABD_b} with $D\equiv0$. From Section~\ref{sec:SLClines}, and because we assume (A), we know that both exist if $\la>\max\{\la_{A},\la_{B}\}$, where $\la_{A}=\la_{\alpha}\in(0,\infty)$ and $\la_{B}=\la_{\beta}\in(0,\infty)$ are as in \eqref{lambda_h}. 
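The expectation that $D=O(\ep)$ at stationarity can be made plausible directly from \eqref{eq:ABD_D}: at an equilibrium,
\begin{equation*}
D=\ep\bigl(\De D + 2\nabla p_A\cdot\nabla p_B+ \la[\alpha(x)(1-2p_A) + \be(x)(1-2p_B)]D\bigr)\,,
\end{equation*}
so that, formally, if $D$ remains bounded in $C^2(\bar\Om)$ as $\ep\to0$, then
\begin{equation*}
D = 2\ep\,\nabla p_A\cdot\nabla p_B + O(\ep^2)\,.
\end{equation*}
This formal expansion is made rigorous by the estimates in Lemma \ref{lem:7.4} below.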
For $h\in\{\alpha, \beta\}$, let $\theta_h(x)$ denote the global attractor of the single-locus problem at locus $\A$ or $\B$, respectively (Theorem~\ref{thm:singlelocus}). The following is the main result of this section. \begin{theorem}\label{thm:7.1} Suppose that (A) holds. For every $\la>0$ with $\la\neq\max \{ \la_A, \la_B\}$ and for sufficiently small $\ep>0$, the system \eqref{eq:ABD_add} has an equilibrium $(\hat{p}_A, \hat{p}_B, \hat{D})= (\hat{p}_A^{(\ep)}, \hat{p}_B^{(\ep)}, \hat{D}^{(\ep)})$ that attracts all trajectories with initial data in $\mathbf{Y} \setminus \mathbf{Y}_0$, where convergence occurs in {$[C^2(\bar\Om)]^3$}. Moreover, the following conclusions hold. \noindent {\rm (a)} For every $0 < \la <\max \{ \la_A, \la_B\}$, there exists $\epsilon_0>0$ such that the system \eqref{eq:ABD_add} admits no internal equilibrium if $\epsilon \in (0,\epsilon_0]$. In fact, at least one of $\theta_\alpha$ and $\theta_\beta$ is trivial, and the globally attracting equilibrium is independent of $\epsilon$, i.e., \begin{equation} (\hat{p}_A^{(\epsilon)}, \hat{p}_B^{(\epsilon)}, \hat{D}^{(\epsilon)}) = (\theta_\alpha, \theta_\beta, 0) \in \mathbf{Y}_0\,. \end{equation} \noindent {\rm (b)} For every $\la > \max \{ \la_A, \la_B\}$, there exists $\epsilon_0>0$ such that for every $\epsilon \in (0,\epsilon_0]$, the globally attracting equilibrium is internal and satisfies \begin{equation}\label{eps_estimate_(pA,pB,D)} \| (\hat{p}_A^{(\ep)}, \hat{p}_B^{(\ep)}) - (\theta_\alpha, \theta_\beta)\|_{C^1(\bar\Omega)} + \|\hat{D}^{(\ep)}\|_{C(\bar\Omega)} = O(\epsilon)\,, \end{equation} i.e., $(\hat{p}_A^{(\ep)}, \hat{p}_B^{(\ep)}, \hat{D}^{(\ep)})$ lies in the interior of $\mathbf{Y}$ and converges to $(\theta_\alpha, \theta_\beta, 0)$ in {$C^1(\bar\Omega) \times C^1(\bar\Omega) \times C(\bar\Omega)$} as $\epsilon \to 0$. 
\end{theorem} \begin{remark}{\rm By examining the elliptic system satisfied by the stationary solution $(\hat{p}_A, \hat{p}_B, \hat{D})$, and using the fact that $\|\hat D\|_{L^\infty(\Omega)} = O(\epsilon)$, it is not hard to show that $\|\hat D\|_{W^{2,p}(\Omega)} \leq C$ and thus $\|(\hat p_A, \hat p_B)\|_{C^{2,\gamma}(\bar\Omega)} \leq C$. This shows that in fact the convergence in \eqref{eps_estimate_(pA,pB,D)} can be improved to $C^2(\bar\Omega) \times C^2(\bar\Omega) \times C^1(\bar\Omega)$.} \end{remark} \begin{remark}\label{rem:7.2}{\rm If $0<\la \le \min \{ \la_A, \la_B\}$, Theorem \ref{thm:7.1}(a) together with Theorem \ref{thm:singlelocus} implies that a monomorphic equilibrium is globally asymptotically stable for \eqref{eq:ABD_add} with sufficiently small $\epsilon>0$.} \end{remark} The case $\la = \max \{ \la_A, \la_B\}$ is degenerate and is briefly discussed in Section \ref{sec:Disc}. If (A) does not hold, then convergence to a boundary equilibrium occurs for every $\la>0$ (Section \ref{sec:Disc}). \subsection{Preliminaries and proof of Theorem \ref{thm:7.1}(a)} Throughout this subsection, we assume that $(p_A(x,0), p_B(x,0), D(x,0))\in \mathbf{Y}\setminus \mathbf{Y}_0$. Then, by Lemma~\ref{flow_into_interior}, the solution of \eqref{eq:ABD_add} satisfies $0<p_A(x,t)<1$ and $0<p_B(x,t)<1$ in $\bar\Omega\times (0,\infty)$. For convenience, we define \begin{equation}\label{wAB} \DA(x,t)= \dfrac{D(x,t)}{p_A(x,t)(1-p_A(x,t))}\,,\quad \DB(x,t)= \dfrac{D(x,t)}{p_B(x,t)(1-p_B(x,t))}\,. \end{equation} \begin{lemma}\label{lem:7.4a} For given $\la>0$, there exists $C_0>0$ independent of $\epsilon$ such that \begin{equation}\label{eq:lem7.4-1} \sup_{x \in \Omega, t \geq 1} \left[ \frac{|\nabla p_A(x,t)|}{p_A(x,t)(1-p_A(x,t))} + \frac{|\nabla p_B(x,t)|}{p_B(x,t)(1-p_B(x,t))} \right] \leq C_0\,. \end{equation} In particular, \begin{equation}\label{eq:lem7.4-2} | \nabla p_A(x,t)| + |\nabla p_B(x,t)| \leq C_0 \quad \text{ for } (x,t) \in \Omega\times[1,\infty)\,. 
\end{equation} \end{lemma} \begin{proof} In light of \eqref{wAB}, we can rewrite \eqref{eq:ABD_a} and its boundary condition as \begin{equation}\label{eq:pA} \begin{cases} \partial_t p_A - \Delta p_A = \la \left[\alpha + \beta \DA \right](1-p_A)p_A & \text{ in }\Om\times (0,\infty)\,, \\ \partial_\nu p_A = 0 &\text{ on } \partial\Om\times (0,\infty)\,, \end{cases} \end{equation} where, by the constraints \eqref{constraints_twoloc}, \begin{equation}\label{eq:wA} |\DA | = \frac{|D|}{p_A} + \frac{|D|}{1-p_A} \leq 2\,. \end{equation} Hence, $p_A \geq 0$ satisfies the differential inequality \begin{equation} \begin{cases} \partial_t p_A - \Delta p_A \leq M_0 p_A &\text{ in } \Omega \times (0,\infty)\,,\\ \partial_\nu p_A = 0 &\text{ on } \partial\Omega \times(0,\infty)\,, \end{cases} \end{equation} where $M_0 = \la (\|\alpha\|_{C(\bar\Omega)} + 2\|\beta\|_{C(\bar\Omega)})$. By comparison we obtain \begin{equation*} \|p_A\|_{C(\bar\Omega \times [t-1, t])} \leq e^{M_0} \|p_A(\cdot, t-1)\|_{C(\bar\Omega)} \quad \text{ for }t \geq 1. \end{equation*} Now, we may apply a parabolic $L^p$-estimate to the solution $p_A$ of \eqref{eq:pA} and obtain a constant $C_1>0$ (independent of $t \geq 1$) such that $$ \|p_A(\cdot,t)\|_{C^1(\bar\Omega)} \leq C_1 \|p_A(\cdot,t-1)\|_{C(\bar\Omega)} \quad \text{ for }t \geq 1. $$ Hence, \begin{equation}\label{eq:wAA} \sup_{x \in \Omega} \frac{\left|\nabla p_A(x,t)\right|}{p_A(x,t)} \leq \frac{C_1\sup_{x' \in \Omega} p_A(x', t-1)}{\inf_{x' \in \Omega} p_A(x',t)} \leq C_2 \quad \text{ for }t \geq 1, \end{equation} where the second inequality is based on a standard Harnack inequality for homogeneous parabolic equations with uniformly bounded coefficients \cite[Corollary 7.42]{lieberman}. (Due to the Neumann boundary condition, the Harnack inequality can be applied up to the boundary of the spatial domain $\Omega$.) 
By repeating the argument with $1-p_A$, we obtain \begin{equation}\label{eq:wAAA} \sup_{x \in \Omega} \frac{|\nabla p_A(x,t)|}{1-p_A(x,t)} = \sup_{x \in \Omega} \frac{|\nabla (1-p_A(x,t))|}{1-p_A(x,t)}\leq C_3 \quad \text{ for }t \geq 1. \end{equation} Combining \eqref{eq:wAA} and \eqref{eq:wAAA}, we deduce \begin{equation} \sup_{x \in \Omega} \frac{|\nabla p_A|}{p_A(1-p_A)} = \sup_{x \in \Omega}\left[ \frac{|\nabla p_A|}{p_A} + \frac{|\nabla p_A|}{1-p_A} \right] \leq C_2+C_3 \quad \text{ for } t\geq 1\,. \end{equation} The corresponding estimate for $p_B$ follows analogously. \end{proof} \begin{remark}\label{remark:lem7.4} {\rm The parabolic $L^p$ estimate and the Harnack inequality require only the boundedness of $\la$. Therefore, for each fixed $M>0$, the bound $C_0$ in Lemma \ref{lem:7.4a} can be chosen uniformly for $\la \in (0,M]$ and $\epsilon \in (0,\infty)$.} \end{remark} \begin{lemma}\label{lem:7.4} For given $\la>0$ and $\epsilon>0$ such that \begin{equation}\label{eq:epsilon_smaller} \frac{1}{2\epsilon} > 3 \la \|\beta \|_{C(\bar\Omega)} + 2 C_0^2, \end{equation} where $C_0$ is as in Lemma \ref{lem:7.4a}, we have \begin{subequations}\label{eq:cor7.4a} \begin{alignat}{2} &\limsup_{ t\to\infty} \left\|\DA(\cdot, t)\right\|_{C(\bar\Omega)} \leq 4 \epsilon C_0 \limsup_{t \to \infty} \|\nabla p_B\|_{C(\bar\Omega)} \,, \label{eq:cor7.4aa}\\ &\limsup_{ t\to\infty} \left\|\DB(\cdot, t)\right\|_{C(\bar\Omega)} \leq 4 \epsilon C_0 \limsup_{t \to \infty} \|\nabla p_A\|_{C(\bar\Omega)} \,.\label{eq:cor7.4ab} \end{alignat} \end{subequations} In particular, the following holds:\\ \noindent {\rm (a)} if $p_B(\cdot,t) \to 0$ or $1$ in $C(\bar\Omega)$, then $\DA(\cdot,t) \to 0$ in $C(\bar\Omega)$;\\ \noindent {\rm (b)} if $p_A(\cdot,t) \to 0$ or $1$ in $C(\bar\Omega)$, then $\DB(\cdot,t) \to 0$ in $C(\bar\Omega)$;\\ \noindent {\rm (c)} $\limsup_{ t\to\infty} \left\|\DA(\cdot, t)\right\|_{C(\bar\Omega)} \leq 4 \epsilon C_0^2$ \;and\; $\limsup_{ t\to\infty} 
\left\|\DB(\cdot, t)\right\|_{C(\bar\Omega)} \leq 4 \epsilon C_0^2$. \end{lemma} \begin{remark}\label{rem:D_to_O(eps)}\rm It is easy to deduce from \eqref{wAB} and Lemma \ref{lem:7.4}(c) that for every $\la>0$, \begin{equation}\label{eq:cor7.4b} \limsup_{t \to \infty} \|D(\cdot,t)\|_{C(\bar\Omega)} \leq \limsup_{t \to \infty} \left\|\DA(\cdot, t) \right\|_{C(\bar\Omega)} = O(\epsilon) \end{equation} as $\ep\to0$. This shows that indeed, as argued verbally in Section \ref{sec:reco_LD} and above, linkage disequilibrium decays to values close to 0 if recombination is sufficiently strong. Similar results were proved previously for general non-spatial multilocus models \cite{N1993,NHB1999} as well as for spatial models with a finite number of demes \cite{RB2009}. However, \eqref{eq:cor7.4a} is stronger than \eqref{eq:cor7.4b}, and it will be essential for the proof of Theorem \ref{thm:7.1}. \end{remark} \begin{proof}[Proof of Lemma \ref{lem:7.4}] From \eqref{eq:ABD_a}, \eqref{eq:ABD_D}, and \eqref{wAB}, we derive \begin{subequations}\label{eq:wAeq} \begin{alignat}{2} &\partial_t \DA - \Delta \DA -\frac{2(1-2p_A)\nabla p_A}{p_A(1-p_A)}\cdot \nabla \DA + \DA \Biggl[\la \beta(1-2p_A)\DA - \la \beta (1-2p_B) \notag \\ &\qquad+ \frac{2|\nabla p_A|^2}{p_A(1-p_A)} + \frac{1}{\epsilon} \Biggr] = \frac{2\,\nabla p_A \cdot \nabla p_B }{p_A(1-p_A)} &&\hspace{-30mm}\text{in } \Omega\times(0,\infty)\,,\\ &\partial_\nu \DA = 0 &&\hspace{-30mm}\text{on } \partial\Omega\times(0,\infty)\,. 
\end{alignat} \end{subequations} Because each $\underlineDA \in \{\DA,-\DA \}$ satisfies the differential inequality \begin{align*} & \partial_t \underlineDA - \Delta \underlineDA -\frac{2(1-2p_A)\nabla p_A}{p_A(1-p_A)} \cdot \nabla \underlineDA \\ &\quad+\underlineDA \left[\la \beta(1-2p_A)\DA - \la \beta (1-2p_B) + \frac{2|\nabla p_A|^2}{p_A(1-p_A)} + \frac{1}{\epsilon} \right] \leq \frac{2|\nabla p_A | }{p_A(1-p_A)} \left|\nabla p_B\right|\,, \end{align*} their maximum $|\DA | = \max\{\DA , -\DA \}$, being the maximum of two subsolutions, satisfies the same differential inequality in the weak sense. From \eqref{eq:epsilon_smaller}, \eqref{eq:lem7.4-1}, and \eqref{eq:lem7.4-2}, we obtain \begin{align*} \frac{1}{2\epsilon} &\geq \sup_{x \in \Omega,\, t \geq 1} \left(3 \la |\beta(x)| + \frac{2|\nabla p_A|}{p_A(1-p_A)}|\nabla p_A|\right) \notag \\ &\geq \sup_{x \in \Omega,\, t \geq 1} \left| \la \beta (1-2p_A)\DA - \la \beta (1-2p_B) + \frac{2|\nabla p_A|^2}{p_A(1-p_A)}\right|\,, \end{align*} where we used the fact that $|\DA |\leq 2$ by \eqref{eq:wA}. Then $|\DA |$ is a weak subsolution of \begin{subequations}\label{eq:abswAeq} \begin{alignat}{2} &\partial_t \mathcal{D} - \Delta \mathcal{D} - \frac{2(1-2p_A)\nabla p_A}{p_A(1-p_A)} \cdot \nabla\mathcal{D} +\frac{\mathcal{D}}{2\epsilon} = \frac{2|\nabla p_A | }{p_A(1-p_A)} \left|\nabla p_B\right| &&\quad\text{ in } \Omega\times(0,\infty)\,,\\ &\partial_\nu \mathcal{D} = 0 &&\quad\text{ on } \partial\Omega\times(0,\infty)\,. \end{alignat} \end{subequations} Now, for every $t_0 \geq 1$, we may construct a supersolution of \eqref{eq:abswAeq} in the domain $\Omega \times [t_0,\infty)$ as follows: $$ \overlineDA := 4\epsilon \sup_{t' \geq t_0} \left[\left\| \frac{\nabla p_A(\cdot,t')}{p_A(\cdot,t')(1-p_A(\cdot,t'))}\right\|_{C(\bar\Omega)} \left\| \nabla p_B(\cdot, t')\right\|_{C(\bar\Omega)}\right] +2 e^{-(t-t_0)/(2\epsilon)}. $$ Then, clearly, $\overlineDA \geq 2 \geq |\DA |$ for $x\in \Omega$ and $t = t_0$. 
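Moreover, $\overlineDA$ is indeed a supersolution of \eqref{eq:abswAeq} on $\Omega\times[t_0,\infty)$: since $\overlineDA$ is independent of $x$ and $\partial_t e^{-(t-t_0)/(2\epsilon)}=-\tfrac{1}{2\epsilon}\,e^{-(t-t_0)/(2\epsilon)}$, we compute, for $t\ge t_0$,
\begin{align*}
\partial_t \overlineDA - \Delta \overlineDA - \frac{2(1-2p_A)\nabla p_A}{p_A(1-p_A)} \cdot \nabla\overlineDA +\frac{\overlineDA}{2\epsilon}
&= 2 \sup_{t' \geq t_0} \left[\left\| \frac{\nabla p_A(\cdot,t')}{p_A(\cdot,t')(1-p_A(\cdot,t'))}\right\|_{C(\bar\Omega)} \left\| \nabla p_B(\cdot, t')\right\|_{C(\bar\Omega)}\right]\\
&\geq \frac{2|\nabla p_A | }{p_A(1-p_A)} \left|\nabla p_B\right|\,.
\end{align*}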
Hence, we can deduce by comparison that \begin{equation}\label{eq:est_wA} \sup_{x\in\Omega}|\DA(x,t)| \leq 4\epsilon \sup_{t' \geq t_0} \left[\left\| \frac{\nabla p_A(\cdot,t')}{p_A(\cdot,t')(1-p_A(\cdot,t'))}\right\|_{C(\bar\Omega)} \left\| \nabla p_B(\cdot, t')\right\|_{C(\bar\Omega)}\right] +2 e^{-(t-t_0)/(2\epsilon)} \end{equation} for $t \geq t_0$. By letting $t \to \infty$ and then $t_0 \to \infty$, we obtain \eqref{eq:cor7.4aa}. An analogous argument for $\DB $ yields \eqref{eq:cor7.4ab}. For assertion (a), we observe that if $p_B(\cdot,t)$ approaches $0$ or $1$ uniformly as $t \to \infty$, then Lemma \ref{lem:7.4a} informs us that $\|\nabla p_B(\cdot,t)\|_{C(\bar\Omega)} \to 0$ as $t \to \infty$. Hence, we obtain assertion (a) by \eqref{eq:cor7.4aa}. The proof of (b) is analogous and is omitted. Part (c) follows directly from \eqref{eq:cor7.4a} and \eqref{eq:lem7.4-2}. \end{proof} \begin{remark}\label{remark:lem7.5} {\rm From Remark \ref{remark:lem7.4} and \eqref{eq:epsilon_smaller}, we {conclude} that for each $M>0$, the estimates in \eqref{eq:cor7.4a}, Lemma \ref{lem:7.4}(c), and \eqref{eq:cor7.4b} hold for $C_0$ chosen uniformly for $\la \in (0,M]$ and for $\epsilon <(6\|\beta\|_{C(\bar\Omega)} + 4 C_0^2)^{-1}$}. \end{remark} \begin{lemma}\label{lem:7.4c} {\rm (a)} If $0<\la < \la_A$, there exists $\tilde\epsilon_a>0$ such that for $\epsilon \in (0,\tilde\epsilon_a]$, $$ \lim_{ t\to\infty} p_A(\cdot,t) = \left\{ \begin{array}{ll} 1 &\text{ if }\bar\alpha >0,\\ 0 &\text{ if }\bar\alpha<0 \end{array}\right. \quad \text{ in }C^1(\overline\Omega)\,. $$ \noindent {\rm (b)} If $0<\la < \la_B$, there exists $\tilde\epsilon_b>0$ such that for $\epsilon \in (0,\tilde\epsilon_b]$, $$ \lim_{ t\to\infty} p_B(\cdot,t) = \left\{ \begin{array}{ll} 1 &\text{ if }\bar\beta >0\,,\\ 0 &\text{ if }\bar\beta<0 \end{array}\right. \quad \text{ in }C^1(\overline\Omega). 
$$ \noindent {\rm (c)} If $0<\la < \max\{\la_A, \la_B\}$, there exists $\tilde\epsilon = \max\{\tilde\epsilon_a, \tilde\epsilon_b\}>0$ such that for $\epsilon \in (0,\tilde\epsilon]$, $$ \lim_{t \to \infty} D(\cdot,t) = 0 \;\text{ in } C(\bar\Omega)\,. $$ \end{lemma} \begin{proof} First, we prove (a) and suppose $\bar\alpha<0$. By Theorem \ref{thm:singlelocus}, $0$ is a linearly stable equilibrium of \begin{subequations}\label{eq:theta_time} \begin{alignat}{2} &\partial_t \theta - \Delta \theta = \la \alpha \theta(1-\theta) &&\quad\text{ in } \Omega\times(0,\infty)\,, \\ &\partial_\nu \theta =0 &&\quad\text{ on } \partial\Omega\times(0,\infty)\,, \end{alignat} \end{subequations} and it attracts all solutions of \eqref{eq:theta_time} that are not identically equal to $1$. Because $\alpha$ changes sign and $\bar\alpha<0$, for $\delta_1>0$ sufficiently small, $\alpha+\delta_1$ still changes sign and $\overline{\alpha+\delta_1}<0$. Moreover, $\la^{*}(\alpha+\delta_1)$, defined below \eqref{1.6}, decreases continuously as $\delta_1$ increases from $0$ [46, Proposition~1.5]. Because $\la<\la_A=\la^*(\alpha)$, we may choose $\delta_1$ sufficiently small such that $\la<\la^{*}(\alpha+\delta_1)$. Therefore, $0$ is globally asymptotically stable also for \begin{subequations}\label{eq:theta_time_upper} \begin{alignat}{2} &\partial_t \overline\theta - \Delta \overline\theta = \la (\alpha+ \delta_1)\overline\theta(1 -\overline\theta) &&\quad\text{ in } \Omega\times(0,\infty)\,, \\ &\partial_\nu \overline\theta =0 &&\quad\text{ on } \partial\Omega\times(0,\infty)\, . \end{alignat} \end{subequations} By Lemma \ref{lem:7.4}(c), let $\epsilon$ be sufficiently small so that for some $t_0>0$, $\left|\beta \DA \right| \leq \delta_1$ in $\Omega\times[t_0,\infty)$, and let $\overline\theta$ be a solution of \eqref{eq:theta_time_upper} with initial condition $\overline\theta(x,t_0) = p_A(x,t_0)$. 
Then $$ \partial_t p_A - \Delta p_A = \la \left[\alpha + \beta \DA \right] p_A(1-p_A) \leq \la(\alpha+ \delta_1) p_A(1-p_A) $$ on $\Omega\times[t_0,\infty)$. Since also $\partial_\nu p_A = 0$ on $\partial\Omega \times(0,\infty)$ and $p_A(x,t_0) = \overline\theta(x,t_0)$ in $\Omega$, we deduce by comparison that $$ 0 \leq p_A(x,t) \leq \overline\theta(x,t) \quad \text{ in } \Omega\times[t_0,\infty)\,. $$ Because $\left\|\overline\theta(\cdot,t)\right\|_{C(\bar\Omega)} \to 0$ as $t \to \infty$, we have $\left\|p_A(\cdot,t)\right\|_{C(\bar\Omega)} \to 0$ as $t \to \infty$. By parabolic regularity, we obtain $\|p_A(\cdot,t)\|_{C^1(\bar\Omega)} \to 0$. This proves (a) if $\bar\alpha<0$. The proofs of (a) for $\bar\alpha >0$ and of (b) are analogous and are omitted. By \eqref{constraints_twoloc}, statement (c) follows directly from (a) and (b). \end{proof} \begin{remark}\label{remark:lem7.5b} {\rm For every given $\delta>0$, the constant $\delta_1$ in the above proof can be chosen uniformly for $\la \in (0, \la_A - \delta]$. Hence by Lemma \ref{lem:7.4} and Remark \ref{remark:lem7.5} one can choose $\tilde\epsilon_a$ (resp., $\tilde\epsilon_b$) uniformly for $\la \in (0,\la_A - \delta]$ (resp. $\la \in (0,\la_B - \delta]$). } \end{remark} \begin{lemma}\label{lem:U1} Suppose $\la > \la_A$, and define \begin{equation}\label{def_L_ph} L_\ph = -\Delta - \la\alpha (1-2\ph)\,. \end{equation} Then there exists $\delta_1>0$ such that if $\ph \in C(\bar\Omega)$ satisfies $\|\ph - \theta_\alpha\|_{C(\bar\Omega)} < \delta_1$, then \begin{equation}\label{eq:spectrum} \sigma(L_\ph) \subset \{ z \in \mathbb{C}: \textup{Re}\, z > \delta_0\} \quad \text{ for some }\delta_0 >0. 
\end{equation} \end{lemma} \begin{proof} Because $\la > \la_A$, the positive equilibrium $\theta_\alpha$ is linearly stable in the single-locus problem, i.e., there exists $\delta_0>0$ such that the operator $L_{\theta_\alpha}$ satisfies $\sigma(L_{\theta_\alpha}) \subset \{z \in \mathbb{C}: \textup{Re}\, z \geq 2\delta_0\}$. The lemma thus follows from upper semicontinuity of the spectrum of $L_\ph$ with respect to the coefficient $\ph \in C(\bar\Omega)$. \end{proof} \begin{lemma}\label{lem:U2} Suppose $q_A$ is a solution of \begin{subequations} \begin{alignat}{2} &\partial_t q_A + L_\ph q_A = F(x,t) &&\quad\text{ in } \Omega\times(t_0,\infty)\,, \\ &\partial_\nu q_A = 0 &&\quad\text{ on } \partial\Omega \times (t_0,\infty) \,, \end{alignat} \end{subequations} where $L_\ph$ satisfies \eqref{eq:spectrum} and $F(x,t) \in C(\bar\Omega \times [t_0, \infty))$. Then there exists $C'>0$ (which depends on $L_\varphi$ but is independent of $F$) such that \begin{equation}\label{tilde_qA:lemU2} \limsup_{t \to \infty} \|q_A(\cdot,t)\|_{C^1(\bar\Omega)} \leq C'\limsup_{t\to\infty} \|F(\cdot,t)\|_{C(\bar\Omega)}\,. \end{equation} \end{lemma} \begin{proof} By the variation-of-constants formula, we have \begin{equation}\label{eq:voc} q_A(\cdot,t) = e^{-(t-t_0)L_\ph} q_A(\cdot,t_0) + \int_{t_0}^t e^{-(t-s) L_\ph} F(\cdot,s)\,ds \quad \text{ for } t > t_0\,, \end{equation} where $e^{-t L_\ph}$ is the semigroup generated by $L_\ph$ under homogeneous Neumann boundary conditions. Using \eqref{eq:spectrum}, it is a consequence of \cite[(2.3.3)]{Lunardi} that for every $\ga\in(0,1)$ and $p\ge1$ there is a constant $c>0$ such that \begin{equation* \|e^{-t L_\ph} w\|_{D_{L_\ph}(\ga,\infty)} \leq c t^{-\ga} e^{-\delta_0 t} \| w\|_{C(\bar\Omega)} \quad \text{for all }t > 0\,, \end{equation*} where $D_{L_\varphi}(\gamma,\infty)$ is the real interpolation space between $C(\bar\Omega)$ and the domain $D(L_\ph)=\bigcap_{p\ge1} W^{2,p}(\Omega)$. 
Because $D_{L_\varphi}(\gamma,\infty)\subseteq C^{1,2\gamma-1}(\bar\Omega)$ if $\gamma\in(\tfrac12,1)$ \cite[Theorem 3.1.30]{Lunardi}, we obtain \begin{equation}\label{estimate_etLph} \|e^{-t L_\ph} w\|_{C^{1}(\bar\Omega)} \leq c t^{-\ga} e^{-\delta_0 t} \| w\|_{C(\bar\Omega)} \quad \text{for all }t > 0\,. \end{equation} Applying \eqref{estimate_etLph} to \eqref{eq:voc}, we derive $$ \|q_A(\cdot,t)\|_{C^1(\bar\Omega)} \leq c(t-t_0)^{-\ga}e^{-\delta_0(t-t_0)}\| q_A(\cdot, t_0)\|_{C(\bar\Omega)} + \int_{t_0}^t c (t-s)^{-\ga} e^{-\delta_0(t-s)}\|F(\cdot,s)\|_{C(\bar\Omega)}\, ds $$ for $t > t_0 > 0$. Letting $t \to \infty$, we arrive at \eqref{tilde_qA:lemU2}. \end{proof} \begin{proposition}\label{prop:persistence} \noindent {\rm (a)} If $\la>\la_A$, then for every trajectory $(p_A, p_B, D)$ of \eqref{eq:ABD_add} with initial data $p_A(\cdot,0) \notin\{0, 1\}$, we have \begin{equation}\label{eq:prop2a} \limsup_{t \to\infty} \left\|p_A(\cdot,t) - \theta_\alpha \right\|_{C^1(\bar\Omega)} = O(\epsilon) \quad \text{ as }\epsilon \to 0. \end{equation} \noindent {\rm (b)} If $\la >\la_B$, then for every trajectory $(p_A, p_B, D)$ of \eqref{eq:ABD_add} with initial data $p_B(\cdot,0) \notin\{0, 1\}$, we have \begin{equation}\label{eq:prop2b} \limsup_{t \to\infty} \left\|p_B(\cdot,t) - \theta_\beta \right\|_{C^1(\bar\Omega)} = O(\epsilon)\quad \text{ as }\epsilon \to 0. \end{equation} \end{proposition} \begin{proof} To prove (a), assume $\la>\la_A$. We may choose a constant $\delta_2>0$ sufficiently small such that $$ \la > \la_{\alpha + \delta} \quad \text{ for all $\delta$ with } |\delta| \leq \delta_2\,, $$ where $\la_{\alpha + \delta}$ is defined in \eqref{lambda_h}. 
Then the single-locus equation \begin{subequations}\label{eq:theta_time_upper2a} \begin{alignat}{2} &\partial_t \theta - \Delta \theta = \la (\alpha+ \delta)\theta(1 -\theta) &&\quad\text{ in } \Omega\times(0,\infty)\,,\\ &\partial_\nu \theta =0 &&\quad\text{ on } \partial\Omega\times(0,\infty) \end{alignat} \end{subequations} has a unique globally asymptotically stable equilibrium $\theta_{\alpha+\delta}$. Let $C'$ be given by Lemma \ref{lem:U2}, where $L_\varphi = L_{\theta_\alpha}$, and let $\delta' = \frac{1}{2C' \|\la\alpha\|_{C(\bar\Omega)}}$. We claim that for each sufficiently small $\epsilon$ and any non-trivial initial condition, \begin{equation}\label{eq:est_c0} \limsup_{t \to \infty} \|p_A(\cdot,t) - \theta_\alpha\|_{C(\bar\Omega)} < \delta'. \end{equation} To prove \eqref{eq:est_c0}, let $\delta'>0$ be given as above. Since $\la >\la_A$, the steady state $\theta_{\alpha+\delta}$ depends continuously on $\delta \in [-\delta_2,\delta_2]$, and thus there exists $\eta \in (0,\delta_2)$ (depending on $\delta'$) such that \begin{equation}\label{eq:iftift} \|\theta_{\alpha - \eta} - \theta_{\alpha}\|_{C(\bar\Omega)} + \|\theta_{\alpha + \eta} - \theta_{\alpha}\|_{C(\bar\Omega)} < \delta'. \end{equation} Next, fix $\epsilon>0$ small enough so that $4\epsilon C_0^2\|\beta\|_{C(\bar\Omega)} < \eta$ and \eqref{eq:epsilon_smaller} are satisfied (where $C_0$ is as in Lemma \ref{lem:7.4a}). Then, by Lemma \ref{lem:7.4}, there exists $t_0>0$ such that $$ \left|\frac{\beta D}{p_A(1-p_A)}\right| = \left| \beta D_A\right| \leq \eta \quad\text{ in } \Omega\times[t_0,\infty)\,. $$ In this case, $p_A$ satisfies $$ \la (\alpha - \eta)p_A(1-p_A) \le \partial_t p_A - \Delta p_A \leq \la (\alpha + \eta)p_A(1-p_A) \quad\text{ in } \Omega\times[t_0,\infty) \,. 
$$ Hence, by comparison and by the fact that $\theta_{\alpha\pm \eta}$ is the globally asymptotically stable equilibrium of \eqref{eq:theta_time_upper2a} with $\delta = \pm \eta$, respectively, we deduce that $$ \theta_{\alpha - \eta}(x) \leq \liminf_{t \to \infty} p_A(x,t) \leq \limsup_{t \to \infty} p_A(x,t) \leq \theta_{\alpha+ \eta}(x). $$ Combining this with \eqref{eq:iftift}, we obtain \begin{align*} &\quad \limsup_{t \to \infty}\|p_A(\cdot,t)-\theta_\alpha\|_{C(\bar\Omega)} \\ &\qquad\leq \limsup_{t\to\infty} \left[\max_{x\in\bar\Omega} (p_A(x,t) - \theta_\alpha(x))_+ \right]+ \limsup_{t\to\infty} \left[\max_{x\in\bar\Omega} (\theta_\alpha (x)- p_A(x,t))_+ \right] \\ &\qquad\leq\| \theta_{\alpha + \eta} - \theta_\alpha\|_{C(\bar\Omega)} + \| \theta_{\alpha } - \theta_{\alpha - \eta}\|_{C(\bar\Omega)} < \delta'\,, \end{align*} which proves \eqref{eq:est_c0}. Next, let $q_A(x,t) = p_A(x,t) - \theta_\alpha(x)$ and $F(x,t)= -\la \alpha (q_A)^2 + \la \beta \DA p_A(1-p_A)$. Then \begin{equation} \partial_t q_A + L_{\theta_\alpha} q_A = F(x,t)\,, \end{equation} where $L_{\theta_\alpha}$ is defined according to \eqref{def_L_ph}. Since $\la >\la_A$, the equilibrium $\theta_\alpha$ is linearly stable and thus $\sigma(L_{\theta_\alpha}) \subset \{z \in \mathbb{C}: \textup{Re}\, z > \delta_0\}$ for some $\delta_0>0$. Because \eqref{eq:cor7.4b} entails $\limsup_{t\to\infty}\|F(\cdot,t)\|_{C(\bar\Omega)}= O(\epsilon) + \|\la \alpha\|_{C(\bar\Omega)} \limsup_{t\to\infty}\|q_A(\cdot,t)\|_{C(\bar\Omega)}^2$, we can invoke Lemma \ref{lem:U2} to deduce that for some constant $C'>0$ (the same as the one at the beginning of the proof), we have \begin{equation}\label{eq:implies} \limsup_{t \to \infty} \|q_A(\cdot,t)\|_{C^1(\bar\Omega)} \leq C'\left[O(\epsilon) + \|\la \alpha\|_{C(\bar\Omega)} \limsup_{t \to \infty} \|q_A(\cdot,t)\|^2_{C(\bar\Omega)} \right]. 
\end{equation} By our choice of $\delta'=\frac{1}{2C' \|\la \alpha\|_{C(\bar\Omega)}}$ and \eqref{eq:est_c0}, we have $$ C' \|\la \alpha\|_{C(\bar\Omega)} \limsup_{t \to \infty} \|q_A(\cdot,t)\|^2_{C(\bar\Omega)} \leq \frac{1}{2} \limsup_{t \to \infty} \|q_A(\cdot,t)\|_{C(\bar\Omega)}, $$ and \eqref{eq:implies} yields $$ \limsup_{t \to \infty} \|q_A(\cdot,t)\|_{C^1(\bar\Omega)} = O(\epsilon). $$ This proves (a) for $\la>\la_A$. The proof of (b) is analogous. \end{proof} We end this subsection with the proof of Theorem \ref{thm:7.1}(a). \begin{proof}[Proof of Theorem \ref{thm:7.1}(a)] Let $\la < \max\{\la_A,\la_B\}$. Without loss of generality, we assume $\la < \la_B$. Then by Lemma \ref{lem:7.4c}(b) and Lemma \ref{lem:7.4}(a) we have \begin{equation}\label{eq:LN2002} p_B(\cdot,t) \xrightarrow{t\to\infty} 0 \text{ or } 1 \;\text{ in } C^1(\bar\Om) \quad \text{and}\quad \DA \xrightarrow{t\to\infty} 0 \;\text{ in }C(\bar\Omega)\,, \end{equation} respectively. Hence, equation \eqref{eq:ABD_a} for $p_A$ is asymptotic to \eqref{eq:theta} with $h=\alpha$. Now, for \eqref{eq:theta} with $h = \alpha$, the equilibrium $\theta_\alpha$ is globally asymptotically stable (recall that $0<\theta_\alpha<1$ if $\la>\la_A$, and $\theta_\alpha \in\{0,1\}$ if $\la\le\la_A$). Any other equilibrium in $\{0,1\}$ is linearly unstable. For every given trajectory $\{p_A(\cdot,t)\}_{t \geq 0}$ of \eqref{eq:ABD_a}, the omega limit set $\omega_0$ is an internally chain-transitive set of the semiflow generated by the limiting equation \eqref{eq:theta} with $h=\alpha$. In particular, $\omega_0$ must be a singleton set containing one of the equilibria $\{0,\theta_\alpha,1\}$, i.e., $p_A(\cdot,t)$ converges to one of the equilibria as $t\to\infty$. To prove that $p_A(\cdot,t) \to \theta_\alpha$ in $C^1(\bar\Om)$, we consider the case $\bar{\alpha}>0$ first. If $0<\la<\la_A$, then $\theta_\alpha=1$ and $p_A(\cdot,t) \to 1$ follows from Lemma\,\ref{lem:7.4c}(a). 
If $\la>\la_A$, then $0<\theta_\alpha(x)<1$ on $\bar{\Omega}$ and Proposition\,\ref{prop:persistence}(a) excludes the possibility that $p_A(\cdot,t)\to 0$ or $1$ and thus leads to $p_A(\cdot,t) \to \theta_\alpha$. If $\la=\la_A$, then $\theta_\alpha=1$, and $0$ is linearly unstable as an equilibrium of \eqref{eq:theta} with $h=\alpha$. We rewrite the equation (\ref{eq:ABD_a}) for $p_A$ as \begin{subequations}\label{eq:LN2002-2} \begin{alignat}{2} &\partial_t p_A - \Delta p_A = \la\alpha(1-p_A)p_A + \la g(x,t) p_A &&\quad\text{ in } \Omega\times (0,\infty) \,,\\ &\partial_\nu p_A = 0 &&\quad\text{ on } \partial\Omega \times (0,\infty)\,,\\ &p_A(x,0) \geq 0 \,\text{ and }\, p_A(x,0)\not\equiv 0 &&\quad\text{ in } \bar\Omega\,, \end{alignat} \end{subequations} where, by \eqref{eq:LN2002}, \begin{equation} g(x,t) = \beta(x)\DA(1-p_A(x,t)) \to 0 \text{ in } C(\bar\Omega) \text{ as } t\to \infty\,. \end{equation} Thus, we may apply \cite[Lemma 2.5]{LN2002} to deduce that $p_A(\cdot,t)\to 1$ in $C^1(\bar\Om)$ as $t\to\infty$. For each fixed $\epsilon$, the convergence of $(p_A, p_B, D)$ as $t \to\infty$ can in fact be improved to $[C^2(\bar\Omega)]^3$, via parabolic regularity. This completes the proof of $p_A(\cdot,t) \to \theta_\alpha$ as $t\to\infty$. Finally, the proof for the case $\bar{\alpha}\le 0$ is similar and is omitted. \end{proof} \begin{remark}\rm Here is an alternative proof of Theorem \ref{thm:7.1}(a) without using the chain transitivity. As above, we consider the case $\bar{\alpha}>0$. If $0<\la\le\la_A$, we apply \cite[Lemma 2.5]{LN2002} to equation \eqref{eq:LN2002-2} and conclude that $p_A(\cdot,t)\to 1$ as $t\to\infty$. 
If $\la>\la_A$, we apply \cite[Lemma 2.5]{LN2002} to both $p_A$ and $(1-p_A)$ to obtain \begin{equation*} \liminf_{t\to\infty}p_A(x,t)\ge \theta_\alpha(x) \end{equation*} and \begin{equation*} \liminf_{t\to\infty}(1-p_A(x,t))\ge 1-\theta_\alpha(x)\,,\mbox{ \;i.e., \;} \limsup_{t\to\infty} p_A(x,t)\le \theta_\alpha(x)\,, \end{equation*} respectively. This implies $p_A(x,t)\to \theta_{\alpha}(x)$ pointwise as $t\to\infty$. By parabolic regularity and the Arzela-Ascoli Lemma, we infer that $p_A(x,t)\to \theta_{\alpha}(x)$ in $C^2(\bar\Omega)$, as in \cite[Theorem 2.1]{LN2002}. \end{remark} \begin{remark}\label{remark:thm7.1}\rm Based on Remarks \ref{remark:lem7.5}, \ref{remark:lem7.5b} and the proof of Theorem \ref{thm:7.1}(a), we observe that for every $\delta\in (0, \max\{\la_A,\la_B\})$ the $\epsilon_0$ in Theorem \ref{thm:7.1}(a) can be chosen independently of $\la\in (0,\max\{\la_A,\la_B\}-\delta]$. \end{remark} \subsection{Persistence results and existence of internal equilibrium} For the rest of this paper, we treat the case $\la > \max\{\la_A, \la_B\}$, so that the single-locus problems at loci $\A$ and $\B$ admit linearly stable clines $\theta_\alpha$ and $\theta_\beta$, respectively (Theorem \ref{thm:singlelocus}). First, we will use persistence theory (e.g.\ \cite{ST}) to establish the existence of an internal equilibrium of the two-locus problem. \begin{definition} Let $\Phi:\mathbf{Y}\times[0,\infty) \to \mathbf{Y}$ be a semiflow. \noindent {\rm (i)} $\Phi$ is point-dissipative if there exists $C>0$ independent of initial conditions $Q_0 \in \mathbf{Y}$ such that \begin{equation} \limsup_{t \to \infty} \left\| \Phi_t(Q_0)\right\|_{\mathbf{Y}} \leq C\,. \end{equation} \noindent {\rm (ii)} $\Phi$ is eventually bounded on $\mathbf{Y}$ if $\bigcup_{t \geq t_0} \Phi_t(\mathbf{Y})$ is bounded for some $t_0 \geq 0$. 
\noindent {\rm (iii)} $\Phi_t: \mathbf{Y} \to \mathbf{Y}$ is compact for given $t>0$ if $\Phi_t(B)$ is precompact for every bounded subset $B$ of $\mathbf{Y}$. \end{definition} \begin{proposition}\label{prop:2.2} The system \eqref{eq:ABD_add} generates a semiflow $\Phi$ on $\mathbf{Y}$, defined for initial data $Q_0\in \mathbf{Y}$ and $t \geq 0$ by $\Phi_t(Q_0)=(p_A(\cdot,t), p_B(\cdot,t), D(\cdot,t))$, where $(p_A,p_B,D)$ is the solution of \eqref{eq:ABD_add} with initial datum $Q_0$. Then $\Phi$ is {\rm (i)} point-dissipative, {\rm (ii)} eventually bounded on $\mathbf{Y}$, and {\rm (iii)} $\Phi_t: \mathbf{Y} \to \mathbf{Y}$ is compact for every $t >0$. \end{proposition} \begin{proof} Because the map $\mathcal{T}:\mathbf{Y}\to\mathbf{X}$ in \eqref{trafo_T} is a homeomorphism and $\mathbf{X}$ in \eqref{X} is forward invariant under the semiflow $\Psi$ generated by \eqref{dynamics_pi}, $\mathbf{Y}$ is forward invariant under $\Phi$. Therefore, $\Phi_t(Q_0)=(p_A,p_B,D)(\cdot,t)$ exists and remains in $\mathbf{Y}$ for all $t >0$. Since $\mathbf{Y}$ is a bounded set, $\Phi$ is point-dissipative and eventually bounded. To prove (iii), we rewrite the first two equations of \eqref{eq:ABD_add} as \begin{align*} \partial_t p_A - \Delta p_A = &F_A:= \la \alpha p_A(1- p_A) + \la \beta D\,, \\ \partial_t p_B - \Delta p_B = &F_B:= \la \beta p_B(1- p_B) + \la \alpha D \,, \end{align*} and apply semigroup and regularity theory. For every $t_0 \geq \ta>0$, there exists $C>0$ independent of $\epsilon$ and initial data such that $$ \|(p_A,p_B)\|_{W^{2,1,p}(\Omega \times [t_0, t_0+\ta])} \leq C(\|(F_A,F_B)\|_{L^p(\bar\Omega \times [t_0-\ta, t_0+\ta])}+\|(p_A, p_B)\|_{L^p(\bar\Omega\times[t_0-\ta,t_0+\ta])}) $$ \cite[Theorem 7.35]{lieberman}, and the constant $C$ depends on $\min\{t_0, 1\}$ because we can take $\ta = \min\{\tfrac12t_0, \tfrac12\}$. 
By Sobolev embedding, we deduce \begin{align}\label{eq:regular} &\sup_{t \in [t_0, t_0+\tau]} \|(p_A(\cdot,t), p_B(\cdot,t))\|_{C^{1+\gamma}(\bar\Omega)} \notag\\ &\qquad\leq C''\left(\|(p_A, p_B)\|_{C(\bar\Omega \times [t_0-\ta, t_0+\ta])} + \|D(\cdot,t)\|_{C(\bar\Omega \times [t_0-\ta, t_0+\ta])} \right) \leq \frac{5}{4}C''\,, \end{align} where the last inequality follows from \eqref{constraints_twoloc}. Similarly, for every $t_0 \geq \ta >0$ there is a constant $C_\epsilon$ independent of initial data such that $$ \sup_{t_0 \leq t \leq t_0 +\ta}\|D(\cdot,t)\|_{C^{1+\gamma}(\bar\Omega)} \leq C_\epsilon. $$ (Note that $C_\epsilon$ depends not only on $\min\{t_0,1\}$, as above, but also on $\epsilon$ because the coefficients in equation \eqref{eq:ABD_D} for $D$ depend on $\epsilon$.) Therefore, $\Phi_t$ is a bounded mapping from $\mathbf{Y}$ to $\mathbf{Y} \cap C^{1+\gamma}(\bar\Omega; [0,1]^2\times[-\tfrac14,\tfrac14])$ for every $t \geq t_0$, i.e., there is a constant $M_t$ such that $\|\Phi_t(Q_0)\|_{C^{1+\ga}(\bar\Om)}\le M_t$ for all $Q_0\in\mathbf{Y}$. By the compactness of the embedding $ C^{1+\gamma}(\bar\Omega; [0,1]^2\times[-\tfrac14,\tfrac14]) \hookrightarrow C(\bar\Omega; [0,1]^2\times[-\tfrac14,\tfrac14])$ and because $t_0$ can be arbitrarily small, we deduce that $\Phi_t: \mathbf{Y} \to \mathbf{Y}$ is compact for every $t >0$. \end{proof} \begin{corollary}\label{cor:attractor} The semiflow $\Phi$ has a compact attractor $\mathcal{C}$ of $\mathbf{Y}$, i.e., $\textup{dist}(\Phi_t(\mathbf{Y}),\mathcal{C}) \to 0$ as $t \to \infty$. \end{corollary} \begin{proof} By \cite[Theorem 2.30 and Remark 2.26(b)]{ST}, it is sufficient to verify that the semiflow $\Phi$ is (i) point-dissipative, (ii) eventually bounded on $\mathbf{Y}$, and (iii) $\Phi_t:\mathbf{Y} \to \mathbf{Y}$ is compact for some $t>0$. These have been shown in Proposition \ref{prop:2.2}. 
\end{proof} \begin{definition} {\rm (i)} Define the function $\kappa: \mathbf{Y} \rightarrow [0,\infty)$ by \begin{equation} \kappa(v_1,v_2,v_3):= \inf_{x \in \Omega} \left[\min\left\{ v_1(x), 1-v_1(x), v_2(x), 1-v_2(x) \right\}\right]\,. \end{equation} \noindent {\rm (ii)} We call the semiflow $\Phi$ uniformly $\kappa$-persistent, if there exists $\delta_0>0$ independent of initial condition $Q_0 \in \mathbf{Y} \setminus \mathbf{Y}_0$ such that $$ \liminf_{t \to \infty} \kappa(\Phi_t(Q_0)) = \liminf_{t \to \infty} \left[\inf_{x \in \Omega} \min\left\{p_A(x,t), 1-p_A(x,t), p_B(x,t), 1-p_B(x,t) \right\}\right] \geq \delta_0\,. $$ \end{definition} The function $\kappa$ is continuous and, by Lemma~\ref{flow_into_interior}, satisfies $\kappa(p_A(\cdot,t), p_B(\cdot,t), D(\cdot,t))>0$ for $t>0$ if either $$ \kappa(p_A(\cdot,0), p_B(\cdot,0), D(\cdot,0))>0 $$ or $$ \kappa(p_A(\cdot,0), p_B(\cdot,0), D(\cdot,0))=0 \; \text{ and } \; (p_A(\cdot,0), p_B(\cdot,0), D(\cdot,0)) \in \mathbf{Y}\setminus\mathbf{Y}_0\,. $$ In the following, we apply standard results from persistence theory to prove the existence of at least one internal equilibrium. Any such equilibrium will satisfy \eqref{eps_estimate_(pA,pB,D)}. \begin{corollary}\label{cor:existence} Suppose $\la > \max\{\la_A, \la_B\}$. Then for every sufficiently small $\epsilon>0$, the system \eqref{eq:ABD_add} has an internal equilibrium, i.e., there exists $(\hat{p}_A, \hat{p}_B, \hat{D}) = (\hat{p}_A^{(\ep)}, \hat{p}_B^{(\ep)}, \hat{D}^{(\ep)})$ in the interior of $\mathbf{Y}$, such that $\kappa(\hat{p}_A, \hat{p}_B, \hat{D}) >0$ and $\Phi_t(\hat{p}_A, \hat{p}_B, \hat{D}) = (\hat{p}_A, \hat{p}_B, \hat{D})$ for all $t \geq 0$. Moreover, \begin{equation}\label{eq:corexistence} \left\| (\hat{p}_A, \hat{p}_B) - (\theta_\alpha, \theta_\beta) \right\|_{C^1(\bar\Omega)} + \| \hat{D} \|_{C(\bar\Omega)} = O(\epsilon) \end{equation} as $\ep\to0$. 
\end{corollary} \begin{proof} We recall that the semiflow $\Phi$ on $\mathbf{Y}$ is equivalent to the semiflow $\Psi$ on $\mathbf{X}$ via the relation $\Phi_t = \mathcal{T}^{-1} \circ \Psi_t \circ \mathcal{T}$, where $\mathbf{X}$ is given in \eqref{X}, and $\mathcal{T}(p_A, p_B,D) = (p_1,p_2,p_3,p_4)$ is given in \eqref{trafo_T}. If we define $\kappa': \mathbf{X} \rightarrow [0,\infty)$ by $$ \kappa'(u_1,u_2,u_3,u_4)= \inf_{x \in \Omega} \left[ \min\left\{ u_1+u_2, 1- u_1-u_2, u_1+u_3, 1- u_1-u_3 \right\} \right]\,, $$ then $\kappa' = \kappa\circ \mathcal{T}^{-1}$. For every fixed, sufficiently small $\epsilon$, we observe that (i) the semiflow $\Psi$ is uniformly $\kappa'$-persistent (because $\Phi$ is uniformly $\kappa$-persistent by Proposition \ref{prop:persistence}); (ii) $\Psi_t: \mathbf{X} \to \mathbf{X}$ is compact, hence condensing, for every $t >0 $ (because $\Phi_t: \mathbf{Y} \to \mathbf{Y}$ is compact for every $t>0$ by Proposition \ref{prop:2.2}, and $\mathcal{T}:\mathbf{Y} \to \mathbf{X}$ is a homeomorphism); and (iii) $\Psi$ has a compact attractor in $\mathbf{X}$ (because $\Phi$ has a compact attractor in $\mathbf{Y}$ by Corollary \ref{cor:attractor}), which shows that $\Psi$ has a compact attractor of neighborhoods of compact sets. Observe in addition that \begin{itemize} \item $\mathbf{X}$ is a closed convex subset of the Banach space $C(\bar\Omega; \mathbb{R}^4)$. \item $\kappa': \mathbf{X} \to \mathbb{R}_+$ is continuous and concave, where concave means $$ \kappa'(\mu Q_1 + (1-\mu) Q_2) \geq \mu \kappa'(Q_1) + (1-\mu) \kappa'(Q_2) $$ for all $\mu \in [0,1]$ and $Q_1, Q_2 \in \mathbf{X}$. \end{itemize} Therefore, the existence of an equilibrium $(\hat{p}_1,\hat{p}_2, \hat{p}_3,\hat{p}_4)$ satisfying $\kappa'(\hat{p}_1,\hat{p}_2, \hat{p}_3,\hat{p}_4) >0$ follows from \cite[Theorem 6.2]{ST}. 
Hence, $(\hat{p}_A,\hat{p}_B,\hat{D}) := \mathcal{T}^{-1} (\hat{p}_1,\hat{p}_2, \hat{p}_3,\hat{p}_4)$ is an equilibrium of the semiflow $\Phi$ associated with \eqref{eq:ABD_add}. Because $$ \kappa(\hat{p}_A,\hat{p}_B,\hat{D}) = \kappa'(\hat{p}_1,\hat{p}_2, \hat{p}_3,\hat{p}_4) >0\,, $$ $(\hat{p}_A,\hat{p}_B,\hat{D})$ is an internal equilibrium of \eqref{eq:ABD_add}. Finally, \eqref{eq:corexistence} follows from \eqref{eq:prop2a}, \eqref{eq:prop2b}, and \eqref{eq:cor7.4b}. \end{proof} \subsection{Global asymptotic stability} Let $\la > \max\{\la_A, \la_B\}$ and let $(\hat{p}_A, \hat{p}_B, \hat{D})$ be an internal equilibrium given by Corollary \ref{cor:existence}. We will show that it attracts all trajectories starting in $\mathbf{Y} \setminus \mathbf{Y}_0$. This in particular implies the uniqueness of the internal equilibrium. Part (b) of Theorem \ref{thm:7.1} is an immediate consequence of Corollary~\ref{cor:existence} and the following proposition. \begin{proposition}\label{prop:final} Let $\la > \max\{\la_A, \la_B\}$. For every sufficiently small $\epsilon>0$, the internal equilibrium $(\hat{p}_A, \hat{p}_B, \hat{D})$ attracts all trajectories starting in $\mathbf{Y} \setminus \mathbf{Y}_0$, where convergence occurs in $[C^2(\bar\Om)]^3$. In particular, $(\hat{p}_A, \hat{p}_B, \hat{D})$ is the unique internal equilibrium of \eqref{eq:ABD_add}. \end{proposition} To prepare for the proof of Proposition \ref{prop:final}, we define $$ (\tilde{p}_A(x,t), \tilde{p}_B(x,t), \tilde{D}(x,t)):= (p_A(x,t) - \hat{p}_A(x), p_B(x,t) - \hat{p}_B(x), D(x,t) - \hat{D}(x)). 
$$ If $\la > \max\{\la_A, \la_B\}$, then by Proposition~\ref{prop:persistence}, Remark~\ref{rem:D_to_O(eps)}, and Corollary~\ref{cor:existence}, there exist $C_1>0$ and $\ep_1>0$ such that \begin{equation}\label{eq:ultimate} \limsup_{t \to \infty}\left[ \left\|(\tilde{p}_A, \tilde{p}_B)(\cdot,t) \right\|_{C^1(\bar\Omega)} + \|\tilde{D}(\cdot,t) \|_{C(\bar\Omega)} \right]\leq C_1 \epsilon \end{equation} for every $\ep\in(0,\ep_1]$. Furthermore, observe that $(\tilde{p}_A(x,t), \tilde{p}_B(x,t), \tilde{D}(x,t))$ satisfies \begin{subequations}\label{eq:subtractp} \begin{alignat}{3} &\partial_t \tilde{p}_A - \Delta \tilde{p}_A - \la \alpha(x) (1- 2\hat{p}_A(x))\tilde{p}_A = -\la\alpha(x)(\tilde{p}_A)^2+ \la \beta(x) \tilde{D} &&\quad\text{ in } \Omega\times(0,\infty)\,, \\ &\partial_t \tilde{p}_B - \Delta \tilde{p}_B - \la \beta(x) (1- 2\hat{p}_B(x))\tilde{p}_B = -\la \beta(x)(\tilde{p}_B)^2+ \la\alpha(x) \tilde{D} &&\quad\text{ in } \Omega\times(0,\infty)\,, \\ &\partial_\nu \tilde{p}_A = \partial_\nu \tilde{p}_B = 0 &&\quad\text{ on } \partial\Omega\times(0,\infty)\,, \end{alignat} \end{subequations} and \begin{subequations} \begin{alignat}{2} &\partial_t \tilde{D} - \Delta \tilde{D} -\la \left[ \alpha (1-2p_A) + \beta (1-2p_B)\right]\tilde{D} + \frac{1}{\epsilon}\tilde{D} \notag \\ &\qquad = 2\nabla p_B \cdot \nabla \tilde{p}_A + 2 \nabla \hat{p}_A \cdot \nabla \tilde{p}_B - 2\la \alpha \hat{D} \tilde{p}_A - 2\la \beta \hat{D} \tilde{p}_B &&\quad\text{in } \Omega\times(0,\infty)\,, \label{PDE_tilde D} \\ &\partial_\nu \tilde{D} = 0 &&\quad\text{on } \partial\Omega\times(0,\infty)\,. \end{alignat} \end{subequations} \begin{lemma} Let $\la > \max\{\la_A, \la_B\}$. 
Then there exists $C_2>0$ such that \begin{equation}\label{eq:step3b} \limsup_{t \to \infty} \|\tilde{D}(\cdot,t)\|_{C(\bar\Omega)} \leq \epsilon C_2 \limsup_{t \to \infty} \|(\tilde{p}_A(\cdot,t), \tilde{p}_B(\cdot,t))\|_{C^1(\bar\Omega)} \end{equation} for every $\ep \le \min\{\ep_1,\ep_2\}$, where $\epsilon_1$ is associated with \eqref{eq:ultimate} and $\epsilon_2$ is chosen such that \begin{equation} \frac{1}{2\epsilon_2} = \la\sup_{x \in \Omega} \left( |\alpha(x)| + |\beta(x)|\right) \geq \la \sup_{x \in \Omega} |\alpha(x)(1-2p_A(x)) + \beta(x)(1-2p_B(x))| \,. \label{1/eps_bound2} \end{equation} \end{lemma} \begin{proof} To prove \eqref{eq:step3b}, we define $$ \tilde{D}^*(t)=\max\{\sup_{x \in \Omega} \tilde{D}(x,t),0\} \;\text{ and }\; \tilde{p}^*(t) = \|(\tilde{p}_A(\cdot,t), \tilde{p}_B(\cdot,t))\|_{C^1(\bar\Omega)} \,. $$ We choose, by \eqref{eq:lem7.4-2}, a constant $C_2>0$ such that the right-hand side of \eqref{PDE_tilde D} is bounded from above by $\frac12C_2\tilde{p}^*(t)$ for $t \geq 1$. We claim that $\tilde{D}^*$ satisfies the following differential inequality (in the weak sense): \begin{equation}\label{eq:weakdiffeq} \frac{d}{dt}\tilde{D}^*(t) + \frac{1}{2\epsilon} \tilde{D}^*(t) \leq \frac{C_2}{2} \tilde{p}^*(t) \quad \text{ for }t \in (1,\infty). \end{equation} First, we observe that $\tilde{D}^*(t)$ is Lipschitz in $[1,\infty)$. For fixed $M>0$ and $t_1, t_2 \in [1,M]$, we assume without loss of generality that $\tilde{D}^*(t_1) \leq \tilde{D}^*(t_2)$, and let $\sup_{x \in \Omega} \tilde{D}(x,t_i) = \tilde{D}(x_i,t_i)$ for some $x_i \in \bar\Omega$ ($i=1,2$). 
Then \begin{align*} |\tilde{D}^*(t_2) - \tilde{D}^*(t_1)| &= \max\{\tilde{D}(x_2,t_2),0\} - \max\{\tilde{D}(x_1,t_1),0\} \\ &\leq \tilde{D}(x_2,t_2) - \tilde{D}(x_1,t_1) \leq \tilde{D}(x_2,t_2) - \tilde{D}(x_2, t_1) \end{align*} and thus $[\tilde{D}^*]_{\rm{Lip}([1,M])} \leq \|\partial_t\tilde{D}\|_{C(\bar\Omega \times [1,M])}$, where the latter is finite because $\partial_t\tilde{D}$ is H\"older continuous by parabolic Schauder estimates. It remains to show that $\tilde{D}^*$ satisfies \eqref{eq:weakdiffeq} whenever it is differentiable. To this end, suppose $\frac{d}{dt}\tilde{D}^*(t_0)$ exists for some $t_0>0$. There are two cases: Case (a) $\sup_{x \in \Omega} \tilde{D}(x,t_0) <0$; Case (b) $\tilde{D}^*(t_0) = \tilde{D}(x_0,t_0) \geq 0$ for some $x_0 \in \bar\Omega$. In Case (a), $\tilde{D}^*(t) = 0$ in a neighborhood of $t_0$ and \eqref{eq:weakdiffeq} holds trivially. For Case (b), we claim that $\Delta \tilde{D}(x_0,t_0) \leq 0$. Suppose not; then $\Delta \tilde{D}(x_0,t_0) >0$ and $x_0$ cannot be an interior maximum point. Thus, $x_0 \in \partial\Omega$ and there exists $\delta'>0$ such that $$ \tilde{D}(x,t_0) < \tilde{D}(x_0,t_0) \,\,\text{ and }\quad \Delta \tilde{D}(x,t_0) >0\quad \text{in }B_{\delta'}(x_0) \cap \bar\Omega. $$ But then the Hopf lemma yields $\partial_\nu \tilde{D}(x_0,t_0) >0$, which contradicts the Neumann boundary condition imposed on $\tilde{D}$ on $\partial\Omega \times (0,\infty)$. Thus, $\Delta \tilde{D}(x_0,t_0) \leq 0$. With this, we may evaluate \eqref{PDE_tilde D} at $(x_0,t_0)$ to obtain (here the choice $\epsilon \le \epsilon_2$ is needed) $$ \frac{\partial}{\partial t} \tilde{D}(x_0,t_0)+ \frac{1}{2\epsilon} \tilde{D} (x_0,t_0)\leq \frac{C_2}{2} \tilde{p}^*(t_0). 
$$ Since $\tilde{D}^*$ is differentiable at $t_0$ and $\tilde{D}^*(t_0)=\tilde{D}(x_0,t_0)\geq 0$, we must have $\frac{d}{dt}\tilde{D}^*(t_0) = \frac{\partial}{\partial t} \tilde{D}(x_0,t_0)$, hence we deduce \eqref{eq:weakdiffeq} at $t= t_0$. Since $\tilde{D}^* \in C([0,\infty)) \cap {\rm Lip}\,([1,\infty))$ (and thus absolutely continuous in $[1,\infty)$), and satisfies \eqref{eq:weakdiffeq} at all points where it is differentiable, it satisfies \eqref{eq:weakdiffeq} in the weak sense. From \eqref{eq:weakdiffeq} we deduce $$ \tilde{D}^*(t) \leq \tilde{D}^*(1) e^{-\frac{(t-1)}{2\epsilon}} + \frac{C_2}{2} \int_1^t e^{\frac{-(t-s)}{2\epsilon}} \tilde{p}^*(s)\,ds \, \quad \text{ for }t\geq 1. $$ This implies $$ \limsup_{t \to \infty} \left[\sup_{x \in \Omega} \tilde{D}(x,t)\right] \leq \limsup_{t \to \infty} \tilde{D}^*(t) \leq C_2\epsilon \limsup_{t \to \infty} \tilde{p}^*(t). $$ Similarly, we obtain $$ \liminf_{t \to \infty} \left[\inf_{x \in \Omega} \tilde{D}(x,t)\right] \geq -C_2\epsilon \limsup_{t \to \infty} \tilde{p}^*(t)\,, $$ which proves \eqref{eq:step3b}. \end{proof} We are now in a position to prove the main result of this section. \begin{proof}[Proof of Proposition \ref{prop:final}] We claim that \begin{equation}\label{eq:step3a} \limsup_{t \to \infty} \|(\tilde{p}_A,\tilde{p}_B)(\cdot,t)\|_{C^1(\bar\Omega)} = 0. \end{equation} To this end, let $L_{\hat p_A}$ and $L_{\hat p_B}$ be defined according to \eqref{def_L_ph}. By \eqref{eq:corexistence}, we can apply Lemma \ref{lem:U1} and obtain $$ \sigma(L_{\hat p_A}) \subset \{z \in \mathbb{C}: \textup{Re}\, z > \delta_0\} \; \text{ and }\, \sigma(L_{\hat p_B}) \subset \{z \in \mathbb{C}: \textup{Re}\, z > \delta_0\} $$ for some $\delta_0>0$. 
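(The factor $\epsilon$ in \eqref{eq:step3b} is, at bottom, the total mass of the kernel: $\int_1^t e^{-(t-s)/(2\epsilon)}\,ds = 2\epsilon\bigl(1-e^{-(t-1)/(2\epsilon)}\bigr) \le 2\epsilon$. The sketch below checks this mass numerically; the values of $\epsilon$, the horizon $T$, and the grid are illustrative only.)

```python
# Illustrative check: the kernel e^{-(t-s)/(2*eps)} behind \eqref{eq:step3b}
# has total mass at most 2*eps, which is the source of the factor epsilon there.
# The values of eps, the horizon T, and the grid size are hypothetical.
import math

eps, T, n = 0.01, 5.0, 200_000
h = (T - 1.0) / n
# midpoint rule for the integral of e^{-(T-s)/(2 eps)} over s in [1, T]
mass = sum(math.exp(-(T - (1.0 + (i + 0.5) * h)) / (2.0 * eps))
           for i in range(n)) * h

exact = 2.0 * eps * (1.0 - math.exp(-(T - 1.0) / (2.0 * eps)))
assert abs(mass - exact) < 1e-6
assert mass <= 2.0 * eps + 1e-9
```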
Hence, we can apply Lemma \ref{lem:U2} to \eqref{eq:subtractp} to deduce $$ \limsup_{t \to \infty} \|(\tilde{p}_A, \tilde{p}_B)(\cdot,t)\|_{C^1(\bar\Omega)} \leq C_3 \left[ \left(\limsup_{t \to \infty} \|(\tilde{p}_A,\tilde{p}_B)(\cdot,t)\|_{C^1(\bar\Omega)}\right)^2 + \limsup_{t \to \infty} \|\tilde{D}(\cdot,t)\|_{C(\bar\Omega)} \right] \,. $$ Now, by \eqref{eq:ultimate} and \eqref{eq:step3b} there exists a constant $C_4$ independent of $\epsilon$ such that $$ \limsup_{t \to \infty} \|(\tilde{p}_A, \tilde{p}_B)(\cdot,t)\|_{C^1(\bar\Omega)} \leq C_4 \epsilon \left[ \limsup_{t \to \infty} \|(\tilde{p}_A,\tilde{p}_B)(\cdot,t)\|_{C^1(\bar\Omega)}\right]\,. $$ This proves \eqref{eq:step3a} provided $\epsilon < \min\{\ep_1,\ep_2, 1/C_4\}$. Finally, the estimates \eqref{eq:step3a} and \eqref{eq:step3b} imply $$ \limsup_{t \to \infty} \|(\tilde{p}_A, \tilde{p}_B)(\cdot,t)\|_{C^1(\bar\Omega)} = \limsup_{t \to \infty} \|\tilde{D}(\cdot,t)\|_{C(\bar\Omega)} = 0\,, $$ i.e., $(p_A(\cdot,t), p_B(\cdot,t), D(\cdot,t)) \to (\hat{p}_A, \hat{p}_B, \hat{D})$ in $C^1(\bar\Omega) \times C^1(\bar\Omega) \times C(\bar\Omega)$ as $t\to\infty$. In particular, $ (\hat{p}_A, \hat{p}_B, \hat{D})$ is the unique internal equilibrium of \eqref{eq:ABD_add}. As before, for each fixed $\epsilon>0$, we may apply parabolic regularity theory to strengthen the above convergence to $[C^2(\bar\Omega)]^3$. This completes the proof. \end{proof} \section{Discussion}\label{sec:Disc} The aim of this work was to establish conditions for the existence, uniqueness, and stability of two-locus clines. This has been achieved for two limiting cases: weak recombination ($\rh\ll1$, Theorem \ref{thm:weak_reco}) and strong recombination ($\rh\gg1$, Theorem \ref{thm:7.1}). In the latter case, even global asymptotic stability could be proved, whereas in the former case only existence and linear stability were established. 
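To make the central object concrete, a single-locus cline can be computed numerically. The sketch below integrates $\partial_t\theta = \Delta\theta + \la h(x)\theta(1-\theta)$ with homogeneous Neumann conditions on $(0,1)$ by explicit finite differences; the choices $h(x)=x-\tfrac12$ (sign-changing, zero average) and $\la=100$, as well as the grid and time horizon, are illustrative and not taken from the analysis above.

```python
# Illustrative only: explicit finite-difference solution of the single-locus
# cline equation  theta_t = theta_xx + lam * h(x) * theta * (1 - theta)
# on (0,1) with homogeneous Neumann boundary conditions.  The coefficient
# h(x) = x - 1/2 and lam = 100 are hypothetical parameter choices.
N, lam = 41, 100.0
dx = 1.0 / (N - 1)
dt = 2.5e-4                       # respects the stability bound dt <= dx^2 / 2
h = [i * dx - 0.5 for i in range(N)]
theta = [0.5] * N                 # uniform, fully polymorphic initial datum

for _ in range(20_000):           # integrate to t = 5, enough to equilibrate
    lap = [0.0] * N
    for i in range(1, N - 1):
        lap[i] = (theta[i - 1] - 2.0 * theta[i] + theta[i + 1]) / dx ** 2
    lap[0] = 2.0 * (theta[1] - theta[0]) / dx ** 2     # Neumann via ghost point
    lap[-1] = 2.0 * (theta[-2] - theta[-1]) / dx ** 2
    theta = [u + dt * (l + lam * a * u * (1.0 - u))
             for u, l, a in zip(theta, lap, h)]

# a genuine cline: strictly polymorphic, monotone, spanning a wide range
assert all(0.0 < u < 1.0 for u in theta)
assert all(theta[i + 1] >= theta[i] - 1e-9 for i in range(N - 1))
assert theta[-1] - theta[0] > 0.5
```

In this sketch the allele favored on the right fixes nowhere: the computed profile stays strictly between $0$ and $1$ and increases monotonically across the sign change of $h$, which is the qualitative picture behind $\theta_\alpha$ and $\theta_\beta$.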
For general strength of recombination, the problem remains largely unresolved, and the equilibrium structure and dynamics are likely more complex. We conjecture that for intermediate recombination rates and if the strength of selection relative to diffusion is in a certain range, an internal equilibrium, i.e., a two-locus cline, can be simultaneously stable with a boundary equilibrium. For a related ODE model, in which there is unidirectional migration from one deme into another deme, this was proved in \cite{BuergerAkerman2011}. Numerical solution of the system \eqref{eq:ABD_add} supports this conjecture (RB, unpublished). A global convergence result that applies to arbitrary recombination is Theorem \ref{thm:M1_globallystable}. It shows that for every fixed $r\ge 0$ and $s>0$, there exists $d_0=d_0(r,s)\gg1$ such that the monomorphic equilibrium with the highest spatially averaged fitness is globally asymptotically stable if $d>d_0$. We conjecture that for given $s>0$, $d_0$ can even be chosen independently of $r\ge0$; in other words, there exists $\la_0\ll 1$ such that this monomorphic equilibrium is globally asymptotically stable for \eqref{dynamics_pi} if $\la<\la_0$ (Remark \ref{rem:4.7}). A limiting case, for which we also conjecture the existence of a globally asymptotically stable two-locus cline, is that of weak migration relative to selection and recombination ($d\ll1$). However, this limit is degenerate. For a single locus, profiles of the clines were derived in this limit under various assumptions about dominance in \cite{LN2004} and \cite{NNS2010}. There are other cases that should be amenable to a rigorous analysis. For a finite number of demes, several limiting cases were studied rigorously in \cite{RB2009}. In such discrete-space models, selection and recombination in each deme are described by difference equations (if generations are discrete) or ODEs (if generations are overlapping), and migration between demes is modeled by an ergodic matrix. 
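As a minimal illustration of the last ingredient, the following sketch builds a hypothetical backward migration matrix for two demes (row-stochastic with positive entries, hence ergodic) and computes its stationary distribution, the left Perron eigenvector, by power iteration; the matrix entries are invented purely for illustration.

```python
# Hypothetical backward migration matrix for two demes (rows sum to 1, all
# entries positive, hence ergodic); its stationary distribution is the left
# Perron eigenvector, obtained here by simple power iteration.
M = [[0.9, 0.1],
     [0.2, 0.8]]

pi = [0.5, 0.5]
for _ in range(1000):            # power iteration: pi <- pi M
    pi = [sum(pi[j] * M[j][i] for j in range(2)) for i in range(2)]

# stationarity pi M = pi forces pi[0] * 0.1 = pi[1] * 0.2, so pi = (2/3, 1/3)
assert abs(sum(pi) - 1.0) < 1e-9
assert abs(pi[0] - 2.0 / 3.0) < 1e-9 and abs(pi[1] - 1.0 / 3.0) < 1e-9
```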
In \cite{RB2009}, global convergence results were proved for weak migration and for strong migration, subject to additional assumptions, including scaling assumptions, which guaranteed that the set of chain-recurrent points of an appropriate limiting system consists of hyperbolic equilibria only. There, an arbitrary number of multiallelic loci was admitted, as well as selection schemes with dominance and epistasis. Despite these additional complications (which enable multiple stable equilibria), the proofs (if restricted to two diallelic loci) are simpler than here, and also different, because they rely on methods and results developed in \cite{NHB1999} and invoke perturbation theory of compact normally hyperbolic manifolds and of chain-recurrent sets for dynamical systems on compact state spaces. The case of strong recombination is briefly outlined in \cite[Section 7.9]{RB2014}. For the special case of two diallelic loci and additive fitnesses as in \eqref{s1234} and \eqref{S_i}, the result given there reduces to an analogue of the present Theorem~\ref{thm:7.1}. It is of considerable biological interest to study how the shape of a cline depends on the underlying parameters. In the present context, population genetic intuition suggests that the two-locus cline becomes steeper with stronger linkage, i.e., smaller $r$ (hence $\rh$), provided the functions $\alpha$ and $\be$ have the same sign. The reason is that positive linkage disequilibrium (covariance) between the loci will be generated in this case, so that a kind of mutual reinforcement emerges. Support for this conjecture comes from numerical results and formal calculations \cite{Barton1983,Barton1986,Slatkin1975}, as well as from related ODE models \cite{AkermanBuerger2014,BuergerAkerman2011,GeroldingerBuerger2015}. 
For a step environment on the real line, i.e., if each of $\alpha(x)$ and $\be(x)$ assumes only two values and changes sign at the same location, the slope of each of the allele-frequency clines ($p_A$, $p_B$) at the step was shown to increase with decreasing $\rh$ provided $\rh$ was sufficiently large \cite{RB2017}. This was done by deriving an explicit first-order approximation of the two-locus cline. It would be of interest to show similar results for the allele-frequency clines of the present model, possibly following \cite{LL2011} and using $||\nabla p_A||_{L^2(\Om)}$ as a measure of the steepness. Throughout the present paper, we assumed an open bounded domain. It would be desirable and challenging to develop an analogous theory for unbounded domains. For one locus with two alleles, various results on the existence, uniqueness and stability of clines were derived in \cite{Conley1975} and \cite{Fife&Peletier1977}. In particular, Conley \cite{Conley1975} showed that a cline exists if the function describing the influence of environmental variation, say $h(x)$, is not integrable near $\pm\infty$ and $\operatorname{sgn} h(x) = \operatorname{sgn} x$. Therefore, in contrast to a bounded domain, a cline exists independently of the strength of diffusion relative to selection (see also \cite{RB2017} for the two-locus model with a step environment). For the two-locus case, one may conjecture that a two-locus cline exists if both $\alpha(x)$ and $\be(x)$ satisfy these conditions on $h(x)$. Another general assumption was that the functions $\alpha(x)$ and $\be(x)$ change sign in $\Om$, i.e., (A). For a single locus, it is well known that in the absence of a sign change, one of the trivial equilibria is globally asymptotically stable (e.g., \cite{LN2002,LNN2013}). Assume that $\be(x)$ does not change sign, but $\alpha(x)$ does. Then the results in Section \ref{sec:ev-problems} imply that $\la_B=\la_\be=\infty$.
Therefore, we can follow the proof of Theorem \ref{thm:7.1}(a) to show global convergence to a boundary equilibrium for every $\la>0$. In Theorem \ref{thm:7.1}, the degenerate case $\la=\max\{\la_A,\la_B\}$ was excluded. Assuming $\la=\la_A > \la_B$, our results in Section \ref{sec:strong_reco} show that $\theta_\alpha=0$ (or $\theta_\alpha=1$) and $0<\theta_\be<1$. Straightforward linearization is insufficient to determine whether the perturbation of the equilibrium $(\theta_\alpha,\theta_\be,0)$ is in the state space or not. We expect that a sufficient condition for the existence of an internal equilibrium for large $\rh$ is that $\alpha(x)$ and $\be(x)$ have the same sign. {\bf Acknowledgments.} The authors gratefully acknowledge helpful discussions with Profs.\ Josef Hofbauer and Yuan Lou, inspiring communication with Prof.\ Thomas Nagylaki, and useful comments by T.\ Nagylaki and an anonymous reviewer. LS and RB were supported by the Austrian Science Fund (FWF) through Grant P25188-N25. LS was also supported by the National Natural Science Foundation of China (NSFC), Grant 11501283.
\section{Introduction} Stroke (i.e., cerebrovascular disease) is the fourth leading cause of death in the general population and {third} --- behind heart disease and cancer --- among those aged 85 and older, with rates increasing exponentially with age \citep{deaths}. Geographic trends in stroke mortality have been studied as far back as the 1960s, when \citet{strokebelt} identified a region of the southeastern United States (US) stretching from Mississippi to North Carolina which had the highest rates of stroke mortality --- a region which would become known as the ``stroke belt.'' Later work by \citet{shiftingbelt} noticed an apparent shift in the stroke belt, observing that regions of the Mississippi River Valley appeared in the highest decile of mortality rates in the early 1990s where they had previously not. More recently, \citet{linda} have studied geographic trends in stroke hospitalizations from 1995 to 2006, noting that, among Medicare beneficiaries ages 65 and older, this shift in the stroke belt has persisted, stretching further into parts of Texas and Oklahoma. In addition to changing geographic patterns, numerous studies have observed the overall declines in stroke mortality \citep[e.g.,][]{regards:declines,gillum:2011:stroke}. While many of these studies have age-adjusted their data, accounting for the variation in age distributions among counties and disparities in stroke mortality across age ranges, this precludes inference within individual age groups. {On the other hand, this aggregation and data standardization step also helps mitigate the issue of small population sizes and low number of stroke deaths in many US counties, an issue that can lead to unreliable rate estimates and is only exacerbated when the data are stratified by a factor such as age group. 
In this work, however, our goal is to investigate spatiotemporal trends in stroke mortality by jointly modeling data from three age-based subpopulations, permitting more reliable inference at the county level for each age group while preserving the ability to compute age-adjusted rates.} More specifically, we look to build a complex multivariate space-time model which borrows strength across space, time, and age group. {We also propose a new tool for measuring declines in mortality which accounts for temporal changes in mortality rates.} To achieve this, a natural starting point is the family of models based on the conditionally autoregressive (CAR) model proposed by \citet{besag}. Since its extension to the fully Bayesian setting in \citet{bym}, CAR models have sparked a wealth of research in the disease mapping context for both spatial \citep[e.g.,][]{besag:poisson,besag:higdon} and spatiotemporal applications. While these early examples were all based on the standard univariate CAR model, \citet{gelfand:mcar} developed methods for general multivariate CAR (MCAR) models, inspiring novel approaches for both multiple and spatiotemporal disease mapping {\citep[e.g.,][]{JBC,QBC,hcar,m-b:2013,b-r:2015}.} More recently, \citet{quick:waller} proposed a special case of the MCAR of \citet{gelfand:mcar} --- referred to as the multivariate space-time CAR ($\mbox{$\text{MSTCAR}$}$) model --- for the purpose of analyzing county-level heart disease mortality rates in the US over time for various race/gender groups. For a more complete coverage of the recent advances in spatial and space-time modeling, see \citet{BCG}. { In any discussion of the spatiotemporal modeling literature, we would be remiss not to mention the subject of \emph{separability} --- i.e., models where the spatiotemporal covariance can be decomposed into the product of a purely spatial covariance and a purely temporal covariance. 
While separable covariance structures offer computational benefits, \citet{stein2005} highlights some drawbacks of the separability assumption, and concerns over the utility of separable models have motivated the development of classes of nonseparable covariance functions in the univariate continuous space, continuous time setting \citep[e.g.,][]{cressie:huang, gneiting2002}. In cases where both space and time are discrete, as encountered in this study, \citet{knorr-held:2000} has discussed a variety of possible space-time interactions, but here again the focus has been on a single outcome, with extensions to the multivariate space-time setting being more recent developments. In addition to the nonseparable $\mbox{$\text{MSTCAR}$}$ proposed by \citet{quick:waller}, \citet{jon} implement a reduced rank multivariate spatiotemporal mixed effects model which is designed to analyze high dimensional data efficiently. That said, both of these approaches restrict their attention to the case of Gaussian data. {Other methods \citep[e.g.,][]{JBC,m-b:2013} allow for varying spatial structures by utilizing \emph{proper} MCAR models --- while these approaches are feasible when the number of spatial regions in the spatial domain is small, the large number of US counties prohibits the use of proper MCAR models.} } Here we extend the nonseparable $\mbox{$\text{MSTCAR}$}$ model to the generalized linear model setting to analyze spatiotemporal trends in the dataset comprised of stroke mortality counts among those aged 65--74, 75--84, and 85+ described in Section~\ref{sec:data}. Specifically, due to the rarity of stroke deaths, the Gaussian assumption of \citet{quick:waller} may not be appropriate. As such, in Section~\ref{sec:methods} we detail our approach for embedding the $\mbox{$\text{MSTCAR}$}$ model into a Poisson likelihood akin to \citet{bym}, {in addition to presenting the saved person-years (SPY) tool for measuring patterns in declines}. 
We then analyze the stroke mortality data in Section~\ref{sec:anal}, where we {discover different spatiotemporal trends in mortality rates across age groups and observe evidence that a separable model would be inappropriate for these data. Finally, we summarize our findings and offer some concluding remarks in Section~\ref{sec:disc}.} \section{Data Description}\label{sec:data} The study population for this analysis includes all US residents aged 65 or older. In order to assess differences across the high-risk age ranges, we separate our data into $N_g =3$ groups: those aged 65--74, those 75--84, and those 85$+$. Annual counts of stroke-related deaths per county per age group were obtained from the National Vital Statistics System (NVSS) of the National Center for Health Statistics (NCHS). Due to inconsistencies in the manner in which death records were recorded prior to 1973, we restrict the analysis to data from 1973--2013 ($N_t = 41$ years) to ensure valid comparisons across time. {Deaths from stroke were defined as those for which the underlying cause of death was cerebrovascular disease according to the 8th, 9th, and 10th revisions of the International Classification of Diseases (ICD; {ICD--8: 430--438}; ICD--9: 430--438; ICD--10: I60--I69). {Based on the {comparability ratios reported by \citet{icd8:icd9} and \citet{icd9:icd10}, which indicate a high degree of similarity between the three revisions of the ICD}, we assume that this definition is consistent over the 41-year study period.} The geographic unit used in this analysis was the county (or county equivalent). Given changes in county definitions during the study period affecting ten counties (e.g., the merging/splitting of counties), a single set of $N_s$ = 3,099 regions (henceforth referred to as counties) from the contiguous lower 48 states (including the District of Columbia) was used for the entire study period.
Annual population counts were based on the bridged-race intercensal estimates provided by {\citet{census}}. {While the number of individuals in each age bracket has risen considerably ({by 89\%, 101\%, and 267\%, respectively}), the number of stroke-related deaths for individuals 65--84 has decreased nearly 60\% since 1973. From a public health perspective, this reduction in stroke mortality is a great achievement. From a statistical perspective, however, this can lead to concerns over the reliability of county-level mortality rate estimates based on so few events. Figures illustrating the national population and death trends for each age group can be found in Web Appendix~B. } \section{Methods}\label{sec:methods} \subsection{Statistical model}\label{sec:mstcar} Letting $Y_i$ and $n_i$ denote the incidence of disease and the population at risk in county $i$, \citet{bym} proposed a model of the form \begin{equation} Y_{i} \sim \mbox{$\text{Pois}$}\left(n_i \exp\left[ {\bf x} _i \ensuremath{\boldsymbol{\beta}} + Z_i + \phi_i\right]\right), \text{for $i=1,\ldots,N_s$} \label{eq:bym} \end{equation} where $ {\bf x} _i$ denotes a $p$-vector of covariates with corresponding regression coefficients, $ \ensuremath{\boldsymbol{\beta}}$, $Z_{i}$ is a spatial random effect, and $\phi_i \mbox{$\stackrel{\text{ind}}{\sim}$} N\left(0,\tau^2\right)$ is an exchangeable random effect.
In their work, \citet{bym} modeled $ {\bf Z} =\left(Z_1,\ldots,Z_{N_s}\right)'$ as arising from an intrinsic conditional autoregressive (CAR) model, which has the conditional distribution \begin{equation} Z_i\,\vert\, {\bf Z} _{(i)}, \ensuremath{\sigma}^2 \sim N\left(\sum_{j=1}^{N_s} w_{ij} Z_j \slash \sum_{j=1}^{N_s} w_{ij}, \ensuremath{\sigma}^2\slash \sum_{j=1}^{N_s} w_{ij}\right) \label{eq:car} \end{equation} where $ {\bf Z} _{(i)}$ denotes the vector $ {\bf Z} $ with the $i$th element removed and $w_{ij}=1$ if $i$ and $j$ are neighbors {(denoted $i\sim j$)} and 0 otherwise. Recommendations for prior distributions for $ \ensuremath{\sigma}^2$ and $\tau^2$ are offered by \citet{bernardinelli} and \citet{waller:carlin}. Extending~\eqref{eq:bym} and \eqref{eq:car} to a setting consisting of multiple spatial surfaces is straightforward. Letting $Y_{ikt}$ denote the number of deaths in county $i$ during year $t$ for age group $k$, we model \begin{equation} Y_{ikt} \sim \mbox{$\text{Pois}$}\left(n_{ikt} \exp\left[ {\bf x} _{ikt} \ensuremath{\boldsymbol{\beta}}_{kt} + Z_{ikt} + \phi_{ikt}\right]\right), \label{eq:bym_k} \end{equation} for $i=1,\ldots,N_s$, $k=1,\ldots,N_g$, and $t=1,\ldots, N_t$ where $\phi_{ikt} \mbox{$\stackrel{\text{ind}}{\sim}$} N\left(0,\tau_{k}^2\right)$. 
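As a concrete illustration of the conditional structure in~\eqref{eq:car}, the following sketch (a toy four-county chain graph with illustrative values, not the paper's data) verifies that, for a Gaussian vector with precision matrix proportional to $D-W$, the conditional mean of $Z_i$ is the neighborhood average and the conditional variance is $\sigma^2/m_i$:

```python
import numpy as np

# Toy chain graph on 4 "counties": 1-2, 2-3, 3-4 are neighbor pairs.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
m = W.sum(axis=1)                  # m_i = number of neighbors of county i
sigma2 = 0.5
Q = (np.diag(m) - W) / sigma2      # (singular) precision of the intrinsic CAR

# For a Gaussian with precision Q: E[Z_i | Z_(i)] = -Q[i, -i] @ Z_(i) / Q[i, i]
# and Var[Z_i | Z_(i)] = 1 / Q[i, i].
Z = np.array([0.2, -0.1, 0.4, 0.0])   # arbitrary illustrative values
i = 1
others = [j for j in range(4) if j != i]
cond_mean = -Q[i, others] @ Z[others] / Q[i, i]
cond_var = 1.0 / Q[i, i]
```

Reading the moments off the precision matrix recovers exactly the neighborhood average and $\sigma^2/m_i$, and the zero row sums make the improperness of the intrinsic CAR explicit.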
To account for the multivariate spatiotemporal association in the data, we follow the MSTCAR model of \citet{quick:waller} --- itself a special case of the $\mbox{$\text{MCAR}$}$ of \citet{gelfand:mcar} --- and let {$ {\bf Z} = \left( {\bf Z} _{1\cdot\cdot}',\ldots, {\bf Z} _{N_s\cdot\cdot}'\right)' \sim\mbox{$\text{MCAR}$}\left(1, \mbox{$\Sigma$} _{\eta}\right)$}, \begin{align*} \pi\left( {\bf Z} \,\vert\, \mbox{$\Sigma$} _{\eta}\right) &\propto \vert \mbox{$\Sigma$} _{\eta}\vert^{-(N_s-1)\slash 2} \exp\left[-\frac{1}{2} {\bf Z} '\left\{(D-W)\otimes \mbox{$\Sigma$} _{\eta}^{-1}\right\} {\bf Z} \right]\\ \text{and}\;\; {\bf Z} _{i\cdot\cdot}\,\vert\, {\bf Z} _{(i)\cdot\cdot}, \mbox{$\Sigma$} _{\eta} &\sim N\left(\sum_{j\sim i} {\bf Z} _{j\cdot\cdot} \slash m_i, \frac{1}{m_i} \mbox{$\Sigma$} _{\eta}\right), \end{align*} where $ {\bf Z} _{i\cdot\cdot} = \left( {\bf Z} _{i1\cdot},\ldots, {\bf Z} _{iN_g\cdot}\right)'$, $ {\bf Z} _{ik\cdot} = \left(Z_{ik1},\ldots,Z_{ikN_t}\right)'$, $W$ is an adjacency matrix with elements $w_{ij}$, $D$ is an $N_s\times N_s$ diagonal matrix with elements $m_i = \sum_{j=1}^{N_s} w_{ij}$, $ \mbox{$\Sigma$} _{\eta}$ denotes the $N_tN_g \times N_tN_g$ covariance structure for our $N_t$ years and $N_g$ age groups and $\otimes$ denotes the Kronecker product. The $\eta$ subscript is a reference to the construction of $ {\bf Z} $ from \citet{quick:waller} where the authors began by defining $ {\bf v} _{\iota\cdot t} \mbox{$\stackrel{\text{iid}}{\sim}$} N(\textbf{0},G_t)$ to be a collection of independent $N_g$-dimensional random variables with covariance $G_t$ for $\iota=1,\ldots,(N_s-1)$ and $t=1,\ldots, N_t$.
Using $R_k = R\left(\rho_k\right)$ to denote an age group-specific temporal correlation matrix based on an autoregressive model of order 1 (denoted AR(1)) and letting $\widetilde{R}_k$ be the Cholesky factor of $R_k$ such that $\widetilde{R}_k \widetilde{R}_k' = R_k$, the authors define $ \mbox{\boldmath $ \eta $} _{\iota k\cdot} = \widetilde{R}_k {\bf v} _{\iota k\cdot}$ where $ {\bf v} _{\iota k\cdot} = \left(v_{\iota k1},\ldots,v_{\iota kN_t}\right)'$. We then find that $ \mbox{\boldmath $ \eta $} _{\iota\cdot\cdot} \sim N\left(\textbf{0}, \mbox{$\Sigma$} _{\eta}\right)$, where \begin{align}\label{eq:Sig_eta} \mbox{$\Sigma$} _{\eta} = \begin{bmatrix} \widetilde{R}_{1,1}^* & \textbf{0} & \textbf{0}\\ \vdots & \ddots & \textbf{0} \\ \widetilde{R}_{N_t,1}^* & \cdots & \widetilde{R}_{N_t,N_t}^* \end{bmatrix} \begin{bmatrix} G_1 & \textbf{0} & \textbf{0}\\ \textbf{0} & \ddots & \textbf{0} \\ \textbf{0} & \textbf{0} & G_{N_t} \end{bmatrix} \begin{bmatrix} \widetilde{R}_{1,1}^* & \cdots & \widetilde{R}_{N_t,1}^*\\ \textbf{0} & \ddots & \vdots \\ \textbf{0} & \textbf{0} & \widetilde{R}_{N_t,N_t}^* \end{bmatrix} \end{align} and $\widetilde{R}_{t,t'}^*$ denotes the $N_g\times N_g$ diagonal matrix with elements $\left\{\widetilde{R}_k\right\}_{t,t'}$ for $k=1,\ldots,N_g$; $ {\bf Z} $ is then constructed from the $ \mbox{\boldmath $ \eta $} _{\iota\cdot\cdot}$ using the eigenvalues and eigenvectors of the matrix $D-W$ \citep[see][]{rue:held}. This structure is then denoted $ {\bf Z} \sim \mbox{$\text{MSTCAR}$}\left(G_{1},\ldots,G_{N_t}, \mbox{\boldmath $\rho$} \right)$. \subsection{Hierarchical model and computational details} While a Poisson model like~\eqref{eq:bym_k} is a straightforward extension of~\eqref{eq:bym}, such models can also pose computational challenges, particularly for large dimensions.
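The block product in~\eqref{eq:Sig_eta} can be checked numerically. The sketch below (with illustrative dimensions far smaller than the paper's $N_t=41$, $N_g=3$) assembles $\Sigma_\eta$ from AR(1) Cholesky factors and confirms that, in the separable special case $\rho_k=\rho$ and $G_t=G$, it collapses to $R(\rho)\otimes G$:

```python
import numpy as np

def ar1_corr(rho, n_t):
    # AR(1) correlation matrix: R[t, t'] = rho^|t - t'|
    t = np.arange(n_t)
    return rho ** np.abs(t[:, None] - t[None, :])

N_t, N_g = 3, 2          # illustrative dimensions
rho = 0.8                # common temporal correlation (separable case)
G = np.array([[1.0, 0.3],
              [0.3, 2.0]])
Rchol = [np.linalg.cholesky(ar1_corr(rho, N_t)) for _ in range(N_g)]
G_t = [G for _ in range(N_t)]   # separable special case: G_t = G for all t

# Left factor: block (t, t') is the N_g x N_g diagonal matrix diag_k {R~_k}_{t,t'};
# middle factor: block-diagonal in the G_t, exactly as in the displayed product.
L = np.zeros((N_t * N_g, N_t * N_g))
Gblk = np.zeros((N_t * N_g, N_t * N_g))
for t in range(N_t):
    Gblk[t*N_g:(t+1)*N_g, t*N_g:(t+1)*N_g] = G_t[t]
    for tp in range(N_t):
        L[t*N_g:(t+1)*N_g, tp*N_g:(tp+1)*N_g] = np.diag(
            [Rchol[k][t, tp] for k in range(N_g)])
Sigma_eta = L @ Gblk @ L.T
```

With distinct $\rho_k$ or time-varying $G_t$, the same construction yields the nonseparable covariance; only the final Kronecker identity fails.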
For instance, the full conditional of $ {\bf Z} _{i\cdot\cdot}$, given by \begin{align} \pi\left( {\bf Z} _{i\cdot\cdot}\,\vert\, {\bf Y} , {\bf Z} _{(i)\cdot\cdot}, \ensuremath{\boldsymbol{\beta}}, \mbox{\boldmath $\phi$}, \mbox{$\Sigma$} _{\eta}\right) \propto& \prod_{k=1}^{N_g} \prod_{t=1}^{N_t} \mbox{$\text{Pois}$}\left(Y_{ikt}\,\vert\, n_{ikt} \exp\left[ {\bf x} _{ikt} \ensuremath{\boldsymbol{\beta}}_{kt} + Z_{ikt} + \phi_{ikt}\right]\right) \notag\\ &\times \pi\left( {\bf Z} _{i\cdot\cdot}\,\vert\, {\bf Z} _{(i)\cdot\cdot}, \mbox{$\Sigma$} _{\eta}\right) \end{align} is \emph{not} a known distribution. That is, if we use a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior distribution of our model parameters, this model may require the use of large multivariate Metropolis updates within our Gibbs sampler. \citet{besag:poisson} and \citet{knorr-held:rue} suggest a reparameterization of~\eqref{eq:bym_k} which involves integrating $\phi_{ikt}$ out of the model, yielding a Gaussian full conditional for $ {\bf Z} _{i\cdot\cdot}$ and requiring Metropolis updates for $\theta_{ikt} = {\bf x} _{ikt} \ensuremath{\boldsymbol{\beta}}_{kt} + Z_{ikt} + \phi_{ikt}$. Fortunately, the $\theta_{ikt}$ are independent of one another given $ {\bf Y} $ and the other model parameters, so these Metropolis updates can be conducted independently and in parallel. We complete our hierarchical model by specifying the following prior distributions for our other model parameters: a vague prior for $ \ensuremath{\boldsymbol{\beta}}$, a weakly informative inverse Gamma prior for each $\tau_k^2$, a beta prior for each $\rho_k$, and an inverse Wishart prior for each $G_t$ with hyperparameter $G$, itself modeled using a Wishart prior. 
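Because the $\theta_{ikt}$ are conditionally independent given $ {\bf Y} $ and the remaining parameters, their Metropolis updates can be vectorized across all counties, groups, and years at once. The following minimal sketch (illustrative names; a plain random-walk proposal rather than any particular tuning scheme from the paper) updates every $\theta_{ikt}$ in a single sweep, with $\mu$ standing in for $ {\bf x} _{ikt} \ensuremath{\boldsymbol{\beta}}_{kt} + Z_{ikt}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def mh_update_theta(theta, Y, n, mu, tau2, step=0.1):
    """One vectorized random-walk Metropolis sweep over all theta_ikt.

    Log target (up to a constant) per element:
        Y*theta - n*exp(theta) - (theta - mu)^2 / (2*tau2),
    i.e., Poisson likelihood times the Gaussian prior with mean mu."""
    prop = theta + step * rng.standard_normal(theta.shape)

    def log_target(t):
        return Y * t - n * np.exp(t) - (t - mu) ** 2 / (2.0 * tau2)

    # Elementwise accept/reject; all sites are updated independently.
    accept = np.log(rng.random(theta.shape)) < log_target(prop) - log_target(theta)
    return np.where(accept, prop, theta)
```

In an actual sampler this sweep would alternate with Gibbs draws for the Gaussian and covariance parameters; the point here is only that the $\theta$ update factorizes elementwise.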
While this structure on the covariance matrices is likely unnecessary given the number of spatial regions in the data \citep[see the discussion of prior sensitivity in spatial models by][]{bernardinelli}, this comes at little-to-no computational cost (see {Web Appendix~A.6}) and offers a convenient means for specifying proper priors. Putting these pieces together, our full hierarchical model is as follows: \begin{align}\label{eq:hier} \pi\left( \ensuremath{\boldsymbol{\beta}}, {\bf Z} ,G,G_1,\ldots,G_{N_t}, \mbox{\boldmath $\rho$} ,\tau_1^2,\ldots,\tau_{N_g}^2, \mbox{\boldmath $ \theta $} \,\vert\, {\bf Y} \right) \propto& \prod_{i,k,t} \mbox{$\text{Pois}$}\left(Y_{ikt}\,\vert\, n_{ikt}\exp\left[\theta_{ikt}\right]\right) \times N\left( \mbox{\boldmath $ \theta $} \,\vert\, X \ensuremath{\boldsymbol{\beta}}+ {\bf Z} , \mbox{$\Sigma$} _{\theta}\right)\notag\\ &\times \mbox{$\text{MSTCAR}$}\left( {\bf Z} \,\vert\, G_{1},\ldots,G_{N_t}, \mbox{\boldmath $\rho$} \right) \notag\\ &\times \prod_{t=1}^{N_t} \mbox{$\text{InvWish}$}\left(G_t\,\vert\, G,\nu\right) \times \mbox{$\text{Wish}$}\left(G\,\vert\, G_0,\nu_0\right)\notag\\ &\times \prod_{k=1}^{N_g} \left[\mbox{$\text{Beta}$}\left(\rho_k\,\vert\, a_{\rho},b_{\rho}\right) \times \mbox{$\text{IG}$}\left(\tau_k^2\,\vert\, a_{\tau},b_{\tau}\right)\right] , \end{align} where $ \mbox{$\Sigma$} _{\theta}$ is a diagonal matrix of size $N_sN_gN_t$ with elements $\tau_{k}^2$ and $X$ is the $(N_sN_gN_t\times p)$ matrix of covariates. {While full details for implementing this model in an MCMC framework are provided in {Web Appendix~A}, we would be remiss not to discuss the computational burden associated with fitting a nonseparable model as opposed to a {separable} model; i.e., letting $\rho_k = \rho$ for $k=1,\ldots,N_g$ and $G_t = G$ for $t=1,\ldots,N_t$ corresponds to fitting a separable multivariate space-time model with $ \mbox{$\Sigma$} _{\eta} = R\left(\rho\right) \otimes G$.
First note that by using an AR(1) model for time, we can compute the $\widetilde{R}_{t,t'}^*$ elements of $ \mbox{$\Sigma$} _{\eta}$ in closed-form, reducing the burden of computing $ \mbox{$\Sigma$} _{\eta}^{-1}$ from an $N_tN_g \times N_tN_g$ matrix inversion to a series of $N_g\times N_g$ matrix inversions. Furthermore, while the nonseparable $\mbox{$\text{MSTCAR}$}$ model contains more parameters than its separable counterpart, the additional computational burden associated with its implementation in an MCMC framework is negligible. Specifically, the computations necessary to construct the full conditional distributions for each $G_t$ are simply a partitioning of those necessary for constructing the full conditional distribution of $G$ in a separable model.} When implementing this model in an MCMC framework, we have found that proper specification of the initial values can be crucial to facilitating convergence in a timely fashion. In particular, the parameters which require Metropolis updates --- $\rho_k$ and $\theta_{ikt}$ --- should be treated with care. For instance, we recommend initializing $\rho_k$ to be large (say, 0.90) if a high degree of temporal correlation is expected. More importantly, we recommend initializing \begin{align} \theta_{ikt}^{(0)} = \begin{cases} \log \left(Y_{ikt}\slash n_{ikt}\right), &\text{if $Y_{ikt} > \epsilon$}\\ \log \left(\sum_{i} Y_{ikt} \slash \sum_{i} n_{ikt}\right), &\text{if $Y_{ikt}\le \epsilon$} \end{cases}, \end{align} where $\epsilon\ge 0$ is some small nonnegative integer, as this will allow the model to learn about parameters such as $ \ensuremath{\boldsymbol{\beta}}$ and $ {\bf Z} $ early on in the process; in practice, letting $\epsilon=0$ has been sufficient. When $\theta_{ikt}$ is poorly initialized, however, the MCMC algorithm may take a large number of iterations to recover, resulting in a chain which is slow to converge. 
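The recommended initialization can be written compactly. The sketch below (array names and the shape convention $(N_s, N_g, N_t)$ are our own, not the authors' code) uses the county-level crude log-rate wherever $Y_{ikt} > \epsilon$ and falls back to the national log-rate otherwise:

```python
import numpy as np

def init_theta(Y, n, eps=0):
    """Initialize theta_ikt: crude county log-rate where deaths exceed eps,
    national log-rate for that (k, t) otherwise.

    Y, n: arrays of shape (N_s, N_g, N_t) of deaths and populations."""
    nat = np.log(Y.sum(axis=0) / n.sum(axis=0))   # national log-rate, (N_g, N_t)
    with np.errstate(divide='ignore'):
        crude = np.log(Y / n)                     # -inf where Y == 0
    return np.where(Y > eps, crude, nat[None, :, :])
```

The fallback keeps every initial value finite, so the sampler can immediately learn about $ \ensuremath{\boldsymbol{\beta}}$ and $ {\bf Z} $ rather than first recovering from degenerate starting points.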
{ \subsection{Assessment of reliability}\label{sec:reliable} The primary motivation for this work is to achieve more reliable age-specific mortality rate estimates for these data. In order to assess the reliability of our estimates, we will begin by generating \emph{synthetic} death counts from the posterior predictive distribution for $Y_{ikt}^{*}$, \begin{align*} Y_{ikt}^{*(\ell)}\,\vert\, \theta_{ikt}^{(\ell)} \sim \mbox{$\text{Pois}$}\left(n_{ikt}\exp\left[\theta_{ikt}^{(\ell)}\right]\right), \text{for $i=1,\ldots,N_s$, $k=1,\ldots,N_g$, $t=1,\ldots,N_t$}, \end{align*} and for $\ell=1,\ldots,L$, where $\theta_{ikt}^{(\ell)}$ denotes the $\ell$th (post-burn-in) sample from the posterior distribution of $\theta_{ikt}$. We can then compute the 95\% CI for $Y_{ikt}^*$ from these posterior predictions and determine the proportion of each county's $N_tN_g$ 95\% CIs that contain the observed $Y_{ikt}$ to estimate the coverage probability. As a baseline for comparison purposes, we will compare our results to those generated from an empirical Bayesian Poisson-gamma model of the form \begin{align} \pi\left( \mbox{\boldmath $\gamma$} \,\vert\, {\bf Y} ,a_{kt},b_{kt}\right) \propto \prod_{ikt} \mbox{$\text{Pois}$}\left(Y_{ikt}\,\vert\, n_{ikt}\gamma_{ikt}\right) \times \mbox{$\text{Gamma}$}\left(\gamma_{ikt} \,\vert\, a_{kt},b_{kt}\right)\label{eq:poisgam} \end{align} where $a_{kt} = \sum_{i} Y_{ikt} \slash \sum_i n_{ikt} \times b_{kt}$ and $b_{kt} = 1000$, indicating a prior distribution equivalent to 1,000 additional persons with an event rate equal to the national average. We can then generate synthetic death counts from the resulting posterior predictive distribution, $Y_{ikt}^{\dagger (\ell)}\,\vert\, \gamma_{ikt}^{(\ell)} \sim \mbox{$\text{Pois}$}\left(n_{ikt}\gamma_{ikt}^{(\ell)}\right)$ as before. The method which yields coverage probabilities near 0.95 more consistently will be deemed more reliable.
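The coverage computation itself is a few lines. The sketch below (illustrative names; the data in the test are simulated, not the NVSS counts) draws synthetic deaths from the posterior predictive, forms 95\% intervals per $(i,k,t)$ cell, and returns the per-county proportion of intervals containing the observed counts:

```python
import numpy as np

rng = np.random.default_rng(0)

def county_coverage(Y, n, theta_samples):
    """Per-county posterior-predictive coverage of observed counts.

    Y, n: arrays of shape (N_s, N_g, N_t);
    theta_samples: posterior draws of theta, shape (L, N_s, N_g, N_t)."""
    reps = rng.poisson(n[None] * np.exp(theta_samples))     # Y*^(l)
    lo, hi = np.percentile(reps, [2.5, 97.5], axis=0)       # 95% CI per cell
    inside = (Y >= lo) & (Y <= hi)
    return inside.mean(axis=(1, 2))   # fraction of a county's N_g*N_t CIs
```

The same function applies verbatim to the Poisson-gamma baseline by passing $\log\gamma_{ikt}^{(\ell)}$ in place of $\theta_{ikt}^{(\ell)}$.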
} \subsection{Tools for measuring temporal changes in mortality} When studying temporally varying mortality rates, it is often of interest to measure the decline from the beginning of the study period to the end. Letting $\lambda_{ikt} = \exp\left[\theta_{ikt}\right]$ denote the county-specific mortality rate for group $k$ at time $t$ and letting $\Delta_{ik}(t,t') = \left(\lambda_{ikt} - \lambda_{ikt'}\right)\slash\lambda_{ikt}$ denote the percent change from time $t$ to $t'> t$ for group $k$ in county $i$, an obvious choice would be to compute $\Delta_{ik}(1,N_t)$ for each county and each group. Similarly, one could define $\Delta_{i}(t,t') = \left(\sum_{k} n_{ikt}\lambda_{ikt} - \sum_{k} n_{ikt'} \lambda_{ikt'}\right)\slash \sum_{k} n_{ikt}\lambda_{ikt}$ and compute $\Delta_{i}(1,N_t)$ as an estimate of the county's percent decline. The drawback of these quantities is that they only account for the rates at the beginning and end of the study period, ignoring the intervening periods. Here, we propose a measure we refer to as the ``saved person-years'' --- or SPY --- which can be computed as \begin{align}\label{eq:lsaa} \mbox{$\text{SPY}$}_i &= \frac{1}{N_t-1} \sum_{t=2}^{N_t} \frac{\sum_k n_{ikt} \left(\lambda_{ik1} \left[1-\Delta_{\cdot k}(1,t)\right] - \lambda_{ikt}\right)}{\sum_{k} n_{ikt}} \times \text{100,000}, \end{align} where $\Delta_{\cdot k}(1,t) = \left(\lambda_{\cdot k1} - \lambda_{\cdot k t}\right)\slash \lambda_{\cdot k 1}$ denotes the nationwide average decline from time 1 to time $t$ with $\lambda_{\cdot kt} = \sum_i n_{ikt} \lambda_{ikt} \slash \sum_i n_{ikt}$. Note that $\mbox{$\text{SPY}$}_i$ is essentially a measure of the deviation between the expected rate for county $i$ at time $t$ if the county had declined at the rate of the national average --- i.e., $\lambda_{ik1} \left[1-\Delta_{\cdot k}(1,t)\right]$ --- and the rate we estimate from the model.
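For concreteness, the SPY measure can be computed directly from arrays of rates and populations. The sketch below (shape convention $(N_s, N_g, N_t)$ and names are our own; array column 0 corresponds to $t=1$) implements the population-weighted counterfactual comparison: a county declining exactly at the national pace has SPY equal to zero.

```python
import numpy as np

def spy(lam, n):
    """Saved person-years per 100,000 for each county.

    lam, n: arrays of shape (N_s, N_g, N_t): rates and populations."""
    lam_nat = (n * lam).sum(axis=0) / n.sum(axis=0)           # lambda_{.kt}
    delta_nat = (lam_nat[:, :1] - lam_nat) / lam_nat[:, :1]   # national decline
    # Counterfactual rate: baseline county rate declining at the national pace.
    expected = lam[:, :, :1] * (1.0 - delta_nat)[None]
    # Population-weighted gap between counterfactual and modeled rates, (N_s, N_t)
    diff = (n * (expected - lam)).sum(axis=1) / n.sum(axis=1)
    return diff[:, 1:].mean(axis=1) * 1e5                     # average over t >= 2
```

A positive value means the county's rates fell faster than the national trajectory implied; a negative value means its declines lagged the national average.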
While this quantity should not replace the crude measure of the percent decline, it is a simple and easily interpretable investigative tool that tells a more thorough story of a region's trajectory over the study period, as we will demonstrate in our analysis. In practice, we estimate $\theta_{ikt}$ by obtaining samples from its posterior distribution --- say, $\theta_{ikt}^{(\ell)}$ for iteration $\ell$ of our MCMC algorithm. As such, both $\Delta_{i}(t,t')$ and $\mbox{$\text{SPY}$}_i$ can be computed for each iteration of our MCMC algorithm, resulting in posterior distributions for each of these quantities. From this point forward, however, we restrict our attention to the posterior medians of $\Delta_{i}(t,t')$ and $\mbox{$\text{SPY}$}_i$ unless otherwise stated. \section{Analysis of the Stroke Mortality Data}\label{sec:anal} In the absence of covariate information, we fit the hierarchical model in~(\ref{eq:hier}) to the stroke mortality data described in Section~\ref{sec:data} using an intercept term for each combination of year and age group \citep[as required when using an improper CAR model, per][]{besag95}, forcing the random effects to account for a substantial amount of the spatio-temporal variability in the data. We place an informative beta prior on each $\rho_k$ to encourage higher temporal correlations in the model, and we use a vague inverse Wishart prior for each of the $G_t$. When running the MCMC algorithm, we thinned our posterior samples for $\theta_{ikt}$ by removing 9 out of 10 samples --- while this is not theoretically necessary, it {reduced the burden of storing excess samples for our nearly 400,000 random effects}. Estimates provided are based on posterior medians, and 95\% credible intervals (95\% CI) were obtained by taking the 2.5- and 97.5-percentiles from the thinned post-burn-in samples. Additional figures, including animations displaying temporal evolutions in the geographic trends, can be found in {Web Appendix~B}. 
Before delving into the epidemiologic findings, we evaluate some of the numerous variance parameters permitted by the use of a nonseparable model. While the $\mbox{$\text{MSTCAR}$}$ model in Section~\ref{sec:mstcar} is derived using temporally-varying covariance matrices, $G_t$, these parameters are not necessarily of direct interest as they are the variance parameters for $ {\bf v} _{\iota\cdot t}$, and thus they are \emph{not} directly interpretable with respect to the mortality rates. Instead, we need to use the posterior samples of $G_t$ and $\rho_k$ to construct $ \mbox{$\Sigma$} _{\eta}$ from~\eqref{eq:Sig_eta}. These values coincide with the conditional covariance matrix of $ {\bf Z} _{i\cdot\cdot}$ (when scaled by the number of neighbors, $m_i$), and thus \emph{are} {interpretable with respect to the log mortality rates}. {Furthermore, patterns in these parameters can be easily interpreted, as well. For instance, Figure~\ref{fig:var} displays the posterior estimates for the diagonal elements of $ \mbox{$\Sigma$} _{\eta}$ corresponding to each age group. Here, the declines in Figures~\ref{fig:var2} and~\ref{fig:var3} suggest a higher degree of spatial smoothing in later years than at the beginning of the study period, as the corresponding $Z_{ikt}$ become less free to deviate from their neighbors over time. As we will see, this may be in part due to declines in the mortality rates, themselves, as lower rates may also imply a smaller range of rates.} \begin{figure}[t] \begin{center} \subfigure[Ages 65--74]{\includegraphics[width=.32\textwidth]{var_figs_6.png}\label{fig:var1}} \subfigure[Ages 75--84]{\includegraphics[width=.32\textwidth]{var_figs_7.png}\label{fig:var2}} \subfigure[Ages 85+]{\includegraphics[width=.32\textwidth]{var_figs_8.png}\label{fig:var3}} \end{center} \caption{Temporal evolution of the diagonal elements of $ \mbox{$\Sigma$} _{\eta}$.
While the scale of these parameters is not directly interpretable, declines in these variance parameters suggest an increase in spatial smoothing over time (particularly for those 75+).} \label{fig:var} \end{figure} { To assess the reliability of our estimates, we follow the approach set forth in Section~\ref{sec:reliable}. We begin by generating $L=1000$ replicates for each $Y_{ikt}$ from the posterior predictive distribution corresponding to both the $\mbox{$\text{MSTCAR}$}$ model fit in~\eqref{eq:hier}, as well as the empirical Bayesian Poisson-gamma model from~\eqref{eq:poisgam}. After computing the 95\% CI of the replicates from both models and comparing these intervals to the observed $Y_{ikt}$, we find that the mean county-specific coverage from the $\mbox{$\text{MSTCAR}$}$ is 97.7\%, whereas the Poisson-gamma yields a coverage of 99.6\%. Thus, while the intervals from the $\mbox{$\text{MSTCAR}$}$ are consistently narrower than those from the Poisson-gamma model, they still exceed the nominal coverage level; that is, the increase in precision afforded by the $\mbox{$\text{MSTCAR}$}$ model comes at no cost to validity. {Figures related to this reliability assessment are included in Web Appendix~B}. } Having demonstrated the necessity and utility of the $\mbox{$\text{MSTCAR}$}$ model for these data, we now shift our attention to the rate estimates themselves. For the sake of illustration of the temporal trends and the $\mbox{$\text{SPY}$}$ tool, we highlight two counties: one from the heart of the stroke belt (Jefferson County, AL), and one from the opposite side of the country (King County, WA). Figure~\ref{fig:gt} displays time trends of the estimated rates for these two counties for each of the age groups, along with the national averages. Here, we observe that while these counties exhibit similar trends for the 65--74 age group --- where King County consistently outperforms both Jefferson County and the nation as a whole --- the trends for the remaining age groups are quite different.
This is particularly true for the eldest age group in our study, as Jefferson County experienced such a sharp decline from 1973 to 1990 that it passed not only King County but also the national average. This period of consistent declines was followed by stagnant rates through the early 1990s and an \emph{increase} in rates until the early 2000s, with the county ending the study period among the worst in the nation {(albeit with much lower rates than in the 1970s)}. \begin{figure}[t] \begin{center} \subfigure[Ages 65--74]{\includegraphics[width=.32\textwidth]{grouptrend_1.png}\label{fig:gt1}} \subfigure[Ages 75--84]{\includegraphics[width=.32\textwidth]{grouptrend_2.png}\label{fig:gt2}} \subfigure[Ages 85+]{\includegraphics[width=.32\textwidth]{grouptrend_3.png}\label{fig:gt3}} \end{center} \caption{Comparison of the rate trends over time for two selected counties (solid lines), along with the national average (dashed line). Gray bands denote 95\% CI. Also displayed for each county is the expected rate had the county declined at the same rate as the national average (dotted lines). } \label{fig:gt} \end{figure} One aspect highlighted by these figures, however, is that computing a simple percent decline does not tell the whole story. For instance, while Jefferson County experienced some degree of increasing rates in each of the three age groups during the late 1990s, simply measuring the age-standardized decline from 1973 to 2013 would overlook the strides the county took during the first half of the study period, when it declined at a rate much faster than the national average. The same cannot be said, however, for King County, which underperformed during the first 30 years of the study, despite experiencing an overall decline that outperformed the national average. This discrepancy can be observed by investigating the $\mbox{$\text{SPY}$}_i$ tool for each county.
Here, we find that Jefferson County saved 136.6 (98.8, 174.9) person-years per 100,000, while the $\mbox{$\text{SPY}$}_i$ for King County was -34.0 (-61.7, -5.3) person-years per 100,000, reinforcing our claim that King County's declines lagged behind the national average. A map of the $\mbox{$\text{SPY}$}_i$ values for all 3,099 counties can be found in Figure~\ref{fig:lsaa}, where we find that parts of the Deep South outperformed much of the nation, and a comparison to the percent decline can be found in Figure~B.4 of the Web Appendix.} \begin{figure}[t] \centering \includegraphics[width=.5\textwidth]{LSAA_noleg.png} \includegraphics[width=.10\textwidth]{SPY_leg.png} \caption{Map of the saved person-years (SPY) measure from~\eqref{eq:lsaa}, which measures the average difference between the model-estimated mortality rate of a county (per 100,000) and the expected mortality rate assuming a rate of decline equivalent to the national average.} \label{fig:lsaa} \end{figure} Turning our attention to the geographic patterns in stroke death rates presented in Figure~\ref{fig:rates}, we find substantial differences between age groups. For the youngest subpopulation (ages 65--74), the clear geographic pattern shown in Figure~\ref{fig:6a} prominently highlights the so-called ``stroke belt'' in the rates from 1973, and the map of the percent declines in Figure~\ref{fig:6d} --- with large declines along the East Coast and smaller declines in the region stretching from Texas to the Dakotas --- seems to indicate the ``shift'' in the stroke belt identified by \citet{shiftingbelt}. These patterns, however, are much less evident among the older age groups, especially for the eldest population. Here, the rates in Figure~\ref{fig:8a} exhibit far less spatial clustering, while Figure~\ref{fig:8d} suggests that the decline in mortality for those 85 and older was generally slower nationwide than for those aged 65--74 and 75--84.
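The SPY measure described in the caption of Figure~\ref{fig:lsaa} can be sketched numerically. The Python fragment below is one plausible reading of that verbal description (anchor each county at its own first-year rate and apply the national sequence of relative declines), not the exact formula of~\eqref{eq:lsaa}; the function name and the toy rates are illustrative.

```python
import numpy as np

def saved_person_years(county_rates, national_rates):
    """Average gap (per 100,000) between the rate the county would
    have had, had it declined at the national pace starting from its
    own first-year rate, and the rate it actually had.  Positive
    values mean the county outperformed the national decline."""
    county = np.asarray(county_rates, dtype=float)
    national = np.asarray(national_rates, dtype=float)
    # expected trajectory: anchor at the county's first-year rate,
    # then apply the national sequence of relative declines
    expected = county[0] * national / national[0]
    return float(np.mean(expected - county))

# toy numbers: a county that declines faster than the nation
nation = [300.0, 250.0, 200.0, 150.0]
county = [400.0, 300.0, 200.0, 100.0]
print(saved_person_years(county, nation))  # positive: county beat the national pace
```

Under this reading, a county whose trajectory exactly tracks the national decline scores zero, mirroring the dotted expected-rate lines in Figure~\ref{fig:gt}.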
\begin{sidewaysfigure}[t] \begin{center} \subfigure[Ages 65--74: 1973]{\includegraphics[width=.32\textwidth]{longquin_6_1.png}\label{fig:6a}} \subfigure[Ages 75--84: 1973]{\includegraphics[width=.32\textwidth]{longquin_7_1.png}\label{fig:7a}} \subfigure[Ages 85+: 1973]{\includegraphics[width=.32\textwidth]{longquin_8_1.png}\label{fig:8a}}\\ \subfigure[Ages 65--74: Declines]{\includegraphics[width=.32\textwidth]{declineleg_6.png}\label{fig:6d}} \subfigure[Ages 75--84: Declines]{\includegraphics[width=.32\textwidth]{declineleg_7.png}\label{fig:7d}} \subfigure[Ages 85+: Declines]{\includegraphics[width=.32\textwidth]{declineleg_8.png}\label{fig:8d}} \end{center} \caption{Maps of the stroke mortality rates at the beginning of the study period (1973) and {the percent declines over the course of the study period}. Note that estimates for counties with fewer than 100 people in an age bracket in 1973 are suppressed.} \label{fig:rates} \end{sidewaysfigure} \section{Discussion}\label{sec:disc} This paper has extended the $\mbox{$\text{MSTCAR}$}$ model of \citet{quick:waller} to a generalized linear model for the purposes of analyzing a dataset comprised of county-level counts of stroke mortality, allowing for a nonseparable multivariate spatiotemporal dependence structure. Our analysis revealed spatiotemporal trends in stroke mortality that varied by age group, in addition to the nationwide reduction in rates previously noted in the literature \citep[e.g.,][]{gillum:2011:stroke,linda}. We also observed differing aspects of the western shift in parts of the South for each age group, as identified in the total population by \citet{shiftingbelt}, and explored the impact of non-linear trends in stroke mortality via the SPY tool. When modeling event rates for a rare event such as stroke mortality, it is important to leverage as much information as possible to achieve reliable estimates. In addition to incorporating spatial structure into the model --- allowing for regional patterns such as the stroke belt to lend support to less populated counties --- the $\mbox{$\text{MSTCAR}$}$ accounts for temporal correlation between observations in consecutive years and multivariate dependencies, such as those between observations in different age-brackets. These additional sources of information can be invaluable when dealing with outlying counts, a problem which particularly plagues counties with small population sizes where an increase/decrease of a single death can dramatically change the observed mortality rate. Overcoming this challenge will be paramount in our future work, where we wish to use the $\mbox{$\text{MSTCAR}$}$ model to analyze racial and gender disparities in heart disease and stroke mortality across various age groups.
While there is a computational burden associated with implementing the $\mbox{$\text{MSTCAR}$}$ for such large multivariate datasets, these analyses will provide considerable insight into these various disparities. {Another area for future research is understanding the factors that contribute to differential geographic patterns by age group in both the baseline 1973 stroke mortality rates as well as the patterns of declining stroke mortality rates. While it is well known that the risk for stroke increases with age, the spatiotemporal patterns of stroke mortality by age group have not been documented previously. Hypotheses for understanding the observed differential spatiotemporal patterns in stroke mortality by age group include, but are not limited to, the following categories: 1) spatiotemporal differences in the relative contributions of decreasing case fatality rates and incidence rates by age group {\citep[e.g.,][]{case_fatality,geo_variation}}; 2) differential influence of living conditions (e.g., socioeconomic resources, access to quality health care, access to healthy food and recreational environments, etc.)
or changes in those living conditions, on stroke mortality by age group {\citep[e.g.,][]{tassone,neighborhood}}; or 3) differences in the accuracy of death certificate reporting by age group due to more co-morbidities and competing conditions of death at older ages. } Lastly, a topic worth discussing is that publicly available data for rare events such as deaths from stroke can be difficult to find due to data privacy issues. While data such as the number of stroke-related deaths are publicly available at the county level from NCHS (via CDC Wonder), subsets of data with fewer than 10 events in a geographic region are suppressed beginning in 1989 \citep{cdc:sharing}. This results in over {80}\% of the data points for those aged 65--74, and nearly 70\% of the over 380,000 data points used in this analysis, being suppressed from the public. To overcome this privacy issue while still preserving utility, \citet{quick:zero} have explored the risks associated with generating {synthetic} data for rare events for a single population in a single year. As an area of active research, we hope to use the methodology proposed here to generate reliable synthetic public-use data for small areas that respect the spatial-, temporal-, and multivariate structures in the true data, thereby providing greater access to complete, high-quality data. \bibliographystyle{jasa}
\section{Introduction} \subsection{Motivation} Recent progress in deep learning has profoundly affected many areas of artificial intelligence. One exception is probabilistic first-order logical reasoning. In this paper, we seek to closely integrate probabilistic logical reasoning with the powerful infrastructure that has been developed for deep learning. The end goal is to enable deep learners to incorporate first-order probabilistic KBs, and conversely, to enable probabilistic reasoning over the outputs of deep learners. \begin{figure} \begin{tabbing}1234\=\kill \cd{answer(Question,Answer) :-} \\ \> \cd{classification(Question,aboutActedIn),} \\ \> \cd{mentionsEntity(Question,Entity), actedIn(Answer,Entity).}\\ \cd{answer(Question,Answer) :- } \\ \> \cd{classification(Question,aboutDirected),} \\ \> \cd{mentionsEntity(Question,Entity), directed(Answer,Entity).}\\ \cd{answer(Question,Answer) :- } \\ \> \cd{classification(Question,aboutProduced), }\\ \> \cd{mentionsEntity(Question,Entity), produced(Answer,Entity).}\\ \ldots\\ \cd{mentionsEntity(Question,Entity) :- } \\ \> \cd{containsNGram(Question,NGram), matches(NGram,Name),}\\ \> \cd{possibleName(Entity,Name), \underline{popular}(Entity).}\\ ~\\ \cd{classification(Question,Y) :- }\\ \> \cd{containsNGram(Question,NGram), \underline{indicatesLabel}(NGram,Y).}\\ \cd{matches(NGram,Name) :- }\\ \> \cd{containsWord(NGram,Word), containsWord(Name,Word), \underline{important}(Word).}\\ \end{tabbing} \caption{A simple theory for question-answering against a KB.} \label{fig:qatheory} \end{figure} As motivation, consider the program of Figure~\ref{fig:qatheory}, which could plausibly be used for answering simple natural-language questions against a KB, such as ``Who was the director of Apocalypse Now?'' The main predicate \cd{answer} takes a question and produces an answer (which would be an entity in the KB). The predicates \cd{actedIn}, \cd{directed}, etc., are from the KB.
For the purpose of performing natural-language analysis, the KB has also been extended with facts about the text that composes the training and test data: the KB stores information about word $n$-grams contained in the question, the strings that are possible names of an entity, and the words that are contained in these names and $n$-grams. The underlined predicates \cd{indicatesLabel}, \cd{important}, and \cd{popular} are ``soft'' KB predicates, and the goal of learning is to find appropriate weights for the soft-predicate facts---e.g., to learn that \cd{indicatesLabel(director, aboutDirected)} has high weight. Ideally these weights would be learned indirectly, from {observing inferences made using the KB}. In this case we would like to learn from question-answer pairs, which rely indirectly on KB predicates like \cd{actedIn}, etc., rather than from hand-classified questions, or judgements about specific facts in the soft predicates. TensorLog, the system we describe here, makes this possible at reasonable scale using conventional neural-network platforms. For instance, for a variant of the problem above, we can learn from 10,000 questions against a KB of 420,000 tuples in around 200 seconds per epoch, on a typical desktop with a single GPU. \subsection{Approach and Contributions} The main technical obstacle to integration of probabilistic logics into deep learners is that most existing first-order probabilistic logics are not easily adapted to evaluation on a GPU. One superficial problem is that the computations made in theorem-proving are not numeric, but there is also a more fundamental problem, which we will now discuss. The most common approach to first-order inference is to ``ground'' a first-order logic by converting it to a zeroth-order format, such as a boolean formula or a probabilistic graphical model.
For instance, in the context of a particular KB, the rule \begin{equation} \label{eq:join2} {p(X,Y) \leftarrow q(X,Z),r(Z,Y).} \end{equation} can be ``grounded'' as the following finite boolean formula, where ${\cal{C}}$ is the set of objects in the KB: \[ \bigwedge_{x, y, z \in {\cal{C}}} \left( {p}(x,y) \vee \neg {q}(x,z) \vee \neg {r}(z,y) \right) \] Here each conjunct is the clause grounded by one choice of constants for its variables. This boolean formula can be embedded in a neural network, e.g. to initialize an architecture \cite{TowellAAAI90} or as a regularizer \cite{hu2016harnessing,riedelinjecting2015}. For probabilistic first-order languages (e.g., Markov logic networks \cite{RichardsonMLJ2006}), grounding typically results in an undirected graphical model (see \cite{kimmig2015lifted} for a survey of this work). The problem with this approach is that groundings can be very large: even the small rule above gives a grounding of size $o(|{\cal{C}}|^3)$, which is likely much larger than the size of the KB, and a grounding of size $o(|{\cal{C}}|^n)$ is produced by a rule like \begin{equation} \label{eq:chain} {p}(X_0,X_n) \leftarrow q_1(X_0,X_1),q_2(X_1,X_2),\ldots,q_n(X_{n-1},X_n) \end{equation} The target architecture for modern deep learners is based on GPUs, which have limited memory: hence the grounding approach can be used only for small KBs and short rules. For example, \cite{serafini2016logic} describes experimental results with five rules and a few dozen facts, and the largest datasets considered by \cite{DBLP:journals/corr/SourekAZK15} contain only about 3500 examples. Although not all probabilistic logic implementations require explicit grounding, a similar problem arises in using neural-network platforms to implement any probabilistic logic which is computationally hard. For many probabilistic logics, answering queries is \#P-complete or worse. Since the networks constructed in modern deep learning platforms can be evaluated in time polynomial in their size, no polysize network can implement such a logic, unless \#P=P.
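To make the blowup concrete, here is a toy enumeration of the grounding of a rule with three distinct variables, one ground disjunction per assignment of constants. The domain and predicate names are illustrative, not taken from any real KB.

```python
from itertools import product

constants = ["a", "b", "c"]   # a tiny domain; a real KB has many thousands

# one ground clause per assignment of constants to the rule's
# three logical variables
ground_clauses = [
    f"p({x},{y}) | ~q({x},{z}) | ~r({z},{y})"
    for x, y, z in product(constants, repeat=3)
]
print(len(ground_clauses))    # 3**3 = 27 clauses already, for 3 constants
```

With $10^5$ constants the same enumeration would produce $10^{15}$ ground clauses, which is why explicit grounding is infeasible for realistic KBs.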
This paper addresses these obstacles with several interrelated contributions. First, in Section~\ref{sec:background}, we identify a restricted family of probabilistic deductive databases (PrDDBs) called \trm{polytree-limited stochastic deductive knowledge graphs (ptree-SDKGs)} which are tractable, but still reasonably expressive. This formalism is a variant of stochastic logic programs (SLPs). We also show that ptree-SDKGs are in some sense maximally expressive, in that we cannot drop the polytree restriction, or switch to a more conventional possible-worlds semantics, without making inference intractable. Next, in Section~\ref{sec:inf-alg}, we present an algorithm for performing inference for ptree-SDKGs. This algorithm performs inference with a dynamic-programming method, which we formalize as belief propagation on a certain factor graph, where each random variable in the factor graph corresponds to the possible bindings of a logical variable in a proof, and the factors correspond to database predicates. In other words, the random variables are multinomials over all constants in the database, and the factors constrain these bindings to be consistent with database predicates that relate the corresponding logical variables. Although this is a simple idea, to our knowledge it is novel. We also discuss in some detail our implementation of this logic, called TensorLog. Finally, we discuss related work and experimental results, and present conclusions. \section{Background} \label{sec:background} \subsection{Deductive DBs} \begin{figure} \begin{center} \begin{center} \begin{tabular}[t]{l} 1. \cd{uncle(X,Y):-child(X,W),brother(W,Y).}\\ 2. \cd{uncle(X,Y):-aunt(X,W),husband(W,Y).}\\ 3.
\cd{status(X,tired):-child(W,X),infant(W).}\\ \end{tabular}\begin{tabular}[t]{ll} \cd{child(liam,eve)} & 0.99 \\ \cd{child(dave,eve)} & 0.99 \\ \cd{child(liam,bob)} & 0.75 \\ \cd{husband(eve,bob)} & 0.9 \\ \cd{infant(liam)} & 0.7 \\ \cd{infant(dave)} & 0.1 \\ \cd{aunt(joe,eve)} & 0.9 \\ \cd{brother(eve,chip)} & 0.9 \end{tabular} \end{center} \end{center} \caption{ An example database and theory. Uppercase symbols are universally quantified variables, and so clause 3 should be read as a logical implication: for all database constants $c_X$ and $c_W$, if \cd{child($c_W$,$c_X$)} and \cd{infant($c_W$)} can be proved, then \cd{status($c_X$,tired)} can also be proved.}\label{fig:ddb} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \multicolumn{2}{c}{ (S=uncle(liam,Y), L=[uncle(liam,Y)]) } \\ \multicolumn{2}{c}{ $\downarrow$ } \\ \multicolumn{2}{c}{ (S=uncle(liam,Y), L=[child(liam,W),brother(W,Y)]) } \\ $\downarrow$ & $\downarrow$ \\ (S=uncle(liam,Y), L=[brother(bob,Y)]) & (S=uncle(liam,Y), L=[brother(eve,Y)]) \\ $\downarrow$ & $\downarrow$ \\ \textit{dead end} & (S=uncle(liam,chip), L=[]) \\ \end{tabular} \end{center} \caption{ An example proof tree. From root to second level uses rule 1; next level uses unit clause \cd{child(liam,bob):-} on left and unit clause \cd{child(liam,eve):-} on right; final level uses \cd{brother(eve,chip):-} on the right.} \label{fig:tree} \end{figure} In this section we review the usual definitions for logic programs and deductive databases, and also introduce the term \trm{deductive knowledge graph (DKG)} for deductive databases containing only unary and binary predicates. This section can be omitted by readers familiar with logic programming. An example of a \trm{deductive database} (DDB) is shown in Figure~\ref{fig:ddb}. A \trm{database}, ${\cal{DB}}$, is a set $\{f_1,\ldots,f_N\}$ of ground facts. (For the moment, ignore the numbers associated with each database fact in the figure.)
A theory, ${\cal{T}}$, is a set of function-free Horn clauses. Clauses are written $A\cd{:-}B_1,\ldots,B_k$, where $A$ is called the \trm{head} of the clause, $B_1,\ldots,B_k$ is the \trm{body}, and $A$ and the $B_i$'s are called \trm{literals}. Literals must be of the form $p(X_1,\ldots,X_k)$, where $p$ is a \trm{predicate symbol} and the $X_i$'s are either logical variables or database constants. The set of all database constants is written ${\cal{C}}$. The number of arguments $k$ to a literal is called its \trm{arity}. In this paper we focus on the case where all literals are binary or unary, i.e., have arity no more than two. We will call such a database a \trm{knowledge graph (KG)}, and the program a \trm{deductive knowledge graph (DKG)}. We will also assume that constants appear only in the database, not in the theory (although this assumption can be relaxed). Clauses can be understood as logical implications. Let $\sigma$ be a \trm{substitution}, i.e., a mapping from logical variables to constants in ${\cal{C}}$, and let $\sigma(L)$ be the result of replacing all logical variables $X$ in the literal $L$ with $\sigma(X)$. A set of tuples $S$ is \trm{deductively closed} with respect to the clause $A\leftarrow{}B_1,\ldots,B_k$ iff for all substitutions $\sigma$, either $\sigma(A) \in {}S$ or $\exists B_i:\sigma(B_i)\not\in{}S$. For example, if $S$ contains the facts of Figure~\ref{fig:ddb}, $S$ is not deductively closed with respect to clause 1 unless it also contains \cd{uncle(liam,chip)} and \cd{uncle(dave,chip)}. The \trm{least model} for a pair ${\cal{DB}},{\cal{T}}$, written $\textit{Model}({\cal{DB}},{\cal{T}})$, is the smallest superset of ${\cal{DB}}$ that is deductively closed with respect to every clause in ${\cal{T}}$. This least model is unique, and in the usual DDB semantics, a ground fact $f$ is considered ``true'' iff $f\in\textit{Model}({\cal{DB}},{\cal{T}})$. There are two broad classes of algorithms for inference in a DDB.
\trm{Bottom-up inference} explicitly computes the set $\textit{Model}({\cal{DB}},{\cal{T}})$ iteratively. Bottom-up inference repeatedly extends a set of facts $S$, which initially contains just the database facts, by looking for rules which ``fire'' on $S$ and using them to derive new facts. (More formally, one looks for rules $A\leftarrow{}B_1,\ldots,B_k$ and substitutions $\sigma$ such that $\forall i$, $\sigma(B_i)\in S$, and then adds the derived fact $\sigma(A)$ to $S$.) This process is then repeated until it converges. For DDB programs, bottom-up inference takes time polynomial in the size of the database $|{\cal{DB}}|$, but exponential in the length of the longest clause in ${\cal{T}}$ \cite{ramakrishnan1995survey}. One problem with bottom-up theorem-proving is that it explicitly generates $\textit{Model}({\cal{DB}},{\cal{T}})$, which can be much larger than the original database. The alternative is \trm{top-down inference}. Here, the algorithm does not compute a least model explicitly: instead, it takes as input a query fact $f$ and determines whether $f$ is derivable, i.e., if $f\in\textit{Model}({\cal{DB}},{\cal{T}})$. More generally, one might retrieve all derivable facts that match some pattern, e.g., find all values of $Y$ such that \cd{uncle(joe,Y)} holds. (Formally, given $Q=\cd{uncle(joe,Y)}$, we would like to find all $f\in\textit{Model}({\cal{DB}},{\cal{T}})$ which are \trm{instances of $Q$}, where an $f$ is defined to be an \trm{instance of $Q$} iff $\exists \sigma:f=\sigma(Q)$.) To describe top-down theorem-proving, we note that facts in the database can also be viewed as clauses: in particular a fact $p(a,b)$ can be viewed as a clause $p(a,b)\leftarrow$ which has $p(a,b)$ as its head and an empty body. This sort of clause is called a \trm{unit clause}. We will use ${\cal{T}}^{+{\cal{DB}}}$ to denote the theory ${\cal{T}}$ augmented with unit clauses for each database fact.
A top-down theorem prover can be viewed as constructing and searching the following tree, using the theory ${\cal{T}}^{+{\cal{DB}}}$. The process is illustrated in Figure~\ref{fig:tree}, and detailed below. \begin{enumerate} \item The root vertex is a pair $(S,L)$, where $S$ is the query $Q$, and $L$ is a list containing only $Q$. In general every vertex is a pair where $S$ is something derived from $Q$, and $L$ is a list of literals left to prove. \item For any vertex $(S,L)$, where $L=[G_1,\ldots,G_n]$, there is a child vertex $(S',L')$ for each rule $A\leftarrow{}B_1,\ldots,B_k \in {\cal{T}}^{+{\cal{DB}}}$ and substitution $\sigma$ for which $\sigma(G_i)=\sigma(A)$ for some $G_i$. In this child node, $S'=\sigma(S)$, and \[ L'= [\sigma(G_1),\ldots,\sigma(G_{i-1}),\sigma(B_1),\ldots,\sigma(B_k),\sigma(G_{i+1}),\ldots,\sigma(G_n)] \] \end{enumerate} Note that $L'$ is smaller than $L$ if the clause selected is a unit clause (i.e., a fact). If $L'$ is empty, then the vertex is called a \trm{solution vertex}. In any solution vertex $(S,L)$, if $S$ contains no variables,\footnote{If \textit{S} does have variables in it, then any fact $f$ which can be constructed by replacing variables in $S$ with database constants is in the least model. For clarity we will ignore this complication in the discussion below.} then $S$ is an instance of $Q$ and is in $\textit{Model}({\cal{DB}},{\cal{T}})$. If ${\cal{T}}$ is not recursive, or if recursion is limited to a fixed depth, then the proof graph is finite. We will restrict our discussion below to theories with finite proof graphs. For this case, the set of all answers to a query $Q$ can be found by systematically searching the proof tree for all solution vertices. A number of strategies exist for this, but a popular one is that used by Prolog, which uses depth-first search, ordering edges by picking the first rule $A\leftarrow B_1,\ldots,B_k$ in a fixed order, and only matching rules against the first element of $L$.
This strategy can be implemented quite efficiently and is easily extended to much more general logic programs. \subsection{SLPs and stochastic deductive KGs} There are a number of approaches to incorporating probabilistic reasoning in first-order logics. We focus here on \trm{stochastic logic programs (SLPs)} \cite{DBLP:journals/ml/Cussens01}, in which the theory ${\cal{T}}$ is extended by associating with each rule $r$ a non-negative scalar weight $\theta_r$. Below we summarize the semantics associated with SLPs, for completeness, and refer the reader to \cite{DBLP:journals/ml/Cussens01} for details. In an SLP weights $\theta_r$ are added to edges of the top-down proof graph in the natural way: when a rule $r$ is used to create an edge $(S,L)\rightarrow (S',L')$, this edge is given weight $\theta_r$. We define the {weight of a path $v_0\rightarrow\ldots\rightarrow v_n$} in the proof graph for $Q$ to be the product of the weights of the edges in the path, and the weight of a node $v$ to be the sum of the weights of the paths from the root node $v_0=(Q,[Q])$ to $v$. If $r_{v,v'}$ is the rule used for the edge from $v$ to $v'$, then the weight $w_Q(v_n)$ is \[ w_Q(v_n) \equiv \sum_{v_0\rightarrow\ldots\rightarrow v_n} \prod_{i=0}^{n-1} \theta_{r_{v_i,v_{i+1}}} \] The weight of an answer $f$ to query $Q$ is defined by summing over paths to solution nodes that yield $f$: \begin{equation} \label{eq:wq} w_Q(f) \equiv \sum_{v:v=(\textit{f},[])} w_Q(v) \end{equation} (Here $[]$ is the empty list, which indicates a solution vertex has been reached.) Finally, if we assume that some answers to $Q$ do exist, we can produce a conditional probability distribution over answers $f$ to the query $Q$ by normalizing $w_Q$, i.e., \[ \Pr(f|Q) \equiv \frac{1}{Z} w_Q(f) \] Following the terminology of \cite{DBLP:journals/ml/Cussens01} this is a \trm{pure unnormalized SLP}.
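This weight computation can be made concrete with a small depth-first prover for the \cd{uncle} clauses of Figure~\ref{fig:ddb}. The Python sketch below is our own illustration (not TensorLog's implementation): it assumes rule heads contain only variables and that variable names are already renamed apart, multiplies the weights of the unit clauses (facts) used along each proof path, sums path weights per answer as in Equation~\eqref{eq:wq}, and then normalizes. Theory clauses keep weight 1, as in an SDKG.

```python
from collections import defaultdict

# theory clauses (weight 1) and weighted facts from Figure 2
rules = [
    (("uncle", ("X", "Y")), [("child", ("X", "W")), ("brother", ("W", "Y"))]),
    (("uncle", ("X", "Y")), [("aunt", ("X", "W")), ("husband", ("W", "Y"))]),
]
fact_weight = {
    ("child", ("liam", "eve")): 0.99, ("child", ("dave", "eve")): 0.99,
    ("child", ("liam", "bob")): 0.75, ("husband", ("eve", "bob")): 0.9,
    ("aunt", ("joe", "eve")): 0.9, ("brother", ("eve", "chip")): 0.9,
}

def is_var(t):
    return t[0].isupper()

def subst_lit(lit, s):
    """Apply substitution s to a (pred, args) literal."""
    return (lit[0], tuple(s.get(a, a) for a in lit[1]))

def match_fact(goal, fact, s):
    """Extend substitution s so goal matches the ground fact, else None."""
    if goal[0] != fact[0] or len(goal[1]) != len(fact[1]):
        return None
    s = dict(s)
    for a, fa in zip(goal[1], fact[1]):
        if is_var(a):
            if s.get(a, fa) != fa:
                return None
            s[a] = fa
        elif a != fa:
            return None
    return s

def match_head(goal, head):
    """Map rule-head variables to the goal's terms (heads are all-variable)."""
    if goal[0] != head[0] or len(goal[1]) != len(head[1]):
        return None
    s = {}
    for h, g in zip(head[1], goal[1]):
        if s.get(h, g) != g:
            return None
        s[h] = g
    return s

def prove(goals, s, weight, answers, query):
    """DFS over the proof tree, multiplying fact weights along each
    path and summing path weights per answer (Eq. for w_Q(f))."""
    if not goals:
        answers[subst_lit(query, s)] += weight
        return
    g, rest = subst_lit(goals[0], s), goals[1:]
    for fact, w in fact_weight.items():        # unit clauses
        s2 = match_fact(g, fact, s)
        if s2 is not None:
            prove(rest, s2, weight * w, answers, query)
    for head, body in rules:                   # theory clauses, weight 1
        s2 = match_head(g, head)
        if s2 is not None:
            prove([subst_lit(b, s2) for b in body] + rest,
                  s, weight, answers, query)

answers = defaultdict(float)
query = ("uncle", ("liam", "Y"))
prove([query], {}, 1.0, answers, query)
z = sum(answers.values())
for f, w in answers.items():
    print(f, "w =", round(w, 3), "Pr =", round(w / z, 3))
```

For the query \cd{uncle(liam,Y)} the only surviving path is the right-hand branch of Figure~\ref{fig:tree}, with weight $0.99 \times 0.9 = 0.891$, which normalizes to probability 1 since it is the sole answer.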
SLPs were originally defined \cite{muggleton1996stochastic} for a fairly expressive class of logic programs, namely all programs which are \trm{fail free}, in the sense that there are no ``dead ends'' in the proof graph (i.e., from every vertex $v$, at least one solution node is reachable). Prior work with SLPs also considered the special case of \trm{normalized SLPs}, in which the weights of all outgoing edges from every vertex $v$ sum to one. For normalized fail-free SLPs, it is simple to modify the usual top-down theorem prover to sample from $\Pr(f|Q)$. \subsection{Stochastic deductive KGs and discussion of SLPs} SLPs are closely connected to several other well-known types of probabilistic reasoners. SLPs are defined by introducing probabilistic choices into a top-down theorem-proving process: since top-down theorem-proving for logic programs is analogous to program execution in ordinary programs, SLPs can be thought of as logic-program analogs to probabilistic programming languages like Church \cite{goodman2012church}. Normalized SLPs are also conceptually quite similar to stochastic grammars, such as pCFGs, except that stochastic choices are made during theorem-proving, rather than during string rewriting. Here we consider three restrictions on SLPs. First, we restrict the program to be in DDB form---i.e., it consists of a theory ${\cal{T}}$ which contains function-free clauses, and a database ${\cal{DB}}$ (of unit clauses). Second, we restrict all predicates to be unary or binary. Third, we restrict the clauses in the theory ${\cal{T}}$ to have weight 1, so that the only meaningful weights are associated with database facts. We call this restricted SLP a \trm{stochastic deductive knowledge graph (SDKG)}. For SDKGs, a final connection with other logics can be made by considering a logic program that has been grounded by conversion to a boolean formula.
One simple approach to implementing a ``soft'' extension of a boolean logic is to evaluate the truth or falsity of a formula bottom-up, deriving a numeric confidence $c$ for each subexpression from the confidences associated with its subparts. For instance, one might use the rules \begin{eqnarray*} c(x \wedge y) & \equiv & \min(c(x),c(y)) \\ c(x \vee y) & \equiv & \max(c(x),c(y)) \\ c(\neg x) & \equiv & 1 - c(x) \end{eqnarray*} This approach to implementing a soft logic is sometimes called an \trm{extensional} approach \cite{suciu2011probabilistic}, and it is common in practical systems: PSL \cite{brocheler2012probabilistic} uses an extensional approach, as do several recent neural approaches \cite{serafini2016logic,hu2016harnessing}. Now consider modifying a top-down prover to produce a particular boolean formula, in which each path $v_0\rightarrow\ldots\rightarrow v_n$ is associated with a conjunction $f_1\wedge\ldots\wedge f_m$ of all unit-clause facts used along this path, and each answer $f$ is associated with the disjunction of these conjunctions. Then let us compute the unnormalized weight $w_Q(f)$ using the rules \begin{eqnarray*} c(x \wedge y) & \equiv & c(x)\cdot c(y) \\ c(x \vee y) & \equiv & c(x) + c(y) \end{eqnarray*} (which are sufficient since no negation occurs in the formula). This (followed by normalization) can be shown to be equivalent to the SLP semantics. \subsection{Complexity of reasoning with stochastic deductive KGs} SLPs have a relatively simple proof procedure: informally, inference only requires computing a weighted count of all proofs for a query, and the weight for any particular proof can be computed quickly. A natural question is whether computationally efficient theorem-proving schemes exist for SLPs. The similarity between SLPs and probabilistic context-free grammars suggests that efficient schemes might exist, since there are efficient dynamic-programming methods for probabilistic parsing.
Unfortunately, this is not the case: even for the restricted case of SDKGs, computing $P(f|Q)$ is \#P-hard. \newcommand{\statementone}{Computing $P(f|Q)$ (relative to a SDKG ${\cal{T}},{\cal{DB}}$) for all possible answers $f$ of the query $Q$ is \#P-hard, even if there are only two such answers, the theory contains only two non-recursive clauses, and the KG contains only 13 facts.} \begin{thm}\statementone \end{thm} A proof appears in the appendix. The result is not especially surprising, as it is easy to find small theories with exponentially many proofs: e.g., the clause of Equation~\ref{eq:chain} can have exponentially many proofs, and naive proof-counting methods may be expensive on such a clause. Fortunately, one further restriction makes SLP theorem-proving efficient. For a theory clause $r=A\leftarrow B_1,\ldots,B_k$, define the \trm{literal influence graph for $r$} to be a graph where each $B_i$ is a vertex, and there is an edge from $B_i$ to $B_j$ iff they share a variable. A graph is a \trm{polytree} iff there is at most one path between any pair of vertices: i.e., if each connected component of the graph is a tree. Finally, we define a theory to be \trm{polytree-limited} iff the influence graph for every clause is a polytree. Figure~\ref{fig:factors} contains some examples of polytree-limited clauses. This additional restriction makes inference tractable. \newcommand{\statementtwo}{For any SDKG with a non-recursive polytree-limited theory ${\cal{T}}$, $P(f|Q)$ can be computed in time linear in the size of ${\cal{T}}$ and ${\cal{DB}}$.} \begin{thm}\statementtwo \end{thm} The proof follows from the correctness of a dynamic-programming algorithm for SDKG inference, which we will present below, in detail, in Section~\ref{sec:inf-alg}. In brief, the algorithm is based on belief propagation in a certain factor graph.
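The polytree condition is easy to test mechanically: build the literal influence graph of a clause body and check that it is acyclic (so each connected component is a tree). A minimal sketch; the tuple representation of literals is our own convention, not part of the paper's syntax:

```python
from itertools import combinations

def influence_edges(body):
    """Edges of the literal influence graph: literals i, j share a variable.
    Literals are (pred, args) pairs; uppercase args are logical variables."""
    def vars_of(lit):
        return {a for a in lit[1] if a[:1].isupper()}
    return [(i, j) for (i, li), (j, lj) in combinations(enumerate(body), 2)
            if vars_of(li) & vars_of(lj)]

def is_polytree(n_vertices, edges):
    """True iff the undirected graph is acyclic (every component a tree),
    checked with union-find: an edge inside an existing component => cycle."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False
        parent[ri] = rj
    return True
```

For example, the body of the \cd{uncle} clause (two literals chained through \cd{W}) passes the test, while a body whose literals form a cycle of shared variables fails it.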
We construct a graph where the random variables are multinomials over the set of all database constants, and each random variable corresponds to a logical variable in the proof graph. The logical literals in a proof correspond to factors, which constrain the bindings of the variables to make the literals true. Importantly for the goal of compilation into deep-learning frameworks, the message-passing steps used for belief propagation can be defined as numerical operations, and given a predicate and an input/output mode, the message-passing steps required to perform belief propagation (and hence inference) can be ``unrolled'' into a function, which is differentiable. \subsection{Complexity of stochastic DKGs variants} \subsubsection{Extensions that maintain efficiency} \textit{Constants in the theory.} We will assume that constants appear only in the database, not in the theory. To relax this, note that it is possible to introduce a constant into a theory by creating a special unary predicate which holds only for that constant: e.g., to use the constant \cd{tired}, one could create a database predicate \cd{assign\_tired(T)} which contains the one fact \cd{assign\_tired(tired)}, and use it to introduce a variable which is bound to the constant \cd{tired} when needed. For instance, the clause 3 of Figure~\ref{fig:ddb} would be rewritten as \begin{equation} \label{eq:tired} \cd{status(X,T):-assign\_tired(T),child(X,W),infant(W).} \end{equation} Without loss of generality, we assume henceforth that constants only appear in literals of this sort. \textit{Rule weights and rule features.} In a SDKG, weights are associated only with \emph{facts} in the databases, not with \emph{rules} in the theory (which differs from the usual SLP definition). However, there is a standard ``trick'' which can be used to lift weights from a database into rules: one simply introduces a special clause-specific fact, and add it to the clause body \cite{poole1997independent}. 
For example, a weighted version of clause 3 could be re-written as \[ \cd{status(X,tired):-assign\_c3(RuleId),weighted(RuleId),child(W,X),infant(W)} \] where the (parameterized) fact \cd{weighted(c3)} appears in ${\cal{DB}}$, and the constant \cd{c3} appears nowhere else in ${\cal{DB}}$. In some probabilistic logics, e.g., ProPPR \cite{wang2013programming}, one can attach a computed set of features to a rule in order to weight it: e.g., one can write \[ \cd{status(X,tired):-$\{$weighted(A):child(W,X),age(W,A)$\}$} \] which indicates that all the ages of the children of $X$ should be used as features to determine if the rule succeeds. This is equivalent to the rule \cd{status(X,tired)} :- \cd{child(W,X), age(W,A), weighted(A)}, and in the experiments below, where we compare to ProPPR, we use this construction. \subsubsection{Extension to possible-worlds semantics} In the SLP semantics, the parameters $\Theta$ only have meaning in the context of the set of proofs derivable using the theory ${\cal{T}}$. This can be thought of as a ``possible proofs'' semantics. It has been argued that it is more natural to adopt a ``possible worlds'' semantics, in which $\Theta$ is used to define a distribution, $\Pr(I|{\cal{DB}},\Theta)$, over ``hard'' databases, and the probability of a derived fact $f$ is defined as follows, where $\indicate{\cdot}$ is a zero-one indicator function: \begin{equation} \label{eq:ddb-semantics} \Pr_\cd{TupInd}(f|{\cal{T}},{\cal{DB}},\Theta) \equiv \sum_{I} \indicate{f\in\textit{Model}(I,{\cal{T}})} \cdot \Pr(I|{\cal{DB}},\Theta) \end{equation} Potential hard databases are often called \trm{interpretations} in this setting.
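Equation~\ref{eq:ddb-semantics} can be evaluated directly, though only for tiny databases, by enumerating all $2^{|{\cal{DB}}|}$ interpretations. A brute-force sketch, instantiated with an independent-coin-toss prior (the tuple-independence model formalized next); the one-rule entailment test and fact names are invented for illustration:

```python
from itertools import product

def possible_worlds_prob(facts, prior, entails):
    """Naive possible-worlds semantics: sum Pr(I) over every interpretation
    I (a subset of `facts`) whose model contains the query fact f; the
    `entails` callback stands in for the Model(I, T) test."""
    total = 0.0
    for bits in product([0, 1], repeat=len(facts)):
        I = frozenset(f for f, b in zip(facts, bits) if b)
        if entails(I):
            total += prior(I)
    return total

# Independent coin toss per fact (hypothetical two-fact DB).
theta = {'q': 0.5, 'r': 0.4}
def tup_ind_prior(I):
    p = 1.0
    for f, th in theta.items():
        p *= th if f in I else 1.0 - th
    return p

# Invented one-rule theory "p :- q, r": p is entailed iff both facts hold.
prob_p = possible_worlds_prob(list(theta), tup_ind_prior,
                              lambda I: 'q' in I and 'r' in I)
```

The loop over `product([0, 1], ...)` is exactly the exponential marginalization that the tractability results below are concerned with.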
The simplest such ``possible worlds'' model is the \trm{tuple independence} model for PrDDB's \cite{suciu2011probabilistic}: in this model, to generate an interpretation $I$, each fact $f\in{\cal{DB}}$ is sampled by an independent coin toss, i.e., \( \Pr_\cd{TupInd}(I|{\cal{DB}},\Theta) \equiv \prod_{t \in I} \theta_t \cdot \prod_{t \in {\cal{DB}}-I} (1-\theta_t) \). ProbLog \cite{fierens2016} is one well-known logic programming language which adopts this semantics, and there is a large literature (for surveys, see \cite{suciu2011probabilistic,de2008probabilistic}) on approaches to more tractably estimating Eq~\ref{eq:ddb-semantics}, which naively requires marginalizing over all $2^{|{\cal{DB}}|}$ interpretations. A natural question to ask is whether polytree-limited SDKGs, which are tractable under the possible-proofs semantics of SLPs, are also tractable under a possible-worlds semantics. Unfortunately, this is not the case. \newcommand{\statementthree}{Computing $P(f)$ in the tuple-independent possible-worlds semantics for a single ground fact $f$ is \#P-hard.} \begin{thm}\statementthree \end{thm} This result is well known: for instance, Suciu and Olteanu \cite{suciu2011probabilistic} show that it is \#P-hard to compute probabilities against the one-rule theory \cd{p(X,Y) :- q(X,Z),r(Z,Y).} For completeness, the appendix to this paper contains a proof, which emphasizes the fact that reasonable syntactic restrictions (such as polytree-limited theories) are unlikely to make inference tractable. In particular, the theory used in the construction is extremely simple: all predicates are unary, and clauses contain only three literals in their bodies. \section{Efficient differentiable inference for polytree-limited SDKGs} \label{sec:inf-alg} In this section we present an efficient dynamic-programming method for inference in polytree-limited SDKGs.
We formalize this method as belief propagation on a certain factor graph, where the random variables in the factor graph correspond to possible bindings to a logical variable in a proof, and the factors correspond to database predicates. In other words, the random variables are multinomials over all constants in the database, and the factors will constrain these bindings to be consistent with the database predicates that relate the corresponding logical variables. Although using belief propagation in this way is a simple idea, to our knowledge it is a novel method for first-order probabilistic inference. Certainly it is quite different from more common formulations of first-order probabilistic inference, where random variables typically are Bernoulli random variables, which correspond to \emph{potential} ground database facts (i.e., elements of the Herbrand base of the program). \subsection{Numeric encoding of PrDDB's and queries} Because our ultimate goal is integration with neural networks, we will implement reasoning by defining a series of numeric functions, each of which finds answers to a particular family of queries. It will be convenient to encode the database numerically. We will assume all constants have been mapped to integers. For a constant $c\in{\cal{C}}$, we define $\vek{u}_c$ to be a one-hot row-vector representation for $c$, i.e., a row vector of dimension $|{\cal{C}}|$ where $\vek{u}[c]=1$ and $\vek{u}[c']=0$ for $c'\not=c$. We can also represent a binary predicate $p$ by a sparse matrix $\textbf{M}_p$, where $\textbf{M}_p[a,b]=\theta_{p(a,b)}$ if $p(a,b)\in{\cal{DB}}$, and a unary predicate $q$ as an analogous row vector $\vek{v}_q$. Note that $\textbf{M}_p$ encodes information not only about the database facts in predicate $p$, but also about their parameter values. Collectively, the matrices $M_{p_1}$, \ldots, $M_{p_n}$ for the predicates $p_1,\ldots,p_n$ can be viewed as a three-dimensional tensor.
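This encoding is easy to realize concretely. A minimal numpy sketch with an invented four-constant domain (TensorLog stores $\textbf{M}_p$ sparsely; a small dense array shows the idea):

```python
import numpy as np

# Hypothetical domain: constants mapped to integers 0..3.
C = {'joe': 0, 'harry': 1, 'fred': 2, 'sue': 3}
n = len(C)

def one_hot(c):
    """u_c: one-hot row vector for constant c."""
    u = np.zeros(n)
    u[C[c]] = 1.0
    return u

# M_father[a,b] = theta_{father(a,b)} for DB facts, 0 elsewhere (invented facts).
M_father = np.zeros((n, n))
M_father[C['joe'], C['harry']] = 0.5
M_father[C['joe'], C['fred']] = 0.5

# u_joe M_father collects the weighted father-facts with joe as first argument.
v = one_hot('joe') @ M_father
```

The product $\vek{u}_c \textbf{M}_p$ is the basic lookup operation used throughout the inference algorithm below: it selects the row of weighted facts whose first argument is $c$.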
Our main interest here is queries that retrieve all derivable facts that match some query $Q$: e.g., to find all values of $Y$ such that \cd{uncle(joe,Y)} holds. We define an \trm{argument-retrieval query} $Q$ as a query of the form $p(c,Y)$ or $p(Y,c)$. We say that $p(c,Y)$ has an \trm{input-output mode} of \cd{in,out} and $p(Y,c)$ has an input-output mode of \cd{out,in}. For the sake of brevity, below we will assume the mode \cd{in,out} when possible, and abbreviate the two modes as \cd{io} and \cd{oi}. The \trm{response} to a query $p(c,Y)$ is a distribution over possible substitutions for $Y$, encoded as a vector $\vek{v}_{Y}$ such that for all constants $d\in{\cal{C}}$, $\vek{v}_{Y}[d] = \Pr(p(c,d)|Q=p(c,Y),{\cal{T}},{\cal{DB}},\Theta)$. Note that in the SLP model $\vek{v}_{Y}$ is a conditional probability vector, conditioned on $Q=p(c,Y)$, which we will sometimes emphasize by denoting it as $\vek{v}_{Y|c}$. Formally, if $U_{p(c,Y)}$ is the set of facts $f$ that ``match'' (are instances of) $p(c,Y)$, then \[ \vek{v}_{Y|c}[d] = \Pr(f=p(c,d)|f\in{}U_{p(c,Y)},{\cal{T}},{\cal{DB}},\Theta) \equiv \frac{1}{Z} w_Q(f=p(c,d)) \] Although here we only consider single-literal queries, we note that more complex queries can be answered by extending the theory: e.g., to find \[ \{ \cd{Y: uncle(joe,X),husband(X,Y)}\} \] we could add the clause \cd{q1(Y):-uncle(joe,X),husband(X,Y)} to the theory and find the answer to \cd{q1(Y)}. Since the goal of our reasoning system is to correctly answer queries using functions, we also introduce a notation for functions that answer particular types of queries: in particular, for a predicate symbol $p$, $f^p_\cd{io}$ denotes a \trm{query response function} for all queries with predicate $p$ and mode \cd{io}.
We define a \trm{query response function} for a query of the form $p(c,Y)$ to be a function $f^p_\cd{io}$ which, when given a one-hot encoding of $c$, returns the appropriate conditional probability vector: \begin{equation} \label{eq:def-f} f^p_\cd{io}(\vek{u}_c) \equiv \vek{v}_{Y|c} \end{equation} We analogously define $f^p_\cd{oi}$. Finally, we define $g^p_\cd{io}$ to be the unnormalized version of this function, i.e., the weight of $f$ according to $w_Q(f)$: \[ g^p_\cd{io}(\vek{u}_c) \equiv w_Q(f) \] \begin{figure} \centerline{\includegraphics[width=0.8\linewidth]{./factor-graphs.png}} \caption{ Examples of factor graphs for the example theory.} \label{fig:factors} \end{figure} For convenience, we will introduce another special DB predicate \cd{any}, where \cd{any($a,b$)} is conceptually true for any pair of constants $a,b$; however, as we show below, the matrix $\textbf{M}_\cd{any}$ need not be explicitly stored. We also constrain clause heads to contain distinct variables which all appear also in the body. \subsection{Efficient inference for one-clause theories} We will start by considering a highly restricted class of theories ${\cal{T}}$, namely programs containing only one non-recursive polytree-limited clause $r$ that obeys the restrictions above. We build a factor graph $G_r$ for $r$ as follows: for each logical variable $W$ in the body, there is a random variable $W$; and for every literal $q(W_i,W_j)$ in the body of the clause, there is a factor with potentials $\textbf{M}_q$ linking variables $W_i$ and $W_j$. Finally, if the factor graph is disconnected, we add \cd{any} factors between the components until it is connected. Figure~\ref{fig:factors} gives examples. The variables appearing in the clause's head are starred. The correctness of this procedure follows immediately from the convergence of belief propagation on factor graphs for polytrees \cite{kschischang2001factor}.
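The construction of $G_r$ can be sketched mechanically from a clause body; as before, the tuple representation of literals is our own illustrative convention:

```python
def build_factor_graph(body):
    """Build G_r for a clause body: one random variable per logical variable,
    one factor per body literal (whose potential table is M_pred or v_pred).
    Literals are (pred, args) tuples; uppercase args are logical variables."""
    variables = []
    factors = []
    for pred, args in body:
        vs = [a for a in args if a[:1].isupper()]
        for v in vs:
            if v not in variables:
                variables.append(v)
        factors.append((pred, tuple(vs)))
    return variables, factors
```

A full implementation would additionally connect any disconnected components with \cd{any} factors, as described above.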
\begin{figure} \begin{center} \begin{alg} \textbf{define} compileMessage($L \rightarrow X$):\\ \> assume wlog that $L=q(X)$ or $L=p(X_i,X_o)$\\ \> generate a new variable name $\vek{v}_{L,X}$ \\ \> \textbf{if} $L=q(X)$ \textbf{then}\\ \> \> emitOperation( $\vek{v}_{L,X} = \vek{v}_q$)\\ \> \textbf{else if} $X$ is the output variable $X_o$ of $L$ \textbf{then}\\ \> \> $\vek{v}_i =$ compileMessage($X_i \rightarrow L$)\\ \> \> emitOperation( $\vek{v}_{L,X} = \vek{v}_i \cdot \textbf{M}_p$ )\\ \> \textbf{else if} $X$ is the input variable $X_i$ of $L$ \textbf{then}\\ \> \> $\vek{v}_o =$ compileMessage($X_o \rightarrow L$) \\ \> \> emitOperation( $\vek{v}_{L,X} = \vek{v}_o \cdot \textbf{M}_p^T$ ) \\ \> \textbf{return} $\vek{v}_{L,X}$\\ \end{alg}~~~~\begin{alg} \textbf{define} compileMessage($X \rightarrow L$): \\ \> \textbf{if} $X$ is the input variable \textbf{then}\\ \> \> \textbf{return} $\vek{u}_c$, the input\\ \> \textbf{else}\\ \> \> generate a new variable name $\vek{v}_X$\\ \> \> assume $L_1,L_2,\ldots,L_k$ are the \\ \> \> ~~ neighbors of $X$ excluding $L$ \\ \> \> \textbf{for} $i=1,\ldots,k$ \textbf{do}\\ \> \> \> $\vek{v}_i =$ compileMessage($L_i \rightarrow X$)\\ \> \> emitOperation($\vek{v}_X = \vek{v}_1 \circ \cdots \circ \vek{v}_k$) \\ \> \> \textbf{return} $\vek{v}_X$\\ \end{alg} \end{center} \caption{ Algorithm for unrolling belief propagation on a polytree into a sequence of message-computation operations. Notes: (1) if $L=p(X_o,X_i)$ then replace $\textbf{M}_p$ with $\textbf{M}_p^T$ (the transpose). (2) Here $\vek{v}_1 \circ \vek{v}_2$ denotes the Hadamard (component-wise) product, and if $k=0$ an all-ones vector is returned.} \label{fig:alg} \end{figure} BP over $G_r$ can now be used to compute the conditional vectors $f^p_\cd{io}(\vek{u}_c)$ and $f^p_\cd{oi}(\vek{u}_c)$.
For example, to compute $f^p_\cd{io}(\vek{u}_c)$ for clause 1, we would set the message for the evidence variable $X$ to $\vek{u}_c$, run BP, and read out as the value of $f$ the marginal distribution for $Y$. \subsection{Differentiable inference for one-clause theories} To make the final step toward integration of this algorithm with neural-network platforms, we must finally compute an explicit, differentiable, query response function, which computes $f^p_\cd{io}(\vek{u}_c)$. To do this we ``unroll'' the message-passing steps into a series of operations. Figure~\ref{fig:alg} shows the algorithm used in the current implementation of TensorLog, which follows previous work in translating belief propagation to differentiable form \cite{gormley_approximation-aware_2015}. In the code, we found it convenient to extend the notion of input-output modes for a query, as follows: a variable $X$ appearing in a literal $L=p(X,Y)$ in a clause body is a \trm{nominal input} if it appears in the input position of the head, or any literal to the left of $L$ in the body, and is a \trm{nominal output} otherwise. In Prolog, a convention is that nominal inputs appear as the first argument of a predicate, and in TensorLog, if the user respects this convention, then ``forward'' message-passing steps use $M_p$ rather than $M_p^T$ (reducing the cost of transposing large ${\cal{DB}}$-derived matrices, since our message-passing schedule tries to maximize forward messages). The code contains two mutually recursive routines, and is invoked by requesting a message from the output variable to a fictional output literal. The result will be to emit a series of operations, and return the name of a register that contains the unnormalized conditional probability vector for the output variable. For instance, for the sample clauses, the functions returned are shown in Table~\ref{tab:messages}.
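For clause r1 the emitted operation sequence collapses into two vector-matrix products followed by normalization. A minimal numpy sketch, with invented dense stand-ins for the sparse DB matrices:

```python
import numpy as np

n = 5                                   # hypothetical domain size
rng = np.random.default_rng(0)
M_parent = rng.random((n, n))           # stand-in for sparse M_parent
M_brother = rng.random((n, n))          # stand-in for sparse M_brother

def g_r1_io(u_c):
    """Unrolled messages for r1: uncle(X,Y) :- parent(X,W), brother(W,Y)."""
    v_w = u_c @ M_parent                # message into W
    v_y = v_w @ M_brother               # message into Y
    return v_y                          # unnormalized weights w_Q

def f_r1_io(u_c):
    g = g_r1_io(u_c)
    return g / g.sum()                  # normalize, as in Eq. (normalize)
```

Every operation here is differentiable in the entries of $\textbf{M}_\cd{parent}$ and $\textbf{M}_\cd{brother}$, which is what makes gradient-based parameter learning possible.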
\begin{table} \begin{tabular}{c|l|l|l} \hline Rule & r1: uncle(X,Y):- & r2: uncle(X,Y):- & r3: status(X,T):- \\ & ~parent(X,W), & ~aunt(X,W), & ~assign\_tired(T), \\ & ~brother(W,Y) & ~husband(W,Y) & ~parent(X,W),\\ & & & ~infant(W),any(T,W)\\ \hline Function & $g^{r1}_\cd{io}(\vec{u}_c)$ & $g^{r2}_\cd{io}(\vec{u}_c)$ & $g^{r3}_\cd{io}(\vec{u}_c)$ \\ \hline & $\vek{v}_{1,W} = \vek{u}_c \textbf{M}_\cd{parent}$ & $\vek{v}_{1,W} = \vek{u}_c \textbf{M}_\cd{aunt}$ & $\vek{v}_{2,W} = \vek{u}_c \textbf{M}_\cd{parent}$ \\ Operation & $\vek{v}_W = \vek{v}_{1,W}$ & $\vek{v}_W = \vek{v}_{1,W}$ & $\vek{v}_{3,W} = \vek{v}_\cd{infant}$ \\ sequence & $\vek{v}_{2,Y} = \vek{v}_W \textbf{M}_\cd{brother}$ & $\vek{v}_{2,Y} = \vek{v}_W \textbf{M}_\cd{husband}$ & $\vek{v}_W = \vek{v}_{2,W} \circ \vek{v}_{3,W}$ \\ defining & $\vek{v}_Y = \vek{v}_{2,Y}$ & $\vek{v}_Y = \vek{v}_{2,Y}$ & $\vek{v}_{1,T} = \vek{v}_\cd{assign\_tired}$ \\ function & & & $\vek{v}_{4,T} = \vek{v}_{W} \textbf{M}_\cd{any}$ \\ & & & $\vek{v}_T = \vek{v}_{1,T} \circ \vek{v}_{4,T}$ \\ \hline Returns & $\vek{v}_{Y}$ & $\vek{v}_{Y}$ & $\vek{v}_{T}$ \\ \hline \end{tabular} \caption{Chains of messages constructed for the three sample clauses shown in Figure~\ref{fig:factors}, written as functions in pseudocode.} \label{tab:messages} \end{table} Here we use $g^r_\cd{io}(\vec{u}_c)$ for the unnormalized version of the query response function built from $G_r$. One could normalize as follows: \begin{equation} \label{eq:normalize} f^p_\cd{io}(\vec{u}_c) \equiv g^r_\cd{io}(\vec{u}_c)/\onenorm{g^r_\cd{io}(\vec{u}_c)} \end{equation} where $r$ is the one-clause theory defining $p$. \subsection{Multi-clause programs} \label{sec:multi-clause} We now extend this idea to theories with many clauses.
We first note that if there are several clauses with the same predicate symbol in the head, we simply sum the unnormalized query response functions: e.g., for the predicate \cd{uncle}, defined by rules $r_1$ and $r_2$, we would define \[ g^\cd{uncle}_\cd{io} = g^{r1}_\cd{io} + g^{r2}_\cd{io} \] This is equivalent to building a new factor graph $G$, which would be approximately $\cup_i G_{ri}$, together with global input and output variables, plus a factor that constrains the input variables of the $G_{ri}$'s to be equal, plus a factor that constrains the output variable of $G$ to be the sum of the outputs of the $G_{ri}$'s. A more complex situation is when the clauses for one predicate, $p$, use a second theory predicate $q$, in their body: for example, this would be the case if \cd{aunt} was also defined in the theory, rather than the database. For a theory with no recursion, we can replace the message-passing operations $\vek{v}_Y = \vek{v}_X \textbf{M}_q$ with the function call $\vek{v}_Y = g^q_\cd{io}(\vek{v}_X)$, and likewise the operation $\vek{v}_Y = \vek{v}_X \textbf{M}_q^T$ with the function call $\vek{v}_Y = g^q_\cd{oi}(\vek{v}_X)$. It can be shown that this is equivalent to taking the factor graph for $q$ and ``splicing'' it into the graph for $p$. It is also possible to allow function calls to recurse to a fixed maximum depth: we must simply add an extra argument that tracks depth to the recursively-invoked $g^q$ functions, and make sure that $g^p$ returns an all-zeros vector (indicating no more proofs can be found) when the depth bound is exceeded. Currently this is implemented by marking learned functions $g$ with the predicate $q$, a mode, and a depth argument $d$, and ensuring that function calls inside $g^p_{\cd{io},d}$ to $q$ always call the next-deeper version of the function for $q$, e.g., $g^q_{\cd{io},d+1}$. Computationally, the algorithm we describe is quite efficient.
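Both ideas, summing clauses with a shared head and bounding recursion depth, can be illustrated together on a hypothetical recursive predicate \cd{path}, defined by \cd{path(X,Y):-edge(X,Y)} and \cd{path(X,Y):-edge(X,Z),path(Z,Y)} (this toy example is ours, not part of the sample theory):

```python
import numpy as np

# Invented 3-constant domain with edges 0->1 and 1->2, all with weight 1.
M_edge = np.array([[0., 1., 0.],
                   [0., 0., 1.],
                   [0., 0., 0.]])

def g_path_io(u, d=0, max_depth=5):
    """Clause 1 contributes u M_edge; clause 2 makes a next-deeper call to
    the theory predicate path; clauses with the same head are summed."""
    if d > max_depth:
        return np.zeros_like(u)                  # depth bound: no more proofs
    g1 = u @ M_edge                              # path(X,Y) :- edge(X,Y)
    g2 = g_path_io(u @ M_edge, d + 1, max_depth) # path(X,Y) :- edge(X,Z),path(Z,Y)
    return g1 + g2
```

Starting from constant 0, the function accumulates one proof each for the facts \cd{path(0,1)} and \cd{path(0,2)}, and terminates because every recursive call increments the depth counter.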
Assuming the matrices $\textbf{M}_p$ exist, the additional memory needed for the factor-graph $G_r$ is linear in the size of the clause $r$, and hence the compilation to response functions is linear in the theory size and the number of steps of BP. For polytree-limited SDKGs, $G_r$ is a polytree, so the number of message-passing steps is also linear. Message size is (by design) limited to $|{\cal{C}}|$, and is often smaller in practice, due to sparsity or type restrictions (discussed below). \subsection{Implementation: TensorLog} \textit{Compilation and execution.} The current implementation of TensorLog operates by first ``unrolling'' the belief-propagation inference to an intermediate form consisting of sequences of abstract operators, as suggested by the examples of Table~\ref{tab:messages}. The ``unrolling'' code performs a number of optimizations to the sequence in-line: one important one is to use the fact that \( \vek{v}_X \circ (\vek{v}_Y \textbf{M}_\cd{any}) = \vek{v}_X \onenorm{\vek{v}_Y} \) to avoid explicitly building $\textbf{M}_\cd{any}$. These abstract operator sequences are then ``cross-compiled'' into expressions on one of two possible ``back end'' deep learning frameworks, Tensorflow \cite{abadi2016tensorflow} and Theano \cite{bergstra2010theano}. The operator sequences can also be evaluated and differentiated on a ``local infrastructure'' which is implemented in the SciPy sparse-matrix package \cite{jones2014scipy}, which includes only the few operations actually needed for inference, and a simple gradient-descent optimizer. The local infrastructure's main advantage is that it makes more use of sparse-matrix representations. In all the implementations, the matrices that correspond to KB relations are sparse.
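The identity used above to avoid materializing $\textbf{M}_\cd{any}$ is easy to check numerically (the vectors here are arbitrary nonnegative messages):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
v_x = rng.random(n)            # messages are nonnegative by construction
v_y = rng.random(n)
M_any = np.ones((n, n))        # conceptually: any(a,b) holds for all a, b

lhs = v_x * (v_y @ M_any)      # explicit message through M_any
rhs = v_x * v_y.sum()          # = v_x * ||v_y||_1; no |C| x |C| matrix needed
```

Since $\vek{v}_Y \textbf{M}_\cd{any}$ is a constant vector whose every entry is $\onenorm{\vek{v}_Y}$ (for nonnegative messages), the Hadamard product reduces to a scalar multiplication, turning an $O(|{\cal{C}}|^2)$ operation into an $O(|{\cal{C}}|)$ one.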
The messages corresponding to a one-hot variable binding, or the possible bindings to a variable, are sparse vectors in the local infrastructure, but dense vectors in the Tensorflow and Theano versions, to allow use of GPU implementations of multiplication of dense vectors and sparse matrices. (The implementation also supports grouping examples into minibatches, in which case the dense vectors become dense matrices with a number of rows equal to minibatch size.) TensorLog compiles query response functions on demand, i.e., only as needed to answer queries or train. In TensorLog the parameters $\Theta$ are partitioned by the predicate they are associated with, making it possible to learn parameters for any selected subset of database predicates, while keeping the remainder fixed. \textit{Typed predicates.} One practically important extension to the language for the Tensorflow and Theano targets was to include machinery for declaring types for the arguments of database predicates, and inferring these types for logic programs: for instance, for the sample program of Figure~\ref{fig:qatheory}, one might include declarations like \cd{actedIn(actor,film)} or \cd{indicatesLabel(ngram,questionLabel)}. Typing reduces the size of the message vectors by a large constant factor, which increases the potential minibatch size and speeds up run-time by a similar factor. \begin{figure} \begin{small} \begin{tabbing}1234\=1234\=1234\=1234\=1234\=1234\=1234\=\kill tlog = tensorlog.simple.Compiler(db=''data.db'', prog=''rules.tlog'') \\ train\_data = tlog.load\_dataset(''train.exam'') \\ test\_data = tlog.load\_dataset(''test.exam'') \\ \# \textit{data is stored as a dictionary mapping a function specification, like $p_\cd{io}$,} \\ \# \textit{to a pair X, Y.
The rows of X are possible inputs to $f^p_\cd{io}$, and the rows of} \\ \# \textit{Y are desired outputs.} \\ function\_spec = train\_data.keys()[0] \\ \textit{\# assume only one function spec} \\ X,Y = train\_data[function\_spec] \\ ~\\ \textit{\# construct a tensorflow version of the loss function, and function used for inference} \\ unregularized\_loss = tlog.loss(function\_spec) \\ f = tlog.inference(function\_spec) \\ \# \textit{add regularization terms to the loss} \\ regularized\_loss = unregularized\_loss\\ for weight in tlog.trainable\_db\_variables(function\_spec): \\ \> regularized\_loss = regularized\_loss + tf.reduce\_sum(tf.abs(weight))*0.01 \# \textit{L1 penalty}\\ ~\\ \# \textit{set up optimizer and inputs to the optimizer}\\ optimizer = tf.train.AdagradOptimizer(rate) \\ train\_step = optimizer.minimize(regularized\_loss)\\ \# \textit{inputs are a dictionary, with keys that name the appropriate variables used in the loss function}\\ train\_step\_input = \{\} \\ train\_step\_input[tlog.input\_placeholder\_name(function\_spec)] = X \\ train\_step\_input[tlog.target\_output\_placeholder\_name(function\_spec)] = Y \\ ~\\ \# \textit{run the optimizer for 10 epochs}\\ session = tf.Session()\\ session.run(tf.global\_variables\_initializer())\\ for i in range(10):\\ \>session.run(train\_step, feed\_dict=train\_step\_input)\\ ~\\ \# \textit{now run the learned function on some new data}\\ result = session.run(f, feed\_dict=\{tlog.input\_placeholder\_name(function\_spec): X2\}) \end{tabbing} \end{small} \caption{Sample code for using TensorLog within Tensorflow. This code minimizes an alternative version of the loss function which includes an L1 penalty on the weights.} \label{fig:tfcode} \end{figure} \textit{Constraining the optimizer.} TensorLog's learning changes the numeric score $\theta_f$ of every soft KG fact $f$ using gradient descent.
Under the proof-counting semantics used in TensorLog, a fact with a score of $\theta_f>1$ could be semantically meaningful: for instance for $f=\cd{costar(ginger\_rogers,fred\_astaire)}$ one might plausibly set $\theta_f$ to the number of movies those actors appeared in together. However it is not semantically meaningful to allow $\theta_f$ to be negative. To prevent this, before learning, for each KG parameter $\theta_f$, we replace each occurrence of $\theta_f$ with $h(\tilde{\theta}_f)$ for the ``softplus'' function $h(x)=\ln(1+e^x)$, where $\tilde{\theta}_f \equiv h^{-1}(\theta_f)$. Unconstrained optimization is then performed to optimize the value of $\tilde{\theta}_f$ to some $\tilde{\theta}^*_f$. After learning, we update $\theta_f$ to be $h(\tilde{\theta}_f^*)$, which is always non-negative. \textit{Regularization.} By default, TensorLog trains to minimize unregularized cross-entropy loss. (Following common practice in deep learning, the default loss function replaces the conventional normalizer of Equation~\ref{eq:normalize} with a softmax normalization.) However, because modern deep-learning frameworks are quite powerful, it is relatively easy to use the cross-compiled functions produced by TensorLog in slight variants of this learning problem---often this requires only a few lines of code. For instance, Figure~\ref{fig:tfcode} illustrates how to add L1-regularization to TensorLog's loss function (and then train) using the Tensorflow backend. \textit{Extension to multi-objective learning.} It is also relatively easy to extend TensorLog in other ways. We will discuss several possible extensions which we have not, as yet, experimented with extensively, although we have verified that all can be implemented in the current framework. \begin{sloppypar}For learning, TensorLog's training data consists of a set of queries $p(c_1,Y),\ldots,p(c_m,Y)$, and a corresponding set of desired outputs $\vek{v}_{Y|c_1},\ldots,\vek{v}_{Y|c_m}$.
It is possible to train with examples of multiple predicates: for instance, with the example program of Figure~\ref{fig:qatheory}, one could include training examples for both \cd{answer} and \cd{matches}. \end{sloppypar} \textit{Alternative semantics for query responses.} One natural extension would address a limitation of the SLP semantics, namely, that the weighting of answers relative to a query sometimes leads to a loss of information. For example, suppose the answers to \cd{father(joe,Y)} are two facts \cd{father(joe,harry)} and \cd{father(joe,fred)}, each with weight 0.5. This answer does not distinguish between a world in which \cd{joe}'s paternity is uncertain, and a world in which \cd{joe} has two fathers. One possible solution is to learn parameters that set an appropriate soft threshold on each element of $w_Q$, e.g., to redefine $f^p$ as \[ f^p(\vek{u}) = \textit{sigmoid}(g^p(\vek{u}) + b^p) \] where $b^p$ is a bias term. The code required to do this for Tensorflow is below: \begin{tabbing}1234\=\kill \>target = tlog.target\_output\_placeholder(function\_spec)\\ \>g = tlog.proof\_count(function\_spec) \# \textit{$g^p$, computes $w_Q$}\\ \>bias = tf.Variable(0.0, dtype=tf.float32, trainable=True) \\ \>f = tf.sigmoid(g + bias) \# \textit{function used for inference} \\ \>unregularized\_loss = tf.nn.sigmoid\_cross\_entropy\_with\_logits(g+bias,target) \end{tabbing} This extension illustrates an advantage of being able to embed TensorLog inferences in a deep network. \textit{Extension to call out to the host infrastructure.} A second extension is to allow TensorLog functions to ``call out'' to the backend language. Suppose, for example, we wish to replace the \cd{classification} predicate in the example program of Figure~\ref{fig:qatheory} with a Tensorflow model, e.g., a multilayer perceptron, and that \cd{buildMLP(q)} is a function that constructs an expression which evaluates the MLP on input \cd{q}.
We can instruct the compiler to include this model in place of the usual function $g^\cd{classification}_\cd{io}$ as follows: \begin{tabbing}1234\=\kill \>plugins = tensorlog.program.Plugins() \\ \>plugins.define(''classification/io'', buildMLP) \\ \>tlog = simple.Compiler(db=''data.db'', prog=''rules.tlog'', plugins=plugins) \\ \end{tabbing} To date we have not experimentally explored this capability in depth; however, it would appear to be very useful to be able to write logical rules over arbitrary neurally-defined low-level predicates, rather than merely over KB facts. We note that the compilation approach also makes it easy to export a TensorLog predicate (e.g., the \cd{answer} predicate defined by the logic) to a deep learner, as a function which maps a question to possible answers and their confidences. This might be useful in building a still more complex non-logical model (e.g., a dialog agent which makes use of question-answering as a subroutine). \section{Related Work} \subsection{Hybrid logical/neural systems} There is a long tradition of embedding logical expressions in neural networks for the purpose of learning, but generally this is done indirectly, by conversion of the logic to a boolean formula, rather than by developing a differentiable theorem-proving mechanism, as considered here. Embedding logic may lead to a useful architecture \cite{TowellAAAI90} or regularizer \cite{riedelinjecting2015,hu2016harnessing}. More recently \cite{rocktaschel2016learning} have proposed a differentiable theorem prover, in which a proof for an example is unrolled into a network. Their system includes representation-learning as a component, as well as a template-instantiation approach (similar to \cite{wang2014structure}), allowing structure learning as well. However, published experiments with the system have been limited to very small datasets.
Another recent paper \cite{DBLP:journals/corr/AndreasRDK16} describes a system in which non-logical but compositionally defined expressions are converted to neural components for question-answering tasks. \subsection{Explicitly grounded probabilistic first-order languages} Many first-order probabilistic models are implemented by ``grounding'', i.e., conversion to a more traditional representation. In the context of a deductive DB, a rule can be considered as a universally quantified disjunction: for instance, the rule \[ \cd{p(X,Y) :- q(Y,Z),r(Z,Y). } \] is equivalent to \[ \forall x \in {\cal{C}}, y \in {\cal{C}}, z \in {\cal{C}} : \cd{p}(x,y) \vee \neg \cd{q}(y,z) \vee \neg \cd{r}(z,y) \] For example, Markov logic networks (MLNs) are a widely-used probabilistic first-order model \cite{RichardsonMLJ2006} in which a Bernoulli random variable is associated with each \emph{potential} ground database fact (e.g., in the binary-predicate case, there would be a random variable for each possible $p(a,b)$ where $a$ and $b$ are any constants in the database and $p$ is any binary predicate) and each ground instance of a clause is a factor. The Markov field built by an MLN is hence of size $O(|{\cal{C}}|^2)$ for binary predicates, which is much larger than the factor graphs used by TensorLog, which are of size linear in the size of the theory. In our experiments we compare to ProPPR, which has been elsewhere compared extensively to MLNs. Inference on the Markov field can also be expensive, which motivated the development of probabilistic similarity logic (PSL) \cite{brocheler2012probabilistic}, an MLN variant which uses a more tractable hinge loss, as well as lifted relational neural networks \cite{DBLP:journals/corr/SourekAZK15} and logic tensor networks \cite{serafini2016logic}, two recent models which ground first-order theories into neural networks. However, any grounded model for a first-order theory can be very large, limiting the scalability of such techniques.
\subsection{Stochastic logic programs and ProPPR} \label{sec:SLPs} As noted above, TensorLog is very closely related to stochastic logic programs (SLPs) \cite{DBLP:journals/ml/Cussens01}. In an SLP, a probabilistic process is associated with a top-down theorem-prover: i.e., each clause $r$ used in a derivation has an associated probability $\theta_{r}$. Let $N(r,E)$ be the number of times $r$ was used in deriving the explanation $E$: then in SLPs, \( \Pr_\cd{SLP}(f) = \frac{1}{Z} \sum_{E\in{\cal E}x(f)} \prod_r \theta_r^{N(r,E)} \). The same probability distribution can be generated by TensorLog if (1) for each rule $r$, the body of $r$ is prefixed with the literals \cd{assign(RuleId,$r$),weighted(RuleId)}, where $r$ is a unique identifier for the rule and (2) $\Theta$ is constructed so that $\theta_f=1$ for ordinary database facts $f$, and $\theta_\cd{weighted(r)}=\theta'_\cd{r}$, where $\Theta'$ are the parameters for an SLP. SLPs can be \trm{normalized} or \trm{unnormalized}; in normalized SLPs, $\Theta$ is defined so that for each set $S_p$ of clauses with the same predicate symbol $p$ in the head, $\sum_{r\in{}S_p} \theta_r=1$. TensorLog can represent both normalized and unnormalized SLPs (although clearly learning must be appropriately constrained to learn parameters for normalized SLPs). Normalized SLPs generalize probabilistic context-free grammars, and unnormalized SLPs can express Bayesian networks or Markov random fields \cite{DBLP:journals/ml/Cussens01}. ProPPR \cite{wang2013programming} is a variant of SLPs in which (1) the stochastic proof-generation process is augmented with a reset, and (2) the transitional probabilities are based on a normalized soft-thresholded linear weighting of features. The first extension to SLPs can be easily modeled in TensorLog, but the second cannot: the equivalent of ProPPR's clause-specific features can be incorporated, but they are globally normalized, not locally normalized as in ProPPR.
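As an illustration of the SLP distribution above, the following sketch (hypothetical helper names, not TensorLog or SLP-system code) computes the unnormalized score $\sum_{E} \prod_r \theta_r^{N(r,E)}$ of a fact from a list of its explanations, each given as the list of rule identifiers used in the derivation:

```python
from collections import Counter

def slp_score(explanations, theta):
    """Unnormalized SLP score of a fact: sum over its explanations E of
    prod_r theta[r] ** N(r, E), where N(r, E) counts uses of rule r in E."""
    total = 0.0
    for expl in explanations:
        prod = 1.0
        for rule, n in Counter(expl).items():
            prod *= theta[rule] ** n
        total += prod
    return total

theta = {"r1": 0.5, "r2": 0.25}        # hypothetical rule probabilities
explanations = [["r1", "r1"], ["r2"]]  # two derivations of the same fact
score = slp_score(explanations, theta)  # 0.5**2 + 0.25 = 0.5
```

Dividing such scores by the normalizer $Z$ (the sum of scores over all derivable facts) yields $\Pr_\cd{SLP}(f)$.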
ProPPR also includes an approximate grounding procedure which generates networks of bounded size. Asymptotic analysis suggests that ProPPR should be faster for very large databases and small numbers of training examples (assuming moderate values of $\epsilon$ and $\alpha$ are feasible to use), but that TensorLog should be faster with large numbers of training examples and moderate-sized databases. \section{Experiments} \newcommand{\bst}[1]{\textbf{#1}} \begin{table} \begin{center} \begin{tabular}{l|rr} \hline & \multicolumn{2}{c}{Social Influence Task} \\ \hline ProbLog2 & 20 nodes & 40-50 sec \\ TensorLog & 3327 nodes & \bst{9.2 msec} \\ \hline \end{tabular} \caption{Comparison to ProbLog2 on the ``friends and smokers'' inference task.} \label{tab:smokers} ~\\ ~\\ \begin{tabular}[c]{l|rrr} \hline & \multicolumn{3}{c}{Path-finding} \\ & \multicolumn{1}{c}{Size} & \multicolumn{1}{c}{Time} & \multicolumn{1}{c}{Acc} \\ \hline ProbLog2 & 16x16 grid, $d=10$ & 100-120 sec & \\ \hline TensorLog & 16x16 grid, $d=10$ & \bst{2.1 msec} & \\ & 64x64 grid, $d=99$ & {2.2 msec} & \\ \cline{2-3} \textit{trained} & 16x16 grid, $d=10$ & 6.2 msec & 99.89\% \\ \hline \end{tabular}\begin{minipage}[c]{0.3\linewidth} \includegraphics[width=\linewidth]{./visualize-6x6.png} \end{minipage} \caption{Comparison to ProbLog2 on path-finding in a grid.} \label{tab:grid} ~\\ ~\\ \begin{tabular}{cc|cc|cc|cc} \hline Grid Size & Max Depth & \multicolumn{2}{c}{\# Graph Nodes} & \multicolumn{2}{c}{Acc} & \multicolumn{2}{c}{Time (30 epochs)} \\ & & Local & TF & Local & TF & Local & TF \\ \hline 16 & 10 & 68 & 2696 & \bst{99.9} & 97.2 & 37.6 sec & \bst{1.1 sec} \\ 18 & 12 & 80 & 3164 & 93.9 & \bst{96.9} & 126.1 sec & \bst{1.8 sec}\\ 20 & 14 & 92 & 3632 & 25.2 & \bst{99.1} & 144.9 sec & \bst{2.8 sec}\\ 22 & 16 & 104 & 4100 & 8.6 & \bst{98.4} & 83.8 sec & \bst{4.2 sec}\\ 24 & 18 & 116 & 4568 & \bst{2.4} & 0.0 & 611.7 sec & \bst{6.3 sec}\\ \hline \end{tabular} \caption{Learning for the path-finding task with
local and Tensorflow (TF) backends.} \label{tab:gridlearn} ~\\ ~\\ \begin{tabular}{l|r|r} \hline & \multicolumn{1}{c|}{ProPPR} & \multicolumn{1}{c}{TensorLog} \\ \hline CORA (13k facts, 10 rules) & AUC 83.2 & AUC \bst{97.6} \\ \hline UMLS (5k facts, 226 rules) & acc 49.8 & acc \bst{52.5} \\ \hline Wordnet (276k facts) & & \\ ~~Hypernym (46 rules) & acc \bst{93.4} & acc 93.3 \\ ~~Hyponym (46 rules) & acc 92.1 & acc \bst{92.8} \\ \hline \end{tabular} \caption{Comparison to ProPPR on relational learning tasks.} \label{tab:proppr} \end{center} \end{table} \subsection{Inference tasks} \label{sec:timing} We compared TensorLog's inference time (using the local infrastructure) with ProbLog2, a mature probabilistic logic programming system which implements the tuple independence semantics, on two inference problems described in \cite{fierens2016}. One is a version of the ``friends and smokers'' problem, a simplified model of social influence. In \cite{fierens2016} small graphs were artificially generated using a preferential attachment model, the details of which were not described; instead we used a small existing network dataset\footnote{The Citeseer dataset from \cite{DBLP:conf/asunam/LinC10}.} which displays preferential-attachment statistics. The inference times we report are for the same inference tasks, for a subset of 120 randomly-selected entities. As shown in Table~\ref{tab:smokers}, in spite of querying six times as many entities, TensorLog is many times faster. We also compare on a path-finding task from \cite{fierens2016}, which is intended to test performance on deeply recursive tasks. The goal here is to compute fixed-depth transitive closure on a grid: in \cite{fierens2016} a 16-by-16 grid was used, with a maximum path length of 10. Again TensorLog shows much faster performance, and better scalability, as shown in Table~\ref{tab:grid} by run times on a larger 64-by-64 grid. We set TensorLog's maximum path length to 99 for the larger grid.
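For intuition about this task, fixed-depth transitive closure on a grid can be computed by iterated neighbor propagation, the sparse analogue of the vector-matrix products TensorLog performs. The sketch below is our own self-contained illustration (not TensorLog code), using a 4-neighbor grid as an assumption about the grid's edge structure:

```python
def grid_neighbors(n):
    """Adjacency lists for an n-by-n grid with 4-neighbor edges."""
    nbrs = {}
    for r in range(n):
        for c in range(n):
            out = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    out.append((rr, cc))
            nbrs[(r, c)] = out
    return nbrs

def reachable(nbrs, start, depth):
    """Cells reachable from `start` in at most `depth` steps; each round is
    the sparse analogue of one vector-matrix product over the edge relation."""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {v for u in frontier for v in nbrs[u]} - seen
        seen |= frontier
    return seen

nbrs = grid_neighbors(16)
cells = reachable(nbrs, (0, 0), 10)  # Manhattan ball of radius 10 at a corner
```

In a probabilistic setting the boolean frontier becomes a weight vector and each round a multiplication by the (weighted) adjacency matrix, which is why run time grows only linearly with the depth bound.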
\subsection{Learning Tasks} We also compared experimentally with ProPPR on several standard benchmark learning tasks. We chose two traditional relational learning tasks on which ProPPR outperformed plausible competitors, such as MLNs. One was the CORA citation-matching task (from \cite{wang2013programming}) with hand-constructed rules.\footnote{We replicated the experiments with the most recent version of ProPPR, obtaining a result slightly higher than the 2013 version's published AUC of 80.0.} A second was learning the most common relation, ``affects'', from UMLS, using a rule set learned by the algorithm of \cite{wang2014structure}. Finally, motivated by recent comparisons between ProPPR and embedding-based approaches to knowledge-base completion \cite{Wang-Cohen:2016:IJCAI}, we also compared to ProPPR on two relation-prediction tasks involving WordNet, again using rules from the (non-recursive) theories used in \cite{Wang-Cohen:2016:IJCAI}. In all of these tasks parameters are learned on a separate training set. For TensorLog's learner, we used the local infrastructure with the default loss function (unregularized cross-entropy loss), using a fixed-rate gradient descent learner with the learning rate set to 0.1, and 30 epochs.\footnote{Thirty epochs approximately matches ProPPR's runtime on a single-threaded machine.} We also used the default parameters for ProPPR's learning. Table~\ref{tab:proppr} shows that the accuracy of the two systems after learning is quite comparable, even with a rather simplistic learning scheme. ProPPR, of course, is not well suited to tight integration with deep learners. \subsection{Path-finding after learning} The results of Section~\ref{sec:timing} demonstrate that TensorLog's approximation to ProbLog2's semantics is efficient, but not that it is useful.
To demonstrate that TensorLog can efficiently and usefully approximate deeply recursive concepts, we posed a learning task on the 16-by-16 grid, with a maximum depth of 10, and trained TensorLog to approximate the distribution for this task. The dataset consists of 256 grid cells connected by 2116 edges, so there are 256 example queries of the form \cd{path(a,X)} where $a$ is a particular grid cell. We picked 1/3 of these queries as test, and the remainder as train, and trained so that the single positive answer to the query \cd{path(a,X)} is the extreme corner closest to \cd{a}---i.e., one of the corners (1,1), (1,16), (16,1) or (16,16). We set the initial weights of the edges uniformly to 0.2. Training for 30 epochs with the local backend and a fixed-rate gradient descent learner, using a learning rate of 0.01, brings the accuracy from 0\% to 99.89\% for test cases (averaged over 10 trials, with different train/test splits). Learning takes less than 1.5 sec/epoch. After learning, query times are still quite fast, as shown in the table. The table also includes a visualization of the learned weights for a small 6x6 grid. For every pair of adjacent grid cells $u,v$, there are two weights to learn, one for the edge from $u$ to $v$ and one for its converse. For each weight pair, we show a single directed edge (the heavy blue squares are the arrows) colored by the magnitude of the difference. We observe that ProbLog2, in addition to implementing the full tuple-independence semantics, implements a much more expressive logic than considered here, including a large portion of full Prolog, while in contrast TensorLog includes only a subset of Datalog. So to some extent this comparison is unfair. We also observe that although this task seems simple, it is quite difficult for probabilistic logics, because deeply recursive theories lead to large, deep proofs.
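For concreteness, the nearest-corner training labels just described can be generated as follows (our own sketch; the tie-breaking rule for cells equidistant from two corners is an assumption, since the text does not specify one):

```python
def nearest_corner(cell, n=16):
    """Target answer for path(a, X): the extreme corner closest to cell `a`
    in Manhattan distance, with ties broken by list order (an assumption)."""
    r, c = cell
    corners = [(1, 1), (1, n), (n, 1), (n, n)]
    return min(corners, key=lambda k: abs(k[0] - r) + abs(k[1] - c))

# Labels for all 256 queries path(a, X) on the 16-by-16 grid.
labels = {(r, c): nearest_corner((r, c))
          for r in range(1, 17) for c in range(1, 17)}
```

Under this labeling the learner must effectively discover edge weights that route probability mass from each cell toward its closest corner.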
While TensorLog's inference schemes scale well on this task, it is still challenging to optimize the parameters, especially for larger grid sizes. One problem is that unrolling the inference leads to very large graphs, especially after they are compiled to the relatively fine-grained operations used in deep-learning infrastructure. Table~\ref{tab:gridlearn} shows the size of the networks after compilation to Tensorflow for various extensions of the 16-by-16 depth 10 task. Although growth is linear in depth, the constants are large: e.g., the Tensorflow 64-by-64 depth 99 network does not fit in memory for a 4Gb GPU. A second problem is that the constructed networks are very deep, which leads to problems in optimization. For the smaller task, the local optimizer (which is a fixed-rate gradient descent method) required careful tuning of the initial weights and learning rate to reliably converge. The size and complexity of this task suggested a second set of experiments, where we varied the task complexity, while fixing the parameters of two optimizers. For the local optimizer we fixed the parameters to those used for the 16-by-16 depth 10 task, and for the Tensorflow backend, we used the \cd{Adagrad\-Optimizer} with a default learning rate of 1.0, running for 30 epochs. The results are shown in Table~\ref{tab:gridlearn} (averaged over 10 trials for each datapoint), and they illustrate several of the advantages of using a mature deep-learning framework as the backend of TensorLog. \begin{itemize} \item In general learning is many times faster for the Tensorflow backend, which uses a GPU processor, than using the local infrastructure.\footnote{Learning times for the local infrastructure are quite variable for the larger sizes, because numerical instabilities often cause the optimizer to fail. In computing times we discard runs where there is overflow but not when there is underflow, which is harder to detect.
The high variance accounts for the anomalously low average time for grid size 22.} \item Although they do not completely eliminate the need for hyperparameter tuning, the more sophisticated optimizers available in Tensorflow do appear to be more robust. In particular, Adagrad performs well up to a depth of around 16, while the fixed-rate optimizer performs well only for depths 10 and 12. \end{itemize} We conjecture that good performance on larger grid sizes would require use of gradient clipping. \begin{table} \begin{center} \begin{tabular}{cc|cc|ccc} \hline \multicolumn{2}{c|}{Original KB} & \multicolumn{2}{c|}{Extended KB} & \multicolumn{3}{c}{Num Examples} \\ Num Tuples & Num Relations & Num Tuples & Num Relations & Train & Devel & Test \\ \hline 421,243 & 10 & 1,362,670 & 12 & 96,182 & 20,000 & 10,000 \\ \hline \end{tabular} \caption{Statistics concerning the WikiMovies dataset.} ~\\ ~\\ \begin{tabular}{lcr} \hline Method & Accuracy & Time per epoch \\ \hline Subgraph/question embedding & 93.5\% & \\ Key-value memory network & 93.9\% & \\ \hline TensorLog (1,000 training examples) & 89.4\% & 6.1 sec \\ TensorLog (10,000 training examples) & 94.8\% & 1.7 min \\ TensorLog (96,182 training examples) & \bst{95.0\%} & 49.5 min \\ \hline \end{tabular} \end{center} \caption{Experiments with the WikiMovies dataset. The first two results are taken from \cite{miller-EtAl:2016:EMNLP2016}.} \label{tab:wikimovies} \end{table} \subsection{Answering Natural-Language Questions Against a KB} As a larger-scale experiment, we used the WikiMovies question-answering task proposed by \cite{miller-EtAl:2016:EMNLP2016}. This task is similar to the one shown in Figure~\ref{fig:qatheory}. The KB consists of over 420k tuples containing information about 10 relations and 16k movies. Some sample questions with their answers are below, with double quotes identifying KB entities. \begin{itemize} \item Question: Who acted in the movie Wise Guys?
\\ \textit{Answers: ``Harvey Keitel'', ``Danny DeVito'', ``Joe Piscopo'', \ldots} \item Question: what is a film written by Luke Ricci? \\ \textit{Answer: ``How to be a Serial Killer''} \end{itemize} We encoded the questions into the KB by extending it with two additional relations: \cd{mentionsEntity(Q,E)}, which is true if question \cd{Q} mentions entity \cd{E}, and \cd{hasFeature(Q,W)}, which is true if question \cd{Q} contains feature \cd{W}. The entities mentioned in a question were extracted by looking for every longest match to a name in the KB. The features of a question are simply the words in the question (minus a short stoplist). The theory is a variant of the one given as an example in Figure~\ref{fig:qatheory}. The main difference is that because the simple longest-exact-match heuristic described above identifies entities accurately for this dataset, we made \cd{mentionsEntity} a hard KB predicate. We also extended the theory to handle questions with answers that are either movie-related entities (like the actors in the first example question) or movies (as in the second example). Finally, we simplified the question-classification step slightly. The final theory contains two rules and two ``soft'' unary relations \cd{indicates\-Question\-Type$_{R,1}$}, \cd{indicates\-Question\-Type$_{R,2}$} for each relation $R$ in the original movie KB. For example, for the relation \cd{directedBy} the theory has the two rules \begin{tabbing}1234\=1234\=\kill \>\cd{answer(Question,Movie) :-} \\ \> \> \cd{mentionsEntity(Question,Entity), directedBy(Movie,Entity),}\\ \> \> \cd{hasFeature(Question,Word), indicatesQuestionType$_{\rm directedBy,1}$(Word)}\\ \>\cd{answer(Question,Entity) :-} \\ \> \> \cd{mentionsEntity(Question,Movie), directedBy(Movie,Entity),}\\ \> \> \cd{hasFeature(Question,Word), indicatesQuestionType$_{\rm directedBy,2}$(Word)}\\ \end{tabbing} The last line of each rule acts as a linear classifier for that rule.
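The encoding described above (longest-match entity extraction for \cd{mentionsEntity}, stoplisted words for \cd{hasFeature}) can be sketched as follows; the helper is hypothetical and the actual preprocessing details are not given in the text:

```python
def encode_question(question, kb_names, stoplist):
    """Build mentionsEntity and hasFeature facts for one question.
    Entities: every longest match against a KB name; features: the
    remaining words, minus a short stoplist (a sketch, not the real code)."""
    words = question.lower().split()
    entities, features, i = [], [], 0
    while i < len(words):
        # Greedily try the longest span starting at word i that names a KB entity.
        for j in range(len(words), i, -1):
            span = " ".join(words[i:j])
            if span in kb_names:
                entities.append(span)
                i = j
                break
        else:
            if words[i] not in stoplist:
                features.append(words[i])
            i += 1
    return entities, features

kb = {"wise guys", "luke ricci"}  # hypothetical lowercase KB name set
ents, feats = encode_question("who acted in the movie Wise Guys",
                              kb, {"who", "in", "the"})
```

Here \cd{ents} would populate \cd{mentionsEntity} facts and \cd{feats} the \cd{hasFeature} facts for the question.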
For efficiency we used three distinct types of entities (question ids, entities from the original KB, and word features) and the Tensorflow backend, with minibatches of size 100 and an Adagrad optimizer with a learning rate of 0.1, running for 20 epochs, and no regularization. We compare accuracy results with two prior neural-network based methods which have been applied to this task. As shown in Table~\ref{tab:wikimovies}, TensorLog performs better than the prior state-of-the-art on this task, and is quite efficient. \section{Concluding Remarks} In this paper, we described a scheme to integrate probabilistic logical reasoning with the powerful infrastructure that has been developed for deep learning. The end goal is to enable deep learners to incorporate first-order probabilistic KBs, and conversely, to enable probabilistic reasoning over the outputs of deep learners. TensorLog, the system we describe here, makes this possible to do at reasonable scale using conventional neural-network platforms. This paper contains several interrelated technical contributions. First, we identified a family of probabilistic deductive databases (PrDDBs) called \emph{polytree-limited stochastic deductive knowledge graphs} (ptree-SDKGs) which are tractable, but still reasonably expressive. This language is a variant of SLPs, and it is maximally expressive, in that one cannot drop the polytree restriction, or switch to a possible-worlds semantics, without making inference intractable. We argue above that logics which are not tractable (i.e., are \#P or worse in complexity) are unlikely to be practically incorporated into neural networks. Second, we presented an algorithm for performing inference for ptree-SDKGs, based on belief propagation. Computationally, the algorithm is quite efficient.
Assuming the matrices $\textbf{M}_p$ exist, the additional memory needed for the factor-graph $G_r$ is linear in the size of the clause $r$, and hence the compilation is linear in the theory size and recursion depth. To our knowledge use of BP for first-order inference in this setting is novel. Finally, we present an implementation of this logic, called TensorLog. The implementation makes it possible to both call TensorLog inference within neural models, or conversely, to call neural models within TensorLog. The current implementation of TensorLog includes a number of restrictions. Two backends are implemented, one for Tensorflow and one for Theano, but the Tensorflow backend has been more extensively tested and evaluated. We are also exploring compilation to PyTorch\footnote{pytorch.org}, which supports dynamic networks. We also plan to implement support for more stable optimization (e.g., gradient clipping), and better support for debugging. As noted above, TensorLog also makes it possible to replace components of the logic program (e.g., the \cd{classification} or \cd{matches} predicate) with submodels learned in the deep-learning infrastructure. Alternatively, one can export an \cd{answer} predicate defined by the logic to a deep learner, as a function which maps a question to possible answers and their confidences; this might be useful in building a still more complex non-logical model (e.g., a dialog agent which makes use of question-answering as a subroutine). In future work we hope to explore these capabilities. We also note that although the experiments in this paper assume that theories are given, the problem of learning programs in TensorLog is also of great interest. Some early results from the authors on this problem are discussed elsewhere \cite{yang2017differentiable}. \acks{Thanks to William Wang for providing some of the datasets used here, and to him and many other colleagues who contributed technical discussions and advice.
The author is grateful to Google for financial support, and also to NSF for its support of his work via grants CCF-1414030 and IIS-1250956.} \newpage
\section{Introduction}\label{se:intro} \citet{Bor:CRAS1921} proposed a game, later dubbed \emph{Colonel Blotto game} by \citet{GroWag:RAND1950}. In this game Colonel Blotto and his enemy each have a given (possibly unequal) amount of resources, that have to be allocated to $n$ battlefields. The side that allocates more resources to field $j$ is the winner in this field and gains a positive amount $a_{j}$ which the other side loses. The war is won by the army that obtains the largest total gain. The relevance of Borel's precursory insight in the theory of games was discussed in an issue of \emph{Econometrica} that contains three papers by Borel, including the translation of the 1921 paper \citep{Bor:E1953}, two notes by \citet{Fre:E1953a,Fre:E1953b} and one by \citet{von:E1953}. \citet{BorVil:GV1938} proposed a solution to the game when the two enemies have an equal amount of resources and there are $n=3$ battlefields. The problem was then taken up by several authors, including several other famous mathematicians. \citet{GroWag:RAND1950, Gro:RAND1950} provided the solution for a generic $n$, keeping the amount of resources equal and the gain in each battlefield constant ($a_{i}=a_{j}$). \citet{Bla:NRLQ1954,Bla:NRLQ1958} considered the case where the payoff to Colonel Blotto in each battlefield is an increasing function of his resources and a decreasing function of his enemy's resources. \citet{Bel:SIAMR1969} showed the use of dynamic programming to solve the Blotto game. \citet{ShuWeb:NRLQ1981} studied a more complex model where there exist complementarities among the fields being defended. In this case the total payoff depends on the relative value of capturing various configurations of targets. \citet{Rob:ET2006} used $n$-copulas to determine the mixed equilibrium of the game under general conditions on the amount of resources for each player.
His analysis is based on an interesting analogy with the theory of all-pay auctions (see also \citet{Wei:mimeo2005} for the equilibrium of the game and \citet{SahPer:ET2006} for the connection between all-pay auctions and allocation of electoral promises). \citet{Har:IJGT2008} considered a discrete version of the Blotto game, where player A has $A$ alabaster marbles and player B has $B$ black marbles. The players are to distribute their marbles into $K$ urns. One urn is chosen at random and the player with the largest number of marbles in the chosen urn wins the game. In another version of the game, called \emph{Colonel Lotto game}, each player has $K$ urns where she can distribute her marbles. Two urns (one for each player) are chosen at random and the urn with the larger number of marbles determines the winner. The discrete Colonel Blotto game and the Colonel Lotto game have the same value. In a third version, called \emph{General Lotto game}, given $a,b>0$, player A chooses a nonnegative integer-valued random variable $X$ with expectation $\mathbb{E}[X] = a$ and player B chooses a nonnegative integer-valued random variable $Y$ with expectation $\mathbb{E}[Y] = b$. The payoff for A is $\mathbb{P}(X >Y)-\mathbb{P}(X <Y)$, where $X$ and $Y$ are assumed independent. The value of the game and the optimal strategies are determined. Other authors who dealt with the Blotto game and its applications include, for instance, \citet{Tuk:E1949, SioWol:CTG3PUP1957, Fri:OR1958, CooRes:SIAMR1967, Pen:OR1971, Heu:TCS2001, Kva:JET2007, AdaMat:EL2009, Pow:GEB2009, GolPag:PC2009} and many more. We refer to \citet{KovRob:CESIFO2010, ChoKovShe:CESIFO2010} for some history of the Colonel Blotto game and a good list of references. In this paper we deal with a stochastic version of the Colonel Blotto game, called \emph{gladiator game} by \citet{KamLukNel:AJS1984}. In their model two teams of gladiators engage in a sequence of one-to-one fights. Each gladiator has a strength parameter. 
When two gladiators fight, the ratio of their strengths determines the odds of winning. The loser dies and the winner retains his strength and is ready for a new duel. The team that is wiped out loses. Each team chooses once and for all at the beginning of the game the order in which gladiators go to the arena. We construct a zero-sum two-team game where each team also has to allocate a fixed total strength among its players. The payoff is linear in the probability of winning. We find the Nash equilibria and compute the value of the game. The main results are: \begin{enumerate}[(i)] \item the order according to which gladiators fight has no relevance, moreover knowing the order chosen by the opponent team does not provide any advantage; \item the stronger team always splits its strength uniformly among its gladiators, whereas the weaker team splits the strength uniformly among a subset of its gladiators; \item when the two teams have roughly equal total strengths, the optimal strategy for the weaker team is to divide its total strength equally among all its members; \item when the total strength of one team is much larger than that of the other, the weaker team should concentrate all the strength on a single member. \end{enumerate} \citet{DeSDeMDeB:DAM2006} consider a dice game that has some analogies with ours. Both players can choose one of many dice having $n$ faces and such that the total number of pips on the faces of the die is $\sigma$. The two dice are tossed and the player with the highest score wins a dollar. The model described below for the probability that gladiator $i$ defeats $j$, is equivalent, with different parametrization, to the well-known Rasch model in educational statistics, \citep{Ras:UChicagoP1960}, in which the probability of correct response of subject $i$ to item $j$ is $\operatorname{e}^{\alpha_i-\beta_j}/(1+\operatorname{e}^{\alpha_i-\beta_j})$ \citep[see][for a recent mathematical study of Rasch models]{Lau:RMA2008}. 
A similar model has been used also in the theory of contests proposed by \citet{Tul:TAMUP1980}, as will be described in Section~\ref{se:extensions}. Finding the Nash equilibria of the gladiator game involves an analysis of the probability of winning. The key step is a result in \citet{KamLukNel:AJS1984} that translates the calculation of this probability into an inequality involving the sum of independent but not necessarily identically distributed exponential random variables. The main theorems are demonstrated through interesting and hard probability inequalities, whose proofs are of independent interest and turned out to be more complicated than expected. Much of the paper consists of these proofs. We rely on \citet{SzeBak:PTRF2003} for some of the technical machinery. The problem is cast as a minimization problem involving convolutions of exponential variables and is solved by perturbation arguments. A key identity, derived using Laplace transforms, directs our perturbation arguments to the analysis of the modal location of Gamma convolutions. Our inequalities are related to majorization type inequalities for probabilities of the form $\mathbb{P}(Q <t)$, where $Q$ is a linear combination of Exponential or Gamma variables, that appear in \citet{BocDiaHufPer:CJS1987,DiaPer:IMSLN1990,SzeBak:PTRF2003} and in \citet{Tel:ETT1999, JorBoc:ITW2003, AbbHuaTel:ArXiv2011}. The motivation in the last three papers, and numerous others, is the performance of some wireless systems that depends on the coefficients of the linear combination $Q$. For stochastic comparisons between such linear combinations see \citet{Yu:AAP2008, Yu:B2011} and references therein. Linear combinations of exponential variables appear in various other applications. For instance \citet{LipMcC:RANDJE1987} consider a two-firm model in which learning is stochastic and the research race is divided into a finite number $N$ of stages, each having an exponential completion date.
The invention is discovered at the completion of the $N$-th stage. If the exponential times for one firm have parameters that can be controlled by the firms subject to constraints, then our results apply to the problem of best response and equilibrium allocation strategies for such races. Finally, it is well known that the first passage time from $0$ to $N$ of a birth and death process on the positive integers is distributed as a linear combination of exponential random variables, with coefficients determined by the eigenvalues of the process' generator. For a clear statement, a probabilistic proof, and further references see \citet{DiaMic:JTP2009}. This allows one to consider R\&D type races in which one can also move backwards, and applies, for example, to the study of queues, where one compares the time until different systems reach a given queue size. The paper is organized as follows. In Section~\ref{se:model} we describe the model. In Section~\ref{se:main} we determine the Nash equilibria and the value of the game for different values of the parameters. Section~\ref{se:inequalities} contains the main probability inequalities used to compute the equilibria. Section~ \ref{se:proofs} is devoted to the proofs of the main results. Section~\ref{se:monotonicity} deals with some monotonicity properties, that follow from our main result and have some interest \emph{per se}. Finally Section~\ref{se:extensions} contains some extensions and open problems. \section{The model}\label{se:model} We formalize the model described in the Introduction. Two teams of gladiators fight each other according to the following rules. Team $A$ is an ordered set $\{A_{1}, \dots, A_{m}\}$ of $m$ gladiators and team $B$ is an ordered set $\{B_{1}, \dots, B_{n}\}$ of $n$ gladiators. The numbers $m,n$ and the orders of the gladiators in the two teams are exogenously given. At any given time, only two gladiators fight, one for each team. At the end of each fight only one gladiator survives. 
In each team gladiators go to fight according to the exogenously given order. First gladiators $A_{1}$ and $B_{1}$ fight. The winner remains in the arena and fights the following gladiator of the opposing team. Assume that for $i<m$ and $j<n$ at some point, $A_{i}$ fights $B_{j}$. If $A_{i}$ wins, the following fight will be between $A_{i}$ and $B_{j+1}$; if $A_{i}$ loses, the following fight will be between $A_{i+1}$ and $B_{j}$. The process goes on until a team is wiped out. The other team is then proclaimed the winner. So if at some point, for some $i \le m$, gladiator $A_{i}$ fights $B_{n}$ and wins, then team $A$ is the winner. Symmetrically if, for some $j \le n$, $A_{m}$ fights $B_{j}$ and loses, then team $B$ is the winner. Team $A$ has total strength $c_{A}$ and team $B$ has total strength $c_{B}$. The values $c_{A}$ and $c_{B}$ are exogenously given. Before fights start the coach of each team decides how to allocate the total strength to the gladiators of the team. These decisions are simultaneous and cannot be altered during the play. Let $\boldsymbol{a}=(a_{1}, \dots, a_{m})$ and $\boldsymbol{b}=(b_{1}, \dots, b_{n})$ be the strength vectors of team $A$ and $B$, respectively. This means that in team $A$ gladiator $A_{i}$ gets strength $a_{i}$ and in team $B$ gladiator $B_{j}$ gets strength $b_{j}$. The vectors $\boldsymbol{a}, \boldsymbol{b}$ are nonnegative and such that \[ \sum_{i=1}^{m} a_{i} = c_{A}, \quad \sum_{j=1}^{n} b_{j} = c_{B}, \] namely, each coach distributes all the available strength among the gladiators of his team. When a gladiator with strength $a$ fights a gladiator with strength $b$, the first defeats the second with probability \begin{equation}\label{eq:aoveraplusb} \frac{a}{a+b}, \end{equation} all fights being independent. When a gladiator wins a fight his strength remains unaltered. The rules of the play and its parameters, i.e., the teams $A$ and $B$ and the strengths $c_{A}, c_{B}$, are common knowledge. 
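The rules above are straightforward to simulate. The following Monte Carlo sketch (our own illustrative code, not part of the paper's analysis) estimates the probability that team $A$ wins for given strength vectors, using the duel probability $a/(a+b)$ from \eqref{eq:aoveraplusb}:

```python
import random

def team_a_wins(a, b, rng):
    """Simulate one play of the gladiator game. Fights are independent, and
    the winner of each duel keeps his strength, per the rules above."""
    i = j = 0
    while i < len(a) and j < len(b):
        if rng.random() < a[i] / (a[i] + b[j]):
            j += 1   # gladiator B_j is defeated
        else:
            i += 1   # gladiator A_i is defeated
    return j == len(b)  # True iff team B is wiped out

def win_prob(a, b, trials=100_000, seed=0):
    """Monte Carlo estimate of the probability that team A wins."""
    rng = random.Random(seed)
    return sum(team_a_wins(a, b, rng) for _ in range(trials)) / trials

# With equal total strengths split evenly, the game is symmetric,
# so the estimate should be close to 1/2.
p = win_prob([1.0, 1.0], [1.0, 1.0])
```

Such simulations are useful as a sanity check on the closed-form equilibrium results derived below, e.g., a single duel with strengths $3$ and $1$ should be won by the stronger gladiator with probability $3/4$.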
Call $G_{m,n}(\boldsymbol{a},\boldsymbol{b})$ the probability that team $A$ with strength vector $\boldsymbol{a}$ wins over team $B$ with strength vector $\boldsymbol{b}$. The above model gives rise to the zero-sum two-person game \begin{equation}\label{eq:game} \mathcal{G}(m,n,c_{A},c_{B}) = \langle \mathcal{A}(m,c_{A}), \mathcal{B}(n,c_{B}), H_{m,n} \rangle \end{equation} in which team $A$ chooses $\boldsymbol{a} \in \mathcal{A}(m,c_{A})$ and $B$ chooses $\boldsymbol{b} \in \mathcal{B}(n,c_{B})$, where \begin{align} \mathcal{A}(m,c_{A})&=\left\{(a_1,\dots,a_m) \in \mathbb{R}^{m}_{+}: \sum_{i=1}^m a_i=c_{A}\right\}, \label{eq:mathcalA}\\ \mathcal{B}(n,c_{B})&=\left\{(b_1,\dots,b_n) \in \mathbb{R}^{n}_{+} : \sum_{i=1}^n b_i=c_{B}\right\}, \label{eq:mathcalB} \\ H_{m,n} &=G_{m,n} - \frac{1}{2}. \label{eq:payoff} \end{align} The payoff of team $A$ is then its probability of victory $G_{m,n}(\boldsymbol{a},\boldsymbol{b})$ minus $1/2$. We subtracted $1/2$ to make the game zero-sum. As will be shown in Remark~\ref{re:marco} below, other models with different rules of engagement for the gladiators give rise to the same zero-sum game. \section{Main results}\label{se:main} Consider the game $\mathcal{G}$ defined in \eqref{eq:game}. The action $\boldsymbol{a}^*$ is a best response against $\boldsymbol{b}$ if \[ \boldsymbol{a}^{*} \in \arg \max_{\boldsymbol{a} \in {\mathcal{A}}}H_{m,n}(\boldsymbol{a},\boldsymbol{b}). \] A pair of actions $(\boldsymbol{a}^*,\boldsymbol{b}^*)$ is a \emph{Nash equilibrium} of the game $\mathcal{G}$ if \[ H_{m,n}(\boldsymbol{a},\boldsymbol{b}^{*}) \le H_{m,n}(\boldsymbol{a}^{*},\boldsymbol{b}^{*}) \le H_{m,n}(\boldsymbol{a}^{*},\boldsymbol{b}), \quad\text{for all}\ \boldsymbol{a} \in \mathcal{A}(m,c_{A})\ \text{and}\ \boldsymbol{b} \in \mathcal{B}(n,c_{B}). 
\] A pair of actions $(\boldsymbol{a}^*,\boldsymbol{b}^*)$ is a \emph{minmax solution} of the game $\mathcal{G}$ if \[ \max_{\boldsymbol{a} \in \mathcal{A}(m,c_{A})} \min_{\boldsymbol{b} \in \mathcal{B}(n,c_{B})} H_{m,n}(\boldsymbol{a},\boldsymbol{b}) = \min_{\boldsymbol{b} \in \mathcal{B}(n,c_{B})} \max_{\boldsymbol{a} \in \mathcal{A}(m,c_{A})} H_{m,n}(\boldsymbol{a},\boldsymbol{b}) = H_{m,n}(\boldsymbol{a}^{*},\boldsymbol{b}^{*}). \] Since we are dealing with a two-person zero-sum game, Nash equilibria and minmax solutions coincide \citep[see, e.g.,][Proposition~22.2]{OsbRub:MITP1994}. The quantity $H_{m,n}(\boldsymbol{a}^{*},\boldsymbol{b}^{*})$ is called the \emph{value} of the game $\mathcal{G}$. The next theorem characterizes the structure of Nash equilibria of the game $\mathcal{G}(m,n,c_{A},c_{B})$. \begin{theorem}\label{th:Nash} Consider the game $\mathcal{G}(m,n,c_{A},c_{B})$ defined in \eqref{eq:game}. Assume that $c_{A} \le c_{B}$. \begin{enumerate}[{\rm (a)}] \item\label{it:th:Nash-a} There exists an equilibrium strategy profile $(\boldsymbol{a}^{*}, \boldsymbol{b}^{*})$ of $\mathcal{G}$ such that for some $J \subseteq \{1, \dots, m \}$ we have \begin{align}\label{eq:equila} a^{*}_{i} & = c_{A}/|J| \ \text{ for }\ i \in J, \quad a^{*}_{i} = 0 \ \text{ for }\ i \in J^{c}, \\ b^{*}_{i} & = c_{B}/n \ \text{ for }\ i \in \{1, \dots, n\}. \label{eq:equilb} \end{align} Moreover, all pure equilibria are of this form. \item\label{it:th:Nash-b} If \begin{equation}\label{eq:inequalitycasimcb} c_{B} \le \frac{n}{n-1} c_{A}, \end{equation} then $J = \{1, \dots, m \}$, so that $a_{1}^{*}=\dots=a_{m}^{*} = c_{A}/m$ and $b_{1}^{*}=\dots=b_{n}^{*}=c_{B}/n$. \item\label{it:th:Nash-c} If \begin{equation}\label{eq:inequalitycasmallercb} c_{B} \ge \frac{3n}{2(n-1)}c_{A}, \end{equation} then $J = \{ i \}$, that is, $a_{i}^{*}=c_{A}$ for some $i \in \{1,\dots,m\}$ and $a_{j}^{*}=0$ for all $j \ne i$, and $b_{1}^{*}=\dots=b_{n}^{*}=c_{B}/n$.
\item\label{it:th:Nash-d} Let $t_0=1.256431\cdots$ be the positive root of the equation $\operatorname{e}^t = 1+2t$. Then for fixed $m$, and $c_A$ and $c_B$ such that $c_B > t_0 c_A$, the same conclusion as in \eqref{it:th:Nash-c} holds if $n$ is sufficiently large. \end{enumerate} \end{theorem} Theorem~\ref{th:Nash} shows that if a vector $(\boldsymbol{a}^{*},\boldsymbol{b}^{*})$ is an equilibrium, then so is any permutation of $\boldsymbol{a}^{*}$ or $\boldsymbol{b}^{*}$. Moreover, the team with the highest total strength always divides it equally among its members, whereas the other team divides its strength equally among a subset of its members. This subset coincides with the whole team if the total strengths of the two teams are similar, and it reduces to one single gladiator if the team has a much lower strength than the other team (see Figures~\ref{fi:plotmeqn}, \ref{fi:plotfixedn}, and \ref{fi:plotfixedm}). \begin{center} FIGURES~\ref{fi:plotmeqn}, \ref{fi:plotfixedn}, AND \ref{fi:plotfixedm} ABOUT HERE \end{center} For $n=1$, i.e., when team $B$ has a single player, equal division of its strength is always team $A$'s best strategy. In order to compute the value of the game $\mathcal{G}(m,n,c_{A},c_{B})$, we need the regularized incomplete beta function \begin{equation}\label{eq:incompletebeta} I(x,\alpha,\beta) = \frac{1}{\operatorname{B}(\alpha, \beta)}\int_{0}^{x} t^{\alpha-1}(1-t)^{\beta-1} \ \mathrm{d} t, \end{equation} where \[ \operatorname{B}(\alpha, \beta)=\int_{0}^{1} t^{\alpha-1}(1-t)^{\beta-1} \ \mathrm{d} t = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}. \] When $\alpha$ and $\beta$ are integers, then \begin{equation}\label{eq:binombeta} I(x,\alpha,\beta) = \sum_{j=\alpha}^{\alpha+\beta-1} \binom{\alpha+\beta-1}{j}x^{j}(1-x)^{\alpha+\beta-1-j}. \end{equation} For properties of incomplete beta functions see, for instance, \citet{OlvLozBoiCla:NISTHMF2010}. \begin{theorem}\label{th:value} Consider the game $\mathcal{G}(m,n,c_{A},c_{B})$.
Assume that $c_{A} \le c_{B}$. \begin{enumerate}[\rm (a)] \item\label{it:th:value-a} The value of the game is \begin{equation}\label{eq:valuegeneral} \frac{1}{2}-I\left(\frac{rc_{B}}{rc_{B}+nc_{A}},r,n\right), \end{equation} where $r$ is the number of positive $a_{i}^{*}$ in the vector $\boldsymbol{a}^{*}$. In particular \item\label{it:th:value-b} if \eqref{eq:inequalitycasimcb} holds, then the value of the game is \begin{equation}\label{eq:valueequal} \frac{1}{2}-I\left(\frac{mc_{B}}{mc_{B}+nc_{A}},m,n\right), \end{equation} \item\label{it:th:value-c} if \eqref{eq:inequalitycasmallercb} holds, then the value of the game is \begin{equation}\label{eq:valueunequal} \frac{1}{2}-I\left(\frac{c_{B}}{c_{B}+nc_{A}},1,n\right). \end{equation} \end{enumerate} \end{theorem} In general, to compute the value of the game, one only needs to maximize \eqref{eq:valuegeneral} over $r=1,\ldots, m$; any maximizing $r$ gives an optimal strategy for team $A$. Figure~\ref{fi:plotvariousr} shows the value of the game as $c_{B}$ varies. Different values of $c_{B}$ imply different numbers of positive $a_{i}^{*}$. \begin{center} FIGURE~\ref{fi:plotvariousr} ABOUT HERE \end{center} \section{Probability inequalities}\label{se:inequalities} We say that $X\sim \operatorname{Exp}(1)$ if $X$ has a standard exponential distribution, i.e., $\mathbb{P}(X > x) = \operatorname{e}^{-x}$ for $x>0$. The main theorems of this paper rely on the following result. \begin{proposition}[\citet{KamLukNel:AJS1984}]\label{pr:kaminsky} The probability $G_{m,n}(\boldsymbol{a},\boldsymbol{b})$ of team $A$ defeating team $B$ is \begin{equation}\label{eq:defG} G_{m,n}(\boldsymbol{a},\boldsymbol{b}) = \mathbb{P}\left(\sum_{i=1}^m a_i X_i > \sum_{j=1}^n b_j Y_j \right), \end{equation} where $X_{1}, \dots, X_{m}, Y_{1}, \dots, Y_{n}$ are i.i.d. random variables, with $X_{1}\sim \operatorname{Exp}(1)$.
\end{proposition} \begin{remark}\label{re:marco} The implication of Proposition~\ref{pr:kaminsky} is that two vectors $\boldsymbol{a}, \boldsymbol{a}'$ of strengths that are equal up to a permutation produce the same probability of victory, that is, the same payoff function \eqref{eq:payoff}. The same holds for two vectors $\boldsymbol{b}, \boldsymbol{b}'$. Therefore various models, with different rules for the order in which gladiators fight, give rise to the same game \eqref{eq:game}. This happens, for instance, in a model where the winning gladiator does not stay in the arena to fight the following opponent, but, rather, goes to the bench at the end of his team's queue, and comes back to fight when his turn comes. This happens also when, at the end of each fight, each coach chooses one of the living gladiators in his team at random and sends him to fight. Basically, provided the allocations of strength in the two teams are decided simultaneously at the beginning and are not modified throughout, any rule governing the order of descent of gladiators in the arena leads to the same game \eqref{eq:game}. This is true also for nonanticipative rules that depend on the history of the battles so far. The key assumption for this is the fact that a winning gladiator does not lose (or gain) any strength after a victorious battle. This is parallel to the lack-of-memory property in many reliability models, and explains why the probability of winning \eqref{eq:defG} involves sums of exponential random variables. Note that the main result (Theorem~\ref{th:Nash}) does not go through if the allocations can also be decided dynamically as battles unfold. In this case the resulting game is more complicated and optimal allocations may change according to the observed history. For instance, consider the case where $c_{B}$ is slightly larger than $c_{A}$. At the beginning, suppose team $B$ spreads the strength uniformly across all its players.
If team $B$ keeps losing some battles, then it may become optimal to spread the strength among only a subset of the surviving players. \end{remark} The following theorem is the main tool to prove Theorem~\ref{th:Nash}. \begin{theorem} \label{th:minimizer} Let $X_1,\dots, X_m$ and $Y_1,\dots, Y_n$, $m, n\geq 1$, be i.i.d. random variables with $X_{1}\sim \operatorname{Exp}(1)$. For fixed $b>0$, let $\mathcal{A}$ be as in \eqref{eq:mathcalA} and let \[ (a_1^*,\dots, a_m^*)\in \arg \min_{\boldsymbol{a}\in \mathcal{A}(m,m)}\mathbb{P}\left(\sum_{i=1}^m a_i X_i \leq b\sum_{j=1}^n Y_j\right). \] Then \begin{enumerate}[\rm (a)] \item\label{it:th:minimizer-a} all nonzero values among $a_1^*,\ldots, a_m^*$ are equal; \item\label{it:th:minimizer-b} if $m \ge (n-1)b$, then $a^*_1=\cdots =a^*_m=1$; \item\label{it:th:minimizer-c} if $m\leq 2(n-1)b/3$, then $a^*_i=m$ for a single $i$,\, $1\leq i\leq m$, and $a^*_j=0$, for $j\neq i$. \end{enumerate} \end{theorem} \section{Proofs of the main results}\label{se:proofs} The long path to the proof of Theorem~\ref{th:Nash} goes through the following steps: first we provide a short proof of Proposition~\ref{pr:kaminsky} for the sake of completeness. Then we state and prove three lemmas needed for the proof of Theorem~\ref{th:minimizer}. Then we prove Theorem~\ref{th:minimizer}, and, resorting to it, we finally prove Theorem~\ref{th:Nash}. \begin{proof}[Proof of Proposition~\ref{pr:kaminsky}] First note that if $X$, $Y$ are i.i.d.~random variables with $X\sim \operatorname{Exp}(1)$, then $\mathbb{P}(aX>bY)= a/(a+b)$. Therefore, one can see a duel between gladiators $i$ and $j$ as a competition in which the probability of winning is the probability of living longer, when their lifetimes are $a_iX$ and $b_jY$, respectively. At the end of a duel, the winner's remaining lifetime is as good as new by the memoryless property of exponential random variables, corresponding to the fact that the strength of a winner remains unaltered. 
The teams' total lives are $\sum_{i=1}^m a_i X_i$ and $\sum_{j=1}^n b_jY_j$, and team $A$ wins exactly when it lives longer; the probability of this event is $G_{m,n}(\boldsymbol{a},\boldsymbol{b})$, so \eqref{eq:defG} follows. \end{proof} In order to prove Theorem~\ref{th:minimizer} we need several preliminary results. Let $G_{1}, G_{2}, Z_{1}, Z_{2}$ be independent with $G_i \sim \operatorname{Gamma}(u_i,1)$, $Z_i \sim \operatorname{Exp}(1)$, for $i=1,2$. For $u_i=0$ we define $G_i=0$ with probability 1. \begin{lemma}\label{le:LL} Given $a_{1}^{*},a_{2}^{*}$, set $a_1=a_1^*+\delta/u_1$ and $a_2=a_2^*-\delta/u_2$. Then \begin{equation} \label{eq:SB} \frac{\partial}{\partial \delta} \mathbb{P}(a_1G_1+a_2G_2\leq x) = (a_1-a_2)\frac{\partial^2}{\partial x^2} \mathbb{P}(a_1(G_1+Z_1)+a_2(G_2+Z_2)\leq x). \end{equation} \end{lemma} \begin{proof} Let \begin{align*} F(x)&= \mathbb{P}(a_1G_1+a_2G_2 \le x)\\ H(x)&=\mathbb{P}(a_1G_1+a_2G_2+a_1Z_1+a_2Z_2 \le x) \end{align*} and let $f$ and $h$ denote the corresponding densities. Let $\mathcal{L}$ denote the Laplace transform, that is, \[ \mathcal{L}(F)=\int_0^\infty \operatorname{e}^{-tx}F(x)\ \mathrm{d} x. \] Note that \eqref{eq:SB} is equivalent to \begin{equation}\label{eq:Lap} \mathcal{L}\left(\frac{\partial}{\partial \delta}F(x)\right)=(a_1-a_2)\mathcal{L}\left(\frac{\partial^2}{\partial x^2}H(x)\right). \end{equation} Using integration by parts we get \[ \mathcal{L}\left(\frac{\partial^2}{\partial x^2}H(x)\right) = t\int_0^\infty \operatorname{e}^{-tx}h(x)\ \mathrm{d} x = t\ \mathbb{E}[\exp\{-t(a_1G_1+a_2G_2+a_1Z_1+a_2Z_2)\}]. \] For the left hand side of \eqref{eq:Lap} note that we can interchange differentiation and integration, and also that \[ \frac{\partial}{\partial \delta}\mathcal{L}(F(x))=\mathcal{L}(F(x))\frac{\partial}{\partial \delta}\log \mathcal{L}(F(x)). \] Again by integration by parts we have \[ \mathcal{L}(F(x))=\frac{1}{t}\mathcal{L}(f(x))=\frac{1}{t}\mathbb{E}[\exp\{-t(a_1G_1+a_2G_2)\}].
\] It follows that \eqref{eq:Lap} is equivalent to \begin{equation}\label{eq:Lap2} \frac{1}{t}\frac{\partial}{\partial \delta}\log \mathcal{L}(f(x))= (a_1-a_2)\ t\ \mathbb{E}[\exp\{-t(a_1Z_1+a_2Z_2)\}]. \end{equation} Explicitly this becomes \begin{equation}\label{eq:Lap3} \frac{1}{t}\frac{\partial}{\partial \delta}\log[(1+a_1t)^{-u_1}(1+a_2t)^{-u_2}]= (a_1-a_2)t(1+a_1t)^{-1}(1+a_2t)^{-1}. \end{equation} Using $a_1=a_1^*+\delta/u_1$, and $a_2=a_2^*-\delta/u_2$, \eqref{eq:Lap3} is verified by a straightforward calculation. \end{proof} A related result to Lemma \ref{le:LL}, with a similar type of proof, appears in \citet{SzeBak:PTRF2003}. \begin{lemma}\label{le:QQ} Given a nonnegative vector $(a_1^*,\dots,a_m^*)$, let \[ a_1=a_1^*+\delta/u_1,\quad a_2=a_2^*-\delta/u_2,\quad a_i=a_i^*\text{ for }3\leq i\leq m. \] Define \begin{equation}\label{eq:Q} Q(\boldsymbol{a}, \boldsymbol{u})=\sum_{i=1}^m a_i G_i - b \sum_{j=1}^n Y_j, \end{equation} where $(\boldsymbol{a}, \boldsymbol{u}) := (a_1,\dots, a_m, u_1,\dots, u_m)$, $G_{1}, \dots, G_{m}, Y_{1}, \dots, Y_{n}$ are independent random variables with $G_i\sim \operatorname{Gamma}(u_i, 1)$, for $i=1, \dots, m$ and $Y_j \sim \operatorname{Exp}(1)$, for $j=1,\dots, n$. Let $Z_i \sim \operatorname{Exp}(1)$, for $i=1,2$ be independent of all other variables. Then \begin{equation} \label{eq:key} \frac{\partial}{\partial\delta} \mathbb{P}(Q(\boldsymbol{a}, \boldsymbol{u})\leq x) = (a_1-a_2)\frac{\partial^2}{\partial x^2} \mathbb{P}(Q(\boldsymbol{a}, \boldsymbol{u})+a_1 Z_1 +a_2 Z_2\leq x). \end{equation} \end{lemma} \begin{proof} Set $T=\sum_{i=3}^m a_i G_i- b \sum_{j=1}^n Y_j$. Then \begin{equation} \label{eq:cond} \frac{\partial}{\partial\delta} \mathbb{P}(Q(\boldsymbol{a}, \boldsymbol{u})\leq x|T) = (a_1-a_2)\frac{\partial^2}{\partial x^2} \mathbb{P}(Q(\boldsymbol{a}, \boldsymbol{u})+a_1 Z_1 +a_2 Z_2\leq x|T), \end{equation} which is equivalent to \eqref{eq:SB} with a different $x$. 
Taking the expectation in \eqref{eq:cond} over $T$ yields \eqref{eq:key}. \end{proof} \begin{lemma} \label{le:mode} Let $X$ and $Y$ be independent random variables where $Y\sim\operatorname{Exp}(1)$ and $X$ has a density $f(x)$ such that \begin{enumerate}[\rm (i)] \item $f(x)$ is continuously differentiable with a bounded derivative on $(-\infty, \infty)$, \item $f(x)>0$ for sufficiently small $x\in (-\infty, \infty)$, \item $f(x)$ is unimodal, i.e., there exists $a\in (-\infty, \infty)$ such that $f'(x)\geq 0$ if $x<a$ and $f'(x)\leq 0$ if $x>a$. \end{enumerate} For $\lambda> 0$, denote the density of $X+\lambda Y$ by $f_\lambda(x)$. Then $f_\lambda(x)$ is unimodal and if $f'_\lambda(x_0)=0$ then $x_0$ is a mode of $f_\lambda$. Moreover, if $\lambda>\lambda_0> 0$, then any mode of $f_\lambda(x)$ is strictly larger than any mode of $f_{\lambda_0}(x)$. \end{lemma} \begin{proof} This result is similar to \citet[Lemma~1]{SzeBak:PTRF2003}. We provide a quick proof using variation diminishing properties of sign regular kernels \citep[see][]{Kar:SUP1968}. First, since the density of $\lambda Y$ is log-concave (a.k.a. strongly unimodal) its convolution with the unimodal $f(x)$ is also unimodal, that is, the pdf of $X+\lambda Y$ is unimodal \citep[see][]{Ibr:TVP1956, Kar:SUP1968}. Differentiating (justified by (i)) yields \begin{align*} f_\lambda'(x) &= \int_0^\infty f'(x-z) \frac{1}{\lambda} \operatorname{e}^{-z/\lambda} \ \mathrm{d} z \\ &= \int_{-\infty}^x f'(z) \frac{1}{\lambda} \operatorname{e}^{(z-x)/\lambda} \ \mathrm{d} z \\ &=\frac{\operatorname{e}^{-x/\lambda}}{\lambda} \int 1_{(-\infty,x)}(z) f'(z) \operatorname{e}^{z/\lambda} \ \mathrm{d} z . \end{align*} Suppose $f_\lambda'(x_0)=0$. Since $f'(z) \geq 0$ for $z\leq a$, we know from the representation above that $f_\lambda'(x)>0$ if $x\leq a$, and hence $x_0>a$. The representation also shows that the function $\operatorname{e}^{x/\lambda} f_\lambda'(x)$ is nonincreasing in $x\in (a,\infty)$. 
Therefore $f_\lambda'(x)\geq 0$ if $x\in (a, x_0)$ and $f_\lambda'(x)\leq 0$ if $x>x_0$. It follows that $x_0$ is a mode of $f_\lambda(x)$. For fixed $x$, the function $1_{(-\infty,x)}(z) f'(z)$ as a function of $z$ does not vanish (by (ii)), and has at most one sign change from positive to negative (by (iii)), and the kernel $\operatorname{e}^{z/\lambda}$ is strictly reverse rule \citep[see][]{Kar:SUP1968}. It follows that $\int 1_{(-\infty,x)}(z) f'(z) \operatorname{e}^{z/\lambda} \ \mathrm{d} z $ has at most one sign change from negative to positive, as a function of $\lambda$. Thus, if for a given $x$, $f_{\lambda_0}'(x)=0$ and $\lambda>\lambda_0$, then $f_\lambda'(x) >0$, and the result follows. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:minimizer}] Let $Q(\boldsymbol{a}):= Q(\boldsymbol{a}, \boldsymbol{1}_m)$ as in \eqref{eq:Q}. Consider minimizing $\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ over \begin{equation*} \Omega = \left\{\boldsymbol{a}:\ 0\leq a_i,\ \sum_{i=1}^m a_i =m \right\}. \end{equation*} Since $\Omega$ is compact, and $\mathbb{P}(Q\leq 0)$ is continuous in $\boldsymbol{a}$, the minimum is attained, say, at $\boldsymbol{a}^*\in \Omega$. \begin{claim}\label{cl:claimB} In any minimizing point $\boldsymbol{a}^{*}$ of $\mathbb{P}(Q \le 0)$ the $a_i^*$'s take at most two distinct nonzero values. Moreover, in the case of two distinct nonzero values, the smaller one appears only once. \end{claim} \begin{proof} Assume the contrary, say $0<a_1^*\leq a_2^*<a_3^*$. We show below in Case \ref{ca:a1lea2} that more than two distinct values are impossible by showing that $a_1^*<a_2^*$ leads to a contradiction. Similarly Case \ref{ca:a1eqa2} implies the impossibility of repetitions of the smallest of two distinct values. Let $a_1=a_1^*+\delta,\ a_2=a_2^*-\delta,\ a_i=a_i^*,\ 3\leq i\leq m$. 
Then by \eqref{eq:key} we have \begin{equation} \label{eq:key2} \frac{\partial}{\partial\delta} \mathbb{P}(Q(\boldsymbol{a})\leq x) = (a_1-a_2)\frac{\partial^2}{\partial x^2} \mathbb{P}(Q(\boldsymbol{a})+a_1 Z_1 +a_2 Z_2\leq x), \end{equation} where $Z_1$ and $Z_2$ are i.i.d. random variables with $Z_{1}\sim\operatorname{Exp}(1)$, independent of $Q$. We can focus on $x=0$. \begin{case}\label{ca:a1lea2} $a_1^*<a_2^*$. Since $\delta=0$ achieves the minimum, both sides of \eqref{eq:key2} with $x=0$ vanish at $\delta=0$. The density function of $Q(\boldsymbol{a}^{*})+a_1^*Z_1$ is positive everywhere and is log-concave and hence unimodal. By Lemma \ref{le:mode}, $S=Q(\boldsymbol{a}^{*})+a_1^*Z_1+a_2^* Z_2$ has a mode at zero. Following Case \ref{ca:a1eqa2} we show that this leads to a contradiction. \end{case} \begin{case}\label{ca:a1eqa2} $a_1^*=a_2^*$. Then \eqref{eq:key2} gives \[ \lim_{\delta\downarrow 0} \frac{\partial \mathbb{P}(Q(\boldsymbol{a})\leq 0)}{\partial \delta} =0 \] and \[ \left. \frac{\partial^2}{\partial\delta^2} \mathbb{P}(Q(\boldsymbol{a})\leq 0)\right|_{\delta=0} =\left. 2\lim_{\delta\to 0}\frac{\partial^2}{\partial x^2} \mathbb{P}(Q(\boldsymbol{a})+a_1 Z_1 + a_2 Z_2\leq x)\right|_{x=0}. \] A minimum at $\delta=0$ entails \[ \left. \frac{\partial^2}{\partial x^2} \mathbb{P}(Q(\boldsymbol{a}^{*})+a_1^* Z_1 + a_2^* Z_2\leq x)\right|_{x=0}\geq 0, \] showing that $S=Q(\boldsymbol{a}^{*})+a_1^*Z_1+a_2^* Z_2$ has a mode that is nonnegative. \end{case} Thus $S$ has a nonnegative mode in either case. By Lemma \ref{le:mode} and $a_2^*<a_3^*$, any mode of $Q(\boldsymbol{a}^{*})+a_1^*Z_1+a_3^* Z_2$ is strictly positive, i.e., \[ \left. \frac{\partial^2}{\partial x^2} \mathbb{P}(Q(\boldsymbol{a}^{*})+a_1^* Z_1 + a_3^* Z_2\leq x)\right|_{x=0}> 0. \] The latter expression, multiplied by $(a_1^*-a_3^*)$ is negative. 
Using \eqref{eq:key2} with $a_3^*$ in place of $a_2^*$, however, this implies that $\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ strictly decreases under the perturbation $(a_1^*, a_3^*)\to (a_1^*+\delta, a_3^*-\delta)$ for small $\delta>0$, which is a contradiction to the minimality at $\delta=0$. Note that the crux of the proof is in comparing two perturbations. \end{proof} \begin{claim}\label{cl:key} In any minimizing point $\boldsymbol{a}^{*}$ of $\mathbb{P}(Q \le 0)$ the $a_i^*$'s are either all equal, or take only two distinct values, in which case one of them is zero. \end{claim} \begin{proof} Assume the contrary, and in view of Claim~\ref{cl:claimB}, suppose we have \[ 0<a_1^*<a_2^*=\cdots = a_{k+1}^*,\ 1\leq k < m,\ a_{k+2}^*=\cdots = a_m^*=0, \quad \text{and}\quad \sum_{i=1}^m a_i^*=m. \] Then for some $\delta \in (0, 1/k)$,\, $a_1^*,\ldots, a_m^*$ must be of the form \[ a_1^*=(1-k\delta)m/(k+1),\ a_2^*=\cdots=a_{k+1}^*=(1+\delta)m/(k+1),\, a_{k+2}^*=\cdots = a_m^*=0. \] We then have \[ \frac{k+1}{m} Q(\boldsymbol{a})=(1- k\delta) X + (1+\delta) G - \lambda Y,\quad \lambda = \frac{b(k+1)}{m}, \] with $X\sim \operatorname{Exp}(1),\ G\sim \operatorname{Gamma}(k, 1),\ Y\sim \operatorname{Gamma}(n, 1)$ independently. We show that the minimum of $\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ cannot be achieved in the open interval $\delta \in (0, 1/k)$, contradicting the assumption that $\boldsymbol{a}^{*}$ is a minimizer. We have \begin{align*} \mathbb{P}(Q(\boldsymbol{a})\leq 0) &=\mathbb{P}\left(1+\delta (1-(k+1)B)\leq \frac{\lambda Y}{X+G}\right), \end{align*} where $B:= X/(X+G)$. Note that $B$ has a $\operatorname{Beta}(1, k)$ distribution, $Y/(X+G)$ has a scaled $F(2n, 2(k+1))$ distribution, and $B$ and $Y/(X+G)$ are independent.
Thus \[ \mathbb{P}(Q(\boldsymbol{a})\leq 0)=C_1\, \mathbb{E} \left[\int_{1+\delta (1-(k+1)B)}^\infty \frac{y^{n-1}}{(\lambda +y)^{n+k+1}}\, \ \mathrm{d} y \right], \] where above and below, $C_i>0$ denote constants that do not depend on $\delta$, and $D_i(\delta)>0$ denote functions of $\delta\in (0, 1/k)$; both may depend on other constants such as $\lambda, k$, etc. It follows that \begin{align} \nonumber \frac{\partial \mathbb{P}(Q(\boldsymbol{a})\leq 0)}{\partial \delta} &=-C_1 \, \mathbb{E} \left[(1-(k+1)B) \frac{(1+\delta (1-(k+1)B))^{n-1}}{(\lambda +1+\delta(1-(k+1)B))^{n+k+1}}\right]\\ \label{eq:ginte} &= -C_2 \int_{-k}^1 x (x+k)^{k-1} \frac{(1+\delta x)^{n-1}}{(\lambda +1+\delta x)^{n+k+1}}\, \ \mathrm{d} x\\ \label{eq:gdelta} &= -D_1(\delta) g(\delta), \end{align} where \begin{align*} g(\delta) &:= \int_1^{p} \left[(\lambda +1-\delta k)(y-1)-\delta k \lambda y\right] y^{n-1} (y-1)^{k-1}\, \ \mathrm{d} y,\\ \nonumber p = p(\delta) &:= \frac{(1+\delta)(\lambda +1-\delta k)}{(\lambda +1+\delta)(1-\delta k)}, \end{align*} and \eqref{eq:gdelta} uses the change of variables \[ y=\frac{(1+\delta x)(\lambda +1-\delta k)}{(\lambda +1+\delta x)(1-\delta k)}. \] Using the closed-form integral \[ \int_1^p\left[ky + n(y-1)\right] y^{n-1} (y-1)^{k-1}\, \ \mathrm{d} y = p^n(p-1)^k \] we get \begin{align} \nonumber g'(\delta) &= \frac{\lambda \delta (\lambda +1-\delta k)}{\lambda +1 +\delta} p^{n-1} (p-1)^{k-1}p'(\delta) + \int_1^p k(1-(\lambda +1)y) y^{n-1} (y-1)^{k-1}\, \ \mathrm{d} y\\ \nonumber &=\frac{\lambda \delta (\lambda +1-\delta k)}{\lambda +1 +\delta} p^{n-1} (p-1)^{k-1}p'(\delta) + \frac{(\lambda n -k) g(\delta) - \lambda(\lambda +1) p^n (p-1)^k}{\lambda +1-\delta k + \lambda n \delta}\\ \nonumber &=D_2(\delta) \left[k(\lambda n -k) \delta^2 + (\lambda +1)(k-1)\delta +(\lambda +1)(\lambda (n-1) -k-2)\right]\\ \label{eq:gprime} &\quad\quad + \frac{(\lambda n -k) g(\delta)}{\lambda +1-\delta k + \lambda n \delta}.
\end{align} Specifically \[ D_2(\delta)=\frac{\lambda \delta p^{n}(p-1)^{k}}{(1 + \delta)(\lambda+1+\delta) (\lambda+1-\delta k+\lambda n\delta)}. \] It is helpful to determine the sign of $g(\delta)$ for small $\delta>0$ and large $\delta<1/k$. Let us denote the integral in \eqref{eq:ginte} by $\tilde{g}(\delta)$, which has the same sign as $g(\delta)$ for $\delta\in (0, 1/k)$. A Taylor expansion yields \begin{align*} \tilde{g}(\delta) &=\int_{-k}^1 \left[\frac{x(x+k)^{k-1}}{(\lambda+1)^{n+k+1}} + \frac{(\lambda(n-1)-k-2) \delta} {(\lambda+1)^{n+k+2}} x^2(x+k)^{k-1} \right]\, \ \mathrm{d} x + o(\delta)\\ & = C_3 (\lambda(n-1)-k-2)\delta +o(\delta),\quad \text{as}\ \delta\downarrow 0. \end{align*} By direct calculation, \[ \tilde{g}(1/k)= C_4 (\lambda (n-1)-k-1). \] We distinguish three cases: \begin{enumerate}[(i)] \item\label{ca:ineq-1} $\lambda (n-1) > k+2$. Then $\tilde{g}(\delta)>0$ and hence $g(\delta)>0$ for sufficiently small $\delta>0$. Moreover, by \eqref{eq:gprime}, $g'(\delta)> D_3(\delta) g(\delta),\ \delta\in (0,1/k)$. It follows that $g(\delta)>0$ for all $\delta\in (0, 1/k)$, i.e., $\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ decreases in $\delta\in [0, 1/k]$. The same holds in the boundary case $\lambda (n-1) = k+2$. \item\label{ca:ineq-2} $k+1< \lambda (n-1)< k+2$. Then $g(\delta)<0$ for sufficiently small $\delta>0$, and $g(\delta)>0$ for sufficiently large $\delta <1/k$. If the minimum of $\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ is achieved at $\delta^*\in (0, 1/k)$, then $g(\delta^*)=0\geq g'(\delta^*)$, and $g(\delta)$ has at least one root in $(0, \delta^*)$, say $\delta^{**},$ such that $g'(\delta^{**})\geq 0$. This contradicts \eqref{eq:gprime}, however, because the term in square brackets strictly increases in $\delta$. \item\label{ca:ineq-3} $\lambda (n-1)< k+1$. Then $g(\delta)<0$ for both sufficiently large $\delta <1/k$ and sufficiently small $\delta >0$. Suppose $g(\delta^*)> 0$ for some $\delta^*\in (0, 1/k)$. 
If $\lambda n>k$ then a contradiction results as in Case~\eqref{ca:ineq-2}. Otherwise the term in square brackets in \eqref{eq:gprime} is no more than \[ (\lambda +1)(k-1)k^{-1} + (\lambda +1)(\lambda (n-1)-k-2)<0. \] Thus any $\delta\in (0, 1/k)$ such that $g(\delta)=0$ entails $g'(\delta)<0$. This is impossible as $g(\delta)$ cannot cross zero from above without first doing so from below. Hence $g(\delta)\leq 0$, i.e., $\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ increases in $\delta\in [0, 1/k]$. The same holds in the boundary case $\lambda (n-1)=k+1$. \qedhere \end{enumerate} \end{proof} We now prove the three statements of Theorem~\ref{th:minimizer}. \begin{itemize} \item[\eqref{it:th:minimizer-a}] This is an immediate consequence of Claim~\ref{cl:key}. \item[\eqref{it:th:minimizer-b}] Let $h(k)=\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ with \[ a_1=\dots= a_k=\frac{m}{k},\ 1\leq k\leq m,\quad\text{and}\quad a_{k+1}=\cdots =a_m=0. \] Comparing $\mathbb{P}(Q(\boldsymbol{a})\leq 0)$ in Case~\eqref{ca:ineq-3} of the proof of Claim~\ref{cl:key} at $\delta=0$ and $\delta=1/k$, we see that if $m \ge b(n-1)$, i.e., \[ \frac{b(k+1)(n-1)}{m} \le k+1, \] then $h(k+1) < h(k),\ 1\leq k <m$. Thus $h(k)$ achieves its minimum at $k=m$. \item[\eqref{it:th:minimizer-c}] Suppose now $m < b(n-1)$. According to Case~\eqref{ca:ineq-1}, if $b(k+1)(n-1)/m \geq k+2$, i.e., \begin{equation}\label{eq:last} k+1\geq \frac{m}{b(n-1)-m}, \end{equation} then $h(k+1)>h(k)$. In particular, \eqref{eq:last} holds for all $k$ if $m\leq 2b(n-1)/3$, which yields $h(m)>\cdots > h(2)>h(1)$, i.e., $h(k)$ is minimized at $k=1$. In general $h(k)$ is minimized at some $k\leq \lceil m/(b(n-1)-m) -1 \rceil$.
\qedhere \end{itemize} \end{proof} \begin{proof}[Proof of Theorem~\ref{th:Nash}] \begin{itemize} \item[\eqref{it:th:Nash-a}] Using Proposition~\ref{pr:kaminsky} and Theorem~\ref{th:minimizer}\eqref{it:th:minimizer-a}\eqref{it:th:minimizer-b}, after multiplying all the $a_{i}$ by the factor $c_{A}/m$, we can prove that there exists a Nash equilibrium that satisfies \eqref{eq:equila} and \eqref{eq:equilb}, which we denote as $(\boldsymbol{a}^*, \boldsymbol{b}^*)$. Assume $(\widetilde{\boldsymbol{a}}, \widetilde{\boldsymbol{b}})$ is another equilibrium. Because the game is zero-sum, we have \[ H_{m,n}(\widetilde{\boldsymbol{a}}, \widetilde{\boldsymbol{b}}) \ge H_{m,n}(\boldsymbol{a}^*, \widetilde{\boldsymbol{b}}) \geq H_{m,n} (\boldsymbol{a}^*, \boldsymbol{b}^*) \] and \[ H_{m,n}(\widetilde{\boldsymbol{a}}, \widetilde{\boldsymbol{b}}) \le H_{m,n}(\widetilde{\boldsymbol{a}}, \boldsymbol{b}^*) \leq H_{m,n}(\boldsymbol{a}^*, \boldsymbol{b}^*). \] Thus all these inequalities must hold as equalities. Since $\boldsymbol{b}^*$ (equal allocation) is the unique optimal response to $\boldsymbol{a}^*$, for equality to hold in $H_{m,n}(\boldsymbol{a}^*, \widetilde{\boldsymbol{b}}) \geq H_{m,n} (\boldsymbol{a}^*, \boldsymbol{b}^*)$ we must have $\widetilde{\boldsymbol{b}} = \boldsymbol{b}^*$. Similarly, for equality to hold in $H_{m,n}(\widetilde{\boldsymbol{a}}, \boldsymbol{b}^*) \leq H_{m,n}(\boldsymbol{a}^*, \boldsymbol{b}^*)$, $\widetilde{\boldsymbol{a}}$ must be of the form \eqref{eq:equila}. Thus all pure equilibria satisfy \eqref{eq:equila} and \eqref{eq:equilb}. \item[\eqref{it:th:Nash-b}] Theorem~\ref{th:minimizer}\eqref{it:th:minimizer-b} guarantees that if $a_{1}^{*}=\dots=a_{m}^{*} = c_{A}/m$ and $b_{1}^{*}=\dots=b_{n}^{*}=c_{B}/n$, then $\boldsymbol{a}^{*}$ is the unique best response to $\boldsymbol{b}^{*}$ and vice versa. This proves that $(\boldsymbol{a}^{*}, \boldsymbol{b}^{*})$ is a Nash equilibrium of the game.
This equilibrium is unique by the argument in part \eqref{it:th:Nash-a}. \item[\eqref{it:th:Nash-c}] Theorem~\ref{th:minimizer}\eqref{it:th:minimizer-c} guarantees that if $a_{i}^{*}=c_{A}$ for some $i \in \{1,\dots,m\}$ and $a_{j}^{*}=0$ for all $j \ne i$, and $b_{1}^{*}=\dots=b_{n}^{*}=c_{B}/n$, then $\boldsymbol{a}^{*}$ is a best response to $\boldsymbol{b}^{*}$ and Theorem~\ref{th:minimizer}\eqref{it:th:minimizer-b} guarantees that $\boldsymbol{b}^{*}$ is the unique best response to $\boldsymbol{a}^{*}$. This proves that $(\boldsymbol{a}^{*}, \boldsymbol{b}^{*})$ is a Nash equilibrium of the game. Again the argument used in part \eqref{it:th:Nash-a} shows that all Nash equilibria are of this form. \item[\eqref{it:th:Nash-d}] Suppose team $A$ allocates its strength equally among $r$ players, and team $B$ adopts the optimal strategy of equal allocation among all $n$ players. Then, as $n\to \infty$, the winning probability for team $A$ approaches $f(r):= \mathbb{P}(c_A G_r > r c_B)$, where $G_r$ is a $\operatorname{Gamma}(r, 1)$ random variable. Letting $\beta:=c_B/c_A$, we get \begin{align*} f(r)-f(r+1) &= \int_{r\beta}^\infty \frac{r x^{r-1} \operatorname{e}^{-x}}{\Gamma(r+1)} \ \mathrm{d} x -\int_{(r+1)\beta}^\infty \frac{x^r \operatorname{e}^{-x}}{\Gamma(r+1)} \ \mathrm{d} x\\ & = \frac{1}{\Gamma(r+1)} \left( -(r\beta)^r \operatorname{e}^{-r\beta} + \int_{r\beta}^{(r+1)\beta} x^r \operatorname{e}^{-x} \ \mathrm{d} x\right)\\ & = \frac{(r\beta)^{r} \operatorname{e}^{-r \beta}}{\Gamma(r+1)}\left[\int_0^1 \left(1+\frac{y}{r}\right)^r \operatorname{e}^{-y\beta} \beta \ \mathrm{d} y - 1\right], \end{align*} where we have integrated by parts in the second equality and changed the variables $y= x/\beta -r$ in the third. The integral inside the square brackets obviously increases in $r$. Hence $f(r)> f(r+1)$ implies $f(r+1)> f(r+2)> \cdots >f(m)$. Moreover, if $\beta = c_B/c_A > t_0$ then $f(1) > f(2)$ by direct calculation. 
In this case $f(r)$ is maximized at $r=1$, so concentrating the whole strength on a single gladiator is optimal for team $A$ in the large-$n$ limit. \qedhere \end{itemize} \end{proof} \begin{proof}[Proof of Theorem~\ref{th:value}] \begin{itemize} \item[\eqref{it:th:value-a}] Using Theorem~\ref{th:Nash}\eqref{it:th:Nash-a} we know that for some $1 \le r \le m$ and some permutation $\pi$ we have $a_{\pi(1)}^{*} = \dots = a_{\pi(r)}^{*} = c_{A}/r$, $a_{\pi(r+1)}^{*} = \dots = a_{\pi(m)}^{*} = 0$, and $b_1^*=\cdots =b_n^* = c_{B}/n$. Hence \[ \sum_{i=1}^m a^*_{i} X_{i} \sim\operatorname{Gamma}(r, r/c_{A}), \quad \sum_{j=1}^n b^*_{j} Y_{j} \sim\operatorname{Gamma}(n, n/c_{B}). \] Therefore, \citep[see, e.g.,][]{CooNad:BJ2006, Coo:UTMD2008} \begin{equation}\label{eq:Pincompletebeta} \mathbb{P}\left(\sum_{i=1}^m a_i^* X_i > \sum_{j=1}^n b_j^* Y_j \right) = 1-I\left(\frac{rc_{B}}{rc_{B}+nc_{A}}, r,n \right), \end{equation} where $I$ is the regularized incomplete beta function defined in \eqref{eq:incompletebeta}. \item[\eqref{it:th:value-b}] By Theorem~\ref{th:Nash}\eqref{it:th:Nash-b}, in this case $r=m$. \item[\eqref{it:th:value-c}] By Theorem~\ref{th:Nash}\eqref{it:th:Nash-c}, in this case $r=1$. \qedhere \end{itemize} \end{proof} \section{Monotonicity results}\label{se:monotonicity} \subsection{Monotonicity of the value} \begin{center} FIGURE~\ref{fi:plotcaeqcb} ABOUT HERE \end{center} We mention the following consequence of Theorem~\ref{th:Nash} (see Figure~\ref{fi:plotcaeqcb}). \begin{corollary}\label{co:equalblotto} In the game $\mathcal{G}(m,n,c_{A},c_{B})$, if the two teams have equal strength (i.e., $c_{A}=c_{B}$), then the value is positive if $m>n$, namely, the team with more players has an advantage over the other team. Moreover, the value of the game is increasing in $m$ and decreasing in $n$. \end{corollary} \begin{proof} The team with more players always has the option of not using them all. Therefore it cannot be worse off than the team with fewer players.
However, since equal allocation is the unique best response, using them all is strictly better. The same argument proves the monotonicity in $m$ and $n$. Note that directly verifying this from the properties of the incomplete beta function appears nontrivial. \end{proof} \begin{center} FIGURE~\ref{fi:plotcasimcb} ABOUT HERE \end{center} Figure~\ref{fi:plotcasimcb} shows an interesting implication of Theorem~\ref{th:value}: team $B$ may be at a disadvantage even if $c_{A} < c_{B}$, and this happens if the number $n$ of its gladiators is much smaller than the number $m$ of gladiators in $A$. As the relative difference in strength between the two teams increases, it takes a larger number of gladiators to compensate for the lower strength. \begin{center} FIGURES~\ref{fi:plotvariouscb} AND \ref{fi:plotvariousn} ABOUT HERE \end{center} As Figures~\ref{fi:plotvariouscb} and \ref{fi:plotvariousn} show, if condition \eqref{eq:inequalitycasmallercb} holds, then team $A$ is at a strong disadvantage. The disadvantage increases with the total strength $c_{B}$ and the number $n$ of gladiators of team $B$. The number $m$ of gladiators of team $A$ is totally irrelevant, since, in equilibrium, the whole strength $c_{A}$ is assigned to only one gladiator. \subsection{Related probability inequalities}\label{suse:additional} If $X_{1}, \dots, X_{m}$, and $Y_{1},\dots,Y_{n}$ are i.i.d. random variables with $X_{1}\sim \operatorname{Exp}(1)$, and \[ \bar{X} = \frac{1}{m}\sum_{i=1}^m X_i,\quad \bar{Y} = \frac{1}{n}\sum_{j=1}^n Y_j, \quad Z = \frac{m\bar{X}}{m\bar{X}+n\bar{Y}}, \] then $Z$ has a $\operatorname{Beta}(m,n)$ distribution. Hence \begin{equation}\label{eq:conbeta} \mathbb{P}(\bar{X} < \bar{Y}) =\mathbb{P}\left(Z < \frac{m}{m+n}\right) = I\left(\frac{m}{m+n},m,n \right). \end{equation} For $m > n$, by Corollary~\ref{co:equalblotto}, we have \begin{equation}\label{eq:means} \mathbb{P}(\bar{X} < \bar{Y})<\frac{1}{2}. 
\end{equation} Since $\mathbb{E}[Z]=m/(m+n)$, \eqref{eq:means} is equivalent to $\mathbb{P}\left(Z < \mathbb{E}[Z]\right) < 1/2$, that is, $\mathbb{E}[Z] < \operatorname{Med}[Z]$. This is a well known mean-median inequality for beta distributions \citep[see][]{GroMee:AS1977}. The inequality \eqref{eq:means} has the following interesting statistical implication. If two statisticians estimate the mean of exponential variables, and use the sample mean as their unbiased estimate, then the statistician with the larger sample tends to have a larger (unbiased) estimate. If the two of them bet on who has a larger estimate, the one with the larger sample tends to win. For normal variables, or any symmetric variables, this clearly cannot happen and $\mathbb{P}(\bar{X} < \bar{Y})=1/2$. Suppose now that the two statisticians share the first $n$ variables, that is, for $i=1,\dots,n$ we have $X_i=Y_i$, and the remaining variables $X_{n+1},\dots,X_m$ are independent of the previous ones. Then \begin{align}\label{eq:medcom} \mathbb{P}(\bar X < \bar Y)&=\mathbb{P}\left(\frac{1}{m}\left[\sum_{j=1}^n Y_j + \sum_{i=n+1}^m X_i\right] < \frac{1}{n}\sum_{j=1}^n Y_j\right) \nonumber\\ &=\mathbb{P}\left(\frac{1}{m-n}\sum_{i=n+1}^m X_i < \frac{1}{n}\sum_{j=1}^n Y_j\right). \end{align} By \eqref{eq:means} the last expression in \eqref{eq:medcom} is less than $1/2$ if and only if $m-n >n$, that is, $m>2n$. It equals $1/2$ if $m=2n$, and it is larger than $1/2$ if $m<2n$, in which case \eqref{eq:means} is reversed. Thus in the bet between the statisticians, if most of the variables are in common, the odds are against the one with the larger sample, contrary to the previous situation. This was noted by Abram Kagan. Our main results can be presented in terms of various other distributional inequalities or monotonicity. Using \eqref{eq:binombeta} and Corollary~\ref{co:equalblotto} we obtain further results that cannot easily be proved more directly. 
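These probabilities are exactly computable. As a minimal numerical check (Python, with helper names of our own choosing), $\mathbb{P}(\bar{X} < \bar{Y}) = I(m/(m+n), m, n)$ from \eqref{eq:conbeta} can be evaluated through the binomial identity behind \eqref{eq:binombeta}, namely $I(x, a, b) = \mathbb{P}(\operatorname{Binom}(a+b-1, x) \ge a)$ for integer $a, b$:

```python
from math import comb

def reg_inc_beta(x, a, b):
    # Regularized incomplete beta for integer a, b via the identity
    # I(x, a, b) = P(Binom(a+b-1, x) >= a).
    n = a + b - 1
    return sum(comb(n, k) * x**k * (1 - x)**(n - k) for k in range(a, n + 1))

def p_xbar_less_ybar(m, n):
    # P(Xbar < Ybar) for sample means of m and n i.i.d. Exp(1) variables:
    # the probability equals I(m/(m+n), m, n).
    return reg_inc_beta(m / (m + n), m, n)

print(p_xbar_less_ybar(5, 5))    # 0.5 by symmetry
print(p_xbar_less_ybar(10, 5))   # below 0.5: the larger sample tends to win
```

Consistent with Corollary~\ref{co:equalblotto}, the probability is below $1/2$ whenever $m > n$, decreasing in $m$ and increasing in $n$.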
We say that $X\sim\operatorname{Gamma}(\alpha,\beta)$ if $X$ has a density \[ f(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\operatorname{e}^{-\beta x}x^{\alpha-1},\quad x>0. \] \begin{corollary} \label{co:betabin} For $m,n$ integers the following properties hold: \begin{enumerate}[\rm (a)] \item\label{it:co:betabin-a} The function \[ I\left(\frac{m}{m+n},m,n\right) \] is decreasing in $m$ for fixed $n$, and increasing in $n$ for fixed $m$. \item\label{it:co:betabin-b} Let $T \sim \operatorname{Binom}(m+n-1, m/(m+n))$. Then $\mathbb{P}(T \ge m)$ is decreasing in $m$ and increasing in $n$. \item\label{it:co:betabin-c} Let $S \sim \operatorname{Poisson}(m)$. Then $\mathbb{P}(S \ge m)$ is decreasing in $m$. \item\label{it:co:betabin-d} Let $R\sim \operatorname{Gamma}(m,1)$. Then $\mathbb{P}(R \le m)$ is decreasing in $m$. \end{enumerate} \end{corollary} \begin{proof} \begin{itemize} \item[\eqref{it:co:betabin-a}] is a restatement of the last part of Corollary~\ref{co:equalblotto}. \item[\eqref{it:co:betabin-b}] follows from \eqref{it:co:betabin-a} and \eqref{eq:binombeta}. \item[\eqref{it:co:betabin-c}] follows from \eqref{it:co:betabin-b} by letting $n \rightarrow \infty$. \item[\eqref{it:co:betabin-d}] follows from \eqref{it:co:betabin-c} and the identity \[ \mathbb{P}(S \ge m) = \frac{1}{\Gamma(m)}\int_0^m \operatorname{e}^{-t}\,t^{m-1} \ \mathrm{d} t. \qedhere \] \end{itemize} \end{proof} We say that a random variable $Q\sim\operatorname{Geom}(p)$ if $\mathbb{P}(Q=k)=(1-p)^k p$, $k=0,1,2,\dots$. \begin{proposition}\label{pr:geometric} Let $Q_{1}, \dots, Q_m$ be independent random variables such that $Q_{i}\sim\operatorname{Geom}(1/(1+a_{i}))$. Define $Q =\sum_{i=1}^m Q_i$. 
\begin{enumerate}[\rm (a)] \item\label{it:co:Negbin-a} We have \begin{equation}\label{eq:geo0} 1-G_{m,n}(\boldsymbol{a},\boldsymbol{1}_n)=\mathbb{P}\left(Q \le n-1\right), \end{equation} where $\boldsymbol{a}=(a_1, \ldots, a_m)$ and $\boldsymbol{1}_n$ denotes the $n$-dimensional vector of ones. \item\label{it:co:Negbin-b} If $\sum_{i=1}^m a_i=n$, then the probability in \eqref{eq:geo0} is minimized when all $a_i$'s are equal. In this case $Q_i$ are i.i.d. and $Q$ has a negative binomial distribution. \item\label{it:co:Negbin-c} If $\mathbb{E}[Q] = m$, then $\mathbb{E}[Q] > \operatorname{Med}[Q]$. \end{enumerate} \end{proposition} \begin{proof} \begin{itemize} \item[\eqref{it:co:Negbin-a}] The relation \eqref{eq:geo0} can be explained directly: team $A$ loses if all its gladiators together defeat at most $n-1$ opponents. Gladiator $i$ from team $A$ defeats a geometric random number, $Q_i$, of gladiators of strength 1 from team $B$ since he fights until he loses, and he loses a fight with probability $1/(1+a_{i})$. Thus if $\sum_{i=1}^m Q_i \le n-1$, then team $A$ defeats at most $n-1$ gladiators altogether, and loses. \item[\eqref{it:co:Negbin-b}] This follows directly from Theorem \ref{th:minimizer}. \item[\eqref{it:co:Negbin-c}] Note that $\mathbb{E}[Q] = \sum_{i=1}^m a_i$. Letting $n=m$, and using \eqref{eq:geo0} and part \eqref{it:co:Negbin-b}, we conclude that $\mathbb{P}(Q\le n-1)\geq 1-G_{m,n}(\boldsymbol{1}_m,\boldsymbol{1}_n)=1/2$. We obtain $\mathbb{P}(Q \le \mathbb{E}[Q])=\mathbb{P}(Q \le n)>1/2$, and therefore $\mathbb{E}[Q] > \operatorname{Med}[Q]$. \qedhere \end{itemize} \end{proof} \section{Comments and extensions}\label{se:extensions} The probability in \eqref{eq:aoveraplusb} is a particular example of \emph{contest success function}\footnote{\citet{Hir:PC1989} calls it technology of conflict}. 
The following more general class was considered by \citet{Tul:TAMUP1980} with the purpose of studying efficient rent seeking: \begin{equation}\label{eq:csf} h_{\gamma}(a,b) = \frac{a^{\gamma}}{a^{\gamma}+b^{\gamma}}, \quad \gamma > 0. \end{equation} These functions have been studied, axiomatized, and widely used in different fields \citep[see, e.g.,][and many others]{Ska:ET1996, Szy:JEL2003, CorDah:ET2010}. The reader is referred to \citet{Cor:RED2007}, \citet{GarSka:HDE2007}, \citet{Kon:OUP2009} for surveys on this topic. In \eqref{eq:csf}, as $\gamma \to \infty$, \[ h_{\gamma}(a,b) \to h_{\infty}(a,b):= \begin{cases} 1 & \text{if $a>b$}, \\ \frac{1}{2} & \text{if $a=b$}, \\ 0 & \text{if $a<b$}. \end{cases} \] This case corresponds to a classical Colonel Blotto situation where the stronger gladiator always wins. If the contest success function $h_{\infty}$ is used in our game, then any equilibrium strategy for the stronger team assigns the whole strength to one single gladiator, and, for $c_{A} < c_{B}$, team $A$ loses with probability one and the value of the game is $-1/2$. In \eqref{eq:csf}, as $\gamma \to 0$, \[ h_{\gamma}(a,b) \to h_{0}(a,b):= \begin{cases} 1 & \text{if $a>b=0$}, \\ \frac{1}{2} & \text{if $a, b > 0$}, \\ 0 & \text{if $0=a<b$}. \end{cases} \] When $h_{0}$ is used as a contest success function in our game, then any equilibrium strategy assigns positive strength to every gladiator, therefore in each fight either gladiator wins with probability $1/2$ and the game reduces to one with two teams of $m$ and $n$ gladiators respectively, all having equal power. Then, using \eqref{eq:defG} and \eqref{eq:Pincompletebeta}, we see that the probability that team $A$ wins is equal to \[ G_{m,n}(\boldsymbol{1},\boldsymbol{1})=1- I\left(\frac{1}{2}, m,n\right). \] If $a_{1} = \dots = a_{m} = 1$, then in \eqref{eq:geo0} the random variable $Q$ is negative binomial. 
Hence it is easy to see that \[ G_{m,n}(\boldsymbol{1},\boldsymbol{1})=\sum_{j=0}^{m-1} \left(\frac{1}{2}\right)^{n+j} \binom{n+j-1}{j},\] and the value of the game is obtained by subtracting $1/2$. While the extreme cases $\gamma = 0$ and $\gamma = \infty$ are easy to analyze and the case $\gamma = 1$ required hard calculations, the remaining cases, i.e., $\gamma \not \in \{0, 1, \infty\}$, look prohibitive in our model. They have been considered in more tractable frameworks by several authors. For instance, in a rent-seeking context, when a contest success function of type \eqref{eq:csf} is used, \citet{AlcDah:JPE2010} show that for $\gamma \ge 2$ the structure of the equilibrium is always the same. \citet{Fri:OR1958} and \citet{Rob:ANUmimeo2005} consider the case $\gamma=1$ in a static simultaneous battle context similar to the classical Colonel Blotto model and show that the equilibrium strategies for both players involve splitting strength evenly across all the battlefields. \citet{Rob:ET2006} considers the case $\gamma = \infty$ and shows that the equilibrium mixed strategy of the stronger player stochastically assigns positive strength to each battlefield, whereas that of the weaker player gives zero strength to some randomly selected battlefields and randomly distributes the strength among the remaining fields. These results bear some analogy with the structure of the equilibrium in our game. \citet{TanShoLin:AI2010} consider contest games where the strengths of players are exogenously given and coaches simultaneously choose the order of players and then players with the corresponding position fight. This model was used by \citet{Ara:mimeo2009}. \section*{Acknowledgments} The gladiator game of \citet{KamLukNel:AJS1984} was pointed to us by Gil Ben Zvi. We are grateful to Sergiu Hart, Pierpaolo Brutti, Abram Kagan, Paolo Giulietti, Chris Peterson, and Andreas Hefti for their interest and excellent advice. 
We thank two referees and an associate editor for their insightful comments. \bibliographystyle{artbibst}
\section{Introduction} Accurate state estimation and mapping in large, perceptually-challenging environments have become a critical capability for autonomous mobile robots. Whereas typical visual SLAM approaches often perform poorly in dust, fog, or low-light conditions, LiDAR-based methods can provide more reliable localization due to the superior range and accuracy of direct depth measurements. However, recent work on LiDAR odometry (LO) has revealed the challenges of processing the large number of depth returns generated by commercial LiDAR sensors in real-time for high-rate state estimation \cite{shan2018lego, ebadi2020lamp}. This work will present several algorithmic innovations that make real-time localization with dense LiDAR scans feasible while also demonstrating the superiority of our method in terms of accuracy and computational complexity when compared to the state-of-the-art. Current LO algorithms estimate a robot's egomotion in two stages: first, by performing a ``scan-to-scan" alignment between adjacent LiDAR frames to recover an immediate motion guess, followed by a ``scan-to-map" registration between the current scan and past environmental knowledge to increase global pose consistency. Unfortunately, the large number of data points per scan from modern LiDARs can quickly overwhelm computationally-limited processors and bottleneck performance during alignment, which can ultimately induce frame drops and cause poor pose estimation. More specifically, ``scan-to-scan" alignment requires a registration of corresponding points between two clouds, but this process often involves a nearest-neighbor search whose cost grows rapidly with the number of points per scan. 
Feature-based methods \cite{shan2018lego, lvisam2021shan, ye2019tightly, shan2020lio} attempt to mitigate this by using only the most salient points, but these methods require an often computationally-intensive feature extraction step and may accidentally discard data which could otherwise help anchor the downstream registration. Moreover, in ``scan-to-map" alignment, keyed environmental history (which consists of all or a subset of past points) can grow rapidly in size as new scans are acquired and stored in memory by the system, significantly expanding the nearest-neighbor search space for typical submap extraction methods. Although tree-based data structures have been shown to decrease this nearest-neighbor search cost significantly \cite{palieri2020locus}, the extraction of a local submap can still involve on the order of hundreds of thousands of points after just a few keyframes and prevent consistent performance for long-term navigation. \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{figures/pdf/header.pdf} \vspace{-2mm} \caption{\textbf{Fast and light-weight LiDAR odometry.} Two of Team CoSTAR's robotic platforms which are limited in computational resources partially due to weight limitations. (A) Our custom quadrotor platform which features an Ouster OS1 LiDAR sensor on top. (B) Boston Dynamics' Spot robot with a mounted custom payload and a Velodyne VLP-16 with protective guards. 
(C) Top-down view of a mapped limestone mine using our lightweight odometry method on these robots during testing and integration for the DARPA Subterranean Challenge.} \label{fig:header} \vspace{-2mm} \end{figure} \begin{figure*}[!t] \centering \vspace{4mm} \includegraphics[width=0.85\textwidth]{figures/pdf/architecture.pdf} \vspace{-2mm} \caption{\textbf{LiDAR odometry architecture.} Our system first retrieves a relative transform between two temporally-adjacent scans of times $k$ and $k-1$ through ``scan-to-scan" (S2S) matching with RANSAC outlier rejection and an optional rotational prior from IMU. This initial estimate is then propagated into the world frame and used as the initialization point for our secondary GICP module for ``scan-to-map" optimization (S2M), which scan-matches the current point cloud $\mathcal{P}_k$ with a derived submap $\mathcal{S}_k$ consisting of scans from nearby and boundary keyframes. The output of this is our globally-consistent pose estimate which is subsequently checked against several metrics to determine if the current pose should be stored as a new keyframe or not.} \label{fig:architecture} \vspace{-6mm} \end{figure*} In this letter, we will present the Direct LiDAR Odometry (DLO) algorithm -- a high-speed and computationally-efficient frontend localization solution which permits the direct use of raw point cloud scans without significant preprocessing. A key insight of our work is the link between algorithmic speed and accuracy. Our contributions are as follows. First, a custom ``speed-first" pipeline that can accurately resolve robot egomotion in real-time using minimally-preprocessed LiDAR scans and an optional IMU on consumer-grade processors. Second, a novel keyframing system which adapts to environmental signals and permits fast and permissive submap generation via convex optimization. 
Third, several important algorithmic insights developed from real-world implementation to further reduce computational overhead, which resulted in NanoGICP, our custom iterative closest point solver for light-weight point cloud scan-matching with cross-object data sharing. And finally, extensive evaluation of our methods in several challenging environments on computationally-limited robotic platforms as part of Team CoSTAR's research and development efforts for the DARPA Subterranean Challenge. \subsection*{Related Work} LiDAR-based odometry is typically cast as a non-linear optimization problem to calculate a best-fit homogeneous transform which minimizes the error across corresponding, i.e., matching, points and/or planes between two point clouds. Since correspondences are not known \textit{a priori}, algorithms such as iterative closest point (ICP) \cite{chen1992object} or other variants like Generalized ICP (GICP) \cite{segal2009generalized} have become the standard for aligning two point clouds. However, searching over all data points can be computationally costly. \textit{Feature}-based methods attempt to extract and use only the most salient points before scan-matching to reduce computation. Such features are found either via hand-tuned methods \cite{zhang2014loam} or through learned networks \cite{yew20183dfeat} and include planes \cite{ye2019tightly}, edges and lines \cite{shan2020lio, lvisam2021shan}, or ground points \cite{shan2018lego}, and these works aim to translate insights gained from visual odometry (VO) techniques into the 3D domain. However, adding this step increases computational overhead and risks discarding data points which could help with better correspondence matching for odometry accuracy. Alternatively, \textit{direct} methods attempt to align dense point clouds but current approaches either heavily down-sample \cite{palieri2020locus} or use a filtering-based framework \cite{fastlio2} to achieve real-time performance. 
Nevertheless, a second stage immediately following scan alignment between adjacent clouds has been shown to help with global pose estimation consistency across a history of past scans \cite{palieri2020locus, ebadi2020lamp}. In ``scan-to-map," the transformation is further refined by aligning the current scan with an existing in-memory map. In practice, aligning with a submap (rather than the full history of scans) helps with computational efficiency, and this submap is typically derived by retrieving nearby map points within some radius of the robot's current position. This search in ``point-space," however, can quickly explode in computational expense due to the sheer number of data points per scan, and while there are techniques to mitigate this such as only incrementally storing map data via keyframe clouds \cite{palieri2020locus, shan2020lio}, this search can still involve thousands of operations and can quickly bottleneck system performance as time goes on. To address these issues, DLO only minimally preprocesses point clouds to provide accurate pose estimates even for robots with limited computational resources. The key contribution of our work lies in how we efficiently derive our submap for global refinement in scan-to-map matching. That is, rather than extracting points within a local vicinity of a robot's current position as most works do, DLO instead searches in keyframe-space by associating a scan's set of points with its corresponding keyframe position. The submap is subsequently constructed by concatenating the clouds from a subset of historic keyframes derived from nearby keyframes and those which make up the convex hull; this provides the current scan with both nearby and distant points in the submap to anchor to. In addition, a custom GICP solver enables extensive reuse of data structures across multiple solver instantiations to reduce redundant operations across the two-stage process. 
Our system also optionally accepts a relative prior from IMU in a loosely-coupled fashion to further improve accuracy in the optimization process, which can help especially during aggressive rotational motions. The reliability of our approach is demonstrated through extensive tests on multiple computationally-limited robotic platforms in several environments as part of Team CoSTAR's research and development efforts for the DARPA Subterranean Challenge in support of NASA JPL's Networked Belief-aware Perceptual Autonomy (NeBula) framework \cite{agha2021nebula}, in which DLO was the primary state estimation component on our fleet of autonomous aerial vehicles (Fig.~\ref{fig:header}A). \section{Methods} \subsection{Notation} A point cloud, $\mathcal{P}$, is composed of a set of points $p \in \mathcal{P}$ with Cartesian coordinates $p_{i} \in \mathbb{R}^3$. We denote \{$\mathcal{L}$\} as the LiDAR's coordinate system, \{$\mathcal{B}$\} as the robot's coordinate system located at the base link, and \{$\mathcal{W}$\} as the world coordinate system which coincides with \{$\mathcal{B}$\} at the initial position. Note that in this work we assume \{$\mathcal{L}$\} and \{$\mathcal{B}$\} reference frames coincide. Additionally, for optional optimization priors via proprioceptive sensors as described in Section~\ref{sec:optimization_priors}, we also assume that these frames coincide with \{$\mathcal{L}$\}, and subsequently \{$\mathcal{B}$\}, for convenience. We adopt standard convention such that $x$ points forward, $y$ points left, and $z$ points upward, and our work attempts to address the following problem: given adjacent point cloud scans $\mathcal{P}_k$ and $\mathcal{P}_{k-1}$ at time $k$, estimate the robot's current pose $\hat{\textbf{X}}_k \in \mathbb{SE}(3)$, trajectory $\hat{\textbf{X}}_{1:k}$, and map $\mathcal{M}_k$, all in \{$\mathcal{W}$\}. 
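To make the notation concrete, here is a small Python sketch (illustrative only, not the system's implementation; all function names are our own) that represents a pose in $\mathbb{SE}(3)$ as a $4\times4$ homogeneous matrix and composes two poses, as is done when a relative estimate in \{$\mathcal{L}$\} is propagated into \{$\mathcal{W}$\}:

```python
import math

def se3(R, t):
    """4x4 homogeneous transform from a 3x3 rotation and a translation."""
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    """Compose two 4x4 homogeneous transforms."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rot_z(yaw):
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

# World pose at time k-1: facing +y, one meter along x.
X_W_prev = se3(rot_z(math.pi / 2), [1.0, 0.0, 0.0])
# Relative scan-to-scan estimate in {L}: one meter forward.
X_L_k = se3(rot_z(0.0), [1.0, 0.0, 0.0])

# Propagate into the world frame: X~^W_k = X^W_{k-1} X^L_k.
X_W_k = matmul(X_W_prev, X_L_k)
print([row[3] for row in X_W_k[:3]])  # position, approximately [1, 1, 0]
```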
\subsection{Preprocessing} Our system assumes an input of 3D point cloud data gathered by a 360$^\circ$ LiDAR such as an Ouster OS1 or Velodyne VLP-16 (although our methods are not limited to these two). To minimize information loss from the raw sensor data, only two filters are used during preprocessing: first, we remove all point returns that may be from the robot itself through a box filter of size $1$m$^3$ around the origin. This is especially important if the LiDAR's field of view is within range of an aerial robot's propellers (Fig.~\ref{fig:header}A) or is surrounded by protective guards (Fig.~\ref{fig:header}B). The resulting cloud is then sent through a 3D voxel grid filter with a resolution of $0.25$m to lightly downsample the data for downstream tasks while maintaining dominant structures within the surrounding environment; the output of this is then used for subsequent tasks in the pipeline. Note that we do not correct for motion distortion in the point cloud as we assume that any distortion from robot movement is negligible due to the high frame rate, and we directly use the dense point cloud rather than extracting features. On average, each cloud contains ${\sim}10{,}000$ points after preprocessing. 
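A minimal Python sketch of this two-filter preprocessing (our own simplification of the idea, not the actual implementation; taking the centroid per voxel is one common downsampling rule):

```python
from collections import defaultdict

def box_filter(points, half_extent=0.5):
    """Drop returns inside a 1 m^3 box around the sensor origin,
    e.g. hits on the robot body or propeller guards."""
    return [p for p in points
            if not all(abs(c) <= half_extent for c in p)]

def voxel_grid_filter(points, leaf=0.25):
    """Lightly downsample: keep one centroid per occupied voxel."""
    voxels = defaultdict(list)
    for p in points:
        voxels[tuple(int(c // leaf) for c in p)].append(p)
    return [tuple(sum(cs) / len(pts) for cs in zip(*pts))
            for pts in voxels.values()]

raw = [(0.1, 0.1, 0.0),     # self-hit, removed by the box filter
       (5.00, 2.00, 0.10),
       (5.05, 2.05, 0.12),  # falls in the same voxel as the point above
       (12.0, -3.0, 1.5)]
cloud = voxel_grid_filter(box_filter(raw))
print(len(cloud))  # 2
```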
\vspace{1mm} \begin{algorithm}[!t] \small \setstretch{0.9} \SetAlgoLined \textbf{input:} $\mathcal{P}_k$, $\hat{\textbf{X}}^{\mathcal{W}}_{k-1}$ ; \textbf{initialize:} $\hat{\textbf{X}}^{\mathcal{W}}_{k-1}$ = $\textbf{I}$ or gravityAlign() \\ \textbf{output:} $\hat{\textbf{X}}^{\mathcal{W}}_{k}$, $\mathcal{M}_k$\\ \While{$\mathcal{P}_k$} { \small {\tcp{preprocessing}} $\mathcal{\bar{P}}_k$ $\leftarrow$ preprocessPoints($\mathcal{P}_k$) ; \\ computeAdaptiveParameters($\mathcal{\bar{P}}_k$) ; \\ \small {\tcp{initialization}} \If {$k = 0$} { $\mathcal{T}^{\text{t}_1}_k$, $\mathcal{C}^{\text{t}_1}_k$ $\leftarrow$ NanoGICP$_1$.buildTargetTree($\mathcal{\bar{P}}_k$) ; \\ $\mathcal{K}_k$ $\leftarrow$ updateKeyframeDatabase($\hat{\textbf{X}}^{\mathcal{W}}_{k-1}$, $\mathcal{\bar{P}}_k$) ; \\ \textbf{continue}; \\ } \small {\tcp{prior}} \leIf {IMU} { $\tilde{\textbf{X}}^{\mathcal{L}}_k$ $\leftarrow$ integrateGyro() } { $\tilde{\textbf{X}}^{\mathcal{L}}_k$ $\leftarrow$ $\textbf{I}$ } \small {\tcp{scan-to-scan}} $\mathcal{T}^{\text{s}_1}_k$, $\mathcal{C}^{\text{s}_1}_k$ $\leftarrow$ NanoGICP$_1$.buildSourceTree($\mathcal{\bar{P}}_k$) ; \\ $\hat{\textbf{X}}^{\mathcal{L}}_k$ $\leftarrow$ NanoGICP$_1$.align($\mathcal{T}^{\text{s}_1}_k$, $\mathcal{T}^{\text{t}_1}_k$, $\mathcal{C}^{\text{s}_1}_k$, $\mathcal{C}^{\text{t}_1}_k$, $\tilde{\textbf{X}}^{\mathcal{L}}_k$) ; \\ $\tilde{\textbf{X}}^{\mathcal{W}}_k$ $\leftarrow$ $\hat{\textbf{X}}^{\mathcal{W}}_{k-1}$ $\hat{\textbf{X}}^{\mathcal{L}}_k$ ; \\ \small {\tcp{scan-to-map}} $\mathcal{T}^{\text{s}_2}_k$ $\leftarrow$ $\mathcal{T}^{\text{s}_1}_k$; $\mathcal{C}^{\text{s}_2}_k$ $\leftarrow$ $\mathcal{C}^{\text{s}_1}_k$ ; \\ $\mathcal{Q}_k$ $\leftarrow$ getKeyframeNeighbors($\hat{\textbf{X}}^{\mathcal{W}}_{k-1}$, $\mathcal{K}_k$) ; \\ $\mathcal{H}_k$ $\leftarrow$ getKeyframeHulls($\hat{\textbf{X}}^{\mathcal{W}}_{k-1}$, $\mathcal{K}_k$) ; \\ $\mathcal{S}_k$ $\leftarrow$ $\mathcal{Q}_k \oplus \mathcal{H}_k$ ; \\ $\mathcal{T}^{\text{t}_2}_k$, 
$\mathcal{C}^{\text{t}_2}_k$ $\leftarrow$ NanoGICP$_2$.buildTargetTree($\mathcal{S}_k$) ; \\ $\hat{\textbf{X}}^{\mathcal{W}}_k$ $\leftarrow$ NanoGICP$_2$.align($\mathcal{T}^{\text{s}_2}_k$, $\mathcal{T}^{\text{t}_2}_k$, $\mathcal{C}^{\text{s}_2}_k$, $\mathcal{C}^{\text{t}_2}_k$, $\tilde{\textbf{X}}^{\mathcal{W}}_k$) ; \\ \small {\tcp{ update keyframe database and map}} $\mathcal{K}_k$ $\leftarrow$ updateKeyframeDatabase($\hat{\textbf{X}}^{\mathcal{W}}_{k}$, $\mathcal{\bar{P}}_k$) ; \\ $\mathcal{M}_k$ $\leftarrow$ $\mathcal{M}_{k-1}$ $\oplus$ $\left\{ \mathcal{K}_k \setminus \mathcal{K}_{k-1} \right\}$; \\ \small {\tcp{reuse structs for next iteration}} $\mathcal{T}^{\text{t}_1}_k$ $\leftarrow$ $\mathcal{T}^{\text{s}_1}_k$; $\mathcal{C}^{\text{t}_1}_k$ $\leftarrow$ $\mathcal{C}^{\text{s}_1}_k$ ; \\ \Return $\hat{\textbf{X}}^{\mathcal{W}}_{k}$, $\mathcal{M}_k$ \\ } \caption{Direct LiDAR Odometry} \label{alg:dlo} \end{algorithm} \subsection{Scan Matching via Generalized-ICP} LiDAR-based odometry can be viewed as the process of resolving a robot's egomotion by means of comparing successive point clouds and point clouds in-memory to recover an $\mathbb{SE}(3)$ transformation, which translates to the robot's 6-DOF motion between consecutive LiDAR acquisitions. This process is typically performed in two stages, first to provide a best instantaneous guess, which is subsequently refined to be more globally consistent. \subsubsection{Scan-to-Scan} In the first stage, the scan-to-scan matching objective is to compute an optimal relative transform $\hat{\textbf{X}}^{\mathcal{L}}_k$ between a source $\mathcal{P}_{k}^{\text{s}}$ and a target $\mathcal{P}_{k}^{\text{t}}$ (where $\mathcal{P}_{k}^{\text{t}}$ = $\mathcal{P}_{k-1}^{\text{s}}$) captured in $\mathcal{L}$ such that: \begin{equation} \hat{\textbf{X}}^{\mathcal{L}}_k = \argmin_{\textbf{X}^{\mathcal{L}}_k} \, \mathcal{E} \left( \textbf{X}^{\mathcal{L}}_k \mathcal{P}_{k}^{\text{s}}, \mathcal{P}_{k}^{\text{t}} \right) \,. 
\\ \label{eq:s2s_1} \end{equation} \noindent The residual error $\mathcal{E}$ from GICP is defined as: \begin{equation} \mathcal{E} \left( \textbf{X}^{\mathcal{L}}_k \mathcal{P}_{k}^{\text{s}}, \mathcal{P}_{k}^{\text{t}} \right) = \sum_i^N d_i^\text{T} \left( \mathcal{C}_{k,i}^{\text{t}} \, + \, \textbf{X}^{\mathcal{L}}_k \mathcal{C}_{k,i}^{\text{s}} \textbf{X}^{\mathcal{L}^\text{T}}_k \right)^{-1} d_i\\ \label{eq:s2s_2} \end{equation} \noindent such that the overall objective for this stage is: \begin{equation} \hat{\textbf{X}}^{\mathcal{L}}_k = \argmin_{\textbf{X}^{\mathcal{L}}_k} \, \sum_i^N d_i^\text{T} \left( \mathcal{C}_{k,i}^{\text{t}} \, + \, \textbf{X}^{\mathcal{L}}_k \mathcal{C}_{k,i}^{\text{s}} \textbf{X}^{\mathcal{L}^\text{T}}_k \right)^{-1} d_i \\ \label{eq:s2s_3} \end{equation} \noindent for $N$ number of corresponding points between point clouds $\mathcal{P}_{k}^{\text{s}}$ and $\mathcal{P}_{k}^{\text{t}}$, where $d_i = p_i^{\text{t}} - \textbf{X}^{\mathcal{L}}_k p_i^{\text{s}}$, $p_i^{\text{s}} \in \mathcal{P}_{k}^{\text{s}}, p_i^{\text{t}} \in \mathcal{P}_{k}^{\text{t}}, \forall i$, and $\mathcal{C}_{k,i}^{\text{s}}$ and $\mathcal{C}_{k,i}^{\text{t}}$ are the corresponding estimated covariance matrices associated with each point $i$ of the source or target cloud, respectively, using ten nearest neighbors. As will be further discussed in Section~\ref{sec:optimization_priors}, we can initialize the above objective function with a prior supplied by external sensors or odometry in an attempt to converge towards a global minimum. That is, for Eq.~(\ref{eq:s2s_3}), if a prior $\tilde{\textbf{X}}^{\mathcal{L}}_k$ is available by means of IMU preintegration, we can set the initial guess $\textbf{X}^{\mathcal{L}}_k$ = $\tilde{\textbf{X}}^{\mathcal{L}}_k$ to create a loosely-coupled system. 
If a prior is not available however, the system reverts to pure LiDAR odometry in which $\textbf{X}^{\mathcal{L}}_k$ = $\textbf{I}$ and relies solely on point cloud correspondence matching for this step. \subsubsection{Scan-to-Map} After recovering an initial robot motion estimate, a secondary stage of scan-to-map matching is performed and follows a similar procedure to that of scan-to-scan. However, rather than computing a relative transform between two instantaneous point clouds, the objective here is to further refine the motion estimate from the previous step to be more globally-consistent by means of matching with a local submap. In other words, the task here is to compute an optimal transform $\hat{\textbf{X}}_k^{\mathcal{W}}$ between the current source cloud $\mathcal{P}_k^\text{s}$ and some derived submap $\mathcal{S}_k$ such that: \begin{equation} \hat{\textbf{X}}^{\mathcal{W}}_k = \argmin_{\textbf{X}^{\mathcal{W}}_k} \, \mathcal{E} \left( \textbf{X}^{\mathcal{W}}_k \mathcal{P}_{k}^{\text{s}}, \mathcal{S}_{k} \right) \,. \\ \label{eq:s2m_1} \end{equation} \noindent After similarly defining the residual error $\mathcal{E}$ from GICP as in Eq.~(\ref{eq:s2s_2}), the overall objective function for scan-to-map is: \begin{equation} \hat{\textbf{X}}^{\mathcal{W}}_k = \argmin_{\textbf{X}^{\mathcal{W}}_k} \, \sum_j^M d_j^\text{T} \left( \mathcal{C}_{k,j}^{\mathcal{S}} \, + \, \textbf{X}^{\mathcal{W}}_k \mathcal{C}_{k,j}^{\text{s}} \textbf{X}^{\mathcal{W}^\text{T}}_k \right)^{-1} d_j \\ \label{eq:s2m_2} \end{equation} \noindent for $M$ number of corresponding points between point cloud $\mathcal{P}_{k}^{\text{s}}$ and submap $\mathcal{S}_{k}$, where $\mathcal{C}_{k,j}^{\mathcal{S}}$ is the corresponding scan-stitched covariance matrix for point $j$ in the submap as defined later in Section~\ref{sec:algorithmic_impl}. 
Optimization Eq.~(\ref{eq:s2m_2}) is initialized using the propagated result from scan-to-scan in the previous section from $\mathcal{L}$ to $\mathcal{W}$, i.e. $\tilde{\textbf{X}}^{\mathcal{W}}_k$ = $\hat{\textbf{X}}^{\mathcal{W}}_{k-1}$ $\hat{\textbf{X}}^{\mathcal{L}}_k$, so that this prior motion can be compared against historical map data for global consistency. The output of this stage $\hat{\textbf{X}}^{\mathcal{W}}_k$ is the final estimated robot pose. We note here that a key innovation of this work is how we derive and manage our submap for this stage: whereas previous works create a submap by checking the locality of each point individually in a stored map, we associate scans to keyframes and search rather in keyframe-space to stitch point clouds together and create $\mathcal{S}_{k}$. The implications of this include a far faster and more consistent submap creation process each iteration, which is additionally more permissive as compared to a radius-based search and will be further discussed in Section~\ref{sec:submapping}. \subsection{Optimization Prior} \label{sec:optimization_priors} Eq.~(\ref{eq:s2s_3}) describes the scan-to-scan non-linear optimization problem and can be initialized with a prior to reduce the chances of converging into a sub-optimal local minimum. This prior represents an initial guess of the relative motion between two LiDAR frames and can come from integrating angular velocity measurements from an inertial measurement unit (IMU). More specifically, angular velocity measurements $\hat{\boldsymbol{\omega}}_k$ from an inertial measurement unit can be defined as $\hat{\boldsymbol{\omega}}_k = \boldsymbol{\omega}_k + \textbf{b}_k^{\boldsymbol{\omega}} + \textbf{n}_k^{\boldsymbol{\omega}}$ and is assumed to be measured in $\mathcal{L}$ with static bias $\textbf{b}_k^{\boldsymbol{\omega}}$ and zero-mean white noise $\textbf{n}_k^{\boldsymbol{\omega}}$ for convenience. 
After calibrating for the bias and subtracting it off from subsequent raw measurements to provide $\hat{\boldsymbol{\omega}}_k$, a relative rotational motion of the robot's body between two LiDAR frames can be computed via gyroscopic propagation of the quaternion kinematics $\textbf{q}_{k+1} = \textbf{q}_k + (\tfrac{1}{2} \textbf{q}_k \otimes \boldsymbol{\omega}_k )\Delta t$, where $\textbf{q}_k$ is initialized to identity prior to integration, $\Delta t$ is the difference in time between IMU measurements in seconds, and only gyroscopic measurements found between the current LiDAR scan and the previous one are used. Note that we are only concerned with a rotational prior during IMU preintegration and leave the retrieval of a translational prior for future work. The resultant quaternion of this propagation is converted to an $\mathbb{SE}(3)$ matrix with zero translational component to be used as $\tilde{\textbf{X}}^{\mathcal{L}}_k$, the scan-to-scan prior. \subsection{Fast Keyframe-Based Submapping} \label{sec:submapping} \begin{figure}[!t] \centering \vspace{3mm} \includegraphics[width=0.85\columnwidth]{figures/pdf/keyframe_comparison.pdf} \vspace{-2mm} \caption{\textbf{Keyframe-based submapping.} A comparison between the different submapping approaches, visualizing the current scan (white), the derived submap (red), and the full map (blue). (A) A common radius-based submapping approach of $r$ = $20$m retrieved in point cloud-space. (B) Our keyframe-based submapping approach, which concatenates a subset of keyed scans and helps anchor even the most distant points in the current scan (green box) during the scan-to-map stage.} \label{fig:keyframe_comparison} \vspace{-2mm} \end{figure} A key innovation of this work lies in how our system manages map information and derives the local submap in scan-to-submap matching for global egomotion refinement. 
Rather than working directly with point clouds and storing points into a typical octree data structure, we instead create a secondary point cloud of \textit{keyframes} to search within, in which each keyframe is linked to its corresponding point cloud scan in a key-value dictionary. The resulting submap used for scan-to-submap matching is then created by concatenating the corresponding point clouds from a subset of the keyframes, rather than directly retrieving local points within some radius of the robot's current position. The implications of this are twofold: first, by searching in ``keyframe-space" rather than ``point cloud-space," a much more computationally-tractable problem is presented to the system when constructing a submap. Radius-based searches within a cumulative point cloud map can require distance calculations against hundreds of thousands of points, and even with an incremental octree data structure this process can quickly become infeasible and rapidly increase the computational expense over time as the number of points increases. Searching against keyframes however typically involves only a few hundred points even after long traversals and provides much more consistent computational performance, reducing the chance of a frame drop due to slow processing. Additionally, a keyframe-based approach can construct a much more permissive submap as compared to range-based methods. That is, since the size of a submap derived from keyframe point clouds relies solely on the LiDAR sensor's range rather than a predetermined distance, the derived submap can have a larger overlap with the current scan; this is illustrated in Fig.~\ref{fig:keyframe_comparison}. In this example, a submap of fixed radius $r$ = $20$m insufficiently overlaps with the current scan and can introduce drift over time due to containing only spatially-nearby points; however, a keyframe-based approach covers most of the current scan which can help with better global consistency. 
Expanding the radius size may help increase this overlap for radius-based methods, but doing so can significantly slow down downstream tasks such as the point normal calculations for GICP. \subsubsection{Keyframe Selection via kNN and Convex Hull} \begin{figure}[!t] \centering \vspace{2mm} \includegraphics[width=0.9\columnwidth]{figures/pdf/convex_adaptive.pdf} \vspace{-2mm} \caption{\textbf{Keyframe selection and adaptive thresholds.} (A) Our method's submap (red) is generated by concatenating the scans from a subset of keyframes (green spheres), which consists of $K$ nearest neighbor keyframes and those that construct the convex hull of the keyframe set. (B) An illustration of adaptive thresholding. In this scenario, the threshold decreases when traversing down a narrow ramp to better capture small-scale details.} \label{fig:convex_adaptive} \vspace{-2mm} \end{figure} To construct the submap $\mathcal{S}_k$, we concatenate the corresponding point clouds from a selected subset of environmental keyframes. Let $\mathcal{K}_k$ be the set of all keyframe point clouds such that $\mathcal{S}_k \subseteq \mathcal{K}_k$. We define submap $\mathcal{S}_k$ as the concatenation of $K$ nearest neighbor keyframe scans $\mathcal{Q}_k$ and $L$ nearest neighbor convex hull scans $\mathcal{H}_k$ such that $\mathcal{S}_k = \mathcal{Q}_k \oplus \mathcal{H}_k$, where the indices which specify the convex hull are defined by the set of keyframes which make up the intersection of all convex sets containing the keyframes which compose $\mathcal{K}_k$. The result of this is illustrated in Fig.~\ref{fig:convex_adaptive}A, in which the keyframes highlighted in green are those that compose the extracted submap, indicated in red. Intuitively, extracting nearest neighbor keyframes aims to help with overlap of nearby points in the current scan, while those from the convex hull --- which contain boundary map points --- increase the overlap with more distant points in the scan.
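This keyframe selection can be sketched in 2D as follows, using the monotone-chain convex hull algorithm. This is a simplified illustration with our own naming: keyframe positions are 3D in practice, and the paper keeps only the $L$ nearest hull keyframes, whereas this sketch keeps the entire hull:

```python
import math

def knn_keyframes(pose, keyframes, k):
    """Indices of the K keyframes nearest to the current pose."""
    order = sorted(range(len(keyframes)),
                   key=lambda i: math.dist(pose, keyframes[i]))
    return set(order[:k])

def convex_hull_keyframes(keyframes):
    """Indices of keyframes on the 2D convex hull (monotone chain)."""
    idx = sorted(range(len(keyframes)), key=lambda i: keyframes[i])
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    hull = []
    for seq in (idx, reversed(idx)):  # lower hull, then upper hull
        chain = []
        for i in seq:
            while len(chain) >= 2 and cross(keyframes[chain[-2]],
                                            keyframes[chain[-1]],
                                            keyframes[i]) <= 0:
                chain.pop()
            chain.append(i)
        hull.extend(chain[:-1])
    return set(hull)

def select_submap_keyframes(pose, keyframes, k):
    """Union of kNN and convex-hull keyframes; duplicates are used once."""
    return knn_keyframes(pose, keyframes, k) | convex_hull_keyframes(keyframes)
```

The set union mirrors the paper's note that keyframes classified as both a nearest neighbor and a hull vertex enter the submap only once.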
This combination aims to reduce overall trajectory drift by providing the system with multiple scales of environmental features to align with. Note that keyframes which are classified as both a nearest neighbor and a convex hull index are only used once to reduce redundancy in the submap. \subsubsection{Adaptive Keyframing} The location of environmental keyframes affects the derived submap and can subsequently influence the accuracy and robustness of the odometry. Keyframe nodes are commonly dropped using fixed thresholds (e.g., every $1$m or $10^\circ$ of translational or rotational change) \cite{palieri2020locus, shan2020lio, lvisam2021shan}, but the optimal position can be highly dependent on a surrounding environment's structure. More specifically, in large-scale settings, features captured by the point cloud scan are much more prominent and can be depended on for longer periods of time; however, for narrower, smaller-scale environments, a smaller threshold is necessary to continually capture the small-scale features (i.e., tight corners) in the submap for better localization. Thus, we choose to scale the translational threshold for new keyframes according to some notion of ``spaciousness'' in the instantaneous point cloud scan, defined as $m_k = \alpha m_{k-1} + \beta M_k$, where $M_k$ is the median Euclidean point distance from the origin to each point in the preprocessed point cloud, $\alpha = 0.95$ and $\beta = 0.05$ are chosen empirically, and $m_k$ is the smoothed signal used to scale the keyframe distance threshold $th_k$ such that \begin{equation*} th_k = \begin{cases} 10\text{m} & \text{if } m_k > 20\text{m} \\ 5\text{m} & \text{if } 10\text{m} < m_k \leq 20\text{m} \\ 1\text{m} & \text{if } 5\text{m} < m_k \leq 10\text{m} \\ 0.5\text{m} & \text{if } m_k \leq 5\text{m} \,, \end{cases} \end{equation*} with the rotational threshold held fixed at $30^\circ$. Fig.~\ref{fig:convex_adaptive}B illustrates the effects of this adaptive thresholding.
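The adaptive thresholding above amounts to a small amount of filtering logic; a minimal sketch (our own function names, not from the released implementation):

```python
def update_spaciousness(m_prev, ranges, alpha=0.95, beta=0.05):
    """m_k = alpha * m_{k-1} + beta * M_k, with M_k the median point range."""
    r = sorted(ranges)
    n = len(r)
    M_k = r[n // 2] if n % 2 else 0.5 * (r[n // 2 - 1] + r[n // 2])
    return alpha * m_prev + beta * M_k

def keyframe_threshold(m_k):
    """Translational keyframe threshold [m] from smoothed spaciousness m_k."""
    if m_k > 20.0:
        return 10.0
    if m_k > 10.0:
        return 5.0
    if m_k > 5.0:
        return 1.0
    return 0.5
```

A new keyframe would then be dropped whenever the translation since the last keyframe exceeds `keyframe_threshold(m_k)` or the rotation exceeds the fixed $30^\circ$ bound.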
\subsection{Algorithmic Implementation} \label{sec:algorithmic_impl} \begin{table}[!t] \renewcommand{\arraystretch}{1.2} \vspace{2mm} \caption{Summary of Data Structure Recycling} \vspace{-5mm} \label{table:structs} \begin{center} \begin{tabular}{c||cc} Element & Scan-to-Scan & Scan-to-Map \\ \hline $\mathcal{T}_k^{\text{source}}$ & \small build & $\xrightarrow{\text{reuse from S2S}}$ \\ $\mathcal{T}_k^{\text{target}}$ & $\mathcal{T}_{k-1}^{\text{source}}$ & \small build when $\mathcal{S}_{k} \neq \mathcal{S}_{k-1}$ \\ $\mathcal{C}_k^{\text{source}}$ & \small compute & $\xrightarrow{\text{reuse from S2S}}$ \\ $\mathcal{C}_k^{\text{target}}$ & $\mathcal{C}_{k-1}^{\text{source}}$ & $\sum_n^N \mathcal{C}^{\mathcal{S}}_{k,n}$ \\ \end{tabular} \end{center} \vspace{-2mm} \end{table} \subsubsection{Scan-Stitched Submap Normals} Generalized-ICP involves minimizing the plane-to-plane distance between two clouds, in which these planes are modeled by a computed covariance for each point in the scan. Rather than computing the normals for each point in the submap on every iteration (which can be infeasible for real-time operation), we assume that the set of submap covariances $\mathcal{C}^{\mathcal{S}}_k$ can be approximated by concatenating the covariances $\mathcal{C}^{\mathcal{S}}_{k,n}$ from the $N$ keyframes which populate the submap such that $\mathcal{C}^{\mathcal{S}}_k \approx \sum_n^N \mathcal{C}^{\mathcal{S}}_{k,n}$. As a consequence, each submap's set of covariances need not be explicitly computed, but rather just reconstructed by stitching together those calculated previously. \subsubsection{Data Structure Recycling} Expanding on the above idea, we have in fact identified several redundant instances in our pipeline which can benefit from data structure sharing between modules; this helps to reduce overall system overhead by removing unnecessary computations in the loop.
Summarized in Table~\ref{table:structs}, the system requires eight total elements to successfully perform scan-to-scan and scan-to-map matching: a kdtree $\mathcal{T}_k$ used to search for point correspondences, and the set of point covariances $\mathcal{C}_k$, for both source and target clouds in each scan-matching process. Out of the four required kdtree data structures, only two need to be built explicitly. That is, the tree for the source (input) cloud $\mathcal{T}_k^{\text{source}}$ can be built just once per scan acquisition and shared between both modules (as the same scan is used for both sources). The scan-to-scan target tree $\mathcal{T}_k^{\text{target}}$ is simply the previous iteration's scan-to-scan source tree $\mathcal{T}_{k-1}^{\text{source}}$ and thus can be propagated. The scan-to-map target tree needs to be built explicitly, but since the submap is derived from a set of keyframes, this build only needs to be performed when the set of keyframes that create the submap changes, such that $\mathcal{S}_{k} \neq \mathcal{S}_{k-1}$. Otherwise, the data structure can be reused for additional computational savings. Point covariances $\mathcal{C}_k$ needed for GICP, on the other hand, only need to be computed once per scan acquisition, and their data can be shared directly in the other three instances. \begin{figure}[!t] \centering \vspace{2mm} \includegraphics[width=0.85\columnwidth]{figures/pdf/alpha.pdf} \vspace{-2mm} \caption{\textbf{Alpha course map.} Different views and angles of the dense 3D point cloud map generated using our DLO algorithm on the Urban Alpha dataset.
Estimated positions at each timestamp were used to transform the provided scan into a world frame; this was performed for all scans across the dataset, which were then concatenated and voxel filtered to generate the above images.} \label{fig:alpha} \vspace{-3mm} \end{figure} \begin{figure}[!t] \centering \vspace{2mm} \includegraphics[width=0.8\columnwidth]{figures/pdf/ape.pdf} \vspace{-2mm} \caption{\textbf{Error comparison.} The absolute pose error plotted across a $1200$s window of movement, showing the difference between radius and keyframe submapping schemes. Keyframe-based approaches do not have the range restriction that radius-based approaches inherently contain, which directly translates to a lower error in odometry due to more perceptive submapping.} \label{fig:ape} \vspace{-2mm} \end{figure} \subsubsection{Dual NanoGICP} To facilitate the cross-talk between scan-matching modules, we assembled NanoGICP, a custom iterative closest point solver which combines the FastGICP\cite{koide2020voxelized} and NanoFLANN\cite{blanco2014nanoflann} open-source packages with additional modifications for data structure sharing as described above. In particular, NanoGICP uses NanoFLANN to efficiently build kdtree data structures, which are subsequently used for point cloud correspondence matching by FastGICP. In practice, data structure sharing is performed between two separate NanoGICP instantiations --- one to target each scan-matching problem --- and done procedurally as detailed in Algorithm~\ref{alg:dlo}.
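The recycling scheme summarized in Table~\ref{table:structs} amounts to simple caching logic around the tree-build and covariance computations; one possible sketch, with hypothetical class and callback names standing in for the NanoFLANN tree builder and the GICP covariance routine:

```python
class RecycledStructures:
    """Caches the kdtree and covariances of the latest source scan so that
    scan-to-scan and scan-to-map reuse them instead of rebuilding."""

    def __init__(self, build_tree, compute_covs):
        self.build_tree = build_tree      # e.g. a NanoFLANN-style builder
        self.compute_covs = compute_covs  # e.g. a GICP covariance routine
        self.src_tree = self.src_covs = None
        self.prev_tree = self.prev_covs = None
        self.submap_tree = None
        self.submap_keys = None

    def on_new_scan(self, scan):
        # previous source becomes the scan-to-scan target (T_{k-1}, C_{k-1})
        self.prev_tree, self.prev_covs = self.src_tree, self.src_covs
        # built/computed once per acquisition, shared by both modules
        self.src_tree = self.build_tree(scan)
        self.src_covs = self.compute_covs(scan)

    def submap_tree_for(self, keyframe_ids, submap_cloud):
        # rebuild only when the keyframe set composing the submap changes
        if keyframe_ids != self.submap_keys:
            self.submap_keys = set(keyframe_ids)
            self.submap_tree = self.build_tree(submap_cloud)
        return self.submap_tree
```

Only the source tree (once per scan) and the occasional submap tree are built explicitly; everything else is propagated, matching the table.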
\section{Results} \begin{figure}[!t] \centering \vspace{2mm} \includegraphics[width=0.8\columnwidth]{figures/pdf/gicp.pdf} \vspace{-2mm} \caption{\textbf{Average convergence time.} A comparison of average convergence times across $100$ benchmark alignments for each algorithm, including our NanoGICP solver and two other open-source GICP packages.} \label{fig:gicp} \vspace{-2mm} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.85\columnwidth]{figures/pdf/data_recycling.pdf} \vspace{-2mm} \caption{\textbf{Ablation study of data recycling schemes.} Box plots of the processing time and CPU usage for four different recycling variants, ranging from no data structure reuse to partial reuse and full reuse.} \label{fig:data_recycling} \vspace{-2mm} \end{figure} \begin{table}[!t] \renewcommand{\arraystretch}{1.5} \small \begin{center} \caption{Dropped LiDAR Scans per Recycling Scheme} \label{table:dropped_scans} \begin{tabular}{c||cccc} & None & KDTrees & Covariances & Both \\ \hline \% Scans & $9.37\%$ & $4.51\%$ & $0.00\%$ & $0.00\%$ \end{tabular} \end{center} \vspace{-4mm} \end{table} \begin{table*}[!t] \vspace{2mm} \caption{Comparison on Benchmark Datasets} \vspace{-2mm} \label{table:comparison} \centering \setlength{\tabcolsep}{8 pt} \renewcommand{\arraystretch}{1.} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{3}{*}{Method}} & \multicolumn{4}{c|}{Alpha Course} & \multicolumn{4}{c|}{Beta Course} & \multicolumn{2}{c|}{CPU Usage} \\ \cline{2-11} \multicolumn{1}{|c|}{} & \multicolumn{3}{c|}{APE {[}m{]}} & ME {[}m{]} & \multicolumn{3}{c|}{APE {[}m{]}} & ME {[}m{]} & \multicolumn{2}{c|}{No. 
of Cores} \\ \cline{2-11} \multicolumn{1}{|c|}{} & max & mean & std & rmse & max & mean & std & rmse & max & mean \\ \hline BLAM\cite{nelson} & 3.44 & 1.01 & 0.94 & 0.43 & 3.89 & 2.27 & 0.89 & 1.27 & 1.14 & 0.93 \\ \hline Cartographer\cite{hess2016real} & 5.84 & 2.91 & 1.60 & 1.05 & 2.64 & 1.37 & 0.67 & 0.31 & 1.75 & 0.88 \\ \hline LIO-Mapping\cite{ye2019tightly} & 2.12 & 0.99 & 0.51 & 0.45 & 1.60 & 1.18 & 0.22 & 0.61 & 1.80 & 1.53 \\ \hline LOAM\cite{zhang2014loam} & 4.33 & 1.38 & 1.19 & 0.60 & 2.58 & 2.11 & 0.44 & 0.99 & 1.65 & 1.41 \\ \hline LOCUS\cite{palieri2020locus} & 0.63 & 0.26 & 0.18 & 0.28 & 1.20 & 0.58 & 0.39 & 0.48 & 3.39 & 2.72 \\ \hline DLO & \textbf{0.40} & \textbf{0.18} & \textbf{0.06} & \textbf{0.19} & \textbf{0.50} & \textbf{0.16} & \textbf{0.09} & \textbf{0.19} & \textbf{0.92} & \textbf{0.62} \\ \hline \end{tabular} \vspace{-5mm} \end{table*} \subsection{Component Evaluation} To investigate the impact of our system's components, including keyframe-based submapping, submap normal approximation, and the reuse of data structures, we compare each component with its counterpart using the Alpha Course dataset from the Urban circuit of the DARPA Subterranean Challenge. This dataset contains LiDAR scans from a Velodyne VLP-16 sensor, in addition to IMU measurements from a VN-100, collected across 60 minutes in an abandoned powerplant located in Elma, WA which contains multiple perceptual challenges such as large or self-similar scenes (Fig.~\ref{fig:alpha}). For these component-wise evaluations, data was processed using a 4-core Intel i7 1.30GHz CPU. \subsubsection{Keyframe-Based Submapping} We compared the absolute pose error (APE), processing time, and CPU load across three submapping schemes, including: radius-based ($r$ = $10$m), keyframe-based with a $1$m static threshold, and keyframe-based with adaptive thresholding. For keyframe-based variants, we used 10 nearest-neighbor and 10 convex hull keyframes for submap derivation. 
From Fig.~\ref{fig:ape}, the influence of our approach is clear: submapping in keyframe-space can significantly reduce positional error by considering more distant points that would otherwise be outside the scope of a radius-based approach. These additional points influence the outcome of the GICP optimization process as they are considered during error minimization for the optimal transform; this is especially important in purely frontend-based odometry, since any additional error in pose can quickly propagate over time due to drift. Processing time and CPU load showed similar trends: the radius-based scheme processed each scan notably slower at $74.2$ms per scan with an average of $37.5\%$ CPU load, as compared to $21.6$ms / $10.2\%$ and $19.1$ms / $9.1\%$ for the static and adaptive keyframe schemes, respectively. \begin{figure}[!t] \centering \includegraphics[width=0.85\columnwidth]{figures/pdf/extreme.pdf} \vspace{-2mm} \caption{\textbf{Extreme environments.} \textit{Top:} Part of an underground mine in Lexington, KY mapped autonomously using our custom drone while running DLO. This environment contains challenging conditions such as: (A) low illuminance, (B) object obstructions, and (C) wet and muddy terrain. \textit{Bottom:} Top-down (D) and side view (E) of the three levels of an abandoned subway located in Downtown Los Angeles, mapped via DLO using a Velodyne VLP-16. In this run, we manually tele-operated the legged robot to walk up, down, and around each floor for a total of $856$m.} \label{fig:extreme} \vspace{-2mm} \end{figure} \subsubsection{Data Structure Recycling} To evaluate the effectiveness of data reuse, we measured and compared the processing time and CPU usage between different recycling schemes via a box plot (Fig.~\ref{fig:data_recycling}) and the percentage of dropped scans over the dataset (Table~\ref{table:dropped_scans}).
In a naive system which explicitly calculates each kdtree and cloud covariance, computation time exceeded LiDAR rate (10Hz for Velodyne) with a high average of $69.8$ms per scan and nearly $10\%$ of scans dropped due to high processing time. Recycling kdtrees but not covariances provides a slight improvement in processing time and CPU percentage, while recycling covariances but not kdtrees provides a more prominent performance boost; this is reasonable since our covariance recycling scheme is more aggressive than kdtree reusage. Finally, using the full scheme as detailed in Table~\ref{table:structs} significantly decreases both metrics, with an average processing time of $21.9$ms and $9.5\%$ CPU load, which prevents any LiDAR frames from dropping. \subsubsection{NanoGICP} To compare NanoGICP with the state-of-the-art, we use FastGICP's \cite{koide2020voxelized} benchmark alignment code found in the authors' open-source repository. This benchmark measures the average convergence time to align two LiDAR scans across $100$ runs, and we compare against PCL's \cite{Rusu_ICRA2011_PCL} GICP implementation as well as FastGICP's multithreaded implementation. Note that we do not compare against the voxelized FastGICP variant, since this method approximates planes with groups of planes and decreases overall accuracy. As shown in Fig.~\ref{fig:gicp}, we observed that NanoGICP converged faster on average ($42.53$ms) when compared to FastGICP ($72.88$ms) and PCL GICP ($178.24$ms). \subsection{Benchmark Results} To examine how our algorithm compares against the state-of-the-art, we use the Alpha and Beta courses from the Urban dataset of the Subterranean Challenge and evaluate accuracy and CPU load against several other LiDAR-based odometry packages, including BLAM\cite{nelson}, Cartographer\cite{hess2016real}, LIO-Mapping\cite{ye2019tightly}, LOAM\cite{zhang2014loam}, and LOCUS\cite{palieri2020locus}. 
Table~\ref{table:comparison} provides an overview of this comparison (numbers retrieved from \cite{palieri2020locus}). We note that LIO-SAM \cite{shan2020lio}, a current state-of-the-art tightly-coupled approach, and LVI-SAM \cite{lvisam2021shan}, which combines LIO-SAM \cite{shan2020lio} with VIO, could not be tested at the time of this work due to their sensitive calibration procedures and strict requirements on input data. We observed that our method's CPU load was far lower than that of any other algorithm, using less than one core both on average and at its peak. This is likely a result of how our system derives its submap, in addition to its extensive reuse of internal data structures. This observation can also explain DLO's far lower absolute pose error (APE) and root-mean-square error (RMSE). With this faster processing time, our method outperformed all other methods in both the Alpha and Beta courses, having more than twice the accuracy in the Beta course for max, mean, and standard deviation. In addition to our more permissive submapping approach, we are less likely to drop frames than other methods and hence provide more up-to-date information to the system, ultimately resulting in a lower trajectory error. \subsection{Field Experiments} We additionally tested and implemented our solution on several custom robotic platforms for real-world field operation. Specifically, we integrated DLO onto an aerial vehicle (Fig.~\ref{fig:header}A) with an Ouster OS1, VN-100 IMU, and an Intel NUC Board NUC7i7DNBE 1.90 GHz CPU, and a Boston Dynamics Spot (Fig.~\ref{fig:header}B) with a Velodyne VLP-16, VN-100 IMU, and an Intel NUC7i7DN 1.9 GHz CPU. For these real-world runs, we conducted both manual and autonomous traversals across two perceptually-degraded environments: an underground limestone cave in Lexington, KY and an abandoned subway in Downtown Los Angeles (Fig.~\ref{fig:extreme}).
Both environments contained environmental properties which often challenge perceptual systems, including poor lighting conditions, featureless corridors, and the presence of particulates such as dust or fog; these, however, did not pose a significant issue to our system. Despite traversing over $850$m across three different levels in the abandoned subway, our system reported only a $10$cm end-to-end drift. Our tests in the underground mine showed similar promise: while the environment completely lacked any external lighting deep within the cave, DLO could still stably track our aerial vehicle across $348$m of flight for autonomous traversal and exploration. The results from these field operations demonstrate the real-world reliability of our method. \begin{figure}[!t] \centering \vspace{2mm} \includegraphics[width=0.9\columnwidth]{figures/pdf/mega_cavern.pdf} \vspace{-2mm} \caption{\textbf{Mega Cavern.} Different views of the Mega Cavern in Louisville, KY mapped by our DLO algorithm. Data is courtesy of Team Explorer.} \label{fig:mega_cavern} \vspace{-2mm} \end{figure} \section{Discussion} This work presents Direct LiDAR Odometry (DLO), a light-weight and accurate frontend localization solution with minimal computational overhead for long-term traversals in extreme environments. A key innovation which distinguishes our work from others is how we efficiently derive a local submap for global pose refinement using a database of keyframe-point cloud pairs. This in turn permits a substantial number of solver data structures to be shared and reused between system modules, all of which is facilitated using our custom NanoGICP cloud registration package. We demonstrate the reliability of our approach through benchmarks and extensive field experiments on multiple platforms operating in large-scale perceptually-degraded environments. 
DLO was developed for and used on NASA JPL's Team CoSTAR's fleet of quadrotors in the DARPA Subterranean Challenge (Fig.~\ref{fig:mega_cavern}), and in the future we are interested in tighter IMU integration through a backend component as well as adding loop closures to further reduce long-term drift. \noindent \textbf{Acknowledgements:} The authors would like to thank Team CoSTAR teammates and colleagues, including Amanda Bouman, Luca Carlone, Micah Corah, Kamak Ebadi, Seyed Fakoorian, David Fan, Sung Kim, Benjamin Morrell, Joshua Ott, Andrzej Reinke, Toni Rosinol, and Patrick Spieler, for their valuable insight and productive discussions. \bibliographystyle{IEEEtran}
\section{Introduction} Graphs and complex networks can be classified into two major categories: \emph{geographical} and \emph{non-geographical} ones. Whereas in the latter type of networks, nodes do not have specific positions, in the former, each node has a well-defined spatial position, expressible by respective coordinates. Several real-world networks are geographical in nature, including power distribution (e.g.~\cite{Albert_power:2004}), tourism (e.g.~\cite{Baggio:2007}), transportation (e.g.~\cite{Barthelemy:2006}), and biological networks (e.g. bone structure~\cite{Costa_bone:2006}, gene expression~\cite{Costa_gene:2006, Costa_bioinf:2005}, and developing neuronal networks~\cite{Bao:2008,vanPelt:2004}). They all share the property that, to various extents, spatial proximity between nodes plays a role in shaping the connectivity structure. Often in these networks, spatially close nodes have a larger probability of being connected. Sometimes the role of spatial position is more intricate. For instance, in neuronal network development, axonal path finding is directed by the cooperation of multiple factors. These include mechanical ones, such as the presence of a fissure, the expression gradient of molecules as positive, permissive, or negative guidance factors, and the cell adhesion molecules involved in fasciculation~\cite{Bao:2008}. In addition, we need to consider~\cite{vanPelt:2004} neurotrophic factors that regulate neuronal survival, differentiation, and signaling~\cite{Huang:2001,Sofroniew:2001,Squire:2003}, the gradients of neurotransmission~\cite{Semyanov:2005}, and the interactions amongst these factors~\cite{Gong:2004,Kwok:2007}. To take these into account, dynamical vector representations need to be associated with network nodes, vertices, or geographical locations. We wish to incorporate these requirements into geographical networks.
A variety of geographical networks have been proposed in the literature (e.g.~\cite{Kaiser_geo:2004, Costa_Sznajd:2005, Gastner:2006, Warren:2002}). A new family of networks, namely the \emph{knitted networks}, was proposed recently~\cite{Costa_path:2007, Costa_comp:2007} to include all networks defined and composed by paths, i.e. sequences of edges without repetition of nodes. In this article, we expand the family of knitted networks by incorporating structures generated by trajectories defining paths following a given vector field. More specifically, a set of nodes is distributed within a given domain (a 2D space in this article, but the extension to higher dimensions is immediate); one node is chosen as origin, and the respective trajectory (line of force) is obtained while the nodes which are closer than a given maximum distance to the current point of the trajectory are sequentially incorporated into the path. This procedure is repeated several times, yielding a network with connections aligned to the vector field. In other words, the paths correspond to approximations of the solutions of the dynamical system represented by the vector field. Figure~\ref{fig:ex} illustrates two trajectory networks obtained from the vector fields $\vec{\phi}(x,y) = (y,x)$ (a) and $\vec{\phi}(x,y) = (y,-x)$ (b). \begin{figure*}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.4\linewidth]{ex.eps} \includegraphics[width=0.4\linewidth]{ex_circ.eps} \\ (a) \hspace{7cm} (b) \\ \caption{Trajectory networks obtained for the fields $\vec{\phi}(x,y) = (y,x)$ (a) and $\vec{\phi}(x,y) = (y,-x)$ (b).
}~\label{fig:ex} \end{center} \end{figure*} Trajectory networks represent a natural putative model for several real-world structures and phenomena, for instance neural growth and development, including axonal navigation~\cite{Bao:2008}, the establishment of neuronal connections under the influence of neurotrophic fields (e.g.~\cite{Huang:2001, Sofroniew:2001, Squire:2003}), neurotransmitter diffusion~\cite{Semyanov:2005} and their relation with adaptive plasticity~\cite{Kwok:2007}, the growth of transportation systems under geographical and economical influences (e.g. `every path leads to Rome'), the growth of trees and roots under the influence of trophic factors~\cite{Gregory:2006}, and the development of channel-based systems such as bone structure and the vascular system, amongst many other important systems. The focus of attention in the current work is to investigate how the topology of trajectory networks, a geographical type of knitted network, is affected as a consequence of progressive \emph{geographical infiltration}. By geographical infiltration (hence infiltration for short), we mean any process which interconnects pairs of nodes. Infiltration affects several real-world systems, e.g. the appearance of cracks along channels, the establishment of new local routes between towns and cities, contaminations between vessels of fibers, gallery building by parasites, intentional attacks, and internal spreading of diseases, to cite just a few cases. In the current work, the infiltration process is simulated by selecting nodes at random and connecting each selected node to all other nodes which are closer than a maximum distance $D_p$. Therefore, the adopted infiltration corresponds to the progressive incorporation of \emph{tufts} of local connectivity.
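The infiltration procedure just described can be sketched as follows. This is a minimal 2D illustration with our own naming; in the article the seed nodes are drawn at random, whereas here they are passed explicitly for clarity:

```python
import math

def geographical_infiltration(nodes, edges, seeds, d_p):
    """Geographical infiltration: for each seed node (selected at random
    in the article; passed explicitly here), connect it to every node
    closer than the maximum infiltration distance d_p."""
    edges = set(edges)
    for i in seeds:
        for j in range(len(nodes)):
            if j != i and math.dist(nodes[i], nodes[j]) < d_p:
                edges.add((min(i, j), max(i, j)))  # undirected 'tuft' edge
    return edges
```

Each call adds one or more tufts of local connectivity; repeating it with more seeds simulates progressive infiltration.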
Here, we investigate the effects of progressive infiltration on the topology of trajectory networks by quantifying the degree, clustering coefficient, size of the largest component, as well as the number and length of the chains present in the network. A recent study highlighted chains as an important category of network motifs~\cite{Costa_chains:2008}. Real-world networks often contain several chains, in ways specific to their structure and function. Thus, trajectory networks are possibly the first theoretical model to naturally incorporate these motifs. These motifs are a consequence of the linking of spatially distributed nodes along the trajectories defined by the given vector fields. The common trait in real-world network structure that these models represent particularly well is the presence of independent paths, with relatively few collaterals. Therefore, it becomes particularly important to characterize the structure of trajectory networks before and after infiltration by considering the number and length of the existing chains. Interestingly, the effect of infiltrations can be either beneficial or harmful, depending on each specific system. For instance, the incorporation of additional local routes is in principle beneficial for transportation and communication systems. On the other hand, the addition of local connections in biological networks (e.g. bone or neuronal networks) may have catastrophic consequences. Observe that in the latter situation the main purpose of the chains/fibers is actually to provide mutual isolation. In both cases, the quantification of the effects of the infiltration on the topology of the respective networks can provide valuable information to be interpreted from the perspective of each problem. This article starts by presenting the basic concepts --- including the generation of trajectory networks and the geographical infiltrations --- and follows by describing the experiments and discussing the respectively obtained results.
\section{Basic Concepts} A \emph{complex network} is a graph exhibiting a particularly intricate structure. The connectivity of an undirected, unweighted network can be completely represented in terms of the respective \emph{adjacency matrix} $K$, such that each interconnection between two nodes $i$ and $j$ implies $K(i,j)=K(j,i)=1$, with $K(i,j)=K(j,i)=0$ being otherwise imposed. The \emph{immediate neighbors} of a node $i$ are those nodes which receive an edge from $i$. The \emph{degree} of a node $i$ is equal to the number of its immediate neighbors. Two nodes are said to be \emph{adjacent} if they share an edge; two edges are adjacent if they share one node. A sequence of adjacent edges is a \emph{walk}. A \emph{path} is a walk which never repeats a node or edge. The length of a walk (or path) is equal to the respective number of involved edges. The \emph{clustering coefficient} of a node $i$ is calculated by dividing the number of interconnections between its immediate neighbors by the maximum possible number of connections which could be established between those neighbors. A \emph{connected component} of a network is a subgraph such that each of its nodes can be reached from any of its other nodes~\footnote{Often a connected component is understood to be maximal, in the sense of incorporating all mutually reachable nodes in that component.}. A \emph{chain} is a subgraph of a network such that each of its nodes has degree 1 or 2 and no additional nodes of degree 1 or 2 are connected to it~\cite{Costa_chains:2008}. The \emph{length} of a chain is given by its number of edges. Two measurements which can be used to characterize the chains in a given network are the number of such chains and the average and standard deviation of their respective lengths. Chains are naturally related to paths along the network.
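For illustration, chains as defined above can be extracted as the connected components of the subgraph induced by the degree-1 and degree-2 nodes. A minimal sketch (our own naming; the length counted here is the number of edges internal to each chain):

```python
def find_chains(adj):
    """Return (nodes, length) for each chain: a connected run of nodes of
    degree 1 or 2, with length measured as the number of internal edges."""
    low = {v for v, nb in adj.items() if len(nb) in (1, 2)}
    seen, chains = set(), []
    for start in low:
        if start in seen:
            continue
        comp, stack = [], [start]
        seen.add(start)
        while stack:  # flood-fill restricted to low-degree nodes
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w in low and w not in seen:
                    seen.add(w)
                    stack.append(w)
        n_edges = sum(1 for u in comp for w in adj[u] if w in comp) // 2
        chains.append((sorted(comp), n_edges))
    return chains
```

The number of chains and the average and standard deviation of the returned lengths then give the chain measurements discussed above.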
\section{Trajectory Networks} A family of networks, namely the \emph{knitted complex networks}, was introduced recently~\cite{Costa_path:2007, Costa_comp:2007}, incorporating all networks organized around the concept of \emph{paths}. Two main types of knitted networks were initially identified: \emph{path-transformed} and \emph{path-regular}. The former subcategory of knitted complex network is obtained by performing the star-path transformation~\cite{Costa_path:2007} on a given network (star and path connectivities can be understood as duals, e.g. through the line-graph transformation). Therefore, networks with a power-law distribution of path lengths can be obtained by star-path transforming Barab\'asi-Albert networks~\cite{Albert_Barab:2002}. The second type of knitted complex networks, namely the path-regular networks, is particularly simple and involves starting with a set of $N$ isolated nodes and performing several paths encompassing all nodes. Path-regular networks have been found to exhibit markedly similar properties between different configurations or nodes in the same configuration (e.g.~\cite{Costa_comp:2007, Costa_longest:2007}). An even more regular version of the path-regular network, with all nodes exhibiting identical degrees, was later reported in~\cite{Costa_equiv:2008, Costa_conc:2008}. Geographical networks are characterized by the fact that each of their nodes has a well-defined spatial position. Geographical networks represent an important category of complex networks because several real-world structures are inherently embedded into 2D or 3D spaces, and their connectivities are strongly affected by proximity and spatial adjacency. Given a set of spatially distributed nodes embedded in a continuous space to which a vector field is associated, it is possible to obtain geographical networks whose connections are a consequence not only of the proximity between nodes, but also of the orientations implied by the respectively associated vector field. 
Several real-world systems can be thought of as involving a geographical distribution of nodes and associated vector fields. For instance, the neurons along the cortical surface can be represented as a set of geographically distributed nodes, while their connections are established to a great extent as a consequence of neurotrophic fields (e.g. electrical or chemical gradients). Systems of streets, roads and highways can also be understood as involving a set of spatially distributed nodes (the intersections between routes), with the interconnections being established in terms of the spatial proximity between nodes as well as geographical and economical fields (e.g. the trend to connect to a big city, to avoid a geographical obstacle or to follow level-sets of height). Several other natural and human-made complex systems can be modeled by trajectory networks. Trajectory networks are related to gradient networks (e.g.~\cite{Toroczkai:2004, Toro_Nature:2004, Costa_deriv:2005}), field interactions~\cite{Costa_gene:2006, Costa_bioinf:2005, Costa_saliency:2006}, as well as dynamical systems (e.g.~\cite{Thurner:2005, Borges:2007}). In the present work, we understand trajectory networks as a particular case of knitted networks. The trajectory networks considered in the present article are obtained as follows. First, a two-dimensional workspace of size $L \times L$ is defined, and a vector field $\vec{\phi}(x,y)$ is associated to it. For simplicity's sake we assume that $-L/2 \leq x,y \leq L/2$. All networks considered henceforth in this work are obtained for the vector field $\vec{\phi}(x,y) = (y,x)$. $N$ points are distributed along this space with uniform probability. A total of $N_p$ trajectories are then performed while obtaining each network. A starting point is randomly selected, and the respective line of force (always parallel to the vector field) is calculated by using the Euler leapfrog numerical method (e.g.~\cite{Mathews:1987}). 
At each current time, if a new node is found at a distance not exceeding $D_p$, that node is connected to the previous node, and so on. As is clear from the example of trajectory network shown in Figure~\ref{fig:ex}, the combination of proximity and orientation constraints while performing the connections yields networks incorporating several chains, which closely follow the vector field orientation. Different degrees of interconnectivity between and along the chains can be obtained by varying the total number of points and the parameter $D_p$. Observe that the number of chains is reduced for larger values of $D_p/N$. Once all trajectories are performed, the isolated points can be removed (as adopted henceforth) or not (allowing further connections). \section{Geographical Infiltrations} Given a geographical network, several types of perturbations of its structure can arise as a specific consequence of its geographical nature, in the sense that nodes which are spatially closer may interfere with one another. For instance, in a neuronal system, unwanted connections may appear between nearby neurons as a consequence of diseases. In transportation systems, it is only too natural to incorporate new local connections to the network. Several other types of geographical interferences are possible, including those arising as a consequence of contaminations, attacks, infiltrations, amongst many others. In this work we incorporate progressive infiltrations to a given geographical network by selecting one of its nodes and connecting to it all other nodes which are not further than a maximum distance $D_i$. \section{Results and Discussion} A set of 30 trajectory networks was obtained for the field $\vec{\phi}(x,y) = (y,x)$. A total of 1000 nodes was initially distributed within a square region of side $L=100$ centered at $(0,0)$, and $N_p = 100$ trajectories were numerically calculated. 
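The generation and infiltration procedures above can be sketched as follows. This is a minimal illustration under assumed parameters (the step size $h$, trajectory length, reduced node counts, and all function names are ours, not taken from the original implementation):

```python
# Minimal sketch (assumed parameters and names) of trajectory-network
# construction and tuft infiltration for the vector field phi(x, y) = (y, x).
import math
import random

def build_trajectory_network(N=200, L=100.0, N_p=20, D_p=2.0, h=0.5, steps=400):
    random.seed(0)
    pts = [(random.uniform(-L / 2, L / 2), random.uniform(-L / 2, L / 2))
           for _ in range(N)]
    edges = set()
    for _ in range(N_p):
        x, y = random.choice(pts)            # random starting point
        prev = None
        for _ in range(steps):
            # link the nearest node within D_p to the previously found node
            near = [i for i, (px, py) in enumerate(pts)
                    if i != prev and math.hypot(px - x, py - y) <= D_p]
            if near:
                i = min(near, key=lambda i: math.hypot(pts[i][0] - x,
                                                       pts[i][1] - y))
                if prev is not None:
                    edges.add((min(prev, i), max(prev, i)))
                prev = i
            # Euler step along the line of force of phi(x, y) = (y, x)
            x, y = x + h * y, y + h * x
            if abs(x) > L / 2 or abs(y) > L / 2:
                break                        # trajectory left the workspace
    return pts, edges

def tuft_infiltration(pts, edges, D_i=5.0):
    """One infiltration: pick a node and link all nodes within D_i to it."""
    c = random.randrange(len(pts))
    cx, cy = pts[c]
    for i, (px, py) in enumerate(pts):
        if i != c and math.hypot(px - cx, py - cy) <= D_i:
            edges.add((min(c, i), max(c, i)))
    return edges

pts, edges = build_trajectory_network()
before = len(edges)
for _ in range(10):
    edges = tuft_infiltration(pts, edges)
print(before, len(edges))   # infiltrations can only add edges
```

Varying `D_p`, `N` and `D_i` in this sketch reproduces the qualitative trade-off discussed in the text: small `D_p/N` yields many disconnected chains, while large `D_i` rapidly interconnects them.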
Starting from a randomly chosen node, each node at a maximum distance $D_p=2$ from the current growing extremity of each trajectory was successively connected. An example of an obtained trajectory network is shown in Figure~\ref{fig:ex}. Each of the 30 networks underwent progressive infiltrations assuming $D_i=5$ and $D_i = 10$. Figure~\ref{fig:inf_5} shows four stages (100, 200, 300 and 400) along the successive infiltrations for $D_i=5$. Examples of the results of infiltrations with $D_i = 10$ are depicted in Figure~\ref{fig:inf_10}. \begin{figure*}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.4\linewidth]{inf_5_a.eps} \includegraphics[width=0.4\linewidth]{inf_5_b.eps} \\ (a) \hspace{7cm} (b) \\ \includegraphics[width=0.4\linewidth]{inf_5_c.eps} \includegraphics[width=0.4\linewidth]{inf_5_d.eps} \\ (c) \hspace{7cm} (d) \\ \caption{The network in Fig.~\ref{fig:ex} after 100 (a), 200 (b), 300 (c) and 400 (d) infiltrations with $D_i = 5$. }~\label{fig:inf_5} \end{center} \end{figure*} \begin{figure*}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.4\linewidth]{inf_10_a.eps} \includegraphics[width=0.4\linewidth]{inf_10_b.eps} \\ (a) \hspace{7cm} (b) \\ \includegraphics[width=0.4\linewidth]{inf_10_c.eps} \includegraphics[width=0.4\linewidth]{inf_10_d.eps} \\ (c) \hspace{7cm} (d) \\ \caption{The network in Fig.~\ref{fig:ex} after 100 (a), 200 (b), 300 (c) and 400 (d) infiltrations with $D_i = 10$. }~\label{fig:inf_10} \end{center} \end{figure*} In order to characterize the alterations in the topology of the trajectory networks as they underwent progressive infiltrations, a set of measurements (e.g.~\cite{Costa_surv:2007}) was taken along the process. These measurements included the average and standard deviation of the node degree, clustering coefficient, size of the largest connected component, and chain lengths along successive infiltration stages. Only chains longer than 3 edges were considered in the respective measurements. 
These chains were identified by starting from each of the network nodes with degree 1 or 2 and following along both sides (in case of degree 2) until the respective extremities of the chains (nodes with degree 1 or larger than 2) were found (each detected chain was removed from the network in order to accelerate the processing of the remaining nodes). The results obtained for $D_i=5$ and $D_i=10$ are shown in Figures~\ref{fig:meas_5} and~\ref{fig:meas_10}, respectively. Figures~\ref{fig:meas_all_5} and~\ref{fig:meas_all_10} show the above measurements for \emph{all} the 30 considered networks. \begin{figure*}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.9\linewidth]{meas_5.eps} \\ \caption{Measurements of degree, clustering coefficient, size of the largest connected component and chain lengths in terms of the number of infiltrations (identified as `time') with $D_i=5$ for a network obtained for the vector field $\vec{\phi}(x,y) = (y,x)$. }~\label{fig:meas_5} \end{center} \end{figure*} \begin{figure*}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.9\linewidth]{meas_10.eps} \\ \caption{Measurements of degree, clustering coefficient, size of the largest connected component and chain lengths in terms of the number of infiltrations (identified as `time') with $D_i=10$ for a network obtained for the vector field $\vec{\phi}(x,y) = (y,x)$. }~\label{fig:meas_10} \end{center} \end{figure*} \begin{figure*}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.9\linewidth]{meas_all_5.eps} \\ \caption{Averages of degree, clustering coefficient, size of the largest connected component and chain lengths in terms of the number of infiltrations (identified as `time') with $D_i=5$ for each of the 30 networks obtained for the vector field $\vec{\phi}(x,y) = (y,x)$. 
}~\label{fig:meas_all_5} \end{center} \end{figure*} \begin{figure*}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.9\linewidth]{meas_all_10.eps} \\ \caption{Averages of degree, clustering coefficient, size of the largest connected component and chain lengths in terms of the number of infiltrations (identified as `time') with $D_i=10$ for each of the 30 networks obtained for the vector field $\vec{\phi}(x,y) = (y,x)$. }~\label{fig:meas_all_10} \end{center} \end{figure*} It is clear from Figures~\ref{fig:meas_5} to~\ref{fig:meas_all_10} that, as could be expected, the degree and clustering coefficient both increased as a consequence of the addition of the infiltration tufts. Both such increases are sublinear, with a steeper decrease in the rate of clustering coefficient increase observed for $D_i=10$ (Fig.~\ref{fig:meas_all_10}). The relative sizes of the maximum connected components undergo an abrupt transition within the first 160 infiltrations (most of the transitions take place before that value) for both settings of $D_i$, the transition being more abrupt for $D_i=10$. This change is related to the percolation of the chains in the original network. Another relatively abrupt change is observed for the chain lengths, most of which stabilize at a value near 6 for $D_i=5$ and 4 for $D_i=10$. The interval from the start of the infiltrations until the average length of the chains stabilizes (as observed above) is called the \emph{period of collapse} of the chains. Very few networks retained large average chain lengths after 200 infiltrations. This confirms the fact, evident from Figure~\ref{fig:scatt}, that the tuft infiltrations tend to quickly eliminate most of the long chains in the trajectory networks (the chain collapse). For larger values of $D_i$, after the collapse of the chains, the vector field influence on the network connectivity can hardly be distinguished by visual inspection, such as in Figures~\ref{fig:inf_10}(b-d). 
It is important to keep in mind that the fact that small values of $D_i$ tend to imply little effect over the chain structure of the trajectory networks is ultimately related to the number $N$ of initial nodes and the maximal distance $D_p$ considered for chaining the nodes during the construction of the networks. The two involved critical phenomena, namely the percolation of the networks and the collapse of the chains, were investigated further in order to search for possible relationships between their respective onsets. In order to do so, transition points along the successive infiltrations were identified automatically. These points, respectively $T_p$ and $T_c$, correspond to the first occurrence of the value 1 for the relative size of the largest connected component and the first occurrence of an average chain length smaller than or equal to 5, respectively. Figure~\ref{fig:scatt} shows the distribution of $T_p$ and $T_c$ obtained for the 30 realizations of networks with $D_i=10$. It is clear from this figure that the two critical phenomena taking place in the considered trajectory networks seem to be largely independent, in the sense that no correlation has been observed between their critical values. Interestingly, as shown in Figure~\ref{fig:scatt}, the collapse of the chains can take place before the respective percolation. \begin{figure}[htb] \vspace{0.3cm} \begin{center} \includegraphics[width=0.9\linewidth]{scatt.eps} \\ \caption{Scatterplot of the percolation ($T_p$) and collapse ($T_c$) critical times for $D_i=10$. }~\label{fig:scatt} \end{center} \end{figure} \section{Concluding Remarks} Geographical networks represent an important category of complex networks because of their natural potential for modeling a large number of real-world and human-made complex structures and systems. 
At the same time, the category of complex networks built up by paths, namely the \emph{knitted networks}, constitutes an important superclass of complex structures because of their intrinsic association with the concept of paths (as opposed to star connectivity) and random walk dynamics (e.g.~\cite{Costa_path:2007, Costa_comp:2007, Costa_longest:2007}). In this work, trajectory networks have been understood to belong to the supercategory of knitted networks as a consequence of the fact that these structures are the result of path generation processes. Trajectory networks constitute a special case, in which the paths tend to follow an associated vector field. Our main interest in the present work, however, consisted in investigating how the topology of trajectory networks changed as a consequence of geographical infiltrations. While several types of attacks and perturbations have been considered and investigated in complex network research, relatively less attention has been devoted to perturbations intrinsically related to geographical constraints, especially the adjacency and proximity between nodes. Yet, several important real-world and human-made systems are prone to this type of perturbation, ranging from the onset of unwanted neuronal connections to the incorporation of new local routes into transportation systems. The main contributions reported in this article are listed and reviewed in the following: \emph{Trajectory networks as a special case of knitted complex networks:} We have defined trajectory networks as a novel sub-class of knitted networks. This type of geographical knitted network corresponds to an interesting case where the connectivity is a consequence of both the proximity between nodes and the orientation of the underlying vector field. 
\emph{New type of perturbation of network structure:} We considered, possibly for the first time, perturbations (or `attacks') to geographical networks which depend on the proximity between the spatially distributed nodes. We focused attention on `tuft' infiltrations, where a node $i$ is randomly chosen and all other nodes which are closer than a maximum distance $D_i$ are connected to node $i$. This type of topological change can be related to several real-world effects such as unwanted neuronal tangles as a consequence of diseases, the establishment of local connections in transportation networks, contaminations, and attacks. \emph{Qualitative changes resulting from infiltrations:} The progressive infiltration of a trajectory network was investigated in a systematic manner, considering 30 realizations of networks obtained for the same configuration with respect to the vector field $\vec{\phi}(x,y) = (y,x)$. The changes in the network topology were monitored by taking several measurements including the degree, clustering coefficient, size of the largest connected component, as well as the particularly relevant lengths of the existing chains. The latter measurements are especially important because the trajectory networks are inherently composed of chains. While the degree and clustering coefficients underwent relatively smooth increases, the size of the largest component and average chain lengths were subjected to relatively abrupt changes related to the percolation of the network (in the case of the largest connected component) and to the collapse of the chain structure (in the case of the average chain lengths). The value of $D_i$ was found to have a great influence on such topological changes induced by the infiltrations, with values much larger than $D_p$ implying particularly intense changes, especially regarding the chain structure. After the collapse of the chains, the effect of the original vector field on the network connectivity could hardly be discerned. 
Such findings are particularly important for a large number of real-world structures underlain by trajectory networks and geographical infiltrations. \emph{Independence of percolation and collapse of chains:} The progressive infiltration of trajectory networks involves two critical phenomena: its percolation and the collapse of its chain structure. Interestingly, no clear relationship between these phenomena has been identified by considering the critical times $T_p$ and $T_c$. This implies that the collapse of the chains cannot be predicted from the percolation of the respective network, and vice-versa. As a matter of fact, it has also been observed that the collapse of the chains can take place before the percolation of the respective network. \emph{Modeling of brain development:} The current study illustrates that, in order to avoid pathological network conditions, besides growth, a mechanism for the selective elimination of connections is also necessary. Such a mechanism can be observed at work in brain development. The formation of neuronal networks involves extensive growth, but also the elimination of neurons and connections~\cite{Quartz:99}. Isolated nerve cells undergo apoptosis; dendritic arbors are built and retracted based on signaling efficacy and electrical activity in the pre- and postsynaptic neurons. As a result, synapses undergo extensive rewiring after their initial attachment~\cite{Zhang_Poo:2001}. These processes work together to maintain a functional network architecture for effective communication between brain cells~\cite{Kwok:2007}. The several possibilities of future work include but are not limited to the following: \emph{Other types of vector fields:} It would be interesting to investigate how the patterns of topological changes observed in this work extend to trajectory networks obtained by considering other vector fields, as well as other configurations of the involved parameters. 
\emph{Orthogonal infiltrations:} In this work we focused attention on tuft infiltrations. It would be interesting to study the topological changes of trajectory networks with respect to other types of geographical perturbations, such as connecting points according to proximity and orientations orthogonal to the vector field (possibly also through trajectories). \emph{Infiltration by increasing distances:} While the infiltrations implemented in this article consisted in selecting nodes followed by tuft interconnection, it would be particularly interesting to investigate the topological alterations of trajectory networks while all pairs of nodes are joined according to successive distances. Such a type of infiltration is guaranteed to completely eliminate the chains after a critical interval. \emph{Application to real-world networks:} It would be interesting to quantify the alterations of real-world networks expressible by trajectory networks, including transportation networks, power distribution, communications, tourism and neuronal systems. \emph{Application to image and shape analysis:} The analysis of images containing objects and shapes has remained a great challenge (e.g.~\cite{Costa_book:2001,Costa_saliency:2006}). It would be particularly interesting to consider the application of the concepts and methods reported in the current work to such problems. More specifically, trajectories can be obtained in gray-level images by considering their respective gradient fields. So, by distributing points through the image and interconnecting them while taking into account trajectories driven by the gradient fields, it is possible to obtain respective network representations incorporating a great deal of the intrinsic geometric features. Shapes represented by their contours can also be mapped into trajectory networks by considering vector fields induced by their borders (e.g. electrical or distance fields). 
The topological properties captured by the respective measurements are expected to provide valuable features for image and shape analysis and classification. Signatures obtained by considering the evolution of several measurements of the so-obtained networks as a consequence of geographical infiltration can provide additional features for visual characterization and classification. \begin{acknowledgments} Luciano da F. Costa thanks CNPq (301303/06-1) and FAPESP (05/00587-5) for sponsorship. \end{acknowledgments}
\section{Introduction} Among the various phenomena in dynamics associated with randomness, weak mixing and entropy stand out for the depth of their theory and the breadth of their applications (see for example \cite{EinWar11,Gla03}). In the setting of discrete acting groups, weak mixing makes sense in general while entropy, as classically formulated, requires the group to be amenable. One can view these two concepts in a unified way across both measurable and topological dynamics by means of the combinatorial notion of independence. In close parallel with the $\ell_1$ theorems of Rosenthal and Elton-Pajor in Banach space theory, weak mixing and positive entropy reflect two of the basic regimes in which combinatorial independence can occur across the orbit of a tuple of sets in a dynamical system. The first of these asks for independence over a subset of the group having infinite cardinality, while the other requires this subset to satisfy a positive density condition. Inspired by work in the local theory of entropy \cite{GlaYe09}, the authors studied this connection between independence, weak mixing, and entropy in \cite{KerLi07,KerLi09} as part of a program to develop a general theory of combinatorial independence in dynamics. Combinatorial independence is the basic set-theoretic expression of randomness in which we are concerned not with the size of intersections, as in the probabilistic context, but merely with their nonemptiness. A collection $\{ (A_{i,1} , \dots , A_{i,k} ) \}_{i\in J}$ of $k$-tuples of subsets of a set $X$ is {\it independent} if for every nonempty finite set $F\subseteq J$ and function $\omega : F\to \{ 1,\dots,k\}$ the intersection $\bigcap_{i\in F} A_{i,\omega(i)}$ is nonempty. If a group $G$ is acting on $X$ then given a tuple $(A_1 ,\dots,A_k )$ of subsets of $X$ we say that a set $J\subseteq G$ is an {\it independence set for $(A_1 ,\dots,A_k )$} if the collection $\{ (s^{-1} A_1 ,\dots ,s^{-1} A_k) \}_{s\in J}$ is independent. 
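A standard example illustrating these notions (added here for concreteness; it is not taken from the surrounding text): consider the full shift $G \curvearrowright \{1,2\}^G$ given by $(sx)_t = x_{ts}$, together with the cylinder sets $A_j = \{ x \in \{1,2\}^G : x_e = j \}$ for $j = 1,2$, where $e$ denotes the identity element of $G$. Then $x \in s^{-1} A_j$ if and only if $x_s = j$, so that for every finite set $F \subseteq G$ and every $\omega : F \to \{1,2\}$ one has

```latex
\[
  \bigcap_{s \in F} s^{-1} A_{\omega(s)}
  \;=\; \bigl\{\, x \in \{1,2\}^G : x_s = \omega(s) \ \text{for all } s \in F \,\bigr\}
  \;\neq\; \emptyset ,
\]
```

since the coordinates at distinct group elements can be prescribed freely. Hence every subset of $G$ is an independence set for the pair $(A_1, A_2)$.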
In the case of an action of a countable amenable group $G$ on a compact Hausdorff space $X$, the {\it independence density} $I({\overrightarrow{A}})$ of a tuple ${\overrightarrow{A}} = (A_1 ,\dots, A_k )$ of subsets of $X$ is defined as the limit of $\varphi_{{\overrightarrow{A}}}(F) /|F|$ as the nonempty finite set $F\subseteq G$ becomes more and more left invariant, where $\varphi_{{\overrightarrow{A}}}(F)$ denotes the maximum of the cardinalities of the independence sets for ${\overrightarrow{A}}$ which are contained in $F$ \cite[Prop.\ 3.23]{KerLi07}. We then say that a tuple $(x_1 , \dots , x_k ) \in X^k$ is an {\it IE-tuple} if for every product neighbourhood $U_1 \times\cdots\times U_k$ of $(x_1 , \dots , x_k )$ the tuple ${\overrightarrow{U}} = (U_1 ,\dots, U_k )$ satisfies $I({\overrightarrow{U}} ) > 0$. This condition on ${\overrightarrow{U}}$ is equivalent to the existence of an independence set $J\subseteq G$ for ${\overrightarrow{U}}$ which has positive density with respect to a given tempered F{\o}lner sequence $\{ F_i \}_{i\in {\mathbb N}}$ in the sense that $\lim_{i\to\infty} |F_i \cap J|/|F_i| > 0$. It turns out that a nondiagonal tuple is an IE-tuple if and only if it is an entropy tuple \cite[Sect.\ 3]{KerLi07}. For general acting $G$ we define IT-tuples in the same way as IE-tuples except that we ask instead for the independence set to have infinite cardinality. Versions of IE-tuples and IT-tuples can also be defined for probability-measure-preserving actions by requiring the condition on the independence set to hold whenever we remove small parts of the sets $s^{-1}U_1, \dots, s^{-1}U_k$ for each $s\in G$. The following are some of the results relating independence, weak mixing, and entropy that were established in \cite{KerLi07,KerLi09}. 
\begin{enumerate} \item A continuous action $G\curvearrowright X$ of an Abelian group on a compact Hausdorff space is (topologically) weakly mixing if and only if every tuple of points in $X$ is an IT-tuple (in which case the action is said to be {\it uniformly untame of all orders}). \item A probability-measure-preserving action $G\curvearrowright (X,\mu )$ of an arbitrary group is weakly mixing if and only if its universal topological model is uniformly untame of all orders. \item A continuous action $G\curvearrowright X$ of a discrete amenable group on a compact Hausdorff space has positive topological entropy if and only if it has a nondiagonal IE-pair (for $G={\mathbb Z}$ this was first proved by Huang and Ye in \cite{HuaYe06} using measure-theoretic techniques). Moreover, the action has uniformly positive entropy if and only if every pair of points in $X$ is an IE-pair, and uniformly positive entropy of all orders if and only if every tuple of points in $X$ is an IE-tuple. \item A probability-measure-preserving action $G\curvearrowright (X,\mu )$ of a discrete amenable group has positive measure entropy if and only if there is a nondiagonal measure IE-pair in some topological model. Moreover, the action has complete positive entropy if and only if every tuple of points is an IE-tuple in the universal topological model (for $G={\mathbb Z}$ this was proved by Glasner and Weiss in \cite{GlaWei95}). \end{enumerate} Chung and the second author applied IE-tuples in \cite{ChuLi11} as part of a new approach for studying the relation between homoclinicity and entropy in expansive algebraic actions that enabled them to break the commutativity barrier and establish some duality-type equivalences for polycyclic-by-finite acting groups. 
In this case, and more generally for actions of a countable amenable group on a compact group $X$ by automorphisms, the analysis of IE-tuples is governed by a single closed invariant normal subgroup of $X$ called the {\it IE group} \cite[Sect.\ 7]{ChuLi11}. Recent seminal work of Bowen in \cite{Bow10} has expanded the scope of the classical theory of entropy for actions of discrete amenable groups to the much broader realm of sofic acting groups. For a countable group $G$, soficity can be expressed as the existence of a sequence $\Sigma = \{ \sigma_i : G\to{\rm Sym} (d_i ) \}$ of maps from $G$ into finite permutation groups which is asymptotically multiplicative and free in the sense that \begin{enumerate} \item[(i)] $\displaystyle\lim_{i\to\infty} \frac{1}{d_i} \big| \{ k\in \{ 1,\dots ,d_i \} : \sigma_{i,st} (k) = \sigma_{i,s} \sigma_{i,t} (k) \} \big| = 1$ for all $s,t\in G$, and \item[(ii)] $\displaystyle\lim_{i\to\infty} \frac{1}{d_i} \big| \{ k\in \{ 1,\dots ,d_i \} : \sigma_{i,s} (k) \neq \sigma_{i,t} (k) \} \big| = 1$ for all distinct $s,t\in G$. \end{enumerate} Such a sequence for which $\lim_{i\to\infty} d_i = \infty$ we call a {\it sofic approximation sequence}. By measuring the asymptotic exponential growth of dynamical models which are compatible with a fixed sofic approximation sequence, Bowen defined in \cite{Bow10} a collection of invariants for probability-measure-preserving actions of a countable sofic group admitting a generating partition with finite Shannon entropy. A remarkable application of this sofic measure entropy was a far-reaching extension of the Ornstein-Weiss classification of Bernoulli actions of amenable groups. The authors developed in \cite{KerLi11} a more general operator-algebraic approach to sofic entropy that enables one to remove the generator hypothesis (see also \cite{Ker12} for a formulation in terms of finite partitions). 
This led to a sofic version of topological entropy as well as a variational principle relating it to sofic measure entropy. We used this variational principle to compute the sofic topological entropy of a principal algebraic action $G\curvearrowright \widehat{{\mathbb Z} G/{\mathbb Z} G f}$ of a countable residually finite group in the case that the sofic approximation sequence arises from finite quotients of $G$ and $f$ is invertible in the full group C$^*$-algebra $C^* (G)$. In line with previous work on algebraic actions \cite{LinSchWar90,Den06,Li12,Bow11} (see also the more recent \cite{LiTho12}), this value turns out to be equal to the logarithm of the Fuglede-Kadison determinant of $f$ in the group von Neumann algebra ${\mathcal L} G$. We also showed how topological entropy can be used to give a proof that Gottschalk's surjunctivity conjecture holds for countable sofic groups, a result originally established by Gromov in \cite{Gro99}, where the idea of soficity itself first appeared. In the present work we initiate a local analysis of independence as it connects to topological entropy within this broadened framework of actions of sofic groups. Given a continuous action $G\curvearrowright X$ of a countable sofic group and a sofic approximation sequence $\Sigma = \{ \sigma_i : G\to{\rm Sym} (d_i ) \}$ for $G$, we define the notion of a $\Sigma$-IE-tuple by externalizing the positive independence density condition in the amenable case to the finite sets $\{ 1,\dots , d_i \}$ appearing in the sequence $\Sigma$ (Definition~\ref{D-Sigma-IE}). We show in Section~\ref{S-Sigma} that $\Sigma$-IE-tuples share many of the same properties as IE-tuples for actions of discrete amenable groups. In particular, the action $G\curvearrowright X$ has positive entropy with respect to $\Sigma$ if and only if there is a nondiagonal $\Sigma$-IE-pair in $X\times X$. On the other hand, we do not know whether the product formula holds in general for $\Sigma$-IE-tuples. 
However, granted that we use a free ultrafilter ${\mathfrak F}$ over ${\mathbb N}$ to express the independence density condition in the definition of $\Sigma$-IE-tuples, we demonstrate in Theorem~\ref{T-ergodic to product} that the product formula holds under the assumption of ergodicity on the action of the commutant of $G$ inside the group of measure-preserving automorphisms of the Loeb space $\prod_{\mathfrak F} \{ 1,\dots ,d_i \}$ which arise from permutations of the sets $\{ 1,\dots ,d_i \}$. We then prove that this commutant acts ergodically when $G$ is residually finite and $\Sigma$ is built from finite quotients of $G$ (Theorem~\ref{T-rf ergodic}), and also when $G$ is amenable and $\Sigma$ is arbitrary (Theorem~\ref{T-amenable ergodic}). In the case that $G$ is nonamenable, a combination of results of Elek and Szabo \cite[Thm.\ 2]{EleSza11} and Paunescu \cite{Pau11} shows that there exist $\Sigma$ for which the ergodicity condition fails. The definition of IE-tuples for amenable $G$, as given in \cite{KerLi07}, involves an asymptotic density condition over finite subsets of $G$ which become more and more invariant. Although density in this sense loses its meaning in the nonamenable case, we might nevertheless ask what the external independence density in the definition of $\Sigma$-IE-tuples implies about the degree of independent behaviour across orbits in $X$. 
We observe in Proposition~\ref{P-sofic IE to orbit IE} that every $\Sigma$-IE-tuple (and more generally every sofic IE-tuple as defined in Definition~\ref{D-Sigma-IE}) is an {\it orbit IE-tuple}, by which we mean that for every product neighbourhood $U_1 \times\cdots\times U_k$ of the given tuple $(x_1,\dots,x_k)$ in $X^k$, the tuple $(U_1 , \dots , U_k )$ has positive independence density over $G$ in the sense that there is a $q>0$ such that every finite set $F\subseteq G$ has a subset of cardinality at least $q|F|$ which is an independence set for $(U_1 , \dots , U_k )$ (note that this definition makes sense for any acting group $G$). We show moreover in Theorem~\ref{T-amenable case: orbit IE=IE} that, for amenable $G$, $\Sigma$-IE-tuples, IE-tuples, and orbit IE-tuples are all the same thing. This puts us in the pleasant and somewhat surprising situation that IE-tuples can be identified by a density condition that does not structurally depend on amenability for its formulation, and raises the question about the relation between entropy and orbit IE-tuples for nonamenable sofic $G$. In another direction, Theorem~\ref{T-orbit IE to nontame} asserts that if a tuple of subsets of $X$ has positive independence density over $G$ then it has an infinite independence set in $G$, which implies that every orbit IE-tuple is an IT-tuple. By a theorem of Chung and the second author in \cite{ChuLi11}, an algebraic action of a countable group $G$ is expansive if and only if it is either the dual action $G\curvearrowright X_A := \widehat{({\mathbb Z} G)^n /({\mathbb Z} G)^n A}$ for some $n\in{\mathbb N}$ and matrix $A\in M_n ({\mathbb Z} G)$ which is invertible in $M_n (\ell^1 (G))$, or the restriction of such an action to a closed $G$-invariant subgroup of $X_A$. 
In the same paper it is shown that an expansive algebraic action $G\curvearrowright X$ of a polycyclic-by-finite group has completely positive entropy with respect to the Haar measure precisely when the IE group is equal to $X$, which is equivalent to every tuple of points in $X$ being an IE-tuple. It is also shown that, when $G$ is amenable, every action of the form $G\curvearrowright X_A$ with $A$ invertible in $M_n (\ell^1 (G))$ has the property that every tuple of points in $X$ is an IE-tuple (see Lemma~5.4 and Theorems~7.3 and 7.8 in \cite{ChuLi11}). We prove in Theorem~\ref{T-invertible to UPE} that if $G$ is a countable sofic group, $n\in{\mathbb N}$, and $A$ is a matrix in $M_n ({\mathbb Z} G)$ which is invertible in $M_n (\ell^1 (G))$, then the algebraic action $G\curvearrowright X_A$ has the property that every tuple of points in $X_A$ is a $\Sigma$-IE-tuple for every sofic approximation sequence $\Sigma$. We use this to answer a question of Deninger in the case that $G$ is residually finite by combining it with an argument from \cite{ChuLi11} and the entropy computation for principal algebraic actions from \cite{KerLi11} mentioned above to deduce that if $f$ is an element of ${\mathbb Z} G$ which is invertible in $\ell^1 (G)$ and has no left inverse in ${\mathbb Z} G$ then the Fuglede-Kadison determinant of $f$ satisfies $\det_{{\mathcal L} G} f > 1$ (Corollary~\ref{C-answer to Deninger}). Deninger asked whether this holds for all countable groups \cite[Question 26]{Den09}, and affirmative answers were given in \cite{DenSch07} for residually finite amenable $G$ and more generally in \cite{ChuLi11} for amenable $G$. 
For a continuous action $G\curvearrowright X$ of a countably infinite group on a compact metrizable space with compatible metric $\rho$, we say that a pair $(x, y)\in X\times X$ is a {\it Li-Yorke pair} if \[ \limsup_{G \ni s\to \infty}\rho(sx, sy)>0 \hspace*{5mm}\text{and} \hspace*{5mm} \liminf_{G \ni s\to \infty}\rho(sx, sy)=0 , \] where the limit supremum and limit infimum mean the limits of $\sup_{s\in G\setminus F} \rho(sx, sy)$ and\linebreak $\inf_{s\in G\setminus F} \rho(sx, sy)$, respectively, over the net of finite subsets $F$ of $G$. Note that the definition of Li-Yorke pair does not depend on the choice of the metric $\rho$. The action $G\curvearrowright X$ is said to be {\it Li-Yorke chaotic} if there is an uncountable subset $Z$ of $X$ such that every nondiagonal pair $(x, y)$ in $Z\times Z$ is a Li-Yorke pair. The notion of Li-Yorke chaos stems from \cite{LiYor75}. In the case of a continuous map $T:X\to X$, a theorem of Blanchard, Glasner, Kolyada, and Maass in \cite{BlaGlaKolMaa02} states that positive entropy implies Li-Yorke chaos. In \cite{KerLi07} the authors strengthened this by showing that for every $k\geq 2$ and product neighbourhood $U_1\times\cdots\times U_k$ of a nondiagonal IE-tuple $(x_1,\dots,x_k)\in X^k$ there are Cantor sets $Z_i \subseteq U_i$ for $i=1,\dots,k$ such that \begin{enumerate} \item every nonempty tuple of points in $\bigcup_i Z_i$ is an IE-tuple, and \item for all $m\in{\mathbb N}$, distinct $y_1, \dots ,y_m\in \bigcup_i Z_i$, and $y_1' ,\dots , y_m'\in \bigcup_i Z_i$ one has \[ \liminf_{n\to\infty} \max_{1\leq i\leq m} \rho (T^n y_i,y_i' ) = 0 . \] \end{enumerate} In Theorem~\ref{T-positive entropy to chaos} we show that a similar result holds when $G$ is sofic and IE-tuples are replaced by $\Sigma$-IE-tuples as defined with respect to a free ultrafilter ${\mathfrak F}$ on ${\mathbb N}$, where $\Sigma$ is any sofic approximation sequence for $G$. 
Using ${\mathfrak F}$ in the definition of entropy, we deduce that if the action has positive entropy for some $\Sigma$ then it is Li-Yorke chaotic. As a corollary, if the action $G\curvearrowright X$ is distal then $h_\Sigma (X,G)=0$ or $-\infty$. In particular, when $G$ is amenable every distal action $G\curvearrowright X$ has zero entropy, which is well known in the case $G={\mathbb Z}$ \cite{Parry}. The following diagram illustrates how some of the main results of the paper relate various properties of actions of a countable discrete group $G$ on a compact metrizable space $X$, which we assume to have more than one point. In the left column we assume that $G$ is sofic and that $\Sigma$ is a fixed but arbitrary sofic approximation sequence. The unlabeled implications are trivial. By pair we mean an element of $X\times X$. See \cite{KerLi07} for terminology related to tameness and nullness. \vspace*{1.5mm} \[ \footnotesize \xymatrix{ \txt{uniformly positive\\ entropy w.r.t.\ $\Sigma$} \ar@{<=>}[d]^-*+{\text{\tiny\ref{R-entropy tuple}}} &&& \\ \txt{every pair\\ is a $\Sigma$-IE pair} \ar@=[d]^-*+{\text{\tiny\ref{P-basic}(3)}}\ar@=[r]^-*+{\text{\tiny\ref{P-sofic IE to orbit IE}}} & \txt{every pair\\ is an\\ orbit IE-pair}\ar@=[r]^-*+{\text{\tiny\ref{C-orbit IE to IT}}} & \txt{uniformly untame\\ (every pair\\ is an IT-pair)} \ar@=[d]^-*+{\text{\tiny 6.4(2) in \cite{KerLi07}}} \ar@=[r] & \txt{uniformly nonnull\\ (every pair\\ is an IN-pair)} \ar@=[d]^-*+{\text{\tiny 5.4(2) in \cite{KerLi07}}} \\ \txt{positive entropy\\ w.r.t.\ $\Sigma$} \ar@{<=>}[d]^-*+{\text{\tiny\ref{P-basic}(3)}}&& \txt{untame} \ar@{<=>}[d]^-*+{\text{\tiny 6.4(2) in \cite{KerLi07}}} & \txt{nonnull} \ar@{<=>}[d]^-*+{\text{\tiny 5.4(2) in \cite{KerLi07}}} &\\ \txt{$\exists$ nondiag.\\ $\Sigma$-IE-pair} \ar@=[d]^-*+{\text{\tiny\ref{T-positive entropy to chaos}}} \ar@=[r]^-*+{\text{\tiny\ref{P-sofic IE to orbit IE}}}& \txt{$\exists$ nondiag.\\ orbit IE-pair}\ar@=[r]^-*+{\text{\tiny\ref{C-orbit IE to 
IT}}} & \txt{$\exists$ nondiag.\\ IT-pair} \ar@=[r] & \txt{$\exists$ nondiag.\\ IN-pair} \\ \txt{Li-Yorke\\ chaotic} &&& } \] \vspace*{3.5mm} The organization of the paper is as follows. In Section~\ref{S-entropy} we set up some basic notation and review sofic topological entropy. In Section~\ref{S-orbit} we introduce orbit IE-tuples and prove a product formula for them. Section~\ref{S-Sigma} introduces $\Sigma$-IE-tuples and includes our results relating them to orbit IE-tuples. In Section~\ref{S-product} we focus on the product formula for $\Sigma$-IE-tuples and the question of ergodicity for the action of $G'$ on the Loeb space. Section~\ref{S-algebraic} contains the material on algebraic actions. In Section~\ref{S-untame} we prove that positive independence density for a tuple of subsets implies the existence of an infinite independence set, showing that orbit IE-tuples are IT-tuples. Finally in Section~\ref{S-chaos} we establish the theorem connecting independence and entropy to Li-Yorke chaos in the sofic framework. \medskip \noindent{\it Acknowledgements.} The first author was partially supported by NSF grants DMS-0900938 and DMS-1162309. The second author was partially supported by NSF grant DMS-1001625. We are grateful to Wen Huang, Liviu Paunescu and Xiangdong Ye for helpful comments. \section{Sofic topological entropy}\label{S-entropy} We review here the definition of sofic topological entropy \cite{KerLi11,KerLi10} and in the process introduce some of the basic notation and terminology appearing throughout the paper. Our approach will bypass the operator algebra technology that appears in \cite{KerLi11,KerLi10}. Let $Y$ be a set equipped with a pseudometric $\rho$ and let $\varepsilon\geq 0$. A set $A\subseteq Y$ is said to be {\it $(\rho ,\varepsilon )$-separated} if $\rho (x,y) \geq \varepsilon$ for all distinct $x,y\in A$. Write $N_\varepsilon (Y, \rho )$ for the maximum cardinality of a $(\rho ,\varepsilon )$-separated subset of $Y$. 
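For readers who wish to experiment, the quantity $N_\varepsilon (Y,\rho )$ can be computed by exhaustive search when $Y$ is a small finite set. The following sketch is purely illustrative and not part of the paper; the point set and metric are arbitrary choices made only to demonstrate the definition. It enumerates subsets in decreasing order of size and returns the first size admitting a $(\rho ,\varepsilon )$-separated subset:

```python
from itertools import combinations

def max_separated(points, rho, eps):
    """Brute-force N_eps(Y, rho): the maximum cardinality of a
    (rho, eps)-separated subset of the finite set `points`.
    Feasible only for small inputs (it scans 2^|Y| subsets)."""
    for r in range(len(points), 0, -1):  # try larger subsets first
        for sub in combinations(points, r):
            # (rho, eps)-separated: all distinct pairs at distance >= eps
            if all(rho(x, y) >= eps for x, y in combinations(sub, 2)):
                return r
    return 0

# Four points on the real line with the usual metric (illustrative data)
Y = [0.0, 0.3, 0.6, 1.0]
d = lambda x, y: abs(x - y)
print(max_separated(Y, d, 0.5))  # a largest 0.5-separated subset has 2 points
print(max_separated(Y, d, 0.1))  # all four points are pairwise 0.1-separated
```

Any singleton is vacuously separated, so the function returns at least $1$ for a nonempty input.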
Let $G\curvearrowright X$ be a continuous action of a countable sofic group on a compact metrizable space. Let $\Sigma = \{ \sigma_i : G\to{\rm Sym} (d_i) \}$ be a sofic approximation sequence for $G$, meaning that \begin{enumerate} \item[(i)] $\displaystyle\lim_{i\to\infty} \frac{1}{d_i} \big| \{ k\in \{ 1,\dots ,d_i \} : \sigma_{i,st} (k) = \sigma_{i,s} \sigma_{i,t} (k) \} \big| = 1$ for all $s,t\in G$, \item[(ii)] $\displaystyle\lim_{i\to\infty} \frac{1}{d_i} \big| \{ k\in \{ 1,\dots ,d_i \} : \sigma_{i,s} (k) \neq \sigma_{i,t} (k) \} \big| = 1$ for all distinct $s,t\in G$, \end{enumerate} and $d_i \to\infty$ as $i\to\infty$. Depending on the situation, for $a\in\{1,\dots ,d_i \}$ we may write $\sigma_{i,s} (a)$, $\sigma_i (s)a$, or $sa$ to denote the image of $a$ under the evaluation of $\sigma_i$ at $s$. Let $\rho$ be a continuous pseudometric on $X$. For a given $d\in{\mathbb N}$, we define on the set of all maps from $\{ 1,\dots ,d\}$ to $X$ the pseudometrics \begin{align*} \rho_2 (\varphi , \psi ) &= \bigg( \frac{1}{d} \sum_{a=1}^d (\rho (\varphi (a),\psi (a)))^2 \bigg)^{1/2} , \\ \rho_\infty (\varphi ,\psi ) &= \max_{a=1,\dots ,d} \rho (\varphi (a),\psi (a)) . \end{align*} \begin{definition}\label{D-map top} Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. Let $\sigma$ be a map from $G$ to ${\rm Sym} (d)$ for some $d\in{\mathbb N}$. Define ${\rm Map} (\rho ,F,\delta ,\sigma )$ to be the set of all maps $\varphi : \{ 1,\dots ,d\} \to X$ such that $\rho_2 (\varphi\sigma_s , \alpha_s \varphi ) < \delta$ for all $s\in F$, where $\alpha_s$ is the transformation $x\mapsto sx$ of $X$. \end{definition} \begin{definition} Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. 
For $\varepsilon > 0$ define \begin{align*} h_{\Sigma ,2}^\varepsilon (\rho ,F, \delta ) &= \limsup_{i\to\infty} \frac{1}{d_i} \log N_\varepsilon ({\rm Map} (\rho ,F,\delta ,\sigma_i ),\rho_2 ) ,\\ h_{\Sigma ,2}^\varepsilon (\rho ,F) &= \inf_{\delta > 0} h_{\Sigma ,2}^\varepsilon (\rho ,F,\delta ) ,\\ h_{\Sigma ,2}^\varepsilon (\rho ) &= \inf_{F} h_{\Sigma ,2}^\varepsilon (\rho ,F) ,\\ h_{\Sigma ,2} (\rho ) &= \sup_{\varepsilon > 0} h_{\Sigma ,2}^\varepsilon (\rho ) , \end{align*} where $F$ in the third line ranges over the nonempty finite subsets of $G$. In the case that ${\rm Map} (\rho ,F,\delta ,\sigma_i )$ is empty for all sufficiently large $i$, we set $h_{\Sigma ,2}^\varepsilon (\rho ,F, \delta ) = -\infty$. We similarly define $h_{\Sigma ,\infty}^\varepsilon (\rho ,F, \delta )$, $h_{\Sigma ,\infty}^\varepsilon (\rho ,F)$, $h_{\Sigma ,\infty}^\varepsilon (\rho )$ and $h_{\Sigma ,\infty} (\rho )$ using $N_\varepsilon(\cdot, \rho_\infty)$ in place of $N_\varepsilon(\cdot, \rho_2)$. \end{definition} Instead of the limit supremum above we could have taken a limit over a fixed free ultrafilter on ${\mathbb N}$, whose utility is apparent for example if we wish to have a product formula (see Section~\ref{S-product}). We will also use this variant in Section~\ref{S-chaos}. The pseudometric $\rho$ is said to be {\it dynamically generating} if for every pair of distinct points $x,y\in X$ there is an $s\in G$ such that $\rho(sx, sy)>0$. \begin{lemma} \label{L-change pseudometric} Suppose that $\rho$ and $\rho'$ are continuous pseudometrics on $X$ and that $\rho'$ is dynamically generating. Let $F$ be a nonempty finite subset of $G$ and $\delta>0$. Then there exist a nonempty finite subset $F'$ of $G$ and $\delta'>0$ such that for any $d\in {\mathbb N}$ and sufficiently good sofic approximation $\sigma: G\to {\rm Sym}(d)$ one has ${\rm Map}(\rho', F', \delta', \sigma)\subseteq {\rm Map}(\rho, F, \delta, \sigma)$. 
\end{lemma} \begin{proof} List the elements of $G$ as $s_1, s_2, \dots$. Since $\rho'$ is dynamically generating, we have the compatible metric $\rho''$ on $X$ defined by \[ \rho''(x, y)=\sum_{k=1}^{\infty}\frac{1}{2^k}\rho'(s_kx, s_ky). \] It follows that there are a nonempty finite subset $F''$ of $G$ and a $\delta''>0$ such that, for all $x,y\in X$, if $\max_{s\in F''}\rho'(sx, sy)<\delta''$ then $\rho(x, y)<\delta/2$. Set $F'=F''\cup (F''F)$. Let $\delta'>0$ and let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$ for some $d\in {\mathbb N}$. Let $\varphi\in {\rm Map}(\rho', F', \delta', \sigma)$. Then \begin{align*} |\{a\in \{1, \dots, d\}: \rho'(s_1s_2\varphi(a), &\varphi((s_1s_2)a))<\sqrt{\delta'} \text{ and}\\ \rho'(s_1 \varphi &(s_2a), \varphi(s_1(s_2a)))<\sqrt{\delta'} \text{ for all } s_1\in F'', s_2\in F\}|\\ &\ge (1-2|F''| |F|\delta')d. \end{align*} Suppose that $2\sqrt{\delta'}<\delta''$ and $\sigma$ is a good enough sofic approximation for $G$ so that \[ |\{a\in \{1, \dots, d\}: (s_1s_2)a=s_1(s_2a) \text{ for all } s_1\in F'', s_2\in F\}|\ge (1-\delta')d. \] Then \begin{align*} \lefteqn{|\{a\in \{1, \dots, d\}: \rho(s\varphi(a), \varphi(sa))<\delta/2 \text{ for all } s\in F\}|}\hspace*{15mm} \\ \hspace*{10mm} &\ge |\{a\in \{1, \dots, d\}: \rho'(s_1s_2\varphi(a), s_1\varphi(s_2a))<2\sqrt{\delta'} \text{ for all } s_1\in F'', s_2\in F\}|\\ &\ge (1-(1+2|F''| |F|)\delta')d. \end{align*} It follows that when $\delta'$ is small enough independently of $d$ and $\sigma$, one has $\varphi\in {\rm Map}(\rho, F, \delta, \sigma)$. \end{proof} The following proposition is contained in Proposition~2.4 of \cite{KerLi10}, whose statement and proof use the operator-algebraic formulation of sofic topological entropy from \cite{KerLi11}. \begin{proposition}\label{P-pseudometric} Let $\rho$ and $\rho'$ be continuous pseudometrics on $X$ which are dynamically generating. 
Then \[ h_{\Sigma ,2} (\rho ) = h_{\Sigma ,2} (\rho') = h_{\Sigma ,\infty} (\rho ) = h_{\Sigma ,\infty} (\rho') . \] \end{proposition} \begin{proof} Since the pseudometric $\rho_\infty$ dominates the pseudometric $\rho_2$, we have $h_{\Sigma ,2} (\rho ) \leq h_{\Sigma ,\infty} (\rho )$. Next we argue that $h_{\Sigma ,\infty} (\rho ) \leq h_{\Sigma ,2} (\rho )$. Let $F$ be a finite subset of $G$, $\delta > 0$, and $\sigma$ a map from $G$ to ${\rm Sym} (d)$ for some $d\in{\mathbb N}$. Let $1/2>\varepsilon > 0$. Let $\eta > 0$ be the minimum of $\varepsilon^2$ and the reciprocal of the minimum cardinality of an $(\varepsilon /2)$-spanning subset of $X$ with respect to $\rho$. Given a $\varphi\in{\rm Map} (\rho,F,\delta,\sigma)$, every element in the open $(\rho_2 ,\eta )$-ball in ${\rm Map} (\rho,F,\delta,\sigma)$ around $\varphi$ agrees with $\varphi$ to within $\sqrt{\eta}$, and hence to within $\varepsilon$, on a subset of $\{1,\dots,d\}$ of cardinality at least $(1-\eta )d$. Thus the maximum cardinality of a $(\rho_\infty ,\varepsilon )$-separated subset of the open $(\rho_2 ,\eta)$-ball around $\varphi$ is at most $\sum_{j=0}^{\lfloor\eta d\rfloor} \binom{d}{j} \eta^{-j}$, which by Stirling's approximation is bounded above, for all $d$ sufficiently large, by $e^{\beta d} \eta^{-\eta d}$ for some $\beta > 0$ not depending on $d$ with $\beta\to 0$ as $\varepsilon\to 0$. Hence \[ N_\varepsilon ({\rm Map} (\rho,F,\delta,\sigma),\rho_\infty ) \leq e^{\beta d} \eta^{-\eta d} N_\eta ({\rm Map} (\rho,F,\delta,\sigma),\rho_2 ) . \] It follows that \[ h_{\Sigma,\infty}^\varepsilon (\rho) \leq h_{\Sigma,2}^\eta (\rho) + \beta - \eta \log \eta , \] and since $\beta - \eta \log \eta \to 0$ as $\varepsilon\to 0$ we conclude that $h_{\Sigma ,\infty} (\rho ) \leq h_{\Sigma ,2} (\rho )$. Finally we show that $h_{\Sigma ,2} (\rho ) \leq h_{\Sigma ,2} (\rho' )$, which will establish the proposition as we can interchange the roles of $\rho$ and $\rho'$. Let $\varepsilon > 0$. 
Since $\rho$ is dynamically generating, we can find a finite set $K\subseteq G$ and an $\varepsilon' > 0$ such that, for all $x,y\in X$, if $\rho (sx,sy) < \sqrt{3\varepsilon'}$ for all $s\in K$ then $\rho' (x,y) < \varepsilon /\sqrt{2}$. By shrinking $\varepsilon'$ if necessary we may assume that $3\varepsilon' |K| < \varepsilon^2 /2$. Take a finite set $F\subseteq G$ containing $K$ and a $\delta > 0$ with $\delta\leq\varepsilon'$ such that $h_{\Sigma,2}^{\varepsilon'} (\rho ,F,\delta ) \leq h_{\Sigma,2}^{\varepsilon'} (\rho ) + \varepsilon$. Since $\rho'$ is dynamically generating, by Lemma~\ref{L-change pseudometric} there are a nonempty finite set $F' \subseteq G$ and a $\delta'>0$ such that for any $d\in {\mathbb N}$ and sufficiently good sofic approximation $\sigma: G\to {\rm Sym}(d)$ we have ${\rm Map}(\rho', F', \delta', \sigma)\subseteq {\rm Map}(\rho, F, \delta, \sigma)$. Given $\varphi ,\psi\in{\rm Map} (\rho' ,F',\delta' ,\sigma )$ such that $\rho_2 (\varphi , \psi ) < \varepsilon'$, for each $s\in K$ we have, writing $\alpha_s$ for the transformation $x\mapsto sx$ of $X$, \begin{align*} \rho_2 (\alpha_s \varphi , \alpha_s \psi ) \leq \rho_2 (\alpha_s \varphi , \varphi\sigma_s ) + \rho_2 (\varphi\sigma_s , \psi\sigma_s ) + \rho_2 (\psi\sigma_s , \alpha_s \psi ) < \delta + \varepsilon' + \delta \leq 3\varepsilon' . \end{align*} This implies that there is a set $W\subseteq \{1,\dots,d \}$ of cardinality at least $(1-3\varepsilon' |K|)d$ such that for all $a\in W$ we have $\rho (s\varphi (a),s\psi (a)) < \sqrt{3\varepsilon'}$ for every $s\in K$ and hence $\rho' (\varphi (a),\psi (a)) < \varepsilon /\sqrt{2}$. As a consequence, assuming (as we may by normalizing) that $X$ has $\rho'$-diameter at most one, \[ \rho'_2 (\varphi , \psi ) \leq \sqrt{(\varepsilon/\sqrt{2})^2 + 3\varepsilon' |K|} < \varepsilon . 
\] It follows that \[ N_\varepsilon ({\rm Map} (\rho' ,F',\delta' ,\sigma ),\rho'_2 ) \leq N_{\varepsilon'} ({\rm Map} (\rho ,F,\delta ,\sigma ),\rho_2 ) \] and hence $h_{\Sigma,2}^\varepsilon (\rho' ,F',\delta' ) \leq h_{\Sigma,2}^{\varepsilon'} (\rho ,F,\delta )$, so that \begin{align*} h_{\Sigma ,2}^\varepsilon (\rho') \leq h_{\Sigma,2}^\varepsilon (\rho' ,F',\delta' ) \leq h_{\Sigma,2}^{\varepsilon'} (\rho ,F,\delta ) \leq h_{\Sigma,2}^{\varepsilon'} (\rho) + \varepsilon \leq h_{\Sigma,2} (\rho) + \varepsilon . \end{align*} Since $\varepsilon$ was an arbitrary positive number we conclude that $h_{\Sigma ,2} (\rho' ) \leq h_{\Sigma,2} (\rho)$. \end{proof} \begin{definition}\label{D-topological entropy} The {\it topological entropy} $h_\Sigma (X,G)$ of the action $G\curvearrowright X$ with respect to $\Sigma$ is defined to be the common value in Proposition~\ref{P-pseudometric} over all dynamically generating continuous pseudometrics on $X$. \end{definition} Note that the approximate multiplicativity of a sofic approximation was only needed in the proof of Lemma~\ref{L-change pseudometric} to handle the situation in which one of $\rho$ and $\rho'$ is not an actual metric. Indeed we could have defined topological entropy more easily by using the obvious fact that $h_{\Sigma ,\infty} (\rho )$ takes a common value over all compatible metrics on $X$, with Proposition~\ref{P-pseudometric} then being regarded as a Kolmogorov-Sinai theorem. As with the $(n,\varepsilon)$-separated set definition of topological entropy for single transformations, it is by considering pseudometrics that we can compute the entropy for a nontrivial example like the shift action $G\curvearrowright \{ 1, \dots ,k \}^G$. In this case one can see that the value is $\log k$ independently of $\Sigma$ by considering the pseudometric $\rho$ on $\{ 1, \dots ,k \}^G$ given by $\rho (x,y) = 0$ or $1$ depending on whether or not the coordinates of $x$ and $y$ at $e$ agree. 
Indeed $\log k$ is easily seen to be an upper bound, and given a nonempty finite set $F\subseteq G$, a $\delta > 0$, and a good enough sofic approximation $\sigma : G\to{\rm Sym} (d)$ we can construct a $(\rho_\infty ,1/2)$-separated subset of ${\rm Map} (\rho ,F,\delta , \sigma )$ of cardinality $k^d$ by associating to every $\omega\in \{ 1,\dots ,k \}^d$ some $\varphi_\omega \in{\rm Map} (\rho ,F,\delta , \sigma )$ defined by $\varphi_\omega (a)(s^{-1}) = \omega (\sigma_s (a))$ for all $a\in \{ 1,\dots , d \}$ and $s\in G$. For actions of amenable $G$, the entropy $h_\Sigma (X,G)$ coincides with the classical topological entropy for every $\Sigma$ \cite{KerLi11}. Such an action always has a largest zero-entropy factor (i.e., a zero-entropy factor such that every zero-entropy factor factors through it), called the {\it topological Pinsker factor} \cite{BlaLac93}. More generally for sofic $G$, with respect to a fixed $\Sigma$ there exists a largest factor of the action $G\curvearrowright X$ which has entropy either $0$ or $-\infty$ (note that the value $-\infty$ does not occur for actions of amenable $G$). This follows from the fact that the property of having entropy $0$ or $-\infty$ is preserved under taking countable products and restricting to closed invariant sets. (The property of having entropy $-\infty$ is also preserved under taking countable products, though we do not know what happens to the property of having entropy $0$.) We say that the action has {\it completely positive entropy with respect to $\Sigma$} if each of its nontrivial factors has positive entropy with respect to $\Sigma$. Unlike in the amenable case, the largest factor with entropy $0$ or $-\infty$ might have factors with positive entropy. 
In fact for every nonamenable $G$ there exist zero-entropy actions of $G$ which have factors with positive entropy: Take an action $G\curvearrowright X$ with $h_\Sigma (X,G) > 0$ and an action $G\curvearrowright Y$ which has no $G$-invariant Borel probability measure, and consider the action of $G$ on $K:=(X\times Y) \coprod \{ z \}$ where $z$ is a point on which $G$ acts trivially. Then $h_\Sigma (K,G) = 0$ but the quotient action on $X\coprod \{ z \}$ satisfies $h_\Sigma (X\coprod \{ z \} ,G) > 0$. \section{Orbit IE-tuples}\label{S-orbit} Let $G\curvearrowright X$ be a continuous action of a discrete group on a compact Hausdorff space. Recall from the introduction that if ${\overrightarrow{A}} = (A_1 , \dots ,A_k )$ is a tuple of subsets of $X$ then we say that a subset $F$ of $G$ is an {\it independence set for ${\overrightarrow{A}}$} if for every finite subset $J$ of $F$ and every function $\omega : J\to \{ 1,\dots ,k \}$ we have $\bigcap_{s\in J} s^{-1} A_{\omega (s)} \neq\emptyset$. \begin{definition}\label{D-independence density} Let ${\overrightarrow{A}} = (A_1 , \dots ,A_k )$ be a tuple of subsets of $X$. We define the {\it independence density of ${\overrightarrow{A}}$ (over $G$)} to be the largest $q\ge 0$ such that every finite set $F\subseteq G$ has a subset of cardinality at least $q|F|$ which is an independence set for ${\overrightarrow{A}}$. \end{definition} \begin{definition}\label{D-orbit IE} We say that a tuple ${\overrightarrow{x}} = (x_1 ,\dots ,x_k )\in X^k$ is an {\it orbit IE-tuple} (or {\it orbit IE-pair} in the case $k=2$) if for every product neighbourhood $U_1 \times\dots\times U_k$ of ${\overrightarrow{x}}$ the tuple $(U_1 , \dots , U_k )$ has positive independence density. Write ${\rm IE}_k (X,G)$ for the set of all orbit IE-tuples of length $k$. \end{definition} As Theorem~\ref{T-amenable case: orbit IE=IE} below demonstrates, the notation ${\rm IE}_k (X,G)$ is consistent with its use in \cite{KerLi07} when $G$ is amenable. 
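The definition of independence set can be tested by brute force in a small finite model. The sketch below is a hypothetical illustration (not a construction used in the paper): we let ${\mathbb Z}/4{\mathbb Z}$ act on $\{0,1\}^{{\mathbb Z}/4{\mathbb Z}}$ by cyclic shift and check candidate independence sets for a tuple of cylinder sets. For a finite candidate $F$ it suffices to test $J = F$, since the intersection over a smaller $J$ contains the intersection over $F$:

```python
from itertools import product

N = 4  # the acting group is Z/NZ

def shift(x, s):
    # cyclic shift action: (s.x)_i = x_{(i+s) mod N}
    return tuple(x[(i + s) % N] for i in range(N))

def is_independence_set(F, sets):
    """Check that F (a list of elements of Z/NZ) is an independence set
    for the tuple `sets` of subsets of {0,1}^(Z/NZ): every assignment
    omega: F -> {0,...,k-1} must be realized by some point x with
    s.x in sets[omega(s)] for all s in F.  Testing J = F suffices."""
    points = list(product([0, 1], repeat=N))
    return all(
        any(all(shift(x, s) in sets[w] for s, w in zip(F, omega))
            for x in points)
        for omega in product(range(len(sets)), repeat=len(F)))

pts = list(product([0, 1], repeat=N))
A1 = {x for x in pts if x[0] == 0}                 # cylinder: coordinate 0 is 0
A2 = {x for x in pts if x[0] == 1}                 # cylinder: coordinate 0 is 1
B1 = {x for x in pts if x[0] == 0 and x[1] == 0}   # a smaller cylinder

print(is_independence_set([0, 1, 2], [A1, A2]))  # True: full independence
print(is_independence_set([0, 1], [B1, A2]))     # False: the constraints clash
```

In the failing example, the assignment sending $0$ to $B_1$ and $1$ to $A_2$ forces the coordinate at $1$ to be both $0$ and $1$, so the corresponding intersection is empty.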
The equality in the next theorem statement is understood with respect to the identification of $((x_1,\dots,x_k),(y_1,\dots,y_k))\in X^k \times Y^k$ and $((x_1,y_1),\dots,(x_k,y_k))\in (X\times Y)^k$. \begin{theorem} \label{T-product for orbit IE} Let $G\curvearrowright X$ and $G\curvearrowright Y$ be continuous actions on compact Hausdorff spaces. Let $k\in {\mathbb N}$. Then \[ {\rm IE}_k (X\times Y,G)= {\rm IE}_k (X,G) \times {\rm IE}_k (Y,G). \] \end{theorem} \begin{proof} The inclusion ${\rm IE}_k (X\times Y,G)\subseteq {\rm IE}_k (X,G) \times {\rm IE}_k (Y,G)$ is trivial. To prove the other direction, it suffices to show that if ${\overrightarrow{A}} = (A_1 , \dots ,A_k )$ is a tuple of subsets of $X$ with independence density $q$ and ${\boldsymbol{B}}=(B_1, \dots, B_k)$ is a tuple of subsets of $Y$ with independence density $r$, then ${\overrightarrow{A}}\times {\boldsymbol{B}}:=(A_1\times B_1, \dots, A_k\times B_k)$ has independence density at least $qr$. Let $F$ be a nonempty finite subset of $G$. Then we can find a $J\subseteq F$ with $|J|\ge q|F|$ which is an independence set for ${\overrightarrow{A}}$. We can then find a $J_1\subseteq J$ with $|J_1|\ge r|J|$ which is an independence set for ${\boldsymbol{B}}$. Then $J_1$ is an independence set for ${\overrightarrow{A}}\times {\boldsymbol{B}}$ and $|J_1|\ge qr|F|$. \end{proof} In \cite{KerLi07} we defined a tuple ${\overrightarrow{x}} = (x_1 , \dots , x_k )\in X^k$ to be an {\it IN-tuple} if for every product neighbourhood $U_1 \times\dots\times U_k$ of ${\overrightarrow{x}}$ the tuple $(U_1 ,\dots, U_k )$ has arbitrarily large finite independence sets. The following fact is obvious. \begin{proposition}\label{P-orbit IE to IN} Suppose that $G$ is infinite. Then every orbit IE-tuple is an IN-tuple. \end{proposition} We will strengthen this assertion in Theorem~\ref{T-orbit IE to nontame}. 
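The key step in the proof of the product formula — a set which is simultaneously an independence set for $\overrightarrow{A}$ and for ${\boldsymbol{B}}$ is an independence set for $\overrightarrow{A}\times {\boldsymbol{B}}$ under the diagonal action — can be checked directly in a toy cyclic-shift model. The sketch below is again purely illustrative; the sets and the group ${\mathbb Z}/3{\mathbb Z}$ are arbitrary choices, not taken from the paper:

```python
from itertools import product

N = 3  # acting group Z/NZ, acting coordinatewise and diagonally on pairs

def shift(x, s):
    # cyclic shift action: (s.x)_i = x_{(i+s) mod N}
    return tuple(x[(i + s) % N] for i in range(N))

def independent(F, sets, points, act):
    # F is an independence set for `sets` iff every omega: F -> indices
    # is realized by some point (for finite F, testing J = F suffices)
    return all(
        any(all(act(p, s) in sets[w] for s, w in zip(F, omega))
            for p in points)
        for omega in product(range(len(sets)), repeat=len(F)))

X = list(product([0, 1], repeat=N))
A = [{x for x in X if x[0] == 0}, {x for x in X if x[0] == 1}]
B = [{y for y in X if y[0] == y[1]}, {y for y in X if y[0] != y[1]}]

pairs = list(product(X, X))
AB = [set(product(A[j], B[j])) for j in range(2)]        # A_j x B_j
diag = lambda p, s: (shift(p[0], s), shift(p[1], s))     # diagonal action

print(independent([0, 1], A, X, shift), independent([0, 1], B, X, shift))
print(independent([0, 1], AB, pairs, diag))  # a common independence set passes to the product
```

Conversely, $\{0,1,2\}$ fails to be an independence set for ${\boldsymbol{B}}$ in this model (one cannot have three cyclically consecutive coordinates pairwise distinct over $\{0,1\}$), and accordingly it fails for $\overrightarrow{A}\times {\boldsymbol{B}}$ as well, matching the trivial inclusion in the theorem.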
\section{$\Sigma$-IE-tuples}\label{S-Sigma} Unless otherwise stated, throughout this section $G$ is a countable sofic discrete group, subject to further hypotheses as appropriate. We suppose $G$ to be acting continuously on a compact metrizable space $X$, and $\rho$ denotes a dynamically generating continuous pseudometric on $X$ unless otherwise stated. In order to be able to define the notion of a sofic IE-tuple as appears in Proposition~\ref{P-sofic IE to orbit IE}, we will set up our definitions for a general sofic approximation net $\Sigma = \{ \sigma_i : G\to{\rm Sym} (d_i ) \}$, which is formally defined in the same way as the sequential version. \begin{definition} Let ${\overrightarrow{A}} = (A_1 , \dots ,A_k )$ be a tuple of subsets of $X$. Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. Let $\sigma$ be a map from $G$ to ${\rm Sym} (d)$ for some $d\in{\mathbb N}$. We say that a set ${\mathcal J}\subseteq \{ 1,\dots ,d\}$ is a {\it $(\rho ,F,\delta ,\sigma )$-independence set for ${\overrightarrow{A}}$} if for every function $\omega : {\mathcal J}\to \{ 1,\dots ,k \}$ there exists a $\varphi \in{\rm Map} (\rho ,F,\delta ,\sigma )$ such that $\varphi (a) \in A_{\omega (a)}$ for every $a\in {\mathcal J}$. \end{definition} \begin{definition} \label{D-positive independence density} Let ${\overrightarrow{A}} = (A_1 , \dots ,A_k )$ be a tuple of subsets of $X$. Let $\Sigma = \{ \sigma_i : G\to{\rm Sym} (d_i ) \}$ be a sofic approximation net for $G$. We say that ${\overrightarrow{A}}$ has {\it positive upper independence density over $\Sigma$} if there exists a $q > 0$ such that for every nonempty finite set $F\subseteq G$ and $\delta > 0$ there is a cofinal set of $i$ for which ${\overrightarrow{A}}$ has a $(\rho ,F,\delta ,\sigma_i )$-independence set of cardinality at least $qd_i$. By Lemma~\ref{L-change pseudometric} this definition does not depend on the choice of $\rho$. 
\end{definition} For the purposes of Sections~\ref{S-product} and \ref{S-chaos}, we will consider a variation of the above definition in which cofinality is replaced by the stronger requirement of membership in a fixed free ultrafilter ${\mathfrak F}$ on ${\mathbb N}$. The resulting notion of positive upper independence density over $\Sigma$ with respect to ${\mathfrak F}$ will then be used when interpreting the following definition of $\Sigma$-IE-tuples. By the {\it universal sofic approximation net} for $G$ we mean the net $(\sigma,F)\mapsto \sigma$ indexed by the directed set of pairs $(\sigma,F)$ where $\sigma$ is a map from $G$ to ${\rm Sym}(d)$ for some $d\in{\mathbb N}$ and $F$ is a finite subset of $G$, and $(\sigma' :G\to{\rm Sym} (d'),F' )\succ (\sigma:G\to{\rm Sym} (d),F)$ means that $d' \geq d$ and $|V(\sigma' ,F)|/d' \geq |V(\sigma ,F)|/d$, where $V(\omega,F)$ for a map $\omega : G\to{\rm Sym} (c)$ denotes the set of all $a\in \{1,\dots ,c\}$ such that $\omega (s)\omega (t)a = \omega (st)a$ for all $s,t\in F$ and $\omega (s)a \neq \omega (t)a$ for all distinct $s,t\in F$. \begin{definition}\label{D-Sigma-IE} Let $\Sigma = \{ \sigma_i : G\to{\rm Sym} (d_i ) \}$ be a sofic approximation net for $G$. We say that a tuple ${\overrightarrow{x}} = (x_1 ,\dots ,x_k )\in X^k$ is a {\it $\Sigma$-IE-tuple} (or {\it $\Sigma$-IE-pair} in the case $k=2$) if for every product neighbourhood $U_1 \times\dots\times U_k$ of ${\overrightarrow{x}}$ the tuple $(U_1 , \dots , U_k )$ has positive upper independence density over $\Sigma$. We say that ${\overrightarrow{x}}$ is a {\it sofic IE-tuple} (or {\it sofic IE-pair} in the case $k=2$) if it is a $\Sigma$-IE-tuple for the universal sofic approximation net $\Sigma$. We denote the $\Sigma$-IE-tuples of length $k$ by ${\rm IE}^\Sigma_k (X,G)$ and the sofic IE-tuples of length $k$ by ${\rm IE}_k^{\rm sof} (X,G)$. 
\end{definition} Note that ${\rm IE}^\Sigma_k (X,G) \subseteq {\rm IE}_k^{\rm sof} (X,G)$ for every sofic approximation net $\Sigma$. We define $\Sigma$-IE-tuples and sofic IE-tuples of sets in the same way as for points above. \begin{remark}\label{R-entropy tuple} It follows from Lemma~3.3 of \cite{KerLi07} that a nondiagonal tuple of points in $X$ is a $\Sigma$-IE-tuple if and only if it is a $\Sigma$-entropy tuple in the sense of Section~5 in \cite{Zha12}. In particular, if in analogy with the amenable case we define the action to have {\it uniformly positive entropy with respect to $\Sigma$} when every nondiagonal pair in $X\times X$ is a $\Sigma$-entropy pair, then the action has this property precisely when every pair in $X\times X$ is a $\Sigma$-IE-pair. \end{remark} We will need the following consequence of Karpovsky and Milman's generalization of the Sauer-Shelah lemma \cite{Sau72,She72,KarMil78}. \begin{lemma}[\cite{KarMil78}]\label{L-KM} Given $k\geq 2$ and $\lambda > 1$ there is a constant $c>0$ such that, for all $n\in {\mathbb N}$, if $S\subseteq \{1, 2, \dots , k \}^{\{1, 2, \dots , n\}}$ satisfies $|S|\geq ((k-1)\lambda )^n$ then there is an $I\subseteq \{1, 2, \dots , n\}$ with $|I|\geq cn$ and $S|_I = \{1, 2,\dots , k \}^I$. \end{lemma} \begin{proposition} \label{P-sofic IE to orbit IE} A sofic IE-tuple is an orbit IE-tuple. \end{proposition} \begin{proof} Fix a compatible metric $\rho$ on $X$. Let ${\overrightarrow{x}} = (x_1 ,\dots ,x_k )$ be a $\Sigma$-IE-tuple and $U_1\times \dots \times U_k$ a product neighbourhood of ${\overrightarrow{x}}$. We will show that the tuple $(U_1, \dots, U_k)$ has positive independence density over $G$. Suppose first that $k>1$. Take $1<\lambda<\frac{k}{k-1}$. Lemma~\ref{L-KM} then provides a constant $c>0$. Let $V_1\times \dots \times V_k$ be a product neighbourhood of ${\overrightarrow{x}}$ such that for some $\kappa>0$ the $\kappa$-neighbourhood of $V_j$ is contained in $U_j$ for all $1\le j\le k$. 
Then there exists a $q>0$ such that for every nonempty finite subset $F$ of $G$ and $\delta>0$ there is a cofinal set of $i$ for which the tuple $(V_1, \dots, V_k)$ has a $(\rho, F, \delta, \sigma_i)$-independence set ${\mathcal J}_i$ of cardinality at least $qd_i$. Let $F$ be a nonempty finite subset of $G$. We will show that $F$ has a subset of cardinality at least $(cq/2)|F|$ which is an independence set for the tuple $(U_1, \dots, U_k)$. Let $\delta$ be a small positive number to be determined in a moment. Take an $i$ in the above cofinal set with $|{\mathcal W}_i|\ge (1-\delta)d_i$ for ${\mathcal W}_i:=\{a\in \{1, \dots, d_i\}: F\overset{\sigma_i(\cdot)a}{\rightarrow} \sigma_i(F)a \text{ is injective}\}$. For each $\omega: {\mathcal J}_i\rightarrow \{1, \dots, k\}$, take a $\varphi_\omega\in {\rm Map} (\rho ,F,\delta ,\sigma_i )$ such that $\varphi_\omega(a)\in V_{\omega(a)}$ for all $a\in {\mathcal J}_i$. Then $|\{a\in \{1, \dots, d_i\}: \rho(s\varphi_\omega(a), \varphi_\omega(sa))\le \delta^{1/2}\}| \ge (1-\delta)d_i$ for each $s\in F$ and hence $|\Lambda_\omega|\ge (1-|F|\delta)d_i$ for \[ \Lambda_\omega:=\{a\in \{1, \dots, d_i\}: \rho(s\varphi_\omega(a), \varphi_\omega(sa))\le \delta^{1/2} \text{ for all } s\in F\}. \] Set $n = |F|$. When $n\delta <1/2$, the number of subsets of $\{1, \dots, d\}$ of cardinality no greater than $n\delta d$ is equal to $\sum_{j=0}^{\lfloor n\delta d \rfloor} \binom{d}{j}$, which is at most $n\delta d \binom{d}{\lfloor n\delta d\rfloor }$, which by Stirling's approximation is less than $\exp(\beta d)$ for some $\beta > 0$ depending on $\delta$ and $n$ but not on $d$ when $d$ is sufficiently large, with $\beta\to 0$ as $\delta\to 0$ for a fixed $n$.
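For the reader's convenience, the Stirling estimate just invoked can be made explicit via the standard binomial entropy bound: for $0<\theta\le 1/2$,
\[
\sum_{j=0}^{\lfloor \theta d \rfloor} \binom{d}{j} \le \exp\big( d H(\theta)\big), \qquad H(\theta) := -\theta\log\theta-(1-\theta)\log(1-\theta),
\]
so that with $\theta = n\delta$ one may take $\beta = H(n\delta)$, and indeed $H(n\delta)\to 0$ as $\delta\to 0$ for a fixed $n$.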
Thus when $\delta$ is small enough and $i$ is large enough, there is a subset $\Omega_i$ of $\{1, \dots, k\}^{{\mathcal J}_i}$ with $\big(\frac{k}{(k-1)\lambda}\big)^{q d_i}|\Omega_i|\ge k^{|{\mathcal J}_i|}$ such that the set $\Lambda_\omega$ is the same, say $\Theta_i$, for every $\omega \in \Omega_i$, and $|\Theta_i|/d_i>1-|F|\delta$. Then \[ |\Omega_i|\ge k^{|{\mathcal J}_i|}\bigg(\frac{(k-1)\lambda}{k}\bigg)^{q d_i} \ge k^{|{\mathcal J}_i|}\bigg(\frac{(k-1)\lambda}{k}\bigg)^{|{\mathcal J}_i|}=((k-1)\lambda)^{|{\mathcal J}_i|}. \] By our choice of $c$, we can find a subset ${\mathcal J}'_i$ of ${\mathcal J}_i$ with $|{\mathcal J}'_i|\ge c|{\mathcal J}_i|\ge cq d_i$ such that every $\xi: {\mathcal J}'_i\rightarrow \{1, \dots, k\}$ extends to some $\omega\in \Omega_i$. Writing $\zeta$ for the uniform probability measure on $\{1,\dots,d_i\}$, we have \begin{align*} \int_{{\mathcal W}_i\cap \Theta_i} \sum_{s\in F}1_{{\mathcal J}'_i}(sa)\, d\zeta (a) &=\sum_{s\in F}\int_{{\mathcal W}_i \cap \Theta_i} 1_{{\mathcal J}'_i}(sa)\, d\zeta (a)\\ &\ge \sum_{s\in F}\bigg(\frac{|{\mathcal J}'_i|}{d_i}-(|F|+1)\delta\bigg)\ge (cq-(|F|+1)\delta) |F|, \end{align*} and hence $\sum_{s\in F}1_{{\mathcal J}'_i}(sa_i)\ge (cq-(|F|+1)\delta)|F|$ for some $a_i\in {\mathcal W}_i\cap \Theta_i$. Then $|J_i|\ge (cq-(|F|+1)\delta)|F|$ for $J_i:=\{s\in F: sa_i\in {\mathcal J}'_i\}$. We claim that $J_i$ is an independence set for the tuple $(U_1, \dots, U_k)$ when $\delta<\kappa^2$. Let $f\in \{1, \dots, k\}^{J_i}$. Since $a_i\in {\mathcal W}_i$, the map $J_i\overset{\sigma_i(\cdot)a_i}{\rightarrow} \sigma_i(J_i)a_i$ is bijective. Thus we can define $\xi'\in \{1, \dots, k\}^{\sigma_i(J_i)a_i}$ by $\xi'(sa_i)=f(s)$ for $s\in J_i$. Extend $\xi'$ to some $\xi\in \{1, \dots, k\}^{{\mathcal J}'_i}$. Then we can extend $\xi$ to some $\omega \in \Omega_i$.
For every $s\in J_i$, since $sa_i\in {\mathcal J}_i$ and $a_i\in \Theta_i=\Lambda_\omega$, we have $\varphi_\omega(s a_i)\in V_{\omega (s a_i)}=V_{f(s)}$ and $\rho(s\varphi_\omega(a_i), \varphi_\omega(s a_i))\le \delta^{1/2}<\kappa$. By the choice of $\kappa$ we have $s \varphi_\omega(a_i)\in U_{f(s)}$. This proves our claim. Taking $\delta$ to be small enough, we have $|J_i|\ge (cq-(|F|+1)\delta)|F|\ge (cq/2)|F|$ as desired. The case $k=1$ can be established by a simpler version of the above argument that considers only a single map of the form $\varphi_\omega$ and does not require the invocation of the constant $c$ from Lemma~\ref{L-KM} and the associated use of Stirling's approximation. \end{proof} For the remainder of this subsection, $\Sigma = \{ \sigma_i :G\to{\rm Sym} (d_i) \}$ is a fixed but arbitrary sofic approximation sequence. In the case that $G$ is amenable, the independence density $I({\overrightarrow{A}})$ of a tuple ${\overrightarrow{A}}$ of subsets of $X$ was defined on page 887 of \cite{KerLi07} as the limit of $\varphi_{{\overrightarrow{A}}}(F) /|F|$ as the nonempty finite set $F\subseteq G$ becomes more and more left invariant, where $\varphi_{{\overrightarrow{A}}}(F)$ denotes the maximum of the cardinalities of the independence sets for ${\overrightarrow{A}}$ which are contained in $F$. A tuple $(x_1 , \dots , x_k ) \in X^k$ is an {\it IE-tuple} if for every product neighbourhood $U_1 \times\cdots\times U_k$ of $(x_1 , \dots , x_k )$ the independence density $I({\overrightarrow{U}})$ of the tuple ${\overrightarrow{U}} = (U_1 ,\dots, U_k )$ is positive. To establish Theorem~\ref{T-amenable case: orbit IE=IE} we need the following version of the Rokhlin lemma for sofic approximations, which appears as Lemma~4.6 in \cite{KerLi10}. For $\lambda\geq 0$, a collection of subsets of $\{ 1,\dots ,d\}$ is said to {\it $\lambda$-cover} $\{1, \dots, d\}$ if its union has cardinality at least $\lambda d$. 
\begin{lemma}\label{L-Rokhlin2} Let $G$ be a countable amenable discrete group. Let $0\le \tau<1$ and $0<\eta<1$. Let $K$ be a nonempty finite subset of $G$ and $\delta>0$. Then there are an $\ell\in {\mathbb N}$, nonempty finite subsets $F_1, \dots, F_\ell$ of $G$ with $|KF_k \setminus F_k|<\delta |F_k|$ and $|F_kK\setminus F_k|<\delta|F_k|$ for all $k=1, \dots, \ell$, a finite set $F\subseteq G$ containing $e$, and an $\eta'>0$ such that, for every $d\in {\mathbb N}$, every map $\sigma: G\rightarrow {\rm Sym}(d)$ for which there is a set $B\subseteq \{1, \dots, d\}$ satisfying $|B|\ge (1-\eta')d$ and \[ \sigma_{st}(a)=\sigma_s\sigma_t(a), \sigma_s(a)\neq \sigma_{s'}(a), \sigma_e(a)=a \] for all $a\in B$ and $s, t, s'\in F$ with $s\neq s'$, and every set $V\subseteq \{1, \dots, d\}$ with $|V|\ge (1-\tau)d$, there exist $C_1, \dots, C_\ell\subseteq V$ such that \begin{enumerate} \item for every $k=1, \dots, \ell$, the map $(s, c)\mapsto \sigma_s(c)$ from $F_k\times C_k$ to $\sigma(F_k)C_k$ is bijective, \item the family $\{ \sigma(F_1)C_1, \dots, \sigma(F_\ell)C_\ell \}$ is disjoint and $(1-\tau-\eta)$-covers $\{1, \dots, d\}$. \end{enumerate} \end{lemma} \begin{theorem} \label{T-amenable case: orbit IE=IE} Suppose that $G$ is amenable. Then IE-tuples, orbit IE-tuples, $\Sigma$-IE-tuples, and sofic IE-tuples are all the same thing. \end{theorem} \begin{proof} By Proposition~\ref{P-sofic IE to orbit IE}, sofic IE-tuples and $\Sigma$-IE-tuples are orbit IE-tuples. That orbit IE-tuples are IE-tuples is clear in view of the definition of the independence density $I({\overrightarrow{A}} )$ of a tuple ${\overrightarrow{A}}$ of subsets of $X$. It thus remains to show that IE-tuples are both sofic IE-tuples and $\Sigma$-IE-tuples. 
To prove that IE-tuples are $\Sigma$-IE-tuples, it suffices to demonstrate that, given a tuple ${\overrightarrow{U}} = (U_1,\dots,U_k)$ of subsets of $X$ with $I({\overrightarrow{U}}) > 0$, the tuple ${\overrightarrow{U}}$ has positive upper independence density over $\Sigma$. Set $\lambda = I({\overrightarrow{U}}) > 0$. Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. Let $\eta>0$, to be determined. By Lemma~\ref{L-Rokhlin2} we can find an $\ell\in{\mathbb N}$ and nonempty finite sets $F_1 ,\dots ,F_\ell \subseteq G$ such that (i) the sets $F_1,\dots,F_\ell$ are sufficiently left invariant so that for each $i=1,\dots ,\ell$ there is a set $J_i \subseteq F_i$ which is an independence set for ${\overrightarrow{U}}$ and has cardinality at least $\lambda |F_i|/2$, and (ii) for every good enough sofic approximation $\sigma : G\to{\rm Sym}(d)$ there exist $C_1,\dots,C_\ell \subseteq \{ 1,\dots ,d \}$ satisfying the following: \begin{enumerate} \item for every $i=1, \dots, \ell$ and $c\in C_i$, the map $s\mapsto \sigma_s(c)$ from $F_i$ to $\sigma(F_i)c$ is bijective, \item the family of sets $\sigma(F_i)c$ for $i=1,\dots,\ell$ and $c\in C_i$ is disjoint and $(1-\eta)$-covers $\{1, \dots, d\}$. \end{enumerate} Let $\sigma:G\rightarrow {\rm Sym} (d)$ be a sufficiently good sofic approximation for $G$ for some $d\in{\mathbb N}$. For every $h = (h_1,\dots,h_\ell )\in\prod_{i=1}^\ell X^{C_i}$ take a map $\varphi_h : \{ 1,\dots ,d\} \to X$ such that \[ \varphi_h (sc) = s(h_i (c)) \] for all $i\in \{1,\dots,\ell\}$, $c\in C_i$, and $s\in F_i$. We may assume in our invocation of Lemma~\ref{L-Rokhlin2} above that the sets $F_1,\dots,F_\ell$ are sufficiently left invariant so that, assuming that $\eta$ is sufficiently small and $\sigma$ is a sufficiently good sofic approximation, we have $\varphi_h \in{\rm Map} (\rho,F,\delta,\sigma)$ for every $h\in\prod_{i=1}^\ell X^{C_i}$. 
Write ${\mathcal J}$ for the subset $\bigcup_{i=1}^\ell \bigcup_{s\in J_i} \bigcup_{c\in C_i} \sigma_s (c)$ of $\{1,\dots,d\}$. From (1) and (2) we obtain \[ |{\mathcal J}|= \sum_{i=1}^\ell |J_i| |C_i|\geq\sum_{i=1}^\ell \frac{\lambda}{2} |F_i| |C_i| \geq \frac{\lambda}{2} (1-\eta)d \geq \frac{\lambda}{4} d \] assuming that $\eta\leq 1/2$. Now whenever we are given $\omega_i \in \{ 1,\dots ,k\}^{J_i}$ for $i=1,\dots,\ell$ we can find, since each $J_i$ is an independence set for ${\overrightarrow{U}}$, an $h = (h_1,\dots,h_\ell)\in\prod_{i=1}^\ell X^{C_i}$ such that $sh_i (c) \in U_{\omega_i (s)}$ for all $i=1,\dots,\ell$ and $s\in J_i$. The maps $\varphi_h$ for such $h$ then witness the fact that ${\mathcal J}$ is a $(\rho,F,\delta,\sigma)$-independence set for ${\overrightarrow{U}}$. It follows that ${\overrightarrow{U}}$ has positive upper independence density over $\Sigma$. Hence IE-tuples are $\Sigma$-IE-tuples. The above argument also shows that IE-tuples are sofic IE-tuples, and so we are done. One can also give the following direct proof that IE-tuples are orbit IE-tuples. It suffices to show that if ${\overrightarrow{A}} = (A_1 , \dots ,A_k )$ is a tuple of subsets of $X$ and $q>0$ is such that for every nonempty finite subset $K$ of $G$ and $\varepsilon>0$ there exist a nonempty finite subset $F$ of $G$ with $|KF\setminus F|\le \varepsilon |F|$ and a $J\subseteq F$ with $|J|\ge q|F|$ which is an independence set for ${\overrightarrow{A}}$, then the independence density of ${\overrightarrow{A}}$ over $G$ is at least $q$. Let $F_1$ be a nonempty finite subset of $G$. Let $1>\delta>0$. Take $\varepsilon>0$ to be a small number which we shall determine in a moment. Then there exist a nonempty finite subset $F$ of $G$ with $|F_1^{-1}F\setminus F|\le \varepsilon |F|$ and a $J\subseteq F$ with $|J|\ge q|F|$ which is an independence set for ${\overrightarrow{A}}$. Set $F'=\{s\in F: F_1^{-1}s\subseteq F\}$.
Taking $\varepsilon$ to be small enough, we have $|F'|\ge (1-\delta)|F|$. Note that the function $\sum_{s\in F}1_{F_1s}$ has value $|F_1|$ at every point of $F'$. Thus \begin{align*} \sum_{t\in J\cap F'}\sum_{s\in F}1_{F_1s}(t)=|J\cap F'||F_1|. \end{align*} We also have \begin{align*} \sum_{t\in J\cap F'}\sum_{s\in F}1_{F_1s}(t)=\sum_{s\in F}\sum_{t\in J\cap F'}1_{F_1s}(t)=\sum_{s\in F}|F_1s\cap (J\cap F')|. \end{align*} Therefore we can find an $s\in F$ with \begin{align*} |F_1s\cap (J\cap F')|\ge \frac {|J\cap F'||F_1|}{|F|}\ge (q-\delta)|F_1|. \end{align*} Since $F_1\cap (J\cap F')s^{-1}$ is an independence set for ${\overrightarrow{A}}$, we deduce that $F_1$ has a subset of cardinality at least $(q-\delta)|F_1|$ which is an independence set for ${\overrightarrow{A}}$. Letting $\delta\to 0$, we get that $F_1$ has a subset of cardinality at least $q|F_1|$ which is an independence set for ${\overrightarrow{A}}$. Therefore the independence density $I({\overrightarrow{A}})$ of ${\overrightarrow{A}}$ is at least $q$, and so we conclude that IE-tuples are orbit IE-tuples. \end{proof} The surprising fact above is that IE-tuples are orbit IE-tuples in the amenable case. It is clear however for a Bernoulli action that all tuples are orbit IE-tuples. Notice also that the argument above works equally well if in the definition of $\Sigma$-IE-tuples we use positive upper independence density over $\Sigma$ with respect to a fixed free ultrafilter ${\mathfrak F}$. \begin{remark} The product formula for ${\rm IE}$-tuples as defined in the amenable framework was established in Theorem~3.15 of \cite{KerLi07} using a measure-theoretic argument. We can now combine Theorems~\ref{T-amenable case: orbit IE=IE} and \ref{T-product for orbit IE} to obtain a combinatorial proof. 
\end{remark} \begin{remark}\label{R-density} The proof of Theorem~\ref{T-amenable case: orbit IE=IE} shows that the independence density $I({\overrightarrow{A}})$, as defined on page 887 of \cite{KerLi07} and recalled before the theorem statement, coincides with the independence density defined in Definition~\ref{D-independence density}. We may thus use the notation $I({\overrightarrow{A}})$ without ambiguity to denote the more general independence density of Definition~\ref{D-independence density}. \end{remark} \begin{remark} When $G$ is amenable, it is clear from the classical $(n,\varepsilon)$-separated set formulation of topological entropy that the entropy of an action $G\curvearrowright X$ is bounded below by the supremum of $I({\overrightarrow{A}}) \log k$ over all pairs $(k,{\overrightarrow{A}})$ where $k\in{\mathbb N}$ and ${\overrightarrow{A}}$ is a $k$-tuple of pairwise disjoint closed subsets of $X$. For Bernoulli actions the two quantities are equal. In the nonamenable case, the entropy fails in general to be bounded below by $\sup_{(k,{\overrightarrow{A}})} I({\overrightarrow{A}}) \log k$, where $I({\overrightarrow{A}})$ is as defined in Remark~\ref{R-density}. Indeed an example of Ornstein and Weiss \cite[Appendix C]{OrnWei87} shows that the Bernoulli action $F_2\curvearrowright \{ 0,1 \}^{F_2}$ over the free group on two generators has Bernoulli factors over arbitrarily large finite sets of symbols, in which case the supremum is infinite. \end{remark} We next aim to establish some basic properties of $\Sigma$-IE-tuples in Proposition~\ref{P-basic}. From Lemma~3.6 of \cite{KerLi07} we obtain: \begin{lemma}\label{L-decomposition indep} Let $k\in {\mathbb N}$. Then there is a constant $c>0$ depending only on $k$ with the following property. Let ${\overrightarrow{A}}=(A_1, \dots, A_k)$ be a $k$-tuple of subsets of $X$ and suppose $A_1 = A_{1, 1}\cup A_{1,2}$. Let $F$ be a nonempty finite subset of $G$ and $\delta > 0$. 
Let $\sigma$ be a map from $G$ to ${\rm Sym} (d)$ for some $d\in{\mathbb N}$. If a set $J\subseteq \{ 1,\dots ,d\}$ is a $(\rho ,F,\delta ,\sigma )$-independence set for ${\overrightarrow{A}}$, then there exists an $I\subseteq J$ such that $|I|\ge c|J|$ and $I$ is a $(\rho ,F,\delta ,\sigma )$-independence set for $(A_{1,1}, \dots, A_k)$ or $(A_{1,2}, \dots, A_k)$. \end{lemma} From Lemma~\ref{L-decomposition indep} we get: \begin{lemma}\label{L-decomposition E} Let ${\overrightarrow{A}}= (A_1, \dots, A_k )$ be a $k$-tuple of subsets of $X$ which has positive upper independence density over $\Sigma$. Suppose that $A_1 = A_{1, 1}\cup A_{1, 2}$. Then at least one of the tuples $(A_{1,1},\dots, A_k)$ and $(A_{1,2}, \dots, A_k)$ has positive upper independence density over $\Sigma$. \end{lemma} \begin{lemma} \label{L-positive entropy to independence} $h_\Sigma(X, G)>0$ if and only if there are disjoint closed subsets $A_0$ and $A_1$ of $X$ such that $(A_0, A_1)$ has positive upper independence density over $\Sigma$. \end{lemma} \begin{proof} Let $\rho$ be a compatible metric on $X$ with ${\rm diam}_\rho(X)\le 1$. Then $h_{\Sigma, \infty}(\rho)=h_\Sigma(X, G)$. The ``if'' part is obvious. So assume $h_{\Sigma, \infty}(\rho)>0$. Then $h^{6\varepsilon}_{\Sigma, \infty}(\rho)>0$ for some $\varepsilon>0$. Set $c=h^{6\varepsilon}_{\Sigma, \infty}(\rho)/2$. Take a finite $(\rho, 2\varepsilon)$-dense subset $Z$ of $X$. Consider on $X$ the continuous pseudometrics $\rho^z$, for $z\in Z$, and $\rho'$ given by \[ \rho^z(x, y)=|\rho(x, z)-\rho(y, z)|, \hspace*{5mm} \rho'(x, y)=\max_{z\in Z}\rho^z(x, y).\] Note that if $\rho(x, y)\ge 6\varepsilon$ for some $x, y\in X$, then $\rho'(x, y)\ge 2\varepsilon$. It follows that if $d\in {\mathbb N}$ and $\varphi$ and $\psi$ are maps from $\{1, \dots, d\}$ to $X$ with $\rho_{\infty}(\varphi, \psi)\ge 6\varepsilon$, then $\rho'_{\infty}(\varphi, \psi)\ge 2\varepsilon$. 
Take an increasing sequence $\{F_n\}_{n\in {\mathbb N}}$ of nonempty finite subsets of $G$ with union $G$ and a decreasing sequence $\{\delta_n\}_{n\in {\mathbb N}}$ of positive numbers converging to $0$. For each $n\in {\mathbb N}$, there is a cofinal set $I_n$ of $i$ for which one has $N_{6\varepsilon}({\rm Map}(\rho, F_n, \delta_n, \sigma_i), \rho_\infty)\ge \exp(c d_i)$. Then $N_{2\varepsilon}({\rm Map}(\rho, F_n, \delta_n, \sigma_i), \rho'_\infty)\ge \exp(c d_i)$ for all $i\in I_n$. For each $i\in I_n$ and $z\in Z$ take a $(\rho^z_\infty, \varepsilon)$-separated subset $W_{i, z}$ of ${\rm Map}(\rho, F_n, \delta_n, \sigma_i)$ of maximum cardinality. Then \[ N_{2\varepsilon}({\rm Map}(\rho, F_n, \delta_n, \sigma_i), \rho'_\infty) \le \prod_{z\in Z}|W_{i, z}| = \prod_{z\in Z}N_\varepsilon({\rm Map}(\rho, F_n, \delta_n, \sigma_i), \rho^z_\infty). \] Thus $N_\varepsilon({\rm Map}(\rho, F_n, \delta_n, \sigma_i), \rho^{z_{n, i}}_\infty)\ge \exp(c d_i/|Z|)$ for some $z_{n, i}\in Z$. Replacing $I_n$ by a cofinal subset if necessary, we may assume that $z_{n, i}$ is the same, say $z_n$, for all $i\in I_n$. Passing to a subsequence of $\{(F_n, \delta_n)\}_{n \in {\mathbb N}}$ if necessary, we may assume that $z_n$ is the same, say $\mathfrak{z}$, for all $n\in {\mathbb N}$. Note that if $W$ is a $(\rho^{\mathfrak{z}}_\infty, \varepsilon)$-separated subset of ${\rm Map}(\rho, F_n, \delta_n, \sigma_i)$, then the set $\{ \rho(\mathfrak{z}, \cdot)\circ \varphi: \varphi \in W\}$ in $\ell_{\infty}^{d_i}$ is $(\|\cdot \|_\infty, \varepsilon)$-separated.
By \cite[Lemma 2.3]{GlaWei95}, there are constants $c'>0$ and $\delta>0$ depending only on $c/|Z|$ and $\varepsilon$ such that for every $n\in {\mathbb N}$ and large enough $i\in I_n$ there are a $t_{n, i}\in [0, 1]$ and a subset $J_{n, i}$ of $\{1, \dots, d_i\}$ with $|J_{n, i}|\ge c'd_i$ so that for every $\omega: J_{n, i}\rightarrow \{0, 1\}$ there is a $\varphi_\omega\in {\rm Map}(\rho, F_n, \delta_n, \sigma_i)$ such that for all $a\in J_{n, i}$ we have $\rho(\mathfrak{z}, \varphi_\omega(a))\ge t_{n, i}+\delta$ or $\rho(\mathfrak{z}, \varphi_\omega(a))\le t_{n, i}-\delta$ depending on whether $\omega(a)=0$ or $\omega(a)=1$. Replacing $I_n$ by a cofinal subset if necessary, we may assume that there is a $t_n\in [0, 1]$ such that $|t_{n, i}-t_n|<\delta/4$ for all $i\in I_n$. Replacing $\{(F_n, \delta_n)\}_{n\in {\mathbb N}}$ by a subsequence if necessary, we may assume that there is a $t\in [0, 1]$ such that $|t_n-t|<\delta/4$ for all $n\in {\mathbb N}$. Set $A_0=\{x\in X: \rho(\mathfrak{z}, x)\ge t+\delta/2\}$ and $A_1=\{x\in X: \rho(\mathfrak{z}, x)\le t-\delta/2\}$. Then for every $n\in {\mathbb N}$ and $i\in I_n$, the set $J_{n, i}$ is a $(\rho, F_n, \delta_n, \sigma_i)$-independence set for $(A_0, A_1)$. Thus $(A_0, A_1)$ has positive upper independence density over $\Sigma$. \end{proof} The following is obvious. \begin{lemma} \label{L-nonnegative entropy} $h_\Sigma(X, G)\ge 0$ if and only if $X$ as a $1$-tuple has positive upper independence density over $\Sigma$. \end{lemma} \begin{proposition}\label{P-basic} The following are true: \begin{enumerate} \item Let $(A_1, \dots , A_k )$ be a tuple of closed subsets of $X$ which has positive upper independence density over $\Sigma$. Then there exists a $\Sigma$-IE-tuple $(x_1,\dots , x_k)$ with $x_j\in A_j$ for all $1\le j\le k$. \item ${\rm IE}_1^\Sigma(X, G)$ is nonempty if and only if $h_\Sigma(X, G)\ge 0$.
\item ${\rm IE}_2^\Sigma(X, G)\setminus \Delta_2(X)$ is nonempty if and only if $h_\Sigma(X, G)>0$, where $\Delta_2 (X)$ denotes the diagonal in $X^2$. \item ${\rm IE}_k^\Sigma(X, G)$ is a closed subset of $X^k$ which is invariant under the product action. \item Let $\pi:(X, G)\rightarrow (Y,G)$ be a factor map. Then $(\pi\times\cdots\times \pi )({\rm IE}_k^\Sigma(X, G))\subseteq {\rm IE}_k^\Sigma(Y, G)$. \item Suppose that $Z$ is a closed $G$-invariant subset of $X$. Then ${\rm IE}_k^\Sigma(Z, G )\subseteq {\rm IE}_k^\Sigma(X, G)$. \end{enumerate} \end{proposition} \begin{proof} Assertion (1) follows from Lemma~\ref{L-decomposition E} and a simple compactness argument. Assertion (2) follows from assertion (1) and Lemma~\ref{L-nonnegative entropy}. Assertion (3) follows directly from assertion (1) and Lemma~\ref{L-positive entropy to independence}. Assertion (4) follows from the observation that, given a compatible metric $\rho$ on $X$, for any $s\in G$, nonempty finite subset $F$ of $G$, and $\delta>0$ there is a $\delta'>0$ such that, for every $d\in {\mathbb N}$ and map $\sigma: G\rightarrow {\rm Sym}(d)$ which is a good enough sofic approximation for $G$, if $\varphi \in {\rm Map}(\rho, \{s^{-1}\}\cup (s^{-1}F), \delta', \sigma)$, then $\alpha_s\circ \varphi\circ \sigma_{s^{-1}}\in {\rm Map}(\rho, F, \delta, \sigma)$, where $\alpha_s$ is the transformation $x\mapsto sx$ of $X$. Assertions (5) and (6) are trivial. \end{proof} \begin{remark} The inclusion in (5) above is an equality when $G$ is amenable, since $\Sigma$-IE-tuples are the same as IE-tuples by Theorem~\ref{T-amenable case: orbit IE=IE}. Equality can fail however if $G$ is nonamenable: Take an action $G\curvearrowright X$ with $h_\Sigma (X,G)=-\infty$ and an action $G\curvearrowright Y$ with $h_\Sigma (Y,G) > 0$.
Then $G\curvearrowright Y$ has a nondiagonal $\Sigma$-IE-pair, while the product action $G\curvearrowright X\times Y$, which factors onto $G\curvearrowright Y$ via the second coordinate projection, satisfies $h_\Sigma (X\times Y,G)=-\infty$ and hence has no nondiagonal $\Sigma$-IE-pairs. \end{remark} \begin{remark} The analogue for orbit IE-tuples of the localization in Proposition~\ref{P-basic}(1) does not hold in the nonamenable case. Indeed for any action $G\curvearrowright X$ of a discrete group the $1$-tuple $X$ has positive independence density, while the boundary action $F_2 \curvearrowright \partial F_2$ of the free group on two generators (where $\partial F_2$ consists of infinite reduced words in the standard generators and their inverses, with the action by left concatenation and reduction) is easily seen not to admit any orbit IE-$1$-tuples. \end{remark} From Proposition~\ref{P-basic}(5) we get the following. As in Theorem~\ref{T-product for orbit IE}, the inclusion below is understood with respect to the identification of $((x_1,\dots,x_k),(y_1,\dots,y_k))\in X^k \times Y^k$ and $((x_1,y_1),\dots,(x_k,y_k))\in (X\times Y)^k$. \begin{proposition} \label{P-product for IE} ${\rm IE}^\Sigma_k (X\times Y,G)\subseteq {\rm IE}^\Sigma_k (X,G) \times {\rm IE}^\Sigma_k (Y,G)$. \end{proposition} The problem of the reverse inclusion will be taken up in the next section. For the remainder of this section $X$ is the unit ball of $\ell^p(G)$ for some $1\le p<\infty$ equipped with the pointwise convergence topology, and the action $G\curvearrowright X$ is by left shifts. We will use some of the above results to compute the sofic topological entropy of this action to be zero when $G$ is infinite. 
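To fix conventions, the left shift action of $G$ on $\ell^p (G)$ referred to here is given by
\[
(sx)_t = x_{s^{-1}t}, \qquad s, t\in G, \ x\in \ell^p (G),
\]
so that in particular $(sx)_e = x_{s^{-1}}$, which is the form in which the action enters the proof of Lemma~\ref{L-unit ball null} below.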
Recall from the end of Section~\ref{S-orbit} that a tuple ${\overrightarrow{x}} = (x_1 , \dots , x_k )\in X^k$ is an IN-tuple if for every product neighbourhood $U_1 \times\dots\times U_k$ of ${\overrightarrow{x}}$ the tuple $(U_1 ,\dots, U_k )$ has arbitrarily large finite independence sets. We write ${\rm IN}_k (X,G)$ for the set of IN-tuples of length $k$. \begin{lemma} \label{L-unit ball null} For every $k\in{\mathbb N}$ the set ${\rm IN}_k(X,G)$ consists of the single element $(0, \dots, 0)$. \end{lemma} \begin{proof} Clearly $(0, \dots, 0)\in {\rm IN}_k(X,G)$ for every $k\in {\mathbb N}$. Also note that if ${\overrightarrow{x}}=(x_1,\dots, x_k)\in {\rm IN}_k(X,G)$ then $x_1, \dots, x_k\in {\rm IN}_1(X,G)$. Thus it suffices to show ${\rm IN}_1(X,G)\subseteq \{0\}$. Let $x\in X$ with $x\neq 0$. Then $(tx)_e\neq 0$ for some $t\in G$. Set $r=|(tx)_e|/2>0$ and $U=\{y\in X: |y_e|\ge r\}$. Then $U$ is a neighborhood of $tx$ in $X$. Let $F\subseteq G$ be a finite independence set for $U$. Then $\bigcap_{s\in F} s^{-1}U$ is nonempty. Take $y\in \bigcap_{s\in F} s^{-1}U$. Then $sy\in U$ and hence $|y_{s^{-1}}|=|(sy)_e|\ge r$ for every $s\in F$. It follows that \[ |F|r^p\le \sum_{s\in F}|y_{s^{-1}}|^p\le \|y\|_p^p\le 1, \] and hence $|F|\le r^{-p}$. Therefore $tx\not\in {\rm IN}_1(X,G)$. Since ${\rm IN}_1(X,G)$ is $G$-invariant, $x\not\in {\rm IN}_1(X,G)$. \end{proof} \begin{proposition} \label{P-unit ball zero entropy} Suppose that $G$ is infinite. Then $h_\Sigma(X, G)=0$. \end{proposition} \begin{proof} By Lemma~\ref{L-unit ball null}, Proposition~\ref{P-basic}(3), and Propositions~\ref{P-sofic IE to orbit IE} and \ref{P-orbit IE to IN}, we have $h_\Sigma(X, G)\le 0$. Since $X$ has the fixed point $0$, we have $h_\Sigma(X, G)\ge 0$. Therefore $h_\Sigma(X, G)=0$.
\end{proof} \section{Product Formula for IE-tuples}\label{S-product} In order to hope for a product formula for $\Sigma$-IE-tuples beyond the amenable case, we must be able to witness independence density in some uniform way, in analogy with the definition of orbit IE-tuples in Section~\ref{S-orbit} (see Theorem~\ref{T-product for orbit IE}). This can be achieved by taking a free ultrafilter ${\mathfrak F}$ on ${\mathbb N}$ and requiring that the independence sets in Definition~\ref{D-positive independence density} exist for a set of $i$ belonging to ${\mathfrak F}$ instead of a cofinal set of $i$. Thus for the purposes of this section we fix a free ultrafilter ${\mathfrak F}$ on ${\mathbb N}$ and switch to the definition of $\Sigma$-IE-tuples based on this interpretation of positive density. We will similarly understand sofic topological entropy to be defined by using an ultralimit over ${\mathfrak F}$ in Definition~\ref{D-topological entropy} instead of the limit supremum. We do not know whether our product formula results, Proposition~\ref{P-entropy for product} and Theorem~\ref{T-ergodic to product}, hold for the original definitions. For the first part of our discussion, up to and including Lemma~\ref{L-tuple of product sets}, $G$ is a countable sofic group and $\Sigma=\{\sigma_i: G\rightarrow {\rm Sym}(d_i)\}_{i=1}^\infty$ is a fixed but arbitrary sofic approximation sequence for $G$. \begin{proposition}\label{P-entropy for product} Let $G$ act continuously on compact metrizable spaces $X$ and $Y$. Then \[ h_\Sigma(X\times Y, G)=h_\Sigma(X, G)+h_\Sigma(Y, G). \] \end{proposition} \begin{proof} Fix compatible metrics $\rho^X$ and $\rho^Y$ on $X$ and $Y$ respectively. Define a compatible metric $\rho^{X\times Y}$ on $X\times Y$ by \[ \rho^{X\times Y}((x_1, y_1), (x_2, y_2))=\rho^X(x_1, x_2)+\rho^Y(y_1, y_2) \] for $(x_1, y_1), (x_2, y_2)\in X\times Y$. Let $d\in {\mathbb N}$.
Identify $(X\times Y)^{\{1, \dots, d\}}$ with $X^{\{1, \dots, d\}}\times Y^{\{1, \dots, d\}}$ naturally. Note that for all $\varphi, \varphi'\in X^{\{1, \dots, d\}}$ and $\psi, \psi'\in Y^{\{1, \dots, d\}}$ one has \[ \max\big(\rho^X_2(\varphi, \varphi'), \rho^Y_2(\psi, \psi' )\big) \le \rho^{X\times Y}_2((\varphi, \psi), (\varphi', \psi')) \le \rho^X_2(\varphi, \varphi')+\rho^Y_2(\psi, \psi'). \] Let $F$ be a nonempty finite subset of $G$, $\delta>0$, and $\varepsilon>0$, and let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$. Then ${\rm Map}(\rho^X, F, \delta, \sigma)\times {\rm Map}(\rho^Y, F, \delta, \sigma)\subseteq {\rm Map}(\rho^{X\times Y}, F, 2\delta, \sigma)$. Furthermore, for every $(\rho^X_2, \varepsilon)$-separated subset ${\mathscr W}_X$ of ${\rm Map}(\rho^X, F, \delta, \sigma)$ and every $(\rho^Y_2, \varepsilon)$-separated subset ${\mathscr W}_Y$ of ${\rm Map}(\rho^Y, F, \delta, \sigma)$, the set ${\mathscr W}_X\times {\mathscr W}_Y$ is $(\rho^{X\times Y}_2, \varepsilon)$-separated. It follows that $h^\varepsilon_{\Sigma, 2}(\rho^{X\times Y}, F, 2\delta)\ge h^\varepsilon_{\Sigma, 2}(\rho^X, F, \delta)+h^\varepsilon_{\Sigma, 2}(\rho^Y, F, \delta)$, and hence $h_\Sigma(X\times Y, G)\ge h_\Sigma(X, G)+h_\Sigma(Y, G)$. Note that for any $(\rho^X_2, \varepsilon)$-spanning subset ${\mathscr W}_X$ of ${\rm Map}(\rho^X, F, \delta, \sigma)$ and any $(\rho^Y_2, \varepsilon)$-spanning subset ${\mathscr W}_Y$ of ${\rm Map}(\rho^Y, F, \delta, \sigma)$, the set ${\mathscr W}_X\times {\mathscr W}_Y$ is $(\rho^{X\times Y}_2, 2\varepsilon)$-spanning for (though not necessarily contained in) ${\rm Map}(\rho^{X\times Y}, F, \delta, \sigma)$. It follows that $N_{4\varepsilon}(\rho^{X\times Y}, F, \delta, \sigma)\le N_\varepsilon(\rho^X, F, \delta, \sigma)\times N_\varepsilon(\rho^Y, F, \delta, \sigma)$, and hence $h^{4\varepsilon}_{\Sigma, 2}(\rho^{X\times Y}, F, \delta)\le h^\varepsilon_{\Sigma, 2}(\rho^X, F, \delta)+h^\varepsilon_{\Sigma, 2}(\rho^Y, F, \delta)$.
Consequently, $h_\Sigma(X\times Y, G)\le h_\Sigma(X, G)+h_\Sigma(Y, G)$. \end{proof} The Loeb space and the Loeb measure were introduced by Loeb in \cite{Loe75}. An exposition can be found in \cite{AleGleGor99}. The Loeb space is the ultraproduct space $\prod_{\mathfrak F}\{1, \dots, d_i\}$. A subset $Y$ of $\prod_{\mathfrak F}\{1, \dots, d_i\}$ is called {\it internal} if it is of the form $\prod_{\mathfrak F} Y_i$ for a sequence $\{Y_i\}_{i\in {\mathbb N}}$ with $Y_i\subseteq \{1, \dots, d_i\}$ for all $i\in {\mathbb N}$. The collection ${\mathfrak I}$ of internal subsets is an algebra. The Loeb measure is the unique probability measure $\mu$ on the $\sigma$-algebra ${\mathfrak B}$ generated by ${\mathfrak I}$ such that $\mu(Y)=\lim_{i\to {\mathfrak F}} |Y_i|/d_i$ for every internal set $Y=\prod_{\mathfrak F} Y_i$. For every $Z\in {\mathfrak B}$ there exists a $Y\in {\mathfrak I}$ such that $\mu(Y\Delta Z)=0$. For each $d\in {\mathbb N}$, denote by $\rho_{\rm Hamm}$ the normalized Hamming distance on ${\rm Sym}(d)$ defined by \[ \rho_{\rm Hamm}(\tau, \tau')=\frac{1}{d}|\{a\in \{1, \dots, d\}: \tau(a)\neq \tau'(a)\}|. \] The ultraproduct group $\prod_{\mathfrak F}{\rm Sym}(d_i)$ has a natural action on $\prod_{\mathfrak F}\{1, \dots, d_i\}$ preserving $\mu$. One has a bi-invariant pseudometric $\rho_L$ on $\prod_{\mathfrak F}{\rm Sym}(d_i)$ defined by $\rho_L(\tau, \tau')=\mu(\{y\in \prod_{\mathfrak F}\{1, \dots, d_i\}: \tau y\neq \tau'y\})$. For any $\tau=(\tau_i)_i, \tau'=(\tau'_i)_i\in \prod_{\mathfrak F}{\rm Sym}(d_i)$ with $\tau_i, \tau'_i\in {\rm Sym}(d_i)$ for all $i\in {\mathbb N}$, one has $\rho_L(\tau, \tau')=\lim_{i\to {\mathfrak F}}\rho_{\rm Hamm}(\tau_i, \tau'_i)$. Denote by ${\mathfrak G}$ the quotient group of $\prod_{\mathfrak F}{\rm Sym}(d_i)$ by the normal subgroup $\{\tau: \rho_L(\tau, e)=0\}$, on which $\rho_L$ descends to a bi-invariant metric. Then we may think of ${\mathfrak G}$ as acting on $\prod_{\mathfrak F} \{1, \dots, d_i\}$ by $\mu$-preserving transformations.
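The bi-invariance of $\rho_L$ reduces to the following elementary observation at each finite level: for $\pi, \tau, \tau'\in {\rm Sym}(d)$, injectivity of $\pi$ gives $\pi\tau(a)\neq\pi\tau'(a)$ if and only if $\tau(a)\neq\tau'(a)$, while surjectivity of $\pi$ allows the substitution $a=\pi(b)$, whence
\[
\rho_{\rm Hamm}(\pi\tau, \pi\tau') = \rho_{\rm Hamm}(\tau, \tau') = \rho_{\rm Hamm}(\tau\pi, \tau'\pi).
\]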
The sofic approximation sequence $\Sigma$ gives rise to a natural group embedding of $G$ into ${\mathfrak G}$. Thus we may think of $G$ as a subgroup of ${\mathfrak G}$. Denote by $G'$ the subgroup of ${\mathfrak G}$ consisting of elements commuting with $G$. As before, the equality below is understood with respect to the identification of $((x_1,\dots,x_k),(y_1,\dots,y_k))\in X^k \times Y^k$ and $((x_1,y_1),\dots,(x_k,y_k))\in (X\times Y)^k$. \begin{theorem}\label{T-ergodic to product} Suppose that the action of $G'$ on $(\prod_{\mathfrak F}\{1, \dots, d_i\}, {\mathfrak B}, \mu)$ is ergodic. Let $G$ act continuously on compact metrizable spaces $X$ and $Y$. Let $k\in {\mathbb N}$. Then \[ {\rm IE}^\Sigma_k (X\times Y,G)={\rm IE}^\Sigma_k (X,G) \times {\rm IE}^\Sigma_k (Y,G). \] \end{theorem} We prove the theorem by way of the following results. \begin{definition}\label{D-independence set on Loeb space} Let $G$ act continuously on a compact metrizable space $X$. Let $\rho$ be a dynamically generating continuous pseudometric on $X$. Let ${\overrightarrow{A}} = (A_1 , \dots ,A_k )$ be a tuple of subsets of $X$. We say that an internal set $Y=\prod_{\mathfrak F} Y_i$ with $Y_i\subseteq \{1, \dots, d_i\}$ for all $i\in {\mathbb N}$ is an {\it independence set} for ${\overrightarrow{A}}$ if for every nonempty finite subset $F$ of $G$ and every $\delta>0$ the set of all $i\in {\mathbb N}$ for which $Y_i$ is a $(\rho, F, \delta, \sigma_i)$-independence set for ${\overrightarrow{A}}$ belongs to ${\mathfrak F}$. \end{definition} From Lemma~\ref{L-change pseudometric} it is easy to see that Definition~\ref{D-independence set on Loeb space} does not depend on the choice of $\rho$. 
Consistent with our interpretation of the equality in Theorem~\ref{T-ergodic to product}, in Proposition~\ref{P-basics of internal independence sets} and Lemma~\ref{L-tuple of product sets} we understand ${\overrightarrow{A}} \times {\boldsymbol{B}}$ to mean $(A_1 \times B_1 ,\dots , A_k \times B_k)$ where ${\overrightarrow{A}} = (A_1 ,\dots ,A_k)$ and ${\boldsymbol{B}} = (B_1 ,\dots ,B_k)$. \begin{proposition} \label{P-basics of internal independence sets} Let $G$ act continuously on compact metrizable spaces $X$ and $Y$. Let ${\overrightarrow{A}}$ and ${\boldsymbol{B}}$ be $k$-tuples of subsets of $X$ and $Y$ respectively for some $k\in {\mathbb N}$. Then the following hold: \begin{enumerate} \item ${\overrightarrow{A}}$ has positive upper independence density over $\Sigma$ if and only if ${\overrightarrow{A}}$ has an internal independence set $Z$ with $\mu(Z)>0$. \item The set of internal independence sets for ${\overrightarrow{A}}$ is $G'$-invariant. \item An internal set is an independence set for ${\overrightarrow{A}} \times {\boldsymbol{B}}$ if and only if it is an independence set for both ${\overrightarrow{A}}$ and ${\boldsymbol{B}}$. \end{enumerate} \end{proposition} \begin{proof} Fix a compatible metric $\rho$ on $X$ which gives $X$ diameter at most $1$. (1). The ``if'' part is obvious. Suppose that ${\overrightarrow{A}}$ has positive upper independence density over $\Sigma$. Let $q>0$ be as in Definition~\ref{D-positive independence density}. Let $\{F_n\}_{n\in {\mathbb N}}$ be an increasing sequence of finite subsets of $G$ with $\bigcup_{n\in {\mathbb N}} F_n=G$. For each $n\in {\mathbb N}$, denote by $W'_n$ the set of all $i\in {\mathbb N}$ for which there is a $(\rho, F_n, 1/n, \sigma_i)$-independence set $Z_i$ for ${\overrightarrow{A}}$ with $\zeta(Z_i)\ge q$. Also set $W_n=W'_n\setminus \{1, \dots, n-1\}$. Then $W'_n\in {\mathfrak F}$ by our assumption and hence $W_n\in {\mathfrak F}$ for each $n\in {\mathbb N}$.
Note that the sequence $\{W_n\}_{n\in {\mathbb N}}$ is decreasing, and $\bigcap_{n\in {\mathbb N}}W_n=\emptyset$. We define an internal set $Z=\prod_{\mathfrak F} {\mathcal Z}_i$ as follows. If $i\in {\mathbb N} \setminus W_1$, we take any ${\mathcal Z}_i \subseteq \{1, \dots, d_i\}$. If $i\in W_n\setminus W_{n+1}$ for some $n\in {\mathbb N}$, we take ${\mathcal Z}_i$ to be a $(\rho, F_n, 1/n, \sigma_i)$-independence set for ${\overrightarrow{A}}$ with $|{\mathcal Z}_i|/d_i\ge q$. Then ${\mathcal Z}_i$ is a $(\rho, F_n, 1/n, \sigma_i)$-independence set for ${\overrightarrow{A}}$ for all $n\in {\mathbb N}$ and $i\in W_n$. Thus $Z$ is an internal independence set for ${\overrightarrow{A}}$. As $|{\mathcal Z}_i|/d_i\ge q$ for all $i\in W_1$, we have $\mu(Z)=\lim_{i\to {\mathfrak F}}\zeta({\mathcal Z}_i)\ge q$. This proves the ``only if'' part. (2). Let $Z=\prod_{\mathfrak F} {\mathcal Z}_i$ be an internal independence set for ${\overrightarrow{A}}$, and let $\tau=(\tau_i)_i\in G'$. Then $\tau Z=\prod_{\mathfrak F} \tau_i{\mathcal Z}_i$. Let $F$ be a nonempty finite subset of $G$ and $1>\delta>0$. Then the set $W$ of all $i\in {\mathbb N}$ for which ${\mathcal Z}_i$ is a $(\rho, F, \delta, \sigma_i)$-independence set for ${\overrightarrow{A}}$ is in ${\mathfrak F}$. Since $\tau\in G'$, the set $V$ of all $i\in {\mathbb N}$ for which $\max_{s\in F}\rho_{\rm Hamm}(\tau^{-1}_i \sigma_{i,s}, \sigma_{i,s}\tau^{-1}_i) \le \delta^2$ is also in ${\mathfrak F}$. Then $V\cap W$ is in ${\mathfrak F}$.
For every $i\in V$, $\varphi\in {\rm Map}(\rho, F, \delta, \sigma_i)$, and $s\in F$ one has \begin{align*} \lefteqn{\rho_2(\alpha_s\circ \varphi\circ \tau^{-1}_i, \varphi \circ \tau^{-1}_i \circ \sigma_{i,s})}\hspace*{20mm} \\ \hspace*{20mm} &\le \rho_2(\alpha_s\circ \varphi\circ \tau^{-1}_i, \varphi \circ \sigma_{i,s}\circ \tau^{-1}_i) +\rho_2(\varphi\circ \sigma_{i,s} \circ \tau^{-1}_i, \varphi \circ \tau^{-1}_i \circ \sigma_{i,s}) \\ &\le \rho_2(\alpha_s\circ \varphi, \varphi \circ \sigma_{i,s}) +(\rho_{\rm Hamm}(\tau^{-1}_i \circ \sigma_{i,s}, \sigma_{i,s} \circ \tau^{-1}_i))^{1/2}\\ &\le \delta+\delta= 2\delta, \end{align*} where $\alpha_s$ is the transformation $x\mapsto sx$ of $X$, and hence $\varphi \circ \tau^{-1}_i\in {\rm Map}(\rho, F, 2\delta, \sigma_i)$. It follows that for every $i\in V\cap W$ the set $\tau_i {\mathcal Z}_i$ is a $(\rho, F, 2\delta, \sigma_i)$-independence set for ${\overrightarrow{A}}$. Therefore $\tau Z$ is an internal independence set for ${\overrightarrow{A}}$. (3). This can be proved using arguments similar to the proof of Proposition~\ref{P-entropy for product}. \end{proof} \begin{lemma}\label{L-ergodic to positive intersection} Suppose that $\Gamma$ is a subgroup of ${\mathfrak G}$ and the action of $\Gamma$ on $(\prod_{\mathfrak F}\{1, \dots, d_i\}, {\mathfrak B}, \mu)$ is ergodic. Let $Y, Z\in {\mathfrak B}$ be such that $\mu(Y), \mu(Z)>0$. Then $\mu(Y\cap \tau Z)>0$ for some $\tau\in \Gamma$. \end{lemma} \begin{proof} Set $r=\sup_F \mu(\bigcup_{\tau \in F} \tau Z)$ with $F$ ranging over the nonempty countable subsets of $\Gamma$. Then we can find nonempty finite subsets $F_1, F_2, \dots$ of $\Gamma$ such that $r=\lim_{n\to \infty}\mu(\bigcup_{\tau \in F_n} \tau Z)$. Set $W=\bigcup_{n\in {\mathbb N}} F_n$ and $Z'=\bigcup_{\tau \in W} \tau Z$. Then $W$ is a countable subset of $\Gamma$ and $r=\mu(Z')$. 
For every $\tau'\in \Gamma$ we have $\mu(Z'\cup \tau'Z')=\mu(\bigcup_{\tau \in W\cup \tau' W} \tau Z)\le r$ and hence $\mu(\tau' Z'\setminus Z')=0$. Since the action of $\Gamma$ on $(\prod_{\mathfrak F}\{1, \dots, d_i\}, {\mathfrak B}, \mu)$ is ergodic and $\mu(Z')\ge \mu(Z)>0$, we conclude that $r=1$. Thus $\mu(Y)=\mu(Y\cap Z')\le \sum_{\tau \in W}\mu(Y \cap \tau Z)$, and hence $\mu(Y \cap \tau Z)>0$ for some $\tau \in W$. \end{proof} \begin{lemma} \label{L-tuple of product sets} Suppose that the action of $G'$ on $(\prod_{\mathfrak F}\{1, \dots, d_i\}, {\mathfrak B}, \mu)$ is ergodic. Let $G$ act continuously on compact metrizable spaces $X$ and $Y$. Let $k\in{\mathbb N}$ and let ${\overrightarrow{A}}$ and ${\boldsymbol{B}}$ be $k$-tuples of subsets of $X$ and $Y$, respectively. Suppose that both ${\overrightarrow{A}}$ and ${\boldsymbol{B}}$ have positive upper independence density over $\Sigma$. Then ${\overrightarrow{A}}\times {\boldsymbol{B}}$ also has positive upper independence density over $\Sigma$. \end{lemma} \begin{proof} By Proposition~\ref{P-basics of internal independence sets}(1) there are internal independence sets $V_1$ for ${\overrightarrow{A}}$ and $V_2$ for ${\boldsymbol{B}}$ with $\mu(V_1), \mu(V_2)>0$. By Lemma~\ref{L-ergodic to positive intersection} applied to $\Gamma=G'$, there is a $\tau\in G'$ with $\mu(V_1\cap \tau V_2)>0$, and by Proposition~\ref{P-basics of internal independence sets}(2) the set $\tau V_2$ is again an internal independence set for ${\boldsymbol{B}}$. Since an internal subset of an internal independence set is clearly again an internal independence set, $V_1\cap \tau V_2$ is an internal independence set for both ${\overrightarrow{A}}$ and ${\boldsymbol{B}}$, and hence for ${\overrightarrow{A}}\times {\boldsymbol{B}}$ by Proposition~\ref{P-basics of internal independence sets}(3). Since $\mu(V_1\cap \tau V_2)>0$, Proposition~\ref{P-basics of internal independence sets}(1) shows that ${\overrightarrow{A}}\times {\boldsymbol{B}}$ has positive upper independence density over $\Sigma$. \end{proof} Theorem~\ref{T-ergodic to product} now follows from Proposition~\ref{P-product for IE} and Lemma~\ref{L-tuple of product sets}. The remainder of this section is devoted to the problem of when the ergodicity hypothesis in Theorem~\ref{T-ergodic to product} is satisfied. We prove that this happens when $G$ is residually finite and $\Sigma$ arises from finite quotients of $G$, and also when $G$ is amenable and $\Sigma$ is arbitrary. A combination of results of Elek and Szabo \cite[Thm.\ 2]{EleSza11} and Paunescu \cite{Pau11} shows on the other hand that if $G$ is nonamenable then there is always a sofic approximation sequence $\Sigma$ for which the commutant $G'$ does not act ergodically.
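As a concrete orientation for the residually finite case treated next (an illustrative example, with notation not used elsewhere): take $G={\mathbb Z}$ and $G_i = n_i{\mathbb Z}$ with $n_i\to\infty$, so the sofic approximation consists of cyclic rotations.

```latex
% Identify Z/n_i Z with {1, ..., n_i}; the left action is rotation
\sigma_i(s)(t) = t + s \ (\mathrm{mod}\ n_i), \qquad s \in {\mathbb Z},
% and the commuting right action is rotation in the opposite direction,
\sigma'_i(s)(t) = t - s \ (\mathrm{mod}\ n_i).
% An internal set Y = \prod_{\mathfrak F} {\mathcal Y}_i with 0 < \mu(Y) < 1
% can be rotated by suitable amounts s_i so that a definite proportion of
% {\mathcal Y}_i is moved off itself, giving
\mu\bigl(\sigma'(s) Y \,\Delta\, Y\bigr) \ge 2\,\mu(Y)\bigl(1 - \mu(Y)\bigr) > 0 ;
% this is exactly the estimate proved in general for finite quotients below.
```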
Let $G$ be an infinite residually finite group, and let $\{G_i\}_{i\in {\mathbb N}}$ be a sequence of finite-index normal subgroups of $G$ such that $\bigcap_{n\in {\mathbb N}}\bigcup_{i\ge n}G_i=\{e\}$. Then we have the sofic approximation sequence $\Sigma = \{ \sigma_i : G\to{\rm Sym} (|G/G_i| ) \}$ by identifying $\{1, \dots, |G/G_i|\}$ with $G/G_i$ and setting $\sigma_i(s)(tG_i)=stG_i$ for $s, t\in G$. \begin{theorem} \label{T-rf ergodic} Under the above hypotheses, the action of $G'$ on $(\prod_{\mathfrak F}\{1, \dots, |G/G_i|\}, {\mathfrak B}, \mu)$ is ergodic. \end{theorem} \begin{proof} Consider the right multiplication action $\sigma'$ of $G/G_i$ on itself given by $\sigma'_i(sG_i)(tG_i)=ts^{-1}G_i$ for $s, t\in G$. Since this commutes with $\sigma_i$, it suffices to show that the action of $\prod_{\mathfrak F}\sigma'_i(G/G_i)\subseteq G'$ on $(\prod_{\mathfrak F}\{1, \dots, |G/G_i|\}, {\mathfrak B}, \mu)$ is ergodic. Let ${\mathcal Y}_i\subseteq G/G_i$. Then, using the $\ell^1$-norm with respect to the uniform probability measure on $G/G_i$, \begin{align*} \frac{1}{|G/G_i|}\sum_{sG_i\in G/G_i}|\sigma'_i(sG_i){\mathcal Y}_i\Delta {\mathcal Y}_i| &= \sum_{sG_i\in G/G_i}\big\|1_{\sigma'_i(sG_i){\mathcal Y}_i}-1_{{\mathcal Y}_i}\big\|_1\\ &\ge \bigg\|\sum_{sG_i\in G/G_i}1_{\sigma'_i(sG_i){\mathcal Y}_i}-|G/G_i|\cdot 1_{{\mathcal Y}_i} \bigg\|_1\\ &= \big\||{\mathcal Y}_i|\cdot 1_{G/G_i}-|G/G_i|\cdot 1_{{\mathcal Y}_i}\big\|_1=2\frac{|{\mathcal Y}_i|}{|G/G_i|}\bigg(|G/G_i|-|{\mathcal Y}_i|\bigg). \end{align*} Thus there is some $s_iG_i\in G/G_i$ with $\frac{1}{|G/G_i|}|\sigma'_i(s_iG_i){\mathcal Y}_i\Delta {\mathcal Y}_i|\ge 2\frac{|{\mathcal Y}_i|}{|G/G_i|}\big(1-\frac{|{\mathcal Y}_i|}{|G/G_i|}\big)$. Let $Y=\prod_{\mathfrak F} {\mathcal Y}_i$ be an internal subset of $\prod_{\mathfrak F}\{1, \dots, |G/G_i|\}$. Take $s_iG_i \in G/G_i$ as above for each $i\in {\mathbb N}$. Set $s=(s_iG_i)_i$.
Then \begin{align*} \mu(\sigma'(s)Y\Delta Y)&=\lim_{i\to {\mathfrak F}}\frac{|\sigma'_i(s_iG_i){\mathcal Y}_i\Delta {\mathcal Y}_i|}{|G/G_i|} \\ &\ge \lim_{i\to {\mathfrak F}}2\frac{|{\mathcal Y}_i|}{|G/G_i|}\bigg(1-\frac{|{\mathcal Y}_i|}{|G/G_i|}\bigg)=2\mu(Y)(1-\mu(Y)). \end{align*} If $\mu(\sigma'(s)Y\Delta Y)=0$, then $\mu(Y)=0$ or $1$. This finishes the proof. \end{proof} \begin{theorem}\label{T-amenable ergodic} Let $G$ be a countable amenable group. For every sofic approximation sequence $\Sigma$ for $G$, the action of $G'$ on $(\prod_{\mathfrak F}\{1, \dots, d_i\}, {\mathfrak B}, \mu)$ is ergodic. \end{theorem} The proof of Theorem~\ref{T-amenable ergodic} requires several lemmas. We will use the following terminology. Let $(X,\mu )$ be a finite measure space and let $\delta \geq 0$. A family of measurable subsets of $X$ is said to {\it $\delta$-cover} $X$ if its union has measure at least $\delta \mu (X)$. A collection $\{ A_i \}_{i\in I}$ of positive measure sets is {\it $\delta$-disjoint} if there exist pairwise disjoint sets $\widehat{A}_i \subseteq A_i$ such that $\mu (\widehat{A}_i ) \geq (1-\delta ) \mu (A_i )$ for all $i\in I$. The following is the Rokhlin lemma for sofic approximations, which is based on the quasitiling theory of Ornstein and Weiss and appears as Lemma~4.5 in \cite{KerLi10}. The statement of the latter does not contain condition (3) below, but it is not hard to see from the proof in \cite{KerLi10} that it can be arranged. \begin{lemma}\label{L-Rokhlin1} Let $G$ be a countable discrete group. Let $0\le \theta<1$, and $0<\eta<1$.
Then there are an $\ell'\in {\mathbb N}$ and $\kappa, \eta''>0$ such that, whenever $e\in F_1\subseteq F_2\subseteq \cdots \subseteq F_{\ell'}$ are finite subsets of $G$ with $|(F_{k-1}^{-1}F_k) \setminus F_k|\le \kappa |F_k|$ for $k=2, \dots, \ell'$, there exist $\lambda_1 , \dots, \lambda_{\ell'} \in [0,1]$ such that for every $\delta>0$, every sufficiently large $d\in {\mathbb N}$ (depending on $\delta$), every map $\sigma: G\rightarrow {\rm Sym}(d)$ with a set ${\mathcal B}\subseteq \{1, \dots, d\}$ satisfying $|{\mathcal B}|\ge (1-\eta'')d$ and \[ \sigma_{st}(a)=\sigma_s\sigma_t(a), \ \sigma_s(a)\neq \sigma_{s'}(a),\ \sigma_e(a)=a \] for all $a\in {\mathcal B}$ and $s, t, s'\in F_{\ell'}\cup F_{\ell'}^{-1}$ with $s\neq s'$, and every set ${\mathcal V}\subseteq \{1, \dots, d\}$ with $|{\mathcal V}|\ge (1-\theta)d$, there exist ${\mathcal C}_1, \dots, {\mathcal C}_{\ell'}\subseteq {\mathcal V}$ such that \begin{enumerate} \item for every $k=1, \dots, \ell'$ and $c\in {\mathcal C}_k$, the map $s\mapsto \sigma_s(c)$ from $F_k$ to $\sigma(F_k)c$ is bijective, \item the sets $\sigma(F_1){\mathcal C}_1, \dots, \sigma(F_{\ell'}){\mathcal C}_{\ell'}$ are pairwise disjoint and the family $\bigcup_{k=1}^{\ell'}\{\sigma(F_k)c: c\in {\mathcal C}_k\}$ is $\eta$-disjoint and $(1-\theta-\eta)$-covers $\{1, \dots, d\}$, \item $\sum_{k=1}^{\ell'}||\sigma(F_k){\mathcal C}_k|/d-\lambda_k|<\delta$. \end{enumerate} \end{lemma} \begin{lemma}\label{L-density} Let $G$ be a countable discrete group. Let $F$ be a nonempty finite subset of $G$. 
For every $d\in {\mathbb N}$, every map $\sigma: G\rightarrow {\rm Sym}(d)$, every set ${\mathcal B}\subseteq \{1, \dots, d\}$ satisfying \[ \sigma_s(a)\neq \sigma_t(a) \] for all $a\in {\mathcal B}$ and distinct $s, t\in F$, every ${\mathcal J}\subseteq \{1, \dots, d\}$, and every $0<\lambda<1$, there exists a ${\mathcal V}\subseteq {\mathcal B}$ such that $|{\mathcal V}|\ge \frac{|{\mathcal B}|(1-\lambda)-d+|{\mathcal J}| }{1-\lambda}$ and $|\sigma(F)a\cap {\mathcal J}|>\lambda |F| $ for all $a\in {\mathcal V}$. \end{lemma} \begin{proof} Set $X=\{1, \dots, d\}$. Denote by $\zeta$ the uniform probability measure on $X$. One has \begin{align*} \frac{1}{d}\sum_{a\in {\mathcal B}}|\sigma(F)a\cap {\mathcal J}|&=\int_{{\mathcal J}}\sum_{a\in {\mathcal B}} \mathbf{1}_{\sigma(F)a}(x)\, d\zeta(x)\\ &=\int_X\sum_{a\in {\mathcal B}}\mathbf{1}_{\sigma(F)a}(x)\, d\zeta(x)-\int_{X\setminus {\mathcal J}}\sum_{a\in {\mathcal B}} \mathbf{1}_{\sigma(F)a}(x)\, d\zeta(x)\\ &\ge \frac{|{\mathcal B}|\cdot |F|}{d}-\int_{X\setminus {\mathcal J}}|F| \, d\zeta(x)\\ &=\frac{|{\mathcal B}|\cdot |F|}{d}-\bigg(1-\frac{|{\mathcal J}|}{d}\bigg)|F|. \end{align*} Set ${\mathcal V}=\{a\in {\mathcal B}: |\sigma(F)a\cap {\mathcal J}|> \lambda |F|\}$. Then \begin{align*} \frac{1}{d}\sum_{a\in {\mathcal B}}|\sigma(F)a\cap {\mathcal J}|\le \frac{|{\mathcal V}|\cdot |F|}{d}+\frac{(|{\mathcal B}|-|{\mathcal V}|)\lambda |F|}{d}. \end{align*} Thus \[ \frac{|{\mathcal V}|\cdot |F|}{d}+\frac{(|{\mathcal B}|-|{\mathcal V}|)\lambda |F|}{d}\ge \frac{|{\mathcal B}|\cdot |F|}{d}-\bigg(1-\frac{|{\mathcal J}|}{d}\bigg)|F|. \] It follows that \[ |{\mathcal V}|\ge \frac{|{\mathcal B}|(1-\lambda)-d+|{\mathcal J}| }{1-\lambda}. \qedhere \] \end{proof} The proof of Lemma~4.4 in \cite{KerLi10} shows the following. \begin{lemma}\label{L-disjointcover} Let $(X,\mu )$ be a finite measure space. Let $\delta ,\eta\in [0,1)$ and let $\{ A_i \}_{i\in I}$ be a finite $\delta$-even covering of $X$ by positive measure sets. 
Then every $\eta$-disjoint subcollection of $\{ A_i \}_{i\in I}$ can be enlarged to an $\eta$-disjoint subcollection of $\{ A_i \}_{i\in I}$ which $\eta (1-\delta )$-covers $X$. \end{lemma} \begin{lemma}\label{L-bijection} Let $G$ be a countable discrete group. Let $F$ be a nonempty finite subset of $G$, $0<\tau\le 1$, and $0<\eta<1/2$. Then for every large enough $d\in {\mathbb N}$, every map $\sigma: G\rightarrow {\rm Sym}(d)$ with sets ${\mathcal B}_1, {\mathcal B}_2 \subseteq \{1, \dots, d\}$ satisfying $|{\mathcal B}_i|\ge (\frac{\tau}{2}+\frac{2-2\tau}{2-\tau})d$ and \[\sigma_s(a)\neq \sigma_{t}(a)\] for all $a\in {\mathcal B}_i$ and distinct $s, t\in F$, and every ${\mathcal J}_1, {\mathcal J}_2\subseteq \{1, \dots, d\}$ with $|{\mathcal J}_i|\ge \tau d$ for $i=1, 2$, there exist ${\mathcal C}_i\subseteq {\mathcal B}_i$ such that \begin{enumerate} \item for every $i=1, 2$, the family $\{\sigma(F)c: c\in {\mathcal C}_i\}$ is $\eta$-disjoint and $\eta\frac{\tau}{16}$-covers $\{1, \dots, d\}$, \item there is a bijection $\varphi: {\mathcal C}_1\rightarrow {\mathcal C}_2$ such that for any $c\in {\mathcal C}_1$, one has $|\{s\in F: \sigma_s(c)\in {\mathcal J}_1, \sigma_s(\varphi(c))\in {\mathcal J}_2\}|\ge (\frac{\tau}{2})^2 |F|$. \end{enumerate} \end{lemma} \begin{proof} Note that for all distinct $a, c\in \{1, \dots, d\}$ and $s\in F$ we have \[ \sigma_s(a)\neq \sigma_s(c). \] Taking $\lambda=\tau/2$ and ${\mathcal J}={\mathcal J}_1$ in Lemma~\ref{L-density}, we find a ${\mathcal V}_1\subseteq {\mathcal B}_1$ such that $|{\mathcal V}_1|/d\ge \frac{|{\mathcal B}_1|(1-\lambda)/d-1+|{\mathcal J}_1|/d}{1-\lambda}\ge \frac{(\tau/2+(2-2\tau)/(2-\tau))(1-\lambda)-1+\tau}{1-\lambda}=\frac{\tau}{2}$ and $|\sigma(F)a\cap {\mathcal J}_1|/|F|\ge \frac{\tau}{2}$ for all $a\in {\mathcal V}_1$. 
Observe that \[ \sum_{c\in {\mathcal V}_1}|\sigma(F)c|=|F|\cdot |{\mathcal V}_1|\ge |F|\cdot \frac{\tau}{2}d=|F|\cdot \bigg(1-\frac{2-\tau}{2}\bigg)d , \] so that the family $\{\sigma(F)c\}_{c\in {\mathcal V}_1}$ is a $\frac{2-\tau}{2}$-even covering of $\{1, \dots, d\}$ with multiplicity $|F|$. By Lemma~\ref{L-disjointcover}, we can find a set ${\mathcal W}_1\subseteq {\mathcal V}_1$ such that the family $\{\sigma(F)c\}_{c\in {\mathcal W}_1}$ is $\eta$-disjoint and $\eta\frac{\tau}{2}$-covers $\{1, \dots, d\}$. We may assume that $|\sigma(F){\mathcal W}_1|< \eta \tau d/2 +|F|$. List all the subsets of $F$ with cardinality $\lceil |F|\tau/2\rceil$ as $F_1, \dots, F_n$ for some $n\in {\mathbb N}$, where $\lceil x\rceil$ for a real number $x$ denotes the smallest integer no less than $x$. Then we can write ${\mathcal W}_1$ as the disjoint union of sets ${\mathcal W}_{1, j}$ for $1\le j\le n$ such that $\sigma(F_j)c\subseteq {\mathcal J}_1$ for all $1\le j\le n$ and $c\in {\mathcal W}_{1, j}$. Throwing away those empty ${\mathcal W}_{1, j}$, we may assume that each ${\mathcal W}_{1, j}$ is nonempty. For each $1\le j\le n$, taking $\lambda=\tau/2$ and ${\mathcal J}={\mathcal J}_2$ in Lemma~\ref{L-density}, we find a ${\mathcal V}_{2, j}\subseteq {\mathcal B}_2$ such that $|{\mathcal V}_{2, j}|/d\ge \frac{|{\mathcal B}_2|(1-\lambda)/d-1+|{\mathcal J}_2|/d}{1-\lambda}\ge \frac{(\tau/2+(2-2\tau)/(2-\tau))(1-\lambda)-1+\tau}{1-\lambda}=\frac{\tau}{2}$ and $|\sigma(F_j)a\cap {\mathcal J}_2|/|F_j|\ge \frac{\tau}{2}$ for all $a\in {\mathcal V}_{2, j}$. 
We will recursively construct pairwise disjoint sets ${\mathcal C}_{2, 1}, \dots, {\mathcal C}_{2, n}$ such that the family $\{\sigma(F)c: c\in {\mathcal C}_{2, j}, 1\le j\le n\}$ is $\eta$-disjoint, and ${\mathcal C}_{2, j}\subseteq {\mathcal V}_{2, j}$ and $|{\mathcal C}_{2, j}|=\lfloor |{\mathcal W}_{1, j}|/2\rfloor$ for every $1\le j\le n$, where $\lfloor x\rfloor$ for any real number $x$ denotes the largest integer no bigger than $x$. Note that \[ \sum_{c\in {\mathcal V}_{2, 1}}|\sigma(F)c|=|F|\cdot |{\mathcal V}_{2, 1}|\ge |F|\cdot \frac{\tau}{2}d=|F|\cdot \bigg(1-\frac{2-\tau}{2}\bigg)d , \] so that the family $\{\sigma(F)c\}_{c\in {\mathcal V}_{2, 1}}$ is a $\frac{2-\tau}{2}$-even covering of $\{1, \dots, d\}$ with multiplicity $|F|$. By Lemma~\ref{L-disjointcover}, we can find a set ${\mathcal W}_{2, 1}\subseteq {\mathcal V}_{2, 1}$ such that the family $\{\sigma(F)c\}_{c\in {\mathcal W}_{2, 1}}$ is $\eta$-disjoint and $\eta\frac{\tau}{2}$-covers $\{1, \dots, d\}$. Note that \[ |{\mathcal W}_{2, 1}|\cdot |F|\ge |\sigma(F){\mathcal W}_{2, 1}|\ge \eta\frac{\tau}{2}d, \] and since the family $\{\sigma(F)c\}_{c\in {\mathcal W}_1}$ is $\eta$-disjoint, we have \begin{align} \label{E-Rokhlin1} \frac{1}{2}|{\mathcal W}_1|\cdot |F|\le(1-\eta)|{\mathcal W}_1|\cdot |F|\le |\sigma(F){\mathcal W}_1|< \eta\frac{\tau}{2}d+|F|. \end{align} Thus \[ \frac{1}{2}|{\mathcal W}_1|\cdot |F|<|{\mathcal W}_{2, 1}|\cdot |F|+|F|, \] and hence \[ |{\mathcal W}_{2, 1}|> \frac{1}{2}|{\mathcal W}_1|-1\ge \frac{1}{2}|{\mathcal W}_{1, 1}|-1. \] Therefore we can take a subset ${\mathcal C}_{2, 1}$ of ${\mathcal W}_{2, 1}$ with cardinality $\lfloor \frac{1}{2}|{\mathcal W}_{1, 1}|\rfloor$. 
Suppose that we have found pairwise disjoint sets ${\mathcal C}_{2, 1}, \dots, {\mathcal C}_{2, k}$ for some $1\le k<n$ such that the family $\{\sigma(F)c: c\in {\mathcal C}_{2, j}, 1\le j\le k\}$ is $\eta$-disjoint, and ${\mathcal C}_{2, j}\subseteq {\mathcal V}_{2, j}$ and $|{\mathcal C}_{2, j}|=\lfloor |{\mathcal W}_{1, j}|/2\rfloor$ for every $1\le j\le k$. Note that \begin{align*} \sum_{c\in {\mathcal V}_{2, k+1}\cup \bigcup_{1\le j\le k}{\mathcal C}_{2, j}}|\sigma(F)c| &=|F |\cdot \bigg|{\mathcal V}_{2, k+1}\cup \bigcup_{1\le j\le k}{\mathcal C}_{2, j} \bigg| \\ &\ge |F|\cdot |{\mathcal V}_{2, k+1}|\ge |F|\cdot \frac{\tau}{2}d=|F|\cdot \bigg(1-\frac{2-\tau}{2}\bigg)d , \end{align*} so that the family $\{\sigma(F)c\}_{c\in {\mathcal V}_{2, k+1}\cup \bigcup_{1\le j\le k}{\mathcal C}_{2, j}}$ is a $\frac{2-\tau}{2}$-even covering of $\{1, \dots, d\}$ with multiplicity $|F|$. By Lemma~\ref{L-disjointcover}, we can find a set ${\mathcal W}_{2, k+1}\subseteq {\mathcal V}_{2, k+1}\setminus \bigcup_{1\le j\le k}{\mathcal C}_{2, j}$ such that the family $\{\sigma(F)c\}_{c\in {\mathcal W}_{2, k+1}\cup \bigcup_{1\le j\le k}{\mathcal C}_{2, j}}$ is $\eta$-disjoint and $\eta\frac{\tau}{2}$-covers $\{1, \dots, d\}$. Note that \[ \bigg(|{\mathcal W}_{2, k+1}|+\sum_{1\le j\le k}|{\mathcal C}_{2, j}|\bigg)\cdot |F| \ge \bigg|\sigma(F)\bigg({\mathcal W}_{2, k+1}\cup \bigcup_{1\le j\le k}{\mathcal C}_{2, j}\bigg)\bigg|\ge \eta\frac{\tau}{2}d. \] Thus, combining with \eqref{E-Rokhlin1}, we have \begin{align*} \frac{1}{2}\sum_{1\le j\le k+1}|{\mathcal W}_{1, j}|\cdot |F|&\le \frac{1}{2}|{\mathcal W}_1|\cdot |F|\\ &<\bigg(|{\mathcal W}_{2, k+1}|+\sum_{1\le j\le k}|{\mathcal C}_{2, j}|\bigg)\cdot |F|+|F|\\ &\le \bigg(|{\mathcal W}_{2, k+1}|+\frac{1}{2}\sum_{1\le j\le k}|{\mathcal W}_{1, j}|\bigg)\cdot |F|+|F|, \end{align*} and hence \[ |{\mathcal W}_{2, k+1}|> \frac{1}{2}|{\mathcal W}_{1, k+1}|-1. 
\] Therefore we can take a subset ${\mathcal C}_{2, k+1}$ of ${\mathcal W}_{2, k+1}$ with cardinality $\lfloor \frac{1}{2}|{\mathcal W}_{1, k+1}|\rfloor$. This completes the recursive construction. For each $1\le j\le n$ take a subset ${\mathcal C}_{1, j}$ of ${\mathcal W}_{1, j}$ with cardinality $\lfloor |{\mathcal W}_{1, j}|/2\rfloor$. Set ${\mathcal C}_i=\bigcup_{1\le j\le n}{\mathcal C}_{i, j}$ for $i=1, 2$. Take a bijection $\varphi: {\mathcal C}_1\rightarrow {\mathcal C}_2$ such that $\varphi({\mathcal C}_{1, j})={\mathcal C}_{2, j}$ for all $1\le j\le n$. For each $c\in {\mathcal C}_1$, considering $j$ such that $c\in {\mathcal C}_{1, j}$ one has \begin{align*} |\{s\in F: \sigma_s(c)\in {\mathcal J}_1, \sigma_s(\varphi(c))\in {\mathcal J}_2\}| &\ge |\{s\in F_j: \sigma_s(\varphi(c))\in {\mathcal J}_2\}| \\ &\ge \frac{\tau}{2}|F_j| \ge \bigg(\frac{\tau}{2}\bigg)^2|F|. \end{align*} Note that \[ |{\mathcal W}_1|\cdot |F|\ge |\sigma(F){\mathcal W}_1|\ge \eta\frac{\tau}{2}d, \] and hence for $i=1, 2$, \[ |{\mathcal C}_i|=\sum_{1\le j\le n}| {\mathcal C}_{i, j}|\ge \sum_{1\le j\le n}\bigg(\frac{1}{2}|{\mathcal W}_{1, j}|-1\bigg) \ge \frac{1}{2}|{\mathcal W}_1|-2^{|F|}\ge \frac{1}{4}|{\mathcal W}_1| \] when $d$ is sufficiently large. Since the family $\{\sigma(F)c: c\in {\mathcal C}_i\}$ is $\eta$-disjoint, we get \[ |\sigma(F){\mathcal C}_i|\ge (1-\eta)|{\mathcal C}_i|\cdot |F|\ge (1-\eta)\eta\frac{\tau}{8}d\ge \eta\frac{\tau}{16}d. \] \end{proof} \begin{lemma} \label{L-Rokhlin} Let $G$ be a countable discrete group. Let $0<\tau\le 1$, and $0<\eta<1/2$ with $\eta\frac{\tau}{16}<\frac{1-\tau'}{24}$, where $\tau'=\frac{\tau}{2}+\frac{2-2\tau}{2-\tau}<1$. 
Then there are an $\ell\in {\mathbb N}$ and $\eta'>0$ such that, whenever $e\in F_1\subseteq F_2\subseteq \cdots \subseteq F_\ell$ are finite subsets of $G$ with $|(F_{k-1}^{-1}F_k) \setminus F_k|\le |F_k|$ for $k=2, \dots, \ell$, for every large enough $d\in {\mathbb N}$, every map $\sigma: G\rightarrow {\rm Sym}(d)$ with a set ${\mathcal B}\subseteq \{1, \dots, d\}$ satisfying $|{\mathcal B}|\ge (1-\eta')d$ and \[\sigma_{st}(a)=\sigma_s\sigma_t(a),\ \sigma_s(a)\neq \sigma_{s'}(a),\ \sigma_e(a)=a\] for all $a\in {\mathcal B}$ and $s, t, s'\in F_\ell\cup F_\ell^{-1}$ with $s\neq s'$, and any ${\mathcal J}_1, {\mathcal J}_2\subseteq \{1, \dots, d\}$ with $|{\mathcal J}_i|\ge \tau d$ for $i=1, 2$, there exist ${\mathcal C}_{i, 1}, \dots, {\mathcal C}_{i, \ell}\subseteq {\mathcal B}$ such that \begin{enumerate} \item for every $i=1, 2$, $k=1, \dots, \ell$ and $c\in {\mathcal C}_{i, k}$, the map $s\mapsto \sigma_s(c)$ from $F_k$ to $\sigma(F_k)c$ is bijective, \item for every $i=1, 2$, the sets $\sigma(F_1){\mathcal C}_{i, 1}, \dots, \sigma(F_\ell){\mathcal C}_{i, \ell}$ are pairwise disjoint, the family $\bigcup_{k=1}^\ell\{\sigma(F_k)c: c\in {\mathcal C}_{i, k}\}$ is $\eta$-disjoint, and $(1-\eta)\frac{1-\tau'}{24}d\le |\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_{i, k}|\le (\frac{1}{1-\eta}\frac{1-\tau'}{24}+\eta)d$, \item for every $k=1, \dots, \ell$, there is a bijection $\varphi_k: {\mathcal C}_{1, k}\rightarrow {\mathcal C}_{2, k}$ such that for each $c\in {\mathcal C}_{1, k}$ one has $|\{s\in F_k: \sigma_s(c)\in {\mathcal J}_1, \sigma_s(\varphi_k(c))\in {\mathcal J}_2\}|\ge (\frac{\tau}{2})^2 |F_k|$. \end{enumerate} \end{lemma} \begin{proof} Set $\eta'=\frac{1-\tau'}{2}$. Take $\ell$ to be the largest integer satisfying $\ell \eta\frac{\tau}{16}\le \frac{1-\tau'}{12}$. Then $\ell \eta\frac{\tau}{16}\ge \frac{1-\tau'}{24}$. 
We will recursively construct sets ${\mathcal C}_{i, 1}', \dots, {\mathcal C}_{i, \ell}'$ in reverse order so that (i) for every $i=1, 2$ and $1\le k\le \ell$, the sets $\sigma(F_k){\mathcal C}_{i, k}', \dots, \sigma(F_\ell){\mathcal C}_{i, \ell}'$ are pairwise disjoint and the family $\bigcup_{n=k}^\ell\{\sigma(F_n)c: c\in {\mathcal C}'_{i, n}\}$ is $\eta$-disjoint and $(\ell-k+1)\eta\frac{\tau}{16}$-covers $\{1, \dots, d\}$, and (ii) for every $k=1, \dots, \ell$, there is a bijection $\varphi_k: {\mathcal C}_{1, k}'\rightarrow {\mathcal C}_{2, k}'$ such that for every $c\in {\mathcal C}_{1, k}'$ one has $|\{s\in F_k: \sigma_s(c)\in {\mathcal J}_1, \sigma_s(\varphi_k(c))\in {\mathcal J}_2\}|\ge (\frac{\tau}{2})^2 |F_k|$. Taking ${\mathcal B}_i={\mathcal B}$ for $i=1, 2$ in Lemma~\ref{L-bijection} we find ${\mathcal C}_{i, \ell}'\subseteq {\mathcal B}$ for $i=1, 2$ such that the family $\{\sigma(F_\ell)c: c\in {\mathcal C}_{i, \ell}'\}$ is $\eta$-disjoint and $\eta\frac{\tau}{16}$-covers $\{1, \dots, d\}$ for $i=1, 2$ and there is a bijection $\varphi_\ell: {\mathcal C}_{1, \ell}'\rightarrow {\mathcal C}_{2, \ell}'$ with $|\{s\in F_\ell: \sigma_s(c)\in {\mathcal J}_1, \sigma_s(\varphi_\ell(c))\in {\mathcal J}_2\}|\ge (\frac{\tau}{2})^2|F_\ell|$ for all $c\in {\mathcal C}_{1, \ell}'$. 
Suppose that $1\le k<\ell$ and we have found ${\mathcal C}_{i, k+1}', \dots, {\mathcal C}_{i, \ell}'\subseteq {\mathcal B}$ for $i=1, 2$ such that the sets $\sigma(F_{k+1}){\mathcal C}_{i, k+1}', \dots, \sigma(F_\ell){\mathcal C}_{i,\ell}'$ are pairwise disjoint and the family $\bigcup_{n=k+1}^\ell\{\sigma(F_n)c: c\in {\mathcal C}_{i, n}'\}$ is $\eta$-disjoint and $(\ell-k)\eta\frac{\tau}{16}$-covers $\{1, \dots, d\}$ for each $i=1,2$, and there is a bijection $\varphi_j:{\mathcal C}_{1, j}'\rightarrow {\mathcal C}_{2, j}'$ with $|\{s\in F_j: \sigma_s(c)\in {\mathcal J}_1, \sigma_s(\varphi_j(c))\in {\mathcal J}_2\}|\ge (\frac{\tau}{2})^2|F_j|$ for all $j=k+1, \dots, \ell$ and $c\in {\mathcal C}_{1, j}'$. Set $\theta_{i, k}=|\bigcup_{j=k+1}^\ell \sigma(F_j){\mathcal C}_{i, j}'|/d$ and ${\mathcal B}_{i, k}=\big\{c\in {\mathcal B}: \sigma(F_k)c\cap \big(\bigcup_{j=k+1}^\ell\sigma(F_j){\mathcal C}_{i, j}'\big)=\emptyset\big\}$ for $i=1, 2$. If $1-\tau'-\eta'< 3\theta_{m, k}$ for some $m=1, 2$, then we set ${\mathcal C}_{i, k}'=\emptyset$ for each $i=1, 2$. Then \begin{align*} \sum_{j=k}^\ell|{\mathcal C}'_{m, j}|\cdot |F_j|\ge \bigg|\bigcup_{j=k}^\ell\sigma(F_j){\mathcal C}_{m, j}' \bigg| =\theta_{m, k}d\ge \frac{1-\tau'-\eta'}{3}d= \frac{1-\tau'}{6}d. \end{align*} Since the family $\bigcup_{j=k}^\ell\{\sigma(F_j)c: c\in {\mathcal C}_{i, j}'\}$ is $\eta$-disjoint, one has \begin{align*} \bigg|\bigcup_{j=k}^\ell\sigma(F_j){\mathcal C}_{i, j}' \bigg| &\ge (1-\eta)\sum_{j=k}^\ell|{\mathcal C}_{i, j}'|\cdot |F_j| \\ &\ge (1-\eta)\frac{1-\tau'}{6}d\ge \frac{1-\tau'}{12}d\ge \ell\eta\frac{\tau}{16}d\ge (\ell-k+1)\eta\frac{\tau}{16}d \end{align*} for $i=1, 2$. Assume that $1-\tau'-\eta' \ge 3\theta_{i, k}$ for every $i=1, 2$. Let $i\in \{1, 2\}$. 
For every $c\in {\mathcal B}\setminus {\mathcal B}_{i, k}$ we have $\sigma_s(c)=\sigma_t(a)$ for some $j\in \{k+1, \dots, \ell \}$, $a\in {\mathcal C}_{i, j}'$, $t\in F_j$, and $s\in F_k$, and hence \[ c=\sigma_{s^{-1}}\sigma_s(c)=\sigma_{s^{-1}}\sigma_t(a)=\sigma_{s^{-1}t}(a)\in \bigcup_{j=k+1}^\ell\sigma(F_k^{-1}F_j){\mathcal C}_{i, j}'. \] Therefore \[ {\mathcal B}\setminus {\mathcal B}_{i, k}\subseteq \bigcup_{j=k+1}^\ell\sigma(F_k^{-1}F_j){\mathcal C}_{i, j}'. \] Since the family $\bigcup_{j=k+1}^\ell \{\sigma(F_j)c: c\in {\mathcal C}_{i, j}'\}$ is $\eta$-disjoint we have \[ \frac{1}{2}\sum_{j=k+1}^\ell |F_j|\cdot |{\mathcal C}_{i, j}'|\le \sum_{j=k+1}^\ell (1-\eta)|F_j|\cdot |{\mathcal C}_{i, j}'| \le \bigg|\bigcup_{j=k+1}^\ell \sigma(F_j){\mathcal C}_{i,j}' \bigg|=\theta_{i, k}d. \] Thus \begin{align*} \bigg|\bigcup_{j=k+1}^\ell \sigma(F_k^{-1}F_j){\mathcal C}_{i, j}'\bigg| &\le \bigg|\bigcup_{j=k+1}^\ell \sigma((F_k^{-1}F_j)\setminus F_j){\mathcal C}_{i, j}'\bigg|+\bigg|\bigcup_{j=k+1}^\ell \sigma(F_j){\mathcal C}_{i, j}'\bigg|\\ &\le \sum_{j=k+1}^\ell |(F_k^{-1}F_j)\setminus F_j|\cdot |{\mathcal C}_{i, j}'|+\theta_{i, k}d\\ &\le \sum_{j=k+1}^\ell |(F_{j-1}^{-1}F_j)\setminus F_j|\cdot |{\mathcal C}_{i, j}'|+\theta_{i, k}d\\ &\le \sum_{j=k+1}^\ell |F_j|\cdot |{\mathcal C}_{i, j}'|+\theta_{i, k}d\\ &\le 3 \theta_{i, k}d. \end{align*} Therefore \begin{align*} |{\mathcal B}_{i, k}|=|{\mathcal B}|-|{\mathcal B}\setminus {\mathcal B}_{i, k}| &\ge (1-\eta')d-\bigg|\bigcup_{j=k+1}^\ell \sigma(F_k^{-1}F_j){\mathcal C}_{i, j}'\bigg|\\ &\ge (1-\eta')d-3\theta_{i, k}d \\ &\ge \tau' d. 
\end{align*} Taking ${\mathcal B}_i={\mathcal B}_{i, k}$ in Lemma~\ref{L-bijection}, we find ${\mathcal C}'_{i, k}\subseteq {\mathcal B}_{i, k}$ for $i=1, 2$ such that the family $\{\sigma(F_k)c: c\in {\mathcal C}_{i, k}'\}$ is $\eta$-disjoint and $\eta\frac{\tau}{16}$-covers $\{1, \dots, d\}$ for $i=1, 2$ and there is a bijection $\varphi_k: {\mathcal C}_{1, k}'\rightarrow {\mathcal C}_{2, k}'$ with $|\{s\in F_k: \sigma_s(c)\in {\mathcal J}_1, \sigma_s(\varphi_k(c))\in {\mathcal J}_2\}|\ge (\frac{\tau}{2})^2|F_k|$ for all $c\in {\mathcal C}_{1, k}'$. Then for each $i=1, 2$, the sets $\sigma(F_k){\mathcal C}_{i, k}', \dots, \sigma(F_\ell){\mathcal C}_{i, \ell}'$ are pairwise disjoint, the family $\bigcup_{j=k}^\ell\{\sigma(F_j)c: c\in {\mathcal C}_{i, j}'\}$ is $\eta$-disjoint, and \begin{align*} \bigg|\bigcup_{j=k}^\ell\sigma(F_j){\mathcal C}_{i, j}'\bigg|&=|\sigma(F_k){\mathcal C}_{i, k}'|+\bigg|\bigcup_{j=k+1}^\ell\sigma(F_j){\mathcal C}_{i, j}'\bigg|\\ &\ge \eta\frac{\tau}{16}d+(\ell-k)\eta\frac{\tau}{16}d=(\ell-k+1)\eta\frac{\tau}{16}d, \end{align*} completing the recursive construction. When $d$ is large enough, take a subset ${\mathcal C}_{1, k}$ of ${\mathcal C}_{1, k}'$ for each $1\le k\le \ell$ such that \[ \frac{1-\tau'}{24}d\le \bigg|\bigcup_{k=1}^\ell\sigma(F_k){\mathcal C}_{1, k}\bigg|\le \frac{1-\tau'}{24}d+|F_\ell|\le \frac{1-\tau'}{24}d+\eta(1-\eta)d. \] Set ${\mathcal C}_{2, k}=\varphi_k({\mathcal C}_{1, k})$ for each $1\le k\le \ell$. 
Since the families $\bigcup_{k=1}^\ell\{\sigma(F_k)c: c\in {\mathcal C}_{i, k}\}$ for $i=1, 2$ are $\eta$-disjoint, we have \begin{align*} \bigg|\bigcup_{k=1}^\ell \sigma(F_k){\mathcal C}_{2, k}\bigg| \ge (1-\eta)\bigg|\bigcup_{k=1}^\ell \sigma(F_k){\mathcal C}_{1, k}\bigg|\ge (1-\eta)\frac{1-\tau'}{24}d, \end{align*} and \begin{align*} \bigg|\bigcup_{k=1}^\ell \sigma(F_k){\mathcal C}_{2, k}\bigg| \le \frac{1}{1-\eta}\bigg|\bigcup_{k=1}^\ell \sigma(F_k){\mathcal C}_{1, k}\bigg|\le \bigg(\frac{1}{1-\eta}\cdot\frac{1-\tau'}{24}+\eta\bigg)d. \end{align*} \end{proof} We are ready to prove Theorem~\ref{T-amenable ergodic}. \begin{proof}[Proof of Theorem~\ref{T-amenable ergodic}] It suffices to show that for any internal sets $Y=\prod_{{\mathfrak F}}{\mathcal Y}_n$ and $Z=\prod_{{\mathfrak F}}{\mathcal Z}_n$ with strictly positive measure, there is some $s\in G'$ with $\mu(Z\cap sY)>0$. In turn it is enough to show that there is some $\lambda>0$ such that for every finite subset $F$ of $G$ and $\varepsilon>0$ the set of all $n\in {\mathbb N}$ for which there is some $\varphi\in {\rm Sym}(d_n)$ satisfying $\rho_{\rm Hamm}(\varphi\sigma_s, \sigma_s\varphi)<\varepsilon$ for all $s\in F$ and $\frac{|\varphi({\mathcal Y}_n)\cap {\mathcal Z}_n|}{d_n}\ge \lambda$ belongs to ${\mathfrak F}$. Set $\tau=\min(\mu(Y), \mu(Z))/2$, $\tau'=\frac{\tau}{2}+\frac{2-2\tau}{2-\tau}$, and $\lambda=\frac{\tau^2(1-\tau')}{384}$. Take $0<\eta<1/2$ to be a small number with $\eta\frac{\tau}{16}<\frac{1-\tau'}{24}$, to be determined in a moment. Then the set $V$ of all $n\in {\mathbb N}$ satisfying $\min(|{\mathcal Y}_n|/d_n, |{\mathcal Z}_n|/d_n)\ge \tau$ belongs to ${\mathfrak F}$. Let $\ell$ and $\eta'$ be as in Lemma~\ref{L-Rokhlin}. We may assume that $\eta'<\varepsilon/2$. Set $\theta=\frac{1}{(1-\eta)^2}\frac{1-\tau'}{24}+\frac{\eta}{1-\eta}+\eta'$. Let $\ell', \kappa$, and $\eta''$ be as in Lemma~\ref{L-Rokhlin1}. 
Take $F'_1\subseteq F'_2\subseteq \dots \subseteq F'_{\ell'}\subseteq F_1\subseteq F_2\subseteq \dots \subseteq F_\ell$ to be finite subsets of $G$ containing $e$ such that \begin{enumerate} \item $|((F'_{\ell'})^{-1}F_k)\setminus F_k|<\eta |F_k|$ for all $k=1, \dots, \ell$, \item $|((F'_{k-1})^{-1}F'_k)\setminus F'_k|<\kappa |F'_k|$ for all $k=2, \dots, \ell'$ and $|\tilde{F}'_k|\ge (1-\eta)|F'_k|$ for all $k=1, \dots, \ell'$, where $\tilde{F}'_k=\{s\in F'_k: Fs\subseteq F'_k\}$, and \item $|(F_{k-1}^{-1}F_k)\setminus F_k|<|F_k|$ for all $k=2, \dots, \ell$ and $|\tilde{F}_k|\ge (1-\eta)|F_k|$ for all $k=1, \dots, \ell$, where $\tilde{F}_k=\{s\in F_k: Fs\subseteq F_k\}$. \end{enumerate} Then we have $\lambda_1, \dots, \lambda_{\ell'}$ as in Lemma~\ref{L-Rokhlin1}. When $n\in V$ is large enough, one has $|{\mathcal B}|\ge (1-\min(\eta', \eta''))d_n$ where ${\mathcal B}$ denotes the set of all $a\in \{1, \dots, d_n\}$ satisfying \[ \sigma_{n,st}(a)=\sigma_{n, s}\sigma_{n, t}(a), \ \sigma_{n, s}(a)\neq \sigma_{n, s'}(a),\ \sigma_{n, e}(a)=a \] for all $s,t\in (F\cup F_\ell)\cup(F\cup F_\ell)^{-1}$ and distinct $s, s'\in F_\ell \cup F_\ell^{-1}$, and one has ${\mathcal C}_{i, 1}, \dots, {\mathcal C}_{i, \ell}\subseteq {\mathcal B}$ for $i=1, 2$ as in Lemma~\ref{L-Rokhlin} for $d=d_n$, $\sigma=\sigma_n$, ${\mathcal J}_1={\mathcal Y}_n$, and ${\mathcal J}_2={\mathcal Z}_n$. Let $i\in \{1, 2\}$. Set ${\mathcal V}_i=\{c\in {\mathcal B}: \sigma_n(F'_{\ell'})c \cap \bigcup_{k=1}^\ell\sigma_n(F_k){\mathcal C}_{i, k}=\emptyset\}$. Then ${\mathcal B}\setminus {\mathcal V}_i\subseteq \bigcup_{k=1}^\ell \sigma_n((F'_{\ell'})^{-1}F_k){\mathcal C}_{i, k}$.
Since the family $\bigcup_{k=1}^\ell\{\sigma_n(F_k)c: c\in {\mathcal C}_{i, k}\}$ is $\eta$-disjoint, one has \begin{align*} |{\mathcal B}\setminus {\mathcal V}_i|&\le \bigg|\bigcup_{k=1}^\ell \sigma_n((F'_{\ell'})^{-1}F_k){\mathcal C}_{i, k}\bigg|\\ &\le \bigg|\bigcup_{k=1}^\ell \sigma_n(((F'_{\ell'})^{-1}F_k)\setminus F_k){\mathcal C}_{i, k}\bigg|+\bigg|\bigcup_{k=1}^\ell \sigma_n(F_k){\mathcal C}_{i, k}\bigg|\\ &\le \sum_{k=1}^\ell |\sigma_n(((F'_{\ell'})^{-1}F_k)\setminus F_k){\mathcal C}_{i, k}|+\bigg|\bigcup_{k=1}^\ell \sigma_n(F_k){\mathcal C}_{i, k}\bigg|\\ &\le \sum_{k=1}^\ell |((F'_{\ell'})^{-1}F_k)\setminus F_k|\cdot |{\mathcal C}_{i, k}|+\bigg|\bigcup_{k=1}^\ell \sigma_n(F_k){\mathcal C}_{i, k}\bigg|\\ &\le \eta\sum_{k=1}^\ell |F_k|\cdot |{\mathcal C}_{i, k}|+\bigg|\bigcup_{k=1}^\ell \sigma_n(F_k){\mathcal C}_{i, k}\bigg|\\ &\le \bigg(\frac{\eta}{1-\eta}+1\bigg)\bigg|\bigcup_{k=1}^\ell \sigma_n(F_k){\mathcal C}_{i, k}\bigg|\\ &\le \frac{1}{(1-\eta)^2}\frac{1-\tau'}{24}d_n+\frac{\eta}{1-\eta}d_n, \end{align*} and thus \[ |{\mathcal V}_i|= |{\mathcal B}|-|{\mathcal B}\setminus {\mathcal V}_i|\ge (1-\theta)d_n. \] Take $\delta>0$ with $2\eta\delta+2\eta\delta \ell'+2\delta\ell'\le \eta$. Taking ${\mathcal V}={\mathcal V}_i$ in Lemma~\ref{L-Rokhlin1}, when $n\in V$ is large enough, we find ${\mathcal C}_{i, 1}', \dots, {\mathcal C}_{i, \ell'}'\subseteq {\mathcal V}_i$ such that \begin{enumerate} \item for every $k=1, \dots, \ell'$ and $c\in {\mathcal C}_{i, k}'$, the map $s\mapsto \sigma_{n, s}(c)$ from $F'_k$ to $\sigma_n(F'_k)c$ is bijective, \item the sets $\sigma_n(F'_1){\mathcal C}_{i, 1}', \dots, \sigma_n(F'_{\ell'}){\mathcal C}_{i, \ell'}'$ are pairwise disjoint and the family $\bigcup_{k=1}^{\ell'}\{\sigma_n(F'_k)c: c\in {\mathcal C}_{i, k}'\}$ is $\eta$-disjoint and $(1-\theta-\eta)$-covers $\{1, \dots, d_n\}$, \item $\sum_{k=1}^{\ell'}||\sigma_n(F'_k){\mathcal C}_{i, k}'|/d_n-\lambda_k|<\delta$.
\end{enumerate} Note that \[ \sum_{k=1}^{\ell'}\lambda_k\le \frac{1}{d_n} \bigg|\bigcup_{k=1}^{\ell'}\sigma_n(F'_k){\mathcal C}_{1, k}' \bigg|+\delta\le 1+\delta. \] For each $1\le k\le \ell'$, take ${\mathcal C}''_{1, k}\subseteq {\mathcal C}'_{1, k}$ and ${\mathcal C}''_{2, k}\subseteq {\mathcal C}'_{2, k}$ with \[ |{\mathcal C}''_{1, k}|=|{\mathcal C}''_{2, k}|=\min(|{\mathcal C}'_{1, k}|, |{\mathcal C}'_{2, k}|). \] Take a bijection $\varphi'_k: {\mathcal C}''_{1, k}\rightarrow {\mathcal C}''_{2, k}$. We have \begin{align*} |{\mathcal C}'_{1, k}|\cdot |F'_k|-|{\mathcal C}'_{2, k}|\cdot |F'_k| &\le \frac{1}{1-\eta}|\sigma_n(F'_k){\mathcal C}'_{1, k}|-|\sigma_n(F'_k){\mathcal C}'_{2, k}|\\ &\le (1+2\eta)|\sigma_n(F'_k){\mathcal C}'_{1, k}|-|\sigma_n(F'_k){\mathcal C}'_{2, k}|\\ &\le (1+2\eta)(\lambda_k+\delta)d_n-(\lambda_k-\delta)d_n\\ &= (2\eta\lambda_k+2\eta\delta+2\delta)d_n, \end{align*} and similarly $|{\mathcal C}'_{2, k}|\cdot |F'_k|-|{\mathcal C}'_{1, k}|\cdot |F'_k|\le (2\eta\lambda_k+2\eta\delta+2\delta)d_n$. Thus for each $i=1, 2$ one has \begin{align*} |\sigma_n(F'_k)({\mathcal C}'_{i, k}\setminus {\mathcal C}''_{i, k})| &\le |{\mathcal C}'_{i, k}\setminus {\mathcal C}''_{i, k}|\cdot |F'_k| \\ &=||{\mathcal C}'_{1, k}|\cdot |F'_k|-|{\mathcal C}'_{2, k}|\cdot |F'_k||\le (2\eta\lambda_k+2\eta\delta+2\delta)d_n, \end{align*} and hence \begin{align*} \bigg|\bigcup_{k=1}^{\ell'}\sigma_n(F'_k)({\mathcal C}'_{i, k}\setminus {\mathcal C}''_{i, k})\bigg|&=\sum_{k=1}^{\ell'}|\sigma_n(F'_k)({\mathcal C}'_{i, k}\setminus {\mathcal C}''_{i, k})|\\ &\le \sum_{k=1}^{\ell'}(2\eta\lambda_k+2\eta\delta+2\delta)d_n\\ &\le (2\eta(1+\delta)+2\eta\delta \ell'+2\delta\ell')d_n\le 3\eta d_n.
\end{align*} Therefore \begin{align*} \bigg|\bigcup_{k=1}^{\ell'}\sigma_n(F'_k){\mathcal C}''_{i, k}\bigg| &\ge \bigg|\bigcup_{k=1}^{\ell'}\sigma_n(F'_k){\mathcal C}'_{i, k}\bigg|-\bigg|\bigcup_{k=1}^{\ell'}\sigma_n(F'_k)({\mathcal C}'_{i, k}\setminus {\mathcal C}''_{i, k})\bigg| \\ &\ge (1-\theta-\eta)d_n-3\eta d_n=(1-\theta-4\eta)d_n \end{align*} for $i=1, 2$. Since the families $\bigcup_{k=1}^\ell\{\sigma_n(F_k)c: c\in {\mathcal C}_{i, k}\}$ for $i=1, 2$ are $\eta$-disjoint, we can find $F_{i, c}\subseteq F_k$ with $|F_{i, c}|\ge (1-\eta)|F_k|$ for all $i=1, 2$, $k=1,\dots, \ell$, and $c\in {\mathcal C}_{i, k}$ so that for each $i=1, 2$, the sets $\sigma_n(F_{i, c})c$ for $c\in \bigcup_{k=1}^\ell {\mathcal C}_{i, k}$ are pairwise disjoint. For every $k=1,\dots,\ell$ and $c\in {\mathcal C}_{1, k}$, set $\bar{F}_c=F_{1, c}\cap F_{2, \varphi_k(c)}$ and $\hat{F}_c=\{s\in \bar{F}_c: Fs\subseteq \bar{F}_c\}$. Then $|\bar{F}_c|\ge (1-2\eta)|F_k|$ and $|\hat{F}_c|\ge |\tilde{F}_k|-2\eta |F|\cdot |F_k|\ge (1-(2|F|+1)\eta)|F_k|$. Similarly, for every $1\le k\le \ell'$ and $c\in {\mathcal C}''_{1, k}$, we find some $\bar{F}'_c\subseteq F'_k$ with $|\bar{F}'_c|\ge (1-2\eta)|F'_k|$ such that the sets $\sigma_n(\bar{F}'_c)c$ for $c\in \bigcup_{k=1}^{\ell'} {\mathcal C}''_{1, k}$, as well as the sets $\sigma_n(\bar{F}'_c)\varphi'_k(c)$ for $c\in \bigcup_{k=1}^{\ell'} {\mathcal C}''_{1, k}$, are pairwise disjoint. Setting $\hat{F}'_c=\{s\in \bar{F}'_c: Fs\subseteq \bar{F}'_c\}$, we have $|\hat{F}'_c|\ge (1-(2|F|+1)\eta)|F'_k|$. Note that \begin{align*} \bigg|\bigcup_{k=1}^\ell\bigcup_{c\in {\mathcal C}_{1, k}}\sigma_n(\hat{F}_c)c\bigg|&=\sum_{k=1}^\ell\sum_{c\in {\mathcal C}_{1, k}}|\hat{F}_c|\\ &\ge \sum_{k=1}^\ell\sum_{c\in {\mathcal C}_{1, k}}\big(1-(2|F|+1)\eta\big)|F_k|\\ &\ge \big(1-(2|F|+1)\eta\big)\bigg|\bigcup_{k=1}^\ell\sigma_n(F_k){\mathcal C}_{1, k}\bigg|\\ &\ge \big(1-(2|F|+1)\eta\big)(1-\eta)\frac{1-\tau'}{24}d_n.
\end{align*} Similarly, \begin{align*} \bigg|\bigcup_{k=1}^{\ell'}\bigcup_{c\in {\mathcal C}''_{1, k}}\sigma_n(\hat{F}'_c)c \bigg| &\ge \big(1-(2|F|+1)\eta\big)\bigg|\bigcup_{k=1}^{\ell'}\sigma_n(F'_k){\mathcal C}''_{1, k}\bigg|\\ &\ge \big(1-(2|F|+1)\eta\big)(1-\theta-4\eta)d_n. \end{align*} Set ${\mathcal W}=\big(\bigcup_{k=1}^\ell\bigcup_{c\in {\mathcal C}_{1, k}}\sigma_n(\hat{F}_c)c\big)\cup\big(\bigcup_{k=1}^{\ell'}\bigcup_{c\in {\mathcal C}''_{1, k}}\sigma_n(\hat{F}'_c)c\big)$. Then \[ |{\mathcal W}|\ge \big(1-(2|F|+1)\eta\big)(1-\eta)\frac{1-\tau'}{24}d_n+\big(1-(2|F|+1)\eta\big)(1-\theta-4\eta)d_n. \] Take a $\varphi\in {\rm Sym}(d_n)$ such that $\varphi(\sigma_{n, s}(c))=\sigma_{n, s}(\varphi_k(c))$ for all $k=1,\dots,\ell$, $c\in {\mathcal C}_{1, k}$, and $s\in \bar{F}_c$, and $\varphi(\sigma_{n, s}(c))=\sigma_{n, s}(\varphi'_k(c))$ for all $k=1,\dots,\ell'$, $c\in {\mathcal C}''_{1, k}$, and $s\in \bar{F}'_c$. For every $s\in F$, note that $\sigma_{n, s}\varphi=\varphi\sigma_{n, s}$ on ${\mathcal W}$, and hence \begin{align*} \rho_{\rm Hamm}(\sigma_{n, s}\varphi, \varphi\sigma_{n, s}) &\le 1-\frac{|{\mathcal W}|}{d_n} \\ &\le 1-\big(1-(2|F|+1)\eta\big)(1-\eta)\frac{1-\tau'}{24}-\big(1-(2|F|+1)\eta\big)(1-\theta-4\eta)<\varepsilon \end{align*} when $\eta$ is small enough. 
We also have \begin{align*} |\varphi({\mathcal Y}_n)\cap {\mathcal Z}_n|&\ge \sum_{k=1}^\ell\sum_{c\in {\mathcal C}_{1, k}}|\{s\in \bar{F}_c:\sigma_{n, s}(c)\in {\mathcal Y}_n, \sigma_{n, s}(\varphi_k(c))\in {\mathcal Z}_n\}|\\ &\ge\sum_{k=1}^\ell\sum_{c\in {\mathcal C}_{1, k}}(|\{s\in F_k:\sigma_{n, s}(c)\in {\mathcal Y}_n, \sigma_{n, s}(\varphi_k(c))\in {\mathcal Z}_n\}|-2\eta|F_k|)\\ &\ge \sum_{k=1}^\ell\sum_{c\in {\mathcal C}_{1, k}}\bigg(\bigg(\frac{\tau}{2}\bigg)^2|F_k|-2\eta|F_k|\bigg)\\ &\ge \sum_{k=1}^\ell\sum_{c\in {\mathcal C}_{1, k}}\frac{\tau^2}{8}|F_k| \ge \frac{\tau^2}{8}\bigg|\bigcup_{k=1}^\ell\sigma_n(F_k){\mathcal C}_{1, k}\bigg|\\ &\ge \frac{\tau^2}{8}(1-\eta)\frac{1-\tau'}{24} d_n \ge \frac{\tau^2}{16}\frac{1-\tau'}{24}d_n=\lambda d_n \end{align*} when $\eta$ is small enough. \end{proof} \begin{question} Does every countable sofic group $G$ admit a sofic approximation sequence $\Sigma$ such that the action of $G'$ on $(\prod_{\mathfrak F}\{1, \dots, d_i\}, {\mathfrak B}, \mu)$ is ergodic? \end{question} \section{IE-tuples and algebraic actions}\label{S-algebraic} By an {\it algebraic action} we mean an action of a countable discrete group $G$ on a compact metrizable Abelian group $X$ by (continuous) automorphisms. The structure of such an action is captured by the Pontryagin dual $\widehat{X}$ viewed as a module over the integral group ring ${\mathbb Z} G$. The ring ${\mathbb Z} G$ consists of the finitely supported ${\mathbb Z}$-valued functions on $G$, which we write in the form $\sum_{s\in G} f_s s$, with addition $(\sum_{s\in G} f_s s )+(\sum_{s\in G} g_s s) = \sum_{s\in G} (f_s + g_s)s$ and multiplication $(\sum_{s\in G} f_s s )(\sum_{s\in G} g_s s) = \sum_{s\in G} (\sum_{t\in G} f_t g_{t^{-1} s} )s$. Given a matrix $A$ in $M_n({\mathbb Z} G)$, the left action of $G$ on $({\mathbb Z} G)^n/({\mathbb Z} G)^n A$ gives rise via Pontryagin duality to the algebraic action $G\curvearrowright X_A := \widehat{({\mathbb Z} G)^n/({\mathbb Z} G)^nA}$. 
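As a concrete sketch (ours, purely illustrative and not part of the development), the ring operations just defined can be modelled for $G={\mathbb Z}$, representing $\sum_{s\in G} f_s s$ as a dictionary from group elements to integer coefficients:

```python
# A minimal model of ZG for G = Z (written multiplicatively, so t^n
# corresponds to the integer n).  Elements are dicts mapping group
# elements to integer coefficients; multiplication is the convolution
# (sum over t of f_t g_{t^{-1}s}) from the definition above, which for
# G = Z reads: the coefficient of s is sum over t of f_t g_{s-t}.

def gr_add(f, g):
    h = dict(f)
    for s, c in g.items():
        h[s] = h.get(s, 0) + c
    return {s: c for s, c in h.items() if c != 0}

def gr_mul(f, g):
    h = {}
    for t, a in f.items():
        for u, b in g.items():
            s = t + u  # the group operation in Z is addition
            h[s] = h.get(s, 0) + a * b
    return {s: c for s, c in h.items() if c != 0}

def gr_star(f):
    # the involution (sum f_s s)^* = sum f_s s^{-1}
    return {-s: c for s, c in f.items()}
```

For instance, $(1-t)(1+t+t^2)=1-t^3$ comes out as `gr_mul({0: 1, 1: -1}, {0: 1, 1: 1, 2: 1})`.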
Write $A^*$ for the matrix in $M_n({\mathbb Z} G)$ whose $(i,j)$ entry is the result of applying the involution $(\sum_{s\in G} f_s s )^* = \sum_{s\in G} f_s s^{-1}$ to the $(j,i)$ entry of $A$. Viewing $\widehat{({\mathbb Z} G)^n}$ as $(({\mathbb R} /{\mathbb Z} )^G )^n$, we can then identify $X_A$ with the closed $G$-invariant subset \[ \big\{ x\in (({\mathbb R} /{\mathbb Z} )^G )^n : xA^* = 0_{(({\mathbb R} /{\mathbb Z} )^G)^n} \big\} \] of $(({\mathbb R} /{\mathbb Z} )^G )^n$ equipped with the action of $G$ by left translation. In the case that $A$ is invertible in $M_n(\ell^1 (G))$ the action $G\curvearrowright X_A$ is expansive, and in fact such actions and their restrictions to closed $G$-invariant subgroups constitute precisely all of the expansive algebraic actions \cite[Thm.\ 3.1]{ChuLi11}. When $G$ is amenable, given an action of the form $G\curvearrowright X_A$ with $A$ invertible in $M_n (\ell^1 (G))$, every tuple of points in $X$ is an IE-tuple (see Lemma~5.4 and Theorems~7.3 and 7.8 in \cite{ChuLi11}). We will extend this result in two ways in Theorems~\ref{T-invertible to dense} and \ref{T-invertible to UPE}, which demonstrate that in broader contexts independent behaviour similarly saturates the structure of actions of the form $G\curvearrowright X_A$ with $A$ invertible in $M_n(\ell^1 (G))$. First however we examine orbit IE-tuples in the context of actions $G\curvearrowright X$ on a compact metrizable (not necessarily Abelian) group by automorphisms. It was shown in \cite[Theorem 7.3]{ChuLi11} that, when $G$ is amenable, the IE-tuples for such an action are determined by a closed $G$-invariant normal subgroup of $X$ called the {\it IE group}. We now proceed to record some observations showing that the basic theory of the IE-group from \cite{ChuLi11} can be extended from amenable $G$ to general $G$ using orbit IE-tuples. Thus $G$ will be an arbitrary countable discrete group until we turn to the sofic setting in Theorem~\ref{T-invertible to UPE}. 
The proof of Lemma~3.11 in \cite{KerLi07} shows the following: \begin{lemma}\label{L-measure to density} Let $G$ act continuously on a compact metrizable space $X$. Let $A$ be a Borel subset of $X$, and $\mu$ a $G$-invariant Borel probability measure on $X$. Then $A$ has independence density at least $\mu(A)$ over $G$. \end{lemma} From Lemma~\ref{L-measure to density} we immediately obtain: \begin{lemma} \label{L-support to orbit} Let $G$ act continuously on a compact metrizable space $X$. Let $\mu$ be a $G$-invariant Borel probability measure on $X$. Then every point in the support of $\mu$ is an orbit ${\rm IE}$-$1$-tuple. \end{lemma} We now suppose that $G$ acts continuously on a compact metrizable group $X$ by automorphisms. From Lemma~\ref{L-support to orbit} we have: \begin{lemma} \label{L-orbit 1 tuple} Every point of $X$ is an orbit ${\rm IE}$-$1$-tuple. \end{lemma} Denote by ${\rm IE}(X)$ the set of all $x\in X$ such that $(x, e_X)$ is an orbit ${\rm IE}$-pair, where $e_X$ is the identity element of $X$. The proof of Theorem~7.3 in \cite{ChuLi11} shows the following. \begin{theorem} \label{T-IE group} ${\rm IE}(X)$ is a closed $G$-invariant normal subgroup of $X$. For every $k\in {\mathbb N}$ the set ${\rm IE}_k(X, G)$ of all orbit IE-$k$-tuples is a closed $G$-invariant subgroup of the group $X^k$ and \begin{align*} {\rm IE}_k(X, G)&=\{(x_1y, \dots, x_ky): x_1, \dots, x_k\in {\rm IE}(X),\ y\in X\}\\ &=\{(yx_1, \dots, yx_k): x_1, \dots, x_k\in {\rm IE}(X),\ y\in X\}. \end{align*} \end{theorem} Now we suppose that $X$ is Abelian. In this case a point $x\in X$ is said to be {\it $1$-homoclinic} if the function $s\mapsto \varphi (sx) - 1$ on $G$ lies in $\ell^1 (G)$ for every $\varphi$ in the Pontryagin dual $\widehat{X}$. The set of $1$-homoclinic points is written $\Delta^1(X)$. This set was studied in \cite{LSV, LSV12,SV} in the case $G={\mathbb Z}^d$ and in \cite{ChuLi11} for more general $G$. 
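For a concrete special case (standard, and included here only as an illustration), take $G={\mathbb Z}$, write $t$ for a generator, and let $f=3-t-t^{-1}\in{\mathbb Z} G$. Then

```latex
f(e^{2\pi i\theta}) \;=\; 3-2\cos(2\pi\theta)\;\ge\; 1 \;>\; 0,
```

so $f$ is invertible in $\ell^1({\mathbb Z})$ by Wiener's lemma, and the Fourier coefficients $w=(w_n)_n$ of $1/f(e^{2\pi i\theta})$ decay exponentially. Since $wf=\delta_e$ and $f^*=f$, the point $x_\Delta:=P(w)$ lies in $X_f$, and as $w\in\ell^1({\mathbb Z})$ while every character of $X_f$ is given by pairing with a finitely supported integer sequence, $x_\Delta$ is $1$-homoclinic; in particular $\Delta^1(X_f)$ is nontrivial.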
From the proof of Theorem~7.8 in \cite{ChuLi11} we obtain the following. \begin{theorem}\label{T-homoclinic to IE} Suppose that $\widehat{X}$ is a finitely generated left ${\mathbb Z} G$-module. Then $\Delta^1(X)\subseteq {\rm IE}(X)$. \end{theorem} From Theorem~\ref{T-homoclinic to IE} and \cite[Lemma 5.4]{ChuLi11} we obtain: \begin{theorem}\label{T-invertible to dense} Let $n\in{\mathbb N}$, and let $A$ be an element of $M_n({\mathbb Z} G)$ which is invertible in $M_n(\ell^1(G))$. Then for the action $G\curvearrowright X_A$ one has ${\rm IE}(X_A)=X_A$. \end{theorem} Now we let $G$ be a countable sofic group and $\Sigma=\{\sigma_i: G\rightarrow {\rm Sym}(d_i)\}_{i=1}^\infty$ a sofic approximation sequence for $G$. \begin{theorem}\label{T-invertible to UPE} Let $n \in {\mathbb N}$, and let $A$ be an element of $M_n({\mathbb Z} G)$ which is invertible in $M_n(\ell^1(G))$. Consider the action $G\curvearrowright X_A$. Then, for each $k\in {\mathbb N}$, every $k$-tuple of points in $X_A$ is a $\Sigma$-${\rm IE}$-tuple. \end{theorem} Before proceeding to the proof of Theorem~\ref{T-invertible to UPE}, we give an application to a problem of Deninger. For an invertible element $f$ in the group von Neumann algebra ${\mathcal L} G$ of a countable discrete group $G$ the Fuglede-Kadison determinant is defined by $\det_{{\mathcal L} G} f = \exp {\rm tr} (\log |f|)$ where ${\rm tr}$ is the canonical trace on ${\mathcal L} G$. In \cite[Question 26]{Den09} Deninger asked whether $\det_{{\mathcal L} G} f>1$ whenever $f\in {\mathbb Z} G$ is invertible in $\ell^1(G)$ and has no left inverse in ${\mathbb Z} G$. An affirmative answer was given by Deninger and Schmidt in the case that $G$ is residually finite and amenable \cite[Cor.\ 6.7]{DenSch07} and more generally by Chung and the second author in the case $G$ is amenable \cite[Corollary 7.9]{ChuLi11}. 
Using Theorem~\ref{T-invertible to UPE}, Proposition~\ref{P-basic}(3), Theorem~7.1 in \cite{KerLi11}, and the argument in the proof of Corollary~7.9 in \cite{ChuLi11}, we obtain an answer to Deninger's question for all countable residually finite groups: \begin{corollary} \label{C-answer to Deninger} Suppose that $G$ is residually finite and that $f$ is an element of ${\mathbb Z} G$ which is invertible in $\ell^1(G)$ and has no left inverse in ${\mathbb Z} G$. Then $\det_{{\mathcal L} G} f>1$. \end{corollary} Let $n\in {\mathbb N}$. For $A=(A_{ij})_{1\le i, j\le n}\in M_n(\ell^1(G))$, we set \[ \|A\|_1=\sum_{1\le i, j\le n}\|A_{ij}\|_1. \] For $(a_1,\dots, a_n)\in {\mathbb R}^n$, we set $\|(a_1, \dots, a_n)\|_\infty=\max_{1\le j\le n}|a_j|$. For $\xi: \{1, \dots, d\}\rightarrow {\mathbb Z}^n$, we set \[ \|\xi\|_\infty=\max_{1\le j\le d}\|\xi(j)\|_\infty. \] Denote by $P$ the natural quotient map $({\mathbb R}^n)^G\rightarrow (({\mathbb R}/{\mathbb Z})^n)^G$. Denote by $\rho$ the canonical metric on ${\mathbb R}/{\mathbb Z}$ defined by \[ \rho(t_1+{\mathbb Z}, t_2+{\mathbb Z}):=\min_{m\in {\mathbb Z}}|t_1-t_2-m|. \] By abuse of notation, we also use $\rho$ to denote the metric on $({\mathbb R}/{\mathbb Z})^n$ defined by \[ \rho((a_1, \dots, a_n), (b_1, \dots, b_n)):=\max_{1\le j\le n}\rho(a_j, b_j). \] Via the coordinate map at the identity element of $G$, we will think of $\rho$ as a continuous pseudometric on $(({\mathbb R}/{\mathbb Z})^n)^G$. \begin{lemma} \label{L-point} Let $n \in {\mathbb N}$, and let $A$ be an element of $M_n({\mathbb Z} G)$ which is invertible in $M_n(\ell^1(G))$. Consider the action $G\curvearrowright X_A$. Let $F$ be a nonempty finite subset of $G$ and let $M, \delta>0$.
For every $d\in {\mathbb N}$, good enough sofic approximation $\sigma: G\rightarrow {\rm Sym}(d)$, and $\xi: \{1, \dots, d\}\rightarrow {\mathbb Z}^n$ with $\|\xi\|_\infty\le M$, if we define $h:\{1, \dots, d\}\rightarrow ({\mathbb Z}^n)^G$ and $\varphi: \{1, \dots, d\}\rightarrow X_A$ by \[ (h(a))_{t^{-1}}=\xi(ta) \text{ for all } t\in G \] and \[ \varphi(a)=P((h(a))(A^*)^{-1}) \] then $\varphi\in {\rm Map}(\rho, F, \delta, \sigma)$. \end{lemma} \begin{proof} Since $(A^*)^{-1}\in M_n(\ell^1(G))$, there exists a nonempty finite subset $K$ of $G$ such that for all $z_1, z_2\in ({\mathbb Z}^n)^G$ with $\|z_1\|_\infty, \|z_2\|_\infty\le M$ which coincide on $K$, one has $\|(z_1(A^*)^{-1})_e-(z_2(A^*)^{-1})_e\|_\infty<\delta/2$, which implies that $\rho(P(z_1(A^*)^{-1}), P(z_2(A^*)^{-1}))<\delta/2$. Denote by $\Lambda$ the set of all $a\in \{1, \dots, d\}$ satisfying $t(sa)=(ts)a$ for all $t\in K^{-1}$ and $s\in F$. When $\sigma$ is a good enough sofic approximation for $G$, one has $|\Lambda|\ge (1-(\delta/2)^2)d$. Let $a\in \Lambda$ and $s\in F$. Then \[ s(\varphi(a))=P((s(h(a)))(A^*)^{-1}) \] and \[ \varphi(sa)=P((h(sa))(A^*)^{-1}).\] For every $t\in K^{-1}$ one has \[ (s(h(a)))_{t^{-1}}=(h(a))_{s^{-1}t^{-1}}=\xi((ts)a)=\xi(t(sa))=(h(sa))_{t^{-1}}. \] Thus, by the choice of $K$, we have $\rho(s(\varphi(a)), \varphi(sa))<\delta/2$. Note that $(({\mathbb R}/{\mathbb Z})^n)^G$ has diameter $1$ under $\rho$. It follows that \[ \rho_2(s\varphi(\cdot), \varphi(s\cdot))\le ((\delta/2)^2+1-|\Lambda|/d)^{1/2}<\delta. \] Therefore $\varphi\in {\rm Map}(\rho, F, \delta, \sigma)$. \end{proof} We are ready to prove Theorem~\ref{T-invertible to UPE}. \begin{proof}[Proof of Theorem~\ref{T-invertible to UPE}] Let ${\overrightarrow{x}}=(x_1,\dots, x_k)$ be a $k$-tuple of points in $X_A$. Then for each $1\le j\le k$ there is a $z_j\in ({\mathbb Z}^n)^G$ such that $\|z_j\|_\infty\le \|A\|_1$ and $x_j=P(z_j(A^*)^{-1})$.
Let $U_1\times \cdots \times U_k$ be a product neighborhood of ${\overrightarrow{x}}$ in $X^k$. Since the map from bounded subsets of $({\mathbb Z}^n)^G$ equipped with the pointwise convergence topology to $X_A$ sending $z$ to $P(z(A^*)^{-1})$ is continuous \cite[Prop.\ 4.2]{DenSch07}, there is a nonempty finite subset $K$ of $G$ such that for every $1\le j\le k$ and $z\in ({\mathbb Z}^n)^G$ with $\|z\|_\infty\le \|A\|_1$ and $z|_K=z_j|_K$, one has $P(z(A^*)^{-1})\in U_j$. Let $F$ be a nonempty finite subset of $G$ and $\delta>0$. Let $d\in {\mathbb N}$ and let $\sigma$ be a map from $G$ to ${\rm Sym}(d)$. Denote by $\Lambda$ the set of all $a\in \{1, \dots, d\}$ such that $sa\neq ta$ for all distinct $s, t\in K^{-1}$. When $\sigma$ is a good enough sofic approximation for $G$, we have $|\Lambda|\ge d/2$. Let ${\mathcal J}$ be a maximal subset of $\Lambda$ subject to the condition that the sets $K^{-1}a$ for $a\in {\mathcal J}$ are pairwise disjoint. Then $\Lambda\subseteq (\sigma(K^{-1}))^{-1}\sigma(K^{-1}){\mathcal J}$, and hence $|\Lambda|\le |K|^2|{\mathcal J}|$. Therefore $|{\mathcal J}|\ge d/(2|K|^2)$. We claim that ${\mathcal J}$ is a $(\rho, F, \delta, \sigma)$-independence set for ${\overrightarrow{U}}=(U_1, \dots, U_k)$ when $\sigma$ is a good enough sofic approximation for $G$. Let $\omega$ be a map from $\{1, \dots, d\}$ to $\{1, \dots, k\}$. Define $\xi: \{1, \dots, d\}\rightarrow {\mathbb Z}^n$ by $\xi(ta)=(z_{\omega(a)})_{t^{-1}}$ for all $a\in {\mathcal J}$ and $t\in K^{-1}$, and $\xi(b)=0$ for all $b$ not in $K^{-1}{\mathcal J}$. Then we have $h:\{1, \dots, d\}\rightarrow ({\mathbb Z}^n)^G$ and $\varphi\in {\rm Map}(\rho, F, \delta, \sigma)$ defined in Lemma~\ref{L-point} for $M=\|A\|_1$ when $\sigma$ is a good enough sofic approximation. Let $a\in {\mathcal J}$. For any $t\in K^{-1}$, one has \[ (h(a))_{t^{-1}}=\xi(ta)=(z_{\omega(a)})_{t^{-1}}. \] By the choice of $K$ we have $\varphi(a)=P(h(a)(A^*)^{-1})\in U_{\omega(a)}$. 
This proves our claim, and finishes the proof of the theorem. \end{proof} \section{Orbit ${\rm IE}$-tuples and untameness}\label{S-untame} Let $G$ be a countably infinite group acting continuously on a compact Hausdorff space $X$. \begin{theorem}\label{T-orbit IE to nontame} Let $k\in{\mathbb N}$ and let ${\overrightarrow{A}}$ be a $k$-tuple of subsets of $X$. Suppose that ${\overrightarrow{A}}$ has positive independence density over $G$. Then ${\overrightarrow{A}}$ has an infinite independence set in $G$. \end{theorem} \begin{proof} Denote by $q$ the independence density of ${\overrightarrow{A}}$ over $G$. Let $F_1$ be a nonempty finite subset of $G$. Take $s_1, s_2, \dots$ in $G$ such that setting $F_{n+1}=F_n \cup F_ns_n$ for all $n\in {\mathbb N}$ one has $F_n\cap F_ns_n=\emptyset$ for all $n\in {\mathbb N}$. Let $n\in {\mathbb N}$. Take an independence set $E_n$ of ${\overrightarrow{A}}$ contained in $F_n$ with $|E_n|\ge q|F_n|$. We will construct, inductively on $m$, nonempty finite subsets $F^{(n)}_{m, k}$ and $E^{(n)}_m$ of $G$ for all $1\le m\le k\le n$ and $t^{(n)}_m\in G$ for all $1\le m<n$ such that \begin{enumerate} \item $F^{(n)}_{n, n}=F_n$ and $E^{(n)}_n=E_n$; \item $t^{(n)}_m$ is equal to either $e$ or $s_m^{-1}$ for each $1\le m<n$; \item $F^{(n)}_{m, k}=F^{(n)}_{m+1, k}t^{(n)}_m$ and $F^{(n)}_{m, m}=F_m$ for all $1\le m< k\le n$; \item $E^{(n)}_m=E^{(n)}_{m+1}t^{(n)}_m$ for each $1\le m<n$; \item $|E^{(n)}_m\cap F^{(n)}_{m, k}|\ge q|F^{(n)}_{m, k}|$ for all $1\le m\le k\le n$. \end{enumerate} To start with, we define $F^{(n)}_{n, n}$ and $E^{(n)}_n$ according to (1). If $|E^{(n)}_n\cap F_{n-1}|\ge q|F_{n-1}|$, we set $t^{(n)}_{n-1}=e$. Otherwise, since $|E^{(n)}_n\cap F^{(n)}_{n, n}|\ge q|F^{(n)}_{n, n}|$ and $F^{(n)}_{n, n}=F_n$ is the disjoint union of $F_{n-1}$ and $F_{n-1}s_{n-1}$, we must have $|E^{(n)}_n\cap F_{n-1}s_{n-1}|\ge q|F_{n-1}s_{n-1}|$, and we set $t^{(n)}_{n-1}=s^{-1}_{n-1}$.
Defining $F^{(n)}_{n-1, k}$ for $n-1\le k\le n$ and $E^{(n)}_{n-1}$ according to (3) and (4) respectively, we have that (5) holds for $m=n-1$. Next, if $|E^{(n)}_{n-1}\cap F_{n-2}|\ge q|F_{n-2}|$, we set $t^{(n)}_{n-2}=e$. Otherwise, since $|E^{(n)}_{n-1}\cap F^{(n)}_{n-1, n-1}|\ge q|F^{(n)}_{n-1, n-1}|$ and $F^{(n)}_{n-1, n-1}=F_{n-1}$ is the disjoint union of $F_{n-2}$ and $F_{n-2}s_{n-2}$, we must have $|E^{(n)}_{n-1}\cap F_{n-2}s_{n-2}|\ge q|F_{n-2}s_{n-2}|$, and we set $t^{(n)}_{n-2}=s^{-1}_{n-2}$. Defining $F^{(n)}_{n-2, k}$ for $n-2\le k\le n$ and $E^{(n)}_{n-2}$ according to (3) and (4) respectively, we have that (5) holds for $m=n-2$. Continuing in this way, we define $F^{(n)}_{m, k}, E^{(n)}_m$, and $t^{(n)}_m$ satisfying the above conditions. Note that if $E'$ is an independence set for ${\overrightarrow{A}}$ in $G$, then $E's$ is an independence set for ${\overrightarrow{A}}$ in $G$ for all $s\in G$. By induction on $m$, we find easily that $E^{(n)}_m$ is an independence set for ${\overrightarrow{A}}$ in $G$ for all $n\in {\mathbb N}$ and $1\le m\le n$. Also note that for any $1\le m\le k\le n$, $F^{(n)}_{m, k}$ depends only on $F_k\cap E^{(n)}_k$. In particular, for any fixed $k\in {\mathbb N}$ the number of distinct sets appearing as $F^{(n)}_{1, k}$ for $n\ge k$ is finite. Thus we can find a strictly increasing sequence $n_1<n_2<\dots $ in ${\mathbb N}$ such that for any fixed $k\in {\mathbb N}$ the sets $F^{(n_l)}_{1, k}$ and $E^{(n_l)}_1\cap F^{(n_l)}_{1, k}$ do not depend on $l\ge k$. Set $E=\bigcup_{k\in {\mathbb N}}(E^{(n_k)}_1\cap F^{(n_k)}_{1, k})$. Since $|E^{(n_k)}_1\cap F^{(n_k)}_{1, k}|\ge q|F^{(n_k)}_{1, k}|=q|F_k|=q|F_1|2^{k-1}$ for every $k\in {\mathbb N}$, the set $E$ is infinite.
For every $k\in {\mathbb N}$ one has \begin{align*} (E^{(n_{k+1})}_1\cap F^{(n_{k+1})}_{1, k+1})\cap F^{(n_k)}_{1, k}&=(E^{(n_{k+1})}_1 \cap F^{(n_{k+1})}_{1, k+1})\cap F^{(n_{k+1})}_{1, k}\\ &=E^{(n_{k+1})}_1\cap F^{(n_{k+1})}_{1, k}\\ &=E^{(n_k)}_1\cap F^{(n_k)}_{1, k}. \end{align*} Thus the sequence $\{E^{(n_k)}_1\cap F^{(n_k)}_{1, k}\}_{k\in {\mathbb N}}$ is increasing. Since the family of independence sets for ${\overrightarrow{A}}$ in $G$ is closed under taking increasing unions, we conclude that $E$ is an independence set for ${\overrightarrow{A}}$ in $G$. \end{proof} Recall that a tuple $(x_1 , \dots , x_k ) \in X^k$ is an {\it IT-tuple} if for every product neighbourhood $U_1 \times\cdots\times U_k$ of $(x_1 , \dots , x_k )$ the tuple $(U_1 ,\dots, U_k )$ has an infinite independence set \cite{KerLi07}. \begin{corollary}\label{C-orbit IE to IT} Every orbit IE-tuple of the action $G\curvearrowright X$ is an IT-tuple. \end{corollary} Write $C(X)$ for the Banach space of continuous complex-valued functions on $X$ with the supremum norm. The action $G\curvearrowright X$ is said to be {\it tame} if no element $f\in C(X)$ admits an infinite subset $J$ of $G$ such that, for $s$ ranging in $J$, the family of functions $x\mapsto f(s^{-1} x)$ in $C(X)$ is equivalent to the standard basis of $\ell^1$, meaning that there is a bijection between the two which extends to an isomorphism (i.e., a bounded linear map with bounded inverse) between the closures of their linear spans \cite{Koh95,Gla06}. The action is tame if and only if there is no nondiagonal IT-pair in $X\times X$ \cite[Prop.\ 6.4]{KerLi07}. Thus from the above corollary we see that a tame action has no nondiagonal orbit IE-tuples. \section{$\Sigma$-IE-tuples and Li-Yorke Chaos}\label{S-chaos} Let $G$ be a countably infinite sofic group and $\Sigma=\{\sigma_i: G\rightarrow {\rm Sym}(d_i)\}_{i=1}^\infty$ a sofic approximation sequence for $G$.
We fix a free ultrafilter ${\mathfrak F}$ on ${\mathbb N}$ and use it in the definitions of sofic topological entropy and $\Sigma$-IE-tuples, as in Section~\ref{S-product}. Let $G\curvearrowright X$ be a continuous action on a compact metrizable space. Let $\rho$ be a compatible metric on $X$. We say that $(x, y)\in X\times X$ is a {\it Li-Yorke pair} if \[ \limsup_{G \ni s\to \infty}\rho(sx, sy)>0 \hspace*{3mm}\text{and} \hspace*{3mm} \liminf_{G \ni s\to \infty}\rho(sx, sy)=0, \] where the limit supremum and limit infimum mean the limits of $\sup_{s\in G\setminus F} \rho(sx, sy)$ and\linebreak $\inf_{s\in G\setminus F} \rho(sx, sy)$, respectively, over the net of finite subsets $F$ of $G$. Note that the definition of Li-Yorke pair does not depend on the choice of the metric $\rho$. We say that the action $G\curvearrowright X$ is {\it Li-Yorke chaotic} if there is an uncountable subset $Z$ of $X$ such that every nondiagonal pair $(x, y)$ in $Z\times Z$ is a Li-Yorke pair. These definitions adapt those for continuous ${\mathbb N}$-actions, which have their origins in \cite{LiYor75}. In that setting Blanchard, Glasner, Kolyada, and Maass showed that positive entropy implies Li-Yorke chaos \cite{BlaGlaKolMaa02}. The following theorem demonstrates that, in our sofic context, positive topological entropy with respect to some sofic approximation sequence implies Li-Yorke chaos (cf.\ \cite[Thm.\ 3.18]{KerLi07}). \begin{theorem}\label{T-positive entropy to chaos} Suppose that $k\ge 2$ and ${\overrightarrow{x}}=(x_1, \dots, x_k)$ is a $\Sigma$-${\rm IE}$-tuple in $X^k$ with $x_1, \dots, x_k$ pairwise distinct. For each $1\le j\le k$, let $A_j$ be a neighbourhood of $x_j$.
Then there exist Cantor sets $Z_j\subseteq A_j$ for $j=1,\dots ,k$ such that the following hold: \begin{enumerate} \item every nonempty finite tuple of points in $Z:=\bigcup_jZ_j$ is a $\Sigma$-${\rm IE}$-tuple; \item for all $m\in {\mathbb N}$, distinct $y_1, \dots, y_m \in Z$, and $y'_1, \dots, y'_m \in Z$ one has \[ \liminf_{G\ni s\to \infty}\max_{1\le i\le m} \rho(sy_i, y'_i)=0.\] \end{enumerate} \end{theorem} We now set out to prove Theorem~\ref{T-positive entropy to chaos}. We begin with the following lemmas. \begin{lemma}\label{L-LY} Let $k\ge 2$ and ${\overrightarrow{A}}=(A_1, \dots, A_k)$ be a tuple of closed subsets of $X$ with positive upper independence density over $\Sigma$. For each $j=1, \dots, k$ let $U_j$ be an open set containing $A_j$. Let $E$ be a finite subset of $G$. Then there exists an $s\in G\setminus E$ such that the tuple ${\overrightarrow{A}}'$ consisting of $A_i\cap s^{-1}U_j$ for all $i, j=1, \dots, k$ has positive upper independence density over $\Sigma$. \end{lemma} \begin{proof} Take $1<\lambda<\frac{k}{k-1}$. Then we have the constant $c>0$ in Lemma~\ref{L-KM}. Take a $q>0$ such that for every nonempty finite subset $F$ of $G$ and $\delta>0$ the set $V_{F, \delta}$ of all $i\in {\mathbb N}$ for which ${\overrightarrow{A}}$ has a $(\rho, F, \delta, \sigma_i)$-independence set of cardinality at least $qd_i$ is in ${\mathfrak F}$. Take a finite subset $W$ of $G$ such that $cq|W|>8$ and for any distinct $s, t\in W$ one has $s^{-1}t\not \in E$. When $0<|W|^2\kappa<1/2$, the number of subsets of $\{1, \dots, d\}$ of cardinality no greater than $|W|^2\kappa d$ is equal to $\sum^{\lfloor |W|^2\kappa d\rfloor}_{j=0}\binom{d}{j}$, which is at most $|W|^2\kappa d \binom{d}{|W|^2\kappa d}$. By Stirling's approximation, the latter is less than $\exp(\beta d)$ for all sufficiently large $d$, where $\beta>0$ depends on $\kappa$ but not on $d$, and $\beta\to 0$ as $\kappa\to 0$.
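The counting estimate invoked here is the standard binomial tail bound $\sum_{j\le \alpha d}\binom{d}{j}\le \exp(H(\alpha)d)$ for $0<\alpha\le 1/2$, where $H(\alpha)=-\alpha\log\alpha-(1-\alpha)\log(1-\alpha)\to 0$ as $\alpha\to 0$. A quick numerical sanity check of this bound (illustrative only; the function names are ours):

```python
import math

def binom_tail(d, alpha):
    """Sum of C(d, j) over j = 0, ..., floor(alpha * d)."""
    return sum(math.comb(d, j) for j in range(math.floor(alpha * d) + 1))

def entropy_bound(d, alpha):
    """exp(H(alpha) * d), with H the binary entropy in nats."""
    h = -alpha * math.log(alpha) - (1 - alpha) * math.log(1 - alpha)
    return math.exp(h * d)

# The tail is dominated by the entropy bound, and the exponential rate
# shrinks as alpha does -- this is the role played by kappa above.
for d in (100, 500, 2000):
    assert binom_tail(d, 0.05) <= entropy_bound(d, 0.05)
```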
Take $cq/(2|W|^2)>\kappa>0$ such that for any $1\le j\le k$ and $x\in X\setminus U_j$ one has $\rho(x, A_j)>\sqrt{\kappa}$ and for all sufficiently large $d\in {\mathbb N}$ the number of subsets of $\{1, \dots, d\}$ of cardinality no greater than $|W|^2\kappa d$ is at most $\big(\frac{k}{(k-1)\lambda}\big)^{q d}$. Let $F$ be a nonempty finite subset of $G$ and $\delta>0$. Set $F'=F\cup W$ and $\delta'=\min(\delta, \kappa)$. Let $i\in{\mathbb N}$ be such that ${\overrightarrow{A}}$ has a $(\rho, F', \delta', \sigma_i)$-independence set ${\mathcal J}_i$ of cardinality at least $qd_i$. For each $\omega\in \{1, \dots, k\}^{{\mathcal J}_i}$ take a $\varphi_\omega\in {\rm Map}(\rho, F', \delta', \sigma_i)$ such that $\varphi_\omega(a)\in A_{\omega(a)}$ for every $a\in {\mathcal J}_i$. For each $\omega\in \{1, \dots, k\}^{{\mathcal J}_i}$, there is some $\Lambda_\omega\subseteq \{1, \dots, d_i\}$ with $|\Lambda_\omega|\ge (1-|W|^2\delta')d_i$ such that $\rho(\varphi_\omega(\sigma_i(s)a),s\varphi_\omega(a))<\sqrt{\delta'}$ for all $s\in W^{-1}W$ and $a\in \Lambda_\omega$. By the choice of $\kappa$, when $i$ is large enough there is a subset $\Omega_i$ of $\{1, \dots, k\}^{{\mathcal J}_i}$ with $\big(\frac{k}{(k-1)\lambda}\big)^{q d_i}|\Omega_i|\ge k^{|{\mathcal J}_i|}$ such that the set $\Lambda_\omega$ is the same, say $\Theta_i$, for every $\omega \in \Omega_i$, and $|\Theta_i|/d_i\ge 1-|W|^2\delta'>1-cq/2$. Then \[ |\Omega_i|\ge k^{|{\mathcal J}_i|}\bigg(\frac{(k-1)\lambda}{k}\bigg)^{q d_i} \ge k^{|{\mathcal J}_i|}\bigg(\frac{(k-1)\lambda}{k}\bigg)^{|{\mathcal J}_i|}=((k-1)\lambda)^{|{\mathcal J}_i|}. \] By our choice of $c$, we can find a subset ${\mathcal J}'_i$ of ${\mathcal J}_i$ with $|{\mathcal J}'_i|\ge c|{\mathcal J}_i|\ge cq d_i$ such that every map ${\mathcal J}'_i\rightarrow \{1, \dots, k\}$ extends to some $\omega\in \Omega_i$. 
When $i$ is large enough, one also has $|{\mathcal W}_i|\ge (1-cq/4)d_i$ for the set \begin{align*} {\mathcal W}_i &= \big\{a\in \{1, \dots, d_i\} : ((\sigma_i(s))^{-1}\sigma_i(t))(a)=\sigma_i(s^{-1}t)(a) \text{ for all } s, t\in W \\&\hspace*{40mm} \text{and } \sigma_i(s)(a)\neq a \text{ for all } s\in W^{-1}W\setminus \{e\} \big\}. \end{align*} Note that $|{\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i|\ge cqd_i/4$ and every map ${\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i\rightarrow \{1, \dots, k\}$ extends to some $\omega\in \Omega_i$. Denote by $\eta$ the maximum of $|\sigma_i(s)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap \sigma_i(t)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)|/d_i$ for $s, t$ ranging over distinct elements of $W$. Then for each $s\in W$ there is a subset $\Upsilon_{i, s}$ of $\sigma_i(s)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)$ with cardinality at most $\eta |W|d_i$ such that the sets $(\sigma_i(s)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i))\setminus \Upsilon_{i, s}$ for $s\in W$ are pairwise disjoint. It follows that \begin{align*} \sum_{s\in W}|\sigma_i(s)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)| &\le \eta |W|^2d_i +\bigg|\bigcup_{s\in W}((\sigma_i(s)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i))\setminus \Upsilon_{i, s})\bigg| \\ &\le \eta |W|^2d_i+d_i. \end{align*} On the other hand, we have \[ \sum_{s\in W}|\sigma_i(s)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)|=|W|\cdot |{\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i|\ge |W|cq d_i/4\ge 2d_i. \] Thus $\eta\ge 1/|W|^2$. Then we can find some distinct $t_i, t'_i\in W$ with $|\sigma_i(t_i)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap \sigma_i(t'_i)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)|\ge d_i/|W|^2$. Set $s_i=t_i^{-1}t'_i$. 
Then $s_i\in W^{-1}W\setminus \{e\}$, and \begin{align*} \lefteqn{|({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap (\sigma_i(s_i))^{-1}({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)|}\hspace*{35mm} \\ \hspace*{30mm} &=|({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap \sigma_i(s_i)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)|\\ &=|\sigma_i(t_i)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap \sigma_i(t'_i)({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)|\\ &\ge d_i/|W|^2. \end{align*} Take a maximal subset $\Xi_i$ of $({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap (\sigma_i(s_i))^{-1}({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)$ subject to the condition that for any $a\in \Xi_i$, neither $\sigma_i(s_i)(a)$ nor $(\sigma_i(s_i))^{-1}(a)$ is in $\Xi_i$. Then $\Xi_i\cup \sigma_i(s_i)\Xi_i\cup (\sigma_i(s_i))^{-1}\Xi_i\supseteq ({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap (\sigma_i(s_i))^{-1}({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)$. It follows that $|\Xi_i|\ge |({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)\cap (\sigma_i(s_i))^{-1}({\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i)|/3\ge d_i/(3|W|^2)$. Note that $\Xi_i$ and $\sigma_i(s_i)\Xi_i$ are disjoint subsets of ${\mathcal W}_i\cap \Theta_i\cap {\mathcal J}'_i$. Let $\xi=(\xi_1, \xi_2): \Xi_i\rightarrow \{1, \dots, k\}^2$. Define a map $\xi': \Xi_i\cup \sigma_i(s_i)\Xi_i\rightarrow \{1, \dots, k\}$ by $\xi'(a)=\xi_1(a)$ and $\xi'(\sigma_i(s_i)(a))=\xi_2(a)$ for all $a\in \Xi_i$. Extend $\xi'$ to some $\omega\in \Omega_i$. Then $\varphi_\omega \in {\rm Map}(\rho, F, \delta, \sigma_i)$, $\varphi_\omega(a)\in A_{\omega(a)}=A_{\xi_1(a)}$ and $\varphi_\omega(\sigma_i(s_i)(a))\in A_{\omega(\sigma_i(s_i)(a))}=A_{\xi_2(a)}$ for all $a\in \Xi_i$. 
For any $a\in \Xi_i$, since $\rho(\varphi_\omega(\sigma_i(s_i)a),s_i\varphi_\omega(a))<\sqrt{\delta'}\le \sqrt{\kappa}$, by the choice of $\kappa$ we have $s_i\varphi_\omega(a)\in U_{\xi_2(a)}$, and hence $\varphi_\omega(a)\in A_{\xi_1(a)}\cap s_i^{-1}U_{\xi_2(a)}$. Therefore $\Xi_i$ is a $(\rho, F, \delta, \sigma_i)$-independence set of cardinality at least $d_i/(3|W|^2)$ for the tuple consisting of $A_l\cap s_i^{-1}U_j$ for all $l, j=1, \dots, k$. There is some $s_{F, \delta}\in W^{-1}W\setminus \{e\}$ such that the set of $i\in V_{F', \delta'}$ for which $s_i$ is defined and $s_i=s_{F, \delta}$ lies in ${\mathfrak F}$. It follows that we can find an $s\in W^{-1}W\setminus \{e\}$ such that for any nonempty finite subset $F$ of $G$ and $\delta>0$ there are some nonempty finite subset $\tilde{F}$ of $G$ and $\tilde{\delta}>0$ with $F\subseteq \tilde{F}$ and $\delta>\tilde{\delta}$ such that $s_{\tilde{F}, \tilde{\delta}}=s$. Then the tuple ${\overrightarrow{A}}'$ consisting of $A_l\cap s^{-1}U_j$ for all $l, j=1, \dots, k$ has upper independence density at least $1/(3|W|^2)$ over $\Sigma$. From the choice of $W$ we have $s\not \in E$. \end{proof} From Lemma~\ref{L-LY} by induction on $m$ we have: \begin{lemma}\label{L-LY multiple} Let $k\ge 2$ and ${\overrightarrow{A}}=(A_1, \dots, A_k)$ be a tuple of closed subsets of $X$ with positive upper independence density over $\Sigma$. For each $j=1, \dots, k$ let $U_j$ be an open set containing $A_j$. Let $E$ be a finite subset of $G$ and $m\in {\mathbb N}$. Then there exist $s_1, \dots, s_m\in G\setminus E$ such that $s_i^{-1}s_j\not \in E$ for all distinct $1\le i, j\le m$ and the tuple ${\overrightarrow{A}}'$ consisting of $A_i\cap s^{-1}_1U_{\omega(1)}\cap \dots \cap s^{-1}_mU_{\omega(m)}$ for all $1\le i\le k$ and $\omega \in \{1, \dots, k\}^m$ has positive upper independence density over $\Sigma$. \end{lemma} We are ready to prove Theorem~\ref{T-positive entropy to chaos}. 
\begin{proof}[Proof of Theorem~\ref{T-positive entropy to chaos}] We may assume that the $A_j$ are closed and pairwise disjoint. Take an increasing sequence $E_1\subseteq E_2\subseteq \dots$ of finite subsets of $G$ with union $G$. We shall construct, via induction on $m$, closed nonempty subsets $A_{m, j}$ of $X$ for $1\le j\le k^{2^{m-1}}$ with the following properties: \begin{enumerate} \item[(a)] $A_{1, j} = A_j$ for all $1\le j\le k$, \item[(b)] for every $m\ge 2$ and $1\le i\le k^{2^{m-2}}$, $A_{m-1, i}$ contains exactly $k^{2^{m-2}}$ of the $A_{m, j}$ for $1\le j\le k^{2^{m-1}}$, \item[(c)] for every $m\ge 2$ and map $\gamma:\{1, \dots, k^{2^{m-1}}\}\rightarrow \{1, \dots, k^{2^{m-2}}\}$ there exists a $t_{\gamma}\in G\setminus E_{m-1}$ such that $t_{\gamma}A_{m, j}\subseteq \overline{U_{m-1, \gamma(j)}}$ for all $1\le j\le k^{2^{m-1}}$, where $U_{m-1, i}=\{x\in X: \rho(x, A_{m-1, i})<2^{-m}\delta_{m-1}\}$ for all $1\le i\le k^{2^{m-2}}$ and $\delta_{m-1}=\min \rho(x, y)$ for $x, y$ ranging over points in distinct $A_{m-1, j}$, \item[(d)] when $m\ge 2$, ${\rm diam}(A_{m, j})\le 2^{-m}$ for all $1\le j\le k^{2^{m-1}}$, \item[(e)] for every $m$, the sets $A_{m, j}$ for $1\le j\le k^{2^{m-1}}$ are pairwise disjoint, \item[(f)] for every $m$, the collection $\{A_{m, j} : 1\le j\le k^{2^{m-1}}\}$, ordered into a tuple, has positive upper independence density over $\Sigma$. \end{enumerate} Suppose that we have constructed such $A_{m, j}$ over all $m$. Properties (b), (d) and (e) imply that $Z=\bigcap_{m\in {\mathbb N}}\bigcup_{j=1}^{k^{2^{m-1}}}A_{m, j}$ is a Cantor set. Property (a) implies that $Z_j:=Z\cap A_j$ is also a Cantor set for each $1\le j\le k$. Condition (1) follows from properties (d) and (f). Condition (2) follows from properties (c) and (d). We now construct the $A_{m, j}$. Define $A_{1, j}$ for $1\le j\le k$ according to property (a). By assumption properties (e) and (f) are satisfied for $m=1$.
Assume that we have constructed $A_{m, j}$ for all $j=1,\dots , k^{2^{m-1}}$ with the above properties. Set $n=1+(k^{2^{m-1}})^{k^{2^m}}$. By Lemma~\ref{L-LY multiple} we can find $s_1, \dots, s_n\in G\setminus E_m$ such that the tuple consisting of $A_{m,i}\cap s^{-1}_1U_{m, \omega(1)}\cap \dots \cap s^{-1}_nU_{m, \omega(n)}$ for all $1\le i\le k^{2^{m-1}}$ and $\omega \in \{1, \dots, k^{2^{m-1}}\}^n$ has positive upper independence density over $\Sigma$. Take a bijection $\varphi: \{1, \dots, k^{2^{m-1}}\}^{\{1, \dots, k^{2^m}\}}\rightarrow \{2, \dots, n\}$. For each $\gamma:\{1, \dots, k^{2^{m}}\}\rightarrow \{1, \dots, k^{2^{m-1}}\}$, set $t_\gamma=s_{\varphi(\gamma)}$. For all $1\le i, j\le k^{2^{m-1}}$, define $\omega_{i, j}\in \{1, \dots, k^{2^{m-1}}\}^n$ by $\omega_{i, j}(1)=j$ and $\omega_{i, j}(\varphi(\gamma))=\gamma((i-1)k^{2^{m-1}}+j)$ for all $\gamma \in \{1, \dots, k^{2^{m-1}}\}^{\{1, \dots, k^{2^m}\}}$, and set $A_{m+1, (i-1)k^{2^{m-1}}+j}=A_{m,i}\cap s^{-1}_1\overline{U_{m, \omega_{i, j}(1)}}\cap \dots \cap s^{-1}_n\overline{U_{m, \omega_{i, j}(n)}}$. Then properties (b), (c), (e) and (f) hold for $m+1$. For each $1\le j\le k^{2^m}$ write $A_{m+1, j}$ as the union of finitely many closed subsets each with diameter no bigger than $2^{-(m+1)}$. Using Lemma~\ref{L-decomposition E} we may replace $A_{m+1, j}$ by one of these subsets. Consequently, property (d) is also satisfied for $m+1$. This completes the induction procedure and hence the proof of the theorem. \end{proof} \begin{corollary}\label{C-positive entropy to chaos} If $h_\Sigma (X,G) > 0$ for some sofic approximation sequence $\Sigma$ then the action is Li-Yorke chaotic. \end{corollary} An action $G\curvearrowright X$ is said to be {\it distal} if $\inf_{s\in G}\rho(sx, sy)>0$ for all distinct $x, y\in X$. We refer the reader to \cite{Auslander} for the basics of distal actions. 
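For instance, an irrational rotation of the circle is distal: it acts by isometries, so $\rho(sx, sy)=\rho(x, y)>0$ for all $s$. A small numerical illustration for the ${\mathbb Z}$-action generated by such a rotation (angle and points chosen arbitrarily):

```python
import math

def circle_dist(a, b):
    # metric on the circle R/Z
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

alpha = math.sqrt(2) - 1          # irrational rotation angle (arbitrary choice)
x, y = 0.1, 0.35                  # two distinct points on the circle
cx, cy = x, y
dists = []
for _ in range(1000):             # iterate the Z-action n -> n + 1
    cx = (cx + alpha) % 1.0
    cy = (cy + alpha) % 1.0
    dists.append(circle_dist(cx, cy))
# Rotation is an isometry, so inf_s rho(sx, sy) = rho(x, y) > 0: no Li-Yorke pairs.
```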
Since distal actions have no Li-Yorke pairs, from Corollary~\ref{C-positive entropy to chaos} we obtain the following consequence, which extends the result of Parry that distal integer actions on compact metrizable spaces have zero entropy \cite{Parry}. For amenable $G$ we write $h_{\text{\rm top}} (X,G)$ for the classical topological entropy, which is equal to the sofic entropy $h_\Sigma (X,G)$ for every $\Sigma$ \cite{KerLi10}. \begin{corollary}\label{C-distal} If the action $G\curvearrowright X$ is distal, then $h_\Sigma (X,G)=0$ or $-\infty$. In particular, if $G$ is amenable and $G\curvearrowright X$ is distal, then $h_{\text{\rm top}} (X,G)=0$. \end{corollary} We remark that every distal action has an invariant Borel probability measure \cite[page 125]{Auslander} \cite[page 496]{Vries}. But we do not know whether one can conclude that $h_\Sigma (X,G)=0$ in Corollary~\ref{C-distal}.
\section{Introduction} \label{S1} {L}ate-life depression (LLD), mild cognitive impairment (MCI), and dementia are prevalent disorders worldwide that affect older adults. Previous studies have shown a close relationship between depression, cognitive impairment (CI), and progressive dementia in late life, especially Alzheimer's disease (AD)~\citep{ly2021late, rashidi2020frontal, burke2019diagnosing, alexopoulos2019mechanisms, geerlings2017late, lebedeva2017mri, joko2016patterns}. Current research considers these entities to be related, such that late-onset LLD may be a prodromal symptom of dementia~\citep{burke2019diagnosing}. While there are multiple clinical pathways between these three disorders, the mechanism of action that links them is complex and poorly understood~\citep{butters2008pathways}; the underlying pathologic mechanisms thus remain unclear. Existing studies on the diagnosis of depression are mainly based on questionnaires and clinical interviews rather than clinically relevant biomarkers~\citep{burdisso2019text, abrams2019changes, balsamo2018assessment}. The lack of recognized objective biomarkers may lead to a high degree of diagnostic heterogeneity, which complicates the task of identifying etiologies and predicting outcomes~\citep{hermida2012association}. It is generally challenging to find objective biomarkers for predicting cognitive decline in LLD, including the development of AD, that might promote early intervention and effective treatment of the disease. Neuroimaging provides a promising method for understanding the complex pathophysiological progression of LLD that may support biomarker-driven diagnosis and treatment. Specifically, structural magnetic resonance imaging (sMRI) provides a non-invasive solution for objectively quantifying physical disorders that lead to significant mental illness.
Increasing evidence has shown that local white matter and gray matter changes are directly related to depressive symptoms~\citep{guan2021cost, teodorczuk2010relationship, rashidi2020frontal}. Several studies have tried to discriminate LLD patients with differential cognitive progression based on sMRI~\citep{mousavian2019depression, lebedeva2017mri, joko2016patterns}. These MRI-based methods focus on classification, detection, and prediction of MCI, AD, and LLD using diagnostic information from baseline or a 1-year follow-up. Few studies have addressed longitudinal analysis of diagnostic cognitive change in LLD with sMRI. To this end, we propose a Hybrid Representation Learning (\textbf{HRL}) framework for longitudinal diagnostic discrimination in LLD based on sMRI. The hypothesis is that the effective fusion of data-driven and handcrafted MRI features helps improve the predictive performance of the deep learning model. As shown in Fig.~\ref{fig-network}, the HRL consists of 4 components: (1) data-driven MRI feature learning, (2) handcrafted MRI feature extraction, (3) feature fusion and abstraction, and (4) classification. We evaluate the HRL on 294 subjects from two studies, and the experimental results suggest its effectiveness in detection and prediction tasks related to diagnostic cognitive change in LLD. The source code has been released to the public via GitHub~\footnote{https://github.com/goodaycoder/LLDprogression}. This paper is built upon our conference paper~\citep{zhang2022MLMI} with notable improvements. (1) Besides data-driven MRI features extracted by a deep neural network, we employ diverse handcrafted MRI features such as surface area, cortical thickness, and gray matter volume. (2) Data-driven and handcrafted features of T1-weighted MRI are integrated into a unified framework through a Transformer encoder module. (3) We visualize the most informative brain regions that contributed to the prediction task.
These brain regions may contain potential biomarkers for diagnostic outcome prediction in LLD. (4) More experiments and ablation studies are conducted to demonstrate the effectiveness of each component of the proposed HRL. To the best of our knowledge, this is among the first attempts dedicated to predicting cognitive diagnosis in LLD over 5 years and finding possible biomarkers from sMRI scans. The main contributions of this work are summarized as follows: \vspace{-1mm} \begin{itemize} \item A Hybrid Representation Learning (HRL) framework is developed for predicting diagnostic outcome of a 5-year longitudinal period in LLD using sMRI data. Compared to the MRI-based classification models in LLD-related studies, HRL integrates data-driven and handcrafted features of sMRI into a unified framework.
\vspace{-2mm} \item To identify structural MRI-based imaging biomarkers of the longitudinal diagnostic outcome, we visualize feature maps extracted by HRL and the most informative brain regions in diagnostic outcome prediction tasks. \vspace{-2mm} \item Extensive experiments are conducted to validate the effectiveness of HRL in LLD identification, LLD-to-CI, and LLD-to-AD diagnostic outcome prediction. \end{itemize} The remainder of this paper is organized as follows. Section~\ref{S2} reviews the most relevant studies. In Section~\ref{S3}, we introduce the participants and the proposed method. In Section~\ref{S4}, we compare our method with several competing methods for LLD identification and LLD-to-CI diagnostic outcome prediction, and analyze the most informative brain regions that might contain the related potential biomarkers for diagnostic outcome prediction in LLD. We further analyze several important aspects related to the performance of HRL, apply our HRL to LLD-to-AD diagnostic outcome prediction, and analyze the limitations of the current work in Section~\ref{S5}. This paper is finally concluded in Section~\ref{S6}. \section{Related Work} \label{S2} \subsection{Studies on Late-Life Depression Progression} In recent years, more attention has been drawn to late-life depression (LLD) progression. Butters et al. summarized and analyzed the possible pathways that link LLD to MCI and AD, and further proposed predominant mechanisms by which depression increases the risk for AD~\citep{butters2008pathways}. Plassman et al. included a pathway of persistent cognitive impairment that was neither clearly progressive nor clearly prodromal AD, which, in the context of our studies, we have diagnosed as cognitive impairment, no dementia (CIND)~\citep{plassman2008prevalence}.
A 5-year longitudinal study using statistical analysis methods was conducted by Ly et al., and the results suggested that LLD participants of late-onset depression subtypes experienced faster cognitive decline than normal controls (NC)~\citep{ly2021late}. In addition to statistical research based on clinical neuropsychological assessment, there have been some LLD-related studies based on brain MRI data. Lebedeva et al. showed that LLD patients developing MCI and dementia could be discriminated from LLD patients remaining cognitively stable with good accuracy based on baseline structural MRI alone~\citep{lebedeva2017mri}. Mousavian et al. investigated machine learning algorithms for major depressive disorder (MDD) detection from MRI images~\citep{mousavian2019depression}. A previous study in~\citep{joko2016patterns} suggested that the method of comparing hippocampal atrophy by region might be useful in distinguishing AD, MCI, MDD, and NC. These studies either used machine learning methods that rely on handcrafted features or used deep learning models for MRI-based LLD classification. Moreover, these tasks were based on immediate diagnosis at MRI scan time, and there is no relevant literature on long-term research of LLD progression. In this work, we aim to perform a longitudinal study of predicting 5-year diagnostic outcome in LLD with sMRI. \subsection{Brain MRI Representation Learning} Previous studies have shown that sMRI could provide information for identifying depression~\citep{zhuo2019rise, binnewies2021associations}. In recent years, much research has been devoted to neuroimage analysis for computer-aided diagnosis of brain diseases~\citep{sarmento2019automatic, zhang2018multi}. Some studies have tried to analyze potential biomarkers extracted from sMRI using machine learning~\citep{gao2018machine, nouretdinov2011machine}. For machine learning-based classification, the key issue is feature selection and reduction. Nouretdinov et al.
suggested that feature selection determines the reliability of the predictions~\citep{nouretdinov2011machine}. Khan et al. used an extreme learning machine to select deep learning features for better feature fusion and classification~\citep{khan2020multimodal}. Liu et al. suggested that low-dimensional handcrafted volumetric features of the brain could preserve the biological information density, and that models trained with them yielded comparable performance to those utilizing whole-brain MRI~\citep{liu2019using}. \begin{figure}[!t] \setlength{\belowdisplayskip}{-2pt} \setlength{\abovedisplayskip}{-1pt} \setlength{\abovecaptionskip}{-4pt} \setlength{\belowcaptionskip}{-2pt} \centering \includegraphics[width=0.48\textwidth]{Figs/Fig2-pathway_v2.pdf} \caption{Illustration of possible pathways of cognitive change in our study. The subjects of CIND and AD categories are combined into a CI group in the following classification task due to the limited sample size.} \label{fig-pathway} \end{figure} Many deep learning methods have been proposed for data-driven MRI feature extraction and imaging biomarker exploration. For example, a degenerative adversarial neuroimage net was developed using sMRI data to capture the patterns of regional brain intensity changes that accompany disease progression~\citep{ravi2019degenerative}. Ghazi et al. utilized a recurrent neural network (RNN) to model the progression of AD using six biomarkers of sMRI~\citep{ghazi2019training}. Uyulan et al. built an electroencephalography-based diagnosis model for MDD diagnosis with a CNN to explore translational biomarkers~\citep{uyulan2021major}. Several studies have tried to fuse deep learning features with handcrafted features for disease diagnosis and tumor detection~\citep{shankar2021novel,bansal2022detection, hagerty2019deep} with 2D medical images. Saba et al.
concatenated the deep learning features with handcrafted (shape and texture) features to achieve accurate and fast classification of brain tumors~\citep{saba2020brain}. However, effectively fusing deep learning features and handcrafted features is often challenging due to their inherent heterogeneity. Transformer has recently been successfully applied to data fusion~\citep{shamshad2022transformers,zheng2022transformer,xing2022nestedformer}, thanks to its multi-head self-attention mechanism in selectively fusing image features. Zheng et al. proposed a transformer-based multi-feature fusion model to fuse cortical features in MCI conversion prediction ~\citep{zheng2022transformer}. Xing et al. conducted multi-modal MRI feature fusion for brain tumor segmentation with a spatial attention transformer and a cross-modality attention transformer~\citep{xing2022nestedformer}. These studies focused on handcrafted features or deep learning features separately. Inspired by these studies, in this work, we propose a hybrid representation learning framework with a Transformer encoder that can integrate deep learning and handcrafted MRI features for diagnostic outcome prediction. \begin{table}[!tbp] \renewcommand\arraystretch{1.1} \centering \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \scriptsize \caption{Selection criteria of studied subjects in our work. 
The diagnoses are made in a 5-year follow-up time window from baseline.} \setlength{\tabcolsep}{5mm}{}{ \begin{tabular}{l|l} \toprule \multicolumn{1}{l}{Category} & \multicolumn{1}{|l}{Description} \\ \midrule CN-N &Cognitively Normal\\ \hline CN-D &Depressed, Cognitively Normal \\ \hline \multirow{2}{*}{CIND} &Cognitive Impairment, No Dementia;\\ &Cognitive Impairment, due to Vascular Disease \\ \hline \multirow{2}{*}{AD} &Probable AD, Possible AD, Subsyndromal AD;\\ &Dementias of undetermined etiology\\ \bottomrule \end{tabular} \label{Diagnosis_information} } \end{table} \section{Materials and Methodology} \label{S3} \subsection{Studied Subjects and Image Preprocessing} \textbf{Participants and MRI Acquisition}. The studied subjects are enrolled in two studies: (1) Neurocognitive Outcomes of Depression in the Elderly (NCODE) study~\citep{steffens2004methodology} and (2) Neurobiology of Late-life Depression (NBOLD) study~\citep{steffens2017negative}. Both studies included individuals with LLD and a comparison sample of never-depressed controls. The 3T T1-weighted MRIs are included in this work. The MRI of NCODE was acquired under a 3 Tesla whole-body MRI Siemens Trio system (Siemens Medical Systems, Malvern, PA), and processed by the Neuro-psychiatric Imaging Research Laboratory (NIRL), located at Duke University Medical Center. The 3D axial TURBOFLASH sequence was used, with TR/TE=22/7~$ms$, flip angle=25°, a 100 Hz/pixel bandwidth, a 256×256 matrix, a 256~$mm$ diameter field-of-view, 160 slices with a 1~$mm$ slice thickness, and voxel size=1×1×1~$mm^3$. The MRI of NBOLD was acquired using a Skyra 3T scanner (Siemens, Erlangen, Germany) located at Olin Neuropsychiatric Research Center (ONRC), using a magnetization-prepared rapid gradient-echo (MPRAGE) protocol with TR/TE=2,200/2.88~$ms$, flip angle=13°, matrix=220×320×208, and voxel size=0.8×0.8×0.8~$mm^3$. 
\textbf{Subject Selection and Grouping.} Depression diagnoses are assigned by trained psychiatrists using standardized assessment instruments and diagnostic algorithms at enrollment time, as described in~\citep{steffens2004methodology, steffens2017negative}. All individuals included in this study were diagnosed as cognitively normal at the time of the baseline assessment. Importantly, subjects in both studies were diagnosed with the same instruments and protocol, assessed annually with the same battery of neuropsychological tests, and adjudicated for cognitive diagnosis by a group of experts following the same consensus diagnostic guidelines. Diagnosis at Year 5 is the outcome of interest in this study: cognitively normal (CN), cognitive impairment, no dementia (CIND), or AD. At the follow-up visit (i.e., 5 years), whether subjects stayed stable or converted to CIND or AD is recorded; the possible pathways included in this work are illustrated in Fig.~\ref{fig-pathway}. The detailed subject selection criteria for this work are shown in Table~\ref{Diagnosis_information}. Following the criteria in Table~\ref{Diagnosis_information}, 311 subjects in total are initially selected for image preprocessing. After MRI preprocessing, 17 subjects are excluded due to failed processing or low image quality. Finally, 294 subjects are included in our study, and the demographic information is shown in Table~\ref{Demographic_information}. Using MRI scan time as the baseline, category labels are given based on the 5-year diagnosis.
As shown in Fig.~\ref{fig-pathway}, these subjects are categorized into 4 groups: (1) {87} never-depressed cognitively normal (CN-N) subjects: CN-N subjects at baseline who do not progress to cognitive impairment within the 5-year follow-up; (2) {172} depressed CN (CN-D) subjects: CN-D subjects at baseline who remain cognitively normal within the 5-year follow-up; (3) {19} CIND subjects: depressed subjects who develop cognitive impairment but not dementia in the 5-year follow-up; (4) {16} AD subjects: depressed participants with a diagnosis of AD at 5 years after baseline. \textbf{Prediction Task.} Table~\ref{Demographic_information} shows that each category has a limited number of subjects, especially the CIND and AD groups. Therefore, we perform the diagnostic outcome prediction study in two classification tasks: (1) binary classification for LLD detection (i.e., CN-D vs. CN-N classification) and (2) three-category classification for predicting 5-year cognitive diagnosis (i.e., CI vs. CN-D vs. CN-N classification). In the three-category classification task, CIND and AD subjects are combined into the CI group, which helps to increase the sample size. In addition, CIND-to-AD diagnostic outcome prediction (i.e., CIND vs. AD classification) is also performed using a transfer learning strategy. That is, we first use 795 subjects selected from ADNI~\citep{jack2008alzheimer} for model training and then transfer the trained model to CIND and AD subjects in this study for testing, with details given in Section~\ref{S5.4}. \begin{table}[!tbp] \renewcommand\arraystretch{1.1} \centering \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \scriptsize \caption{Demographic information of studied subjects in our work. The values are denoted as ``mean $\pm$ standard deviation''.
F/M: Female/Male, MADRS: Montgomery-Asberg Depression Rating Scale~\citep{fantino2009self}, MMSE: Mini-mental state examination~\citep{galea2005mini}.} \setlength{\tabcolsep}{4pt}{}{ \begin{tabular}{l|cc cc cc c} \toprule ~{Dataset} &{Category} &{Gender (F/M)} & {Age} &{Education} & MADRS & MMSE\\ \midrule \multirow{4}{*}{NCODE} & CN-N &31/22 &67.5$\pm$5.0 &15.7$\pm$1.4 &- &29.1$\pm$1.0\\ & CN-D & 67/39 &66.1$\pm$5.7 &15.0$\pm$2.2 &15.3$\pm$8.1 &28.5$\pm$1.4 \\ & CIND & 5/5 &71.9$\pm$6.3 &14.4$\pm$2.1 &16.7$\pm$8.8 &27.4$\pm$2.2 \\ & AD & 5/3 &72.0$\pm$5.6 &14.4$\pm$2.1 &18.4$\pm$10.5 &24.8$\pm$5.4 \\ \hline \multirow{4}{*}{NBOLD} & CN-N & 27/7 &71.6$\pm$7.1 &15.6$\pm$2.1 &- &29.1$\pm$1.5 \\ & CN-D & 50/16 &70.0$\pm$6.9 &16.3$\pm$2.3 &19.7$\pm$5.5 &29.5$\pm$0.8 \\ & CIND & 6/3 &74.9$\pm$4.3 &14.3$\pm$2.7 &17.5$\pm$10.7 &29.2$\pm$1.2 \\ & AD & 3/5 &76.5$\pm$7.5 &17.8$\pm$1.5 &17.5$\pm$7.1 &28.5$\pm$1.7 \\ \bottomrule \end{tabular} \label{Demographic_information} } \end{table} \textbf{Image Preprocessing}. Each sMRI scan is preprocessed using the FSL, ANTs, and FreeSurfer tools. The processing pipeline includes (1) bias field correction with N4, (2) linear registration to AAL3~\citep{rolls2020automated} template in the Montreal Neurological Institute (MNI) space, resampling to 1~$mm^3$ resolution, and cropping the whole brain to the size of 181×217×181~$mm^3$, (3) brain extraction, (4) non-linear registration to AAL3, and (5) partition regions-of-interest (ROI) of AAL3 into the registered sMRI volumes. All MRI scans have the same size as the MNI template after preprocessing. The average image intensities of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) are computed within the 170 ROIs defined by AAL3. Nonlinear registration is performed for warping the ROIs of AAL3 back to each subject using ANTs. 
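Computing the ROI-level intensity features described above reduces to averaging image values within each atlas label once the MRI and the atlas share a registered voxel grid. A minimal sketch with toy arrays (the real pipeline operates on the FSL/ANTs outputs; the array sizes and labels here are hypothetical stand-ins):

```python
import numpy as np

def roi_mean_intensities(image, atlas, roi_labels):
    """Mean intensity of `image` inside each ROI of the integer-valued
    label volume `atlas`; both arrays share the registered voxel grid."""
    feats = np.zeros(len(roi_labels))
    for i, lab in enumerate(roi_labels):
        mask = atlas == lab
        feats[i] = image[mask].mean() if mask.any() else 0.0
    return feats

# Toy stand-ins for a registered, intensity-normalized MRI and the AAL3 label map
rng = np.random.default_rng(0)
atlas = rng.integers(0, 4, size=(8, 8, 8))    # labels 1..3 are ROIs, 0 is background
image = rng.random((8, 8, 8))                 # intensities already scaled to [0, 1]
features = roi_mean_intensities(image, atlas, roi_labels=[1, 2, 3])
```

In the actual pipeline this averaging is done separately within the GM, WM, and CSF masks over the 166 retained AAL3 ROIs, yielding the three 166-dimensional feature vectors used below.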
{\color{black}Three 166-dimensional feature vectors are generated for each subject after eliminating 4 tiny ROIs; these are used as handcrafted features for the following classification task based on machine learning methods.} Furthermore, anatomical structural features are extracted using FreeSurfer, such as volume, cortical thickness, mean curvature, surface area, and white matter parcellation information, with more details given in Section~\ref{S3_2_2}. {\color{black}All MRIs are also preprocessed via histogram standardization, zero-mean normalization, and rescaling of the intensity to [0, 1] using TorchIO~\citep{perez2021torchio}\footnote{https://torchio.readthedocs.io/}}. \subsection{Proposed Method} We assume that effective fusion of data-driven and handcrafted MRI features can promote detection and prediction of diagnostic cognitive change in LLD, and we exploit both types of MRI features via a hybrid representation learning (HRL) framework. As shown in Fig.~\ref{fig-network}, the HRL consists of 4 components: (1) {data-driven MRI feature learning} with a residual neural network (ResNet)~\citep{he2016deep} as the backbone, (2) {handcrafted MRI feature extraction}, (3) {feature fusion and abstraction} via a Transformer encoder, and (4) {classification}. \begin{figure}[!t] \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \centering \includegraphics[width=0.49\textwidth]{Figs/Fig-Resnet-ResidualBlock-v2.pdf} \caption{The detailed architecture of (a) ResNet34 Backbone and (b) Transformer Encoder.
The numbers 64, 128, 256, and 512 denote the numbers of channels of the corresponding convolution layers.} \label{fig-RNB} \end{figure} \subsubsection{Data-Driven MRI Feature Learning} To explore imaging biomarkers from MRIs in a data-driven manner, we employ a ResNet34~\citep{he2016deep} as the backbone for MRI feature learning. As shown in the top left panel of Fig.~\ref{fig-network}, the ResNet34 takes 3D MRI scans as input for feature learning. The detailed architecture of the ResNet34 used in our work is shown in Fig.~\ref{fig-RNB}~(a). The ResNet34 backbone contains a convolutional layer (kernel size: $7\times7\times7$, stride: 2) and a max pooling layer (kernel size: $3\times3\times3$, stride: 2), followed by four residual network blocks (RNB). Each RNB consists of a stack of identical residual blocks; there are 3, 4, 6, and 3 residual blocks in the four RNBs, respectively. Each residual block has two convolutional layers, and each convolutional layer is followed by a batch normalization layer and a rectified linear unit (ReLU) activation function. A skip connection bypasses these two convolutional layers and adds the input directly before the final ReLU activation. The length, width, and height dimensions are down-sampled with stride 2 between two adjacent RNBs. {\color{black}The numbers of channels of the convolution layers within each RNB are identical and determine the number of feature maps.} Thus, the ResNet34 backbone finally yields 512 feature maps. {\color{black}As shown in the upper middle part of Fig.~\ref{fig-network}, these feature maps are reshaped into small image patches with a resolution of ($l, w, h$) pixels through the following operations: \begin{itemize} \vspace{-2mm} \item Each feature map is reshaped into a patch vector $x_p^n$ of dimension $l\times w\times h$, where $n = 1, \cdots, N$ and $N$ denotes the number of patches. The feature maps are already small (i.e., $3\times 4\times 3$) due to the down-sampling between RNBs.
So, the final 512 feature maps are regarded as patches directly, and $N=512$ here. \vspace{-2mm} \item A sequence of embedded patches is generated by mapping each vector $x_p^n$ to $D$ dimensions through a trainable linear projection (i.e., a 3D convolution). Since the input (i.e., the embedded patches) and the output (named hidden states) of the Transformer encoder are vectors of the same dimension, the dimension $D$ is also known as the hidden size. \vspace{-2mm} \item A classification token $x_{class}$ is prepended to the sequence of embedded patches. The final hidden state of the classification token usually serves as the aggregate representation for classification; in our work, we instead use the better-performing option of averaging or pooling the sequence of hidden states over the whole input embeddings~\citep{devlin2018bert}. \vspace{-2mm} \item The embedded patches are further augmented with a one-dimensional position embedding $E_{pos}$, which introduces positional information into the input and is also learned during training. The embedded patches with position embedding can be represented as $Z_0 = [x_{class}; x_p^1; \cdots; x_p^N] + E_{pos}$, where $E_{pos} = [0, \cdots, N]$ is the position embedding. \end{itemize} } Finally, the embedded patches $Z_0$ are fed into the subsequent feature fusion module. Note that the proposed HRL framework is very flexible, and other deep learning models can also be used as the backbone for feature extraction. \subsubsection{Handcrafted MRI Feature Extraction} \label{S3_2_2} The handcrafted MRI features are introduced to promote the training of HRL and improve the classification performance.
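Returning briefly to the data-driven branch, the patch-embedding steps listed above can be sketched with NumPy. This is a simplified sketch, not the actual implementation: the trainable linear projection and the learned position embedding are replaced by random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

N, (l, w, h), D = 512, (3, 4, 3), 128          # patches, patch size, hidden size
feature_maps = rng.normal(size=(N, l, w, h))    # stand-in for the backbone output

# 1) Each 3x4x3 feature map becomes one patch vector of length l*w*h = 36.
x_p = feature_maps.reshape(N, l * w * h)

# 2) Trainable linear projection to D dimensions (here: a fixed random matrix).
W_proj = rng.normal(size=(l * w * h, D))
embedded = x_p @ W_proj                          # shape (512, 128)

# 3) Prepend a classification token.
x_class = np.zeros((1, D))
tokens = np.concatenate([x_class, embedded], axis=0)   # shape (513, 128)

# 4) Add a 1-D position embedding (random placeholder, one row per token).
E_pos = rng.normal(size=(N + 1, D))
Z0 = tokens + E_pos
print(Z0.shape)  # (513, 128)
```

In the real model the projection and $E_{pos}$ are learned jointly with the rest of the network.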
While the features extracted by the ResNet backbone are generally based on MR image intensity, handcrafted MRI features can provide additional information on local anatomical structures. Specifically, three types of handcrafted features are extracted from each MRI, including (1) structural statistics, such as surface area and gray matter volume, (2) attribute information, such as cortical thickness and mean curvature of the cortical surface, and (3) white matter parcellation information. These features are calculated during MRI preprocessing with tissue segmentation and parcellation tools (i.e., FreeSurfer in our work), with details reported in Tables SI-SIX in the \emph{Supplementary Materials}. The handcrafted features of each subject are concatenated into a 1,007-dimensional vector. As shown in the lower middle part of Fig.~\ref{fig-network}, the handcrafted feature vector is reshaped into a list of vectors of uniform length (i.e., the hidden size, which equals the length of the embedded patches $x_p^n$). After reshaping, this list of vectors can be regarded as embedded patches $Z_1$ and fed into the Transformer encoder together with $Z_0$ for feature fusion and abstraction. Note that the data-driven MRI features need a position embedding while the handcrafted features do not, since the one-dimensional handcrafted feature vector carries no positional information. \vspace{-2pt} \subsubsection{Feature Fusion and Abstraction} To effectively fuse the handcrafted and data-driven MRI features, we propose using a Transformer encoder for hybrid feature fusion and abstraction. The Transformer encoder receives one-dimensional data as input, so the embedded patches $Z_0$ and $Z_1$ are directly fed to the encoder. Due to the structure of the Transformer encoder, the output hidden states correspond one-to-one to the input embedded patches and have the same size (as shown in the right part of Fig.~\ref{fig-network}). Fig.
\ref{fig-RNB}(b) shows the detailed architecture of the Transformer encoder, which has three components. (1) Multi-head self-attention (MSA) layer: this layer linearly concatenates the outputs of all self-attention heads to embed information globally across the features, and the multiple attention heads help capture local and global dependencies between features. (2) Multi-layer perceptron (MLP) layer: the MLP contains two layers with Gaussian error linear unit (GELU) activation. (3) Normalization layer: layer normalization is applied prior to MSA and MLP to improve the training time and overall performance. Moreover, residual connections are included after MSA and MLP. As a result, the Transformer encoder outputs a $D$-dimensional vector representation (i.e., the hidden states in Fig.~\ref{fig-network}) for each embedded patch of the input sequence. The Transformer encoder in HRL is used to fuse data-driven and handcrafted MRI features. Through its multi-head self-attention mechanism, long-term dependencies between different features can be established to realize feature fusion and representation learning~\citep{zeng2022nlfftnet,sun2022transformer}. Encoders such as the one in Fig.~\ref{fig-RNB}~(b) are usually stacked on top of each other for feature extraction in computer vision tasks (typically 6 or 12 such blocks). Since the Transformer encoder in HRL is used for feature fusion rather than feature extraction, a single encoder block is sufficient in our HRL. The hidden size and the number of self-attention heads of the encoder are empirically set to $128$ and $16$, respectively, in the experiments. \vspace{-2pt} \subsubsection{Classification} To speed up model convergence, we modify the original MLP head used for classification in ViT. The newly designed classification module consists of an average pooling layer and a linear layer with \emph{tanh} activation as the non-linearity.
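A minimal NumPy sketch of such a head (average pooling, a linear layer with tanh, then softmax) is given below; the weights are random placeholders, and the exact layer ordering in the actual model may differ.

```python
import numpy as np

def classification_head(hidden_states, W, b):
    """Average-pool the hidden states over the token sequence, apply a
    linear layer with tanh, and return softmax class probabilities."""
    pooled = hidden_states.mean(axis=0)        # average over all tokens
    logits = np.tanh(pooled @ W + b)           # linear layer with tanh activation
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
hidden = rng.normal(size=(513, 128))           # encoder outputs (tokens x hidden size)
W, b = rng.normal(size=(128, 3)), np.zeros(3)  # e.g., 3 classes: CI / CN-D / CN-N
probs = classification_head(hidden, W, b)
print(probs.sum())  # 1.0 up to floating point
```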
The final hidden states from the Transformer encoder are averaged and fed to the linear layer for class prediction. The final outputs are converted into the probabilities that the subject belongs to each category using the softmax function, and a cross-entropy loss is used for training the HRL. \subsection{Implementation} The HRL is trained via a two-stage optimization strategy. (1) In the first stage, the ResNet is trained with 3D MRIs as input and their category labels as supervision; we then use the parameters of the trained model to initialize the backbone. (2) In the second stage, we train the Transformer encoder (with the ResNet backbone frozen), where the handcrafted features and the data-driven MRI features learned by the ResNet are used as input and the category labels are treated as output. To balance the data, sMRI scans are duplicated and augmented using random affine transformations with TorchIO during the training stage. The Adam optimizer with a learning rate of ${10}^{-4}$ is used. {\color{black}We train the HRL for a maximum of 300 epochs, and an early-stop strategy is applied when the prediction accuracy on the training set exceeds a threshold (e.g., 0.9 in our work). The batch size is set to 4 with full-size MRIs as input due to the limitation of GPU memory.} The experiments are implemented in PyTorch under Python 3.7 on Ubuntu 18.04 with a GPU (NVIDIA TITAN Xp) with 12 GB of memory. \section{Experiments} \label{S4} \subsection{Experimental Setup} \paragraph{\textbf{Evaluation Metrics}} Several metrics are used for performance evaluation, including the area under the ROC curve (AUC), accuracy (ACC), sensitivity (SEN), specificity (SPE), and F1-score (F1s). A 5-fold cross-validation strategy is used in the experiments. Specifically, we first randomly split the subjects of each category into 5 groups. Then, each group is used as the test set in turn, and the remaining 4 groups are used as the training set.
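The per-category split-and-rotate scheme described above amounts to a stratified 5-fold split, which can be sketched as follows (the label counts here are purely illustrative, not the actual cohort sizes):

```python
import numpy as np

def stratified_5fold(labels, seed=0):
    """Split subject indices into 5 groups, stratified by category:
    subjects of each category are shuffled and dealt into 5 folds."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(5)]
    for cat in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cat))
        for k, i in enumerate(idx):
            folds[k % 5].append(int(i))
    return [np.sort(f) for f in folds]

labels = np.array([0] * 20 + [1] * 40 + [2] * 10)  # illustrative category labels
folds = stratified_5fold(labels)
# Each fold serves as the test set once; the other four form the training set.
test = folds[0]
train = np.sort(np.concatenate(folds[1:]))
print(len(test) + len(train))  # 70
```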
The experiments are repeated 5 times independently to avoid any bias introduced by the random split. Two tasks are performed in the experiments, including (1) CN-D vs. CN-N classification, and (2) CI vs. CN-D vs. CN-N classification. In the binary task ({\em i.e.}, CN-D vs. CN-N classification), both groups stay stable during the 5-year follow-up, and the mean and standard deviation of the evaluation metrics achieved in 5-fold cross-validation are calculated. In the three-category task ({\em i.e.}, CI vs. CN-D vs. CN-N classification), we use the well-known one-versus-all classification strategy~\citep{aly2005survey}, record the prediction results of all 5 folds, and calculate the overall ACC and SEN results for each category. The confusion matrices are presented for detailed comparison and analysis. \paragraph{\textbf{Competing Methods}} We compare the proposed HRL with the most popular machine learning methods used for depression detection and classification in LLD research, including the support vector machine (\textbf{SVM})~\citep{kim2018application, mousavian2019depression, kambeitz2017detecting, saidi2020hybrid}, random forest (\textbf{RF})~\citep{lebedeva2017mri}, and \textbf{XGBoost} (XGB)~\citep{chen2016xgboost, arun2018exploratory, arun2018boosted, sharma2020improving}. (1) In the SVM method, an SVM with a radial basis function (RBF) kernel and regularization parameter $C=1.0$ is used for classification. (2) In RF, a random forest classifier with 100 decision trees is used for classification. (3) In XGB, a grid search strategy is used to find a good combination of hyperparameters (i.e., the number of boosting rounds, the maximum tree depth of the base learners, and the learning rate); we use XGB with 300 boosting rounds, a maximum tree depth of 4, and a learning rate of 0.2.
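The baseline configurations above might be instantiated as follows in scikit-learn (a sketch only; the XGBoost settings are shown as a comment since that library is separate, and the synthetic data is merely a stand-in for the handcrafted feature vectors):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# SVM with an RBF kernel and C = 1.0, as in the text.
svm = SVC(kernel="rbf", C=1.0)
# Random forest with 100 decision trees.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
# XGBoost (not imported here) would use, per the text:
#   XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.2)

# Tiny synthetic stand-in for the handcrafted feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))
y = (X[:, 0] > 0).astype(int)
for clf in (svm, rf):
    clf.fit(X, y)
    print(clf.__class__.__name__, (clf.predict(X) == y).mean())
```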
These three classifiers take handcrafted MRI features as input, including (1) the average image intensities of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) within the pre-defined ROIs of AAL3 (denoted as {SVM/RF/XGB-GM, SVM/RF/XGB-WM} and {SVM/RF/XGB-CSF}, respectively) and (2) the anatomical structural features extracted via FreeSurfer (denoted as {SVM/RF/XGB-FF}). For the three-category problem, the commonly used one-vs-rest (OvR) classification strategy is {\color{black}used~\citep{wu2006one}.} We also compare the HRL with state-of-the-art deep learning methods for MRI-based LLD research, including (1) \textbf{ResNet}~\citep{he2016deep, yang2021deep}, (2) \textbf{Med3D}~\citep{chen2019med3d}, and (3) \textbf{ViT}~\citep{dosovitskiy2020image}. (1) {ResNet}: ResNet is a CNN architecture that stacks residual blocks on top of each other to form a network. The ResNet takes 3D MRI data as input, extracts features through the stacked residual blocks, and makes the final class prediction using an average pooling layer and a fully connected layer. ResNet has many variants with a similar architecture but different numbers of layers, including ResNet18, ResNet34, ResNet50, ResNet101, ResNet152, and ResNet200. We choose the 3D versions of ResNet18, ResNet34, and ResNet50 for comparison, considering our input size and GPU memory capacity. Model training uses the Adam optimizer with a learning rate of ${10}^{-4}$ and a batch size of 4. The trained ResNet34 model is used to initialize the ResNet34 backbone in our HRL (i.e., the first-stage optimization). (2) {Med3D}: Med3D is a heterogeneous 3D network co-trained on multi-domain 3D medical image segmentation so as to produce a series of pretrained models. The Med3D models pretrained on large datasets can be transferred to new models for image segmentation and classification tasks and help improve their performance.
We download the Med3D pretrained models through the links on GitHub\footnote{https://github.com/Tencent/MedicalNet} and transfer them to ResNet18, ResNet34, and ResNet50 for fine-tuning and performance comparison (denoted as {Med3D18/Med3D34/Med3D50}). The parameter settings for training Med3D are the same as for ResNet. (3) {ViT}: ViT consists of a stack of Transformer encoders for feature extraction and an MLP head for classification. The architecture of the Transformer encoder is the same as in HRL. For the convenience of comparison, we choose a 3D implementation of ViT (denoted as {ViT3D}). When the ViT model takes 3D MRI data as input, a large amount of GPU memory is required during training. For ViT3D, the inputs are therefore down-sampled by $2\times2\times2$, and the number of Transformer encoder blocks is set to 6 due to the limited GPU memory (typically 6 or 12 blocks are recommended for 2D image classification~\citep{dosovitskiy2020image}). Other parameters, such as the hidden size and the number of attention heads, are set to 128 and 16, respectively, the same as in HRL. The AdamW optimizer with a learning rate of ${10}^{-4}$ and average pooling in the MLP head are used for ViT training. The patch size is $8\times8\times8$, the batch size is 6, and the dimension of the MLP is 512. All competing deep learning models are trained on the sMRI data only, with the same data preprocessing (e.g., histogram standardization, zero-mean normalization, intensity rescaling) and data augmentation settings as for HRL. \begin{table}[!t] \renewcommand{\arraystretch}{1} \scriptsize \centering \caption{Results achieved by different methods (mean $\pm$ standard deviation) in the binary classification task for LLD detection (i.e., CN-D vs. CN-N classification), with models trained on handcrafted features and MRI data. Methods marked with ``-FF'' use handcrafted features extracted by FreeSurfer.}
\setlength\tabcolsep{3pt} \begin{tabular}{lccccc} \toprule ~Method &AUC (\%) &ACC (\%) &SEN (\%) &SPE (\%) &F1s (\%) \\ \hline ~SVM-GM &58.14$\pm$4.94 &57.58$\pm$7.23 &56.34$\pm$14.76 &59.93$\pm$12.51 &63.11$\pm$9.21\\ ~SVM-WM &57.00$\pm$7.63 &56.42$\pm$6.42 &55.36$\pm$6.07 &58.63$\pm$12.87 &62.77$\pm$5.89\\ ~SVM-CSF &54.63$\pm$6.30 &51.79$\pm$6.14 &46.12$\pm$6.04 &\textbf{63.14$\pm$6.99} &55.91$\pm$6.31\\ ~SVM-FF &53.43$\pm$3.66 &55.64$\pm$7.37 &60.38$\pm$20.46 &46.47$\pm$21.43 &63.13$\pm$10.01\\ ~RF-GM &51.70$\pm$4.50 &53.42$\pm$3.41 &56.48$\pm$8.92 &46.93$\pm$14.77 &61.43$\pm$5.16\\ ~RF-WM &52.84$\pm$7.46 &54.13$\pm$7.41 &56.46$\pm$9.94 &49.22$\pm$12.02 &61.78$\pm$8.29\\ ~RF-CSF &50.34$\pm$4.98 &51.55$\pm$2.86 &53.75$\pm$2.62 &46.93$\pm$11.88 &59.62$\pm$1.28\\ ~RF-FF &\textbf{61.01$\pm$8.48} &\textbf{61.01$\pm$7.41} &60.40$\pm$14.62 &61.63$\pm$22.24 &\textbf{66.60$\pm$9.13}\\ ~XGB-GM &55.27$\pm$4.45 &59.57$\pm$5.73 &68.13$\pm$8.71 &42.42$\pm$5.37 &68.95$\pm$5.61\\ ~XGB-WM &51.25$\pm$5.16 &56.88$\pm$6.35 &68.06$\pm$9.51 &34.44$\pm$6.82 &67.50$\pm$6.67\\ ~XGB-CSF &46.67$\pm$5.31 &50.75$\pm$4.82 &58.96$\pm$5.44 &34.38$\pm$7.57 &61.38$\pm$4.09\\ ~XGB-FF &52.55$\pm$6.71 &56.83$\pm$8.46 &\textbf{64.97$\pm$15.93} &40.13$\pm$15.13 &65.89$\pm$10.34\\ \hline ~ResNet18 &59.44$\pm$7.69 &62.34$\pm$5.39 &67.77$\pm$8.69 &51.11$\pm$19.29 &70.61$\pm$4.54 \\ ~ResNet34 &59.44$\pm$6.45 &62.34$\pm$6.37 &67.77$\pm$8.91 &51.11$\pm$11.73 &70.54$\pm$6.10 \\ ~ResNet50 &55.31$\pm$4.28 &57.49$\pm$5.76 &61.66$\pm$12.93 &48.95$\pm$13.78 &65.44$\pm$8.75 \\ ~Med3D18 &57.12$\pm$6.97 &61.20$\pm$4.88 &{68.88$\pm$3.62} &45.35$\pm$14.31 &{70.48$\pm$3.17} \\ ~Med3D34 &{61.91$\pm$5.22} &{64.98$\pm$8.31} &{70.55$\pm$14.24} &\textbf{53.26$\pm$6.10} &{72.44$\pm$8.50} \\ ~Med3D50 &58.57$\pm$7.09 &62.71$\pm$8.03 &{70.55$\pm$9.93} &46.60$\pm$6.01 &71.52$\pm$7.41 \\ ~ViT3D &54.28$\pm$11.13 &58.94$\pm$11.66 &67.78$\pm$13.97 &40.78$\pm$12.81 &68.48$\pm$10.42\\ ~HRL~(Ours) &\textbf{62.75$\pm$7.23} &\textbf{66.45$\pm$5.18} &\textbf{73.33$\pm$10.32} &{52.16$\pm$20.19} &\textbf{74.36$\pm$4.77}\\ \bottomrule \end{tabular} \label{MDD2_comparison} \end{table} \subsection{Results of Binary Classification} The results achieved by the proposed HRL and the competing methods in the binary
classification task ({\em i.e.}, CN-D vs. CN-N) are reported in Table~\ref{MDD2_comparison}, from which we have the following observations. \emph{First}, our HRL method outperforms the conventional machine learning methods (i.e., SVM, RF, and XGB) and the three deep learning methods (i.e., ResNet, Med3D, and ViT3D) in most cases. In particular, the HRL achieves the highest SEN value (i.e., $73.33\%$), which may be very useful for accurately identifying subjects with depression. These results suggest the effectiveness of the proposed hybrid learning framework for LLD identification. \emph{Second}, the deep learning methods usually perform slightly better than the machine learning methods. This implies that incorporating MRI feature learning and classification into a unified deep learning framework helps boost the identification performance, compared to the twelve machine learning methods that treat MRI feature extraction and classification as standalone tasks. \emph{Besides}, among the seven competing deep learning methods, Med3D34 performs better overall than the others. A possible reason is that using a complex network architecture (e.g., ViT3D) or deeper networks (i.e., ResNet50 and Med3D50) may not necessarily boost the learning performance due to the limited number of training samples. \emph{In addition}, the machine learning methods with anatomical features (i.e., SVM-FF, RF-FF, and XGB-FF) usually yield higher SEN values than their counterparts with image intensity-based features (e.g., SVM-CSF, RF-CSF, and XGB-CSF). This may be due to the fact that the features extracted via FreeSurfer contain more information about brain structures, which helps to identify anatomical differences between CN-D and CN-N in brain MRI. \begin{table}[!t] \renewcommand{\arraystretch}{1} \scriptsize \centering \caption{Accuracy (ACC) results achieved by different methods in three-category classification (i.e., CI vs. CN-D vs.
CN-N), with models trained on handcrafted features and MRI data. Methods marked with ``-FF'' use handcrafted features extracted by FreeSurfer.} \setlength\tabcolsep{7pt} \begin{tabular}{lcccc} \toprule ~Method &ACC (\%) &ACC$_{CN-N}$ (\%) &ACC$_{CN-D}$ (\%) &ACC$_{CI}$ (\%)\\ \hline ~SVM-GM &40.14 &51.70 &47.62 &80.95\\ ~SVM-WM &42.86 &53.40 &50.68 &81.63\\ ~SVM-CSF &39.12 &50.68 &47.28 &80.27\\ ~SVM-FF &37.41 &48.98 &47.28 &78.57\\ ~RF-GM &53.40 &64.63 &56.46 &85.71\\ ~RF-WM &53.40 &62.24 &55.44 &\textbf{89.12}\\ ~RF-CSF &53.06 &63.27 &56.12 &86.73\\ ~RF-FF &\textbf{55.44} &\textbf{65.99} &\textbf{57.82} &87.07\\ ~XGB-GM &50.68 &63.61 &53.74 &84.01\\ ~XGB-WM &52.38 &62.59 &54.42 &87.76\\ ~XGB-CSF &46.26 &57.82 &49.66 &85.03\\ ~XGB-FF &54.42 &64.97 &56.80 &87.07\\ \hline ~ResNet18 &55.78 &66.67 &61.56 &83.33\\ ~ResNet34 &51.36 &65.31 &57.82 &79.59\\ ~ResNet50 &54.08 &62.24 &\textbf{61.90} &84.01\\ ~Med3D18 &50.00 &60.54 &56.12 &83.33\\ ~Med3D34 &52.04 &66.67 &60.54 &76.87\\ ~Med3D50 &51.02 &61.56 &57.82 &82.65\\ ~ViT3D &49.32 &62.93 &58.16 &77.55\\ ~HRL~(Ours) &\textbf{57.48} &\textbf{69.05} &61.56 &\textbf{84.35}\\ \bottomrule \end{tabular} \label{LLD3_ACC} \end{table} \subsection{Results of Three-Category Classification} We further perform diagnostic outcome prediction in LLD via a three-category classification task, {\em i.e.}, CI vs. CN-D vs. CN-N classification. The ACC and SEN values for each category achieved by different methods in this task are reported in Table~\ref{LLD3_ACC} and Table~\ref{LLD3_SEN}, respectively, and the confusion matrices achieved by different methods are shown in Fig.~\ref{LLD3-confusion-matrix}. Table~\ref{LLD3_ACC} suggests that our HRL achieves the best overall ACC result, while RF-FF yields the second-best ACC value. Table~\ref{LLD3_SEN} suggests that the HRL generates the highest SEN value for the CN-D category among the deep learning methods.
Besides, although our method achieves good ACC values for the CI category compared to the other two classes (see Table~\ref{LLD3_ACC}), the sensitivity for CI is not good (see Table~\ref{LLD3_SEN}). Similarly, the SVM-based models generally obtain high SEN scores for CN-N but not for CN-D and CI. A reasonable explanation is that the number of CI subjects is very small ({\em i.e.}, 25), with even fewer in each test fold of the 5-fold cross-validation. On the other hand, from Tables~\ref{LLD3_ACC}-\ref{LLD3_SEN} and Table~\ref{MDD2_comparison}, we can see that the overall performance of each model drops dramatically as the number of categories increases; the low recognition rate of the newly added category ({\em i.e.}, CI) greatly reduces the overall performance of each predictive model. The confusion matrices in Fig.~\ref{LLD3-confusion-matrix} show the more detailed performance of each method on the three categories. Fig.~\ref{LLD3-confusion-matrix} suggests that there is a data imbalance issue in the three-category classification task, while the deep learning methods generally outperform the conventional machine learning methods in terms of overall accuracy. The most likely reason is that the machine learning methods use handcrafted MRI features whose dimensionality and discriminative power are limited by the templates and segmentation tools, while the deep learning models can extract MRI features tailored to the downstream task, and thus achieve better classification performance thanks to the consistency between the learned MRI features and the prediction model. Also, our HRL generally performs better than the other deep learning methods. The above experimental results and analysis imply that the main factor limiting the application of deep learning to diagnostic outcome prediction in LLD could be the problem of data imbalance between the different categories.
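The per-category ACC and SEN values discussed above follow the one-versus-all convention; for a three-class confusion matrix they can be computed as below (the matrix entries are purely illustrative, not the actual results):

```python
import numpy as np

def ovr_metrics(cm):
    """Per-class accuracy and sensitivity from a confusion matrix
    (rows: true class, columns: predicted class), one-versus-all."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    acc, sen = [], []
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k].sum() - tp          # class-k subjects predicted as other classes
        fp = cm[:, k].sum() - tp       # other subjects predicted as class k
        tn = total - tp - fn - fp
        acc.append((tp + tn) / total)  # one-vs-all accuracy for class k
        sen.append(tp / (tp + fn))     # sensitivity (recall) for class k
    return np.array(acc), np.array(sen)

# Illustrative 3-class confusion matrix (CI, CN-D, CN-N).
cm = [[4, 15, 16],
      [3, 120, 49],
      [2, 30, 55]]
acc, sen = ovr_metrics(cm)
print(acc.round(3), sen.round(3))
```

Note how a rare class (first row) can have high one-vs-all accuracy yet very low sensitivity, which is exactly the pattern observed for CI.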
\begin{table}[!t] \renewcommand{\arraystretch}{1} \scriptsize \centering \caption{Sensitivity (SEN) results achieved by different methods in three-category classification (i.e., CI vs. CN-D vs. CN-N), with models trained on handcrafted features and MRI data. Methods marked with ``-FF'' use handcrafted features extracted by FreeSurfer.} \setlength\tabcolsep{7pt} \begin{tabular}{lcccc} \toprule ~Method &SEN (\%) &SEN$_{CN-N}$ (\%) &SEN$_{CN-D}$ (\%) &SEN$_{CI}$ (\%)\\ \hline ~SVM-GM &43.90 &\textbf{77.01} &23.26 &31.43\\ ~SVM-WM &\textbf{46.21} &72.41 &29.07 &\textbf{37.14}\\ ~SVM-CSF &40.29 &67.82 &27.33 &25.71\\ ~SVM-FF &37.42 &79.31 &21.51 &11.43\\ ~RF-GM &37.63 &29.89 &74.42 & 8.57\\ ~RF-WM &40.66 &29.89 &72.09 &20.00\\ ~RF-CSF &40.28 &28.74 &72.09 &20.00\\ ~RF-FF &39.36 &28.74 &\textbf{77.91} &11.43\\ ~XGB-GM &34.75 &26.44 &72.09 & 5.71\\ ~XGB-WM &37.42 &22.99 &75.00 &14.29\\ ~XGB-CSF &33.94 &18.39 &66.28 &17.14\\ ~XGB-FF &40.10 &22.99 &77.33 &20.00\\ \hline ~ResNet18 &\textbf{47.13} &\textbf{60.92} &60.47 &\textbf{20.00}\\ ~ResNet34 &39.30 &42.53 &63.95 &11.43\\ ~ResNet50 &41.99 &44.83 &66.86 &14.29\\ ~Med3D18 &41.94 &54.02 &54.65 &17.14\\ ~Med3D34 &41.59 &44.83 &62.79 &17.14\\ ~Med3D50 &39.68 &41.38 &63.37 &14.29\\ ~ViT3D &40.42 &42.53 &58.72 &\textbf{20.00}\\ ~HRL~(Ours) &46.01 &52.87 &\textbf{68.02} &17.14\\ \bottomrule \end{tabular} \label{LLD3_SEN} \end{table} \begin{figure*}[!t] \setlength{\belowdisplayskip}{-1pt} \setlength{\abovedisplayskip}{-1pt} \setlength{\abovecaptionskip}{-1pt} \setlength{\belowcaptionskip}{-1pt} \centering \includegraphics[width=0.99\textwidth]{Figs/Fig-confusion-matrix-v1.pdf} \caption{Confusion matrices achieved by different methods in three-category classification (i.e., CI vs. CN-D vs.
CN-N).} \label{LLD3-confusion-matrix} \end{figure*} \begin{figure*}[!t] \setlength{\belowdisplayskip}{-1pt} \setlength{\abovedisplayskip}{-1pt} \setlength{\abovecaptionskip}{-1pt} \setlength{\belowcaptionskip}{-1pt} \centering \includegraphics[width=1\textwidth]{Figs/Fig-featuremap_v1.pdf} \caption{Feature maps extracted by the ResNet backbone in our HRL model in three-category classification. The boxes and ellipses mark the most informative ROIs defined with the AAL3 template. (Red boxes in the axial view: ACC\_pre\_L and ACC\_pre\_R; Yellow boxes in the axial view: Frontal\_Inf\_Oper\_L; Red ellipses in the sagittal view: Hippocampus\_L or Hippocampus\_R; Yellow boxes in the sagittal view: Cerebellum; Red ellipses in the coronal view: ACC\_sup\_L and ACC\_sup\_R; Yellow ellipses in the coronal view: Caudate\_L and Caudate\_R).} \label{fig_featuremap} \end{figure*} \subsection{Learned Feature Maps} The feature maps extracted by the first residual network block of ResNet in our HRL in the task of three-category classification are shown in Fig.~\ref{fig_featuremap}, with the most discriminative ROIs marked by boxes and ellipses. For each category, we show three representative subjects and the learned feature maps in the axial, sagittal, and coronal views. As shown in Fig.~\ref{fig_featuremap}, the most discriminative ROIs (marked by boxes and ellipses) include ACC\_pre\_L and ACC\_pre\_R (i.e., anterior cingulate cortex, subgenual and pregenual, red boxes in the axial view), Frontal\_Inf\_Oper\_L (i.e., the opercular part of left inferior frontal gyrus, yellow boxes in the axial view), Hippocampus\_L and Hippocampus\_R (i.e., hippocampus, red ellipses in the sagittal view), Caudate\_L and Caudate\_R (i.e., caudate, yellow ellipses in the coronal view), ACC\_sup\_L and ACC\_sup\_R (i.e., anterior cingulate cortex, supracallosal, red ellipses in the coronal view), cerebellum (yellow boxes in the sagittal view). 
These findings are consistent with previous studies~\citep{zhang2018brain, yang2021deep}, suggesting that our HRL has great potential to discover the most discriminative ROIs that contribute to diagnostic outcome prediction. \begin{figure*}[!t] \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \centering \includegraphics[width=1\textwidth]{Figs/Fig-tSNE.pdf} \caption{Feature distributions of the ResNet backbone in HRL visualized using t-SNE on the training set. Results of the 5 folds in (a) binary classification and (b) three-category classification are shown. The red, blue, and green colors denote the CN-N, CN-D, and CI categories, respectively, {\color{black}while 0 and 1 denote the imaging sites of NCODE and NBOLD, respectively.}} \label{fig_tSNE} \end{figure*} \subsection{Visualization of Data-Driven MRI Features} To better understand the results in Tables~\ref{MDD2_comparison}-\ref{LLD3_SEN} and the MRI features extracted by HRL, the t-SNE method~\citep{van2008visualizing} is used to analyze the distribution of the intermediate MRI features in the binary and three-category classification tasks, with results reported in Fig.~\ref{fig_tSNE}. The features generated by the last layer of the ResNet backbone of HRL in the 5 folds on the training data are extracted for analysis. In Fig.~\ref{fig_tSNE}, different colors denote different categories (i.e., red for CN-N, blue for CN-D, and green for CI) and different digits correspond to different datasets (i.e., 0 for NCODE and 1 for NBOLD). It can be observed from Fig.~\ref{fig_tSNE} that the learned MRI features are generally well separated by category labels and datasets on the training set. However, the performance of our HRL on the test set is not as promising, as shown in Tables~\ref{MDD2_comparison}-\ref{LLD3_SEN}.
This may be due to the significant inter-dataset differences between the NCODE and NBOLD studies, a common problem in multi-site studies that negatively affects the generalization ability of deep learning models~\citep{guan2021domain}. This effect is especially pronounced for HRL given the limited sample size. Designing advanced strategies to reduce such inter-dataset distribution differences is expected to further improve the prediction performance. \vspace{-8pt} \section{Discussion} \label{S5} {\color{black} \subsection{Interpretation of Identified MRI Biomarkers} In this work, we investigate the 5-year cognitive progression of LLD based on T1-weighted MRI and diagnostic information from a long follow-up. From the feature maps in Fig.~\ref{fig_featuremap}, we can see that the most discriminative brain regions identified by our method include the anterior cingulate cortex (ACC), frontal lobe, hippocampus, caudate, and cerebellum. These brain regions are generally consistent with those reported in previous studies of depression~\citep{zhang2018brain,banasr2021macro,toenders2022association,yang2021deep}.
For example, Zhang et al.~\citep{zhang2018brain} reviewed previous findings on brain structural changes in depression and found significant alterations in several brain regions, such as the frontal lobe, hippocampus, temporal lobe, thalamus, striatum, and amygdala, in patients with major depressive disorder. Banasr et al.~\citep{banasr2021macro} focused on the prefrontal cortex and hippocampus because many previous studies of depression have found these brain regions to be associated with decreased gray matter volume. A recent study found that depression severity was associated with a thinner rostral ACC and that melancholic depressive symptoms were negatively associated with caudal ACC thickness~\citep{toenders2022association}. Yang et al.~\citep{yang2021deep} reported that the corpus callosum, hippocampus, cerebellum, insula, caudate nucleus, and brain stem are the most discriminative regions for depression symptom factor regression via deep learning. From this comparison, we can see that the identified discriminative brain regions in MRI could serve as potential MRI biomarkers for assessing cognitive change in LLD subjects during 5 years of a cognitively normal diagnosis, a time frame in which preventative or disease-modifying treatments may be most effective. } \subsection{Ablation Study} \begin{table}[!t] \renewcommand{\arraystretch}{1.1} \scriptsize \centering \caption{Ablation study in the binary classification task (i.e., CN-D vs. CN-N), with MRIs down-sampled by $2\times2\times2$ for training efficiency.
The terms ``-H'' and ``-D'' denote models with handcrafted MRI features extracted by FreeSurfer and data-driven MRI features via the ResNet backbone, respectively.} \setlength\tabcolsep{4pt} \begin{tabular}{lc cc cc } \toprule ~Method &AUC (\%) &ACC (\%) &SEN (\%) &SPE (\%) &F1s (\%)\\ \hline ~HRL-H &52.75±5.54 &56.07±5.67 &62.83±15.55 &42.68±21.47 &64.77±8.20\\ ~HRL-D &56.16±4.13 &58.47±5.55 &\textbf{62.97±11.18} &49.35±9.82 &66.42±7.55\\ ~HRL &\textbf{59.06±8.84} &\textbf{60.14±6.60} &61.97±14.61 &\textbf{56.14±25.91} &\textbf{66.72±8.82}\\ \bottomrule \end{tabular} \label{ablation} \end{table} The proposed HRL contains a Transformer encoder for the fusion of data-driven and handcrafted MRI features. To investigate the influence of the Transformer, we compare the HRL (with ResNet34 as the backbone) with its variants for ablation analysis, including \textbf{HRL-H}, which uses handcrafted MRI features only, and \textbf{HRL-D}, which uses data-driven MRI features via ResNet only. For a fair comparison, we set their key hyperparameters (e.g., learning rate and batch size) to be the same as in HRL. All models are initialized with pretrained Med3D parameters and retrained without freezing any parameters. Due to GPU memory limitations, we down-sampled the input MRI scans by 2×2×2 for training efficiency. The results of our HRL and its variants in the binary classification task are reported in Table~\ref{ablation}. It can be seen from Table~\ref{ablation} that our HRL achieves the best overall performance compared with its two degenerate variants (i.e., HRL-H and HRL-D). These results suggest that fusing handcrafted and data-driven MRI features via the Transformer encoder (as we do in HRL) is effective in identifying CN-D from CN-N subjects, compared to methods that use only handcrafted or data-driven MRI features. Besides, HRL-D outperforms HRL-H by a large margin. For instance, HRL-D yields an AUC of 56.16\%, which is 3.91\% higher than that of HRL-H.
This demonstrates that the data-driven MRI features learned by the ResNet backbone may be more discriminative for detecting cognitively normal subjects with depression than the handcrafted MRI features. \begin{figure}[!t] \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \centering \includegraphics[width=0.47\textwidth]{Figs/Fig-ROI-Msk.pdf} \caption{Performance comparison of HRL and its variant HRL-Msk that uses a predefined ROI mask in CN-D vs. CN-N classification. A total of 34 ROIs defined in AAL3 are selected by two experienced radiologists and clinicians to mask the input MRI scans.} \label{fig_ROI_MSK} \end{figure} \subsection{Influence of Predefined ROI Mask} Based on the analysis results in Fig.~\ref{fig_featuremap}, we can see that some brain regions may be more informative for diagnostic outcome prediction. Therefore, we perform an additional experiment to validate whether using a predefined ROI mask helps improve the prediction performance. A total of 34 potential LLD-associated ROIs (defined in the AAL3 template) are selected by two experienced radiologists and clinicians, including the ACC (i.e., anterior cingulate cortex), OFC (i.e., orbitofrontal cortex), prefrontal cortex, hippocampus, caudate, insula, cingulate, amygdala, and nucleus accumbens. The names and IDs of these ROIs are given in Table SX of the \emph{Supplementary Materials}. These selected ROIs are used as an ROI mask on the input MRIs for model training and testing. Results of our HRL and its variant with the predefined ROI mask (denoted as HRL-Msk) are reported in Fig.~\ref{fig_ROI_MSK}. The results in Fig.~\ref{fig_ROI_MSK} show that the performance of HRL-Msk is comparable with that of HRL, even though most of the brain regions are masked out in the MRIs. This indicates that these predefined ROIs contain considerable information for the classification task.
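In practice, such ROI masking amounts to zeroing all voxels whose atlas label is outside the selected set. The following is a minimal numpy sketch (the function and argument names are illustrative, not taken from our code; it assumes the MRI scan and the AAL3 label map are co-registered arrays of the same shape):

```python
import numpy as np

def apply_roi_mask(mri, atlas, roi_ids):
    """Keep only voxels whose atlas label belongs to roi_ids; zero the rest."""
    mask = np.isin(atlas, roi_ids)
    return mri * mask

# Toy 2x2x2 volume with an integer "atlas" (labels loosely follow AAL3 IDs)
mri = np.arange(8, dtype=float).reshape(2, 2, 2)
atlas = np.array([[[41, 42], [75, 76]], [[0, 0], [151, 152]]])
masked = apply_roi_mask(mri, atlas, roi_ids=[41, 42, 151, 152])
```

The same masked volumes are then fed to the network in place of the full scans, leaving the rest of the training pipeline unchanged.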
\if false The pre-selected 34 ROIs: 5 Frontal_Mid_2_L 5 6 Frontal_Mid_2_R 6 7 Frontal_Inf_Oper_L 7 8 Frontal_Inf_Oper_R 8 21 Frontal_Med_Orb_L 21 22 Frontal_Med_Orb_R 22 23 Rectus_L 23 24 Rectus_R 24 25 OFCmed_L 25 26 OFCmed_R 26 27 OFCant_L 27 28 OFCant_R 28 29 OFCpost_L 29 30 OFCpost_R 30 31 OFClat_L 31 32 OFClat_R 32 33 Insula_L 33 34 Insula_R 34 39 Cingulate_Post_L 39 40 Cingulate_Post_R 40 41 Hippocampus_L 41 42 Hippocampus_R 42 45 Amygdala_L 45 46 Amygdala_R 46 75 Caudate_L 75 76 Caudate_R 76 151 ACC_sub_L 151 152 ACC_sub_R 152 153 ACC_pre_L 153 154 ACC_pre_R 154 155 ACC_sup_L 155 156 ACC_sup_R 156 157 N_Acc_L 157 158 N_Acc_R 158 \fi \begin{figure}[!t] \setlength{\belowdisplayskip}{0pt} \setlength{\abovedisplayskip}{0pt} \setlength{\abovecaptionskip}{0pt} \setlength{\belowcaptionskip}{0pt} \centering \includegraphics[width=0.47\textwidth]{Figs/Fig-CI2AD.pdf} \caption{Results of cross-dataset CI-to-AD diagnostic outcome prediction (i.e., CIND vs. AD classification). The models are trained for binary classification (i.e., AD vs. CN) with MRI selected from ADNI and transferred to CIND vs. AD classification.} \label{fig-LLD2AD} \end{figure} \if false \begin{table}[!t] \setlength{\belowdisplayskip}{-1pt} \setlength{\abovedisplayskip}{-1pt} \setlength{\abovecaptionskip}{-1pt} \setlength{\belowcaptionskip}{-1pt} \setlength{\tabcolsep}{0.65mm} \centering \footnotesize \caption{Results of Cross-Dataset CI-to-AD diagnostic outcome prediction (i.e., CIND vs. 
AD classification).} \setlength\tabcolsep{3pt} \begin{tabular}{lccccc} \toprule ~Method &AUC (\%) &ACC (\%) &SEN (\%) &SPE (\%) &F1s (\%) \\ \hline ~{ResNet34} &52.30 &51.43 &42.11 &\textbf{62.50} &54.05\\ ~HRL (Ours) &\textbf{57.57} &\textbf{57.14} &\textbf{52.63} &\textbf{62.50} &\textbf{57.14}\\ \bottomrule \end{tabular} \label{tab-LLD2AD} \end{table} \fi \subsection{Cross-Dataset CIND-to-AD Prediction} \label{S5.4} Due to the limited number of CIND and AD subjects in this study, we cannot train a learning model for CIND-to-AD diagnostic outcome prediction based on the current data. To address this issue, we further propose a transfer learning strategy by training a model on the large-scale ADNI dataset~\citep{jack2008alzheimer} and testing it on subjects in this study. Inspired by~\citep{cheng2015domain}, we use CN and AD subjects rather than MCI and AD subjects as the auxiliary domain for the training of HRL. Specifically, a total of 795 subjects (i.e., 436 CN and 359 AD subjects) from ADNI are used for model training. Twenty subjects of each category in ADNI are randomly selected for validation, and the remaining ones are used to train the model to identify AD patients from cognitively normal subjects. Then, we perform AD vs. CIND classification on subjects in this study using the model trained on ADNI directly. In this experiment, we compare the proposed HRL with ResNet34, where HRL fuses handcrafted and data-driven MRI features while ResNet34 only uses MRI as input. The results of our HRL and ResNet34 in AD vs. CIND classification are reported in Fig.~\ref{fig-LLD2AD}. It can be observed from Fig.~\ref{fig-LLD2AD} that both HRL and ResNet34 produce reasonable results in this challenging task, even though there exists a semantic gap between the category labels in the two studies (i.e., CN \& AD in ADNI, and CIND \& AD in this work). This suggests that using large-scale MRI data for model training could help improve the performance of deep learning methods.
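For reference, the ACC, SEN, SPE, and F1 values reported throughout follow directly from the binary confusion matrix (AUC additionally requires ranking continuous scores and is omitted here). A self-contained numpy sketch:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """ACC, SEN, SPE and F1 for binary labels (1 = positive class, e.g., AD)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / y_true.size
    sen = tp / (tp + fn)              # sensitivity (recall)
    spe = tn / (tn + fp)              # specificity
    f1 = 2 * tp / (2 * tp + fp + fn)  # F1-score
    return acc, sen, spe, f1
```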
In addition, our HRL generally outperforms ResNet34 in predicting the CIND-to-AD diagnostic outcome, further validating the necessity of fusing data-driven and handcrafted MRI features (as we do in HRL). \begin{table}[!t] \renewcommand{\arraystretch}{1.1} \scriptsize \centering \caption{Results achieved by HRL with different training strategies in the binary classification task (i.e., CN-D vs. CN-N), with MRI down-sampled by 2×2×2 for training efficiency. HRL-S denotes the model trained from scratch. HRL-R uses the pretrained ResNet for network parameter initialization, and is then fine-tuned by joint training of the ResNet and the Transformer encoder. } \setlength\tabcolsep{3pt} \begin{tabular}{lc cc cc } \toprule ~Method &AUC (\%) &ACC (\%) &SEN (\%) &SPE (\%) &F1s (\%)\\ \hline ~HRL-S &60.11±10.51 &58.52±9.25 &54.93±10.91 &\textbf{65.29±18.39} &63.47±9.64\\ ~HRL-R &59.06±8.84 &60.14±6.60 &61.97±14.61 &56.14±25.91 &66.72±8.82\\ ~HRL &\textbf{61.12±8.05} &\textbf{65.77±7.95} &\textbf{75.11±8.02} &47.12±9.31 &\textbf{74.37±6.69}\\ \bottomrule \end{tabular} \label{strategy} \end{table} \subsection{Influence of Two-Stage Optimization Strategy} To validate the effectiveness of the proposed two-stage optimization strategy, we further compare the HRL with its two variants (called \textbf{HRL-S} and \textbf{HRL-R}) that use different training strategies. Specifically, HRL-S is trained from scratch, without pretraining the ResNet backbone. Similar to our HRL, HRL-R also uses a two-stage optimization strategy. The difference is that, in the second stage, HRL-R jointly trains the ResNet backbone and the Transformer encoder for classification, while our HRL only trains the Transformer, with the ResNet backbone frozen. The comparison results are shown in Table~\ref{strategy}. Table~\ref{strategy} suggests that the proposed two-stage training strategy helps promote the performance of HRL.
Also, the joint training of the ResNet and the Transformer in HRL-R does not produce improved results, possibly due to the heterogeneity between the ResNet and Transformer architectures. \subsection{Limitations and Future Work} Several issues need to be considered in future studies. \emph{First}, the impact of data imbalance is more evident under a limited sample size. We will investigate more suitable feature extraction models or training strategies (e.g., meta-learning~\citep{vilalta2002perspective}, unsupervised constructive learning~\citep{pankajavalli2022independent}) to address the issues of small sample size and imbalanced data. \emph{Besides}, there exist significant inter-site differences between the NCODE and NBOLD studies, as can be observed through the t-SNE analysis in Fig.~\ref{fig_tSNE}. We will design advanced data harmonization/adaptation methods~\citep{kamnitsas2017unsupervised,guan2021domain} to reduce the data heterogeneity between different sites. \emph{Furthermore}, we use only structural MRI in this work, without considering other modalities such as functional MRI~\citep{pilmeyer2022functional} and demographic information~\citep{nayak2022socio}. In the future, we will fuse multiple data modalities to discover potential biomarkers for diagnostic outcome prediction and AD detection. \vspace{-4pt} \section{Conclusion} \label{S6} \vspace{-4pt} In this paper, we investigate the 5-year cognitive progression of late-life depression (LLD) for the first time, based on T1-weighted MRI and diagnosis information with a long follow-up time. This task is very challenging due to the 5-year gap between the MRI scan time and the category labels (diagnosis information) and the heterogeneous relations between depression and cognitive impairment. We develop an HRL framework that can effectively fuse data-driven and handcrafted MRI features for longitudinal diagnostic discrimination in LLD.
Experimental results on $294$ subjects with MRI acquired from two studies suggest the potential of our HRL for automated diagnostic discrimination in LLD. We further analyze the feature maps and identify possibly related ROIs that may contribute to the prediction of diagnostic cognitive change in LLD. We also experimentally discuss the factors that limit deep learning models and possible solutions to promote learning performance with limited data. \vspace{-4pt} \bibliographystyle{model2-names} \biboptions{authoryear}
\section{Chain mapping} \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{A1} \caption{Chain mapping of the multi-environment Hamiltonian in the main text to a star-structure that includes excitonic and photonic degrees of freedom in the root operator.}\label{fig:chain_mapping_2} \end{figure} We here give a short overview of the chain mapping that converts an environment of multiple bosonic modes with linear coupling to a system into a chain of bosonic modes with nearest-neighbor hopping, with only the first chain site coupled to the system~\cite{Chin2011s}. Such an environment is described by the general Hamiltonian \begin{align} \hat{H}_{\mathcal{E}} &= \sum_{k=1}^{M} \left[ \omega_{k} \hat{b}_{k}^\dagger \hat{b}_{k}^{\phantom{\dagger}} + (\lambda_{k} \hat{O}_\mathcal{S}^\dagger \hat{b}_{k} + \mathrm{H.c.}) \right] , \end{align} where $\hat{O}_\mathcal{S}$ is an arbitrary system operator, and $\hat{b}_k$ the bath oscillator annihilation operators. The environment is fully characterized through the spectral density $J(\omega) = \pi \sum_k \lambda_k^2 \delta(\omega-\omega_k)$. Rewriting $\hat{H}_{\mathcal{E}} = \hat{\boldsymbol{\beta}}^{\dagger} \mathcal{C} \hat{\boldsymbol{\beta}}$, where $\hat{\boldsymbol{\beta}} = (\hat{O}_\mathcal{S}, \hat{b}_{1}, \cdots \hat{b}_{M})^{T}$, the chain mapping is obtained by tridiagonalization (e.g., with the Lanczos algorithm~\cite{Lanczos1950s}) of the coefficient matrix $\mathcal{C} = \left(\begin{smallmatrix} 0 & \boldsymbol{\lambda} \\ \boldsymbol{\lambda}^\dagger &\bar{\boldsymbol{\omega}}\end{smallmatrix}\right)$, where $\boldsymbol{\lambda}=(\lambda_1,\cdots,\lambda_{M})$ and $\bar{\boldsymbol{\omega}}=\mathrm{diag}(\omega_1,\cdots,\omega_{M})$. 
This gives $\hat{H}_{\mathcal{E}} = \hat{\boldsymbol{\beta}}'^{\dagger} \mathcal{C}' \hat{\boldsymbol{\beta}}'$, with $\hat{\boldsymbol{\beta}}' = (\hat{O}_\mathcal{S}, \hat{c}_{1}, \cdots \hat{c}_{M})^{T}$, and \begin{align} \mathcal{C}' = U^\dagger \mathcal{C} U = \begin{pmatrix} 0 & \eta & 0\\ \eta & \tilde{\omega}_1 & t_{1}\\ 0 & t_{1} & \tilde{\omega}_{2} & \ddots\\ & & \ddots & \ddots & t_{M-1}\\ & & & t_{M-1} & \tilde{\omega}_{M} \end{pmatrix}, \end{align} i.e., the desired chain Hamiltonian, \begin{multline} \hat{H}_{\mathcal{E}} = \sum_{k=1}^M \tilde{\omega}_k\hat{c}_k^{\dagger} \hat{c}_k + \eta \left(\hat{O}_\mathcal{S} \hat{c}_{1}^\dagger + \hat{O}_\mathcal{S}^\dagger \hat{c}_{1}\right)\\ + \sum_{k=1}^{M-1} t_k \left(\hat{c}^{\dagger}_{k}\hat{c}_{k+1}+\hat{c}^{\dagger}_{k+1}\hat{c}_{k}\right), \label{eq:H_chain_general} \end{multline} where the reaction coordinate that interacts with the system is given by $\hat{c}_1 = \sum_k\lambda_k\hat{b}_k/\eta$, with coupling $\eta = \sqrt{\sum_k |\lambda_k|^2}$ and frequency $\tilde{\omega}_1 = \sum_k \omega_k |\lambda_k|^2 / \eta^2$. \section{Tree tensor network} For the system treated in the main text (a collection of Rh800 molecules coupled to a cavity mode), the phononic bath $\mathcal{E}_\mathrm{v}^{(i)}$ on molecule $i$ interacts with the molecular exciton through the operator $\hat{O}_\mathcal{S} = \sigma^{(i)}_+\sigma^{(i)}_-$, while the free-space photon modes $\mathcal{E}_\mathrm{r}$ interact with the cavity photon via $\hat{O}_\mathcal{S} = a$ (in the rotating-wave approximation). For the photons, the spectral density $J_r(\omega)\propto\omega^3$ is of the Leggett form ($\propto \omega^s$, $s>0$)~\cite{Leggett1987s}, which enables closed expressions for all $\{\tilde\omega_k,t_k\}$~\cite{Prior2010s,Chin2011s}.
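The Lanczos tridiagonalization underlying the chain mapping of the previous section is easy to sketch numerically: starting from the normalized coupling vector $\boldsymbol{\lambda}/\eta$, the recursion returns the coupling $\eta$, the on-site frequencies $\tilde{\omega}_k$, and the hoppings $t_k$ of the chain Hamiltonian. The following Python/numpy illustration uses a generic discretized bath rather than the specific spectral densities employed here, and is a minimal sketch (with full re-orthogonalization for numerical stability), not the implementation used for our results:

```python
import numpy as np

def chain_map(omega, lam):
    """Map a bath {omega_k, lambda_k} to chain parameters (eta, omega_tilde, t)
    by Lanczos tridiagonalization of H = diag(omega), started from lambda/eta."""
    omega, lam = np.asarray(omega, float), np.asarray(lam, float)
    M = omega.size
    eta = np.linalg.norm(lam)            # system-chain coupling
    V = np.zeros((M, M))
    V[:, 0] = lam / eta                  # reaction-coordinate vector
    a = np.zeros(M)                      # on-site frequencies omega_tilde_k
    b = np.zeros(M - 1)                  # nearest-neighbor hoppings t_k
    for n in range(M):
        w = omega * V[:, n]              # apply H = diag(omega)
        a[n] = V[:, n] @ w
        # project out all previous Lanczos vectors (full re-orthogonalization)
        w -= V[:, :n + 1] @ (V[:, :n + 1].T @ w)
        if n < M - 1:
            b[n] = np.linalg.norm(w)
            V[:, n + 1] = w / b[n]
    return eta, a, b
```

Since the mapping is an orthogonal similarity transformation, the tridiagonal matrix built from $(a,b)$ has the same spectrum as the original bath, which provides a convenient correctness check.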
\begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{A2} \caption{Representation as a tree-like tensor network by further singular-value decompositions of the root node in the star tensor network that mimics \autoref{fig:chain_mapping_2} (left). This requires introducing auxiliary non-physical sites, where the acting Hamiltonian (right) presents empty operators (grey circles).}\label{fig:chain_mapping_tree} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=1\linewidth]{TDVMPS_N4_N16} \caption{Population dynamics for $N=4$ (red) and $N=16$ (blue) molecules for different Rabi frequencies (shown in titles). The occupations $\rho_{GG}$, $\rho_{++}$ are displayed in the upper panels, while $\rho_{--}$, $\rho_{\mathcal{D}\mathcal{D}}$ are shown in the lower ones, with distinctive line styles.}\label{fig:N4_vs_N16} \end{figure*} The application of the chain mapping to all environments yields the ``star'' Hamiltonian sketched in \autoref{fig:chain_mapping_2}. This Hamiltonian only contains nearest-neighbor coupling terms and thus can be efficiently implemented in tensor-network descriptions that share the same network topology~\cite{Schollwock2011s,Schroder2017Multis}. Since the chain mapping is linear and invertible, physical environment observables in the original basis can be obtained by applying the inverse transformation to the chain basis used in the numerical implementation. In this section, we discuss how a tree tensor network structure provides significant memory savings compared to the ``naive'' star network topology discussed above. In the star network, the ``system'' ($\mathcal{S}$) is represented by a tensor with $N+2$ dimensions (one physical index representing the coupled exciton-photon state, as well as $N+1$ internal indices representing the coupling to the environments, with maximum bond dimension $D$).
This leads to a severe memory bottleneck for large $N$, as the root tensor size scales exponentially with $N$, $\mathcal{O}(D^{N+1})$. In order to circumvent this exponential scaling while maintaining precision, we decompose the system into a tree tensor network state~\cite{Szalay2015s, Shi2006s}, where each final branch is coupled only to a single chain (see \autoref{fig:chain_mapping_tree}). This introduces additional auxiliary tensors with no physical indices, called ``entanglement renormalization tensors''~\cite{Vidal2007s}, which take in the complete chain states and pass on a reduced number (joint states) to the system. In general, developing an efficient tree model requires an explicit analysis of the entanglement topology of the state, in essence analyzing possible regroupings and decompositions of bond legs over the star tensor network. The condition for this compression to be effective is that there are correlations between the chains, i.e., that the sum of the reduced-state entropies of the individual chains is greater than their joint entropy. This idea has recently been implemented to allow the simulation of multi-environment linear vibronic models constructed from ab initio parametrizations of small molecules~\cite{Schroder2017Multis}. However, in our specific case, permutation symmetry between the (identical) molecules holds. An efficient tensor network is thus given by a structure with no privileged distribution of phononic chains, i.e., the perfect binary tree in \autoref{fig:chain_mapping_tree} with $\zeta$ levels for $2^\zeta$ molecules. For simplicity, the environment $\mathcal{E}_\mathrm{r}$ is introduced as a tensor chain connected directly to the root node, as the additional leg does not increase the memory storage critically.
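The memory saving can be made concrete by counting tensor entries. The following back-of-the-envelope sketch uses illustrative dimension conventions (a physical dimension $d$ at the root and a uniform bond dimension $D$) and contrasts the exponential root tensor of the star topology with the polynomial cost of the binary tree:

```python
def star_root_entries(N, D, d=4):
    """Star topology: the root holds one physical index and N+1 bond indices."""
    return d * D ** (N + 1)

def tree_entries(N, D, d=4):
    """Perfect binary tree over N chains: about N-1 three-leg
    entanglement-renormalization tensors of size D**3, plus a root tensor
    that also carries the physical index and the photon-chain leg."""
    return (N - 1) * D ** 3 + d * D ** 3
```

For $N=16$ and $D=20$, this is the difference between roughly $5\times10^{22}$ entries in the star root tensor and about $10^5$ entries across the tree tensors.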
Once the quantum wavefunction and the global Hamiltonian are represented in tree form, the time-dependent variational principle algorithm~\cite{Haegeman2011s} can be implemented, generalizing the single-chain algorithm~\cite{Haegeman2016s} by recursively optimizing each of the child tensors of a given node in the tree tensor network, and the environmental chains once the leaves of the tree are reached. More details on this approach can be found in~\cite{Schroder2017Multis, Schroder2017Tensors}. \section{Ensemble size effects} In this section, we discuss the effect of changing the number of molecules in the time evolution. To this end, the system populations initialized in the vibration-free upper polariton $\ket{+}$ for $N=4$ and $N=16$ molecules are shown in \autoref{fig:N4_vs_N16}. In all cases, the increased number of dark states for larger $N$ leads to more efficient population transfer, both when driven through off-resonant and multi-phonon processes (for $\Omega_R=1~$eV, \autoref{fig:N4_vs_N16}a), and when a vibrational transition is (close to) resonant with transitions between polaritons and dark states ($\Omega_R=0.5~$eV and $\Omega_R=0.3$~eV, \autoref{fig:N4_vs_N16}b-c). An analytical calculation of the rates in the Markovian limit (the Bloch-Redfield-Wangsness approach in the secular approximation) predicts that, for a Rabi frequency below the vibrational cutoff ($\Omega_R<\omega_c$), the global rate for a transition from the upper polariton into the dark-state subspace $\mathcal{D}$ scales as $(N-1)/N$, while the rate for the transitions $\rho_{++} \to \rho_{--}$ and $\rho_{\mathcal{D}\mathcal{D}} \to \rho_{--}$ is suppressed as $\sim 1/N$~\cite{DelPino2015Quantums}. However, our results in \autoref{fig:N4_vs_N16}b,c indicate that dark-state decay happens at comparable rates for $N=4$ and $N=16$.
Additionally, the time-dependent oscillation pattern is quite similar for $N=4$ and $N=16$, with the threefold oscillation $\rho_{++} \leftrightarrow \rho_{\mathcal{D}\mathcal{D}} \leftrightarrow \rho_{--}$ determined by the phononic reaction-coordinate dynamics, but largely independent of ensemble size. For $\Omega_R=0.1$~eV, shown in \autoref{fig:N4_vs_N16}d, the ``universality'' of the early-time oscillation frequency is preserved, but in contrast to the larger Rabi splittings, the rate at which dark states decay into the lower polariton again decreases with $N$. We attribute this to the breakdown of strong coupling, which leads to the initial state $\ket{+}$ having contributions from strongly vibrationally-dressed dark states that only decay inefficiently. \section{Comparison with Holstein--Tavis--Cummings model} \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{EXACT_vs_TDVMPS} \caption{Comparison of the cavity population dynamics under the single-phonon-mode description of $N=2$ molecules (HTC) between exact time propagation and a TDVMPS calculation. For this test, $\Delta=0.112$~eV and $\tilde{\omega}_1=0.154$~eV.}\label{fig:EXACT_vs_TDVMPS} \end{figure} \begin{figure}[b]\setlength{\hfuzz}{1.1\columnwidth} \begin{minipage}{\textwidth} \centering \includegraphics[width=1\linewidth]{TDVMPS_HTC_N16} \caption{Comparison of population dynamics for $N=16$ between the full model (blue) and a single-phonon-mode description of the molecules (HTC, red). Parameters as in \autoref{fig:N4_vs_N16}.}\label{fig:N16_vs_HTC} \end{minipage} \end{figure} In this section, we first check the reliability of the numerical method by comparing the TDVMPS time evolution of the lossless Holstein-Tavis-Cummings (HTC) model, which includes only a single vibrational mode per molecule, with the result of an exact computation of the time propagation obtained with the open-source library QuTiP~\cite{Johansson2013s}.
The parameters of the HTC model are chosen to reproduce the reaction-coordinate frequency of the Rh800 molecule ($\omega_\mathrm{HTC}=\tilde{\omega}_1=0.181$~eV) and the total reorganization energy, $\lambda_\mathrm{HTC} = \sqrt{\omega_\mathrm{HTC}\Delta}$, with $\Delta=\sum_k \lambda_k^2/\omega_k = 0.0356$~eV. This mapping has been found to give the most accurate lower phonon-polariton state~\cite{DelPino2018Grounds}. As displayed by the reduced population $\rho_{11}$ in \autoref{fig:EXACT_vs_TDVMPS}, the TDVMPS time evolution almost exactly (to within the linewidth of the plot) reproduces the oscillatory features arising from the light-matter coupling and vibronic-induced effects in the exact calculation. This motivates the extension of the approach to the exact multi-mode dynamics discussed in the main text, a regime where exact time propagation becomes infeasible in practice. We address in the following the question of whether the full many-mode time dynamics of the system can be accounted for by means of the simplified HTC model with the parameters described above. As seen in \autoref{fig:N16_vs_HTC}a, while the HTC model reproduces the initial dynamics in the first few fs reasonably well (where the reaction coordinate could be assumed to dominate the collective response), it consistently overestimates the coherent oscillations observed for times larger than about $10~$fs. In particular, it fails to correctly predict the excitation trapping in the dark-state subspace and instead leads to enduring oscillations that are not dissipated into $\mathcal{D}$ but only lost into photons. We have checked that choosing different parameters (e.g., $\lambda_\mathrm{HTC}=\eta$) does not improve the agreement significantly (not shown).
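The single-mode HTC parameters follow directly from the bath discretization. A short numpy sketch with illustrative bath values (not the Rh800 parametrization used above):

```python
import numpy as np

# Illustrative bath parameters {omega_k, lambda_k} in eV; not the Rh800 values.
omega = np.array([0.10, 0.15, 0.20, 0.25])
lam = np.array([0.03, 0.05, 0.04, 0.02])

delta = np.sum(lam**2 / omega)                       # Delta = sum_k lambda_k^2 / omega_k
omega_htc = np.sum(omega * lam**2) / np.sum(lam**2)  # reaction-coordinate frequency tilde-omega_1
lam_htc = np.sqrt(omega_htc * delta)                 # effective single-mode coupling lambda_HTC
```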
\vspace{147mm} \section{Convergence tests}\label{sec:conv} Here we provide convergence checks of the tree-tensor-network simulations, using the reduced populations of the excited cavity ($\rho_{11}$) and vacuum ($\rho_{GG}$) states as benchmark observables. The relevant parameters in this analysis are \textit{i)} the maximum bond dimension of the tensor network $D$, \textit{ii)} the length of the chains for the environments $\mathcal{E}_\mathrm{v},\mathcal{E}_\mathrm{r}$, denoted $L$ (and set equal to the number of photonic/phononic modes, $L=M_v=M_r$), \textit{iii)} the time step $\Delta t$ of the time evolution, and finally \textit{iv)} the smallest singular value kept during the calculation, denoted as $\mathrm{sv}^{\mathrm{tol}}$. For reference, the values chosen in the main text are $D=20$, $L=350$, $\Delta t=0.1$ eV$^{-1}$ (0.066 fs) and $\mathrm{sv}^{\mathrm{tol}}=10^{-4}$. In the following analysis, we sweep each of these parameters separately, leaving the rest at these given values. In addition, it will be convenient to define the maximum relative error between two solutions $\rho_{GG}^{\mathrm{sol}_1},\rho_{GG}^{\mathrm{sol}_2}$ where a single parameter is varied, $\epsilon=\max_t|\rho_{GG}^{\mathrm{sol}_1}(t)-\rho_{GG}^{\mathrm{sol}_2}(t)|/\rho_{GG}^{\mathrm{sol}_2}(t)$. \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{convergence_D} \caption{Comparison of population dynamics for $N=4$ in the full model with varying bond dimension $D$. Other parameters as in \autoref{fig:N4_vs_N16}.}\label{fig:conv_D} \end{figure} In particular, $D$ corresponds to the number of `auxiliary' states that encode the quantum correlations between neighboring degrees of freedom, thus setting a cutoff for the maximum entanglement entropy allowed in a given bond between two physical or entanglement renormalization nodes ($S_{\mathrm{max}}\sim\log D$~\cite{Schollwock2011s}).
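The error measure $\epsilon$ defined above is evaluated directly on sampled time traces; a minimal numpy sketch, assuming both solutions are stored on the same time grid:

```python
import numpy as np

def max_rel_error(rho_1, rho_2):
    """epsilon = max_t |rho_1(t) - rho_2(t)| / rho_2(t) for two time traces."""
    rho_1, rho_2 = np.asarray(rho_1, float), np.asarray(rho_2, float)
    return np.max(np.abs(rho_1 - rho_2) / rho_2)
```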
In \autoref{fig:conv_D}, we find acceptable results for $D>15$ for all Rabi frequencies $\Omega_R$. Convergence of the populations is markedly more demanding in terms of $D$ for the cases $\Omega_R=0.5$ and $0.1$ eV, which are precisely the cases where Markovian (Bloch-Redfield-Wangsness) and non-Markovian time dynamics present the strongest relative deviations (see main text). This observation establishes a direct link between a large amount of system-environment correlations and non-Markovianity in the time evolution. Moreover, an analysis of the relative errors between the case $D=20$, analyzed in the main text, and the best-converged case ($D=25$) shows a maximum deviation of $\epsilon<1\%$ during the first $200$ fs. Maximum dimensions $D^{\mathrm{OBB}}$ are also set for the bonds between the chain tensors (for the environments $\mathcal{E}_\mathrm{v}^{(i)},\mathcal{E}_r$) and the matrices mapping the optimal boson basis (OBB) into physical bosonic states~\cite{Guo2012s}. In practice, a value of $D^{\mathrm{OBB}}=50$ is sufficient to converge the dynamics in the main text, while the total phonon populations in the output of the calculation (sum over chain occupations) typically stay below 1 for all cases analyzed in the main text. \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{convergence_L} \caption{Comparison of population dynamics for $N=4$ for different chain lengths $L$ of the environments $\mathcal{E}_\mathrm{v},\mathcal{E}_\mathrm{r}$. Other parameters as in \autoref{fig:N4_vs_N16}.}\label{fig:conv_L} \end{figure} A similar analysis of the time dynamics as a function of the chain length $L$ (\autoref{fig:conv_L}) shows artifacts in the density matrix $\hat{\rho}_\mathcal{S}$ at a time that can be estimated empirically as $\sim L$ fs, with a weak dependence on the Rabi frequency.
These artifacts are caused by the arrival of (unphysically) reflected waves at the end boundaries of the finite photonic and phononic chains, reminiscent of TDVMPS simulations of the spin-boson model~\cite{Schroder2016Simulatings}. In particular, inspection of the chain populations for the different environments reveals that the limiting factor is the faster group velocity (steeper dispersion curve in chain wavevector space) of the photonic wavepackets. In contrast with the time evolution of system observables, the environmental dynamics is profoundly sensitive to the finite boundaries (e.g., reflection of a photonic excitation implies an artificial breakdown of irreversible emission dynamics after the inverse chain mapping), requiring on the order of twice this chain length to calculate populations properly. In particular, to retrieve the emission spectrum dynamics up to $t>100$ fs in the main text, we employ a large value $L=350$, preventing finite-size effects in the simulations. \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{convergence_dt} \caption{Comparison of population dynamics for $N=4$ for decreasing time step $\Delta t$. Other parameters as in \autoref{fig:conv_D}.}\label{fig:conv_dt} \end{figure} During the simulation, the many-body state is reconstructed after each time step $\Delta t$ by application of the full time-evolution operator. The error accrued in this propagation is discussed in detail in Ref.~\cite{Haegeman2016s} and references therein, which show that the error arises only from the numerical method employed to integrate the TDVP equations ($\mathcal{O}(\Delta t^3)$ for a left-right sweep along a single chain)~\cite{Haegeman2011s,Schroder2016Simulatings}.
In addition, it has recently been pointed out that TDVMPS is accurate for local degrees of freedom at very long times even with reduced $D$, as the projection technique (leading to the TDVP approach) yields an effective Hamiltonian that respects the underlying conservation laws of the system~\cite{Leviatan2017s}. In order to quantify the error accrued during a global update of the tensor network by $\Delta t$, convergence plots as a function of the time step are shown in \autoref{fig:conv_dt}. Here $\rho_{GG}$ quickly approaches the asymptotically converged value for $\Delta t<0.4$ eV$^{-1}$ (0.33 fs) (see \autoref{fig:conv_dt}), with relative errors that can be lowered below $\epsilon=0.1\%$ for $\Delta t=0.1$ eV$^{-1}$ (the value in the main text), for times shorter than 200 fs. In the algorithm implementation, bond dimensions are truncated or expanded adaptively, according to the criterion that singular values $\{{\nu}_l\}$, which measure the entanglement between either physical or entanglement renormalization nodes (entanglement entropy $S=-\sum_l|\nu_l|^2\log(|\nu_l|^2)\leq S_{\mathrm{max}}$), below a value $\mathrm{sv}^{\mathrm{tol}}$ are discarded, such that only the dominant configurations required to reproduce the entangled many-body wave function are kept during the calculation. The same truncation scheme is applied to the bonds between the optimal-boson-basis matrices and the chain tensors. \autoref{fig:conv_SVD} reveals converged populations for any value below $\mathrm{sv}^{\mathrm{tol}}=10^{-4}$ (the value for the main-text calculations), and the relative-error analysis shows maximum deviations of the state populations on the order of $\epsilon\sim1\%$ in comparison with the most compute-intensive case, $\mathrm{sv}^{\mathrm{tol}}=10^{-7}$, for times shorter than 200 fs.
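This adaptive truncation can be illustrated in a few lines: singular values below $\mathrm{sv}^{\mathrm{tol}}$ are discarded and the entanglement entropy is evaluated from the renormalized Schmidt weights. The following is a sketch, not the actual implementation (which, in addition, caps the bond dimension at $D$):

```python
import numpy as np

def truncate_bond(sv, sv_tol=1e-4):
    """Discard singular values below sv_tol and return the kept values
    together with the entanglement entropy S = -sum_l p_l log(p_l),
    where p_l are the renormalized Schmidt weights |nu_l|^2."""
    sv = np.sort(np.abs(np.asarray(sv, float)))[::-1]
    kept = sv[sv > sv_tol]
    p = kept**2 / np.sum(kept**2)
    return kept, -np.sum(p * np.log(p))
```

For a bond with $D$ equal Schmidt weights this reproduces the maximal entropy $S_{\mathrm{max}} = \log D$ quoted above.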
\begin{figure}[htb] \centering \includegraphics[width=1\linewidth]{convergence_SVD} \caption{Comparison of population dynamics for $N=4$ for decreasing tolerance for singular value decomposition. Parameters as in \autoref{fig:N4_vs_N16}.}\label{fig:conv_SVD} \end{figure}
\section{Introduction} \label{SecIntroduction} In the presence of strong local constraints, certain classical systems exhibit so-called `Coulomb phases', where correlation functions have power-law forms and nontrivial directional dependence.\cite{Youngblood,Henley,Huse,Bramwell,Kasteleyn} These phases are of considerable theoretical interest because of their unusual properties, and are also of direct experimental relevance. Examples include frustrated magnets such as spin ice \cite{Bramwell} and molecular dimers adsorbed onto surfaces.\cite{Blunt} Coulomb phases stand in contrast to ordered phases characterized by a broken symmetry and an associated order parameter.\cite{Landau} Continuous transitions between Coulomb and ordered phases present a novel situation where the behavior at the transition cannot be captured purely in terms of a Ginzburg-Landau theory of the fluctuations of this order parameter. Instead, a complete description of the transition requires the long-range correlations in the Coulomb phase to be taken into account.\cite{Bergman,Alet,Jaubert,SpinIce,Letter,Pickles} Classical dimer models \cite{Fowler,Kasteleyn} are among the simplest possible model systems that exhibit Coulomb phases, and the discovery of a direct transition into an ordered crystalline phase in the dimer model on the cubic lattice has stimulated considerable interest.\cite{Alet,Misguich,Letter,Charrier,Chen,Papanikolaou} The question of whether the transition is continuous or first order remains controversial, but it is clear that the correlation length at the transition is either divergent or at least several orders of magnitude larger than the lattice spacing. The long-distance properties near the transition can therefore be described in terms of a continuum theory including only the relevant degrees of freedom. 
It has been suggested\cite{Letter,Charrier,Chen} that the appropriate description is given in terms of a noncompact \U1 gauge theory with \SU2-symmetric matter fields, or noncompact $CP^1$ (NC$CP^1$). In this paper, we analyze the transition in the classical dimer model on the cubic lattice by using a mapping to an equivalent quantum model in two spatial dimensions. A brief outline of this mapping and the predictions that result has been presented previously.\cite{Letter} We use the standard approach of relating classical statistical mechanics in $d$ dimensions to quantum mechanics in $d-1$ dimensions, which in principle provides an exact identity between the partition functions in the two cases. By an appropriate choice of the mapping, we represent the interacting dimers on the links of a cubic lattice as hard-core bosons on the sites of a kagome\ lattice. The Coulomb phase then corresponds to the condensed phase of the bosons, and the power-law correlations can be understood in terms of the coupling to the phonon mode of the superfluid. The thermal transition into the dimer crystal is equivalent to a (zero-temperature) quantum phase transition from the superfluid to a Mott insulator at fractional filling. Interestingly, this belongs to a class of unconventional quantum phase transitions considered by Balents et al.\cite{Balents} In these cases, the phases on the two sides of the transition have different order parameters, and a na\"\i ve application of the Landau-Ginzburg-Wilson (LGW) paradigm predicts that a continuous transition requires simultaneous fine-tuning of two independent parameters. Balents et al.\cite{Balents}\ instead propose a critical theory in terms of dual vortex degrees of freedom, which allows for a generic continuous transition between the two phases. 
Applying this approach to our effective quantum model gives a continuum gauge theory for the transition that is the same as has been obtained using a direct mapping carried out in three spatial dimensions.\cite{Charrier,Chen} We have previously\cite{SpinIce} applied a similar approach to a model of nearest-neighbor spin ice, where a transition takes place from a high-temperature Coulomb phase to a low-temperature saturated phase, which has neither power-law correlations nor symmetry breaking.\cite{Jaubert} The mapping again leads to a theory of quantum bosons, but in that case the thermal phase transition maps to the standard quantum phase transition between the superfluid and a vacuum state, described by the conventional LGW approach. In the remainder of this section, we define the cubic dimer model and review its phase structure. In Section~\ref{SecMapping}, we introduce the mapping from the classical dimer model to a quantum model of bosons on the kagome\ lattice. We then show, in Section~\ref{SecCoulombPhase}, that the properties of the Coulomb phase of the classical model can be understood in terms of those of the condensed phase of the quantum bosons. In Section~\ref{SecTransition}, we address the phase transition and use a dual picture in terms of vortices to derive a continuum theory to describe the critical properties. We conclude in Section~\ref{SecDiscussion} with discussion. In the Appendix, we briefly consider modifications to the dimer model that lead to the appearance of an intermediate disordered phase and show how this can be understood in terms of the quantum mapping. \subsection{Model} \label{SecModel} We treat a model of classical dimers on the links of a simple cubic lattice. In a given configuration of the classical model, each link is occupied by either one dimer or none, with the close-packing constraint that every site of the lattice has a total of precisely one dimer on the adjoining links. 
We define the variables $d_\mu({\mathbf{r}}) \in \{0,1\}$, giving the number of dimers on the link joining the sites at ${\mathbf{r}}$ and ${\mathbf{r}} + \boldsymbol{\delta}_\mu$, where $\boldsymbol{\delta}_\mu$ ($\mu \in \{x,y,z\}$) is a basis vector of the cubic lattice. The close-packing constraint can then be expressed as \beq{EqClosePacking} \sum _\mu \left[ d_\mu({\mathbf{r}}) + d_\mu({\mathbf{r}} - \boldsymbol{\delta}_\mu)\right] = 1\punc{,} \end{equation} for all sites ${\mathbf{r}}$. Each configuration is assigned an energy (and hence Gibbs-Boltzmann weight) that favors the parallel alignment of the dimers on neighboring links. In the simplest case, the energy of a configuration is $\mathcal{E} = -n_\parallel$, where $n_\parallel$ is the total number of plaquettes (of any orientation) with parallel dimers. In a system with periodic boundary conditions and an even number of sites in all three directions, the minimum of $\mathcal{E}$ occurs when all dimers are placed on parallel links, giving $n_\parallel = N$, the total number of sites. There are six such configurations; one example has $d_\mu({\mathbf{r}}) = 1$ for $\mu = z$ and $r_z$ odd, and zero otherwise. We expect our continuum theory to be equally applicable to other potentials with the same symmetry that also favor columnar crystalline order. At temperature $T = 0$, the system minimizes $\mathcal{E}$ by selecting one of these six configurations, breaking both the translational and rotational symmetries of the lattice. We define the `magnetization' order parameter \beq{EqOrderParameter} m_\mu({\mathbf{r}}) = \frac{1}{2}(-1)^{r_\mu}\left[ d_\mu({\mathbf{r}}) - d_\mu({\mathbf{r}} - \boldsymbol{\delta}_\mu) \right]\punc{,} \end{equation} so that the six ground states have $\mathbf{m} \in \{ \pm \boldsymbol{\delta}_x , \pm \boldsymbol{\delta}_y , \pm \boldsymbol{\delta}_z \}$ for all ${\mathbf{r}}$.
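These definitions are easily checked numerically on a small periodic lattice. The minimal sketch below (illustrative variable names; the overall normalization of $\mathbf{m}$ relative to the unit vectors $\boldsymbol{\delta}_\mu$ is a convention) verifies the close-packing constraint of \refeq{EqClosePacking} and evaluates the order parameter of \refeq{EqOrderParameter} for the columnar configuration described in the text:

```python
import numpy as np

L = 4  # even linear size; periodic boundary conditions
# d[mu, x, y, z] = occupation of the link from r to r + delta_mu
d = np.zeros((3, L, L, L), dtype=int)
d[2][:, :, 1::2] = 1  # columnar state: d_z(r) = 1 for r_z odd

# Close packing, Eq. (EqClosePacking):
# sum_mu [d_mu(r) + d_mu(r - delta_mu)] = 1 at every site
touching = sum(d[mu] + np.roll(d[mu], 1, axis=mu) for mu in range(3))
assert np.all(touching == 1)

# Order parameter, Eq. (EqOrderParameter)
coords = np.indices((L, L, L))
m = np.array([0.5 * (-1.0) ** coords[mu]
              * (d[mu] - np.roll(d[mu], 1, axis=mu)) for mu in range(3)])

# m vanishes in the x and y directions and is uniform along z,
# i.e. this state orders along one of the +/- delta_z directions
assert np.allclose(m[0], 0) and np.allclose(m[1], 0)
assert np.allclose(m[2], m[2].flat[0]) and m[2].flat[0] != 0
```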
For small positive $T$, a low-temperature expansion predicts that $\langle\mathbf{m}\rangle$ will remain nonzero and directed along one of the cubic axes. In the opposite limit, $T \rightarrow \infty$, there is equal statistical weight for all configurations obeying the close-packing constraint, the number of which grows exponentially with $N$. This limit has been considered by Huse et al.,\cite{Huse} who showed that the system exhibits a Coulomb phase, where the correlation functions are algebraic at long distances. In particular, for the connected part of the dimer correlation function, one finds the standard 3D dipolar form,\cite{Huse} \beq{EqDipolarCorrelations} \langle d_\mu({\mathbf{r}}) d_\nu({\boldsymbol{0}}) \rangle _\mathrm{c} \sim \eta _{\mathbf{r}} \frac{3 r_\mu r_\nu - r^2 \delta _{\mu \nu}}{r^5}\punc{,} \end{equation} where $\eta_{\mathbf{r}} = (-1)^{\sum_{\mu}\!r_\mu}$ is $\pm 1$ on the two sublattices. This form for the correlation functions is expected to persist for large finite temperatures. The high- and low-temperature phases cannot be smoothly connected and must therefore be separated by one or more phase transitions. High-precision Monte Carlo simulations show that there is in fact a single phase transition at a critical temperature\cite{Alet} $T\sub{C} \approx 1.675$, and show that this is either continuous or very weakly first order. In either case, the correlation length at the transition is much larger than the lattice spacing and so a continuum description should be expected to capture the long-wavelength properties near the transition. As we have noted, we expect our theory to be equally applicable in the presence of modifications that maintain the symmetry of the configuration energy and the ordered states. 
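The directional dependence of \refeq{EqDipolarCorrelations} is easily made concrete. The short sketch below (purely illustrative) evaluates the dipolar tensor $D_{\mu\nu}(\mathbf{r}) = (3 r_\mu r_\nu - r^2 \delta_{\mu\nu})/r^5$ and checks that it is traceless and changes sign between longitudinal and transverse separations:

```python
import numpy as np

def dipolar(r):
    """Dipolar tensor D_{mu nu}(r) = (3 r_mu r_nu - r^2 delta_{mu nu}) / r^5."""
    r = np.asarray(r, dtype=float)
    r2 = r @ r
    return (3.0 * np.outer(r, r) - r2 * np.eye(3)) / r2 ** 2.5

# Traceless: the angular average of the correlations vanishes
assert abs(np.trace(dipolar([1.0, 2.0, 3.0]))) < 1e-12

# Sign depends on direction: positive along the dimer axis, negative transverse
assert dipolar([0, 0, 3])[2, 2] > 0
assert dipolar([3, 0, 0])[2, 2] < 0
```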
One can also consider modifications of the model that reduce the cubic symmetry of the lattice, thereby reducing the symmetry of the effective quantum Hamiltonian and changing the degeneracy of the ordered states.\cite{Chen} We consider one example in the Appendix, in which the result is the appearance of an intermediate phase between the Coulomb and ordered phases. \section{Mapping to kagome\ bosons} \label{SecMapping} We now describe the first stage of our derivation of the continuum theory for the transition, in which we map from the statistics of classical dimers to a quantum model. We do so by using the standard mapping between classical statistical mechanics in $d$ dimensions and quantum mechanics in $d-1$ dimensions, in which one spatial dimension of the classical problem is interpreted as the (imaginary) time direction for the quantum problem. \subsection{Definition of mapping} \label{SecDefineMapping} In defining the mapping, we have the freedom to choose the time direction for the quantum problem, and we do so in a way that does not distinguish between the ordering patterns in the low-temperature phase. While one of the cubic axes might seem to be a natural choice, taking this would necessarily distinguish those cases where $\langle \mathbf{m} \rangle$ is parallel to the time direction from those where it is perpendicular. Instead, we choose the $[111]$ direction and define the quantum imaginary time as $\tau = \sum _\mu r_\mu$. The mapping follows the standard procedure of using a transfer matrix to connect the degrees of freedom in one layer of the system to those in the next, followed by interpretation of the transfer matrix as the exponential of a quantum Hamiltonian. We first divide the links of the cubic lattice into layers by the imaginary-time coordinates of their midpoints. Each layer is treated as a time slice and the rows and columns of the transfer matrix ${\mathcal{T}} _1$ are labeled by the configurations of two adjacent layers. 
The configurations of a given $(111)$ plane are mapped onto the basis states of a quantum Hilbert space by simply identifying the presence (or absence) of a dimer with the presence (absence) of a boson. \begin{figure} \putinscaledfigure{Cubic111Projection} \caption{\label{FigCubic111Projection}(color online) Projection of the cubic lattice onto a $(111)$ plane, with the kagome\ lattice superimposed. The cubic sites are shown by the numbers $0$, $1$ and $2$, giving the quantum imaginary-time coordinate $\tau \bmod 3$ (see main text). Points with solid lines show the sites of the kagome\ lattice, situated at the centres of the cubic bonds (dashed lines) between sites with $\tau \bmod 3 = 1$ and $2$; they therefore lie in planes with $\tau \bmod 3 = \frac{3}{2}$. The larger red circles shown superimposed on some of the kagome\ sites illustrate the occupied sites in one of the six degenerate ordering patterns, corresponding to the six ordered states of the cubic dimers. The elementary unit vectors of the kagome\ lattice, $\mathbf{e} _1$ and $\mathbf{e} _2$, are shown with dashed blue arrows. The coordinate axes shown in the bottom-right of the figure are the projection of the cubic axes onto the $(111)$ plane.} \end{figure} In order for the resulting quantum model to describe lattice bosons, we must ensure two features. First, we require the sites of the lattice in one time slice to correspond to those in the next, and second, we require conservation of particle number. As far as the first requirement is concerned, Figure~\ref{FigCubic111Projection} shows that the midpoints of cubic links with a given value of $\tau$ form a kagome\ lattice, and that the lattices formed by adjacent layers are displaced with respect to each other. Our first requirement can nonetheless be satisfied by taking a product of (any multiple of) three elementary transfer matrices ${\mathcal{T}} _1$ to define a single time step. 
As illustrated in Figure~\ref{FigCubic111Projection}, planes separated by three units in the time direction coincide.\footnote{An alternative approach would be to define a transfer matrix consisting of ${\mathcal{T}} _1$ followed by a translation (or other transformation in the plane) to return the sites to their original locations. Such a choice would necessarily reduce the symmetry of the quantum Hamiltonian that results.} (Each of the three elementary matrices has a different form because of the relative displacements between successive kagome\ layers, but for simplicity we denote them all without distinction by ${\mathcal{T}} _1$.) We also require conservation of particle number, meaning that the number of bosons in a given time slice should be constrained to equal the number in the following. The close-packing constraint in \refeq{EqClosePacking} implies that \beq{EqClosePacking2} \sum _{{\mathbf{r}} \in \tau} \sum _\mu d_\mu({\mathbf{r}}) = \sum _{{\mathbf{r}} \in (\tau - 1)} \left [ 1 - \sum _\mu d_\mu({\mathbf{r}}) \right]\punc{,} \end{equation} where ${\mathbf{r}} \in \tau$ indicates a sum over all cubic sites in imaginary-time slice $\tau$. This equation states that if the kagome\ plane at $\tau - \frac{1}{2}$ has $n$ bosons and a total of $A$ sites, then the plane at $\tau + \frac{1}{2}$ will have $\frac{A}{3} - n$ bosons. To define a transfer matrix that conserves particle number, we must therefore take a product of an even number of elementary transfer matrices ${\mathcal{T}} _1$. To satisfy the two requirements, we must therefore define the full transfer matrix ${\mathcal{T}} = {\mathcal{T}} _1^{\delta\tau}$ with $\delta\tau$ an integer divisible by both $3$ and $2$, and so we take $\delta\tau = 6$. 
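The particle-number counting that follows from \refeq{EqClosePacking2} can be checked directly on a small periodic lattice. In this minimal sketch (slice labels taken modulo $L$; the columnar configuration is used for concreteness), the number of dimers emanating forward from slice $\tau$ obeys $n(\tau+\frac{1}{2}) = N_\tau - n(\tau-\frac{1}{2})$, with $N_\tau$ the number of cubic sites in the slice:

```python
import numpy as np

L = 4
d = np.zeros((3, L, L, L), dtype=int)
d[2][:, :, 1::2] = 1  # columnar close-packed state: d_z(r) = 1 for r_z odd

x, y, z = np.indices((L, L, L))
tau = (x + y + z) % L  # imaginary-time slice of each cubic site (periodic)

# Bosons on the kagome plane at tau + 1/2 = dimers on forward links of slice tau
forward = d.sum(axis=0)
n_plane = np.array([int(forward[tau == t].sum()) for t in range(L)])
sites_per_slice = np.array([int((tau == t).sum()) for t in range(L)])

# Close packing implies n(tau + 1/2) = N_tau - n(tau - 1/2)
for t in range(L):
    assert n_plane[t] == sites_per_slice[t] - n_plane[(t - 1) % L]
```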
The transfer matrix ${\mathcal{T}}$ therefore has rows and columns labeled by the configurations of two planes separated by $\delta\tau = 6$, and its elements give the statistical weights for those configurations, summed over all possible configurations of the intermediate planes. The partition function for the classical problem is given by ${\mathcal{Z}} = \Tr {\mathcal{T}}^{L/\delta \tau}$, where $L$ is the length of the system in the $[111]$ direction and periodic boundary conditions are assumed in this direction. The effective quantum Hamiltonian $\mathcal{H}$ is defined by ${\mathcal{T}} = \mathrm{e}^{-\mathcal{H} \delta\tau}$, so that ${\mathcal{Z}}$ is given by the quantum partition function at inverse temperature $\beta \propto L$. The classical thermodynamic limit is therefore given by the quantum zero-temperature limit, $\beta \rightarrow \infty$, and we will always work in this limit. \subsection{Quantum Hamiltonian} \label{SecQuantumHamiltonian} For a finite lattice, it is in principle possible to find the transfer matrix ${\mathcal{T}}$ exactly, by considering all allowed configurations of two planes separated by $\delta \tau = 6$ and summing the Boltzmann weights of all possible arrangements of the intermediate planes. The quantum Hamiltonian $\mathcal{H}$ can then be found by taking the (matrix) logarithm of ${\mathcal{T}}$. For even fairly small lattices, however, the number of configurations is large, making this a computationally difficult problem, and we have not attempted to find ${\mathcal{T}}$ or $\mathcal{H}$ exactly. Since we are interested in the long-wavelength properties of the model near the transition, we will instead use general considerations such as symmetry to determine the form of the Hamiltonian. As we have noted above, the Hamiltonian describes the dynamics of bosons on a kagome\ lattice, with conserved particle number. 
The hard-core nature of the dimers implies that the bosons have a similar hard-core constraint, restricting occupation numbers to zero or one on any site of the lattice. Further, the close-packing constraint in \refeq{EqClosePacking}, which ensures particle-number conservation, also implies that any triangle of the kagome\ lattice, of either orientation, can be occupied by at most one boson. This implies a nearest-neighbour repulsion of infinite magnitude. We define the number operator for kagome\ site $i$ as $n_i$, and (hard-core) bosonic creation and annihilation operators $b^\dagger _i$ and $b^{\phantom{\dagger}}_i$. The most general quantum Hamiltonian, with conserved particle number and obeying the close-packing constraint, can then be written in the form \beqm{EqQuantumHamiltonian} \mathcal{H} = - \mu \sum _i n_i + \frac{U}{2} \sum _i n_i(n_i - 1) + U\sum _{\langle i j \rangle} n_i n_j \\{}+ \sum_{i,j} V_{ij} n_i n_j - \sum _{i,j} t_{ij} b^\dagger_i b^{\phantom{\dagger}}_j + \cdots\punc{,} \end{multline} where the ellipsis represents other terms, such as three-body interactions and correlated hopping terms,\cite{SpinIce} whose precise form is not important. The hard-core constraints have been represented by interactions of strength $U \rightarrow \infty$, the coefficients $V_{ij}$ describe a further-neighbor repulsion, and $t_{ij}$ is the hopping. While the Hamiltonian conserves particle number, summing over all classical configurations means that all particle-number sectors should be included in the quantum-mechanical trace. The effective quantum problem is therefore defined in the grand-canonical ensemble, with a chemical potential $\mu$ that, like the other coefficients, emerges as an effective parameter. 
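The strength of the triangle constraint can be seen by brute-force enumeration on a small cluster. The sketch below uses a hypothetical $2\times2$ kagome torus with an assumed labeling of the up and down triangles; it checks that every site belongs to exactly one triangle of each orientation, and that the constraint caps the boson filling at $\frac{1}{3}$, twice the ordered-state filling of $\frac{1}{6}$:

```python
from itertools import combinations

cells = [(i, j) for i in range(2) for j in range(2)]
sites = [(s, i, j) for (i, j) in cells for s in range(3)]  # s: 0=a, 1=b, 2=c

# Assumed triangle labeling on the 2x2 torus (cell indices modulo 2)
up = [frozenset({(0, i, j), (1, i, j), (2, i, j)}) for (i, j) in cells]
down = [frozenset({(1, i, j), (0, (i + 1) % 2, j), (2, (i + 1) % 2, (j - 1) % 2)})
        for (i, j) in cells]
triangles = up + down

# Every kagome site sits in exactly one up- and one down-triangle
for s in sites:
    assert sum(s in t for t in up) == 1 and sum(s in t for t in down) == 1

def allowed(occ):
    """At most one boson on each triangle of either orientation."""
    return all(len(t & occ) <= 1 for t in triangles)

max_n = max(n for n in range(len(sites) + 1)
            if any(allowed(frozenset(c)) for c in combinations(sites, n)))
assert max_n == 4  # filling 1/3 on 12 sites; the ordered states use half of this
```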
A generalization of the dimer model\cite{Charrier} to the case where the occupation number of a given link is allowed to take on values other than $0$ or $1$ would result in a similar quantum Hamiltonian, but with a correspondingly expanded on-site Hilbert space. An alternative generalization of the dimer model allows for `monomers', where the close-packing constraint in \refeq{EqClosePacking} can be violated to permit a site touched by zero or multiple dimers (with a finite energy cost). Such a modification breaks conservation of particle number and adds to $\mathcal{H}$ terms such as $\sum _i (J_i b_i^\dagger + J_i^* b_i)$ which eliminate the phase-rotation symmetry. Note that, as is generally the case for effective quantum Hamiltonians used to describe classical partition functions, it is not necessary for $\mathcal{H}$ to be hermitian.\cite{Hatano} For example, the hopping coefficients are in general not symmetric, $t_{ij} \neq t_{ji}^*$, following from the fact that choosing a particular $(111)$ plane to define the quantum problem breaks time-reversal symmetry. As we have discussed previously in regard to a related problem in spin ice,\cite{SpinIce} the nonhermitian terms are crucial for reproducing the correct spatial dependence of the long-range correlation functions. \subsubsection*{Locality} For the analysis that follows, an important condition on the Hamiltonian $\mathcal{H}$ is that it should be local, at least when projected into a suitable subspace of low-energy states. While the classical configuration energy $\mathcal{E}$ and the close-packing constraint in \refeq{EqClosePacking} are local, this does not necessarily imply the same for the transfer matrix or quantum Hamiltonian. We have no general proof that the locality condition is satisfied, and it is in fact possible, by considering states with sufficiently high energy, to construct configurations on which the effect of ${\mathcal{T}}$ is to cause hopping over arbitrarily long distances. 
We argue, however, that locality is satisfied in the region of interest, at low energy near the transition. In this region, low-energy configurations of the original cubic dimer problem can be described in terms of ordered regions separated by two-dimensional domain walls, which cost an energy proportional to their surface area. The intersections of these with a given time slice give the one-dimensional domain walls of the quantum problem, separating different density-wave orderings of the bosons. Consider, in the quantum picture, a time step in which a domain wall moves by a large distance, so that one of the two neighboring domains grows by an area $\delta A$. In the classical partition function, this corresponds to a configuration where a domain wall has a section of area $\delta A$ running parallel to the $(111)$ plane, in between the two consecutive time slices. Such a configuration has an energy cost that grows linearly with $\delta A$, and hence has an exponentially suppressed contribution to the transfer matrix. Further confirmation of the applicability of a local Hamiltonian comes from the analysis of the Coulomb phase in Section~\ref{SecCoulombPhase}, which reproduces the correct power-law form of the long-range correlation functions [see \refeq{EqCoulombCorrelations}] on the assumption that the low-energy excitations in the superfluid phase are phonons with a linear dispersion. A definitive answer to the question of locality could of course be found by computing the quantum Hamiltonian exactly on a sufficiently large lattice. \subsection{Phase structure} \label{SecHamPhases} \begin{figure} \putinscaledfigure{CubicKagome3D} \caption{\label{FigCubicKagome3D}(color online) Part of the cubic lattice showing one of the ordered states of the dimers (in blue). A $(111)$ plane is superimposed on the crystal structure, cutting diagonally through the cube. Such planes comprise the `time slices' on which the quantum Hilbert space is defined. 
The sites of the two-dimensional problem are situated where the $(111)$ plane intersects links of the cubic lattice; these form a kagome\ lattice. Where a cubic link is occupied by a dimer, the corresponding kagome\ site is occupied by a boson (shown with red spheres).} \end{figure} A major advantage of this particular choice of mapping is that the six distinct ordering patterns for the classical dimers map to six ordered states of the quantum problem, related to each other by symmetry. As noted in Section~\ref{SecModel}, the states that minimize the classical configuration energy $\mathcal{E}$ have all dimers parallel to one of the three cubic axes, and half of the links of this orientation occupied. An example is shown in Figure~\ref{FigCubicKagome3D}, along with the corresponding arrangement of quantum bosons. Cubic links with the same orientation map onto the same kagome\ sublattice, and since half of the cubic links of a given orientation are occupied, the same is true of the kagome\ sites of a given sublattice. As illustrated in Figure~\ref{FigCubicKagome3D}, the classical configurations that minimize $\mathcal{E}$ map to density-wave states of the bosons, at filling (bosons per site) of $\frac{1}{6}$. The three states with $\mathbf{m} = +\boldsymbol{\delta} _{x,y,z}$ are related by a rotation of the kagome\ plane, while $\mathbf{m} = \pm \boldsymbol{\delta} _z$ are related by a translation. As the (classical) temperature is raised from zero, thermal fluctuations in the dimer configuration will occur, with those of lowest energy being single flipped plaquettes. In terms of bosons, these correspond to quantum fluctuations away from the perfectly ordered density-wave patterns and occur once the hopping coefficients $t_{ij}$ in \refeq{EqQuantumHamiltonian} become nonzero. For sufficiently small hopping $t_{ij}$ relative to the further-neighbor repulsion $V_{ij}$, the ground state remains ordered. 
We identify this phase, where (connected) correlation functions are short-ranged and the lattice symmetry is broken, as a Mott insulator of bosons with density-wave order. When the classical temperature is raised beyond the critical value $T\sub{C}$, the dimer order is lost. In the resulting high-temperature Coulomb phase, the full lattice symmetry is restored and each cubic link has an equal average occupation of $\frac{1}{6}$. In the quantum model, the transition corresponds to a loss of density-wave order at a critical hopping $t_{ij}$, and restored lattice symmetry implies a uniform quantum ground state with average particle number $\frac{1}{6}$ on each site. (Note that, since there are three sites in the kagome\ unit cell, this filling corresponds to $\frac{1}{2}$ per unit cell. The hard-core repulsion between nearest-neighbor sites also means that filling $\frac{1}{6}$ is equal to half of the maximum possible filling.) This uniform ground state can be identified with the superfluid simply by noting that, at fractional filling and with neither quenched disorder nor spatial symmetry breaking, the only possible phase for quantum bosons at zero temperature is a condensate. In Section~\ref{SecCoulombPhase}, we will show that the power-law correlations within the Coulomb phase are correctly reproduced by the phase mode of the condensate, providing further justification for the identification of these phases. As an aside, we consider the insight into this equivalence that comes from the phenomenon of off-diagonal long-range order (ODLRO).\cite{SpinIce} The existence of a nonzero superfluid order parameter implies that, in the condensed phase, the quantum expectation value $\langle b^\dagger_i b^{\phantom{\dagger}}_j \rangle$ approaches a nonzero constant in the limit of large separation of the points $i$ and $j$. By contrast, no such ODLRO exists in the Mott insulator and the limiting value is zero. 
The quantum expectation value is defined by \beq{ODLRO} \langle b^\dagger_i b^{\phantom{\dagger}}_j \rangle = \frac{{\mathcal{Z}}_{ij}}{{\mathcal{Z}}} = \frac{1}{{\mathcal{Z}}} \Tr \left( {\mathcal{T}}^{L/\delta\tau} b^\dagger_i b^{\phantom{\dagger}}_j \right)\punc{.} \end{equation} The quantity ${\mathcal{Z}}_{ij}$ can be understood as the sum over histories of the quantum problem, with a particle creation event at site $i$ and a particle annihilation at site $j$, on the same (arbitrary) time slice. Returning to the language of the classical statistical problem, these events at which particle conservation is broken become, according to the arguments following \refeq{EqClosePacking2}, points in three-dimensional space where the close-packing constraint is violated. One should therefore understand ${\mathcal{Z}}_{ij}$ as the partition function of the dimer model calculated in the presence of two test monomers at positions $i$ and $j$ on the same (arbitrary) time slice. (Strictly, one is an empty site, while the other is a site where two dimers meet.) In the low-temperature phase, such monomers disrupt the ordering pattern and cost energy proportional to the linear separation between the two sites. By contrast, in the Coulomb phase monomers are deconfined: separating them to infinity costs only a finite energy,\cite{Huse} and ${\mathcal{Z}}_{ij}$ approaches a nonzero limit for large separation. The ODLRO in the quantum superfluid is therefore equivalent to the deconfinement of monomers; this is consistent with our identification of the Coulomb and superfluid phases. \subsection{Symmetries of the Hamiltonian} \label{SecHamSymmetries} As we will show in Sections~\ref{SecCoulombPhase} and \ref{SecTransition}, an understanding of the behavior both deep in the Coulomb phase and near the ordering transition depends on an analysis of the symmetries of the quantum model.
We treat here the case where the classical configuration energy $\mathcal{E}$ has full cubic symmetry. Chen et al.\cite{Chen}\ have studied the effects of various modifications that reduce this symmetry, and we consider one example in the Appendix. First, since the quantum Hamiltonian $\mathcal{H}$ describes a model with conserved particle number, it has a \U1 symmetry under global phase rotations of the bosonic creation and annihilation operators: $b_i \rightarrow b_i \mathrm{e}^{\mathrm{i} \vartheta}$ and $b_i^\dagger \rightarrow b_i^\dagger \mathrm{e}^{-\mathrm{i}\vartheta}$. This symmetry is spontaneously broken in the superfluid phase, where $b_i$ has a nonzero expectation value. \begin{figure} \putinscaledfigure{KagomeSymmetries} \caption{\label{FigKagomeSymmetries}(color online) Part of the kagome\ lattice, shown with points joined by solid lines, with the projection of the cubic lattice superimposed, as in Figure~\ref{FigCubic111Projection}. Four symmetry operations, $\mathbb{K} _1$, $\mathbb{K} _2$, $\mathbb{R} '$, and $\mathbb{X} _1$, are illustrated in red. The two primitive translations $\mathbb{K} _1$ and $\mathbb{K} _2$ are shown with straight arrows, while $\mathbb{R} '$, a rotation by $60{^\circ}$ about the center of a kagome\ hexagon, is shown with a curved arrow. The dashed red vertical line shows the line of reflection for $\mathbb{X} _1$. The three sites of the triangle at the bottom right are labeled $\sublattice{a}$, $\sublattice{b}$, and $\sublattice{c}$ to denote the three kagome\ sublattices.} \end{figure} Besides this internal symmetry, there are also spatial symmetries inherited from those of the classical dimer model, but modified in important ways by the particular choice of the imaginary-time direction. These symmetries can be constructed from combinations of three primitive symmetry operations, a translation $\mathbb{K} _1$, a rotation $\mathbb{R}$, and a reflection $\mathbb{X} _1$. 
These operations are illustrated in Figure~\ref{FigKagomeSymmetries}, along with the translation $\mathbb{K} _2 = \mathbb{R} \mathbb{K} _1 \mathbb{R}^{-1}$. We parametrize the positions of lattice sites in the kagome\ planes by orthogonal coordinates $\tilde{x}$ and $\tilde{y}$: \beq{EqKagomeCoordinates} \begin{split} \tilde{x} &= \sqrt{\frac{3}{2}} (r_y - r_x)\\ \tilde{y} &= \sqrt{2} \left(r_z - \frac{1}{2}r_x - \frac{1}{2}r_y\right)\punc{,} \end{split} \end{equation} where $r _\mu$ are coordinates referred to the cubic axes, which take integer values at the cubic lattice sites. The two operators $\mathbb{K} _1$ and $\mathbb{K} _2$ perform translations by the elementary unit vectors $\mathbf{e} _1$ and $\mathbf{e} _2$ of the kagome\ lattice, transforming the coordinates $\tilde{x}$ and $\tilde{y}$ according to \begin{align} \label{EqKagomeTrans} \begin{pmatrix} \tilde{x} \\ \tilde{y} \end{pmatrix} &\xrightarrow{\mathbb{K} _1} \begin{pmatrix} \tilde{x} \\ \tilde{y} \end{pmatrix} + \sqrt{6} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \\ \begin{pmatrix} \tilde{x} \\ \tilde{y} \end{pmatrix} &\xrightarrow{\mathbb{K} _2} \begin{pmatrix} \tilde{x} \\ \tilde{y} \end{pmatrix} + \sqrt{6} \begin{pmatrix} 1/2 \\ \sqrt{3}/2 \end{pmatrix} \punc{.} \end{align} They can be expressed in terms of pairs of translation operators $\mathbb{T} _\mu$ for the cubic lattice, chosen so that the imaginary time coordinate $\tau$ is unchanged; for example, $\mathbb{K} _1 = \mathbb{T} _x^{-1} \mathbb{T} _y$. These transformations map the three kagome\ sublattices to themselves. We define $\mathbb{R} '$ as a rotation by $60{^\circ}$ about the center of a kagome\ hexagon, or, in terms of the cubic lattice, about a $[111]$ axis passing through a cubic site with $\tau \bmod 3 = 0$ (such as the one at the center of Figure~\ref{FigKagomeSymmetries}).
This maps the kagome\ lattice to itself, but is not a symmetry of the cubic lattice, since, as can be seen in Figure~\ref{FigKagomeSymmetries}, it exchanges cubic sites with $\tau \bmod 3 = 1$ and $\tau \bmod 3 = 2$. We therefore define the operation $\mathbb{R}$ consisting of an improper rotation by $60{^\circ}$ through this axis, which is a symmetry of the cubic lattice. In terms of the bosons, $\mathbb{R}$ consists of the rotation $\mathbb{R} '$ followed by a time-reversal operation $\tau \rightarrow 3 - \tau$, and it commutes with the quantum Hamiltonian $\mathcal{H}$. (A similar time-reversed symmetry operation was found to apply in an effective quantum description of spin ice.\cite{SpinIce} The absence of time reversal as an independent symmetry reflects the fact, noted in Section~\ref{SecQuantumHamiltonian}, that $\mathcal{H}$ is not hermitian.) The rotation $\mathbb{R}$ permutes the three kagome\ sublattices cyclically. The remaining primitive symmetry operation of the kagome\ model is the reflection $\mathbb{X} _1$ through a line running perpendicular to the unit vector $\mathbf{e} _1$. It transforms $\tilde{x} \rightarrow -\tilde{x}$, and exchanges two of the sublattices (labeled $\sublattice{a}$ and $\sublattice{b}$) while leaving the third unchanged. Besides these symmetries of the effective quantum Hamiltonian, there are further symmetries of the original classical model that are broken by the explicit choice of the $[111]$ direction as imaginary time. These include reflections in the cubic $(100)$, $(010)$, and $(001)$ planes, which we denote $\mathbb{I} _x$, $\mathbb{I} _y$, and $\mathbb{I} _z$ respectively, and which relate different equivalent choices of the imaginary-time direction. They cannot be written as operations on the quantum Hilbert space, but in terms of the continuum space-time action to be derived below, they are simply reflections.
As an aside, one can also consider the operator representing translation in the time direction by $3$ steps. As noted in Section~\ref{SecDefineMapping}, the full transfer matrix ${\mathcal{T}} = {\mathcal{T}} _1^6$ connects two $(111)$ planes separated by $6$ steps in the imaginary-time direction, and is the simplest choice consistent with conservation of particle number. We can nonetheless consider the operator ${\mathcal{T}}^{1/2} = {\mathcal{T}} _1^3$ representing `imaginary-time evolution' by three steps, which maps the kagome\ lattice to itself, and clearly commutes with ${\mathcal{T}}$ and hence with the quantum Hamiltonian $\mathcal{H}$. Unlike the other symmetries of $\mathcal{H}$, ${\mathcal{T}}^{1/2}$ cannot be written as a permutation matrix in the occupation-number basis, and as for the full transfer matrix, we have not attempted to find its precise form. For our purposes, it is sufficient to note its effect on the density: as follows from the observations of Section~\ref{SecDefineMapping}, if a given kagome\ plane has a density of $\rho$ bosons per site, then the plane $3$ steps later has density $\frac{1}{3} - \rho$. Following our assumption of the locality of ${\mathcal{T}}$, this implies that, in the coarse-grained limit, ${\mathcal{T}}^{1/2}$ simply changes the sign of local density fluctuations. The microscopic Hamiltonian is not invariant under a particle--hole transformation, and this is therefore an emergent symmetry of the long-wavelength limit. \section{Continuum theory for Coulomb phase} \label{SecCoulombPhase} As noted in Section~\ref{SecModel}, above the critical temperature $T\sub{C}$, the classical dimer model exhibits a Coulomb phase, in which there is no ordering, but long-range correlation functions have power-law forms and strong spatial dependence. 
This behavior can be understood in terms of a coarse-grained picture, in which the long-wavelength degrees of freedom are described by a solenoidal field.\cite{Huse} Deep within the Coulomb phase, this approach predicts the dipolar form for the dimer-dimer correlation function given in \refeq{EqDipolarCorrelations}. In this section, we will show that the long-distance behavior of the correlation functions can also be obtained from the effective quantum model derived in Section~\ref{SecMapping}. The power-law behavior of the correlation functions follows immediately from the presence of a Goldstone phase mode in the superfluid, while the precise spatial dependence of the dipolar correlations can be reproduced by taking into account the symmetries of the effective quantum Hamiltonian. We have applied a similar analysis to a related model of spin ice.\cite{SpinIce} \subsection{Kagome\ continuum action} \label{SecKagomeContinuum} To describe the long-distance properties of the superfluid phase of the quantum model, we pass from the microscopic description to a continuum action. This action is written in terms of bosonic fields $\Psi$ corresponding to the hard-core boson operator $b$ in the limit where the spatial coordinates $\tilde{x}$ and $\tilde{y}$ and the imaginary time $\tau$ are taken as continuous. To preserve the important effects of the kagome\ lattice structure, we define three ({\it c}-number) fields $\Psi _\sigma$ corresponding to the three kagome\ sublattices, $\sigma \in \{\sublattice{a},\sublattice{b},\sublattice{c}\}$ (illustrated in Figure~\ref{FigKagomeSymmetries}). The continuum action ${\mathcal{S}}$ will contain all powers of the fields and their derivatives consistent with the symmetries of the hard-core boson problem, described in Section~\ref{SecHamSymmetries}. We must therefore determine the effects of these symmetries on the fields and the coordinates $\tilde{x}$, $\tilde{y}$, and $\tau$. 
Firstly, the symmetry under uniform \U1 phase rotations of the boson operators applies equally to the fields $\Psi _\sigma$ and hence restricts the terms in ${\mathcal{S}}$ to those which are invariant under such phase rotations. The translation operators $\mathbb{K} _1$ and $\mathbb{K} _2$ do not affect the sublattices and simply lead to translations of the coordinates $\tilde{x}$ and $\tilde{y}$. Next, consider the operator $\mathbb{R}$, which consists of a rotation by $60{^\circ}$ followed by a time-reversal operation. The rotation permutes the three sublattices cyclically and also acts on the spatial coordinates, while time reversal corresponds to complex conjugation of the field operators\cite{SpinIce} $\Psi _\sigma^{\phantom{*}} \rightarrow \Psi _\sigma^*$. Finally, the reflection $\mathbb{X} _1$ exchanges sublattices $\sublattice{a}$ and $\sublattice{b}$ while also reflecting the coordinate $\tilde{x} \rightarrow -\tilde{x}$. These transformations can be written in a simpler form by defining the derivative operators $\tilde{\partial} _\pm = \pm \frac{\sqrt{3}}{2} \partial _{\tilde{x}} - \frac{1}{2}\partial_{\tilde{y}}$, the vectors $\tilde{\boldsymbol{\partial}}$ and $\boldsymbol{\Psi}$, \beq{EqVectors} \tilde{\boldsymbol{\partial}} = \begin{pmatrix} \tilde{\partial} _-\\ \tilde{\partial} _+\\ \partial _{\tilde{y}} \end{pmatrix}\qquad\text{and}\qquad \boldsymbol{\Psi} = \begin{pmatrix} \Psi _{\sublattice{a}}\\ \Psi _{\sublattice{b}}\\ \Psi _{\sublattice{c}} \end{pmatrix}\punc{,} \end{equation} and the matrices $\mathbf{R}$ and $\mathbf{X}_1$, \beq{EqMatrices} \mathbf{R} = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{pmatrix} \qquad\text{and}\qquad \mathbf{X}_1 = \begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix}\punc{.} \end{equation} The effects of the symmetry operations $\mathbb{R}$ and $\mathbb{X} _1$ on $\tilde{\boldsymbol{\partial}}$, $\boldsymbol{\Psi}$, and the time derivative $\partial _\tau$ are summarized in
Table~\ref{TabSymmetries}. \begin{table} \caption{\label{TabSymmetries}Effects of the symmetry operators $\mathbb{R}$ and $\mathbb{X} _1$ on the continuum fields and derivative operators, expressed in terms of $\tilde{\boldsymbol{\partial}}$ and $\boldsymbol{\Psi}$ [\refeq{EqVectors}], $\mathbf{R}$ and $\mathbf{X}_1$ [\refeq{EqMatrices}], $\phi$ [\refeq{EqActionPhi}], and $\mathbf{n}$ [\refeq{EqDensityField}], as well as the cubic lattice position vector ${\mathbf{r}}$, derivative $\boldsymbol{\partial}$, and dimer density field $\mathbf{d}$, discussed in Section~\ref{SecCubicContinuum}.} \begin{ruledtabular} \begin{tabular}{ccc} \phantom{$\mathbb{R}$} & $\mathbb{R}$ & $\mathbb{X} _1$ \\ \hline $\tilde{\boldsymbol{\partial}}$ & $-\mathbf{R}\tilde{\boldsymbol{\partial}}$ & $\mathbf{X}_1\tilde{\boldsymbol{\partial}}$ \\ $\partial _\tau$ & $-\partial _\tau$ & $\partial _\tau$\\ \hline $\boldsymbol{\Psi}$ & $\mathbf{R}\boldsymbol{\Psi}^*$ & $\mathbf{X}_1\boldsymbol{\Psi}$ \\ $\phi$ & $-\phi$ & $\phi$ \\ $\mathbf{n}$ & $\mathbf{R} \mathbf{n}$ & $\mathbf{X}_1 \mathbf{n}$ \\ \hline ${\mathbf{r}}$ & $-\mathbf{R}{\mathbf{r}}$ & $\mathbf{X}_1 {\mathbf{r}}$\\ $\boldsymbol{\partial}$ & $-\mathbf{R}\boldsymbol{\partial}$ & $\mathbf{X}_1\boldsymbol{\partial}$\\ $\mathbf{d}$ & $\mathbf{R} \mathbf{d}$ & $\mathbf{X}_1 \mathbf{d}$ \end{tabular} \end{ruledtabular} \end{table} These symmetry considerations allow a continuum action to be written in terms of the field $\boldsymbol{\Psi}$, which should contain all terms that are invariant under the action of the full symmetry group. In the Coulomb phase of the classical model, the quantum bosons condense, so that the field $\boldsymbol{\Psi}$ acquires a nonzero expectation value. This phase breaks no spatial symmetries, and so $\langle \Psi _\sigma \rangle$ is equal on the three sublattices $\sigma$. 
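The sublattice matrices $\mathbf{R}$ and $\mathbf{X}_1$ of \refeq{EqMatrices} generate the point-group action on the sublattice index. A minimal numerical check (illustrative only) confirms the expected permutation algebra: $\mathbf{R}^3 = \mathbf{X}_1^2 = \boldsymbol{1}$ and the dihedral relation $\mathbf{X}_1 \mathbf{R} \mathbf{X}_1 = \mathbf{R}^{-1}$.

```python
import numpy as np

# Sublattice matrices from Eq. (EqMatrices): R permutes (a, b, c)
# cyclically, X1 exchanges a and b.
R = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
X1 = np.array([[0, 1, 0],
               [1, 0, 0],
               [0, 0, 1]])

I = np.eye(3, dtype=int)
print(np.array_equal(np.linalg.matrix_power(R, 3), I))   # R^3 = 1
print(np.array_equal(X1 @ X1, I))                        # X1^2 = 1
print(np.array_equal(X1 @ R @ X1, R.T))                  # X1 R X1 = R^{-1}
```

Because $\mathbf{R}$ and $\mathbf{X}_1$ are permutation (hence orthogonal) matrices, $\mathbf{R}^{-1} = \mathbf{R}^{\mathsf{T}}$, and the same relations hold whether they act on $\boldsymbol{\Psi}$, $\mathbf{n}$, or $\mathbf{d}$.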
Away from the transition, the long-wavelength properties are dominated by the gapless Goldstone mode describing uniform rotations of the phase, and corresponding to the broken \U1 symmetry. Writing $\Psi _\sigma \sim \mathrm{e}^{\mathrm{i} \phi}$, the effective action can be expressed in terms of the field $\phi$, whose symmetry properties are shown in Table~\ref{TabSymmetries}. The effective phase-only action ${\mathcal{S}} _\phi$ can be written in the form \beq{EqActionPhi} {\mathcal{S}} _\phi = \int \mathrm{d} \tilde{x} \, \mathrm{d} \tilde{y} \, \mathrm{d} \tau \left [ - \phi (\tilde{\boldsymbol{\partial}}^2 + \partial _\tau^2) \phi + \cdots \right]\punc{,} \end{equation} where the ellipsis denotes terms with higher powers of $\phi$ or higher derivatives. (All terms with a single derivative can be rewritten as total derivatives and so vanish on integration.) This continuum action is explicitly space-time symmetric; the relative coefficients of the spatial and temporal derivatives are required to be equal [in the appropriate units, chosen in \refeq{EqKagomeCoordinates}] by the full cubic symmetry of the original model (or equivalently by symmetry under the inversion operators $\mathbb{I} _\mu$). To evaluate the correlation functions of the dimer occupation numbers, we must relate these quantities to the continuum field $\phi$. First, consider the boson density, which we represent by the field $n_\sigma$ giving the local density measured relative to the average filling of $\frac{1}{6}$. The symmetry properties of the vector $\mathbf{n}$ are included in Table~\ref{TabSymmetries}; note that it is invariant under time reversal. These properties are sufficient to identify the density operator (to leading order) as \beq{EqDensityField} n_\sigma \sim \tilde{\partial} _\sigma \phi + \partial _\tau \phi\punc{,} \end{equation} where the relative coefficient is again fixed using cubic symmetry. 
(The first term is allowed only because of the absence of time-reversal symmetry, a consequence of the nonhermitian nature of $\mathcal{H}$ discussed in Section~\ref{SecQuantumHamiltonian}.) \subsection{Cubic lattice} \label{SecCubicContinuum} The symmetry operations $\mathbb{R}$ and $\mathbb{X} _1$ have so far been treated as acting within the kagome\ planes, but they are also symmetries of the full cubic lattice. Their action (in the continuum limit) on the three-dimensional position vector ${\mathbf{r}}$ and derivative operator $\boldsymbol{\partial}$ is shown in Table~\ref{TabSymmetries}, along with the behavior of the field $\mathbf{d}$, which is defined as the continuum limit of the dimer occupation number, with the average occupation of $\frac{1}{6}$ subtracted. To determine the correlation functions of the dimer field $\mathbf{d}$, we must relate it to $\phi$. It would be consistent with the symmetries listed in Table~\ref{TabSymmetries} to identify $d_\mu$ with the combination $\partial _\mu \phi$. This is, however, incorrect, as can most easily be seen by making use of the cubic reflections $\mathbb{I} _\mu$, defined in Section~\ref{SecHamSymmetries}: $\partial _\mu \phi$ changes sign under $\mathbb{I} _\mu$, whereas $d _\mu$ does not. To find the correct relationship between these fields, consider the effect of the cubic reflections on the microscopic dimer degrees of freedom, such as $d_x ({\boldsymbol{0}})$, which gives the occupation number for the link between the sites ${\boldsymbol{0}}$ and $\boldsymbol{\delta}_x$. Under $\mathbb{I} _x$, this maps to the link between $-\boldsymbol{\delta}_x$ and ${\boldsymbol{0}}$, described by $d _x (-\boldsymbol{\delta} _x)$; in general, the microscopic variable $d _\nu ({\mathbf{r}})$ maps to $d _\nu (\mathbb{I} _\mu {\mathbf{r}} - \delta_{\mu \nu} \boldsymbol{\delta} _\mu)$ under $\mathbb{I} _\mu$.
Using $\eta _{\mathbf{r}}$, equal to $\pm 1$ on the two sublattices, we can therefore construct the combination $\eta _{\mathbf{r}} d _\mu({\mathbf{r}})$, which, after coarse-graining, changes sign under $\mathbb{I} _\mu$ (since $\eta _{\mathbf{r}} = -\eta _{\mathbb{I}_\mu {\mathbf{r}} - \boldsymbol{\delta} _\mu}$ for any $\mu$). The symmetries in Table~\ref{TabSymmetries} are insufficient in this case because they all map from one quantum plane to another; these are separated by multiples of $\delta \tau = 6$, and so $\eta _{\mathbf{r}} = (-1)^\tau$ is unchanged. One can instead use the operator ${\mathcal{T}}^{1/2}$, defined at the end of Section~\ref{SecHamSymmetries}. Particle-number conservation between adjacent planes with $\tau \bmod 6 = \frac{3}{2}$ leads to \U1 symmetry under rotations of the phase $\phi$. This symmetry therefore acts with the opposite sign in the planes with $\tau \bmod 6 = \frac{3}{2} + 3$, leading to the factor of $\eta _{\mathbf{r}} = (-1)^\tau$. We can therefore identify $\eta _{\mathbf{r}} d_\mu \sim \partial _\mu \phi$, which, together with the action given in \refeq{EqActionPhi}, allows the Coulomb-phase dimer-dimer correlation function to be found. The (imaginary-time ordered) propagator for the field $\phi$ is simply $1/|{\mathbf{k}}|^2$, leading\cite{SpinIce} to real-space correlations with a dipolar form, \beq{EqCoulombCorrelations} \langle d_\mu({\mathbf{r}}) d_\nu({\boldsymbol{0}}) \rangle \sim \eta _{\mathbf{r}} \frac{3 r_\mu r_\nu - |{\mathbf{r}}|^2 \delta _{\mu\nu}}{{|{\mathbf{r}}|}^5}\punc{.} \end{equation} This form for the correlators was predicted by Huse et al.,\cite{Huse} by considering the continuum limit of a coarse-grained action for the dimer degrees of freedom. 
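The dipolar tensor in \refeq{EqCoulombCorrelations} is, up to normalization and the $\eta_{\mathbf{r}}$ factor, the second derivative $\partial_\mu \partial_\nu (1/|{\mathbf{r}}|)$, as expected from the $1/|{\mathbf{k}}|^2$ propagator. A short finite-difference check (purely illustrative; the helper names are ours) confirms this identity and the tracelessness of the tensor.

```python
import numpy as np

def dipolar(r):
    """Tensor (3 r_mu r_nu - |r|^2 delta_{mu nu}) / |r|^5,
    as in Eq. (EqCoulombCorrelations)."""
    r = np.asarray(r, dtype=float)
    rr = np.linalg.norm(r)
    return (3.0 * np.outer(r, r) - rr**2 * np.eye(3)) / rr**5

def hessian_inv_r(r, h=1e-4):
    """Finite-difference Hessian of 1/|r|; should equal dipolar(r)."""
    r = np.asarray(r, dtype=float)
    f = lambda x: 1.0 / np.linalg.norm(x)
    H = np.empty((3, 3))
    for m in range(3):
        for n in range(3):
            em, en = np.eye(3)[m], np.eye(3)[n]
            H[m, n] = (f(r + h*em + h*en) - f(r + h*em - h*en)
                       - f(r - h*em + h*en) + f(r - h*em - h*en)) / (4*h*h)
    return H

r = np.array([1.0, 2.0, 3.0])
D = dipolar(r)
print(abs(np.trace(D)) < 1e-12)                  # traceless
print(np.allclose(D, hessian_inv_r(r), atol=1e-6))
```

Tracelessness reflects the fact that $1/|{\mathbf{r}}|$ is harmonic away from the origin.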
\section{Continuum theory of transition} \label{SecTransition} The mapping described in Section~\ref{SecMapping} relates the thermal transition between a dimer crystal and a Coulomb phase to the quantum phase transition from a Mott insulator with density-wave order to a superfluid. A continuum theory to describe the phase transition in the dimer model can therefore be found by considering this equivalent quantum transition. As noted in Section~\ref{SecIntroduction}, the presence of two incompatible order parameters makes the standard LGW approach insufficient, and instead, the critical theory can be found using a mapping to dual vortex fields. \subsection{Duality mapping} This duality mapping has been described in detail by Balents et al.,\cite{Balents} and we will simply sketch a derivation. (Note that Sengupta et al.\cite{Sengupta}\ found the critical theory for a filling factor of $f = \frac{1}{3}$ on the kagome\ lattice.) The starting point is a current-loop representation of the quantum boson problem,\cite{Wallin} where the degrees of freedom are the currents ${\mathbf{J}}$ defined on the links of the space-time lattice, obeying the continuity equation $\divv {\mathbf{J}} = 0$ (where $\divv$ represents the lattice divergence). The essence of the duality mapping is a transformation from ${\mathbf{J}}$ to a gauge field ${\mathbf{A}}$ on the links of the dual lattice, according to ${\mathbf{J}} = \curl {\mathbf{A}}$. It should be noted that mapping to an action written in terms of currents, which can be done using a `Villain representation' for the hopping terms,\cite{Balents,Wallin} involves eliminating all but the nearest-neighbor hopping. (This can be performed explicitly by introducing extra auxiliary fields analogous to ${\mathbf{J}}$ to describe further-neighbor processes, before integrating these out to give renormalized couplings for the currents ${\mathbf{J}}$.) 
Reflection symmetries such as $\mathbb{X}_1$ ensure that the nearest-neighbor hopping coefficients $t_{ij}$ are symmetric, and so the nonhermitian nature of $\mathcal{H}$ has no effect on the continuum theory near the transition. (It was similarly found in a related model for spin ice\cite{SpinIce} that the directed hopping terms in the effective quantum Hamiltonian were irrelevant at the transition.) In our approach, the bosons occupy the sites of the kagome\ lattice, and so the space-time lattice is not cubic as in the original dimer problem, but instead consists of stacked kagome\ planes. As illustrated in Figure~\ref{FigKagomeDice}, the dual of kagome\ is the dice lattice,\cite{Vidal,Sengupta,Jiang} and so the gauge field $A _\ell$ is defined on the links $\ell$ of a lattice of stacked dice planes. \begin{figure} \putinscaledfigure{KagomeDice} \caption{\label{FigKagomeDice}(color online) The dice lattice (shown with thin solid lines), dual to the kagome\ lattice (dashed lines). The unit vectors $\mathbf{e} _1$ (horizontal) and $\mathbf{e} _2$, as in Figure~\ref{FigCubicKagome3D}, are shown with blue arrows. The thick red lines show two unit cells of the dice lattice, or one magnetic unit cell. Within this unit cell, the sites of the dice lattice are indicated with black circles containing numbers labeling the three sublattices, $\sigma = 0,1,2$. The background gauge field ${\bar{A}} _\ell$ is shown, up to an integer, with black arrows, where each arrowhead represents $\frac{1}{6}$ of a flux unit. The arrangement is chosen so that the curl (defined as the sum of ${\bar{A}} _\ell$ going counterclockwise around a loop) is equal to $f = \frac{1}{6}$ for every plaquette. (Moving to the right by $2 \mathbf{e} _1$, or one magnetic unit cell, increases ${\bar{A}} _\ell$ by $1$.)} \end{figure} The sites of the dice lattice form three sublattices, which we label as $\sigma = 0,1,2$, corresponding to the three distinct plaquettes of kagome . 
The positions of sites on the dice lattice are given by the (two-component) vector ${\mathbf{x}} = a_1 \mathbf{e} _1 + a_2 \mathbf{e} _2 + \sigma \boldsymbol{\zeta}$, where $a_1$ and $a_2$ are integers and $\boldsymbol{\zeta} = \frac{1}{3}(\mathbf{e} _1 + \mathbf{e} _2)$ is the displacement between the $0$ and $1$ sublattices. The currents ${\mathbf{J}}$, and hence the gauge field ${\mathbf{A}}$, take integer values (due to the discrete nature of the quantum bosons), and so the model is `frustrated', in the sense that there are many space-time configurations that give nearly equal contributions to the action. (This fact is a straightforward consequence of the fractional boson occupation number; many arrangements of bosons with filling $f = \frac{1}{6}$ have similar interaction energies.) It is more convenient to describe this frustration by introducing `matter fields' $\psi _i$ on the sites $i$ of the dual lattice, and promoting ${\mathbf{A}}$ to a continuous-valued field. (This step can be performed explicitly by using the Poisson summation formula.\cite{Balents}) The frustration on ${\mathbf{A}}$ is then lifted, and we shift it by a (position-dependent) constant ${\bar{\mathbf{A}}}$ to make clear that $\psi$ now carries the frustration. The dual theory has a gauge invariance resulting from the definition of $A _\ell$, and can be written in terms of the gauge field $A _\ell$ and matter fields $\psi _i$, \beqm{EqDualAction} {\mathcal{S}}\sub{dual} = \kappa \sum_p |\curl {\mathbf{A}}|^2 \\- t\sub{v} \sum _\ell \left(\psi^\dagger _i \mathrm{e}^{2\pi \mathrm{i} (A_\ell + {\bar{A}} _\ell)} \psi^{\phantom{\dagger}}_j+ \text{h.c.} \right) \\+ \sum _i (r|\psi _i|^2 + u |\psi _i|^4) + \cdots\punc{,} \end{multline} where $\sum _p$ sums over plaquettes $p$ of the dual lattice; $\sum _\ell$ sums over dual-lattice links $\ell$, which start at site $i$ and end at site $j$; and $\sum _i$ sums over dual-lattice sites.
Detailed derivations leading to \refeq{EqDualAction} have been given by Balents et al.\cite{Balents} The field $\psi _i$ corresponds to a vortex of the original bosons, and the gauge field induces the long-range interactions between vortices. By duality, the average boson density of $f = \frac{1}{6}$ per site affects the vortices as a flux of $f$ per plaquette. This is represented by the background field ${\bar{\mathbf{A}}}$ obeying $\curl {\bar{\mathbf{A}}} = f$; we choose the gauge illustrated in Figure~\ref{FigKagomeDice}. The action ${\mathcal{S}}\sub{dual}$ consists of a vortex field $\psi _i$ with a frustrating hopping term, coupled to a dynamical gauge field $A _\ell$. Following Balents et al.,\cite{Balents} our approach will be to neglect temporarily the interaction terms, and to consider the effect of the frustration on the dispersion of a single vortex. The full dual action, including the interactions, can then be rewritten in terms of the eigenstates of the single-vortex Hamiltonian ${\mathcal{H}} _1$. The precise form of these eigenstates will depend on the details of ${\mathcal{S}}\sub{dual}$, but we will use symmetry considerations to effectively block-diagonalize ${\mathcal{H}} _1$. \subsection{Projective symmetry group} The problem of a single vortex with frustrated hopping is equivalent to the Hofstadter problem for a charged particle moving on a lattice in the presence of a magnetic field. Choosing a specific gauge for the background field ${\bar{A}}$ reduces the spatial symmetry of the Hamiltonian, and it is convenient to introduce a so-called `projective symmetry group' (PSG).\cite{Balents,Wen} We first address the effect of the lattice symmetries in the real-space basis, before taking the Fourier transform to momentum states, in terms of which the eigenstates of ${\mathcal{H}} _1$ can be written. 
The single-particle Hilbert space is spanned by position states $\ket{{\mathbf{x}}}$ (where ${\mathbf{x}}$ denotes a lattice site), and ${\mathcal{H}} _1$ is a sum of hopping terms of the form \beq{EqHopping1} \ket{{\mathbf{x}} + {\mathbf{b}}}\bra{{\mathbf{x}}} \mathrm{e}^{\mathrm{i} \alpha({\mathbf{x}},{\mathbf{b}})}\punc{,} \end{equation} for displacements ${\mathbf{b}}$ linking sites of the lattice. The PSG associates with a lattice symmetry $\Qop$ (any of the translations, rotations, and reflections defined in Section~\ref{SecHamSymmetries}), which maps the site ${\mathbf{x}}$ to $\Qop({\mathbf{x}})$, a corresponding operator $\Qhat$ that commutes with the Hamiltonian ${\mathcal{H}} _1$. Its action on a state $\ket{{\mathbf{x}}}$ is given by a real-space transformation, accompanied by a gauge transformation, \beq{Qaction} \Qhat \ket{{\mathbf{x}}} = \mathrm{e}^{\mathrm{i} \chi_\Qop({\mathbf{x}})}\ket{\Qop({\mathbf{x}})}\punc{.} \end{equation} Because the Hamiltonian has a lower symmetry than the lattice, the operators $\Qhat$ do not obey the group multiplication table for the full lattice symmetry group. Instead, they obey it up to phase factors, determined by the functions $\chi_\Qop$. Applying $\Qhat$ to the hopping term in \refeq{EqHopping1} gives \beqm{EqQonHopping1} \Qhat \ket{{\mathbf{x}} + {\mathbf{b}}}\bra{{\mathbf{x}}} \mathrm{e}^{\mathrm{i} \alpha({\mathbf{x}},{\mathbf{b}})} \Qhat^{-1} \\= \mathrm{e}^{\pm \mathrm{i} \alpha({\mathbf{x}},{\mathbf{b}})} \mathrm{e}^{\mathrm{i} \chi_\Qop({\mathbf{x}} + {\mathbf{b}})}\mathrm{e}^{-\mathrm{i} \chi_\Qop({\mathbf{x}})}\ket{\Qop({\mathbf{x}}+{\mathbf{b}})}\bra{\Qop({\mathbf{x}})}\punc{,} \end{multline} where the sign is positive if $\Qhat$ is a unitary operator and negative if it is antiunitary.
Since $\Qop$ is a lattice symmetry, there must be a corresponding term in ${\mathcal{H}} _1$ given by \beq{EqHopping2} \ket{\Qop({\mathbf{x}} + {\mathbf{b}})}\bra{\Qop({\mathbf{x}})} \mathrm{e}^{\mathrm{i} \alpha\boldsymbol{(}\Qop({\mathbf{x}}),\Qop({\mathbf{x}}+{\mathbf{b}}) - \Qop({\mathbf{x}})\boldsymbol{)}}\punc{.} \end{equation} The phases $\chi_\Qop({\mathbf{x}})$ should be chosen for all ${\mathbf{x}}$ in order to make the expressions in \refeqand{EqQonHopping1}{EqHopping2} equal. We therefore require \beq{PSGeqs} \mathrm{e}^{\mathrm{i} [\chi_\Qop({\mathbf{x}} + {\mathbf{b}}) - \chi_\Qop({\mathbf{x}})]} = \mathrm{e}^{\mathrm{i} [\alpha\boldsymbol{(}\Qop({\mathbf{x}}),\Qop({\mathbf{x}}+{\mathbf{b}}) - \Qop({\mathbf{x}})\boldsymbol{)} \mp \alpha ({\mathbf{x}},{\mathbf{b}})]}\punc{.} \end{equation} This gives a set of equations for $\chi_\Qop({\mathbf{x}})$ which must be solved simultaneously. (For the translations $\mathbb{K}_{1,2}$ and rotation $\mathbb{R}$, a solution can be found for $\Qhat$ a unitary operator, but for the reflections such as $\mathbb{X}_1$, $\Qhat$ must be chosen antiunitary.\cite{Balents}) Using the choice of gauge illustrated in Figure~\ref{FigKagomeDice}, the transformation operators acting on the state $\ket{{\mathbf{x}} + \sigma \boldsymbol{\zeta}}$, with ${\mathbf{x}} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2$, give \begin{align} \label{EqChis1} \hat{K}_1 \ket{{\mathbf{x}} + \sigma\boldsymbol{\zeta}} &= (-1)^{a_2}\ket{{\mathbf{x}} + \mathbf{e}_1 + \sigma\boldsymbol{\zeta}}\\ \label{EqChiK2} \hat{K}_2 \ket{{\mathbf{x}} + \sigma\boldsymbol{\zeta}} &=\ket{{\mathbf{x}} + \mathbf{e}_2 + \sigma\boldsymbol{\zeta}}\\ \hat{R}\,\ket{{\mathbf{x}} + \sigma\boldsymbol{\zeta}} &=\mathrm{e}^{\frac{\mathrm{i}\pi}{6}(3a_2+5)(2a_1+a_2+2\delta_{\sigma 2})}\ket{\mathbb{R}({\mathbf{x}} + \sigma\boldsymbol{\zeta})}\\ \hat{X}_1\ket{{\mathbf{x}} + \sigma\boldsymbol{\zeta}} &= (-1)^{\frac{1}{2}a_2(a_2+1)} \ket{\mathbb{X}_1 ({\mathbf{x}} + 
\sigma\boldsymbol{\zeta})}\punc{,} \label{EqChis2} \end{align} where $\mathbb{R} ({\mathbf{x}} + \sigma\boldsymbol{\zeta}) = (-a_2 - \delta_{\sigma 1} - \delta_{\sigma 2})\mathbf{e}_1 + (a_1 + a_2 + \delta_{\sigma 2})\mathbf{e}_2 + \bar{\sigma}\boldsymbol{\zeta}$ and $\mathbb{X}_1 ({\mathbf{x}} + \sigma\boldsymbol{\zeta}) = -(a_1 + a_2 + \sigma)\mathbf{e}_1 + a_2\mathbf{e}_2 + \sigma\boldsymbol{\zeta}$. Note that $\mathbb{R}$ exchanges sublattices $1$ and $2$; we have defined $\bar\sigma$ so that $\bar0=0$, $\bar1=2$, and $\bar2=1$. The operators $\Qhat$ form a group with multiplication laws equal, up to phase factors, to those of the original space group formed by the operators $\Qop$. These phases can be calculated using the transformations in the position basis, giving \begin{align} \label{EqCommutation1} \hat{K}_1 \hat{K}_2 &= -\hat{K}_2 \hat{K}_1\\ \hat{K}_2 \hat{R} &= \mathrm{e}^{\mathrm{i}\pi/3} \hat{R}\hat{K}_1\\ \hat{K}_2 \hat{R}^2 &= \hat{K}_1 \hat{R}^2 \hat{K}_1\\ \hat{R}^6 &= 1\\ \hat{K}_1 \hat{X}_1 \hat{K}_1 &= \hat{X}_1\\ \hat{K}_1 \hat{X}_1 \hat{K}_2 &= \hat{K}_2 \hat{X}_1\punc{.} \label{EqCommutation2} \end{align} These commutation properties depend only on the effective magnetic flux and are independent of the choice of gauge. Using the real-space transformations given in \refeqs{EqChis1}{EqChis2}, the Fourier transform to momentum space can be performed. We define the reciprocal lattice vectors $\mathbf{e}^*_1$ and $\mathbf{e}^*_2$ so that $\mathbf{e} _i \cdot \mathbf{e}^*_j = 2\pi \delta _{ij}$ and the momentum-space basis by \beq{EqMomentumBasis} \ket{{\mathbf{k}},\sigma} \propto \sum_{a_1,a_2} \mathrm{e}^{-2\pi \mathrm{i} (\kappa _1 a_1 + \kappa _2 a_2)}\ket{a_1 \mathbf{e} _1 + a_2 \mathbf{e} _2 + \sigma \boldsymbol{\zeta}}\punc{,} \end{equation} where ${\mathbf{k}} = \kappa _1 \mathbf{e}^*_1 + \kappa _2 \mathbf{e}^*_2$. 
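The first of these relations, $\hat{K}_1 \hat{K}_2 = -\hat{K}_2 \hat{K}_1$, can be traced directly to the gauge-dependent phase $(-1)^{a_2}$ in \refeq{EqChis1}. A minimal sketch (our own bookkeeping, not taken from the text) tracks a basis state as $(a_1, a_2, \sigma)$ with an attached phase and compares the two orderings.

```python
# Gauge-fixed actions of K1 and K2 from Eqs. (EqChis1) and (EqChiK2),
# acting on a state ((a1, a2, sigma), phase).

def K1(state):
    (a1, a2, sigma), phase = state
    return (a1 + 1, a2, sigma), phase * (-1) ** a2

def K2(state):
    (a1, a2, sigma), phase = state
    return (a1, a2 + 1, sigma), phase

for a2 in (0, 1):
    s = ((0, a2, 0), 1)              # arbitrary site, unit amplitude
    site12, ph12 = K1(K2(s))
    site21, ph21 = K2(K1(s))
    # Same final site, opposite phase: K1 K2 = -K2 K1
    print(site12 == site21 and ph12 == -ph21)
```

Since $\hat{K}_2$ shifts $a_2$ by one before $\hat{K}_1$ picks up its $(-1)^{a_2}$ phase, the two orderings always differ by a sign, for any starting site.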
The nonuniform phase factors $\chi_\Qop({\mathbf{x}})$ cause the operator $\Qhat$ to mix a discrete set of momentum values, but at certain high-symmetry points in the Brillouin zone (BZ), a smaller set of momenta are mixed. On the dice lattice with $f = \frac{1}{6}$, one finds that a generic momentum state belongs to a set of $24$ states that are mixed, but that there are four points within the (lattice) BZ that are closed under the action of the full symmetry group. With our choice of gauge field, their momenta are given by ${\mathbf{k}} _{m,n} = -(\frac{1}{12}+\frac{m}{2})\mathbf{e}^*_1 + (\frac{1}{12} + \frac{n}{2})\mathbf{e}^*_2$ with $m,n \in \{0,1\}$, and we expect the global minima of the single-particle dispersion to occur at these points.\footnote{It is possible to construct a Hamiltonian ${\mathcal{H}}_1$ so that these are only local extrema, and the global minima of the dispersion are elsewhere. The low-energy sector then involves more than two vortex fields, and the number of possible ordered states is necessarily greater than the six allowed by the microscopic model.} In fact, we argue that two independent linear combinations from the four points form global minima, as follows: It can be seen from Figure~\ref{FigKagomeDice} that $\mathbb{K} _2$ remains a full symmetry in the presence of the background gauge field, and so the corresponding operator $\hat{K}_2$ takes a particularly simple form. Its action on a position state is given in \refeq{EqChiK2}; acting on a momentum state it gives simply \beq{EqKagomeTrans2} \hat{K}_2 \ket{{\mathbf{k}}, \sigma} = \mathrm{e}^{\mathrm{i}\mathbf{e}_2 \cdot {\mathbf{k}}} \ket{{\mathbf{k}}, \sigma}\punc{.} \end{equation} This implies that the single-particle Hamiltonian ${\mathcal{H}} _1$ does not mix momentum states $\ket{{\mathbf{k}}_{m,n},\sigma}$ with distinct values of $n$. 
Considering first $n=0$, one of the energy eigenstates at the minimum of the single-particle dispersion can therefore be written \beq{EqKet0} \ket{0} = \sum_{m,\sigma} c_{m\sigma} \ket{{\mathbf{k}}_{m,0},\sigma}\punc{,} \end{equation} where $c_{m\sigma}$ are coefficients depending on the details of ${\mathcal{H}}_1$. (The magnetic BZ is half the size of the lattice BZ, so that momenta ${\mathbf{k}} _{m,n}$ with $m = 0,1$ correspond to the same point in the magnetic BZ, and are hence mixed by ${\mathcal{H}} _1$.) The state $\ket{0}$ is clearly an eigenstate of $\hat{K}_2$, with eigenvalue $\mathrm{e}^{\mathrm{i}\pi/6}$. It is straightforward to show, using \refeq{EqCommutation1}, that the state $\ket{1} = \mathrm{e}^{\mathrm{i} \pi/6} \hat{K}_1 \ket{0}$ is also an eigenstate of $\hat{K}_2$, with eigenvalue $-\mathrm{e}^{\mathrm{i}\pi/6}$, and hence $\langle 0 | 1 \rangle = 0$. Since $\hat{K}_1$ commutes with ${\mathcal{H}}_1$ (by construction), this state is also an energy eigenstate with equal eigenvalue. Using the Fourier transform of \refeq{EqChis1}, one finds explicitly \beq{EqKet1} \ket{1} = \sum_{m,\sigma} c_{m\sigma} (-1)^m \ket{{\mathbf{k}}_{m,1},\sigma}\punc{,} \end{equation} and $\hat{K}_1\ket{1} = \mathrm{e}^{-\mathrm{i} \pi/6}\ket{0}$. There are therefore two degenerate minima of the single-particle dispersion, $\ket{n}$ with $n \in \{0,1\}$. (The same result was found by Jiang and Ye\cite{Jiang} by directly diagonalizing a Hamiltonian with nearest-neighbor hopping.) 
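The $\hat{K}_2$ eigenvalues $\pm\mathrm{e}^{\mathrm{i}\pi/6}$ quoted above follow directly from \refeq{EqKagomeTrans2} evaluated at the momenta ${\mathbf{k}}_{m,n}$, since $\mathbf{e}_2 \cdot {\mathbf{k}} = 2\pi\kappa_2$ with $\kappa_2 = \frac{1}{12} + \frac{n}{2}$. A quick numerical check (illustrative only):

```python
import cmath

def K2_eigenvalue(m, n):
    # e^{i e2.k} with e_i . e*_j = 2 pi delta_ij, so e2.k = 2 pi kappa2,
    # where kappa2 = 1/12 + n/2 at the points k_{m,n} (independent of m)
    kappa2 = 1.0 / 12.0 + n / 2.0
    return cmath.exp(2j * cmath.pi * kappa2)

val0 = K2_eigenvalue(0, 0)
val1 = K2_eigenvalue(0, 1)
print(abs(val0 - cmath.exp(1j * cmath.pi / 6)) < 1e-12)   # +e^{i pi/6} for n = 0
print(abs(val1 + cmath.exp(1j * cmath.pi / 6)) < 1e-12)   # -e^{i pi/6} for n = 1
```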
While the values of the coefficients $c_{m\sigma}$ depend on the precise form of the vortex hopping Hamiltonian, the behavior of $\ket{0}$ and $\ket{1}$ under the action of the full symmetry group can be determined uniquely using \refeqs{EqCommutation1}{EqCommutation2}, giving \begin{align} \label{Eq01Transform1} \bra{n}\hat{K}_1\ket{n'} &= \mathrm{e}^{-\mathrm{i}\pi/6} \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}_{nn'}\\ \bra{n}\hat{K}_2\ket{n'} &= \mathrm{e}^{\mathrm{i}\pi/6} \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix}_{nn'}\\ \bra{n}\hat{R}\ket{n'} &= \frac{\mathrm{e}^{-\mathrm{i}\pi/12}}{\sqrt{2}} \begin{pmatrix} 1&1\\ \mathrm{i}&-\mathrm{i} \end{pmatrix}_{nn'}\\ \bra{n}\hat{X}_1\ket{n'} &= \frac{1}{\sqrt{2}} \begin{pmatrix} 1&\mathrm{i}\\ \mathrm{i}&1 \end{pmatrix}_{nn'}\punc{.} \label{Eq01Transform2} \end{align} We have so far considered only the single-particle kinetic terms in the action ${\mathcal{S}} \sub{dual}$ given in \refeq{EqDualAction}. To return to the full description, we first define creation operators $v_0^\dagger$ and $v_1^\dagger$ for the single particle states $\ket{0}$ and $\ket{1}$. The real-space vortex creation operator projected into the low-energy sector can then be written as \beq{EqVortexOperator} \psi^\dagger({\mathbf{x}}) = \varphi _0^* ({\mathbf{x}}) v_0^\dagger + \varphi _1^* ({\mathbf{x}}) v_1^\dagger\punc{,} \end{equation} where $\varphi _n ({\mathbf{x}})$ are slowly varying functions (on the lattice scale). 
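Although the coefficients $c_{m\sigma}$ are model-dependent, the $2\times 2$ matrices in \refeqs{Eq01Transform1}{Eq01Transform2} can be checked against the projective relations \refeqs{EqCommutation1}{EqCommutation2}. In the sketch below (our own bookkeeping, not from the text), an antiunitary operator such as $\hat{X}_1$ is represented as a matrix together with a flag indicating complex conjugation of its argument.

```python
import numpy as np

w = np.exp(1j * np.pi / 6)

# 2x2 blocks from Eqs. (Eq01Transform1)-(Eq01Transform2); the boolean flag
# marks antiunitary operators (which conjugate everything to their right).
K1 = (w.conjugate() * np.array([[0, 1], [1, 0]]), False)
K2 = (w * np.array([[1, 0], [0, -1]]), False)
R  = (np.exp(-1j * np.pi / 12) / np.sqrt(2)
      * np.array([[1, 1], [1j, -1j]]), False)
X1 = (np.array([[1, 1j], [1j, 1]]) / np.sqrt(2), True)

def compose(*ops):
    """Compose operators right-to-left, tracking antiunitarity."""
    M, anti = np.eye(2, dtype=complex), False
    for A, a in reversed(ops):
        M = A @ (M.conjugate() if a else M)
        anti = anti != a
    return M, anti

def equal(op1, op2, phase=1.0):
    M1, a1 = op1
    M2, a2 = op2
    return a1 == a2 and np.allclose(M1, phase * M2)

print(equal(compose(K1, K2), compose(K2, K1), phase=-1))   # K1 K2 = -K2 K1
print(equal(compose(K2, R), compose(R, K1), phase=np.exp(1j * np.pi / 3)))
print(equal(compose(K2, R, R), compose(K1, R, R, K1)))
print(equal(compose(R, R, R, R, R, R), (np.eye(2), False)))  # R^6 = 1
print(equal(compose(K1, X1, K1), X1))
print(equal(compose(K1, X1, K2), compose(K2, X1)))
```

All six relations hold with these matrices, which is the sense in which the low-energy doublet is fixed by the algebra alone.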
The symmetry properties of the two-component vector $\boldsymbol{\varphi} = \begin{pmatrix}\varphi _0\\ \varphi _1\end{pmatrix}$ are determined by those of the states $\ket{n}$, and can be summarized as: \begin{align} \label{EqPhiTrans1} \boldsymbol{\varphi} &\xrightarrow{\mathbb{K} _1} \mathrm{e}^{\mathrm{i}\pi/6} \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \boldsymbol{\varphi}\\ \boldsymbol{\varphi} &\xrightarrow{\mathbb{K} _2} \mathrm{e}^{-\mathrm{i}\pi/6} \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} \boldsymbol{\varphi}\\ \boldsymbol{\varphi} &\xrightarrow{\mathbb{R}} \frac{\mathrm{e}^{\mathrm{i} \pi / 12}}{\sqrt{2}} \begin{pmatrix} 1&1\\ -\mathrm{i}&\mathrm{i} \end{pmatrix} \boldsymbol{\varphi}\\ \boldsymbol{\varphi} &\xrightarrow{\mathbb{X} _1} \frac{1}{\sqrt{2}} \begin{pmatrix} 1&-\mathrm{i}\\ -\mathrm{i}&1 \end{pmatrix} \boldsymbol{\varphi}^*\punc{.} \label{EqPhiTrans2} \end{align} Note that the reflection $\mathbb{X} _1$ is represented by an antiunitary operation. \subsection{Dual continuum action} The action ${\mathcal{S}} \sub{dual}$ can be rewritten in terms of $\boldsymbol{\varphi}$, and the other components of the vortex fields integrated out, giving renormalized values for the coefficients. The transformation properties of $\boldsymbol{\varphi}$ reproduce the nontrivial effects of the lattice structure and fractional filling $f$, allowing the continuum limit to be taken, with the spatial coordinates ${\mathbf{x}}$ extended from discrete values describing the dice lattice to continuous two-dimensional coordinates. The transformations given in \refeqs{EqPhiTrans1}{EqPhiTrans2} strongly constrain the form of the continuum action, which can be expressed as a series in powers of the vortex field $\boldsymbol{\varphi}$, the gauge field ${\mathbf{A}}$, and the space- and time-derivative operators. 
As in the Coulomb phase, described in Section~\ref{SecCoulombPhase}, the full cubic symmetry of the original dimer problem also constrains the final action to be space-time symmetric. We first consider the terms containing only the vortex fields $\boldsymbol{\varphi}$. These are most easily found by expressing all gauge-invariant (i.e., local and phase-rotation invariant) bilinears of the fields $\varphi _n$ in terms of the boson density operators, which are the order parameters for the density-wave phases. The ordering patterns with which we are concerned (such as the one illustrated in Figures~\ref{FigCubic111Projection} and \ref{FigCubicKagome3D}) are characterized by nonzero expectation values of the momentum-space density at ${\mathbf{k}} \in \{\frac{1}{2}\mathbf{e} _1^*,\frac{1}{2}\mathbf{e} _2^*,\frac{1}{2}\mathbf{e} _1^*+\frac{1}{2}\mathbf{e} _2^*\}$. Defining $\rho _{\kappa _1\kappa _2}$ as the boson density operator at momentum ${\mathbf{k}} = \kappa _1 \mathbf{e} _1^* + \kappa _2 \mathbf{e} _2^*$, the order parameters are $\rho _{\frac{1}{2}0}$, $\rho _{0\frac{1}{2}}$, and $\rho _{\frac{1}{2}\frac{1}{2}}$. In the continuum limit, we can identify these fields with the bilinears of $\varphi _n$ using their symmetry properties; we find \beq{EqBosonDensity} \begin{Bmatrix} \rho_{\frac{1}{2}\frac{1}{2}}\\ \rho_{\frac{1}{2}0}\\ \rho_{0\frac{1}{2}} \end{Bmatrix} \sim \boldsymbol{\varphi}^\dagger \mathbf{M}^\dagger \begin{Bmatrix} \boldsymbol{\sigma}^x\\ \boldsymbol{\sigma}^y\\ \boldsymbol{\sigma}^z \end{Bmatrix} \mathbf{M} \boldsymbol{\varphi} \punc{,} \end{equation} where $\mathbf{M} = (\boldsymbol{1} - \mathrm{i} \boldsymbol{\sigma}^x)/\sqrt{2}$ is a unitary matrix, and $\boldsymbol{\sigma}^\mu$ is a Pauli matrix. In the language of the original dimer problem, the order parameter is the magnetization $m _\mu$, related to the dimer occupation number by \refeq{EqOrderParameter}. 
This can similarly be expressed in terms of the vortex fields as \beq{EqMagnetizationFromVortices} m_\mu \sim \boldsymbol{\varphi}^\dagger \mathbf{M}^\dagger \boldsymbol{\sigma}^\mu \mathbf{M} \boldsymbol{\varphi}^{\phantom{\dagger}}\punc{.} \end{equation} We will now identify the allowed interaction terms for the vortex fields, and it is convenient to do this by writing them in terms of $m _\mu$. While the full action has cubic symmetry, the lowest-order terms in fact have \SO3 rotation symmetry, which corresponds, via \refeq{EqMagnetizationFromVortices}, to \SU2 symmetry for the vortex fields $\boldsymbol{\varphi}$. Our primary concern will be to identify the first term that explicitly breaks this higher symmetry. Defining the \SU2 Casimir invariant $\Omega = \boldsymbol{\varphi}^\dagger \boldsymbol{\varphi}$, all gauge-invariant combinations of the fields $\varphi _n$ can be written in terms of $m _{x,y,z}$ and $\Omega$, and it is straightforward to show that $|\mathbf{m}|^2 \sim \Omega^2$ is also an \SU2 invariant. While the action can contain any term involving only $\Omega$, terms involving functions of $\boldsymbol{\varphi}$ that break \SU2 are strongly constrained by symmetry. First note that the only allowed quadratic and quartic (in the vortex fields) combinations are $\Omega$ and $\Omega^2$. At sixth order, besides $\Omega^3$, one finds the combination $m_x m_y m_z$, which is invariant under all of the operations in \refeqs{EqPhiTrans1}{EqPhiTrans2}. It is nonetheless excluded by requiring symmetry under the cubic reflections $\mathbb{T} _\mu$, which take $m _\mu \rightarrow -m _\mu$. (In terms of the bosons, this corresponds to particle--hole symmetry, which swaps vortices and antivortices.) 
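The statement $|\mathbf{m}|^2 \sim \Omega^2$ can be checked directly: since $\mathbf{M}$ is unitary, it reduces to the spin-$1/2$ identity $\sum_\mu (\boldsymbol{\varphi}^\dagger \boldsymbol{\sigma}^\mu \boldsymbol{\varphi})^2 = (\boldsymbol{\varphi}^\dagger\boldsymbol{\varphi})^2$. A minimal numerical sketch (our illustration, using a random field configuration):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
M = (np.eye(2) - 1j * sx) / np.sqrt(2)   # the unitary matrix defined above

rng = np.random.default_rng(0)
phi = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# magnetization bilinears m_mu = phi^dag M^dag sigma^mu M phi (real numbers)
m = np.array([(phi.conj() @ M.conj().T @ s @ M @ phi).real
              for s in (sx, sy, sz)])
Omega = (phi.conj() @ phi).real

# |m|^2 = Omega^2 for any two-component phi: with M unitary this is the
# spin-1/2 identity sum_mu (phi^dag sigma^mu phi)^2 = (phi^dag phi)^2
assert np.isclose((m ** 2).sum(), Omega ** 2)
```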
The lowest-order combination that satisfies all the symmetries of the problem, but explicitly breaks the \SU2 symmetry, is of eighth order in $\boldsymbol{\varphi}$, and its contribution to the action is \beq{EqLag1} {\mathcal{L}} _1 = v \sum _\mu m _\mu^4\punc{.} \end{equation} Symmetry does not fix the coefficient $v$, and it is in general allowed to take either sign. In the ordered phases that we describe here, however, a single component of the magnetization acquires a nonzero expectation value; such phases require $v < 0$. \subsection{Emergent \SU2 symmetry} The full action is given by the continuum limit of \refeq{EqDualAction} and can be written as a three-dimensional integral over a Lagrangian density ${\mathcal{L}} = {\mathcal{L}} _0 + {\mathcal{L}} _1 + \cdots$. The lowest order terms ${\mathcal{L}} _0$ take the forms expected for a \U1 gauge field minimally coupled to matter fields $\boldsymbol{\varphi}$, with the lattice curl becoming a differential curl, and the modified hopping term becoming a covariant derivative: \beq{EqLag0} {\mathcal{L}} _0 = |(\boldsymbol{\nabla} - \mathrm{i}{\mathbf{A}})\boldsymbol{\varphi}|^2 + s|\boldsymbol{\varphi}|^2 + u(|\boldsymbol{\varphi}|^2)^2 + \kappa |\boldsymbol{\nabla} \times {\mathbf{A}} |^2\punc{.} \end{equation} The interaction term ${\mathcal{L}} _1$ is given in \refeq{EqLag1} and the ellipsis represents further terms that are expected to be irrelevant. The cubic symmetry of the original dimer problem means that ${\mathcal{L}}$ (considered as the action for a quantum problem) is space--time symmetric, and this fact has been used to express ${\mathcal{L}} _0$ in terms of the three-dimensional derivative operator $\boldsymbol{\nabla}$. The transition occurs when the quadratic coefficient $s$ is tuned through its critical value, and the gauge field ${\mathbf{A}}$ acquires a gap by the Anderson-Higgs mechanism. 
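The invariance of the anisotropy term $\sum_\mu m_\mu^4$ (and of the combination $m_x m_y m_z$) under the field transformations \refeqs{EqPhiTrans1}{EqPhiTrans2} can also be checked numerically. In the sketch below (ours, for illustration) the antiunitary reflection $\mathbb{X}_1$ acts on the conjugated field, as in \refeq{EqPhiTrans2}:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
M = (np.eye(2) - 1j * sx) / np.sqrt(2)

def mvec(phi):
    """Magnetization bilinears m_mu = phi^dag M^dag sigma^mu M phi (real)."""
    return np.array([(phi.conj() @ M.conj().T @ s @ M @ phi).real
                     for s in (sx, sy, sz)])

rng = np.random.default_rng(1)
phi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
m0 = mvec(phi)

# transformation matrices of the field transformations; X1 is antiunitary,
# so it acts on the conjugated field
K1 = np.exp(1j * np.pi / 6) * sx
K2 = np.exp(-1j * np.pi / 6) * sz
R = np.exp(1j * np.pi / 12) / np.sqrt(2) * np.array([[1, 1], [-1j, 1j]])
X1 = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)

images = (K1 @ phi, K2 @ phi, R @ phi, X1 @ phi.conj())
for phi2 in images:
    m = mvec(phi2)
    # sum_mu m_mu^4 (and also m_x m_y m_z) is left invariant: the m_mu only
    # pick up sign changes and permutations under these operations
    assert np.isclose((m ** 4).sum(), (m0 ** 4).sum())
    assert np.isclose(m.prod(), m0.prod())
```

The transformations act on $(m_x,m_y,m_z)$ as signed permutations, which makes both invariances manifest; only the cubic reflections $\mathbb{T}_\mu$, which flip a single $m_\mu$, rule out the cubic invariant $m_x m_y m_z$.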
The superfluid phase of the boson problem (the Coulomb phase of the dimer problem) is equivalent to the Coulomb phase of this gauge theory, within which the long-range correlations are reproduced by the gapless gauge field. The Mott insulator (ordered dimers) corresponds to the Higgs phase, where there are no gapless excitations and \SU2 symmetry is broken by condensation of the matter field $\boldsymbol{\varphi}$. Note that the gauge field ${\mathbf{A}}$ is by construction noncompact, so that monopoles are forbidden. Such monopoles would give points in space-time with nonzero divergence $\divv {\mathbf{J}}$ of the boson current, which map to monomers in the original dimer problem. It is not firmly established whether the action ${\mathcal{L}}$ has any nontrivial fixed points under the renormalization group (RG). As noted by Balents et al.,\cite{Balents} such a conjecture is difficult to test analytically, since, for example, an expansion in $\epsilon = 4 - d$, where $d$ is the spatial dimension, has no weak-coupling fixed points. As we remark in Section~\ref{SecDiscussion}, the numerical evidence remains inconclusive. It is clear from simulations, however, that the transition in the original dimer model is, at most, weakly first order, with a correlation length of at least hundreds of lattice spacings, and so a continuum description is appropriate. The terms appearing in ${\mathcal{L}} _0$ have full \SU2 symmetry, while the lowest-order term breaking this to the microscopic cubic symmetry, ${\mathcal{L}} _1$, is of eighth order in the field $\boldsymbol{\varphi}$. It is therefore highly likely that this term, as well as higher-order symmetry-breaking terms, is irrelevant in the continuum, and that the effective theory is given by ${\mathcal{L}} _0$ and has an emergent \SU2 symmetry. 
This leads naturally to the conclusion that physical properties measured sufficiently close to the transition should show full \SO3 symmetry, rather than the reduced cubic symmetry of the microscopic model. This claim is in fact in agreement with qualitative observations made by Misguich et al.,\cite{Misguich}\ based on their numerical results near the transition. First, the measured distribution of the order parameter $\mathbf{m}$ at criticality becomes increasingly spherically symmetric for larger system sizes. Second, the dimer correlation function $\langle d_\mu ({\mathbf{r}}) d_\nu ({\boldsymbol{0}}) \rangle$, while taking the form given in \refeq{EqDipolarCorrelations} deep within the Coulomb phase, is dominated by a ``spinlike'' contribution near the transition. This follows from the fact that the magnetization couples directly (without derivatives) to a bilinear in the critical field, according to \refeq{EqMagnetizationFromVortices}, and forms a three-dimensional representation of \SU2. We therefore expect \beq{EqMagnetizationCorrelations} \langle m_\mu ({\mathbf{r}}) m_\nu ({\boldsymbol{0}}) \rangle \sim \delta _{\mu\nu}|{\mathbf{r}}|^{-d+2-\eta _m}\punc{,} \end{equation} where the critical exponent $\eta _m$ is the anomalous dimension of the magnetization (and a similar expression holds for the dimer correlation function). The absence of a weak-coupling fixed point prevents us from making quantitative predictions about the anomalous dimension $\eta _m$. As noted by Misguich et al.,\cite{Misguich} these properties, while explained straightforwardly by an \SU2-symmetric continuum theory, are incompatible with other, more obvious, candidate continuum theories, such as the \O3 model.
\section{Discussion} \label{SecDiscussion} In this paper we have presented a derivation of a continuum theory to describe the phase transition from a dimer crystal to a Coulomb phase observed in simulations of a classical dimer model on the cubic lattice.\cite{Alet} Our approach proceeds by first mapping to a quantum model of bosons, which has a corresponding phase transition from a Mott insulator with density-wave order to a superfluid. The second stage of the derivation consists of applying a duality mapping to this quantum model, allowing the phase transition to be described in terms of an Anderson-Higgs transition for the dual vortex fields. The continuum theory we derive coincides with one obtained for the same model by Charrier et al.\cite{Charrier}\ and by Chen et al.,\cite{Chen} using a duality mapping applied directly to the classical model. The Lagrangian ${\mathcal{L}} _0$ is given in \refeq{EqLag0} and is referred to as NC$CP^1$; we expect the long-wavelength properties near the transition to be described by this theory. In other words, there should be some range of length scales, much larger than the lattice scale, at which ${\mathcal{L}} _0$ provides an appropriate description. This assumes that ${\mathcal{L}} _1$ and all higher-order terms are irrelevant in the RG sense. As we have noted above, the numerical results of Misguich et al.\cite{Misguich}\ support the claim that at least ${\mathcal{L}} _1$ and all other terms breaking \SU2 symmetry are irrelevant. The question of whether the transition in the dimer model is continuous or first order remains open, but is clearly related to the same question for the transition between the Coulomb and Anderson-Higgs phases in the model ${\mathcal{L}} _0$. This remains contentious,\cite{Motrunich2,Motrunich,Sandvik,Melko,Kuklov,Kuklov2} with numerical evidence so far inconclusive and little prospect of insight from analytics.
The theory ${\mathcal{L}} _0$ has one free parameter $\kappa$ (after $s$ is tuned to its critical value), and it is possible that it has a continuous transition for some value of $\kappa$ and not for others.\cite{Motrunich2} (It is not clear how $\kappa$ depends on the details of the classical energy $\mathcal{E}$---for instance, whether it increases or decreases when one adds further-neighbor interactions.) In fact, the most economical interpretation of the available numerical results is that the original cubic dimer model (with only nearest-neighbor interactions) is near a tricritical point. Evidence for this comes from the critical exponents in the original dimer model and the results of adding deformations. The critical exponents reported by Alet et al.\cite{Alet}\ for the cubic dimer model are, as noted by the authors, consistent with those expected generically at a tricritical point (and in fact those seen at the tricritical point in an NC$CP^1${} model by direct simulations\cite{Motrunich2}). They are inconsistent with those for the generic NC$CP^1${} transition as observed in simulations of various models expected to be described by this theory.\cite{Motrunich2,Motrunich,Sandvik,Melko} The scenario that the cubic dimer model with only nearest-neighbor interactions is near a tricritical point suggests that modifications to the Hamiltonian should drive the system away from this tricritical point, either to a strongly first-order transition or to the generic continuous transition. Recent numerical results show that some perturbations, including ones that preserve the full symmetry\cite{Papanikolaou} and ones that break it,\cite{Chen} can make the transition clearly first order. Confirmation of the tricritical-point scenario would require the demonstration of a line of continuous transitions for some range of parameters. 
Charrier et al.\cite{Charrier}\ have studied a model where \refeq{EqClosePacking} is still enforced, but the variables $d_\mu ({\mathbf{r}})$ are allowed to take all integer values (although their simulations treated a dual model). As for the unperturbed dimer model, they found results consistent with a continuous transition, but in this case with exponents in agreement with those reported for NC$CP^1$.\cite{Motrunich2,Motrunich,Sandvik,Melko} By introducing an energy cost for larger values of $d_\mu$, it is possible to interpolate continuously between this generalized version and the original dimer model; simulations for a range of values would be a useful test of the tricritical-point scenario. \begin{acknowledgments} We thank S. Simon for helpful discussions. The work was supported in part by EPSRC Grant No.\ EP/D050952/1. \end{acknowledgments}
\section{Introduction} The discovery of high-temperature superconductivity (SC) in copper-oxide based compounds -- cuprates (Bednorz and M\"uller 1986) -- revived the interest in materials containing transition elements. The main feature of these materials is the crucial role of electron-electron interactions, which can result in electronic properties very unusual when compared to the behaviour of conventional metals. Although the unconventional SC is clearly the most puzzling phenomenon in cuprates, in this review we focus predominantly on the analysis of more or less anomalous properties of the normal state, which deviate essentially from the standard understanding of electrons in metals and still present one of the major theoretical challenges in solid-state physics. The cuprate superconductors have a very anisotropic structure, whose common building blocks are layers, formed in cuprates by combining one of the three possible structural elements containing Cu and O, as shown in Fig.~\ref{1.1}b. The CuO layered structures are stacked in the crystal, separated by various intercalant layers in different cuprates. In spite of vast differences in the structure of unit cells, the electronic properties of the whole family of cuprates are quite universal. This can be explained by the predominant role of the generic CuO$_2$ planes, Fig.~\ref{1.1}a, where the conducting electrons reside. The electronic coupling between CuO$_2$ planes is very weak, resulting in a huge anisotropy between the in-plane and the perpendicular resistivity, in the anisotropy of the SC coherence length etc. The intermediate layers serve mainly as a charge reservoir for the planes. Consequently, the properties of cuprates are quite well classified according to the doping level of the reference CuO$_2$ electronic structure.
\begin{figure}[ht] \centering \iffigure \mbox{ \subfigure[]{ \epsfig{file=fig_1.1a.ps,height=3.3cm}} \quad \subfigure[]{ \epsfig{file=fig_1.1b.ps,height=3.3cm}}} \fi \caption{(a) Schematic structure of copper-oxide planes and (b) three possible building blocks of the planes, after Fulde (1991).} \label{1.1} \end{figure} As now well established, the reference cuprate compounds, such as La$_2$CuO$_4$ and YBa$_2$Cu$_3$O$_6$, are Mott (charge-transfer) insulators due to strong correlations which induce in a half-filled band a charge gap $\sim 2~$eV. The spin degrees of freedom of planar CuO$_2$ electrons can be well mapped onto the properties of a planar antiferromagnetic (AFM) $S=1/2$ Heisenberg model. The AFM ground state of undoped materials, emerging from strong correlations, was recognized quite early as a crucial starting point for theoretical considerations (Anderson 1987) of doped materials, which are strange metals in the normal state and exhibit a high transition temperature $T_c$ to the SC state. There are by now numerous indications that essential features of electronic properties of the doped AFM, as realized in cuprates, are well represented by prototype single-band models of correlated electrons, as the Hubbard model and the $t$-$J$ model (Rice 1995). In spite of their apparent simplicity, both models are notoriously difficult to treat analytically, in particular in the most interesting regime of strong correlations. The lack of analytical tools for correlated electrons (for a general introduction see Fulde 1991) has increased the efforts towards numerical approaches (Dagotto 1994), which can predominantly be divided into two categories: the quantum Monte Carlo (QMC) methods and the exact diagonalization (ED) methods. The $t$-$J$ model, which incorporates the strong correlation requirement explicitly, is more adapted to the ED approach. So far most calculations have been performed for the ground state (g.s.)
at $T=0$, where the standard Lanczos technique (Lanczos 1950) offers an efficient ED analysis of small systems of reasonable sizes. Recently, the present authors (Jakli\v c and Prelov\v sek 1994a) introduced a novel numerical method, combining the Lanczos method with a random sampling, which allows for an analogous treatment of statics and dynamics of many-body quantum models at $T>0$. The latter method, further referred to as the finite-temperature Lanczos method (FTLM), and results for the $t$-$J$ model obtained using this method, are the main subject of this review. The final goal is to understand properties of the doped AFM in general, and of high-$T_c$ cuprates in particular. In the absence of reliable analytical methods and results, numerical calculations can help to answer several crucial questions. Are strong correlations, as incorporated in prototype models, enough to account for anomalous normal-state properties of the strange metal? What are the relevant energy and temperature scales in the doped AFM, as represented by the $t$-$J$ model, and in which properties do they show up? What is the unifying phenomenological description of the normal state? Is the $t$-$J$ model sufficient, or what ingredients should be added to account qualitatively and quantitatively for the observed properties? Can we learn something macroscopically meaningful from the studies of small systems, and why? We note that at present the SC seems to be beyond the reach of numerical approaches including the FTLM, hence we do not investigate here in more detail the possible existence of SC and its origin in model systems. \setcounter{equation}{0}\setcounter{figure}{0} \section{Cuprates as doped antiferromagnets} \subsection{Electronic phase diagram and properties of the normal state} The reference cuprate compounds are AFM insulators and are so far the best understood.
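To make the FTLM idea introduced above concrete, the following minimal sketch (our own toy illustration, not the authors' implementation; a dense random matrix stands in for a sparse many-body Hamiltonian, and all function names are ours) estimates a thermal average $\langle H \rangle_T$ by combining $m$-step Lanczos runs with random sampling over start vectors:

```python
import numpy as np

def lanczos_spectrum(H, r, m):
    """m-step Lanczos from start vector |r>: returns Ritz values eps_j and
    weights |<r|psi_j>|^2 of the tridiagonal projection of H."""
    n = H.shape[0]
    V = np.zeros((m, n))
    a = np.zeros(m)
    b = np.zeros(m - 1)
    V[0] = r / np.linalg.norm(r)
    for j in range(m):
        w = H @ V[j]
        a[j] = V[j] @ w
        w -= V[: j + 1].T @ (V[: j + 1] @ w)   # full re-orthogonalization
        if j < m - 1:
            b[j] = np.linalg.norm(w)
            V[j + 1] = w / b[j]
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    eps, U = np.linalg.eigh(T)
    return eps, U[0] ** 2

def ftlm_energy(H, beta, m=30, R=300, seed=0):
    """FTLM estimate of <H>_T: the thermal trace is sampled over R random
    start vectors, with the Boltzmann factor evaluated on Ritz values."""
    rng = np.random.default_rng(seed)
    num = den = 0.0
    for _ in range(R):
        r = rng.standard_normal(H.shape[0])
        eps, w = lanczos_spectrum(H, r, m)
        num += np.sum(eps * np.exp(-beta * eps) * w)
        den += np.sum(np.exp(-beta * eps) * w)
    return num / den

# demo: compare with full diagonalization on a small random "Hamiltonian"
rng = np.random.default_rng(7)
A = rng.standard_normal((60, 60))
Htoy = (A + A.T) / (2 * np.sqrt(60))
ev = np.linalg.eigvalsh(Htoy)
beta = 2.0
exact = np.sum(ev * np.exp(-beta * ev)) / np.sum(np.exp(-beta * ev))
approx = ftlm_energy(Htoy, beta)
```

In a real application $H$ is a sparse many-body matrix and both $m$ and $R$ are far smaller than the Hilbert-space dimension; for very low $T$ one would additionally shift the spectrum by a global ground-state estimate to keep the Boltzmann weights stable.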
Properties of various other layered cuprates can be interpreted in terms of doping the reference material, where (mainly) holes are introduced into CuO$_2$ planes. One of the major conceptual achievements, which emerged from careful experimental investigations of high-quality materials in the last decade, has been the realization of a quite universal electronic phase diagram (Hwang {\it et al.} 1994, Batlogg {\it et al.} 1994, Batlogg 1997), revealing characteristic temperature scales as they develop as a function of hole doping. It is at present quite common to classify materials with respect to doping as underdoped, optimally doped and overdoped, and the corresponding phase diagram is shown in Fig.~\ref{2.1}. \begin{figure} \centering \iffigure \epsfig{file=fig_2.1.ps,height=8cm} \fi \caption{ Schematic electronic phase diagram of cuprates, after Batlogg (1997). } \label{2.1} \end{figure} \subsubsection{Optimum doping regime} Experimentally, the optimum doping is chosen to correspond to materials with the highest $T_c$ within the given class of chemically and structurally related compounds. It was realized soon after the discovery of high-$T_c$ SC that the normal-state properties at $T>T_c$ of the optimally doped materials are also very anomalous, but at the same time the most universal. The prominent feature is the resistivity law $\rho \propto T$ (Takagi {\it et al.} 1992), valid essentially in the whole measurable range $T>T_c$ and clearly contradicting the normal Landau-Fermi-liquid (LFL) behaviour $\rho \propto T^2$. A related observation is that the dynamical conductivity $\sigma(\omega)$ does not fall off at larger $\omega$ according to the Drude form $\sigma \propto 1/\omega^2$, but rather as $\sigma \propto 1/\omega$ (Tanner and Timusk 1992).
The clearest evidence for an anomalous spin dynamics comes from the NMR and NQR relaxation (Slichter 1994), where the relaxation rate $1/T_1$ on planar $^{63}$Cu in optimally doped La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) with $x_{opt} \sim 0.15$ is nearly $T$-independent (as well as nearly doping independent) in contrast to the usual Korringa law for metals $1/T_1 \propto T$. The qualitative difference, i.e. a large enhancement of low-frequency spin fluctuations at low $T$, can be related to the persistence of short-range AFM fluctuations, even at the optimum doping. Support for this comes also from neutron-scattering experiments, where a substantial AFM correlation length $\xi$ has been measured in the same class of materials, with $\xi \sim 3.8~\mbox{\AA}/\sqrt{x}$ (Birgeneau {\it et al.} 1988). On the other hand, several properties at the optimum doping seem to be close to the normal LFL picture. The angle-resolved photoemission spectroscopy (ARPES) on cuprates (Shen and Dessau 1995), most reliable for BiSrCaCuO (BISCCO) compounds, shows electronic excitations consistent with a large Fermi surface (FS) and with the conserved FS volume (Luttinger theorem), although spectral shapes are quite distinct from a simple LFL picture. The specific heat (Loram {\it et al.} 1993) roughly follows in the normal state the LFL behaviour $C_v =\gamma T$ with $\gamma$ not very far from the free-fermion value, consistent with a nearly $T$-independent uniform susceptibility $\chi_0$, whereby the Wilson ratio is close to the free-fermion one (Loram {\it et al.} 1996). It is also a unifying characteristic of the optimum doping that the properties do not reveal any additional characteristic $T$ scale above $T_c$. \subsubsection{Overdoped and underdoped regime} In overdoped materials $T_c$ is decreasing and finally vanishing with increased doping. At the same time the electronic properties are getting closer to the usual metallic behaviour consistent with the LFL scenario.
E.g., the resistivity behaviour moves towards the normal LFL form $\rho \propto T^2$ (Takagi {\it et al.} 1992), spectral shapes of electronic excitations, as revealed by ARPES, become sharper near the FS (Marshall {\it et al.} 1996) etc. These facts can be put together with a decreasing intensity of AFM fluctuations. It is thus plausible that we are dealing in the overdoped regime with the crossover to the normal LFL; however, this crossover is not a trivial one and is so far not well understood either. The most evident progress in the investigations of normal-state properties has been made in the last few years for the underdoped cuprates. In contrast to the optimum doping, experiments reveal in this regime additional characteristic temperatures $T>T_c$ (Batlogg {\it et al.} 1994), which show up as the crossovers where particular properties qualitatively change. As summarized in Fig.~\ref{2.1}, there seems to be an indication for two distinct crossovers. The existence of both as well as their distinction is still widely debated, nevertheless we will refer to them as the AFM crossover scale $T^*$ and the pseudogap scale $T_{sg}$ for the lower one (Batlogg 1997). The $T^*$ scale (Batlogg {\it et al.} 1994) shows up most clearly as the maximum of the spin susceptibility $\chi_0(T=T^*)$ (Torrance {\it et al.} 1989). The in-plane resistivity $\rho(T)$ is linear $\rho \propto T$ for $T>T^*$ and decreases more steeply for $T<T^*$. Also characteristic is the anomalous $T$-dependence of the Hall constant $R_H(T)$ for $T<T^*$ (Ong 1990, Hwang {\it et al.} 1994). The latter is evidently hole-like in the classical sense $R_H(T \gtrsim T_c) \propto 1/x$, which is also not properly understood theoretically.
It seems plausible that the $T^*$ crossover is related to the onset of short-range AFM correlations for $T<T^*$, since in the undoped AFM $T^*$ corresponds just to a well-understood maximum due to a gradual transition from a disordered paramagnet to the one with short-range AFM correlations. The crossover $T_{sg}$ has been first identified in connection with the decrease of the NMR relaxation $1/T_1$ for $T<T_{sg}$ (Takigawa {\it et al.} 1991, Slichter 1994), indicating the reduction of low-energy spin excitations interpreted as the opening of the spin pseudogap in underdoped materials. The most striking evidence for an additional energy scale in underdoped cuprates is the observation of the leading-edge shift in ARPES measurements at $T>T_c$ (Marshall {\it et al.} 1996), indicating features of the $d$-wave SC gap persisting within the normal phase. It should be pointed out, however, that the designation of crossover features is at present still controversial. In particular it is not evident whether we are dealing with two or more essentially different energy scales. \subsection{Phenomenology of the normal state} Properties of the normal LFL follow from the one-to-one correspondence of low-energy excitations of the interacting fermion system to that of a free-fermion gas. The prerequisite is that the volume of the FS is conserved (Luttinger 1960). Essential are the well-defined quasiparticles (QP) with vanishing damping at the FS, with $\Gamma \propto (E-E_F)^2$. Consequences are the linear specific heat $C_V =\gamma T$, nearly $T$-independent static and dynamical spin susceptibilities, the Korringa law for NMR relaxation $1/T_1 \propto T$, the resistivity $\rho(T) \propto T^2$ etc. Experimental facts on cuprates contradict the usual LFL picture. Several more or less elaborated scenarios have been proposed to capture the main anomalous features.
Focusing on the importance of AFM spin fluctuations, the concept of a {\it nearly AFM Fermi liquid} (NAFL) has been elaborated (see e.g. Monthoux and Pines 1994). Here one assumes that at low frequencies $\chi''(\vec{q},\omega)\propto \omega$ at all $\vec{q}$, as expected in an LFL, but with strongly enhanced fluctuations near the AFM wavevector $\vec{Q}=(\pi,\pi)$, corresponding to the critical slowing down in the proximity of a phase transition to the AFM-ordered state. The following form has been proposed, which can be derived also via the self-consistent paramagnon theory (Moriya {\it et al.} 1990), \begin{equation} \chi(\vec{q},\omega)=\frac{\chi_{\vec Q}}{ 1+ \xi^2 |\vec{q}-\vec{Q}|^2 - i \omega/\omega_{SF}}\;. \label{cp1} \end{equation} In the proposal by Millis {\it et al.} (1990), originally devoted to the interpretation of the NMR and NQR relaxation, the main $T$-dependence is expected to arise from the AFM correlation length $\xi$, which is assumed to show a critical behaviour as $T$ is decreased, i.e. $\xi^2\propto T_x/(T+T_x)$. In order to explain the anomalous NQR relaxation $1/T_1(T) \sim const$ (Imai {\it et al.} 1993), contradicting the Korringa law, strong $T$-dependence of $\xi$ is essential with $T_x \sim 100~$K, as well as low $\omega_{SF}$ and large $\chi_{\vec Q}$ for $T\sim T_c$ (Monthoux and Pines 1994). The form of spin fluctuations is the basis for further investigations of the charge dynamics, strongly coupled to spin degrees of freedom. The calculated response functions, such as the conductivity, also appear anomalous; e.g., the resistivity is close to the linear law $\rho \propto T$. There are several proposals, analogous to the NAFL in the basic idea of {\it the proximity to a critical point}, enhancing fluctuations in the optimum doping regime. One proposal invokes the quantum critical scaling of the spin dynamics, established in the nonlinear sigma model (Chakravarty {\it et al.} 1989), induced by doping the AFM (Sokol and Pines 1993).
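The qualitative content of the NAFL form (\ref{cp1}) is easy to make explicit numerically: $\chi''$ is sharply peaked at $\vec{Q}$ on the scale $1/\xi$, and at fixed $\vec{q}$ it is linear in $\omega$ for $\omega \ll \omega_{SF}$. A short sketch (our illustration; parameter values are arbitrary):

```python
import numpy as np

def chi_nafl(q, omega, xi, omega_sf, chi_Q=1.0, Q=(np.pi, np.pi)):
    """NAFL dynamical spin susceptibility, Eq. (cp1):
    chi(q, w) = chi_Q / (1 + xi^2 |q - Q|^2 - i w / w_SF)."""
    dq2 = (q[0] - Q[0]) ** 2 + (q[1] - Q[1]) ** 2
    return chi_Q / (1.0 + xi ** 2 * dq2 - 1j * omega / omega_sf)

xi, w_sf = 4.0, 0.1          # arbitrary illustrative values
# Im chi is strongly peaked at the AFM wavevector Q ...
at_Q = chi_nafl((np.pi, np.pi), 0.05, xi, w_sf).imag
away = chi_nafl((np.pi / 2, np.pi), 0.05, xi, w_sf).imag
# ... and is linear in omega at low frequency (LFL-like at fixed q)
slope1 = chi_nafl((np.pi, np.pi), 1e-3, xi, w_sf).imag / 1e-3
slope2 = chi_nafl((np.pi, np.pi), 2e-3, xi, w_sf).imag / 2e-3
```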
Another scenario relates the quantum critical point to a charge-density-wave instability (Castellani {\it et al.} 1995). An alternative interpretation of experimental facts has been provided by the concept of a marginal Fermi liquid (MFL) (Varma {\it et al.} 1989). The hypothesis is that there exist excitations, contributing both to the charge and spin response, which show in a broad range of wavevectors $\vec q$ anomalous susceptibilities of the form \begin{equation} \chi''(\vec{q},\omega)\sim\left\{{C~ \omega/ T ~~~{\rm for}~~~ |\omega|<T, \atop C~{\rm sgn}\,\omega ~~~{\rm for}~~~ |\omega|>T}\right. \label{cp2} \end{equation} Due to the scattering on bosonic excitations with the spectrum (\ref{cp2}), the single-particle self-energy $\Sigma(\vec k,\omega)$ is also anomalous. Assuming its $\vec k$-independence in a broad range, it has been postulated using phenomenological arguments (Littlewood and Varma 1991), \begin{equation} \Sigma(\omega)\sim \pi \lambda \left[2 \omega\ln\frac{x}{\omega_c} -i x\right], \qquad x=\max(|\omega|,\pi T), \label{cp3} \end{equation} where $\omega_c$ is a high-frequency cutoff. Hence the QP scattering rate $1/\tau \propto -\Sigma''(\omega)$ is anomalous, i.e. $1/\tau(\omega) \propto x$. It should, however, be mentioned that the Ansatz (\ref{cp3}) is not unique and modified forms have also been invoked; e.g., in the analysis of the optical conductivity (El Azrak {\it et al.} 1994, Baraduc {\it et al.} 1995) a better fit has been obtained with \begin{equation} -\Sigma''(\omega)\sim \pi \lambda ( |\omega| +\pi T). \label{cp3a} \end{equation} While the FS should remain well defined with the volume equal to that of free fermions, the corresponding QP weight $\tilde Z$ at the FS, given by \begin{equation} \tilde Z^{-1} = 1- \partial \Sigma'(\omega) / \partial\omega |_{\omega =0}, \label{cp4} \end{equation} vanishes on the FS in analogy to the case of a one-dimensional Luttinger liquid (Haldane 1981).
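The MFL forms (\ref{cp2}) and (\ref{cp3}) are simple enough to encode directly; the sketch below (our illustration, with arbitrary coupling constants) makes explicit that the only low-energy scale is $T$ itself, so that the scattering rate at the Fermi level grows linearly with $T$, which is the origin of the linear resistivity:

```python
import numpy as np

def chi_mfl(omega, T, C=1.0):
    """Marginal-Fermi-liquid susceptibility, Eq. (cp2):
    C*omega/T for |omega| < T, C*sgn(omega) otherwise."""
    omega = np.asarray(omega, dtype=float)
    return np.where(np.abs(omega) < T, C * omega / T, C * np.sign(omega))

def mfl_rate(omega, T, lam=1.0):
    """QP scattering rate 1/tau = -Sigma'' ~ pi*lam*max(|omega|, pi*T),
    Eq. (cp3); at omega = 0 this is linear in T."""
    return np.pi * lam * np.maximum(np.abs(omega), np.pi * T)
```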
The MFL concept accounts for several remarkable properties at the optimum doping, such as the anomalous resistivity $\rho(T)\propto T$, the optical conductivity $\sigma(\omega) \propto 1/\omega$, the NMR and the NQR relaxation rate $1/T_1(T) \sim const$. Note that the only low-energy scale within the MFL scenario, equations (\ref{cp2}) - (\ref{cp3a}), is given by $T$. Although there are certain similarities to the NAFL and other critical-point scenarios, the essential difference of the MFL concept is a non-critical $\vec q$ dependence. Hence the critical behaviour within the MFL is rather a local one. Attempts to derive the MFL behaviour from a microscopic model have so far not been successful. \subsection{Models of correlated electrons in cuprates} A similarity of electronic properties in a wide class of different cuprates serves as a strong indication that the appropriate microscopic model should be quite universal and must in the first place describe the electrons restricted to CuO$_2$ orbitals within a single plane. There has been quite an extensive effort put into finding a proper model, and at present there seems to be a wide consensus on its main features. This should be contrasted to various other materials with interacting electrons, e.g. heavy fermions and 1D conductors, where microscopic models are much less well established. Since the physics of electrons in CuO$_2$ planes is governed by Cu $3d_{x^2-y^2}$ orbitals and O $2p_{x,y}$ orbitals (see the structure in Fig.~\ref{1.1}), quite a complete model seems to be the three-band Hubbard model (Emery 1987), describing fermion carriers (holes) added to closed $3d^{10}$ and $2p^6$ shells. Parameters are the Cu--O hopping $t_{pd}$, the direct O--O hopping $t_{pp}$, the on-site energies $\varepsilon_d$, $\varepsilon_p$, and the corresponding Coulomb repulsions $U_d$, $U_p$ on Cu and O sites, respectively. Parameters correspond to the charge-transfer regime with $\Delta =\varepsilon_p-\varepsilon_d>t_{pd}$ and $U_d > \Delta$.
The reference (insulator) material contains one fermion/cell, entering predominantly the $d$ orbitals. Due to $U_d\gg t_{pd}$, a large charge gap $\sim 2$eV opens at half filling, while the spin degrees of freedom can be mapped onto the isotropic $S=1/2$ Heisenberg model, first proposed in connection with cuprates by Anderson (1987). Holes added by doping enter the singlets, introduced by Zhang and Rice (1988), which can be treated in a fermion model as empty sites (holes). Such a reduction, confirmed also with other analytical approaches (Zaanen and Ole\'s 1988, Ram\v sak and Prelov\v sek 1989) and cluster methods (Hybertsen {\it et al.} 1990), leads to a single-band $t$-$J$ model (Rice 1995), \begin{equation} H=-t\sum_{\langle ij\rangle s}(\tilde{c}^\dagger_{js}\tilde{c}_{is}+ {\rm H.c.})+J\sum_{\langle ij\rangle} (\vec{S}_i\cdot \vec{S}_j - {1\over 4} n_i n_j), \label{cm1} \end{equation} describing fermions in a tight-binding band with the hopping parameter $t \propto t_{pd}^2/\Delta$. Here $\vec{S}_i=(1/2)\sum_{ss'}c^\dagger_{is}\vec{\sigma}_{ss'}c_{is'}$ are the local spin operators interacting with the exchange parameter $J\propto t^4_{pd}/U_d \Delta<t$. Due to the strong on-site repulsion, states with doubly occupied sites are explicitly forbidden and we are dealing with projected fermion operators $\tilde{c}_{is}=c_{is}(1-n_{i,-s})$. By explicitly projecting out states with doubly occupied sites, the $t$-$J$ model only allows for charge fluctuations in terms of a hole motion, while at half-filling it is equivalent to the $S=1/2$ Heisenberg model. The $t$-$J$ model is expected to capture the essential low-energy physics of a doped AFM as well as of cuprates in the whole regime of dopings. The challenging regime of the model is the one of strong correlations with $J<t$. The $t$-$J$ model is the simplest model which describes the interplay of magnetism and itinerant metallic properties of cuprates.
A more rigorous reduction of the three-band model (Zaanen and Ole\'s 1988, Ram\v sak and Prelov\v sek 1989) leads to additional terms, which could be as well represented within a reduced space without doubly occupied sites. Among possible generalizations most attention has been recently devoted to the addition of the next-nearest-neighbour (n.n.n.) hopping ($t'$) terms, emerging from the $t_{pp}$ hopping in the three-band model, \begin{equation} H_{t'}=-t'\sum_{\langle \langle ij\rangle \rangle s} (\tilde c^\dagger_{js}\tilde c_{is} + {\rm H.c.}), \label{cm2} \end{equation} representing the hopping along the diagonal in Fig.~\ref{1.1}a. The analogous $t''$ term describes the n.n.n. hopping along each axis. It seems necessary to include such a term to account for the QP dispersion found in ARPES experiments on undoped material, such as Sr$_2$CuO$_2$Cl$_2$ (Wells {\it et al.} 1995). In spite of their apparent smallness, $t'$ and $t''$ terms could lead to relevant corrections since they allow a free propagation of fermions even in an AFM (N\'eel) spin background. The $t$-$J$ model, as relevant for cuprates, should be considered on a planar square lattice. Parameters are rather well known (Rice 1995). $J$ is measured in the undoped AFM via the inelastic neutron scattering and the magnon dispersion, leading to $J\sim 0.13$~eV. The hopping parameter $t$ is not accessible directly, but cluster calculations (Hybertsen {\it et al.} 1990) and other considerations (Rice 1995) allow only for a narrow range of $t$ values. In our calculations we shall use further on (if not declared differently) $t=0.4$~eV and $J/t =0.3$. A possible range for $t'$ (Hybertsen {\it et al.} 1990) is more controversial, while in numerical studies (Tohyama and Maekawa 1994, Nazarenko {\it et al.} 1995) values $-0.35 < t'/t <-0.2$ are used.
Another prototype model for strongly correlated electrons is the traditional Hubbard model (Hubbard 1963), \begin{equation} H=-t\sum_{\langle ij\rangle s}(c^\dagger_{js} c_{is}+ {\rm H.c.}) +U\sum_i n_{i\uparrow}n_{i\downarrow}. \label{cm3} \end{equation} High-energy excitations of the Hubbard model could be different from those of the charge-transfer regime in the three-band model; still, it is expected that the low-energy properties map well on those of the $t$-$J$ model provided that $U\gg t$. In the following we shall mainly consider $T>0$ properties of the $t$-$J$ model. It should, however, be noted that prior to the introduction of the FTLM method most calculations of $T>0$ properties have been performed for the Hubbard model by applying QMC methods (Dagotto 1994). \setcounter{equation}{0}\setcounter{figure}{0} \section{Finite-Temperature Lanczos Method} This chapter is devoted to the description of the FTLM which we developed (Jakli\v c and Prelov\v sek 1994a) for studying correlated systems at $T>0$ and which is used to obtain the results described further on. The goal was to calculate $T>0$ properties in small model systems and to find a method comparable in efficiency to g.s. calculations employing ED methods, used extensively in the past decade in the study of correlated systems (Dagotto 1994). Here we should stress that the advantage of $T>0$ calculations is twofold. It is evident that we are interested in static and dynamical properties at nonzero $T$, in particular in their $T$-variation. On the other hand, the use of finite but small $T>0$ represents the proper approach to more reliable g.s. calculations in small systems. Namely, it is well known that g.s. ED results, in particular for dynamical quantities, are strongly influenced by finite-size artifacts. At $T>0$ the latter effects can to a large extent average out, leading to more macroscopic-like results.
Still, the understanding of the remaining finite-size restrictions is important for the proper application of the method, as will be described in Sec.~3.7. In Sec.~3.8. we put our approach into perspective with other methods yielding $T>0$ results for models of correlated electrons. These include mainly various QMC methods and the high-$T$ expansion (HTE) technique. \subsection{Lanczos algorithm and matrix elements} The scarcity of well-controlled analytical approaches to models of strongly correlated electrons has stimulated the development of computational methods. Conceptually the simplest is the ED method of small systems. In models of correlated electrons, however, one is dealing with the dimension of the basis (number of basis states) which grows exponentially with the size of the system. In the Hubbard model there are 4 basis states for each lattice site, therefore the number of basis states in the $N$-site system is $N_{st} \propto 4^N$. In the $t$-$J$ model $N_{st}$ still grows as $\propto 3^N$. In the ED of such systems one is therefore representing operators with matrices $N_{st}\times N_{st}$, which become large already for very modest values of $N$. A helpful circumstance is that for most interesting operators and lattice models only a small proportion of matrix elements is nonzero within the local basis. Then, the operators can be represented by sparse matrices with $N_{st}$ rows and at most $f(N)$ nonzero elements in each row. In this way memory requirements are relaxed and matrices up to $N_{st} \sim 10^7$ are considered in recent applications. Finding eigenvalues and eigenvectors of such large matrices is not possible with standard algorithms performing the full diagonalization. One must instead resort to power algorithms (see Parlett 1980), among which the Lanczos algorithm (Lanczos 1950) is one of the most widely known.
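To make the sparsity statement concrete, a minimal Python sketch (an illustration only; the spin-only Heisenberg limit with $N_{st}=2^N$ bit-coded basis states is assumed, and the matrix is stored densely just for simplicity) builds such a Hamiltonian and confirms that each row carries at most $\sim N$ nonzero elements:

```python
import numpy as np

def heisenberg_chain(N, J=1.0):
    """S=1/2 Heisenberg open chain, H = J * sum_i S_i.S_{i+1}.
    Basis states are bit strings (bit = spin up/down), N_st = 2**N;
    in practice such matrices are kept in sparse storage."""
    dim = 1 << N
    H = np.zeros((dim, dim))
    for n in range(dim):
        for i in range(N - 1):
            j = i + 1
            if ((n >> i) ^ (n >> j)) & 1:                 # antiparallel bond
                H[n, n] -= 0.25 * J                       # Sz_i Sz_j term
                H[n ^ ((1 << i) | (1 << j)), n] += 0.5 * J  # exchange flip
            else:
                H[n, n] += 0.25 * J
    return H

H = heisenberg_chain(10)                                  # N_st = 1024
max_nonzeros = int((H != 0).sum(axis=1).max())            # at most ~N per row
```

Each row contains the diagonal element plus one off-diagonal entry per antiparallel bond, i.e. at most $N$ nonzero elements out of $N_{st}=1024$, which is what makes the storage of $f(N)$ elements per row feasible.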
The Lanczos algorithm starts with a normalized random vector $|\phi_0\rangle$ in the vector space in which the Hamiltonian operator $H$ is defined. $H$ is applied to $|\phi_0\rangle$ and the resulting vector is split into a component parallel to $|\phi_0\rangle$ and a component $|\phi_1\rangle$ orthogonal to it, \begin{equation} H|\phi_0\rangle=a_0 |\phi_0\rangle + b_1|\phi_1\rangle. \label{fl1} \end{equation} Since $H$ is Hermitian, $a_0=\langle\phi_0|H|\phi_0\rangle$ is real, while the phase of $|\phi_1\rangle$ can be chosen so that $b_1$ is also real. In the next step $H$ is applied to $|\phi_1\rangle$, \begin{equation} H|\phi_1\rangle=b_1'|\phi_0\rangle +a_1 |\phi_1\rangle + b_2|\phi_2\rangle, \label{fl2} \end{equation} where $|\phi_2\rangle$ is orthogonal to $|\phi_0\rangle$ and $|\phi_1\rangle$. It also follows that $b_1'=\langle\phi_0|H|\phi_1\rangle = b_1$. Proceeding with the iteration, one gets after $i$ steps \begin{equation} H|\phi_i\rangle=b_i|\phi_{i-1}\rangle +a_i |\phi_i\rangle + b_{i+1}|\phi_{i+1}\rangle,\qquad 1\leq i \leq M. \label{fl3} \end{equation} By stopping the iteration at $i=M$ and putting the last coefficient $b_{M+1}=0$, the Hamiltonian can be represented in the basis of orthogonal Lanczos functions $|\phi_i\rangle$ as the tridiagonal matrix $H_M$ with diagonal elements $a_i$, $i=0\ldots M$, and offdiagonal ones $b_i$, $i=1\ldots M$. Such a matrix is easily diagonalized using standard numerical routines to obtain approximate eigenvalues $\epsilon_j$ and the corresponding orthonormal eigenvectors $|\psi_j\rangle$, \begin{equation} |\psi_j\rangle=\sum_{i=0}^M v_{ji} |\phi_i\rangle,\;\;\;j=0\ldots M. \label{fl5} \end{equation} It is important to realize that $|\psi_j\rangle$ are (in general) not exact eigenfunctions of $H$, but show a remainder \begin{equation} H|\psi_j\rangle-\epsilon_j|\psi_j\rangle= b_{M+1}v_{jM}|\phi_{M+1}\rangle.
\label{fl6} \end{equation} On the other hand it is evident from the diagonalization of $H_M$ that the matrix elements \begin{equation} \langle\psi_i|H|\psi_j\rangle=\epsilon_j\delta_{ij},\;\;\;i,j=0\ldots M \label{fl7} \end{equation} are exact, without restriction to the subspace $L_M$. If in the equation (\ref{fl3}) $b_{M+1}=0$, we have found an $(M+1)$-dimensional invariant subspace on which $H_M$ is already an exact representation of $H$. This inevitably happens when $M=N_{st}-1$, but for $M<N_{st}-1$ it can only occur if the starting vector is orthogonal to some invariant subspace of $H$. This should not be the case if the input vector $|\phi_0\rangle$ is random, without any hidden symmetries. The number of operations needed to perform $M$ Lanczos iterations scales as $MN_{st}$. Numerically the Lanczos procedure is subject to roundoff errors, introduced by the finite-precision arithmetics. This problem usually only becomes severe at larger $M>100$ (more than needed to get an accurate g.s. $|\psi_0\rangle$) and is seen in the loss of the orthogonality of vectors $|\phi_i\rangle$. It can be remedied by successive reorthogonalization (and normalization) of new states $|\phi'_i\rangle$, plagued with errors, with respect to the previous ones. However, this procedure requires $\sim M^2N_{st}$ operations, and can become computationally more demanding than the Lanczos iterations alone. This effect prevents one from using the Lanczos method, e.g., to tridiagonalize large matrices. The identity (\ref{fl7}) already shows the usefulness of the Lanczos method for the calculation of particular matrix elements. As an aid in the further discussion of the Lanczos method we consider the calculation of a matrix element \begin{equation} W_{kl}=\langle n|H^k B H^l A|n\rangle, \label{fe1} \end{equation} where $|n\rangle$ is an arbitrary normalized vector, and $A, B$ are general operators. One can calculate this expression exactly by performing two Lanczos procedures with $M=\max(k,l)$ steps.
The first one, starting with the vector $|\phi_0\rangle=|n\rangle$, produces the subspace $L_M=\{|\phi_j\rangle,\;j=0\ldots M\}$ along with approximate eigenvectors $|\psi_j\rangle$ and eigenvalues $\epsilon_j$. The second Lanczos procedure is started with the normalized vector \begin{equation} |\tilde\phi_0\rangle=A|\phi_0\rangle/\sqrt{\langle\phi_0| A^\dagger A|\phi_0\rangle}, \label{fe2} \end{equation} and results in the subspace $\tilde L_M=\{|\tilde\phi_j\rangle,\;j=0\ldots M\}$ with approximate $|\tilde\psi_j\rangle$ and $\tilde \epsilon_j$. We can now define projectors \begin{equation} P_m=\sum_{i=0}^m|\phi_i\rangle\langle\phi_i|,\;\; \tilde P_m=\sum_{i=0}^m|\tilde\phi_i\rangle\langle\tilde\phi_i|, \label{fe3} \end{equation} which for $m=M$ can also be expressed as \begin{equation} P_M=\sum_{i=0}^M|\psi_i\rangle\langle\psi_i|,\;\; \tilde P_M=\sum_{i=0}^M|\tilde\psi_i\rangle\langle\tilde\psi_i|. \label{fe4} \end{equation} By taking into account definitions (\ref{fe3}), (\ref{fe4}) we show that \begin{equation} H P_m=P_{m+1}HP_m=P_M H P_m, \qquad m<M. \label{fe5} \end{equation} Since in addition $|n\rangle=|\phi_0\rangle=P_0|\phi_0\rangle$ and $A|n\rangle\propto|\tilde\phi_0\rangle=P_0|\tilde\phi_0\rangle$, by successive use of the first equality in (\ref{fe5}) we arrive at \begin{equation} W_{kl}= \langle \phi_0|P_0HP_1H\ldots HP_kB\tilde P_l H\ldots\tilde P_1 H\tilde P_0 A|\phi_0\rangle. \label{fe6} \end{equation} Using the second equality in the equation (\ref{fe5}) and identities $P_0|\phi_0\rangle=P_M|\phi_0\rangle$, $\tilde P_0 A|\phi_0\rangle=\tilde P_M A|\phi_0\rangle$ we can rewrite $W_{kl}$ as \begin{equation} W_{kl}= \langle \phi_0|P_MHP_MH\ldots HP_MB\tilde P_MH\ldots\tilde P_M H\tilde P_M A|\phi_0\rangle. \label{fe7} \end{equation} We note that the necessary condition for the equation (\ref{fe7}) is $M\ge k,l$. 
We finally expand the projectors according to the expressions (\ref{fe4}) and take into account the orthonormality relation (\ref{fl7}) for the matrix elements, and get \begin{eqnarray} W_{kl}&=& \sum_{i_0=0}^M\ldots\sum_{i_k=0}^M\sum_{j_0=0}^M\ldots\sum_{j_l=0}^M \langle\phi_0|\psi_{i_0}\rangle\langle\psi_{i_0}| H|\psi_{i_1}\rangle\ldots \langle\psi_{i_{k-1}}|H|\psi_{i_k}\rangle \nonumber \\ & &\times \langle\psi_{i_k}|B|\tilde\psi_{j_l}\rangle \langle\tilde\psi_{j_l}| H|\tilde\psi_{j_{l-1}}\rangle\ldots \langle \tilde\psi_{j_1}|H |\tilde\psi_{j_0}\rangle \langle\tilde\psi_{j_0}|A|\phi_0\rangle =\nonumber \\ &=& \sum_{i=0}^M\sum_{j=0}^M\langle\phi_0|\psi_{i}\rangle\langle\psi_{i}| B|\tilde\psi_{j}\rangle\langle\tilde\psi_{j}|A|\phi_0\rangle (\epsilon_i)^k (\tilde \epsilon_j)^l. \label{fe8} \end{eqnarray} We have thus expressed the desired quantity in terms of the approximate Lanczos eigenvectors and eigenvalues alone. \subsection{Dynamical response in the ground state} Within the Lanczos algorithm the extreme (smallest and largest) eigenvalues $\epsilon_i$, along with their corresponding $|\psi_i\rangle$, converge rapidly to the exact eigenvalues $E_i$ and eigenvectors $|\Psi_i\rangle$. It is quite characteristic that usually (for nondegenerate states) $M=30-60 \ll N_{st}$ is sufficient to achieve the convergence to the machine precision of the g.s. energy $E_0$ and the wavefunction $|\Psi_0\rangle$, from which various static and dynamical correlation functions at $T=0$ can be evaluated. After $|\Psi_0\rangle$ is obtained, the g.s. dynamic correlation functions can be calculated within the same framework (Haydock {\it et al.} 1972).
Let us consider the autocorrelation function \begin{equation} C(t)=-i\langle\Psi_0|A^\dagger(t)A|\Psi_0\rangle =-i\langle\Psi_0|A^\dagger e^{i(E_0-H)t}A|\Psi_0\rangle \label{fd1} \end{equation} together with its transform, \begin{equation} \tilde C(\omega)=\int_0^\infty dt e^{i\omega^+t}C(t) =\langle\Psi_0| A^\dagger \frac{1}{\omega^+ +E_0-H}A|\Psi_0\rangle, \label{fd2} \end{equation} where $\omega^+=\omega+i\epsilon,~\epsilon>0$. To calculate $\tilde C(\omega)$, one has to run a second Lanczos procedure starting with the normalized function $|\tilde\phi_0\rangle$, equation (\ref{fe2}). The matrix for $H$ in the new basis $\tilde L_M$, with elements $\langle\tilde\phi_i|H|\tilde\phi_j\rangle=[\tilde H_M]_{ij}$, is again a tridiagonal one with elements $\tilde a_i$ and $\tilde b_i$. Terminating the Lanczos procedure at given $M$, one can evaluate $\tilde C(\omega)$ as a resolvent of the $\tilde H_M$ matrix, which can be expressed in the continued-fraction form (Haydock {\it et al.} 1972), \begin{equation} \tilde C(\omega)=\frac{\langle\Psi_0|A^\dagger A|\Psi_0\rangle} {\omega^+ +E_0-\tilde a_0-{\displaystyle\frac{\tilde b_1^2} {\omega^+ +E_0-\tilde a_1-{\displaystyle\frac{\tilde b_2^2} {\omega^+ +E_0-\tilde a_2-\ldots}}}}}\;, \label{fd3} \end{equation} terminating with $\tilde b_{M+1}=0$, although other termination functions have also been employed. The spectral function $C(\omega)=-(1/\pi) {\rm Im} \tilde C(\omega)$ is characterized by the frequency moments, \begin{equation} \mu_l=\int_{-\infty}^\infty \omega^l C(\omega) d\omega =\langle\Psi_0|A^\dagger(H-E_0)^lA|\Psi_0\rangle, \label{fd4} \end{equation} which are particular cases of the expression (\ref{fe1}) for $B=A^\dagger$, $k=0$, and $|n\rangle=|\Psi_0\rangle$. Using the equation (\ref{fe8}) we can express $\mu_l$ for $l\le M$ in terms of Lanczos quantities alone \begin{equation} \mu_l=\sum_{j=0}^M \langle\Psi_0|A^\dagger|\tilde\psi_j\rangle \langle\tilde\psi_j|A|\Psi_0\rangle (\tilde \epsilon_j-E_0)^l.
\label{fd5} \end{equation} Hence the moments are exact for given $|\Psi_0\rangle$ (as they would be for any other starting $|n\rangle$) provided $l\leq M$. The corresponding approximation for $C(t)$, equation (\ref{fd1}), within the restricted set of eigenfunctions $|\tilde\psi_j\rangle, j=0\ldots M$, can be written at given $M$ (assuming $\tilde b_{M+1}=0$) as \begin{equation} C(t) = -i\sum_{j=0}^M \langle\Psi_0|A^\dagger|\tilde\psi_j\rangle \langle\tilde\psi_j|A|\Psi_0\rangle e^{-i(\tilde \epsilon_j-E_0)t}. \label{fd6} \end{equation} Note that such a $C(t)$, expanded as a series in $t$ (short-time expansion), has the first $M$ terms exact, since the coefficients are just the moments $\mu_l$, equation (\ref{fd5}). As a practical matter we note that $|\langle\tilde\psi_j|A|\Psi_0\rangle|^2=\tilde v_{j0}^2$, hence no matrix elements need to be evaluated within this approach. In contrast to the continued fraction (\ref{fd3}), the expression (\ref{fd6}) also allows the treatment of more general correlation functions $\langle B(t)A\rangle$, with $B\ne A^\dagger$. In this case the matrix elements $\langle\Psi_0|B|\tilde\psi_j\rangle$ have to be evaluated explicitly. From the above one sees that the Lanczos method is very convenient for calculating the frequency moments as well as the dynamical $C(\omega)$. Certain ideas presented above will be used to construct the algorithm for $T>0$, discussed in the next subsections. \subsection{High-temperature expansion} The novel method for $T>0$ is based on the application of the Lanczos iteration, correctly reproducing the high-$T$ and large-$\omega$ series. The method is then combined with the reduction of the full thermodynamic trace to a random sampling. We present these ingredients in the following.
We first consider the expectation value of the operator $A$ in the canonical ensemble \begin{equation} \langle A\rangle=\sum_{n=1}^{N_{st}}\langle n|e^{-\beta H}A|n\rangle \biggm/ \sum_{n=1}^{N_{st}}\langle n|e^{-\beta H}|n\rangle, \label{fh1} \end{equation} where $\beta=1/k_BT$. A straightforward calculation of $\langle A\rangle$ requires the knowledge of all eigenstates $|\Psi_n\rangle$ and the corresponding energies $E_n$, obtained by the full diagonalization of $H$, \begin{equation} \langle A\rangle=\sum_{n=1}^{N_{st}}e^{-\beta E_n}\langle \Psi_n|A| \Psi_n\rangle \biggm/ \sum_{n=1}^{N_{st}} e^{-\beta E_n}, \label{fh1a} \end{equation} computationally accessible only for $N_{st} \sim 5000$. Instead let us perform the HTE of the exponential $e^{-\beta H}$, \begin{eqnarray} \langle A\rangle &=& Z^{-1} \sum_{n=1}^{N_{st}}\sum_{k=0}^\infty \frac{(-\beta)^k}{k!}\langle n|H^kA|n\rangle, \nonumber\\ Z&=&\sum_{n=1}^{N_{st}}\sum_{k=0}^\infty \frac{(-\beta)^k}{k!}\langle n|H^k|n\rangle. \label{fh2} \end{eqnarray} The terms in the expansion $\langle n|H^k A|n\rangle$ can be calculated exactly using the Lanczos procedure with $M \geq k$ steps and with $|\phi^n_0\rangle=|n\rangle$ as a starting function, since this is a special case of the expression (\ref{fe1}). Using the relation (\ref{fe8}) with $l=0$ and $B=1$, we get \begin{equation} \langle n|H^k A|n\rangle= \sum_{i=0}^M\langle n|\psi^n_{i}\rangle\langle\psi^n_{i}| A|n\rangle (\epsilon^n_i)^k. \label{fh3} \end{equation} Although the expression (\ref{fh3}) is exact only for $k\leq M$, we can insert it into the sums (\ref{fh2}) and extend them also to $k>M$.
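The exactness of the expression (\ref{fh3}) for $k\leq M$ can be verified directly on a small example. The sketch below (all matrices, dimensions and the seed are arbitrary illustrations) stores and reorthogonalizes the Lanczos basis, which is affordable at these sizes, and compares both sides of (\ref{fh3}):

```python
import numpy as np

def lanczos_basis(H, v, M):
    """Lanczos iteration keeping the basis; returns (a, b, Phi) with
    Phi[:, i] = |phi_i>. Full reorthogonalization keeps the small
    basis orthogonal to machine precision."""
    dim = len(v)
    Phi = np.zeros((dim, M + 1))
    a = np.zeros(M + 1); b = np.zeros(M)
    phi = v / np.linalg.norm(v)
    for i in range(M + 1):
        Phi[:, i] = phi
        w = H @ phi
        a[i] = phi @ w
        w -= Phi[:, :i + 1] @ (Phi[:, :i + 1].T @ w)
        if i < M:
            b[i] = np.linalg.norm(w)
            phi = w / b[i]
    return a, b, Phi

rng = np.random.default_rng(4)
dim, M, k = 60, 12, 7                          # note k <= M
H = rng.standard_normal((dim, dim)); H = (H + H.T) / 2
H = H / np.linalg.norm(H, 2)                   # scale spectrum to O(1)
A = rng.standard_normal((dim, dim))            # arbitrary test operator
n = rng.standard_normal(dim); n /= np.linalg.norm(n)

a, b, Phi = lanczos_basis(H, n, M)
eps, V = np.linalg.eigh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))
psi = Phi @ V                                  # approximate eigenvectors
An = A @ n
lhs = sum(V[0, i] * (psi[:, i] @ An) * eps[i] ** k for i in range(M + 1))
rhs = n @ np.linalg.matrix_power(H, k) @ An    # exact <n|H^k A|n>
```

The two sides agree to machine accuracy, since $H^j|n\rangle$ for $j\leq k\leq M$ never leaves the Krylov subspace $L_M$, while the approximate eigenpairs diagonalize $H$ exactly within it.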
The final result can be expressed in analogy to the equation (\ref{fd6}) as \begin{eqnarray} \langle A \rangle &\approx& Z^{-1}\sum_{n=1}^{N_{st}}\sum_{i=0}^M e^{-\beta \epsilon^n_i}\langle n|\psi^n_i\rangle\langle\psi^n_i|A|n \rangle, \nonumber \\ Z &\approx& \sum_{n=1}^{N_{st}}\sum_{i=0}^M e^{-\beta \epsilon^n_i}\langle n|\psi^n_i\rangle\langle\psi^n_i|n \rangle, \label{fh4} \end{eqnarray} and the error of the approximation is of the order of $\beta^{M+1}$. Evidently, within a finite system the expression (\ref{fh4}), expanded as a series in $\beta$, reproduces exactly the HTE series to the order $M$. In addition, in contrast to the usual HTE, it becomes (remains) exact also for $T\to 0$. Let us assume for simplicity that the g.s. $|\Psi_0\rangle$ is nondegenerate. For initial states $|n\rangle$ not orthogonal to $|\Psi_0\rangle$, already at modest $M\sim 50$ the lowest function $|\psi^n_0\rangle$ converges to $|\Psi_0\rangle$. We thus have for $\beta \to \infty$, \begin{eqnarray} \langle A\rangle &=&\sum_{n=1}^{N_{st}} \langle n|\Psi_0\rangle\langle\Psi_0|A|n\rangle\bigg/ \sum_{n=1}^{N_{st}}\langle n|\Psi_0\rangle\langle\Psi_0|n\rangle =\nonumber \\ &=&\langle\Psi_0|A|\Psi_0\rangle/\langle\Psi_0|\Psi_0\rangle, \label{fh5} \end{eqnarray} where we have taken into account the completeness of the set $|n\rangle$. The obtained result is just the usual g.s. expectation value of the operator. \subsection{Large-frequency expansion at $T>0$} In order to calculate dynamical quantities, the HTE must be supplemented by the high-frequency (short-time) expansion analogous to the one used at $T=0$ in deriving the equation (\ref{fd6}) from (\ref{fd5}). The goal is to calculate the dynamical correlation function at $T>0$, \begin{equation} \langle B(t)A\rangle={\rm Tr}\left[e^{-\beta H}e^{iHt}Be^{-iHt}A\right]/ {\rm Tr}~e^{-\beta H}.
\label{ff1} \end{equation} Expressing the trace explicitly and expanding the exponentials, we get \begin{equation} \langle B(t)A\rangle = Z^{-1} \sum_{n=1}^{N_{st}}\sum_{k=0}^\infty\sum_{l=0}^\infty \frac{(-\beta+it)^k}{k!}\frac{(-it)^l}{l!} \langle n|H^kBH^lA|n\rangle. \label{ff2} \end{equation} The expansion coefficients in equation (\ref{ff2}) can again be obtained via the Lanczos method, as discussed in Sec.~3.1. Performing two Lanczos iterations with $M$ steps, started with the normalized $|\phi^n_0\rangle=|n\rangle$ and $|\tilde\phi^n_0\rangle \propto A|n\rangle$, respectively, we calculate the coefficients $W_{kl}$ following the equation (\ref{fe8}), while $Z$ is approximated by the static expression (\ref{fh4}). Extending and resumming the series in $k$ and $l$ into exponentials, we get \begin{equation} \langle B(t)A\rangle \approx Z^{-1} \sum_{n=1}^{N_{st}} \sum_{i=0}^M\sum_{j=0}^M e^{-\beta \epsilon^n_i} e^{it(\epsilon^n_i-\tilde \epsilon^n_j)} \langle n|\psi^n_{i}\rangle\langle \psi^n_{i}|B|\tilde\psi^n_{j}\rangle\langle\tilde\psi^n_{j}|A |n\rangle. \label{ff3} \end{equation} We again check the nontrivial $T=0$ limit of the above expression. If $|n\rangle$ are not orthogonal to the g.s. $|\Psi_0\rangle$, then for large enough $M$ the lowest-lying state converges, $\epsilon^n_0\sim E_0$ and $|\psi^n_0\rangle\sim|\Psi_0\rangle$. In this case we have, in analogy to the equation (\ref{fh5}), \begin{equation} \langle B(t)A\rangle\approx\sum_{n=1}^{N_{st}}\sum_{j=0}^M e^{it(E_0- \tilde \epsilon^n_j)} \langle\Psi_0|B|\tilde\psi^n_{j}\rangle \langle\tilde\psi^n_{j}|A|\phi^n_0\rangle\langle\phi^n_0| \Psi_0\rangle\bigg/\langle\Psi_0|\Psi_0\rangle. \label{ff4} \end{equation} Generally larger $M\gg 100$ are needed in order that the relevant higher-lying states $|\tilde\psi^n_j\rangle$ and $\tilde \epsilon^n_j$ become independent of $|n\rangle$. Only in such a limit do we strictly recover the g.s. result, which for $B = A^{\dagger}$ corresponds to the equation (\ref{fd2}).
Note, however, that similar restrictions apply to the continued-fraction expansion (\ref{fd3}), which correctly reproduces the moments $\mu_l$ (\ref{fd4}) up to $l<M$, but not necessarily the details (e.g. positions and weights of peaks) of the $C(\omega)$ spectrum. \subsection{Random sampling} The computation of static quantities (\ref{fh4}) and dynamical ones (\ref{ff3}) still involves the summation over the complete set of $N_{st}$ states $|n\rangle$, which is not feasible in practice. To obtain a useful method, one further approximation must be made, which replaces the full summation by a partial one over a much smaller set of random states (Imada and Takahashi 1986). Such an approximation, analogous to Monte Carlo methods, is of course hard to justify rigorously; nevertheless, we can estimate the errors involved. We consider the expectation value $\langle A \rangle$ at $T>0$, as defined by the expression (\ref{fh1}). Instead of the whole sum in equation (\ref{fh1}) we first evaluate only one element with respect to a random state $|r\rangle$, which is a linear combination of basis states \begin{equation} |r\rangle=\sum_{n=1}^{N_{st}}\beta_{rn}|n\rangle, \label{fr1} \end{equation} i.e. the $\beta_{rn}$ are assumed to be distributed randomly. Let us then discuss the random quantity \begin{eqnarray} \tilde A_r&=&\langle r|e^{-\beta H}A|r\rangle/ \langle r|e^{-\beta H}|r\rangle =\nonumber \\ &=& \sum_{n,m=1}^{N_{st}}\beta^*_{rn}\beta_{rm}\langle n|e^{-\beta H}A|m\rangle \biggm/ \sum_{n,m=1}^{N_{st}}\beta^*_{rn}\beta_{rm}\langle n|e^{-\beta H}|m\rangle. \label{fr2} \end{eqnarray} We choose for convenience the basis states $|n\rangle$ to be the eigenstates of $H$. We first assume in addition that $[H,A]=0$, so that $A$ and $H$ can be diagonalized simultaneously.
Then we have \begin{equation} \tilde A_r= \sum_{n=1}^{N_{st}}|\beta_{rn}|^2\langle n|e^{-\beta H}A|n\rangle \biggm/ \sum_{n=1}^{N_{st}}|\beta_{rn}|^2\langle n|e^{-\beta H}|n\rangle.\label{fr3} \end{equation} We can express $|\beta_{rn}|^2=1/N_{st}+\delta_{rn}$, where the random deviations $\delta_{rn}$ are not correlated with the matrix elements $\langle n|e^{-\beta H}|n\rangle=Z_n$ and $\langle n|e^{-\beta H}A|n\rangle=Z_n A_n$. It is then easy to see that $\tilde A_r$ is close to $\langle A\rangle$, and the statistical deviation is related to the effective number of terms $\bar Z$ in the thermodynamic sum, i.e. \begin{equation} \tilde A_r = \langle A\rangle +{\cal O}(1/\sqrt{\bar Z}),\qquad \bar Z=e^{\beta E_0}\sum_n Z_n. \label{fr4} \end{equation} Note that for $T\to \infty$ we have $\bar Z\to N_{st}$ and therefore at large $N_{st}$ a close estimate of the average (\ref{fr4}) can be obtained from a single random state (Imada and Takahashi 1988, Silver and R\"oder 1994). On the other hand, at finite $T<\infty$ the statistical error of $\tilde A_r$ increases with decreasing $\bar Z$. Still, strictly at $T=0$ and for a nondegenerate g.s. we again obtain the correct result from equation (\ref{fr3}). In the FTLM we replace the full summation in the expression (\ref{fh1}) with a restricted one over several random vectors $|r\rangle$, $r=1\ldots R$, \begin{equation} \tilde A= \sum_{r=1}^R\langle r|e^{-\beta H}A|r\rangle\biggm/ \sum_{r=1}^R\langle r|e^{-\beta H}|r\rangle. \label{fr5} \end{equation} From equations (\ref{fr3}) and (\ref{fr4}) it follows that the statistical error is even reduced, \begin{equation} \tilde A = \langle A\rangle +{\cal O}(1/\sqrt{R\bar Z}). \label{fr6} \end{equation} For a general $A$, not commuting with $H$, we have to consider also the contribution of the offdiagonal terms in the equation (\ref{fr2}).
Since the phases of the random coefficients $\beta_{rn}$ are randomly distributed, we can expect that the vectors $\beta_{rn}$ are approximately orthogonal, \begin{equation} \sum_{r=1}^R\beta_{rn}^*\beta_{rm}\sim {R\over N_{st}}(\delta_{nm} + \zeta_{nm}/\sqrt{R}), \label{fr7} \end{equation} where $|\zeta_{nm}| = {\cal O}(1)$. The relative contribution of the offdiagonal terms is then given by \begin{equation} w(R,N_{st})=\frac{1}{\sqrt{R}}\left| \sum_{n,m=1\atop n\ne m}^{N_{st}}\langle n|e^{-\beta H}A|m\rangle \zeta_{nm}\right|\biggm/ \sum_{n=1}^{N_{st}}\langle n|e^{-\beta H}A|n\rangle. \end{equation} It is not easy to estimate the ratio $w$ in general. First we note that for $\beta \to 0$ we could choose $|n\rangle$ to diagonalize $A$, so the offdiagonal terms could be avoided anyhow (for a static operator $A$). This is, however, not the case at low $T$. If we assume that the sign of $\zeta_{nm}$ is random and uncorrelated with the matrix elements $\langle n|e^{-\beta H}A|m\rangle$, we can put an upper bound $w(R,N_{st}) < 1/\sqrt{R}$. There are, however, several arguments, e.g. that operators $A$ are usually local, leading to a sparse-matrix representation, which indicate a much smaller contribution of the offdiagonal terms. To conclude, taking into account all the assumptions mentioned, the approximation $\tilde A$ (\ref{fr5}) should yield a good estimate of the thermodynamic average $\langle A\rangle$ at all $T$. For low $T$ the error is expected to be of the order of ${\cal O}(1/\sqrt{R})$, while for high $T$ the error is expected to scale even as ${\cal O}(1/\sqrt{N_{st}R})$. Since the arguments leading to these estimates rely on several assumptions which are not easy to verify, it is essential to test the method for particular cases. \subsection{Implementation and tests} We comment now on the practical implementation of the FTLM and present a few tests in order to get a quantitative estimate of the approximations mentioned above. First we consider the calculation of static quantities.
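As a warm-up, the elementary random-trace estimate (\ref{fr5}) is easily tested on a small dense example, where $e^{-\beta H}$ can still be constructed exactly (the matrix, the operator $A=H^2$ commuting with $H$, and all parameters below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N, beta, R = 300, 0.05, 100
H = rng.standard_normal((N, N)); H = (H + H.T) / 2
A = H @ H                                      # test operator with [H, A] = 0
E, U = np.linalg.eigh(H)
rho = (U * np.exp(-beta * (E - E[0]))) @ U.T   # e^{-beta(H - E0)}
rhoA = rho @ A

exact = np.trace(rhoA) / np.trace(rho)         # full thermodynamic trace

# random sampling, eq. (fr5): R random vectors replace the sum over N states
num = den = 0.0
for _ in range(R):
    r = rng.standard_normal(N)
    num += r @ rhoA @ r
    den += r @ rho @ r
approx = num / den
```

With $R\ll N$ the estimate already falls within a few percent of the exact average, in accordance with the ${\cal O}(1/\sqrt{R\bar Z})$ scaling of the statistical error.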
Joining the HTE and the random sampling, we approximate the average of the operator $A$ as \begin{eqnarray} \langle A \rangle &\approx& \frac{N_{st}}{ZR}\sum_{r=1}^{R}\sum_{j=0}^M e^{-\beta \epsilon^r_j}\langle r|\psi^r_j\rangle\langle\psi^r_j|A| r \rangle, \nonumber \\ Z &\approx& \frac{N_{st}}{R}\sum_{r=1}^{R}\sum_{j=0}^M e^{-\beta \epsilon^r_j}|\langle r|\psi^r_j\rangle|^2. \label{fi1} \end{eqnarray} The sampling is over $R$ random states $|r\rangle=|\phi^r_0\rangle$, which serve as initial functions for the $M$-step Lanczos procedure, resulting in $M+1$ approximate eigenvalues $\epsilon^r_j$ with the corresponding eigenvectors $|\psi^r_j\rangle$. For a general operator $A$ the calculation of the eigenfunctions $|\psi_j^r\rangle$ and of the corresponding matrix elements $\langle\psi^r_j|A| r \rangle$ is needed. On the other hand, the computational effort is significantly reduced if $[H,A]=0$, so that $A$ can be diagonalized simultaneously. Then \begin{equation} \langle A \rangle \approx \frac{N_{st}}{ZR}\sum_{r=1}^{R}\sum_{j=0}^M e^{-\beta \epsilon^r_j}|\langle r|\psi^r_j\rangle|^2 A^r_j. \label{fi2} \end{equation} In this case the evaluation of the eigenfunctions is not necessary, since the element $\langle r|\psi^r_j\rangle=v_{j0}^r$, equation (\ref{fl5}), is obtained directly from the eigenvectors of the tridiagonal matrix $H_M^r$. A few remarks on the implementation are in order here. Already in usual g.s. Lanczos calculations the use of symmetries of the model Hamiltonian is crucial in order to reduce computational and storage requirements. This is even more important when using the FTLM, where the computational burden is increased due to the sampling and due to the calculation of matrix elements. At $T>0$ in general all symmetry sectors must be taken into account, and these can differ significantly in the number of basis states they contain.
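Equations (\ref{fi1})-(\ref{fi2}) translate into a compact algorithm. The following self-contained Python sketch (an illustration only, without the use of symmetries; the system size, $M$, $R$ and the seed are arbitrary choices) evaluates $\langle H\rangle$ at $T>0$ for an 8-site Heisenberg ring, where only the weights $(v_{j0}^r)^2$ are needed, and compares it with the full diagonalization:

```python
import numpy as np

def heisenberg_ring(N, J=1.0):
    """S=1/2 Heisenberg ring, bit-coded basis of dimension 2**N."""
    dim = 1 << N
    H = np.zeros((dim, dim))
    for n in range(dim):
        for i in range(N):
            j = (i + 1) % N
            if ((n >> i) ^ (n >> j)) & 1:        # antiparallel bond
                H[n, n] -= 0.25 * J
                H[n ^ ((1 << i) | (1 << j)), n] += 0.5 * J
            else:
                H[n, n] += 0.25 * J
    return H

def lanczos_eig(H, v, M):
    """M-step Lanczos (with reorthogonalization, affordable here);
    returns Ritz values eps_j and the overlaps <r|psi_j> = v_{j0}."""
    dim = len(v)
    Phi = np.zeros((dim, M + 1))
    a = np.zeros(M + 1); b = np.zeros(M)
    phi = v / np.linalg.norm(v)
    for i in range(M + 1):
        Phi[:, i] = phi
        w = H @ phi
        a[i] = phi @ w
        w -= Phi[:, :i + 1] @ (Phi[:, :i + 1].T @ w)
        if i < M:
            b[i] = np.linalg.norm(w)
            phi = w / b[i]
    eps, V = np.linalg.eigh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1))
    return eps, V[0, :]

def ftlm_energy(H, beta, R=40, M=30, seed=3):
    """<H> following eq. (fi2); for A = H one has A^r_j = eps^r_j."""
    rng = np.random.default_rng(seed)
    num = Z = 0.0
    for _ in range(R):
        r = rng.standard_normal(H.shape[0])
        eps, v0 = lanczos_eig(H, r, M)
        w = np.exp(-beta * eps) * v0 ** 2
        Z += w.sum()
        num += (w * eps).sum()
    return num / Z

H = heisenberg_ring(8)
E = np.linalg.eigvalsh(H)
beta = 1.0                                       # T = J
exact = (E * np.exp(-beta * E)).sum() / np.exp(-beta * E).sum()
approx = ftlm_energy(H, beta)
```

With these modest values of $R$ and $M$ the FTLM estimate should already agree well with the exact canonical average at $T\sim J$, while the cost scales only as $\sim RMN_{st}$ instead of the full diagonalization.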
Formulas (\ref{fi1}), (\ref{fi2}) must then be generalized to allow for a varying number of samples in each sector, so that sectors containing more states are more thoroughly sampled. If $R_s$ samples are evaluated in the symmetry sector $s$ containing $N_{st}^s$ basis states, then the random sampling summation is modified as \begin{equation} \frac{N_{st}}{R}\sum_{r=1}^R\longrightarrow \sum_s \frac{N^s_{st}}{R_s}\sum_{r=1}^{R_s}. \label{fi3} \end{equation} Usually we choose $R_s\propto N_{st}^s$. The number of Lanczos steps can also be taken as sector dependent, $M_{s}\leq N^s_{st}$. This is important in sectors with small dimensions $N^s_{st}$. Calculations on finite systems can be carried out on different lattices with various boundary conditions. For planar problems it is convenient to use tilted square lattices of sizes $N=n^2+m^2$ (Oitmaa and Betts 1978) with periodic boundary conditions (p.b.c.). The translational invariance of lattice Hamiltonians is preserved on such systems, which makes crystal momenta $\vec{k}$ good quantum numbers and makes it possible to reduce the basis of states and their dimension $N^s_{st}$. Let us test the method for static quantities on the problem of the Heisenberg model on a two-leg ladder. We discuss the case $J=J'$ (exchange equal along and perpendicular to the ladder) which exhibits a spin gap. One of the most interesting quantities in this system is the uniform susceptibility $\chi_0=\langle (S_z^{tot})^2\rangle/T N$ (defined and discussed in more detail for doped AFM in Sec.~6.2) and its $T$ dependence. As a test we choose the model on an $N=2\times 8$ ladder, for which exact results obtained by full diagonalization are available (Barnes and Riera 1994). In Fig.~\ref{3.1}a we study the influence of the number of Lanczos steps $M$ on the accuracy of results. These are shown for fixed sampling $R=1028$, while $M$ is varied from 5 to 20.
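The proportional choice $R_s\propto N_{st}^s$ in the sector-resolved sampling (\ref{fi3}) amounts to a simple allocation rule; the helper below is a hypothetical sketch (function name and numbers are ours):

```python
# Hypothetical helper (ours): distribute the total number of random
# samples among symmetry sectors in proportion to their dimensions
# N_st^s, as used with Eq. (fi3), with at least one sample per sector.
def allocate_samples(sector_dims, R_total):
    N_st = sum(sector_dims)
    return [max(1, round(R_total * d / N_st)) for d in sector_dims]

# e.g. four sectors, the largest with 820 states, total R = 1028
print(allocate_samples([820, 400, 100, 10], 1028))  # -> [634, 309, 77, 8]
```

Small sectors are thus guaranteed at least one sample, while the largest sectors dominate the computational cost, as intended.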
It is rather surprising that even with the smallest $M=5$ in the largest symmetry sector we obtain very good agreement with the exact result not only at high $T>J$, but also at low $T<J$. In this case this is likely due to the gap in the energy spectrum, as expected for ladders with an even number of legs. \begin{figure} \centering \iffigure \mbox{ \subfigure[]{ \epsfig{file=fig_3.1a.ps,height=8cm,angle=-90}} \quad \subfigure[]{ \epsfig{file=fig_3.1b.ps,height=8cm,angle=-90}}} \fi \caption{ Uniform susceptibility $\chi_0$ vs. $T$ for the $2\times 8$-site Heisenberg spin ladder, as obtained using the FTLM with a) fixed sampling $R=1028$ and various numbers of Lanczos steps $M$, and b) fixed $M=30$ and different numbers of random samples $R$. Exact results are taken from Barnes and Riera (1994). } \label{3.1} \end{figure} In Fig.~\ref{3.1}b we fix $M= 30$, while the number of random samples $R$ varies. Note that using the translational symmetry and the conservation of total $S_z^{tot}=S_z$ the maximum number of states in a symmetry sector is $N_{st}^s=820$, while the total number of states is $N_{st}=2^{16}=65536$. The number of samples within each symmetry sector $R_s$ is chosen to be proportional to the number of basis states in the sector. In the largest sector $N_{st}^s=820$ this amounts to $R_s=2 - 13$ for the cases with total $R=180 - 1028$, respectively. This corresponds to sampling in the range $R/N_{st}=0.003-0.016$. We first observe that, almost regardless of the sampling, results agree closely with the exact ones at higher $T>J$, as expected from the discussion of the statistical error in equation (\ref{fr6}). At $T<J$ results start to deviate; still, $R_s \gg 1$ also improves the accuracy in this regime. Let us turn now to the calculation of dynamical quantities.
By joining equation (\ref{ff3}) with the random sampling (\ref{fr5}), we get the frequency-dependent correlation function \begin{equation} \langle B(t)A\rangle_{\omega} \approx \frac{\pi N_{st}}{ZR}\sum_{r=1}^R \sum_{i=0}^M\sum_{j=0}^M e^{-\beta \epsilon^r_i} \delta(\omega - \epsilon^r_i+\tilde \epsilon^r_j) \langle r|\psi^r_{i}\rangle\langle \psi^r_{i}|B|\tilde\psi^r_{j}\rangle\langle\tilde\psi^r_{j}|A |r\rangle. \label{fi4} \end{equation} The sampling is over $R$ random states $|r\rangle$, resulting in $M$ approximate eigenfunctions $|\psi^r_i\rangle, |\tilde \psi^r_j\rangle$, and corresponding $\epsilon_i^r, \tilde \epsilon_j^r$, respectively. At the full sampling $R=N_{st}$ and within the chosen system the number of Lanczos steps $M$ determines the number of exact frequency moments $\mu_l$, analogous to g.s. moments (\ref{fd5}). This is evident from the expansion (\ref{ff2}) at least for $\beta \to 0$, while for lower $T$ we are dealing with a double expansion, i.e. $\beta$ and $t$ series, and combined moments are exact. It follows that at least at high $T$ the frequency resolution in spectra is $\Delta \omega \sim \Delta E/M$, $\Delta E$ representing typically the energy span of the model. Since the information content in higher moments is limited, in particular due to finite-size effects, there is no point in using very large $M$, hence we restrict our calculation in most cases to $M<200$. The effects of using a reduced sampling $R\ll N_{st}$ are expected to be most pronounced at low $T$ where moreover only a minority of symmetry sectors with the lowest energies contributes. Several tests for dynamical quantities within the $t$-$J$ model (\ref{cm1}) have been already presented by Jakli\v c and Prelov\v sek (1994a, 1995c). 
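The double Lanczos procedure behind equation (\ref{fi4}) can be sketched as follows; everything here (dense toy operators, full reorthogonalization, the $\beta\to 0$ check of the zeroth frequency moment) is our own illustration rather than the production implementation:

```python
import numpy as np

# Sketch (ours) of Eq. (fi4): two Lanczos runs per random state |r>,
# started from |r> and from A|r>, give the two sets of approximate
# eigenpairs entering the double sum over delta peaks.
rng = np.random.default_rng(2)
N_st, R, M_steps = 120, 60, 20
W = rng.normal(size=(N_st, N_st)); H = (W + W.T) / 2
W = rng.normal(size=(N_st, N_st)); A = (W + W.T) / 2
B, beta = A, 0.0                  # autocorrelation <A(t)A>, high-T limit

def lanczos_ritz(H, v0, m):
    """Ritz values and Ritz vectors (in the full space) from |v0>."""
    v = v0 / np.linalg.norm(v0)
    V = [v]; a = []; b = []
    for j in range(m):
        w = H @ V[-1]
        a.append(V[-1] @ w)
        for u in V:               # full reorthogonalization (toy sizes)
            w -= (u @ w) * u
        nb = np.linalg.norm(w)
        if j == m - 1 or nb < 1e-12:
            break
        b.append(nb); V.append(w / nb)
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    eps, U = np.linalg.eigh(T)
    return eps, np.array(V).T @ U

freq, wgt, Z = [], [], 0.0
for _ in range(R):
    r = rng.normal(size=N_st); r /= np.linalg.norm(r)
    eps, psi = lanczos_ritz(H, r, M_steps)        # |psi_i>, eps_i
    teps, tpsi = lanczos_ritz(H, A @ r, M_steps)  # |~psi_j>, ~eps_j
    Z += (np.exp(-beta * eps) * (r @ psi) ** 2).sum()
    amp = np.exp(-beta * eps) * (r @ psi)         # e^{-beta eps_i}<r|psi_i>
    mel = (psi.T @ B @ tpsi) * (tpsi.T @ (A @ r)) # <psi_i|B|~psi_j><~psi_j|A|r>
    for i in range(len(eps)):
        for j in range(len(teps)):
            freq.append(eps[i] - teps[j])         # peak at w = eps_i - ~eps_j
            wgt.append(amp[i] * mel[i, j])

# zeroth frequency moment: sum of weights / Z -> <BA>, exact per sample
moment0 = sum(wgt) / Z
exact0 = np.trace(B @ A) / N_st                   # beta = 0 reference
print(moment0, exact0)
```

Note that the zeroth moment is exact for each sample, since $|r\rangle$ and $A|r\rangle$ lie in their respective Krylov spaces; only the distribution of the weight over frequencies depends on $M$.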
Here we consider in addition the spectral function of a single hole injected in the undoped AFM, $A(\vec{k},\omega)\propto{\rm Re} \langle c^\dagger_{\vec{k}s}(t)c_{\vec{k}s}(0)\rangle_\omega$, defined and discussed in more detail in Sec.~7.2. We choose the system of $4\times 4$ sites with $J/t=0.5$ and $\vec{k}^*= (\pi/2,\pi/2)$ corresponding to the g.s. wavevector. We compare $T=0$ results, being the most stringent test for the FTLM, with the g.s. ED results (Stephan and Horsch 1990, Eder {\it et al.} 1994). In Fig.~\ref{3.2} we first show the convergence of the spectral function at $T=0$ with the sampling $R_s$ (note that for $T=0$ only g.s. symmetry sectors have to be considered) for a fixed number of Lanczos steps $M=180$. Note that $M$ determines also the number of correct $T=0$ frequency moments. We observe that the position of peaks in the spectrum is mainly unaffected by the sampling. Low-$\omega$ peaks are known to be quite accurate within the $T=0$ ED method, hence also their positions within the FTLM. Their intensities are less reliable at smaller sampling and also frequency moments are expected to have larger errors. However, by increasing $R_s$ the accuracy of peak intensities is improved. \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_3.2.ps,height=10cm,angle=-90} \fi \caption{ Spectral function $A(\vec k^*,\omega)$ of a single hole in an AFM at $T=0$. The g.s. ED result and the FTLM results for different random sampling $R_s$ are shown, at fixed $M=180$. } \label{3.2} \end{figure} We investigate further the effect of reducing the number of Lanczos steps $M$. In Fig.~\ref{3.3} the spectral function at $T \to\infty$ is calculated with $R=32$ and varying $M$. We observe regular oscillations for lowest $M =30, 60$, appearing in frequency intervals $\Delta\omega \sim\Delta E/M$, where $\Delta E$ is the maximum energy span in the model. We have only a partial explanation of this phenomenon, typical for high $T$. 
While the Lanczos algorithm obtains the correct lowest and highest eigenvalues, Lanczos eigenvalues in the middle of the spectrum do not have any correspondence with the true ones (as evident already from the discrepancy in their number, $M \ll N^s_{st}$) and appear almost equidistant. At $T \to \infty$ they all contribute and yield the observed oscillations. Since the oscillations do not contain any relevant information, they can be easily smoothed out by a suitable filtering. They also become much less pronounced at lower $T$, where the predominant contribution is given by transitions from the states in the lower part of the spectrum to excited states. \begin{figure} \centering \iffigure \epsfig{file=fig_3.3.ps,height=10cm,angle=-90} \fi \caption{ $A(\vec k^*,\omega)$ at large $T \gg t$. The dependence on $M$ is shown at fixed $R=32$. } \label{3.3} \end{figure} \subsection{Finite size effects} We have introduced and justified the FTLM as a method to calculate $T>0$ properties on small systems, and argued that by choosing appropriate $M$ and sampling $R$ one can reproduce exact results to prescribed precision on a given system. In this sense the method is very effective, in its computational effort comparable (although more time and memory consuming) to g.s. ED calculations on the same system. Still, the well-known deficiency of the ED method is the smallness of available lattices. Hence it is important to understand finite-size effects and their role at $T>0$. In the following we predominantly study planar systems corresponding to the tilted square lattice with p.b.c. (Oitmaa and Betts 1978) where $N=n^2+m^2$. Mostly, we employ $N=16, 18, 20, 26$, as presented in Fig.~\ref{3.4}. \begin{figure} \centering \iffigure \epsfig{file=fig_3.4.ps,height=15cm, bbllx=230,bblly=60,bburx=410,bbury=688, angle=-90,clip=} \fi \caption{ Tilted square lattices with p.b.c. corresponding to different sizes. } \label{3.4} \end{figure} We claim that generally $T>0$ reduces finite-size effects.
This is related to the fact that at $T=0$ both static and dynamical quantities are calculated only from one wavefunction $|\Psi_0\rangle$, which can be quite dependent on the size and on the shape of the system. In particular, g.s. spectra of dynamical quantities (see Dagotto 1994), e.g. the optical conductivity (Sega and Prelov\v sek 1990) and the single-particle spectral function (Stephan and Horsch 1991), quite generally appear as a restricted number of delta functions. While lowest frequency moments, in the sense of equations (\ref{fd4}), can be quite representative of a large system, the peak-like structure and details of spectra are mostly not. $T>0$ introduces the thermodynamic averaging over a larger number of eigenstates. This reduces directly finite-size effects for static quantities, whereas for dynamical quantities spectra become denser. From the equation (\ref{fi4}) it follows that we get in spectra at elevated $T>0$ generally $RM^2$ different peaks leading to nearly continuous spectra. This is also evident from high-$T$ result in Fig.~\ref{3.3}, as compared to the $T=0$ result in Fig.~\ref{3.2}. The effect of $T>0$ can be expressed also in another way. There are several characteristic length scales in the system of correlated electrons, e.g. the AFM correlation length $\xi$, the transport mean free path $l_s$, etc. These lengths decrease with increasing $T$ and results for related quantities have a macroscopic relevance provided that the lengths become shorter than the system size, e.g. $l_s < L$ where $L$ is the linear size of the system. This happens for particular $T> T_s$, where clearly $T_s$ depends also on the quantity considered. For certain quantities one can monitor such conditions directly, e.g. for $l_s$ as discussed in more detail in Sec.~5. As a criterion for finite size effects we use the characteristic finite-size temperature $T_{fs}$. 
It is chosen so that in a given system the thermodynamic sum \begin{equation} \bar Z(T)= {\rm Tr}e^{-\beta(H-E_0)} \label{fs1} \end{equation} is appreciable, i.e. $\bar Z(T_{fs})=Z^* \gg 1$. To get the size dependence of $T_{fs}$ it is important to understand general features of many body spectra. In Fig.~\ref{3.5}a we present the levels of the Heisenberg model on a square lattice with $N=16$ sites for the $S_z=0, k=0$ sector. Note that energy span is $\Delta E\propto NJ$ while the number of states scales as $N^s_{st} \propto 2^N$. This means that the density of states far from spectral edges scales exponentially with $N$. On the other hand, near the edges spectra become sparse and the spacing between lowest levels decreases rather slowly with the size, i.e. $\Delta \epsilon \propto N^{-p}$. We expect that also $T_{fs}$ scales as $\Delta \epsilon$. Still the character and the density of low-lying states can change qualitatively from one regime to another. In Fig.~\ref{3.5}b we show for comparison the levels within the $t$-$J$ model with $N_h=2$ holes on the tilted square lattice with $N=10$ sites, again only for the $S_z=0, k=0$ sector. While both cases have similar $N^s_{st}$, it is evident that the low-energy regime shows much higher density of states in doped $t$-$J$ model, Fig.~\ref{3.5}b, indicating a large degeneracy of states at $E\gtrsim E_0$. \begin{figure} \centering \iffigure \mbox{ \subfigure[]{ \epsfig{file=fig_3.5a.ps,height=8cm}} \quad \subfigure[]{ \epsfig{file=fig_3.5b.ps,height=8cm}}} \fi \caption{ Many-body levels for: (a) the 2D Heisenberg model on $N=16$ sites, (b) the 2D $t$-$J$ model with $N_h=2$ on $N=10$ sites. In both cases only the $S_z=0, k=0$ sector is presented. } \label{3.5} \end{figure} The FTLM is best suited just for quantum many-body systems with a large degeneracy of states, i.e. large $\bar Z$ at low $T$. This is the case with doped AFM and the $t$-$J$ model in the strong correlation regime $J<t$. 
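A toy sketch (ours) of the criterion $\bar Z(T_{fs})=Z^*$: bisection on the monotonic sum (\ref{fs1}) immediately shows that a denser low-energy spectrum yields a lower $T_{fs}$:

```python
import numpy as np

# Toy illustration (ours) of the finite-size temperature: T_fs is the
# temperature at which Zbar(T) = Tr exp(-(H-E0)/T), Eq. (fs1), reaches
# a prescribed value Z*.
def zbar(levels, T):
    return np.sum(np.exp(-(levels - levels.min()) / T))

def t_fs(levels, z_star, T_lo=1e-4, T_hi=100.0):
    # Zbar grows monotonically with T, so simple bisection suffices
    for _ in range(60):
        T = 0.5 * (T_lo + T_hi)
        if zbar(levels, T) < z_star:
            T_lo = T
        else:
            T_hi = T
    return 0.5 * (T_lo + T_hi)

dense = np.linspace(0.0, 1.0, 200)    # many low-lying levels
sparse = np.linspace(0.0, 1.0, 20)    # few low-lying levels
print(t_fs(dense, 10.0), t_fs(sparse, 10.0))
```

With the same energy window, the spectrum with tenfold denser low-lying levels reaches the prescribed $Z^*$ at a roughly tenfold lower temperature.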
To be concrete we present in Fig.~\ref{3.6} the variation of $T_{fs}$ with the doping $c_h=N_h/N$, as calculated for the system of $N=18$ sites and $J/t=0.3$. For convenience we fix $T_{fs}$ with the criteria $Z^*=30, 100, 300$, respectively. It is indicative that $T_{fs}$ reaches its minimum for intermediate (optimum) doping $c_h =c^*_h \sim 0.15$, where we are able to reach $T_{fs}/J \lesssim 0.3$. Away from this optimum case $T_{fs}$ is larger. In the undoped (and underdoped) AFM this happens due to rather large finite-size gaps in magnon excitations, while in the overdoped system electrons behave closer to free electrons with a nearly unrenormalized bandwidth and hence large gaps between single-electron excited states. We claim that a small $T_{fs}$ and the related large degeneracy of low-lying states are essential features of strongly correlated systems in their most challenging regime, being a sign of a novel quantum frustration. On the other hand this gives an advantage to the FTLM, which performs best where several other methods, like QMC, fail due to the same frustration (sign) problems. \begin{figure} \centering \iffigure \epsfig{file=fig_3.6.ps,height=10cm,angle=-90} \fi \caption{ The variation of $T_{fs}$ with doping $c_h$ in the $t$-$J$ model on $N=18$ sites with $J/t=0.3$. Curves correspond to different choices of $Z^*=\bar Z(T_{fs})$. } \label{3.6} \end{figure} \subsection{Relation to other numerical methods} It is not our intention to give an exhaustive overview of other numerical (or partly analytical) methods which are used in the analysis of the problem of strongly correlated electrons, and of doped AFM in particular. We mainly list below methods which are alternatives to the FTLM, together with their limitations and advantages. Since $T>0$ calculations are often used as a proper approach to $T=0$ results, as is also the case for several quantities evaluated within the FTLM further on, we comment also on some g.s. calculations.
Clearly the closest relation is with the ED studies of small systems. Apart from the FTLM, only a few studies of $T>0$ properties of the $t$-$J$ model and of the Heisenberg model have been performed (Tohyama {\it et al.} 1993, Sokol {\it et al.} 1993, Tsunetsugu and Imada 1997), restricted to small $N$ and in particular small $N^s_{st} \lesssim 1000$ due to the full diagonalization employed in these calculations. We note that such a restricted $N_{st}$ can influence in particular dynamical spectra, which can show finite-size effects even at higher $T$. We should also note that quite an analogous method to the FTLM, based instead on the Chebyshev iteration, has been introduced recently (Silver and R\"oder 1994), but has not been exploited much so far. Much more extensive ED calculations have been performed for g.s. properties (see Dagotto 1994). While the systems considered could possibly be somewhat larger than those reachable within the FTLM, we claim that in particular dynamical spectra evaluated in the g.s. should be interpreted with care due to their sparse structure. This seems to be the case with the g.s. optical conductivity, the spin structure factor, spectral functions etc., as discussed in detail further on. Closely related to the FTLM (see Sec.~3.3) is the HTE approach, which in principle deals with a large (infinite) system and is one of the few straightforward (at least partly) analytical methods for correlated systems. It uses $\beta$ as a small parameter for the series expansion, while appropriate extrapolations (e.g. using Pad\'e approximants) are needed to obtain results for $T$ in the physically interesting regime. So far it has been used in several studies of the $t$-$J$ model, e.g.
to address the question of the phase separation (Putikka {\it et al.} 1992), magnetic correlations (Singh and Glenister 1992a), the momentum-distribution function (Singh and Glenister 1992b), the charge-spin separation (Putikka {\it et al.} 1994), the Hall effect (Shastry {\it et al.} 1993) etc. The advantage of the method is the absence of any finite-cluster bounds. On the other hand the method requires a careful and nonunique extrapolation procedure which is reliable only for static quantities. The widely used QMC technique yields in general results for $T>0$. There are various methods, which have been covered in several recent reviews (von der Linden 1992, Suzuki 1993). We mention here only approaches which are relevant for studies of planar undoped and doped AFM. The world-line QMC has been very successful in the evaluation of static properties of the Heisenberg model (see Manousakis 1991). Analogous results have been obtained via the QMC for the insulating Hubbard model at half-filling (Hirsch 1985). Away from half-filling the sign problem becomes the major difficulty for the QMC studies of fermionic models. It is particularly severe at low $T$ within the intermediate-doping regime, being thus connected with the large degeneracy of fermionic states. Still various static quantities have been evaluated within the Hubbard model as a function of doping both for $T=0$ and $T>0$ (see Dagotto 1994). The calculation of dynamical quantities within the QMC is possible via the deconvolution of the imaginary-time dynamics into a real-frequency one using the maximum entropy analysis (Jarrell {\it et al.} 1991). The latter appears to be quite delicate due to a large influence of statistical errors. Still there have been in recent years several studies of dynamic properties of spin systems (Makivi\'c and Jarrell 1992) and of the planar Hubbard model, in particular of spectral functions (Bulut {\it et al.} 1994, Preuss {\it et al.} 1995, 1996).
In spite of the much larger systems reachable within QMC studies, it is quite conceivable that due to inherent difficulties QMC results for dynamics are less reliable than those obtained within the FTLM. There are other powerful numerical methods which are only partly relevant to studies of doped AFM. Particularly successful and promising is the Density Matrix Renormalization Group (DMRG) approach, as developed by White (1992) and extensively applied to problems of correlated electrons. While designed mainly for 1D systems, it has been extended to ladder systems (White and Scalapino 1997a) as well as to planar models (White and Scalapino 1997b). So far appropriate generalizations in order to study $T>0$ and dynamical properties have only been attempted (Pang {\it et al.} 1996). \setcounter{equation}{0}\setcounter{figure}{0} \section{Thermodynamic properties} We first consider thermodynamic properties of the $t$-$J$ model (Jakli\v c and Prelov\v sek 1996). These include quantities directly derivable from the grand-canonical sum $\Omega$: the free energy density ${\cal F}$, the chemical potential $\mu$, the charge compressibility $\kappa$, the entropy density $s$, the specific heat $C_V$ etc. Some of these have already been studied using other methods. Within the HTE several thermodynamic quantities of the $t$-$J$ model have been calculated. Results indicate a ferromagnetic phase at $J \ll t$ and at low doping (Putikka {\it et al.} 1992), but also a large enhancement of the entropy in a doped AFM (Putikka, unpublished). Within the Hubbard model the projector ($T=0$) QMC method has been employed to study the phase diagram and to calculate the charge compressibility (Furukawa and Imada 1992). Several calculations of the chemical potential vs. doping within the Green's function QMC have been presented in order to establish the regime of the phase separation (Kohno 1997, Hellberg and Manousakis 1997), although with contradictory conclusions for the most interesting regime $J<t$.
In order to study continuously varying particle densities, we perform the averaging within the grand-canonical ensemble, involving all possible numbers of electrons $N_e$, \begin{equation} \langle A\rangle = \sum_{N_e}{\rm Tr}_{N_e} [e^{-\beta(H-\mu N_e)}A]/\sum_{N_e}{\rm Tr}_{N_e}e^{-\beta(H-\mu N_e)}, \label{t1} \end{equation} where $\mu$ is the chemical potential. For each $N_e$ the problem thus reduces to the evaluation of the canonical thermal average, which we achieve with the FTLM as described in Sec.~3. The implementation of the FTLM can be further simplified for operators $A$ which are conserved quantities, i.e. commute with $H$, as shown in Sec.~3.6. Examples include $H$ itself, the particle number $N_e$, the total spin $S_z$ etc. By choosing random functions $|r\rangle$ to have good quantum numbers $N_e$ and $S_z$, we can evaluate the expectation value of an arbitrary function $f(N_e,S_z,H)$ \begin{eqnarray} \langle f\rangle &\approx& \frac{N_{st}}{R\Omega }\sum_{r=1}^{R} \sum_{j=0}^{M}|\langle r|\psi_j^r \rangle|^2 f(N_e^r,S_z^r,\epsilon_j^r) e^{-\beta(\epsilon_j^r-\mu N_e^r)}, \nonumber \\ \Omega &\approx& \frac{N_{st}}{R}\sum_{r=1}^{R} \sum_{j=0}^{M}|\langle r|\psi_j^r \rangle|^2 e^{-\beta(\epsilon_j^r-\mu N_e^r)}. \label{t2} \end{eqnarray} As noted in the equation (\ref{fi2}), in this case $|\langle r|\psi_j^r\rangle|^2$ can be evaluated from the tridiagonal matrix directly. Since also $M\lesssim 100$ is enough, the reorthogonalization of Lanczos functions can be avoided. This eliminates the need to store wavefunctions $|\phi_j^r\rangle$ and systems with considerably larger $N_{st}$ can be studied. Consequently the computational effort in this case is equal to that of a g.s. Lanczos procedure, repeated $R$ times. We employ in the following typically $R\sim 200-1000$ in each $N_e$ sector. Calculations are performed on systems with $N=16$, $18$, and $20$ sites for arbitrary filling $c_h$, while for the undoped system we reach $N=26$ sites. 
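The grand-canonical average (\ref{t1}) can be sketched with exactly known toy sector spectra standing in for the Lanczos data of equation (\ref{t2}); all names and numbers below are our own:

```python
import numpy as np

# Toy grand-canonical average in the spirit of Eqs. (t1)-(t2): sectors
# are labeled by the electron number N_e, each with its own spectrum
# (random levels here stand in for Lanczos eigenvalues).
rng = np.random.default_rng(3)
sectors = {Ne: np.sort(rng.uniform(-1.0 * Ne, 4.0, size=60))
           for Ne in range(5)}

def mean_ne(sectors, beta, mu):
    """<N_e> at given inverse temperature beta and chemical potential mu."""
    num = den = 0.0
    for Ne, levels in sectors.items():
        w = np.exp(-beta * (levels - mu * Ne)).sum()
        num += Ne * w
        den += w
    return num / den

mus = np.linspace(-2.0, 2.0, 9)
filling = [mean_ne(sectors, beta=2.0, mu=m) for m in mus]
print(filling)
```

Since $d\langle N_e\rangle/d\mu = \beta\,{\rm Var}(N_e) \geq 0$, the filling grows monotonically with $\mu$, which is what makes the inversion to fixed density well defined.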
Note also that we fix $J/t = 0.3$. \subsection{Chemical potential} We first analyze the hole chemical potential $\mu_h=-\mu$ as a function of $T$ and of the hole density $c_h=1-\langle N_e\rangle/N$. Results are obtained by first calculating $c_h$ at fixed $\mu, T$ from equations (\ref{t2}) with $f=N_e$, and then inverting the dependence $c_h(\mu,T)$. In Fig.~\ref{4.1} we present curves $\mu_h(T)$ for several $c_h$. Note that for thermodynamic quantities results seem somewhat less sensitive to finite-size effects, hence we follow them in Fig.~\ref{4.1} to $T \sim 0.05~t$ (for $c_h\leq 0.1$ we would still estimate a higher $T_{fs} \sim 0.1~t$). In order to interpret $\mu_h(T)$ at low $c_h \ll 1$ it is essential to note that at $T=0$ the system contains no holes in equilibrium provided $\mu_h<\mu_h^0$. For the chosen $J/t=0.3$ it has been established that $\mu_h^0 \sim -1.99~t$ (Dagotto 1994), related to the minimum energy of a single hole added to the undoped AFM. \begin{figure} \centering \iffigure \epsfig{file=fig_4.1.ps,height=10cm,angle=-90} \fi \caption{ Hole chemical potential $\mu_h$ vs. $T$ at several dopings $c_h$. } \label{4.1} \end{figure} Analyzing Fig.~\ref{4.1} we mostly do not find a $T^2$ dependence of $\mu_h$ at low $T$, as expected for a normal LFL, except within the extremely overdoped regime $c_h\geq 0.3$. In particular, in a broad range $0.05 <c_h <0.3$ we find a very unusual, roughly linear variation, \begin{equation} \mu_h(T)= \mu_h(T=0) + \alpha T, \qquad T_{fs}<T<J, \label{t2a} \end{equation} whereby the slope $\alpha$ changes sign at $c_h = c_h^* \gtrsim 0.15$. It is remarkable that the marginal doping $c_h^*$ appears to be quite system independent, as checked quantitatively for different system sizes $N=16 - 20$. The marginal $c_h^*$ shows up again in Fig.~\ref{4.2}, displaying the variation $c_h(\Delta \mu)$ at various temperatures $T$.
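The inversion of $c_h(\mu,T)$ mentioned above can be done by bisection, since $\kappa \propto dc_h/d\mu_h>0$ makes the dependence monotonic; the smooth toy function below merely stands in for the grand-canonical FTLM data (all parameter values are ours):

```python
import numpy as np

# Sketch (ours) of inverting c_h(mu, T) -> mu_h(c_h, T) by bisection;
# the toy monotonic c_h stands in for the FTLM result of Eq. (t2).
def c_h(mu_h, T):
    return 1.0 / (1.0 + np.exp(-(mu_h + 2.0) / T))

def mu_h_of_ch(target, T, lo=-10.0, hi=10.0):
    # c_h grows monotonically with mu_h (kappa > 0), so bisection converges
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if c_h(mid, T) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu_h = mu_h_of_ch(0.15, 0.3)
print(mu_h, c_h(mu_h, 0.3))
```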
Here $\Delta \mu= \mu+\mu_h^0$ is the difference to the undoped AFM case and is displayed in eV to allow the comparison with experiments, using the usual correspondence $t=0.4~$eV. $c_h^*$ represents in this case the crossing of curves at different $T$, whereby $\Delta\mu(T)$ is essentially pinned at the value $\Delta\mu\sim -0.17~t$. This pinning is active in a wide range of $T$. Analyzing the regime $c_h<c^*_h$ in Fig.~\ref{4.2}, we note again that $c_h(T\to 0)$ remains finite only for $\mu_h<\mu_h^0$. One can evaluate from these results the compressibility of the hole fluid $\kappa \propto -dc_h/d\mu$. We find that $\kappa <\infty $ for all $T>0$, indicating the absence of the phase separation in the system at chosen $J/t$. This is in contrast with some recent QMC studies (Hellberg and Manousakis 1997) claiming the phase separation in the $t$-$J$ model at all $J/t$, as first put forward by Emery {\it et al.} (1990). Our results in Fig.~\ref{4.2} do not support such a behaviour, at least not in the range $T/t>0.05$. Still Fig.~\ref{4.2} reveals that $\kappa(T\to 0)$ is increasing and becoming larger on approaching $\Delta \mu \sim 0$. \begin{figure} \centering \iffigure \epsfig{file=fig_4.2.ps,height=10cm,angle=-90} \fi \caption{ Hole concentration $c_h$ vs. $\Delta \mu$ at various $T$. For comparison we show experimental results for LSCO at low $T$, obtained from the shift of photoemission spectra by Ino {\it et al.} (1997a). } \label{4.2} \end{figure} A proper interpretation of the regime $c_h \to 0$ at low $T$ is clearly one of the major challenges. One frequently used picture is that holes doped in an AFM could be described as degenerate fermions with small FS - hole pockets (Trugman 1990, Eder and Becker 1991). In this case one would expect in a 2D lattice $\kappa = {\cal N}(\mu) = 1/2\pi t^*$, where ${\cal N}(\epsilon)$ is the single-particle (hole) density of states (DOS) and $t^*$ an effective hopping parameter. 
Since the hole effective mass can be quite enhanced, i.e. $t^*\ll t$, $\kappa$ can become quite large. Results in Fig.~\ref{4.2} put a lower bound on the possible enhancement, i.e. $t/t^*>5$. Analyzing $T=0$ QMC results for the Hubbard model near half filling, a variation $c_h(\mu_h)$ similar to ours has been found (Furukawa and Imada 1992, Assaad and Imada 1996). Results were interpreted in terms of a singular behaviour $\kappa \propto (\mu_h-\mu_h^0)^{-1/2}$. In contrast to the hole-pocket picture, the latter form does not allow for a regime with a degenerate gas of holes, at least not with holes having a doping-independent mass. From our results it is hard to exclude either of these scenarios, whereby the regime of hole pockets should in any case be restricted to very low doping $c_h<0.1$. It is tempting to interpret the existence of the marginal concentration $c_h^*$ as a change of the character of the FS. To establish the relation, we have to rely on arguments which apply to the gas of noninteracting fermions. A simple Sommerfeld expansion yields that the electron density at fixed $\mu$ is given by \begin{equation} c_e(T)=c_e(T=0)+\frac{(\pi k_BT)^2}{6} {\cal N}'(\mu). \label{t3} \end{equation} Indirectly this gives information on the FS, since one would plausibly associate ${\cal N}'(\mu)>0$ for $c_e\lesssim 1$ with a large electron FS, and oppositely ${\cal N}'(\mu)<0$ with a hole-like FS or small hole pockets vanishing for $c_e \to 1$. At least for free electrons it is easy to establish the connection of ${\cal N}'(\mu)$ with the curvature $K^{-1}$ of the FS (Jakli\v c and Prelov\v sek 1996), analogous to the relation for the Hall resistivity (Tsuji 1958). At least in the region of the $\vec{k}$ space where the effective-mass tensor is positive-definite, ${\cal N}'(\mu)>0$ implies also that the average FS curvature $K^{-1}$ is positive.
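Equation (\ref{t3}) is easy to verify numerically for a toy density of states; for a linear ${\cal N}(\epsilon)$ the $T^2$ term is the only finite-order correction, so the agreement should be essentially exact (all values below are our own choices):

```python
import numpy as np

# Numerical check (ours) of the Sommerfeld expansion (t3) for a toy
# linear DOS, N(eps) = a + b*eps, on the band [-W, W].
a, b, Wb, mu = 1.0, 0.3, 5.0, 0.7
eps = np.linspace(-Wb, Wb, 200001)
dos = a + b * eps
dx = eps[1] - eps[0]

def density(T):
    f = 1.0 / (np.exp((eps - mu) / T) + 1.0)      # Fermi function
    y = dos * f
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))  # trapezoidal rule

c0 = a * (mu + Wb) + b * (mu**2 - Wb**2) / 2.0    # exact T=0 density
T = 0.2
sommerfeld = (np.pi * T) ** 2 / 6.0 * b           # (pi k_B T)^2/6 * N'(mu)
print(density(T) - c0, sommerfeld)
```

With $k_B=1$ the numerically computed shift $c_e(T)-c_e(0)$ matches $(\pi T)^2 {\cal N}'(\mu)/6$ up to exponentially small band-edge corrections.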
The observed nonquadratic $T$ dependence in Fig.~\ref{4.1} questions the interpretation in terms of the free-electron DOS (\ref{t3}). Still we may interpret $dc_h/dT<0$ for $c_h>c_h^*$, as deduced from Fig.~\ref{4.2}, as an indication of ${\cal N}'(\mu)>0$, i.e. a positive average curvature of the FS. This in turn implies a transition at $c_h \sim c_h^*$ from a hole-pocket picture at low doping (Trugman 1990, Eder and Becker 1991) to an electron-like large FS (Stephan and Horsch 1991, Singh and Glenister 1992b). Recently the variation of $\mu$ with the hole doping in LSCO has been deduced experimentally from the shift of photoemission spectra by Ino {\it et al.} (1997a). For comparison we plot also these results in Fig.~\ref{4.2}, noting that they apply to low $T$ in terms of our model parameters. The overall agreement is quite reasonable, taking into account the uncertainty of the PES results. Again the flatness of $\mu(c_h<c_h^*)$ is quite remarkable, indicating the possibility of a divergent $\kappa \to \infty$ for $c_h \to 0$. As noted already by the authors, the variation $\mu(c_h)$ is highly nontrivial and cannot be accounted for by simple LFL results as obtained e.g. in band-structure calculations. \subsection{Entropy} Let us consider the entropy density (per unit cell) \begin{equation} s={1\over N} \left(k_B\ln\Omega+ {\langle H\rangle-\mu\langle N_e\rangle \over T}\right), \label{t4} \end{equation} where averages and $\Omega$ are calculated using the FTLM (Jakli\v c and Prelov\v sek 1995b, 1996) with equations (\ref{t2}). Within the $t$-$J$ model $s$ has also been studied via the HTE (Putikka, unpublished), while the QMC method has recently been used to calculate the entropy within the Hubbard model (Duffy and Moreo 1997). The $T$ variation of $s$ at various $c_h$ is presented in Fig.~\ref{4.3}a. Note again that within the grand-canonical calculation $c_h$ can be followed continuously.
Results shown for $N=20$ are quite close to those for lattices with $N=16, 18$ sites provided that $T>0.1~t$. It is evident that in the undoped AFM $s(T)$ at low $T<J/2$ is consistent with the magnon contribution $s \propto T^2$. This dependence changes however already for smallest finite doping $c_h = 0.05$ to $s \propto T^{\alpha}$ with $\alpha \lesssim 1$. \begin{figure} \centering \iffigure \mbox{ \subfigure[]{ \epsfig{file=fig_4.3a.ps,height=8cm,angle=-90}} \quad \subfigure[]{ \epsfig{file=fig_4.3b.ps,height=8cm,angle=-90}}} \fi \caption{ Entropy density $s$ vs. $T$ for different dopings $c_h$ for a) $J/t=0.3$, and b) $J=0$. } \label{4.3} \end{figure} In order to understand the role of AFM correlations induced by $J>0$ on the entropy $s$, we present in Fig.~\ref{4.3}b also results obtained for $J=0$. It is clear that here the undoped case $c_h=0$ is singular, since $s(T>0)=\ln 2$. It is also plausible that for $c_h \gtrsim 0.3$ we find $s(T)$ essentially independent of $J$, i.e. the spin exchange becomes irrelevant in the overdoped regime. Comparison again confirms that the role of $J$ is crucial in the underdoped regime and at optimum doping. Alternatively we can discuss the doping dependence of $s$, as shown in Fig.~\ref{4.4} at different $T\le J$. As realized already from the discussion of the thermodynamic sum $Z(T)$ and related $T_{fs}(c_h)$ in Fig.~\ref{3.6}, the entropy displays a broad maximum at $c_h\sim 0.15$, indicating the highest density of many-body states in the optimum-doping regime. The appearance of the maximum in $s(c_h)$ is intimately related to $\mu(T)$ discussed in Sec.~4.1. Namely from general thermodynamic relations (equality of mixed derivatives) for the free energy density ${\cal F}(c_h,T)$ it follows \begin{equation} \left. {\partial s\over \partial c_h}\right|_T=\left. 
-{\partial \mu_h\over \partial T}\right|_{c_h}, \label{t5} \end{equation} taking into account that $s=-\partial{\cal F}/\partial T$ and $\mu_h=\partial{\cal F}/\partial c_h$. The relation (\ref{t5}) connects the maximum of $s(c_h)$ with the pinning of $\mu_h(T)$ seen in Figs.~\ref{4.1},~\ref{4.2} at the optimum doping $c_h \sim c_h^*$. \begin{figure} \centering \iffigure \epsfig{file=fig_4.4.ps,height=10cm,angle=-90} \fi \caption{ $s$ vs. $c_h$ at several $T$, calculated in a system with $N=20$ sites. We show for comparison also experimental results for LSCO by Loram {\it et al.} (1996) at highest $T=320~K \sim 0.07~t$. } \label{4.4} \end{figure} Besides the enhancement of $s$ with doping, another surprising fact is its magnitude at $T<J$, i.e. at small $T$ in terms of model parameters. In the optimum regime $s$ appears very large, e.g. at $T=0.1~t=J/3$ the entropy per site is $s\sim 0.39~k_B$, which is almost 40\% of $s(T=\infty)$ for the same $c_h$, although $T < J$, while the energy span of excitations extends well beyond the scale $\Delta E > t$. This should be contrasted with the situation in an undoped AFM, where $s$ becomes relatively significant only for $T>J/2$, and saturates for $T \gtrsim J$. Moreover, in the case of noninteracting fermions one gets $s\sim k_B$ only at the Fermi temperature $T^0_F \sim W$, where the bandwidth is $W=8~t$. On the other hand, by introducing the degeneracy temperature within the $t$-$J$ model as $s(T_{{deg}})=s(T=\infty)/2$, we get for $c_h\sim c_h^*$ only $T_{{deg}}\sim 0.17~t$, being small in comparison with any reasonable effective QP bandwidth. It is indicative that an entropy of such magnitude has been deduced from the electronic specific heat measurements in oxygen deficient YBa$_2$Cu$_3$O$_{7-\delta}$ (YBCO) materials (Loram {\it et al.} 1993).
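The thermodynamic identity (\ref{t5}) holds for any smooth free energy; a minimal finite-difference sketch, with an arbitrary model ${\cal F}(c_h,T)$ assumed purely for illustration, makes the equality of mixed derivatives explicit:

```python
import numpy as np

def F(c, T):
    """Assumed model free-energy density: smooth, but otherwise arbitrary."""
    return -0.5 * T**2 * np.log(1.0 + c) - c * (1.0 - c)

h = 1e-4
s  = lambda c, T: -(F(c, T + h) - F(c, T - h)) / (2 * h)   # s = -dF/dT
mu = lambda c, T:  (F(c + h, T) - F(c - h, T)) / (2 * h)   # mu_h = dF/dc

c0, T0 = 0.15, 0.3
ds_dc  = (s(c0 + h, T0) - s(c0 - h, T0)) / (2 * h)    # ds/dc at fixed T
dmu_dT = (mu(c0, T0 + h) - mu(c0, T0 - h)) / (2 * h)  # dmu_h/dT at fixed c_h
# eq. (t5): the two numbers agree up to the opposite sign
```

For this model $s=T\ln(1+c_h)$, so both mixed derivatives equal $T/(1+c_h)$ with opposite signs, which the finite differences reproduce to high accuracy.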
E.g., for the optimally doped material with $\delta=0.03$ at $T=300~{K}$ the experimental result is $\Delta s = 0.35~k_B$ per planar copper site ($\Delta s = 0.7~k_B$ per formula unit), relative to the undoped $\delta=1$ sample. We find the corresponding value $\Delta s=s(c_h=0.15)-s(c_h=0)\sim 0.30~k_B$ at $T=0.1~t \sim 450~{K}$. Recently, $s$ has been measured also for LSCO in a large doping range $0<x<0.4$ (Loram {\it et al.} 1996). We plot results at fixed $T=320 K \sim 0.07~t$ as a function of doping in Fig.~\ref{4.4}, for comparison with our model results. The qualitative and quantitative agreement is quite promising, also in view of possible uncertainties in the experimental determination of $s$. When comparing results we note that our curves start at higher $T$, and so the main difference is in the location of the entropy maximum, which in LSCO appears at somewhat higher $c_h\sim 0.22$. \subsection{Specific heat} The same results can be discussed in terms of the specific heat (per unit cell) \begin{equation} C_V= T\left(\frac{\partial s}{\partial T}\right)_{c_h}, \label{t6} \end{equation} which can equally well be expressed directly in terms of expectation values at given $T$, analogous to equation (\ref{t4}), so that a differentiation with respect to $T$ is not needed. Let us first show as a test $C_V$ for the undoped AFM (Heisenberg model) for several system sizes. In Fig.~\ref{4.5} results are shown for systems ranging in size from 16 to 26 sites. Except at the lowest $T\lesssim J/3$, results do not vary appreciably with the system size, particularly regarding the position and the height of the maximum. Our results seem to be even superior to those obtained by the QMC method (Gomez-Santos {\it et al.} 1989). The calculated $C_V$ is strongly $T$ dependent in the whole $T$ range, with a maximum at $T\sim 2J/3$ and, as expected, $C_V \propto T^2$ at low $T$, consistent with the magnon excitations dominating this regime (Manousakis 1991).
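Equation (\ref{t6}) can be checked on a case where everything is known in closed form; the two-level (Schottky) system below is an assumed toy example, not the Heisenberg model itself:

```python
import numpy as np

def s_two_level(T, gap=1.0):
    """Entropy of a two-level system (energies 0 and gap), k_B = 1."""
    z = 1.0 + np.exp(-gap / T)
    e_avg = gap * np.exp(-gap / T) / z
    return np.log(z) + e_avg / T         # s = ln Z + <E>/T

def c_v(T, h=1e-5):
    """C_V = T (ds/dT), eq. (t6), via a central difference."""
    return T * (s_two_level(T + h) - s_two_level(T - h)) / (2 * h)

# closed-form Schottky specific heat for comparison at one temperature
T = 0.4
x = 1.0 / T
c_exact = x**2 * np.exp(-x) / (1.0 + np.exp(-x))**2
```

The numerically differentiated $C_V$ reproduces the Schottky peak, including its decay both at $T\ll$ gap and $T\gg$ gap; the same differentiation of FTLM entropies is what (\ref{t6}) prescribes.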
\begin{figure} \centering \iffigure \epsfig{file=fig_4.5.ps,height=10cm,angle=-90} \fi \caption{ Specific heat $C_V$ vs. $T$ for the undoped AFM for several system sizes. } \label{4.5} \end{figure} In Fig.~\ref{4.6} we present $C_V(T)$ at different $c_h$. As the AFM is doped, $C_V(T)$ still exhibits a maximum, which is however strongly suppressed and gradually moves to lower $T$ with increasing $c_h$. The peak can be attributed to the thermal activation of spin degrees of freedom. The latter are still characterized by the exchange scale $J$ which persists in the doped system, as observed also in dynamical spin correlations (Jakli\v c and Prelov\v sek 1995a) discussed in Sec.~6. The exchange energy scale however disappears in the overdoped regime $c_h\ge 0.3$. Results indicate a possible LFL behaviour with $C_V \sim \gamma T$ only for $T<0.1~t$. It is characteristic (and consistent with the vanishing role of $J$) that in the optimally doped regime $c_h\sim 0.2$ we find $C_V(T)\sim const$ for $0.15<T/t<1$, being far from a FL behaviour. \begin{figure} \centering \iffigure \epsfig{file=fig_4.6.ps,height=10cm,angle=-90} \fi \caption{ $C_V$ vs. $T$ for different hole dopings $c_h$. } \label{4.6} \end{figure} Results in Fig.~\ref{4.6} confirm the recent conjecture (Vollhardt 1997) that in correlated systems the specific heat $C_V(T,X)$ can show universal crossings as a function of a thermodynamic variable $X$. In our case we consider $X=c_h$, and realize that the $C_V(T)$ curves for different $c_h$ cross at two values of $T$, whereby the lower crossing at $T \sim 0.13~t$ seems to be nearly independent of $c_h$. \setcounter{equation}{0}\setcounter{figure}{0} \section{Electrical Properties} The anomalous normal-state character of electrical transport properties was recognized soon after the discovery of high-$T_c$ cuprates and has remained a challenge for theoreticians ever since.
Among these properties the prominent example is the nearly linear in-plane resistivity $\rho \propto T$ in the normal state (for a review see Iye 1992, Batlogg {\it et al.} 1994). It is however an experimental fact that such a behaviour is restricted to the optimum doping regime, while deviations from linearity appear both in the underdoped and overdoped regimes, being still universal for a number of materials with a similar doping (Takagi {\it et al.} 1992). The d.c. resistivity is intimately related to the optical conductivity $\sigma(\omega)$, which has also been extensively studied (for a review see Tanner and Timusk 1992) and shows in the normal state the unusual non-Drude behaviour (Schlesinger {\it et al.} 1990, Romero {\it et al.} 1992, Cooper {\it et al.} 1993, El Azrak {\it et al.} 1994, Puchkov {\it et al.} 1996, Startseva {\it et al.} 1997). Another challenging set of experimental findings concerns the d.c. resistivity $\rho_c(T)$ perpendicular to the CuO$_2$ planes and the corresponding $\sigma_c(\omega)$ (see Uchida 1997), which we will not consider here. The central question is whether these anomalous static and dynamical transport properties can be accounted for by strong correlations alone, or whether other mechanisms such as the electron-phonon coupling have to be invoked (Zeyher 1991). In spite of considerable efforts, so far there are very few microscopic theories of electron transport dealing with planar or higher-dimensional strongly correlated systems. Brinkman and Rice (1970) solved the problem of a single mobile hole in the extreme case $J=0$ within the retraceable-path approximation (RPA) and evaluated the d.c. mobility $\mu_0(T)$. Analogous results have been obtained via the HTE by Ohata and Kubo (1970). Within the RPA also $\sigma(\omega)$ has been evaluated (Rice and Zhang 1989).
It is not easy to find the range of validity and relevance of these results, in particular for $J>0$ where one might expect a crossover to a different relaxation mechanism for $T<J$. Nevertheless it is clear that the analysis, treating holes as independent, is more appropriate for the weak-doping regime $c_h\ll 1$, at least as far as the behaviour at low $\omega<t$ and $T<t$ is concerned. It appears much more difficult to approach analytically the electron transport at $c_h>0$. An attractive proposal remains that of spinons and holons as basic low-energy excitations (Anderson and Zou 1988), as well as related gauge theories (Nagaosa and Lee 1990) and slave-boson approaches, which have been applied also to the calculation of the optical conductivity (Bang and Kotliar 1993). Very fruitful have been recent studies of infinite-dimensional models, in particular for the Hubbard model (for a review see Pruschke {\it et al.} 1995, Georges {\it et al.} 1996), which allow also for the evaluation of $\sigma(\omega)$ and $\rho(T)$. Since these numerical results are also hard to interpret, it is still under debate to what extent they contain features relevant to lower dimensions, e.g. to the planar systems discussed here. Several conclusions on the charge dynamics have been reached using the ED method, dealing with the g.s. behaviour. For a single hole in the AFM the planar optical conductivity $\sigma(\omega)$ (Sega and Prelov\v sek 1990, Poilblanc {\it et al.} 1993) and the charge stiffness (Zotos {\it et al.} 1990) have been interpreted in terms of partially coherent hole motion with a substantially enhanced effective mass (at $J/t < 1$) and with a mid-infrared peak at $\omega \sim 2J$. Analogous results were presented for larger doping (Dagotto 1994), however with considerable finite-size effects, so that their interpretation does not appear well settled.
We discuss in the following results for the charge transport in the $t$-$J$ model as obtained with the FTLM, in part already presented elsewhere (Jakli\v c and Prelov\v sek 1994b, 1995c). \subsection{Current response} Let us consider the real optical conductivity $\hbox{\boldmath$\sigma$}(\omega)$, which is in general a tensor. We are dealing here with a system without any external magnetic field. On a square lattice the tensor is then diagonal, so that $\sigma_{\alpha\alpha}(\omega)=\sigma(\omega)$. Within the linear-response theory (see e.g. Mahan 1990) the regular part of $\sigma(\omega>0)$ is given by \begin{equation} \sigma_{reg}(\omega)=e_0^2{1-{\rm e}^{-\beta\omega}\over \omega} C(\omega), \qquad C(\omega)={1\over N}{\rm Re}\int_0^\infty dt {\rm e}^{i\omega t} \langle j_\alpha(t) j_\alpha (0)\rangle, \label{ec1} \end{equation} where $\vec j$ is the (total) particle current operator. In a finite system one can write $C(\omega)$ in terms of exact eigenstates $|\Psi_n\rangle$ with corresponding energies $E_n$, \begin{equation} C(\omega)= {1\over NZ} \sum_{n\ne m}{\rm e}^{-\beta E_n} |\langle \Psi_m|j_{\alpha}|\Psi_n \rangle|^2 \delta(\omega - E_m+ E_n). \label{ec2} \end{equation} It is however well known that in general one has to take into account also the singular contribution to the charge dynamical response, i.e. \begin{equation} \sigma(\omega)= 2\pi e_0^2 D_c \delta(\omega) +\sigma_{reg}(\omega), \label{ec3} \end{equation} where $D_c$ represents the charge stiffness. We study in the following $\sigma(\omega)$ in more general tight-binding models, e.g. including also the n.n.n. hopping. Since the analysis of this case, together with the derivation of a proper $D_c$ and the optical sum rule, is not common in the literature, we present it briefly below. We follow the approach of Kohn (1964), introducing a (fictitious) flux $\phi$ through a torus representing the square lattice with p.b.c.
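The eigenstate representation (\ref{ec2}) is straightforward to sketch for a small matrix problem; the random Hermitian $H$ and $j$ below are assumptions standing in for the actual many-body operators, and the total spectral weight is checked against the thermally averaged $\langle j^2\rangle$ minus the diagonal contributions:

```python
import numpy as np

def current_spectral_weights(H, j, beta, N=1):
    """Delta-peak positions and weights of C(w) in the eigenbasis,
    following the structure of eq. (ec2)."""
    E, V = np.linalg.eigh(H)
    jm = V.conj().T @ j @ V              # matrix elements <m|j|n>
    p = np.exp(-beta * E)
    Z = p.sum()
    idx = [(m, n) for n in range(len(E)) for m in range(len(E)) if m != n]
    w = np.array([E[m] - E[n] for m, n in idx])
    wt = np.array([p[n] * abs(jm[m, n])**2 / (N * Z) for m, n in idx])
    return w, wt

# assumed toy operators: random Hermitian H and current matrix j
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); H = (A + A.T) / 2
B = rng.normal(size=(4, 4)); j = (B + B.T) / 2
w, wt = current_spectral_weights(H, j, beta=1.0)

# independent check: sum_{m!=n} |<m|j|n>|^2 = <n|j^2|n> - <n|j|n>^2
E, V = np.linalg.eigh(H)
jm = V.conj().T @ j @ V
p = np.exp(-E)
total = (p * (np.diag(jm @ jm).real - np.diag(jm).real**2)).sum() / p.sum()
```

The weights are nonnegative by construction and the frequencies come in $\pm$ pairs, the finite-$T$ counterpart of absorption and emission between the same pair of levels.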
Such a flux induces a vector potential $\vec A$, equal on all lattice sites. In lattice models with a discrete basis for electron wavefunctions $\vec A$ can be introduced with a gauge transformation (Peierls construction) $ c^\dagger_{js}\to c^\dagger_{js} {\rm exp}(-ie_0\vec{A}\cdot\vec{R}_j)$, which effectively modifies the hopping matrix elements. Taking $\vec A$ to be small we can express the modified tight-binding Hamiltonian allowing also for more general hopping elements $t_{ij}$ \begin{eqnarray} H(\vec{A})&=&-\sum_{i,j,s} t_{ij}e^{-ie_0\vec{A}\cdot\vec{R}_{ij}}c^\dagger_{js}c_{is}+H_{int} \approx \nonumber \\ &\approx&H(0)+e_0\vec{A}\cdot\vec{j}+{e_0^2\over 2} \vec{A}\cdot \hbox{\boldmath$\tau$} \vec{A}, \label{ec5} \end{eqnarray} where $\vec{R}_{ij}=\vec{R}_j-\vec{R}_i$, $\hbox{\boldmath$\tau$}$ is the kinetic stress tensor, and \begin{eqnarray} \vec{j}&=&i\sum_{i,j,s} t_{ij}\vec{R}_{ij} c^\dagger_{js}c_{is}, \nonumber \\ \hbox{\boldmath$\tau$}&=&\sum_{i,j,s} t_{ij}\vec{R}_{ij}\otimes\vec{R}_{ij} c^\dagger_{js}c_{is}. \label{ec6} \end{eqnarray} Note that in usual n.n. tight-binding models $\hbox{\boldmath$\tau$}$ is directly related to the kinetic energy operator, $\tau_{\alpha\alpha} = -(H_{kin})_{\alpha\alpha}$. The electrical current $\vec{j}_e$ follows from equation (\ref{ec5}) as a sum of the particle-current and the diamagnetic contribution, \begin{equation} \vec{j}_e=-\partial H/\partial\vec{A}=-e_0\vec{j}-e_0^2\hbox{\boldmath$\tau$}\vec{A}. \label{ec7} \end{equation} The above analysis applies also to an oscillating $\vec A(t) = \vec{A}(\omega){\rm exp}(-i\omega^+ t)$. This induces an electric field in the system $\vec E(t)= - \partial\vec{A}(t)/\partial t$. We are interested in the response of $\langle \vec j_e\rangle(\omega)$.
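The definitions (\ref{ec6}) can be illustrated on the simplest case, a single-particle n.n. chain (an assumed toy model, not the $t$-$J$ model): there $\tau_{xx}=-H_{kin}$, the $d=1$ version of the n.n. relation, and the eigenvalues of $\vec j$ reproduce the band velocities $v_k=2t\sin k$ of $\varepsilon_k=-2t\cos k$:

```python
import numpy as np

def chain_operators(N, t=1.0):
    """Single-particle matrices of H_kin, j and tau_xx, built from the
    bond sums of eq. (ec6), for an N-site ring with n.n. hopping (a_0 = 1)."""
    H = np.zeros((N, N), complex)
    j = np.zeros((N, N), complex)
    tau = np.zeros((N, N), complex)
    for i in range(N):
        f = (i + 1) % N                 # bond i -> f, displacement R = +1
        H[f, i] = H[i, f] = -t          # H_kin = -sum t c+ c
        j[f, i] += 1j * t * (+1)        # j = i sum t R c+ c
        j[i, f] += 1j * t * (-1)        # reverse bond, R = -1
        tau[f, i] = tau[i, f] = t       # tau = sum t R^2 c+ c
    return H, j, tau

H, j, tau = chain_operators(8)
# tau = -H_kin holds as an operator identity in d = 1,
# and eigenvalues of j are the band velocities 2 t sin k
```

The same bond bookkeeping generalizes directly to n.n.n. hopping, where $\tau_{xx}$ picks up $R^2=4$ weights and is no longer proportional to $H_{kin}$, which is why the generalized sum rule below is needed.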
Evaluating $\langle\vec{j}\rangle$ within the standard linear response (Mahan 1990), and with $\vec A(\omega)= \vec E(\omega)/i\omega^+$, we arrive at the complex optical conductivity \begin{eqnarray} \tilde {\hbox{\boldmath$\sigma$}}(\omega)&=&\frac{i e_0^2}{\omega^+ N} (\langle \hbox{\boldmath$\tau$}\rangle - \hbox{\boldmath$\chi$}(\omega)),\nonumber \\ \hbox{\boldmath$\chi$}(\omega)&=& i\int_0^{\infty} dt e^{i\omega^+t}\langle[\vec j(t), \vec j(0)]\rangle. \label{ec8} \end{eqnarray} Complex $\tilde {\hbox{\boldmath$\sigma$}}(\omega)= {\hbox{\boldmath$\sigma$}}(\omega)+i\tilde {\hbox{\boldmath$\sigma$}}^{\prime\prime}(\omega)$ satisfies the Kramers-Kronig relation. Since $\hbox{\boldmath$\chi$}(\omega \to \infty) \to 0$, we get from the equation (\ref{ec8}) a condition for $\tilde {\hbox{\boldmath$\sigma$}}^{\prime\prime} (\omega\to \infty)$, \begin{equation} \int_{-\infty}^\infty \hbox{\boldmath$\sigma$}(\omega)d\omega= \frac{\pi e_0^2}{N}\langle \hbox{\boldmath$\tau$} \rangle, \label{ec9} \end{equation} which corresponds to the optical sum rule. It reduces to the well known one for continuum electronic systems, as well as for n.n. hopping models where $\langle \tau_{\alpha\alpha} \rangle= -\langle H_{kin}\rangle/d$ (Maldague 1977). We can now make contact with the definition (\ref{ec3}). From the expression (\ref{ec8}) it follows \begin{eqnarray} \sigma_{reg}(\omega)&=&\frac{e_0^2}{N\omega}\chi''_{\alpha\alpha}(\omega), \nonumber \\ D_c =\frac{1}{2e_0^2}\lim_{\omega\to 0}\omega \tilde \sigma_{\alpha\alpha}^{\prime\prime}(\omega)&=& \frac{1}{2N}[\langle \tau_{\alpha\alpha} \rangle-\chi_{\alpha\alpha}'(0)]. \label{ec10} \end{eqnarray} \subsection{Charge stiffness} Nonzero charge stiffness $D_c^0=D_c(T=0)>0$ is a characteristic signature of a metallic state (Kohn 1964, Scalapino {\it et al.} 1993), in contrast to an insulator with $D_c^0=0$. 
The evaluation of $D_c^0$ has recently been applied to a number of correlated fermionic systems, both analytically for 1D systems (Shastry and Sutherland 1990) and numerically for planar Hubbard and $t$-$J$ models (see Dagotto 1994). At $T>0$ one would expect for normal resistors $D_c=0$. Since we are working with small systems with p.b.c., where the ballistic response of carriers can persist at $T>0$, we find in general $D_c(T)\ne 0$. Note however that recently a nontrivial possibility of nonergodic behaviour in the macroscopic limit has been realized, i.e. with $D_c(T>0)>0$, being related to the integrability of the fermionic model (Castella {\it et al.} 1995, Zotos and Prelov\v sek 1996). The $t$-$J$ model at half-filling has $D_c(T)=0$ at any $T$, since charge fluctuations are projected out by construction of the model (\ref{cm1}). For a doped system one expects $D_c^0 \propto c_h$ at low $c_h$. Studying the charge transport at $T>0$ within the planar $t$-$J$ model we adopt the view that there exist scattering processes which in a macroscopically large system cause $D_c(T>0)=0$, i.e. the model is ergodic. However, in our numerical calculations we are dealing with small systems, and a small portion of the total current can propagate through the system unscattered, thereby establishing a persistent current in the system. Hence we characterize the observed $D_c(T>0) \ne 0$ as a finite-size artifact. Nevertheless, the variation of $D_c(T)$ brings valuable information. At $T>0$ the scattering processes result in a finite mean-free path of charge carriers $l_s(T)$. When the linear system size $L$ exceeds $l_s$, it is reasonable to expect that $D_c(T)\sim 0$. On the other hand for $L< l_s(T)$ we obtain $D_c(T) \sim D_c^0$. By following $D_c(T)$ we can thus independently monitor the transport mean free path. Since $l_s$ is strongly $T$-dependent, we can approximately locate the crossover temperature as $L \sim l_s(T_{fs})$.
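For orientation, the Kohn (1964) construction of the stiffness via the flux dependence of the ground-state energy can be sketched for free fermions on a ring; this assumed noninteracting example only illustrates the $D_c=(1/2N)\,\partial^2 E_0/\partial A^2$ form with $A$ the phase per bond and $a_0=1$, not the $t$-$J$ calculation itself:

```python
import numpy as np

def ground_energy(N, A, t=1.0, n_el=1):
    """Lowest total energy of n_el spinless free fermions on an N-site
    ring threaded by a vector potential A (phase per bond)."""
    k = 2.0 * np.pi * np.arange(N) / N
    eps = -2.0 * t * np.cos(k + A)
    return np.sort(eps)[:n_el].sum()

def kohn_stiffness(N, t=1.0, n_el=1, h=1e-4):
    """D_c = (1/2N) d^2 E_0/dA^2 at A = 0, by second-order finite difference."""
    e = [ground_energy(N, a, t, n_el) for a in (-h, 0.0, h)]
    return (e[0] - 2.0 * e[1] + e[2]) / (2.0 * N * h * h)

Dc1 = kohn_stiffness(8, n_el=1)   # single particle: E_0(A) = -2t cos A
Dc3 = kohn_stiffness(8, n_el=3)   # k = 0, +-pi/4 occupied
```

For one particle $E_0(A)=-2t\cos A$ gives $D_c=t/N$; for three fermions the occupied levels sum to $-2t(1+\sqrt2)\cos A$, so $D_c=(1+\sqrt2)t/N$, and the finite differences recover both values.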
We can express $D_c$ in the $t$-$J$ model on a square lattice from the equation (\ref{ec10}) as \begin{equation} D_c= -{1\over 4N} \langle H_{kin}\rangle - {1\over \pi e_0^2} \int_{0^+}^\infty \sigma(\omega) d \omega. \label{es1} \end{equation} Within the FTLM $\langle H_{kin}\rangle$ and $\sigma(\omega)$ are calculated separately. It should however be mentioned that due to finite $M$ there could be some ambiguity in the cutoff employed at low $\omega$, a problem which is restricted to the otherwise unproblematic regime $T\gg t$. Results for a single hole in the $t$-($J=0$) model, as obtained by Jakli\v c and Prelov\v sek (1995c), show that $D_c(T)$ interpolates quite smoothly between $D_c=0$ at $T=\infty$ and $D_c^0$, the crossover becoming sharper in larger systems. For the AFM case results for $D_c(T)$ are less regular, as presented in Fig.~\ref{5.1} for various $N_h\geq 1$ on a system with $N=16$ sites. When judging the extent of deviations of $D_c$ from zero at $T>0$, it is useful to compare values with the maximum possible ones, i.e. with the value of the sum rule at $T=0$, $D_{max}=|\langle H_{kin}\rangle(T=0)|/4N$, as follows from the equation (\ref{es1}). We notice from Fig.~\ref{5.1} that $D_c$ typically shows a rather abrupt transition from $D_c\sim 0$ to $D_c \neq 0$ at the crossover temperature $T_{fs}$, depending mainly on $c_h$. For $T<T_{fs}$ the variation $D_c(T)$ can become quite unphysical, i.e. we get in some cases even $D_c<0$. These phenomena are influenced by the particular p.b.c., and more sensible results can be obtained by the introduction of twisted boundary conditions or fluxes (Poilblanc 1991). Nevertheless we are here interested only in the regime $T>T_{fs}$. It follows again that $T_{fs}$ is minimal, i.e. $T_{fs} \sim 0.1~t$, for the intermediate doping $0.1<c_h<0.3$.
We should stress the striking message that at intermediate doping, even at such low $T$, representing for cuprates $T\sim 450K$, the mean free path does not exceed $l_s \sim 4$ sites, and this entirely due to correlation effects. \begin{figure} \centering \iffigure \epsfig{file=fig_5.1.ps,height=10cm,angle=-90} \fi \caption{ Normalized charge stiffness $D_c/D_{max}$ for various numbers of holes $N_h$ in a system with $N=16$ sites. } \label{5.1} \end{figure} \subsection{Single-hole mobility} It is expected that at low doping the conductivity scales linearly with doping, hence it is meaningful to introduce the dynamical mobility $\mu(\omega)$, which is a single-hole property \begin{equation} \sigma(\omega) = e_0 c_h \mu(\omega), \label{em1} \end{equation} but still highly nontrivial for a Mott or an AFM insulator. The most conclusive theoretical results (in 2D or higher-D systems) have so far been obtained for the problem of a single mobile hole introduced into a reference insulator. Brinkman and Rice (1970) solved the problem for $J=0$ within the RPA. They pointed out the essentially incoherent hole motion and evaluated the d.c. mobility $\mu_0(T)$, exhibiting $\mu_0 \propto 1/T$ for $T>t$. $\mu(\omega)$ within the RPA (Rice and Zhang 1989) shows an incoherent motion, resulting in a slow non-Drude fall-off $\mu(\omega) \propto 1/\omega$ for larger $\omega$. The RPA has recently been justified and applied more rigorously for infinite-D lattices (Metzner {\it et al.} 1992). An analogous approach is the evaluation of frequency moments of $\mu(\omega)$, starting at $T\gg t$, as applied to the $J=0$ problem by Ohata and Kubo (1970). On the other hand, $\mu(\omega)$ at $T=0$ has in recent years been well established by numerical studies of small systems via the ED method (Sega and Prelov\v sek 1990, Dagotto 1994). Nevertheless there are important unsolved questions even for the single-hole problem.
Is $\mu(\omega)$ on a planar lattice qualitatively and quantitatively well described within the RPA, at least for $T>t$? What are the new qualitative dynamical features at $T<t$, both for the $J=0$ and the $J>0$ case? Results for $\mu(\omega)$ at $J=0$, obtained via the FTLM by Jakli\v c and Prelov\v sek (1995c), show an overall agreement with the RPA. However, in contrast to the smooth RPA curve the actual $\mu(\omega)$, evaluated at $T\gg t$, seems to be nonanalytic, i.e. it shows a cusp at $\omega=0$. The phenomenon seems to be characteristic for $J=0$, but not for $J>0$. $\mu(\omega)$ for $J=0$ retains its high-$T$ form for all $T > t$, but changes qualitatively for $T<t$. Here the central peak due to the formation of the FM polaron (Nagaoka 1966) starts to emerge at low $\omega$, and the incoherent broad background steadily vanishes on approaching $T \rightarrow 0$. We are more interested in the AFM case, as shown in Fig.~\ref{5.2}. For $T>t$ the spin background is disordered and the mobility retains the high-$T$ form $\mu(\omega) \propto \beta$, leading to $\mu_0 \propto 1/T$. As seen from Fig.~\ref{5.2}, there is a qualitative change already in the regime $J<T<t$. Namely, it appears that here $\mu(\omega)$ has a weaker $T$-dependence. For $T\rightarrow 0$ we are however approaching the nontrivial response of an AFM polaron, which has been analyzed numerically by several authors. At $T=0$ one expects in $\mu(\omega)$ for $\omega \to 0$ a coherent response of an AFM polaron with an enhanced mass, but also a nonvanishing incoherent part at $\omega>J$ (Sega and Prelov\v sek 1990, Dagotto 1994). The latter component seems to have a nontrivial internal structure related to the mid-infrared peak, as realized also in Fig.~\ref{5.2} for the lowest $T=0.1~t$. \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_5.2.ps,height=10cm,angle=-90} \fi \caption{ Dynamical mobility $\mu(\omega)$ for a single hole in an AFM at various $T$.
} \label{5.2} \end{figure} From $\mu(\omega)$ one can extract the d.c. mobility $\mu_0$. Results for $J=0.3~t$ and $J=0$ are presented in Fig.~\ref{5.3}, with the RPA ($J=0$) result for comparison. While for $T>2t$ we see the high-$T$ result $\mu_0 \propto 1/T$ for both $J$, we get for the AFM case a nontrivial $\mu_0 \sim const$ within the regime $J<T<t$. It is plausible that $J>0$ reduces $\mu_0$, as follows also from the frequency-moment analysis at $T \gg t$ (Jakli\v c and Prelov\v sek 1995c), indicating that in an AFM the hole motion is more frustrated, leading to stronger scattering. As one can realize from Fig.~\ref{5.2}, it is hard to get meaningful results for $T< J \sim T_{fs}$, since in this regime the central QP peak is essentially undamped. This is an indication that we are already dealing with finite-size effects, as discussed in Sec.~5.2, and the mean free path is beyond our system size, $l_s>L$. Naively one would expect from Fig.~\ref{5.3} that in the regime $T<J$ the mobility $\mu_0$ would increase with decreasing $T$, with the QP (AFM polaron) scattering on thermally excited magnons. There are however unsolved problems with such a description. In an AFM the QP dispersion (Kane {\it et al.} 1989, Martinez and Horsch 1991) is strongly renormalized and is effectively narrower than the magnon one, hence it seems hard to find appropriate allowed scattering processes. These questions become important when discussing the relation of the calculated $\mu_0$ to the resistivity of cuprates at low doping, as discussed in Sec.~5.5. \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_5.3.ps,height=10cm,angle=-90} \fi \caption{ D.c. mobility $\mu_0$ vs. $T$ for $J=0$ and $J/t=0.3$. The RPA result is shown for comparison. } \label{5.3} \end{figure} \subsection{Optical conductivity at finite doping} Even more challenging questions appear at finite hole doping.
Whereas at low doping one could treat the transport within a semiconductor-like model of independent QPs (AFM polarons), this concept fails even for a moderate $c_h\gtrsim 0.1$ due to the overlap between extended spin deformations around holes. Alternative indications for such a phenomenon have already been discussed in Sec.~4 in connection with $c_h(\mu_h)$. Since in a planar system we are evaluating the sheet conductivity $\sigma(\omega)$ it is convenient to discuss at finite doping the dimensionless quantity $\bar \sigma= \sigma \rho_0$, where $\rho_0=\hbar / e_0^2 = 4.1 k\Omega$ is the universal 2D sheet resistance. Such a quantity has an additional meaning since it makes direct contact with the theory of localization, where $\bar \sigma_0 \sim \bar \sigma_{min}$ is a characteristic marginal value associated with the 2D minimum metallic conductivity (Mott and Davis 1979). The value of $\bar \sigma_{min}$ and its relevance are however controversial, ranging from $\bar \sigma_{min}=0.1$ (Mott and Davis 1979) to $\bar \sigma_{min} \sim 0.5$ found in experiments (Mandrus {\it et al.} 1991) as a borderline between insulating and conducting cuprates at low $T$. \subsubsection{Intermediate doping} Let us go straight to results at the intermediate (optimum) doping, where we can reach the lowest $T_{fs}$. Instead of $\sigma(\omega)$ it is more instructive to present the current correlation function $C(\omega)$, equation (\ref{ec1}). To avoid ambiguities connected with additional smoothing, we plot the corresponding integrated spectra \begin{equation} I_C(\omega)=\int_0^{\omega} C(\omega') d\omega'. \label{eo1} \end{equation} In Fig.~\ref{5.4} we present $I_C(\omega)$ for $c_h=3/16$, for various $T\le t$. The spectra reveal several remarkable features (Jakli\v c and Prelov\v sek 1995b): \noindent (i) For $T \le J$ the spectra $I_C(\omega)$ are essentially independent of $T$, at least for available $T>T_{fs}$.
\noindent (ii) Simultaneously the slope of $I_C(\omega < 2~t)$ is nearly constant, i.e. we find $C(\omega) \sim C_0$ in a wide $\omega$ range. At the same time $C_0$ is only weakly dependent on $J$, as tested for $J/t=0.2 - 0.6$. \noindent (iii) Even for higher $T>J$ the differences in the slope $C_0$, as also in $I_C(\infty)$, appear less essential. Note that for $T\gg t$ we know exactly $I_C(\infty)=\pi t^2 c_h(1-c_h)$ (Jakli\v c and Prelov\v sek 1995c). \begin{figure} \centering \iffigure \epsfig{file=fig_5.4.ps,height=10cm,angle=-90} \fi \caption{ Integrated current correlation spectra $I_C(\omega)$ at $c_h=3/16$ for different $T\le t$. } \label{5.4} \end{figure} We conclude that $C(\omega<2~t) \sim C_0$ implies a simple universal form (Jakli\v c and Prelov\v sek 1995b), \begin{equation} \sigma(\omega)=C_0 e_0^2{1-e^{-\beta\omega} \over \omega}. \label{eo2} \end{equation} Such a $\sigma(\omega)$ shows a nonanalytic behaviour at $\omega \rightarrow 0$, starting with a finite slope. This is already an indication that $\sigma(\omega)$ is not consistent with the usual Drude form, but rather with a marginal concept (Varma {\it et al.} 1989) where the only $\omega$ scale is given by $T$. It is also remarkable that the form (\ref{eo2}) trivially reproduces the linear law $\rho \propto T$ as well as the non-Drude fall-off at $\omega >T$. It is evident that the expression (\ref{eo2}) is universal, containing the single parameter $C_0$ as a prefactor. Experimental results and theoretical considerations are often discussed in terms of the $\omega$-dependent relaxation time $\tau$ and the effective mass $m^*$.
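The universal form (\ref{eo2}) is simple enough to evaluate directly; a small sketch (units with $\hbar=k_B=1$ and, as an assumption for illustration, $C_0=e_0=1$) exhibits the two limits quoted in the text: $\sigma(\omega\to 0)=C_0 e_0^2/T$, hence $\rho\propto T$, and the slow $\sim C_0 e_0^2/\omega$ tail for $\omega\gg T$:

```python
import math

def sigma_universal(w, T, C0=1.0, e0=1.0):
    """Universal optical conductivity of eq. (eo2), hbar = k_B = 1."""
    beta = 1.0 / T
    if w == 0.0:
        return C0 * e0**2 * beta          # analytic w -> 0 limit
    return C0 * e0**2 * (1.0 - math.exp(-beta * w)) / w

def rho_dc(T, C0=1.0, e0=1.0):
    """d.c. resistivity 1/sigma(0): strictly linear in T."""
    return T / (C0 * e0**2)

# for w >> T the tail is ~ C0 e0^2 / w, i.e. much slower than
# the 1/w^2 fall-off of a standard Drude form
```

Note the finite slope at $\omega=0$: $\sigma$ is even in neither $\omega$ nor analytic there, which is the nonanalyticity emphasized in the text.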
These can be uniquely introduced via the complex $\tilde \sigma(\omega)$ and the corresponding memory function $M(\omega)$ (G\"otze and W\"olfle 1972), \begin{equation} \tilde \sigma(\omega)= {ie_0^2 {\cal S} \over \omega + M(\omega)}, \qquad \qquad {\cal S} = -\langle H_{kin} \rangle/2N, \label{eo3} \end{equation} and \begin{eqnarray} {1\over\tau(\omega)} &=&{ M''(\omega)\over 1+ M'(\omega)/ \omega}, \nonumber \\ {m^*(\omega)\over m_t} &=& {2 c_h t\over {\cal S}} \left(1+{M'(\omega)\over\omega}\right), \label{eo4} \end{eqnarray} where $m_t = 1/2ta_0^2$ is the bare band mass. Using relations (\ref{eo3}),(\ref{eo4}) one can formally rewrite $\tilde \sigma(\omega)$ in the familiar Drude form \begin{equation} \tilde \sigma(\omega) = {i c_h e_0^2 \over m^*(\omega)[\omega +i /\tau(\omega)]}. \label{eo5} \end{equation} Employing the equation (\ref{eo5}) we evaluate both $\tau(\omega)$ and $m^*(\omega)$ from known $\tilde \sigma(\omega)$. Results for $c_h=3/16$, corresponding to Fig.~\ref{5.4}, are presented in Fig.~\ref{5.5}. It follows from Fig.~\ref{5.5}a, but even more directly from the form (\ref{eo2}), that in the regime $T\lesssim J$ and $\omega<t$ we can describe the behaviour of $1/\tau$ with a linear law \begin{equation} \tau^{-1} = 2\pi\lambda (\omega + \xi T), \qquad \lambda \sim 0.09,~~~\xi \sim 2.7~. \label{eo6} \end{equation} This dependence falls within the general framework of the MFL scenario. It is however not the form (\ref{cp3}) proposed originally (Varma {\it et al.} 1989, Littlewood and Varma 1990), but rather the one (\ref{cp3a}) deduced from experiments in cuprates (El Azrak {\it et al.} 1995, Baraduc {\it et al.} 1995). It should be however stressed that the asymptotic form (\ref{eo6}) does not allow for any free parameter, i.e. constants $\lambda$ and $\xi$ are universal and independent of any model parameters, whereas within the MFL proposal $\lambda$ is an adjustable parameter while $\xi = \pi$. 
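Equations (\ref{eo3}) and (\ref{eo4}) amount to a simple inversion of the measured complex $\tilde\sigma(\omega)$; the sketch below performs this inversion and, as a round-trip check, feeds in an assumed pure Drude input, for which $M(\omega)=i/\tau_0$ exactly:

```python
import numpy as np

def memory_relaxation(sigma, w, S, c_h, t=1.0, e0=1.0):
    """Invert eq. (eo3) for M(w), then apply eq. (eo4) for 1/tau and m*/m_t."""
    M = 1j * e0**2 * S / sigma - w       # M(w) from complex sigma(w)
    Mp, Mpp = M.real, M.imag
    inv_tau = Mpp / (1.0 + Mp / w)
    mass_ratio = (2.0 * c_h * t / S) * (1.0 + Mp / w)
    return inv_tau, mass_ratio

# round-trip check on an assumed Drude form: M = i/tau0, so 1/tau = 1/tau0
S, c_h, tau0, w = 0.4, 3.0 / 16.0, 2.0, 0.3
sigma_drude = 1j * S / (w + 1j / tau0)   # eq. (eo3) with M = i/tau0, e0 = 1
inv_tau, mass_ratio = memory_relaxation(sigma_drude, w, S, c_h)
```

For the Drude input $M'=0$, so the mass ratio reduces to $2c_h t/{\cal S}$ and the extracted $1/\tau$ is $\omega$-independent; applied to the FTLM $\tilde\sigma(\omega)$ the same inversion yields the linear $1/\tau$ of eq. (\ref{eo6}).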
\begin{figure} \centering \iffigure \mbox{ \subfigure[]{ \epsfig{file=fig_5.5a.ps,height=7.5cm,angle=-90}} \quad \subfigure[]{ \epsfig{file=fig_5.5b.ps,height=7.5cm,angle=-90}}} \fi \caption{ (a) Inverse relaxation time $1/\tau$, and (b) mass enhancement $m^*/m_t$ vs. $\omega$, for $c_h=3/16$ and different $T$. The inset in a) shows the $T$-dependence of $1/\tau(0)$. } \label{5.5} \end{figure} It should also be mentioned that the universal dynamics, as described by the equation (\ref{eo2}), does not seem to be restricted only to the particular case of the doped AFM, but has a wider applicability, e.g. it has recently been established also in ladder systems (Tsunetsugu and Imada 1997). \subsubsection{Underdoped and overdoped regime} Let us turn to the discussion of results at other dopings $c_h$. Results for $c_h =4/16$ (Jakli\v c and Prelov\v sek 1995c) are in all aspects very similar to the $c_h=3/16$ case. We show in Fig.~\ref{5.6} analogous $I_C(\omega)$ for underdoped $c_h = 2/18$, as well as for overdoped $c_h=7/16$. One feature common to all $c_h$ considered (including $N_h=1$) is the non-Drude behaviour for $\omega>t$. This confirms the belief that the incoherent motion of holes, dominating the high-$\omega$ response as described e.g. within the RPA (Rice and Zhang 1989), can remain a valid concept even at large doping $c_h < 0.5$. \begin{figure}[ht] \centering \iffigure \mbox{ \subfigure[]{ \epsfig{file=fig_5.6a.ps,height=8cm,angle=-90}} \quad \subfigure[]{ \epsfig{file=fig_5.6b.ps,height=8cm,angle=-90}}} \fi \caption{ $I_C(\omega)$ for (a) $c_h=2/18$, and (b) $c_h=7/16$, for different $T$. } \label{5.6} \end{figure} Otherwise it is harder to draw conclusions on the most interesting low-$(\omega,T)$ regime, both for the underdoped as well as for the overdoped case. For $c_h=2/18$ we note that in contrast to Fig.~\ref{5.4} the low-$\omega$ slope is not constant at $T<J$ but is still gradually decreasing with $T$.
This results in a modified variation of the d.c. conductivity $\sigma_0 \propto 1/T^{1-\eta}$ with $\eta >0$, hence also in a sublinear $\rho_0 \propto T^{1-\eta}$. On the other hand we see already for $T=0.1~t$ a drop of $I_C(\infty)$ (closely related to the sum rule), which can be interpreted as a significant persistent (nonscattered) current in a system with p.b.c. The latter emerges as the coherent part with $D_c>0$ according to the equation (\ref{ec3}), not taken into account in the presented $I_C(\omega)$. While we cannot say much about the current relaxation of such a coherent part, it is plausible that an increase in the coherence should decrease the resistivity $\rho_0$, as found in underdoped cuprates below the crossover $T^*$. Otherwise results in Fig.~\ref{5.6}a are close to the curves for a single hole in an AFM, Fig.~\ref{5.2}, with the difference that in the latter case $T^*$ appears even higher. In the overdoped case in Fig.~\ref{5.6}b $I_C(\infty)$ starts to deviate from a constant already for $T\lesssim J$. At $T<J$ a sharp increase of the unscattered current appears, so we are unable to speculate on a possible onset of a more LFL-like $\rho \propto T^2$. \subsubsection{Effects of the next-nearest-neighbour hopping} It is a relevant question whether the anomalous but universal behaviour of $\sigma(\omega)$ found at the intermediate doping is spoiled by possible additional terms, e.g. by the introduction of the n.n.n. hopping $t'$ term (\ref{cm2}), often invoked to obtain a realistic description of cuprates. When considering the effect of $t'$ it is important to realize that at finite doping $t'>0$ tends to stabilize the AFM ordering while $t'<0$ destabilizes it and tends toward a Nagaoka-type FM state (Bon\v ca and Prelov\v sek 1989). Results for $I_C(\omega)$ at $t' \neq 0$ are presented in Fig.~\ref{5.7}. We choose a rather modest $|t'/t|=0.2$; still, quite a pronounced effect on $I_C(\infty)$ is evident.
We should note that in this case also the generalized sum rules following from equations (\ref{ec6}) and (\ref{ec9}) apply. On the other hand, the low-$\omega$ part is not changed essentially. For $t'/t=0.2$ we note again a universal behaviour according to the form (\ref{eo2}). Deviations at lowest $T$ are somewhat larger for $t'/t=-0.2$. A possible interpretation of these results is that $t'\neq 0$ moves the system effectively away from the starting doping regime, i.e. drives an optimally doped system either towards the underdoped or towards the overdoped regime. \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_5.7.ps,height=11cm,angle=-90} \fi \caption{ $I_C(\omega)$ for $c_h=3/18$ and the n.n.n. hopping: a) $t'/t=0.2$, and b) $t'/t=-0.2$, for different $T$. } \label{5.7} \end{figure} \subsection{Resistivity} Finally, we display results for the d.c. resistivities $\rho(T)$, as extracted from $\sigma(\omega \rightarrow 0)$. The extrapolation $\omega \rightarrow 0$ is straightforward at higher $T$. It becomes somewhat more delicate on approaching $T\sim T_{fs}$ due to the appearance of more pronounced structures and $D_c \ne 0$. Actually, we can evaluate $\sigma_0$ either from $\sigma(\omega)$, which involves some smoothing, or from the slope of $I_C(\omega \sim 0)$, which seems to be more reliable. In Fig.~\ref{5.8} we present results for $\rho(T)$ at various $c_h$ for $J/t=~0.3$, and in comparison also for $J=0$. We first notice that at $T>t$ the calculated $\rho(T) \propto T$. The slope at $T>t$ is nearly independent of $J$, the main effect of finite $J>0$ being the upward shift of the $\rho(T)$ curves. The shift also decreases as doping is increased, and it is plausible that the effect of $J$ nearly vanishes in the overdoped regime $c_h > 0.3$. The slope at $T>t$ can be approximately given (for $c_h < 0.25$) as $d\rho/dT \sim \zeta \rho_0 k_B /c_h t$ with $\zeta \sim 0.4$.
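As a rough consistency check, the slope formula can be evaluated numerically; the following is only an order-of-magnitude sketch, using $\zeta=0.4$, $c_h=3/16$ and $t=0.4$~eV as quoted above:

```python
# Order-of-magnitude evaluation of d(rho)/dT ~ zeta * rho_0 * k_B / (c_h * t),
# with rho_0 = hbar/e_0^2 ~ 4.1 kOhm, the natural 2D sheet-resistance unit.
hbar = 1.0546e-34      # J s
e0 = 1.602e-19         # C
k_B = 8.617e-5         # eV/K
zeta, c_h, t = 0.4, 3 / 16, 0.4   # zeta and t (in eV) as quoted in the text
rho_0 = hbar / e0**2              # Ohm
slope = zeta * rho_0 * k_B / (c_h * t)   # sheet-resistance slope, Ohm per K
print(f"d(rho)/dT ~ {slope:.1f} Ohm/K")
```

The resulting sheet-resistance slope of order $2~\Omega/$K sets the scale of the high-$T$ linear resistivity discussed below.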
This value confirms the relevance of the RPA (Brinkman and Rice 1970), which yields for a square lattice $\zeta \sim 0.72$. \begin{figure} \centering \iffigure \epsfig{file=fig_5.8.ps,height=10cm,angle=-90} \fi \caption{ Resistivity $\rho$ vs. $T$ at different $c_h=N_h/N$ for $J=~0.3~t$ (full lines) and $J=0$ (dashed lines). } \label{5.8} \end{figure} One could expect essential changes in the regime $T\lesssim J$. It is evident from Fig.~\ref{5.8} that at the intermediate doping $0.15< c_h \lesssim 0.25$ the curve $\rho(T)$ changes the slope at $T \sim J$. Nevertheless $\rho(T<J)$ is still linear in $T$, as emerges also from the observed universal form (\ref{eo2}). As discussed in Sec.~5.4.2, underdoped systems such as $c_h=2/16$ already show deviations from the universality (\ref{eo2}). Two effects should be mentioned. In the regime where our analysis is fully applicable and $l_s<L$ we find a sublinear $\rho(T) \propto T^{1-\eta}$, which could also be interpreted as $\rho(T) \sim A+BT$ with a positive intercept $A>0$. On the other hand, the appearance of $D_c\neq 0$ in our results seems to be closely related to another scale $T\sim T^*$ where a kink of $\rho(T)$ appears. It should also be noted that the introduction of the n.n.n. hopping $t' \neq 0$ does not change the qualitative conclusions. \subsection{Relation to experiments} In 2D $\sigma$ is naturally expressed in terms of the universal constant $\rho_0=\hbar/e_0^2$. The corresponding 3D conductivity of a stack of 2D conducting sheets with an average distance $\bar{c}$ is given instead by $\sigma_{3D}= \sigma/\bar c$. For comparison with experiments we reduce the 3D measured values to the 2D conductivities for three different cuprates at intermediate doping, i.e. LSCO with $x=0.2$ (Uchida {\it et al.} 1991), Bi$_2$Sr$_2$CaCu$_2$O$_8$ (BISCCO) (Romero {\it et al.} 1992), and YBCO (Cooper {\it et al.} 1993). Experimental results are taken at relatively low temperatures $T<200~K$, i.e.
at $T<T_{fs}$, below the sensitivity of our small-system calculations. It is an open experimental question whether spectra reduced in this way are really universal, i.e. whether they depend only on the hole concentration $c_h$ in the conducting CuO$_2$ layers or other details of the material structure are still important. Moreover, the effective $c_h$ for the presented materials are known only approximately, except for LSCO where $c_h \sim x$; it is estimated that $c_h \sim 0.23$ for BISCCO and YBCO (Batlogg {\it et al.} 1994). Taking into account this uncertainty, the calculated spectra $\sigma(\omega)$ for $c_h=3/16$ are in quantitative agreement with measurements, as seen in Fig.~\ref{5.9}, where the energies are expressed in eV using $t=0.4$~eV. We note also that at high $\omega \gtrsim 1~eV$ the fall-off of the calculated $\sigma(\omega)$ is faster than that of the measured ones. This could be explained by the emerging contribution of transitions to excited (charge transfer or upper Hubbard band) states, not taken into account within the $t-J$ model but clearly identified experimentally (Uchida {\it et al.} 1991). \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_5.9.ps,height=10cm,angle=-90} \fi \caption{ Sheet conductivities $\sigma(\omega)$ for various $T/t$, in comparison with measurements in different cuprates. Experimental results refer to $T<200~K$. } \label{5.9} \end{figure} It should be noted that also the calculated $\tau^{-1}(\omega)$ and the corresponding parameters $\lambda$ and $\xi$, as defined by the equation (\ref{eo6}), are close to the experimental ones. E.g., an analysis in a wide frequency range $\omega<1000~$cm$^{-1}$ presented for YBCO and BISCCO data by El Azrak {\it et al.} (1994) yields $\lambda \sim 0.11$, while a broader class of optimally doped materials is consistent with $0.1<\lambda <0.15$ (Baraduc {\it et al.} 1995).
A recent investigation (Startseva {\it et al.} 1997) in a narrower $\omega$ range obtains for overdoped LSCO ($x=0.22$) similar values $0.06 <\lambda <0.08$. Also $m^*/m_t$ in Fig.~\ref{5.5}b qualitatively agrees with experimental findings (Romero {\it et al.} 1992, El Azrak {\it et al.} 1994, Tanner and Timusk 1992, Cooper {\it et al.} 1993). For a quantitative comparison we should note that by lowering $T$ to the experimentally investigated values we expect an increase of $m^*$ at low $\omega$. On the other hand, assuming realistic values $a_0 = 0.38~{\rm nm}$ and $t=0.4~{{\rm eV}}$ we should also take into account that $m_t \sim 0.6~m_0$. In Fig.~\ref{5.10} we compare the calculated resistivities to the measured ones. It should be pointed out that there is a restricted $T$ window where a comparison can be made, since $T_{fs} \sim 450~$K, whereby at $T\sim T_{fs}$ also finite-size effects start to influence our analysis. Nevertheless, for the intermediate doping $c_h \sim 0.2$ our $\rho(T)$ results match the experimental ones quantitatively well for cuprates with comparable hole concentrations, i.e. for BISCCO, YBCO and LSCO with $x=0.15$ (Takagi {\it et al.} 1992, Batlogg {\it et al.} 1994, Iye 1992). \begin{figure} \centering \iffigure \epsfig{file=fig_5.10.ps,height=10cm,angle=-90} \fi \caption{ Sheet resistivities $\rho(T)$ for various dopings (full lines) in comparison with measurements in LSCO with $x=0.15$ (dotted), BISCCO (dashed), and YBCO (dash-dotted). } \label{5.10} \end{figure} Turning to underdoped materials, there are certain features which are reproduced in our results. First, our $\mu(\omega)$ for a single hole, see Fig.~\ref{5.2}, as well as several $T=0$ numerical studies (Sega and Prelov\v sek 1990, Dagotto 1994), indicate the existence of a mid-infrared peak in $\sigma(\omega)$, being a signature of magnon excitations in a spin background with a longer range AFM order.
This feature has been seen in experiments in LSCO (Uchida {\it et al.} 1991), although its appearance in other materials is controversial. We also reproduce the observation that at moderate doping $c_h>0.1$ the resistivity essentially scales as $\rho \propto 1/c_h$. A disproportionate increase of $\rho$ at low doping $c_h<0.1$, as deduced when comparing values in Fig.~\ref{5.10} with the single-hole mobilities in Fig.~\ref{5.3}, is likewise consistent with experiments, e.g. for LSCO with $x<0.1$ (Takagi {\it et al.} 1992). \setcounter{equation}{0}\setcounter{figure}{0} \section{Magnetic Properties} The static spin response and the spin dynamics in the undoped and doped AFM state in cuprates have been experimentally studied by neutron scattering (see e.g. Shirane 1991), by NMR techniques (see e.g. Slichter 1994), as well as by other methods. Magnetic excitations in cuprates as measured by neutron scattering reveal in the insulating materials La$_2$CuO$_4$ (Keimer {\it et al.} 1992, Hayden {\it et al.} 1996) and YBa$_2$Cu$_3$O$_6$ (Rossat-Mignot {\it et al.} 1991) a remarkable agreement with the magnons of the planar Heisenberg AFM. On the other hand, a consistent description of the magnetic properties of doped cuprates is still lacking. Nevertheless it is quite clear that the normal-state spin dynamics differs qualitatively from the one expected for a LFL. In doped LSCO the NMR and NQR spin-lattice relaxation time $T_1$ on Cu nuclei is generically nearly $T$- and doping-independent in the normal state ($T>T_c$) (Imai {\it et al.} 1993), in contrast to the Korringa law $1/T_1 \propto T$ in normal metals. Also, in the same regime the low-$\omega$ dynamical susceptibility in doped systems appears to be consistent with $\chi''(\omega) \propto \omega/T$ (Shirane 1991, Keimer {\it et al.} 1992, Sternlieb {\it et al.} 1993).
Experimental investigations in recent years have been focused on underdoped materials, which show in the normal phase an emerging spin gap both in the NMR (Slichter 1994) and in the neutron scattering (Sternlieb {\it et al.} 1993), but also the appearance of charge stripes related to AFM domains (Tranquada {\it et al.} 1995). Since the latter phenomena appear only at lower $T$, hardly accessible in our studies, we shall comment on them only briefly later on. So far only the isotropic quantum AFM, with long-range order at $T=0$, is well understood theoretically (Manousakis 1991). For doped systems several phenomenological explanations (see Sec.~2.2) have been presented for magnetic properties. NMR and NQR data on the spin dynamics have been interpreted within the NAFL scenario (Millis {\it et al.} 1990, Millis and Monien 1992), where the $T$ dependence is attributed to the variation of the AFM correlation length $\xi(T)$. At low hole doping the mapping on the quantum critical scaling regime of the nonlinear sigma model, with the main ingredient $\xi \propto 1/T$, has been advocated by Sokol and Pines (1993). An alternative scenario for the low-$(\omega,T)$ behaviour is the MFL scenario (Varma {\it et al.} 1989, Littlewood and Varma 1991), where $\xi(T)$ is short and not crucially $T$-dependent. The most reliable model results for magnetic properties of doped AFM have been obtained for static quantities by QMC studies of the Hubbard model and by the g.s. ED of the $t$-$J$ model (see Dagotto 1994), as well as by means of the HTE analysis (Singh and Glenister 1992a). Much less conclusive are results for the spin dynamics. While g.s. ED studies of the spin structure factor $S(\vec q,\omega)$ (Tohyama {\it et al.} 1995, Eder {\it et al.} 1995) reveal quite an essential difference between $S(\vec q,\omega)$ and the corresponding charge spectra $N(\vec q,\omega)$, they cannot give reliable conclusions for the low-$\omega$ behaviour.
There have been only a few attempts to address dynamical properties numerically at $T>0$ (Tohyama {\it et al.} 1993), prior to the application of the FTLM (Jakli\v c and Prelov\v sek 1995a, b). It should be recalled that the most challenging questions (related to the normal state) refer to the dynamics at low $\omega$ and to the d.c. spin response in the strong correlation regime $J<t$ at low $T<J$. \subsection{Spin response} We consider the response of the electronic system to a time-dependent, spatially modulated magnetic field, which couples to the spin degrees of freedom via a Zeeman term \begin{equation} H' = N M^z_{\vec{q}} B_{\vec{q}},\qquad M^z_{\vec{q}} = -{g\mu_B\over N}\sum_{i} e^{i\vec{q}\cdot\vec{R_i}} S_{i}^z. \label{ms1} \end{equation} Within the linear response theory the magnetization response is given by \begin{equation} \delta \langle M^z_{\vec{q}}\rangle (\omega)=g^2\mu_B^2 \chi(\vec{q},\omega)B_{\vec{q}}(\omega), \label{ms2} \end{equation} where \begin{eqnarray} \chi(\vec{q},\omega)&=&i\int_{0}^\infty dt\; e^{i\omega t} \langle[S^z_{\vec{q}}(t),S^z_{-\vec{q}}(0)]\rangle, \nonumber \\ S^z_{\vec{q}}&=& (1/\sqrt{N})\sum_{i}e^{i\vec{q}\cdot\vec{R}_i} S^z_{i}. \label{ms3} \end{eqnarray} $\chi''(\vec{q},\omega)$ can be expressed through the dynamical spin structure factor $S(\vec {q},\omega)$, \begin{eqnarray} \chi''(\vec{q},\omega)&=&\pi(1-e^{-\beta\omega})S(\vec{q},\omega), \nonumber \\ S(\vec{q},\omega)&=& {1\over \pi}{\rm Re}\int_{0}^\infty dt\; e^{i\omega t} \langle S^z_{\vec{q}}(t) S^z_{-\vec{q}}(0)\rangle. \label{ms4} \end{eqnarray} $S(\vec {q},\omega)$ is the quantity directly evaluated within the FTLM, following the expression (\ref{fi4}).
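The detailed-balance content of the relation (\ref{ms4}) is easily checked numerically: for any spectrum obeying $S(\vec q,-\omega)=e^{-\beta\omega}S(\vec q,\omega)$ the resulting $\chi''$ is odd in $\omega$. A minimal sketch with an assumed (toy) Gaussian envelope:

```python
import numpy as np

beta = 5.0
omega = np.linspace(-4.0, 4.0, 801)            # symmetric grid around omega = 0
envelope = np.exp(-omega**2)                   # assumed (toy) even envelope
# attaching the thermal factor enforces S(-w) = exp(-beta*w) S(w)
S = envelope / (1.0 + np.exp(-beta * omega))
# chi''(w) from eq. (ms4); it must come out odd in omega
chi2 = np.pi * (1.0 - np.exp(-beta * omega)) * S
```

Any envelope gives the same qualitative outcome; only the detailed-balance factor matters for the symmetry.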
Consequently, the static structure factor and the static susceptibility can be evaluated, \begin{eqnarray} S(\vec{q})&=&\int_{-\infty}^\infty S(\vec{q},\omega)d\omega= \langle S^z_{\vec{q}}S^z_{-\vec{q}}\rangle, \nonumber \\ \chi(\vec{q})&=&\chi'(\vec{q},0)=\frac{1}{\pi}{\cal P} \int_{-\infty}^\infty\frac{\chi''(\vec{q},\omega)}{\omega}d\omega. \label{ms5} \end{eqnarray} \subsection{Uniform spin susceptibility and Wilson ratio} The uniform case $\vec{q}=0$ is particular, since $S_z^{tot}$ is a conserved quantity. Then one can use for the uniform static spin-susceptibility $\chi_0$ the expression \begin{equation} \chi_0=\chi(\vec{q}=0)=\beta S(\vec{q}=0)=\frac{\langle (S_z^{tot})^2\rangle}{N T}. \label{mu1} \end{equation} The calculation thus reduces to a thermodynamic average of a conserved operator, and an analysis analogous to the one used in Sec.~4 applies. This simplification allows for the consideration of larger systems, i.e. we calculate $\chi_0(T)$ for a system of $N=20$ sites in the range $0<c_h<0.3$, while for the undoped AFM (Heisenberg model) we reach $N=26$ (Jakli\v c and Prelov\v sek 1996). Results at several $c_h$ are presented in Fig.~\ref{6.1}. For $c_h=0$ our results agree with the HTE (Singh and Glenister 1992a) down to $T\sim 0.3~J$. In an AFM $\chi_0$ exhibits a maximum at $T=T^*\sim J$, which reflects the onset of the short range AFM order for $T<T^*$. Namely, at $T\to 0$ the longitudinal spin fluctuations in an ordered AFM gradually freeze out, while the transverse ones remain constant, thus leading to a reduced $\chi_0(T\to 0)$. \begin{figure} \centering \iffigure \epsfig{file=fig_6.1.ps,height=10cm,angle=-90} \fi \caption{ Uniform susceptibility $\chi_0$ vs. $T$ at several $c_h$ in steps of 0.05 within a system of $N=20$ sites. $c_h=0$ is obtained for $N=26$. } \label{6.1} \end{figure} For finite doping $\chi_0$ is only weakly diminished for $T>J$, the reduction being due to the reduced number of spins in the system.
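The reduction of $\chi_0$ in equation (\ref{mu1}) to a thermal average of the conserved $(S_z^{tot})^2$ can be illustrated by full diagonalization of a toy cluster, here a four-site Heisenberg ring (this is of course not the FTLM itself, just the same thermodynamic average on a trivially small system):

```python
import numpy as np
from itertools import product

# four-site Heisenberg ring, H = J sum_i S_i . S_{i+1}, in the S_z product basis
J, N = 1.0, 4
states = list(product([0.5, -0.5], repeat=N))
dim = len(states)
H = np.zeros((dim, dim))
for a, s in enumerate(states):
    for i in range(N):
        j = (i + 1) % N
        H[a, a] += J * s[i] * s[j]          # Ising part Sz_i Sz_j
        if s[i] != s[j]:                    # flip part (S+_i S-_j + h.c.)/2
            t = list(s)
            t[i], t[j] = s[j], s[i]
            H[a, states.index(tuple(t))] += 0.5 * J
E, V = np.linalg.eigh(H)
Sz = np.array([sum(s) for s in states])
Sz2 = (V**2).T @ Sz**2                      # <n|(Sz_tot)^2|n> per eigenstate

def chi0(T):
    """Uniform susceptibility, eq. (mu1): <(Sz_tot)^2>/(N T)."""
    w = np.exp(-(E - E.min()) / T)
    return float(w @ Sz2) / w.sum() / (N * T)
```

At high $T$ this reproduces the free-spin Curie law $\chi_0 \simeq 1/4T$, while at $T\ll J$ the singlet ground state suppresses $\chi_0$, the finite-cluster caricature of the freezing of longitudinal fluctuations discussed above.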
On the other hand, a qualitative change appears for $T<J$. The maximum at $T^*$ gradually shifts to lower $T$ with doping and finally disappears at $c_h>0.15$. In the overdoped regime $c_h \gtrsim 0.15$ we observe a monotonic Pauli-like $\chi_0$ for $T<0.2~t$, which could signify an onset of a low-$T$ behaviour consistent with the LFL picture. Still, even at $c_h>0.6$ we do not find the usual LFL behaviour $\chi_0(T)={\rm const}$; rather, $\chi_0(T)$ still increases nearly linearly on lowering $T\to 0$. The obtained results are quite consistent with experiments on cuprates. First, $\chi_0$ at $T\to 0$ increases with doping, as found in LSCO (Johnston 1989, Torrance {\it et al.} 1989), and the existence of a maximum of $\chi_0(T)$ at $T^*(c_h)$ in underdoped cuprates has been interpreted in terms of a pseudogap scale (Batlogg {\it et al.} 1994). It is evident from our results that such a pseudogap is inherent within the $t$-$J$ model and is intimately related to the short range AFM order. It is remarkable that in the model this feature disappears, i.e. $T^* \to 0$, just at a similar (optimum) doping $c_h \sim 0.15$ as in cuprates (Batlogg {\it et al.} 1994). Note also that the unusual linear increase of $\chi_0(T\to 0)$ in the overdoped regime, see Fig.~\ref{6.1} for $c_h=0.2$, seems to be well consistent with experimental results in overdoped LSCO (Loram {\it et al.} 1996). Here we can also comment on the ratio $W=(g^2 \mu_B^2\chi_0)/\gamma$, where $\gamma=C_V/T$. It is used as a test for the concept of nearly free QP, where one expects $W_0= {1\over 3}(\pi k_B/\mu_B)^2$. The meaningful measure is thus the Wilson ratio $R_W=W/W_0$, often studied in correlated systems, in particular in connection with heavy-fermion metals. In our case it is convenient to perform the test on $s$ directly by defining $\tilde \gamma = s/T$ (see also Loram {\it et al.} 1996).
In our notation the dimensionless Wilson ratio can be expressed as \begin{equation} R_W={4 \pi^2 \over 3} {(k_BT/t) (\chi_0 t)\over s/k_B } .\label{mu2} \end{equation} We can now easily evaluate $R_W$ by comparing the results in Fig.~\ref{4.3}a with Fig.~\ref{6.1}; the results are shown in Fig.~\ref{6.2}. It is quite striking that in the intermediate regime $0.1 \lesssim c_h \lesssim 0.2$ at low $T\ll J$ we find values very close to the free-fermion one, i.e. $R_W=1$. In the overdoped regime the calculated values are somewhat larger, but still $R_W<2$. The same seems to hold in the underdoped case $c_h=0.05$, where results at the lowest $T$ should be taken with care. With increasing $T$, $R_W$ also increases and seems to reach quite a wide plateau with $R_W \sim 2$ for $T\gtrsim J$. \begin{figure} \iffigure \centering \epsfig{file=fig_6.2.ps,height=10cm,angle=-90} \fi \caption{Wilson ratio $R_W$ vs. $T$ at several $c_h$.} \label{6.2} \end{figure} This finding is quite puzzling, since doped AFM are clearly far from a simple LFL, and apparently even further from free fermions. Note however that experimental results also yield $R_W \sim 1$, both for LSCO and YBCO (Loram {\it et al.} 1996). Moreover, the same experiments indicate that $s \propto T\chi_0$ in a wide range of $T$ and doping, so $R_W$ as defined in the equation (\ref{mu2}) is nearly $T$ independent. \subsection{Spin structure factor and dynamical susceptibility} Let us first discuss the $S(\vec Q, \omega)$ spectra at the AFM wavevector $\vec Q =(\pi, \pi)$, where the spin response is most pronounced. In Fig.~\ref{6.3} we present results at fixed $T=0.2~t<J$ but various dopings $c_h$ (Jakli\v c and Prelov\v sek 1995a). It should be noted that $S(\vec Q, \omega)$ is not symmetric around $\omega=0$, hence a maximum appears in general at $\omega>0$. The most interesting feature in Fig.~\ref{6.3} is the qualitative change of the spectra on doping.
At low doping $c_h<0.12$ we see that $S(\vec Q,\omega)$ is dominated by a single central peak, i.e. we are observing mainly the AFM spin fluctuations of a Heisenberg model. The main effect of holes is to reduce the AFM correlation length, and consequently the intensity of the central peak. \begin{figure} \centering \iffigure \epsfig{file=fig_6.3.ps,height=10cm,angle=-90} \fi \caption{ Spin structure factor $S(\vec Q, \omega)$ at fixed $T=0.2~t <J$ and different dopings $c_h$. } \label{6.3} \end{figure} In the intermediate regime $0.12<c_h<0.3$ a high-frequency component with $\omega \gtrsim t$ emerges, coexisting with the remaining low-$\omega$ fluctuations. It is quite plausible to attribute the high-$\omega$ dynamics to the free-fermion-like component of the correlated system, in particular since it appears to be quite independent of $J$ (provided that $J<t$). Although this observation is consistent with previous studies (Tohyama {\it et al.} 1993, Putikka {\it et al.} 1994), the coexistence of spin-fluctuation and free-fermion timescales at the intermediate doping has been clearly established only by using the FTLM (Jakli\v c and Prelov\v sek 1995a). Namely, it is harder for other methods, e.g. the HTE, to resolve different coexisting timescales. The dual character is a crucial property, since the free-fermion part determines to a large extent the static spin correlations $S(\vec q)$ through the relation (\ref{ms5}), as well as the electron (charge) density correlations $N(\vec q)$, discussed in Sec.~8.1. The latter have in fact been interpreted in terms of a quasi FS (Putikka {\it et al.} 1994). On the other hand, the low-$\omega$ spin dynamics dominates the dynamical and static spin susceptibilities, i.e. $\chi''(\vec q,\omega)/\omega$ and $\chi(\vec q)$, hence the low-$\omega$ neutron scattering and NMR processes.
It is also evident that in the overdoped regime $c_h>0.3$ the low-frequency component disappears and the free-fermion-like fluctuations tend to exhaust the spectra. $S(\vec Q,\omega)$ is nevertheless quite distinct from the one for free fermions, even at $c_h \sim 0.5$. Before we present results for general ${\vec q}\neq \vec Q$, let us discuss the AFM correlation length $\xi(T)$. One can evaluate $\xi(T)$ from the static real-space correlations $S(\vec r)$, corresponding to $S(\vec q)$, i.e. \begin{equation} \xi^2= \frac{1}{4S(\vec{Q})} \sum_i |\vec{r}_i|^2 \exp(i \vec Q\cdot\vec r_i)S(\vec{r}_i). \label{mf1} \end{equation} In the most interesting regime $c_h=0.1-0.3$ we find that $\xi$ is short, typically $\xi \sim 1$, governed by correlations at $r_i =1$. It increases by less than $30 \% $ between $T=J$ and $T=J/3$. This finding (Jakli\v c and Prelov\v sek 1995a) agrees well with the HTE studies (Singh and Glenister 1992a), with the QMC results for the Hubbard model (Furukawa and Imada 1992), as well as with the values $\xi(T=0)$ obtained via the ED within the $t-J$ model (Dagotto 1994). Similar values for $\xi$ can also be extracted from our results for the static $\chi(\vec q)$, although the results are less conclusive at low $T\sim T_{fs}$. In recent years experiments in cuprates have shown the possibility of longer range (or even long range) spin correlations with $\vec q \neq \vec Q$ (Tranquada {\it et al.} 1995, Hayden {\it et al.} 1996). This has been related either to an incommensurate spin order or to the appearance of charge stripes. Such structures may well be possible within the $t$-$J$ model (Prelov\v sek and Zotos 1990, White and Scalapino 1997b), although it is controversial whether they are stable in the most interesting regime $J\ll t$.
Nevertheless we do not expect to resolve them in our calculations for $J/t=0.3$, since they could possibly appear only for $T<T_{fs}$ and could also be missed due to the particular clusters used, which may not accommodate the expected orderings. Let us turn to the dynamical susceptibilities. Fig.~\ref{6.4} displays $\chi''(\vec q,\omega)/\omega$ for fixed $c_h=3/16$, but various $T$ and nonequivalent $\vec q$. Note that on a $4\times 4$ lattice $\vec q= (0,\pi)$ and $\vec q = (\pi/2,\pi/2)$ are equivalent. In contrast to Fig.~\ref{6.3}, high-$\omega$ features are now suppressed. Also, the free-fermion-like component is well separated from the low-$\omega$ part only for $\vec q \sim \vec Q$, where one expects a gap in the response of free fermions with a well defined FS. For other $\vec q$ the free-fermion contribution persists at larger $\omega>J$ in the form of a long tail, while in the low-$\omega$ regime it merges with the spin contribution. \begin{figure} \centering \iffigure \epsfig{file=fig_6.4.ps,height=11cm} \fi \caption{ $\chi''(\vec q,\omega)/\omega$ at $c_h=3/16$ for nonequivalent $\vec q$ and different $T$: $T/t=0.1$ (full line), $0.2$ (dashed line), $0.3$ (dash-dotted line), and $0.5$ (dotted line). } \label{6.4} \end{figure} The most striking feature of Fig.~\ref{6.4} is the strong $T$ dependence of the low-$\omega$ spectra, whereas the AFM correlation length $\xi$ has been found to be only weakly $T$ dependent. This conclusion on $\chi''(\vec q,\omega)/\omega$ seems to hold within the correlation regime $|\vec q - \vec Q| < \xi^{- 1}$, where also $\chi(\vec q) \sim \chi(\vec Q)$. The relevant volume in $\vec q$ space clearly increases on doping and for $c_h=3/16$ already exhausts a substantial part of the Brillouin zone.
The variation at low $\omega$ appears to follow \begin{equation} \chi''(\vec q,\omega)/\omega \propto 1/T,\qquad \omega < T< J, \label{mf2} \end{equation} or equivalently, from the relation (\ref{ms4}), $S(\vec q, \omega)$ is nearly $T$ and $\omega$ independent in the same regime. On the other hand, at the same doping the scaling does not hold for $\vec q$ outside the correlation volume, e.g. for $\vec q=(0,\pi/2)$ in Fig.~\ref{6.4}, where $\chi''(\vec q,\omega)/\omega$ is approximately $T$ independent, as expected within the LFL. The spectra discussed above have as a direct consequence the $T$ variation of the static $\chi(\vec q)$ for $T<J$. We observe a pronounced $T$ dependence, e.g. $\chi(\vec q) \propto 1/T$ in a wide regime $J/3 < T < t$ for all $\vec q$ within the correlation regime. It should, however, be noted that we are quite restricted in the range of $T/J$, so that more quantitative conclusions on a possible power-law (or logarithmic) variation with $T$ are not feasible. \subsection{Local spin dynamics} In order to describe properly the spin correlation function $S(\vec q,\omega)$ it is very helpful to consider the local spin correlation function $S_L(\omega)$ and its symmetric part $\bar S(\omega)$ (Jakli\v c and Prelov\v sek 1995b), \begin{eqnarray} S_L(\omega)&=&{1\over N} \sum_{\vec q} S(\vec q, \omega), \nonumber \\ \bar S(\omega)&=&S_L(\omega)+S_L(-\omega)= (1+e^{-\beta\omega}) S_L(\omega). \label{ml1} \end{eqnarray} It should be noted that $S_L(\omega)$ and the related susceptibility $\chi_L(\omega)$ have been directly measured in cuprates by neutron scattering (Shirane 1991), while the NMR relaxation on $^{63}$Cu effectively yields information on $S_L(\omega \rightarrow 0)$, as discussed in Sec.~6.5. An important restriction on $\bar S(\omega)$ is the sum rule following from the equation (\ref{ms5}), \begin{equation} \int_0^{\infty} \bar S(\omega) d\omega = \langle (S_i^z)^2\rangle = {1 \over 4} (1-c_h).
\label{ml2} \end{equation} When we perform the calculation of $\bar S(\omega)$ we omit the singular $\vec q=0$ term in the summation (\ref{ml1}), so the sum rule (\ref{ml2}) serves as a useful test. In Fig.~\ref{6.5} we display $\bar S(\omega)$ for two dopings, $c_h=1/20$ and $3/16$, and several $T$ in the range $0.1 \le T/t \le 0.7 $ (Jakli\v c and Prelov\v sek 1995b). It is immediately evident that $\bar S(\omega)$ at the optimum doping $c_h=3/16$ is essentially $T$ independent in a wide $T$ range, although one crosses the exchange-energy scale $T \sim J$. \begin{figure} \centering \iffigure \epsfig{file=fig_6.5.ps,height=10cm,angle=-90} \fi \caption{ Local spin correlation function $\bar S(\omega)$ for $c_h= 1/20$ and $c_h=3/16$, and different $T$. } \label{6.5} \end{figure} For the underdoped case $c_h=1/20$, as well as for the undoped AFM, the behaviour is analogous at higher $T >T_0 \sim 0.7~J$. Deviations appear at $T<T_0$, leading at $T\to 0$ to a decrease and to a flattening of $\bar S(\omega<2J) \sim {\rm const}$, whereby a weak enhancement of $\bar S(\omega>2J)$ is then required by the sum rule. At least this regime, of the essentially pure Heisenberg model, is well understood theoretically. The behaviour at $T>T_0$ is consistent with the quantum critical regime within the AFM, whereas for $T< T_0$ the renormalized classical regime applies (Chakravarty {\it et al.} 1989). In the latter regime we are dealing with longer range AFM correlations $\xi \gg 1$, hence $\bar S(\omega )$ is essentially that of an ordered AFM, where the simple magnon picture leads in 2D to $\bar S(\omega<2J) \sim {\rm const}$. In Fig.~\ref{6.6} we follow the doping dependence of $\bar S(\omega)$ at fixed $T=0.2~t<J$. For convenience we plot again the integrated spectra, in analogy to the equation (\ref{eo1}), \begin{equation} I_S(\omega)= \int_0^{\omega} \bar S(\omega') d\omega'. \label{ml3} \end{equation} For the chosen $T$, results are again most reliable at the intermediate doping.
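The bookkeeping behind the sum rule (\ref{ml2}) and the integrated spectra (\ref{ml3}) amounts to a simple quadrature; a sketch with an assumed exponential spectral shape, normalized so that the sum rule holds exactly:

```python
import numpy as np

c_h = 3 / 16
omega = np.linspace(0.0, 10.0, 2001)
d_omega = omega[1] - omega[0]
raw = np.exp(-omega / 1.5)                   # assumed (toy) spectral shape
# trapezoidal normalization imposing the sum rule (ml2)
norm = np.sum((raw[1:] + raw[:-1]) / 2) * d_omega
S_bar = raw * (1 - c_h) / 4 / norm
# cumulative trapezoid gives the integrated spectrum I_S(omega), eq. (ml3)
I_S = np.concatenate(([0.0], np.cumsum((S_bar[1:] + S_bar[:-1]) / 2) * d_omega))
```

By construction $I_S(\omega\to\infty)$ saturates at $(1-c_h)/4$, which is the same check applied to the computed finite-cluster spectra.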
The most striking message is that the initial slope of $I_S(\omega)$, and consequently $S_L(\omega \rightarrow 0)$, is nearly doping independent for $0\le c_h \le 0.25$, as well as $T$ independent at the intermediate doping. Only in overdoped systems with $c_h>0.25$ does the low-$\omega$ behaviour change qualitatively: the low-$\omega$ contribution is strongly suppressed, as expected in a (more) normal FL. In a pure Heisenberg model the spin dynamics is nearly exhausted in the range $\omega<3J$, with the excitations being magnons. On the other hand, at the intermediate doping $\bar S(\omega)$ decreases smoothly up to $\omega \sim 4t$, this being a consequence of the free-fermion-like component. Still, up to $c_h \sim 0.3$ the dominant scale of spin fluctuations remains related to $J$. \begin{figure} \centering \iffigure \epsfig{file=fig_6.6.ps,height=10cm,angle=-90} \fi \caption{ Integrated spin correlation spectra $I_S(\omega)$ at fixed $T=0.2~t$ and different $c_h=N_h/N$. } \label{6.6} \end{figure} Using equations (\ref{ms4}) and (\ref{ml1}) we finally note that the $T$-independent $\bar S(\omega)$ leads to the universal form \begin{equation} \chi''_L(\omega) = \pi\tanh\left( {\omega \over 2T} \right) \bar S(\omega), \label{ml4} \end{equation} where the $T$ dependence is only in the thermodynamic prefactor. It follows from the equation (\ref{ml4}) that for $T<J$ the only relevant scale for $\chi''_L(\omega)$ is given by $T$. The same should hold for the response at fixed $\vec q$. One can thus generalize the expression (\ref{ml4}) to \begin{equation} \chi''(\vec q, \omega) \sim {\chi(\vec q) \over \chi_L} \chi''_L(\omega) \sim {2\pi \ln^{-1} (\xi q_m) \over |\vec q - \vec Q|^2 +\xi^{-2}} \chi''_L(\omega), \label{ml5} \end{equation} where the cutoff $q_m\sim \pi$. The scaling is expected to be valid at least for $|\vec q - \vec Q| < \xi^{-1}$.
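The algebra leading to the universal form above follows directly from equations (\ref{ms4}) and (\ref{ml1}), since $(1-e^{-\beta\omega})/(1+e^{-\beta\omega})=\tanh(\beta\omega/2)$, and is easily verified numerically for an arbitrary (toy) local spectrum:

```python
import numpy as np

beta = 4.0
omega = np.linspace(0.01, 5.0, 500)
# toy local spectrum S_L(omega); any positive function works for this identity
S_L = np.exp(-omega / 2) / (1.0 + np.exp(-beta * omega))
S_bar = (1.0 + np.exp(-beta * omega)) * S_L                  # eq. (ml1)
chi_direct = np.pi * (1.0 - np.exp(-beta * omega)) * S_L     # from eq. (ms4)
chi_universal = np.pi * np.tanh(beta * omega / 2.0) * S_bar  # universal form
```

The two expressions agree identically; the physical content is only that a $T$-independent $\bar S(\omega)$ pushes all the $T$ dependence into the $\tanh(\omega/2T)$ prefactor.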
Since $\xi(T)$ remains weakly $T$ dependent on decreasing $T$, this introduces an additional $T$ variation into the expression (\ref{ml5}). Far from the regime $\vec q \sim \vec Q$, in particular for $q \sim 0$, the response is more free-fermion-like, i.e. $\chi''(\vec q, \omega)$ is $T$ independent, as seen in Fig.~\ref{6.4} for $\vec q=(\pi/2,0)$. \subsection{Nuclear magnetic relaxation} For experimental studies of static and dynamical spin correlations the NMR and the NQR relaxation are among the most valuable tools. The hyperfine coupling of electronic spins to $^{63}$Cu and $^{17}$O nuclear spins $\vec{I}$ within the CuO$_2$ plane has been established by a number of authors (Mila and Rice 1989, Shastry 1989, Millis {\it et al.} 1990, Millis and Monien 1992, Slichter 1994), \begin{equation} H_{e-n}=\; \vec{I} \cdot \frac{1}{\sqrt{N}}\sum_{\vec{q}} {\bf A}(\vec{q})\vec{S}_{\vec{q}}. \label{mn1} \end{equation} The form factors are determined by the relative position of the electronic spins (assumed to be centered on Cu sites) and the nuclear spins, \begin{eqnarray} ^{63}{\bf A}(\vec{q})&=&{\bf A}+2B(\cos q_x a+\cos q_y a), \nonumber \\ ^{17}{\bf A}_\alpha(\vec{q})&=&2C\cos (q_\alpha a/2), \label{mn2} \end{eqnarray} where $\alpha=x,y$ refers to the Cu-O-Cu axis orientation, and ${\bf A}$, $B$, $C$ are the hyperfine couplings. ${\bf A}$ is the direct hyperfine tensor, coupling the electronic and the nuclear spin on the same Cu site, with components $A_\perp$ and $A_\parallel$, corresponding to spins oriented perpendicular and parallel to the CuO$_2$ plane, respectively. The nuclear spin-lattice relaxation is due to the coupling (\ref{mn1}) and is directly related to the low-$\omega$ electronic spin fluctuations. Let us consider only the magnetic field directed perpendicular to the CuO$_2$ plane.
Then the NMR spin-lattice relaxation rate is given by (Slichter 1994) \begin{equation} {1 \over T_1}= \frac{\zeta}{2N}\sum_{\vec{q}}|A_\perp(\vec{q})|^2 S(\vec q,\omega_0), \label{mn3} \end{equation} where $\omega_0$ is the precession frequency of the nuclear spin in the magnetic field, and $\zeta=1$. Since $\omega_0 \ll T$, one can equally express the result through the dynamical susceptibility, $S(\vec q,\omega_0)\approx T\chi''(\vec{q},\omega_0)/\omega_0$. The NQR relaxation rate $1/T_1$ is given by the same expression (\ref{mn3}), but with $\zeta = 4$ (Millis and Monien 1992). The form factors (\ref{mn2}) differ essentially for O and Cu sites. Due to AFM correlations, spin fluctuations are maximal around $\vec{q}=\vec{Q}$, as in the equation (\ref{ml5}). The weight of this region enhances $1/T_1$ for $^{63}$Cu, while suppressing it for $^{17}$O. This is the most important point explaining the observation of very different relaxation rates on different nuclei. In the evaluation of $1/T_1$ within a finite system we again omit the $\vec{q}=0$ term, since $\chi''(\vec q,\omega)/\omega$ is ill-defined for $\omega \rightarrow 0$ due to the conserved total $S_z$. A proper treatment would require a more detailed spin-diffusion contribution at $q\sim 0$, which, however, seems to be less important, at least for the undoped system (Sokol {\it et al.} 1993). To allow a direct comparison with experiments we choose the hyperfine-coupling parameters as proposed in the literature (Millis and Monien 1992). Results for the $^{63}$Cu NQR relaxation rate $1/T_1$ are presented in Fig.~\ref{6.7} (Jakli\v c and Prelov\v sek 1995a). For the undoped case our results agree with Sokol {\it et al.} (1993). It is remarkable that $1/T_1$ appears to be nearly $T$ independent for a broad range of hole concentrations $0.06 <c_h \leq 0.3$. Only for overdoped systems with $c_h > 0.5$ does the behaviour at $T<t$ approach that of a normal LFL with $1/T_1 \propto T$.
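The filtering argument can be made quantitative in a few lines: weighting an AFM-peaked low-$\omega$ spectrum with the form factors (\ref{mn2}) enhances the average seen by Cu and suppresses the one seen by O. The couplings and the Lorentzian weight below are illustrative numbers only, not the fitted hyperfine values of Millis and Monien (1992):

```python
import numpy as np

L = 16
q = 2 * np.pi * np.arange(L) / L
qx, qy = np.meshgrid(q, q, indexing="ij")
A, B, C = 0.2, 0.4, 0.3                       # illustrative couplings, a = 1
F_Cu = (A + 2 * B * (np.cos(qx) + np.cos(qy))) ** 2   # |63A(q)|^2, eq. (mn2)
F_O = (2 * C * np.cos(qx / 2)) ** 2                   # |17A_x(q)|^2, zero at q_x = pi
# AFM-peaked low-omega weight around Q = (pi, pi), as in eq. (ml5), with xi = 1
S_q = 1.0 / ((qx - np.pi) ** 2 + (qy - np.pi) ** 2 + 1.0)
avg_Cu = np.sum(F_Cu * S_q) / np.sum(F_Cu)    # AFM weight sampled by 63Cu
avg_O = np.sum(F_O * S_q) / np.sum(F_O)       # AFM weight sampled by 17O
print(avg_Cu / avg_O)
```

Since the O form factor vanishes exactly at $q_x=\pi$, the AFM peak is projected out of the $^{17}$O average, which is the mechanism behind the very different $T_1$ behaviour on the two nuclei.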
Since the $^{63}$Cu form factor is maximal (but slowly varying with $\vec q$) near the AFM $\vec Q$, we get approximately $1/T_1 \propto S_L(\omega_0 \sim 0)$. The previously established universality of $S_L(\omega)$ naturally explains the nearly $T$ independent $1/T_1$, which is moreover also only weakly dependent on $c_h$ for $c_h<0.15$. \begin{figure} \centering \iffigure \epsfig{file=fig_6.7.ps,height=10cm,angle=-90} \fi \caption{ $^{63}$Cu NQR spin-lattice relaxation rate $1/T_1$ vs. $T$ for different dopings $c_h$. } \label{6.7} \end{figure} Results as shown in Fig.~\ref{6.7} are in agreement, even quantitatively and without any fitting parameters, with the remarkable NQR experiments on doped LSCO (Imai {\it et al.} 1993), which reveal nearly $T$- and $x$-independent $1/T_1$ at $T>300~{\rm K}$ and $x<0.15$. In contrast to experiments, our results show some variation of $1/T_1$ with $c_h$ in this doping range. At optimum doping lower rates $1/T_1$ are in any case expected, consistent with the data for YBCO (Takigawa {\it et al.} 1991, Kitaoka {\it et al.} 1991), where for $T>T_c$ the rate $1/T_1$ is again only weakly $T$ dependent. The essentially different $T$ dependence of $1/T_1$ on Cu and O nuclei has been used as evidence for the importance of strong AFM correlations and for the non-LFL behaviour in doped cuprates. To evaluate the NMR relaxation rate $1/T_1$ for $^{17}{\rm O}$ we can again use the expression (\ref{mn3}) with the proper form factors (\ref{mn2}), projecting out the AFM fluctuations at $\vec q \sim \vec Q$. The omitted $q \sim 0$ contribution introduces in this case a larger uncertainty. Nevertheless, for $c_h =1/16$ and $c_h=2/16$ we recover results very well described by the Korringa behaviour, i.e. $1/T_1 \sim wT $, as seen in Fig.~\ref{6.8} for $T_{fs}<T<J$. In particular for $c_h=2/16$ we get $w \sim 0.3$(sK)$^{-1}$, very close to the actual value $w \sim 0.35$(sK)$^{-1}$ reported for YBCO (Takigawa {\it et al.} 1991, Kitaoka {\it et al.} 1991).
For $c_h \ge 3/16$ deviations from the Korringa law become more pronounced due to the very short $\xi \sim 1$. \begin{figure} \centering \iffigure \epsfig{file=fig_6.8.ps,height=10cm,angle=-90} \fi \caption{ $^{17}$O NMR spin-lattice relaxation rate $1/T_1$ vs. $T$ for different dopings $c_h$. } \label{6.8} \end{figure} The Gaussian component of the spin-spin relaxation rate $1/T_2$ for $^{63}$Cu nuclei can, on the other hand, be related to static spin susceptibilities (Slichter 1994), \begin{equation} {1\over T_2} = \sqrt{0.69 \over 8} \left[ {1\over N} \sum_{\vec q} A_{\parallel}^4(\vec q) \chi^2(\vec q) - \left({1\over N} \sum_{\vec q} A_{\parallel}^2(\vec q) \chi(\vec q)\right)^2 \right]^{1/2}.\label{mn5} \end{equation} The ratio $R=T_1T/T_2$ for the $^{63}$Cu nuclear relaxation, which is approximately $T$ independent in cuprates, has been interpreted as evidence for the quantum critical behaviour of the effective spin system (Sokol and Pines 1993). We find a quite analogous $T$ variation of $R$ (Jakli\v c and Prelov\v sek 1995a) using the calculated $\chi(\vec q,\omega)$ and equations (\ref{mn3}), (\ref{mn5}), together with the standard parameters (Millis and Monien 1992). The origin of $R(T)\sim {\rm const}$ is however considerably different from the quantum critical scenario, since the anomalous $T_2(T)$ dependence can be related to the $T$ dependence of the static $\chi(\vec q)$, which does not seem to be connected in an evident way with the variation of $\xi(T)$. Our results indicate a stronger doping dependence, even at low doping. Quantitatively, the obtained values are in reasonable agreement with the experimental ones, e.g. $R \sim 1700~{\rm K}$ at $T=300~{\rm K}$ for YBCO ($c_h \sim 0.23$), while $R \sim 2400~{\rm K}$ for ${\rm YBa}_2 {\rm Cu}_3{\rm O}_{6.63}$ (Sokol and Pines 1993). \subsection{Neutron scattering} Standard neutron scattering, using thermalized neutrons from a reactor, is restricted to investigations in the energy regime $\omega <40~$meV.
For cuprates this means only the low-energy part of the spin dynamics with $\omega<J$. There have been several measurements probing the local $S_L(\omega)$ by measuring $S(\vec q ,\omega)$ integrated over $\vec q \sim \vec Q$ (see Shirane 1991). To compare with our results one can simplify the relevant expression (\ref{ml4}) by $\bar S(\omega)\sim \bar S_0$. Such a form has indeed been used to describe experiments (Keimer {\it et al.} 1992, Sternlieb {\it et al.} 1993). In this connection we should take into account that we are not able to establish in our model calculations (mainly because $T_{fs}$ is too high) the existence of the pseudogap $\omega_g \sim 0.1~J$, observed in cuprates at low $T\gtrsim T_c$ (Rossat-Mignod {\it et al.} 1991, Sternlieb {\it et al.} 1993) and invoked in the detailed form of $S_L(\omega)$ used in describing neutron scattering experiments. The introduction of spallation neutron sources has crucially expanded the accessible energy range of the spin dynamics to $\omega < 1~$eV. This opens the possibility of measuring the whole relevant range of the spin dynamics in cuprates, results being available for undoped La$_2$CuO$_4$ and SC LSCO with $x=0.14$ (Hayden {\it et al.} 1996). Data for the undoped material are well described by the standard magnon excitations in an ordered AFM, so the results for the doped material are more challenging. Since the authors present the calibrated local $\chi_L''(\omega)$ up to $\omega \sim 0.25$~eV, we can compare the data quantitatively with our results, e.g. for $c_h=3/16$, as shown in Fig.~\ref{6.9}. For the latter case measurements were performed at $T=17~$K$\ll J$, so we present in Fig.~\ref{6.9} the $T \to 0$ limit of our result, following equation (\ref{ml4}), $\chi_L''(\omega>T) \sim \bar S(\omega)/\pi$. We observe that the agreement is quite satisfactory (note that both scales are in eV, assuming again $t=0.4$~eV) for $\omega>40~$meV.
There is however an additional peak at $\omega \sim 20~$meV, which does not appear in our analysis. This feature can be interpreted as a low-$T$ phenomenon related to the onset of the longer-range incommensurate spin order found in the same material (Hayden {\it et al.} 1996), clearly beyond the reach of our method. \begin{figure} \centering \iffigure \epsfig{file=fig_6.9.ps,height=10cm,angle=-90} \fi \caption{ Local susceptibility $\chi_L^{\prime\prime}(\omega)$ as calculated for $c_h=3/16$ and $T\to 0$, compared with the neutron scattering result for LSCO with $x=0.14$ (Hayden {\it et al.} 1996). } \label{6.9} \end{figure} \subsection{Orbital diamagnetism} In comparison with the spin response discussed in previous subsections, the diamagnetic contribution $\chi_d$ to the d.c. susceptibility has been much less investigated, both experimentally (Walstedt {\it et al.} 1992, Miljak {\it et al.} 1993) and theoretically (Rojo {\it et al.} 1993). A diamagnetic response emerges from the orbital motion of mobile carriers. For independent electrons it corresponds to the Landau diamagnetism, which is essentially $T$ independent. For strongly correlated electrons such a behaviour is far from evident. Moreover, Rojo {\it et al.} (1993) have related the off-diagonal Hall conductivity in an external homogeneous magnetic field $B$ (perpendicular to the plane) to the orbital diamagnetic susceptibility, \begin{equation} \sigma_{xy} = B{\partial\chi_d \over \partial c_h} {\partial c_h\over \partial \mu}. \label{mo1} \end{equation} Since the Hall effect is known to be anomalous in cuprates, in particular its $T$ dependence (Ong 1990), one could speculate on similar anomalies in $\chi_d(T)$; so far, however, both experimental and theoretical answers are lacking.
A homogeneous perpendicular field $B$ can be introduced into a tight-binding model via a Peierls construction analogous to equation (\ref{ec5}), $t_{ij} \to t_{ij} {\rm exp}(i \theta_{ij})$, where in the Landau gauge we can write \begin{equation} \theta_{ij}=-e_0\vec A(\vec R_i) \cdot \vec{R}_{ij},\qquad \vec{A}=B(0,x,0). \label{mo2} \end{equation} Our aim is to evaluate the d.c. orbital susceptibility $\chi_d$ via the free energy density ${\cal F}$, \begin{equation} \chi_d = -\left. {\partial^2 {\cal F} \over \partial B^2} \right|_{B=0}. \label{mo3} \end{equation} It is nontrivial to incorporate the phases (\ref{mo2}) arising from $B\neq 0$ into a small (tilted) lattice while remaining compatible with the p.b.c. (Veberi\v c {\it et al.} 1998). This can be achieved only for quantized fields $B=m B_0$, where the smallest field $B_0=\phi_0/ N$ corresponds to a unit flux $\phi_0$ through the 2D system. Discrete $B$ makes the calculation of $\chi_d$ via equation (\ref{mo3}) less reliable. It is also a general observation that properties involving finite $B$ (also the Hall effect) are much more sensitive to finite-size effects, while at the same time translational symmetry is lost due to the phases (\ref{mo2}) and hence computational requirements are enhanced. On the other hand, we are evaluating only a thermodynamic quantity, the free energy density ${\cal F}$, allowing for the simplifications discussed in Sec.~4. We consider here only the case of a single hole, $N_h=1$, doped into a magnetic insulator. Nevertheless we expect that the results remain relevant for low $c_h>0$. Namely, assuming independent charge carriers (spin polarons), $\chi_d$ should scale linearly with the doping, i.e. $\chi_d \propto c_h$, at least for $c_h \ll 1$. We present in Fig.~\ref{6.10} results for $\chi_d(T)/\chi^*$, as obtained by the analysis of a single hole in an AFM with $J/t=0.4$ on a system with $N=20$ sites.
Here, $\chi^*=e^2ta_0^4/\hbar^2$ is a characteristic diamagnetic susceptibility, which can e.g. be compared with the Pauli susceptibility of tight-binding free fermions (with constant average density of states) $\chi_P$, $\chi^* \sim 4\chi_P (m_0/m_t)^2$ (Veberi\v c {\it et al.} 1998). We calculate $\chi_d(T)$ using the equation (\ref{mo3}) and the difference between the lowest fields, $B=0$ and $B=B_0$. For comparison we present also the $J=0$ case, where finite-size results can be checked against the HTE (Veberi\v c {\it et al.} 1998). \begin{figure} \centering \iffigure \epsfig{file=fig_6.10.ps,height=9cm,angle=-90} \fi \caption{ Diamagnetic susceptibility $\chi_d/\chi^*$ vs. $T$ for a single hole in a spin background with $J/t=0.4$ (full line), and $J=0$ (dashed line).} \label{6.10} \end{figure} The $J=0$ case seems easier to study and to understand. The HTE and small systems show a gradual transition from the high-$T$ regime of incoherent hopping to the low-$T$ Nagaoka FM state, accompanied by a monotonic increase of $|\chi_d|$. At $T\to 0$ $\chi_d$ is expected to diverge, since within the g.s. the dependence on $B$ is nonanalytic, i.e. $\delta E \propto |B|$, following from the cyclotron motion of a QP (FM polaron) in a finite $B>0$. Still, the asymptotic behaviour at low $T$ is not simple to establish, since the low-lying states (above the Nagaoka state) are not easy to describe. The behaviour of the AFM polaron with $J>0$ is more involved. At $T\gg J$ the exchange scale $J$ is not important and the results qualitatively follow those of $J=0$. Quite remarkable, however, is the vanishing of the diamagnetic $\chi_d$ (or even a change of its sign), which appears on lowering $T<t$. This seems to follow from the fact that $J>0$ diminishes and even destroys the emerging coherence of a QP. Only at lower $T<J$ is the coherence reestablished, and the standard dynamical picture of a coherent AFM polaron dominates the behaviour in the lowest fields.
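The bookkeeping behind the quantized fields $B=mB_0$ compatible with the p.b.c., cf. equation (\ref{mo2}), can be sketched on a small untilted $L_x\times L_y$ torus. The Landau gauge plus a seam phase on the wrapping $x$ bonds used below is our own minimal construction for illustration (the actual calculations employ tilted lattices): every elementary plaquette then carries the same flux $2\pi m/N$.

```python
import numpy as np

# Sketch of the Peierls construction on an Lx x Ly torus: with the
# quantized field B = m*B0 (flux f = 2*pi*m/N per plaquette, N = Lx*Ly)
# the hopping phases can be chosen compatible with the p.b.c.
# Gauge choice: Landau gauge A = B(0,x,0) plus a seam correction on the
# x bonds wrapping around the boundary (illustrative construction).

Lx, Ly, m = 4, 5, 1
N = Lx * Ly
f = 2.0 * np.pi * m / N          # flux per elementary plaquette

def theta_x(x, y):
    """Phase on the bond (x,y) -> (x+1,y); nonzero only across the seam."""
    return -f * Lx * y if x == Lx - 1 else 0.0

def theta_y(x, y):
    """Phase on the bond (x,y) -> (x,y+1): Landau gauge, linear in x."""
    return f * x

def plaquette_flux(x, y):
    """Gauge-invariant flux through the plaquette with corner (x,y)."""
    return (theta_x(x, y) + theta_y((x + 1) % Lx, y)
            - theta_x(x, (y + 1) % Ly) - theta_y(x, y))

fluxes = np.array([plaquette_flux(x, y) for x in range(Lx) for y in range(Ly)])
# every plaquette carries the same flux f modulo 2*pi
assert np.allclose(np.mod(fluxes - f + np.pi, 2 * np.pi) - np.pi, 0.0,
                   atol=1e-12)
```

For noninteger $m$ the seam plaquettes pick up a mismatched flux, which is the finite-lattice origin of the quantization $B=mB_0$.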
Reliable results are however quite difficult to obtain, since small-system results (note that we have to follow the variation with $B$) are quite sensitive to the system shape and boundary conditions due to a very anisotropic and degenerate QP dispersion. At low $c_h$ the relevant value of the diamagnetic susceptibility $c_h \chi_d$ is small compared to the spin contribution (Veberi\v c {\it et al.} 1998), hence it is not evident that it is directly measurable (Walstedt {\it et al.} 1992, Miljak {\it et al.} 1993). Nevertheless, the variation $\chi_d(T)$ is in any case very interesting in connection with the challenging problem of the Hall effect, related to $\chi_d$ via equation (\ref{mo1}), and the anomalous $R_H(T)$. Both $\chi_d$ and the Hall effect emerge from a coupling to orbital currents, which is the essence of the relation (\ref{mo1}). We note that at $T \gg J$ the HTE for the Hall constant $R_H(T)$ is analogous to that of $\chi_d(T)$ discussed here. Crossing the scale $T\sim J$ remains a challenge; it seems that at such $T$ the hole-like $R_H>0$ is even reduced from its high-$T$ value (Shastry {\it et al.} 1993). Nevertheless, Hall effect measurements (Ong 1990) indicate that $R_H>0$ recovers for $T<T^*$, varying strongly with $T$, and finally seems to approach the well known quasiclassical result for $T\to 0$ (Prelov\v sek 1997b). \setcounter{equation}{0}\setcounter{figure}{0} \section{Spectral Properties} One of the most desired quantities, giving valuable information on electronic properties in interacting systems, is the single-electron Green's function $G(\vec k,\omega)$ and the corresponding spectral function $A(\vec k,\omega)$. In theoretical treatments $G(\vec k,\omega)$ is usually the basis for understanding two-particle properties, such as dynamical electric and magnetic response functions etc.
In strongly correlated systems it appears however that calculations, as well as measurements, of spectral functions are more delicate than those of most response functions, hence we also treat them towards the end. Spectral functions $A({\vec{k}},\omega)$ are directly accessible via ARPES experiments. There has been in recent years an impressive advance in the resolution of ARPES, as well as in the novel information on cuprates, mostly obtained on the convenient layered BSCCO materials (see the review by Shen and Dessau 1995, and references therein). In the regime of optimum-doped and overdoped cuprates ARPES experiments reveal for a wide class of materials a well defined large FS, consistent with the Luttinger theorem, and a QP dispersion close to a tight-binding band (Shen and Dessau 1995, Ding {\it et al.} 1996, Marshall {\it et al.} 1996). This seems to imply the validity of the concept of a metal with an electron-like FS, although such a simple picture is in apparent contradiction with the anomalous magnetic and transport properties. The LFL interpretation is also spoiled by the overdamped character of the QP peaks. Although a large background makes fits of particular lineshapes nonunique, the QP inverse lifetime is found to be of the order of the QP energy, supporting the possibility of the MFL scenario. On the other hand, in underdoped materials (Marshall {\it et al.} 1996) well defined QPs crossing the FS gradually lose their character, and the spectral functions approach a qualitatively different regime of an undoped AFM (Wells {\it et al.} 1995), where the QP dispersion is that of a spin polaron in an AFM background. Excitement has also been generated by the leading-edge shift (Marshall {\it et al.} 1996), observed by ARPES in underdoped cuprates in the normal state. It indicates that a pseudogap consistent with the $d$-wave SC symmetry persists well above $T_c$.
On the other hand, recent angle-integrated PES measurements (Ino {\it et al.} 1997b) indicate another, higher-energy pseudogap scale, in a closer correspondence with $T^*(c_h)$ as deduced from $\chi_0(T)$, $\rho(T)$ and $R_H(T)$ (Batlogg {\it et al.} 1997). It has been unclear whether the above features of spectral functions could be reproduced within generic models, such as the Hubbard and the $t$-$J$ model, in particular in the most challenging regime of intermediate doping. $A(\vec k,\omega)$ in 2D models has so far been studied mainly via numerical techniques (Dagotto 1994), in particular using the ED (Stephan and Horsch 1991, Eder {\it et al.} 1994, Moreo {\it et al.} 1995) and the QMC (Bulut {\it et al.} 1994, Preuss {\it et al.} 1995, 1997). These studies have established a reasonable consistency of the model QP dispersions with the experimental ones, as well as the possibility of a large FS, but have not been able to investigate more closely the character of the QP, as manifested e.g. in the corresponding self energies $\Sigma(\vec k,\omega)$, which are at the core of the anomalous low-energy properties. \subsection{Green's function} When applying the usual definition of the retarded Green's function (Mahan 1990) to models of strongly correlated electrons, one should keep in mind that the $t$-$J$ model (\ref{cm1}) acts within a restricted fermionic basis. To avoid ambiguity it is convenient to build the restrictions directly into the definition of the retarded Green's function, using the projected operators $\tilde{c}^\dagger _{{\vec k}s}, \tilde{c}_{{\vec k}s}$, being Fourier transforms of $\tilde{c}^\dagger_{is}, \tilde{c}_{is}$, respectively, \begin{equation} G^{R}(\vec{k},\omega)=-i \int_0^\infty dt ~e^{i\omega^+ t} \langle \{\tilde{c}_{\vec{k}s}(t),\tilde{c}^\dagger_{\vec{k}s}(0)\}_+ \rangle. \label{sg1} \end{equation} Note that within the subspace of allowed states the definition is identical to the usual one.
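The effect of the projection can be checked explicitly on a single site (a toy illustration of ours, not part of the FTLM machinery): in the basis $\{|0\rangle,|\!\uparrow\rangle,|\!\downarrow\rangle\}$ with double occupancy excluded, the projected operators obey $\{\tilde c^\dagger_s,\tilde c_s\}_+ = 1-n_{-s}$ rather than the canonical identity.

```python
import numpy as np

# Single-site check of the modified anticommutator for projected
# fermion operators.  Basis ordering: (|0>, |up>, |down>); the doubly
# occupied state is excluded from the Hilbert space.

c_up = np.zeros((3, 3))
c_up[0, 1] = 1.0                  # c~_up |up> = |0>
cdag_up = c_up.T.copy()           # c~+_up |0> = |up>, kills |up>, |down>
n_down = np.diag([0.0, 0.0, 1.0]) # number operator for the down spin

anticomm = cdag_up @ c_up + c_up @ cdag_up
assert np.allclose(anticomm, np.eye(3) - n_down)
```

Averaging $1-n_{-s}$ over a paramagnetic state with $\langle n_{is}\rangle=(1-c_h)/2$ then yields the reduced weight $(1+c_h)/2$ per ${\vec k}$ point.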
$A(\vec{k},\omega)$ can be represented as a sum of the electron spectral function $A^{+}(\vec{k},\omega)$ and the hole spectral function $A^{-}(\vec{k},\omega)$, \begin{equation} A(\vec{k},\omega)=-\frac{1}{\pi}{\rm Im}G^R(\vec{k},\omega) =A^{+}(\vec{k},\omega)+A^{-}(\vec{k},-\omega), \label{sg2} \end{equation} which are expressed in terms of eigenstates as \begin{eqnarray} A^{+}(\vec{k},\omega)&=&\Omega^{-1} \sum_{n,m}|\langle \Psi_m|\tilde c_{\vec{k}s}^\dagger| \Psi_n\rangle|^2 e^{-\beta (E_n-\mu N^n_{e})}\delta(\omega+\mu-E_m+E_n), \nonumber \\ A^{-}(\vec{k},-\omega)&=&\Omega^{-1} \sum_{n,m}|\langle \Psi_m|\tilde c_{\vec{k}s}| \Psi_n\rangle|^2 e^{-\beta (E_n-\mu N^n_{e})}\delta(-\omega-\mu-E_m+E_n). \label{sg3} \end{eqnarray} $A^-$ represents the dynamical response when one electron is taken out of (one hole added to) the system. It corresponds (within the sudden approximation) to PES experiments and is proportional to the corresponding cross sections in ARPES. On the other hand, $A^+$ describes the dynamics on adding an electron and is related to IPES experiments. In equilibrium, electron and hole spectra are related via the Fermi function $f(\omega)=1/(\exp(\beta\omega)+1)$, e.g. $A^{-}(\vec{k},-\omega)=f(\omega)A(\vec{k},\omega)$. The omission of doubly occupied states has important implications; in particular we note the change in the anticommutation relations, \begin{equation} \{\tilde{c}^\dagger_{\vec{k}s}, \tilde{c}_{\vec{k}s}\}_+= {1\over N}\sum_i \{\tilde{c}^\dagger_{is}, \tilde{c}_{is}\}_+= {1\over N} \sum_i (1- n_{i-s}). \label{sg4} \end{equation} In the paramagnetic phase, where $\langle n_{is} \rangle = (1-c_h)/2$, the equation (\ref{sg4}) leads to a modified sum rule for the spectral function (usually normalized to unity), i.e. \begin{equation} \alpha= \int_{-\infty}^\infty d\omega ~A(\vec{k},\omega)= \frac{1}{2}(1+c_h)<1.
\label{sg5} \end{equation} An important consequence of the relation (\ref{sg5}) is an upper bound for the momentum distribution function, \begin{equation} \bar n_{\vec{k}}=\langle c^\dagger_{\vec{k}s}c_{\vec{k}s}\rangle =\int_{-\infty}^\infty A^{-}(\vec{k},\omega)d\omega < \alpha. \label{sg6} \end{equation} The anomalous sum rule (\ref{sg5}) leads to some ambiguity in the definition of the self energy $\Sigma(\vec k,\omega)$. We keep the usual one, \begin{equation} G^R({\vec{k}},\omega)=1 /(\omega-\Sigma({\vec{k}},\omega)), \label{sg7} \end{equation} in order to retain the standard definition of the QP parameters. An alternative would be to replace the numerator with $\alpha$ (Prelov\v sek 1997a). The equation (\ref{sg7}) implies that the self energy does not vanish for $|\omega| \to \infty$, but rather shows the asymptotic form $\Sigma({\vec{k}},\omega\to\infty)\to(1-1/\alpha)\omega$. Note also that the $t$-$J$ model does not allow for free propagation even at $J=0$, therefore we do not include any free term $\epsilon_{\vec{k}}$ in the definition (\ref{sg7}). It is straightforward to calculate $A(\vec k, \omega)$ using the FTLM (Jakli\v c and Prelov\v sek 1997), again following the equation (\ref{fi4}). Since the operators $\tilde c^\dagger_{\vec k s},\tilde c_{\vec k s}$ act between subspaces with different $N_h$, i.e. $N_h\to N_h \mp 1$, we in fact calculate separately $A^+$ and $A^-$ according to the definitions (\ref{sg3}). In calculations at low $T$ and fixed $c_h=N_h/N$ we also replace the grand-canonical average with the canonical one in the subspace of states with $N_h$ holes. For a proper interpretation of the results it is important to locate correctly the chemical potential $\mu(T,c_h)$, discussed in Sec.~4.1.
We have a nontrivial check of $\mu$ via the one-particle DOS \begin{equation} {\cal N}(\varepsilon)={2 \over N} \sum_{\vec{k}} A({\vec{k}},\varepsilon-\mu), \label{sg8} \end{equation} which relates $\mu$ to $c_h$, \begin{equation} \int_{-\infty}^\infty {\cal N}(\omega+\mu) (e^{\beta\omega}+1)^{-1}d\omega=1-c_h. \label{sg9} \end{equation} We find a very good agreement between $\mu$ calculated from ${\cal N}(\varepsilon)$ and the one determined from thermodynamics in Sec.~4.1. The advantage of the FTLM over the g.s. ED is that it gives a smooth $G^R(\vec k,\omega)$ even for quite low $T<J$, provided that $T>T_{fs}$. This allows for a meaningful calculation of $\Sigma(\vec k,\omega)$, which gives new insight into the properties of the QP. We are thus able to evaluate also the QP parameters defined via $\Sigma(\vec k,\omega)$ (Mahan 1990), in particular the QP energy $E_{\vec k}$, the weight $Z_{\vec k}$ consistent with the equation (\ref{cp4}), and the damping $\Gamma_{\vec k}$, \begin{eqnarray} E_{\vec{k}}&=&\Sigma'({\vec{k}},E_{\vec{k}}), \nonumber \\ Z_{\vec{k}}^{-1}&=&1- \partial \Sigma'({\vec{k}},\omega)/ \partial \omega|_{\omega = E_{\vec k}},\label{sg10} \\ \Gamma_{\vec{k}}&=&Z_{\vec{k}}|\Sigma''({\vec{k}},E_{\vec{k}})|. \nonumber \end{eqnarray} \subsection{Single hole in the antiferromagnet} We begin by considering the spectral properties of a single hole injected into an AFM. Since in the undoped model particle-number fluctuations are suppressed, the chemical potential is not a well defined quantity in this case, and we use here the unreduced energy $\varepsilon$ instead of $\omega=\varepsilon-\mu$. On the left side of Fig.~\ref{7.1} the spectral function of a hole in an AFM is presented. Although $T=J/2$ is relatively high, implying only short-range AFM correlations, a coherent QP peak is still clearly observed at low energies for most available $\vec{k}$. For $\vec{k}=(\pi/2,\pi/2)$ and $\vec{k}=(2\pi/3,0)$ the peaks coincide with the g.s.
minima for systems with $N=16$ and $N=18$ sites, respectively. This indicates that even at $T>0$ the major contribution to the coherent spectral weight comes from transitions between the g.s. configurations with $N_h=0$ and $N_h=1$. The coherent QP peak shows a dispersion on an energy scale comparable to $J$, consistent with the self-consistent Born approximation (Schmitt-Rink {\it et al.} 1988, Kane {\it et al.} 1989), which yields a bandwidth $W\sim 0.6~t$ for $J/t=0.3$ (Martinez and Horsch 1991). Since no long-range AFM order is expected in small systems, the propagation of the hole appears to be determined by short-range AFM correlations. In addition to the QP band at $\varepsilon \lesssim \varepsilon_0 \sim 2t$, we also observe high-energy features at $\varepsilon \ll \varepsilon_0$, which are only weakly dependent on $\vec{k}$ and can be attributed to the incoherent hole propagation. High-energy peaks have been observed also in g.s. ED studies, but the structure was claimed to disappear at large $N$ (Poilblanc {\it et al.} 1993). Our study shows a nontrivial structure consistent for all considered $N=16-20$, so we would rather attribute the peaks to resonance (excited) states of the AFM spin polaron. \begin{figure} \centering \iffigure \epsfig{file=fig_7.1.ps,height=16cm} \fi \caption{ Spectral functions for the undoped AFM (left), and for a system with finite doping $c_h=1/N$ (right), both obtained on lattices with $N=16,18$ sites. } \label{7.1} \end{figure} In Fig.~\ref{7.2} we present the $T$ dependence of the spectral function at $\vec{k}^*=(\pi/2,\pi/2)$, which corresponds to the g.s. momentum of a hole in an AFM. We realize from Fig.~\ref{7.2} that there is a significant spectral change going from $T\sim 0$ to $T>J$. Namely, in the high-$T$ regime the QP peak becomes progressively broader and merges with the incoherent background, which at the same time loses its detailed structure (resonance peaks).
This development is plausible, since for $T>J$ we are dealing essentially with hole propagation in a random spin background, well accounted for within the RPA (Brinkman and Rice 1970). \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_7.2.ps,height=10cm,angle=-90} \fi \caption{ Spectral function $A(\vec k^*,\varepsilon)$ for the undoped AFM at several $T$.} \label{7.2} \end{figure} At $T<J$ the QP energy $E_{\vec k}$ and the corresponding weight $Z_{\vec k}$, as determined by equations (\ref{sg10}), show only a weak $T$ dependence, and we get $\tilde Z= Z_{\vec k^*}(T)\sim 0.16$. We note that this value is obtained for the spectral function normalized according to the relation (\ref{sg5}) with $\alpha = 1/2$. In order to make a comparison with the usual definition of the QP weight of an AFM polaron (Kane {\it et al.} 1989, Martinez and Horsch 1991) we should rather take ${\cal Z}=\tilde Z/\alpha$, and our value ${\cal Z} \sim 0.32$ is consistent with ${\cal Z} \sim 0.284$, obtained within the self-consistent Born approximation for $J/t=0.3$. \subsection{Spectral functions at finite doping} In Figs.~\ref{7.1},~\ref{7.3} we present the gradual development of the spectral functions towards the optimum doping. For $c_h\sim 0.06$ and $c_h\sim 0.12$ results obtained on systems with $N=16,18$ sites are combined for $N_h=1$ and $N_h=2$ holes, respectively. In contrast to the undoped case, the spectra are broadened with Lorentzians of variable width according to $\delta=\delta_0+(\delta_\infty-\delta_0)\tanh^2(\omega/\Delta)$, with $\delta_\infty =0.2t$, $\delta_0=0.04t$, and $\Delta=1.0t$. In this way the sharper (well resolved) $\omega \sim 0$ features remain unaffected, while some of the finite-size structures at higher $|\omega|>t$ are smoothed out. In any case, $\delta$ is always taken smaller than the energy widths of the main spectral features.
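The broadening prescription can be sketched as follows (a minimal illustration; evaluating the width at the peak position is our reading of the prescription, and the stick spectrum is arbitrary; $t=1$):

```python
import numpy as np

# Variable-width Lorentzian broadening: each delta peak at omega_n is
# replaced by a Lorentzian whose width interpolates between delta_0 near
# omega = 0 and delta_inf at high |omega|.

d0, dinf, Delta = 0.04, 0.2, 1.0          # in units of t

def width(om):
    return d0 + (dinf - d0) * np.tanh(om / Delta) ** 2

def broaden(omega, poles, weights):
    """Smooth spectrum A(omega) from a stick spectrum {poles, weights}."""
    A = np.zeros_like(omega)
    for om_n, w_n in zip(poles, weights):
        d = width(om_n)                    # width set by the pole position
        A += w_n * d / np.pi / ((omega - om_n) ** 2 + d ** 2)
    return A

om = np.linspace(-6.0, 6.0, 2001)
A = broaden(om, poles=[-2.5, -0.3, 0.1, 1.5], weights=[0.2, 0.1, 0.1, 0.1])
# low-energy peaks stay sharp (width ~ delta_0), high-energy ones smeared
```

Since each Lorentzian is normalized, the total spectral weight is preserved up to the truncated tails.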
\begin{figure} \centering \iffigure \epsfig{file=fig_7.3.ps,height=14cm} \fi \caption{ $A(\vec k,\omega)$ for $N_h=2$ holes on $N=16,18$ sites, and for $N_h=3$ holes on $N=16$ sites. } \label{7.3} \end{figure} At finite doping we observe in $A({\vec{k}},\omega)$, at all available $\vec{k}$, a coexistence of sharper features, associated with coherent QP peaks, with a broad incoherent background, as already established in ED studies (Stephan and Horsch 1991, Moreo {\it et al.} 1995). The coherent peaks disperse through $\omega=0$ as ${\vec{k}}$ crosses the FS. Within the given resolution in ${\vec{k}}$ space the FS appears to be large for all $c_h \gtrsim 0.12$, consistent with the Luttinger theorem (Luttinger 1960). An analogous dispersion is observed for the underdoped case $c_h\sim 0.06$, although the latter requires a more careful interpretation, as given later on. The total QP dispersion $W$ broadens as $c_h$ increases, qualitatively consistent with the slave-boson result $W \propto c_h t + \chi J$ (Baskaran {\it et al.} 1987, Wang {\it et al.} 1991). \subsubsection{Intermediate doping} We first discuss in more detail the regime of intermediate doping (Jakli\v c and Prelov\v sek 1997), in our lattices $c_h \sim 0.12$ and $c_h=3/16\sim 0.19$. In Fig.~\ref{7.4} we show $\Sigma({\vec{k}},\omega)$ evaluated for $c_h=3/16$ at $T=0.1~t\sim T_{fs}$. We first notice an asymmetry between the PES ($\omega<0$) and the IPES ($\omega>0)$ spectra at all ${\vec{k}}$. $\Sigma''({\vec{k}},\omega)$ is small for $\omega>0$, as compared to $\omega<0$. For ${\vec{k}}$ outside the FS this implies a modest QP damping, consistent with the sharp IPES peaks seen in $A({\vec{k}},\omega)$ in Fig.~\ref{7.3}, which contain the major part of the spectral weight. $\Sigma'({\vec{k}},\omega)$ shows an analogous asymmetry, in the region $\omega>0$ resembling that of a moderately renormalized QP. Due to the definition (\ref{sg7}) the slope of $\Sigma'$ is not zero even at $|\omega|\gg t$.
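The nonvanishing asymptotic slope is a direct consequence of the reduced sum rule (\ref{sg5}) combined with the definition (\ref{sg7}): at large $|\omega|$, $G^R\approx\alpha/\omega$, so that $\Sigma'\approx(1-1/\alpha)\omega$. This can be checked on an arbitrary pole model (our own toy spectrum with total weight $\alpha=(1+c_h)/2$ for $c_h=3/16$; pole positions and weights are illustrative):

```python
import numpy as np

# Numerical check of the asymptotic slope of the self energy implied by
# G = 1/(omega - Sigma) together with int A = alpha < 1.

ch = 3.0 / 16.0
alpha = 0.5 * (1.0 + ch)

# toy spectral poles carrying total weight alpha
poles = np.array([-2.0, -0.5, 0.4, 1.3])
weights = alpha * np.array([0.4, 0.2, 0.2, 0.2])

def sigma(om, eta=1e-6):
    """Self energy from the pole model via Sigma = omega - 1/G."""
    G = np.sum(weights / (om - poles + 1j * eta))
    return om - 1.0 / G

slope = (sigma(200.0).real - sigma(100.0).real) / 100.0
print(slope, 1.0 - 1.0 / alpha)   # both close to 1 - 1/alpha (about -0.68)
```

With $\alpha=1$ the slope would vanish; the residual slope is thus a direct fingerprint of the projected basis.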
\begin{figure} \centering \iffigure \epsfig{file=fig_7.4.ps,height=13cm,angle=-90} \fi \caption{ Self energy $\Sigma(\vec k,\omega)$ for $c_h=3/16$ at various ${\vec{k}}$. } \label{7.4} \end{figure} The behaviour on the PES ($\omega<0$) side is very different. For all ${\vec{k}}$, $-\Sigma''(\omega)$ is very large at $\omega <-t$, leading to overdamped QP structures. We should however distinguish two cases. For $\vec k$ well outside the FS, a large $|\Sigma''(\omega<0)|>t$ does not invalidate a well defined QP at $\omega>0$, but rather induces a weak reflection (shadow) of the IPES peak at $\omega <0$, as well seen in Fig.~\ref{7.3} for ${\vec{k}} =(\pi,\pi)$. On the other hand, for ${\vec{k}}$ inside or near the FS the variation with $\omega$ is more regular, and can be directly related to the QP damping. A particularly remarkable feature in Fig.~\ref{7.4} is the linear $\omega$ dependence of $\Sigma''(\omega\lesssim 0)$ for ${\vec{k}}= (\pi/2,0)$ and ${\vec{k}}=(\pi/2,\pi/2)$. Meanwhile ${\vec k}=(0,0)$, being further away from the FS, seems to follow a different, more LFL-type behaviour. Similar general conclusions follow also from results obtained for the lower doping $c_h=2/18$. To address the latter point in more detail, we show in Fig.~\ref{7.5} the $T$ variation of $\Sigma''(\vec k,\omega)$ for both dopings at selected ${\vec{k}}$ inside the FS. For $c_h=3/16$ the linearity of $\Sigma''(\omega)$ is seen in a broad range $-2t\lesssim \omega \lesssim 0$ at the lowest $T$ shown. Moreover, for this optimum doping the $T$ dependence is close to a linear one, taking into account a small residual (finite-size) damping $\eta_0>0$ at $\omega=0$. The data can be well described by \begin{equation} -\Sigma''(k\lesssim k_F,\omega)=\eta_0+\gamma(|\omega|+\xi T), \qquad \gamma\sim 1.4, \quad \xi\sim 3.5.
\label{ss1} \end{equation} Such a dependence is consistent with one of the proposed forms (\ref{cp3a}) within the MFL scenario, as well as with the conductivity relaxation $1/\tau(\omega)$ in the equation (\ref{eo6}). In contrast, the $T$ dependence for $c_h=2/18$ seems somewhat different, and we find $-\Sigma''\propto \omega$ only in the interval $-t\lesssim\omega\lesssim -T$. This would indicate consistency with the original MFL form (\ref{cp3}) (Varma {\it et al.} 1989); however, we should be aware that in the underdoped regime finite-size effects are larger at a given $T$. \begin{figure} \centering \iffigure \epsfig{file=fig_7.5.ps,height=13cm,angle=-90} \fi \caption{ $\Sigma''({\vec{k}},\omega)$ for $c_h=2/18$ and $c_h=3/16$ at different $T$ and selected ${\vec{k}}$ inside the FS. } \label{7.5} \end{figure} Here we should comment on the manifestation of the FS in small fermionic systems. At $T,\omega\sim 0$, in the evaluation of the expression (\ref{sg3}) we are dealing with transitions between the ground states of systems with $N_h$ and $N_h'=N_h \pm 1$ holes, respectively. Since these g.s. have definite momenta ${\vec{k}}_0$, they induce sharp QP peaks for particular ${\vec{k}}={\vec{k}}'_0- {\vec k}_0$, defining in this way for a small system the FS, apparently satisfying the Luttinger theorem. At the same time the corresponding QP damping vanishes, i.e. $\Sigma''({\vec{k}},\omega \sim 0)\sim 0$. From $\Sigma({\vec{k}},\omega)$ we can calculate the QP parameters, as defined in equations (\ref{sg10}). Results for $c_h \sim 0.12$ and $c_h \sim 0.19$ are presented by Jakli\v c and Prelov\v sek (1997). We note that the parameters are of limited meaning for $\vec k$ well inside the FS due to the large damping $\Gamma_{\vec{k}}$. Otherwise, $E_{\vec{k}}$ shows the enhancement of the dispersion with $c_h$, while $\Gamma_{\vec{k}}<E_{\vec{k}}$ for $|{\vec{k}}|>k_F$.
To establish the relation with the FL theory one has to evaluate QP parameters at the FS ${\vec k} \sim {\vec{k}}_F$. Of particular importance is the renormalization factor $\tilde Z= Z_{{\vec{k}}_F}(T=0)$. We find that even at the lowest $T\sim T_{fs}$, $Z_{\vec{k}_F}(T)$ is still decreasing as $T$ is lowered; e.g., a variation of roughly 20\% is found within the interval $0.1<T/t<0.3$. This is not inconsistent with the MFL form $\tilde Z^{-1}\sim\ln(\omega_c/T)$, although such a conclusion should be taken with care due to the narrow $T$ interval. Regarding the size of $\tilde Z$ at low but finite $T>0$, we note that the value of the momentum distribution function is very close to the maximum (\ref{sg6}) for all $k<k_F$, i.e. $\bar n_{\vec{k}}\sim (1+c_h)/2$ (Stephan and Horsch 1991). Taking the FS volume according to the Luttinger theorem $V_{FS}/V_{BZ}=(1-c_h)/2$ and assuming that $\bar n_{\vec{k}}$ falls monotonically with $k$, this implies an inequality for the discontinuity $\tilde Z$, \begin{equation} \tilde Z= \delta \bar n_{\vec{k}_F} < {2c_h \over 1+c_h}. \label{ss2} \end{equation} One should keep in mind that a discontinuity is meaningful only at $T \sim 0$, while at $T>0$ it indicates merely a gradual step. Still, we find a consistent result $\tilde Z= 0.28$ for $c_h=3/16$, while for $c_h=2/18$ the value $\tilde Z= 0.35$ is somewhat larger, possibly due to the higher $T$ used in the calculations. An analogous argument can be used to explain the electron-hole asymmetry of $A(\vec{k},\omega)$. Holes added to the system at $k<k_F,\omega<0$ move in an extremely correlated system, strongly coupled to the spin dynamics (Prelov\v sek 1997a), the latter also exhibiting the anomalous MFL-type behaviour, as given by the expression (\ref{ml4}). On the other hand, states for $k>k_F$ are not fully populated, allowing for a moderately damped motion of added electrons with $\omega>0$. Let us comment here on the relevance of our results to ARPES spectra near the optimum doping.
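Before turning to ARPES, the inequality (\ref{ss2}) can be checked arithmetically against the values quoted above (a sketch; the $\tilde Z$ numbers are those reported in the text):

```python
# Check of Eq. (ss2): Z_tilde = delta n_bar(k_F) < 2*c_h / (1 + c_h).
def z_bound(c_h):
    return 2.0 * c_h / (1.0 + c_h)

# c_h = 3/16: the quoted Z_tilde = 0.28 lies below the bound ~0.316.
assert 0.28 < z_bound(3.0 / 16.0)

# c_h = 2/18: the bound is 0.2, while the quoted Z_tilde = 0.35 exceeds it,
# consistent with the caveat in the text about the higher T used there.
assert 0.35 > z_bound(2.0 / 18.0)
```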
The main observation is that for $k<k_F$ we find a linewidth typically $\Gamma\sim t\sim 0.4~{\rm eV}$, well compatible with experiments at ${\vec{k}}$ away from the FS and at intermediate doping (Shen and Dessau 1995, Ding {\it et al.} 1996, Marshall {\it et al.} 1996). Also the MFL form has been claimed to describe the experiments better (Olson {\it et al.} 1990), although this point is still controversial (see Shen and Dessau 1995). \subsubsection{Overdoped and underdoped regime} Let us turn from the intermediate doping first to the overdoped regime. Spectra at higher doping $c_h=4/16$ and $c_h=5/16$ are shown in Fig.~\ref{7.6}. While $A({\vec{k}},\omega)$ at $c_h=4/16$ are still qualitatively similar to the spectra at intermediate doping, at $c_h=5/16$ they already show a substantially different behaviour. In the latter case the incoherent background in the PES spectrum is reduced, while QP peaks for $k<k_F$ are sharpened and all of them are essentially underdamped with $\Gamma_{\vec k}<E_{\vec k}$. Nevertheless, we still have $\Gamma_{\vec k} \sim E_{\vec k}$, so it is not evident whether this is already a (more) normal LFL. \begin{figure} \centering \iffigure \epsfig{file=fig_7.6.ps,height=14cm} \fi \caption{ $A(\vec k,\omega)$ for $c_h=4/16$ and $c_h=5/16$. } \label{7.6} \end{figure} The proper analysis of the underdoped regime is even more delicate. At $c_h \sim 0.06-0.12$, as presented in Figs.~\ref{7.1}, \ref{7.3}, a 'shadow' feature is clearly seen in $A({\vec{k}},\omega)$ for $k>k_F$. Namely, along with the principal peak at $\omega>0$ a weak bump appears in the $\omega<0$ part of the spectrum. In $\Sigma'(\vec k,\omega)$ for $k>k_F$ this effect emerges as a strong oscillation, most pronounced for ${\vec{k}}=(\pi,\pi)$. It even leads to a double solution of the equation (\ref{sg10}) for $E_{\vec k}~$, analogous to the phenomena within the spin-bag scenario (Kampf and Schrieffer 1990). This effect is becoming less visible at larger doping, e.g.
at $c_h=3/16$, so its disappearance is related to the reduction of the AFM correlation length $\xi \sim 1$. Let us try to make contact with recent ARPES measurements on underdoped BISCCO (Marshall {\it et al.} 1996), which show the opening of the pseudogap even for $T>T_c$, in particular near the $\vec k^{**}=(\pi,0)$ point, where the FS seems to disappear. On the other hand, the QP peak disperses through the FS along the $\Gamma$-M direction, where the FS seems to be well defined. If we look closer at Fig.~\ref{7.1} for the lowest finite doping $c_h \sim 0.06$, we can recognize very similar features. We realize that close to $\vec k^{**}$ the PES part shows spectra very analogous to the undoped AFM in Fig.~\ref{7.1}, where due to the doubling of the Brillouin zone the dispersion is expected to reach its maximum at $\vec k^{*}=(\pi/2,\pi/2)$ (note however that on a $4\times4$ lattice with n.n. hopping $k^*$ and $k^{**}$ are equivalent) and to fold back for $\vec k$ outside the reduced AFM zone. E.g., in Fig.~\ref{7.1} for $c_h \sim 0.06$ we can clearly recognize PES peaks also for $\vec k=(\pi,\pi/3)$ and $\vec k=(\pi,\pi/2)$, being lower in $\omega$ than the QP peak at $\vec k^*$. Such a dispersion is also seen in ARPES. At the same time, for both mentioned $k>k^*$ there is also a visible peak in the IPES part, which can be interpreted as a QP dispersing through the FS. The PES and the IPES peak in this case are separated by a gap, which does not seem to be a finite-size effect, and is qualitatively consistent with the ARPES feature (Marshall {\it et al.} 1996). It should also be noted that along the $\Gamma$-M direction the QP dispersion is closer to a normal metallic one, with much less pronounced 'shadows' for $k>k_F$. \subsubsection{Influence of next-nearest-neighbour hopping} It is evident from spectral functions at the intermediate doping, e.g.
in Figs.~\ref{7.3}, \ref{7.6}, that the shape of the FS obtained within the $t$-$J$ model is closer to a circular one than to the one found in cuprates via ARPES experiments (Shen and Dessau 1995), where e.g. $\vec k^{**}$ is inside the FS. It has been well established that such a FS topology can be reproduced by adding the n.n.n. hopping term (\ref{cm2}) with $t'<0$ (Tohyama and Maekawa 1994). It is however still an open question whether the modified hopping and different FS topology are essential for the anomalous properties of cuprates. In order to see the qualitative changes introduced by the n.n.n. hopping, we present in Fig.~\ref{7.9} $A(\vec k,\omega)$ at fixed doping $c_h=3/16$ for two values, $t'/t=-0.2$ and $t'/t=-0.35$. It is evident that $t'<0$ lifts the degeneracy between $\vec k^*$ and $\vec k^{**}$ even in the $4\times 4$ lattice. Also the QP dispersion changes, and the QP peak with $\vec k^*$ is now at $\omega>0$, i.e. outside the FS, while the QP with $\vec k^{**}$ moves inside the FS. The effect is enhanced for larger $|t'/t|$. It is however important to realize that other QP properties are not essentially changed; in particular, we find that the self energy $\Sigma''(\vec k,\omega)$ remains qualitatively similar to the case with $t'=0$. \begin{figure} \centering \iffigure \epsfig{file=fig_7.7.ps,height=14cm} \fi \caption{ $A(\vec k,\omega)$ for $c_h=3/16$ with nonzero n.n.n. hopping: $t'/t=-0.2$ and $t'/t=-0.35$.} \label{7.9} \end{figure} \subsection{Density of states} Finally we present in Fig.~\ref{7.7} the doping dependence of the single-particle DOS ${\cal N}(\varepsilon)$ (Jakli\v c and Prelov\v sek 1997), as defined in the equation (\ref{sg8}). In a weakly doped system with $c_h\sim 0.06$, a QP coherent peak (of width $\sim 2J$) is seen at $\omega =\varepsilon-\mu\lesssim 0$. Besides that, a broad background due to the well understood incoherent hole motion dominates for $\omega \ll 0$.
At such low doping the electron ($\omega>0$) part of the DOS is weak, with the total intensity $2 c_h$ as compared to $1-c_h$ of the hole part. With increasing $c_h$ the hole incoherent background does not reduce appreciably in intensity, while the coherent dispersion near the Fermi energy widens and cannot be well distinguished in ${\cal N}(\varepsilon)$ from the background. At the same time, the electron part of the DOS is increasing, both in weight and in width. Note that the oscillations which appear for $\omega>0$ at higher doping $c_h>3/16$ are essentially finite-size effects. Namely, we are dealing with a restricted number of $\vec k$ points on a finite lattice, while QP peaks are becoming quite narrow. Such effects are even more pronounced in overdoped systems with $c_h>0.25$, where the incoherent part finally loses its intensity and the coherent QP dominate the whole $\varepsilon$ regime. It should also be mentioned that the introduction of the n.n.n. hopping $t'\neq 0$, in spite of a significant change of the FS shape, does not seem to induce an essential change in the DOS. \begin{figure} \centering \iffigure \epsfig{file=fig_7.8.ps,height=13cm,angle=-90} \fi \caption{ Single-particle DOS ${\cal N(\varepsilon)}$ for different dopings at fixed $T/t=0.15$. For $c_h \sim 0.06$ and $c_h \sim 0.12$ we present joined DOS for $N=16,18$ systems with $N_h=1$ and $N_h=2$, respectively. The thin vertical lines denote the chemical potential $\mu(T)$. } \label{7.7} \end{figure} Let us comment here on the relation with the entropy $s$, Fig.~\ref{4.3}a. If we assume the low-$T$ form as follows from the LFL theory (Abrikosov {\it et al.} 1965), we get \begin{equation} s ={ \pi^2 {\cal N}(\mu)\over 3\tilde Z}~T. \label{sd1} \end{equation} We see from Fig.~\ref{7.7} that ${\cal N}(\mu)$ is only weakly doping dependent at intermediate $c_h$ and actually also quite close to the free-fermion value (assuming a tight-binding model with the hopping parameter $t$).
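The LFL estimate (\ref{sd1}) amounts to simple arithmetic. As a hedged sketch, the DOS value ${\cal N}(\mu)\approx 0.25/t$ per site used below is an assumption (roughly the average 2D tight-binding value, both spins over the bandwidth $8t$), not a number read off the figure:

```python
import math

# Sketch of the LFL entropy estimate, Eq. (sd1): s = pi^2 N(mu) T / (3 Z).
# N_mu ~ 0.25 per site (both spins, in units of 1/t) is an assumed
# free-fermion-like value; Z and T are the values quoted in the text.
def lfl_entropy(N_mu, Z, T):
    """Entropy per site in units of k_B; T in units of t."""
    return math.pi**2 * N_mu * T / (3.0 * Z)

s = lfl_entropy(N_mu=0.25, Z=0.28, T=0.1)  # ~ 0.29 k_B
```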
Taking $\tilde Z \sim 0.28$ for $c_h = 3/16$, we get $s \sim 0.29~k_B$ at $T=0.1~t$, quite consistent with the thermodynamic results in Sec.~4.2. The surprising fact here is that such $s$ represents a large increase over the undoped AFM and appears due to a very low concentration of mobile holes introduced into an AFM. Let us now look more closely at ${\cal N}(\varepsilon)$ in the underdoped regime. We realize quite clearly that at $c_h=0.12$ a pseudogap starts to appear in the DOS at $\omega=\varepsilon-\mu \sim 0$ and becomes even more pronounced at $c_h \sim 0.06$. It is evidently related to a similar phenomenon in $A(\vec k,\omega)$ in Fig.~\ref{7.1}. In order to establish the origin of such a gap, we follow in Fig.~\ref{7.8} the variation of ${\cal N}(\varepsilon)$ with $T$. We find that the gap structure disappears at $T>J$, so it must be related to the onset of the short-range AFM order. Also the gap energy $\Delta\epsilon \lesssim t$ seems to be determined by $J$ rather than $t$. It is plausible that the onset temperature is related to the pseudogap scale $T^*$, discussed before in relation with the maximum in $\chi_0(T)$ and with the crossover in $\rho(T)$. \begin{figure}[ht] \centering \iffigure \mbox{ \subfigure[]{ \epsfig{file=fig_7.9a.ps,height=8cm,angle=-90}} \quad \subfigure[]{ \epsfig{file=fig_7.9b.ps,height=8cm,angle=-90}}} \fi \caption{ ${\cal N(\varepsilon)}$ at various $T$ for dopings: (a) $c_h=1/20$ and (b) $c_h =2/18$. } \label{7.8} \end{figure} Recently the DOS has been measured via the integrated PES and IPES for the LSCO compound in a wide range of doping $0<c_h<0.3$ (Ino {\it et al.} 1997b). More reliable are the PES spectra proportional to ${\cal N}^+(\omega)={\cal N}(\omega+\mu)f(\omega)$, giving information on the hole part $\omega<0$. Some features are well consistent with our results. In the overdoped regime $c_h>0.2$, ${\cal N}^+(\omega<0)$ appears quite flat.
At the same time, there is a substantial and broad IPES contribution indicating an enhanced DOS at $\omega>0$. On the other hand, for $c_h<0.17$ a pseudogap starts to emerge gradually at $\omega \lesssim 0$. E.g., the inflection point in ${\cal N}^+(\omega)$ moves from $\omega \sim 0$ toward $\omega \sim -0.2~$eV at the lowest dopings. This is quite close to the values found in Fig.~\ref{7.8}. Thus the gap scale indeed seems to be related to the pseudogap scale $T^*$ discussed in connection with $\chi_0$. Note that the latter is substantially larger than the leading-edge shift $\sim 30$~meV found in ARPES (Marshall {\it et al.} 1996). Still, it is well possible that both phenomena are closely related. \setcounter{equation}{0}\setcounter{figure}{0} \section{Other properties} \subsection{Electron density correlations} It is interesting to study also the electron-density dynamics, as contained in the dynamical charge susceptibility $\chi_c(\vec q,\omega)$ and the corresponding density correlation function $N(\vec q,\omega)$, defined by \begin{eqnarray} \chi_c''(\vec{q},\omega)&=&e_0^2(1-e^{-\beta\omega})N(\vec{q},\omega), \nonumber \\ N(\vec{q},\omega)&=& {\rm Re}\int_{0}^\infty dt\; e^{i\omega t} \langle n_{\vec{q}}(t) n_{-\vec{q}}(0)\rangle, \label{od1} \\ n_{\vec{q}}&=& (1/\sqrt{N})\sum_{i}e^{i\vec{q}\cdot\vec{R}_i} n_{i}. \nonumber \end{eqnarray} Recently several numerical and analytical studies within the $t$-$J$ model have been devoted to the static $N(\vec q)$ (Putikka {\it et al.} 1994) as well as to the dynamical $N(\vec q,\omega)$ (Tohyama {\it et al.} 1995, Eder {\it et al.} 1995, Khaliullin and Horsch 1996) at $T=0$, mainly in connection with the interesting conjecture of charge-spin separation (Anderson 1987, Baskaran {\it et al.} 1987) in layered cuprates.
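As an aside, with the normalization of $n_{\vec q}$ in (\ref{od1}) the usual Parseval identity $\sum_{\vec q}|n_{\vec q}|^2=\sum_i n_i^2$ holds; a purely illustrative toy check on a short 1D chain (not part of the original analysis):

```python
import cmath

# Toy check of n_q = (1/sqrt(N)) sum_i exp(i q R_i) n_i on a 1D chain
# with a classical density pattern: Parseval gives
#   sum_q |n_q|^2 = sum_i n_i^2.
N = 6
n_i = [1, 0, 1, 1, 0, 1]                    # arbitrary illustrative pattern
qs = [2.0 * cmath.pi * m / N for m in range(N)]

def n_q(q):
    return sum(cmath.exp(1j * q * i) * n_i[i] for i in range(N)) / N**0.5

lhs = sum(abs(n_q(q)) ** 2 for q in qs)     # = 4 for the pattern above
rhs = sum(x ** 2 for x in n_i)              # = 4
```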
A parallel study of charge correlations $N(\vec q,\omega)$ and the spin structure factor $S(\vec q,\omega)$ however reveals quite an essential difference between a 1D Luttinger liquid and a 2D doped AFM (Tohyama {\it et al.} 1995), as well as between charge and spin fluctuations. Typically, $N(\vec q,\omega)$ shows a broad peak at high $\omega \sim 6~t$ for $\vec q \sim \vec Q$ due to the incoherent motion of holes, while for $q \to 0$ density fluctuations narrow into a collective acoustic-like mode (Khaliullin and Horsch 1996). In contrast to $S(\vec q,\omega)$, the $N(\vec q,\omega)$ spectra seem to scale linearly with $c_h$ at low doping (Eder {\it et al.} 1995). It should also be noted that quite generally $N(q\to 0,\omega)$ fluctuations can be related to the (anomalous) current correlations $C(\omega)$ through particle-number conservation, so an anomalous behaviour is quite possible for $N(\vec q,\omega)$ as well. The investigation of $N(\vec q,\omega)$ at $T>0$ has not been performed so far. We present here some results, using the FTLM in analogy with the evaluation of $S(\vec q,\omega)$ in Sec.~6. Our aim is again twofold. On one hand, the FTLM gives smoother spectra at $T>T_{fs}$, which can be compared to g.s. ED results. More interesting is the low-frequency regime $\omega <J$, which cannot be studied reliably in g.s. calculations, and moreover the $T$ dependence of the spectra in this regime. In Fig.~\ref{5.11} we present normalized spectra $N(\vec q,\omega)/c_h$ for nonequivalent $\vec q$ on a lattice of $N=18$ sites and fixed $T=0.2~t$. In this case we use $J/t=0.4$. Shown are results for $c_h=1/18$ and $c_h=3/18$, belonging to the regimes of low doping and optimum doping, respectively. We notice that the normalized spectra are very similar for both $c_h$, at least for larger $q$, taking into account also that the more detailed structure for $c_h=1/18$ is partly due to finite-size effects.
Our results are in general close to previous numerical (Tohyama {\it et al.} 1995, Eder {\it et al.} 1995) and analytical (Khaliullin and Horsch 1996) results. It is characteristic that at $\vec q \sim \vec Q$ the intensity is nearly exhausted by a large peak at $\omega \sim 6t$. As expected from conservation laws, the spectra narrow for $q \to 0$. \begin{figure} \centering \iffigure \epsfig{file=fig_8.1.ps,height=14cm,clip=} \fi \caption{ Normalized density correlation spectra $N(\vec q,\omega)/c_h$ at fixed $T/t=0.2$ and $J/t=0.4$ at two dopings: $c_h=1/18$ and $c_h=3/18$. } \label{5.11} \end{figure} It is however remarkable that, in spite of the quite low $T$ in Fig.~\ref{5.11}, $N(\vec q, \omega \to 0)$ does not seem to approach zero for any $\vec q$, the effect being more visible for smaller $q$. We follow the evolution of density fluctuations with $T$ at intermediate doping $c_h=3/18$ for chosen $\vec q=(\pi/3,\pi/3)$ in Fig.~\ref{5.12}. It is evident that $N(\vec q,\omega)$ is nearly $T$ independent even for $\omega<T$. The behaviour is quite similar to the one observed for local spin correlations $\bar S(\omega)$ in Fig.~\ref{6.5} and is consistent with the MFL assumption (\ref{cp2}) for the charge susceptibility $\chi_c''(\omega)$, requiring via the equation (\ref{od1}) $N(\vec q,\omega \to 0)={\rm const}$, independent of $T$. The MFL behaviour of $N(\vec q, \omega)$ at intermediate doping is not entirely surprising, due to the connection to the anomalous MFL-type $C(\omega)$ and optical conductivity $\sigma(\omega)$, as well as due to the relation of the electron dynamics to anomalous spin fluctuations $S(\vec q,\omega)$ (Prelov\v sek 1997a). \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_8.2.ps,height=10cm,angle=-90} \fi \caption{ $N(\vec q,\omega)$ for $\vec q=(\pi/3,\pi/3)$ and different $T$ at fixed $c_h=3/18$. } \label{5.12} \end{figure} $N(\vec q,\omega)$ has so far been mainly the subject of theoretical considerations.
It should however be noted that a relation could be established with the experimentally relevant dielectric function $\epsilon(\vec q,\omega)$ if the model incorporated also the long-range Coulomb repulsion between electrons. Such an additional interaction could possibly be treated within the framework of a random-phase-like analysis. This would be desirable, since $\epsilon(\vec q,\omega)$ can be measured in cuprates via electron-energy-loss spectroscopy (N\"ucker {\it et al.} 1989), which yields directly Im$[1/\epsilon(\vec q,\omega)]$. \subsection{Electronic Raman response} One of the most useful probes for the investigation of excitations in an AFM has been Raman scattering. Using the latter method, it has also been established that the reference insulating cuprates correspond well to a planar Heisenberg AFM, where pronounced Raman resonance processes at low $T$ are attributed to two-magnon excitations (Lyons {\it et al.} 1988, Singh {\it et al.} 1989). A general framework for the theoretical explanation of Raman processes in correlated systems has so far been given within the Hubbard model, where the effective Raman operator for resonant and off-resonant conditions has been derived (Shastry and Shraiman 1990). Still, there are several unresolved questions concerning Raman processes in cuprates, and more generally in doped AFM. One puzzling aspect concerns the observed pronounced $T$ dependence of the two-magnon peak in undoped cuprates (Knoll {\it et al.} 1990). The latter has been interpreted as phonon-induced broadening, but also by invoking higher-order resonant processes (Chubukov and Frenkel 1995). Another problem is the doping dependence of the Raman spectra. Recent experiments, performed on YBCO materials in the resonant regime (Blumberg {\it et al.} 1994), show a dramatic increase of the broadening of the two-magnon peak with doping, so that the spectra appear essentially flat in the normal phase when approaching the optimum doping.
At the same time, the peak position does not move appreciably. Here we consider only the resonant Raman processes, since most experiments are performed in the resonant regime. In contrast to the response functions considered in previous sections, the operator relevant for resonant Raman scattering cannot be determined uniquely within the $t$-$J$ model, since it necessarily involves higher (resonant) levels. Still, we can adopt the view that a more complete (e.g. three-band) model for cuprates can be mapped onto an effective Hubbard model (Hybertsen {\it et al.} 1990) for the resonant transitions of interest. Within the Hubbard model near half-filling the Raman-scattering operator has been derived in the limit $~t/U \ll 1$ (Shastry and Shraiman 1990), yielding the well known form for the Heisenberg AFM (Parkinson 1969), \begin{equation} R= A \sum_{\langle ij\rangle} (\vec \epsilon_{inc}\cdot \vec r_{ij}) (\vec \epsilon_{sc}\cdot \vec r_{ij}) (\vec S_i\cdot \vec S_j - {1\over 4} n_i n_j), \label{or1} \end{equation} where $\vec \epsilon_{inc}, \vec \epsilon_{sc}$ are the incident and the scattered electric-field unit vectors, respectively, and the amplitude factor $A = 4 t^2/(U-\omega_{inc}) \propto J$ incorporates the resonance at the incident-light frequency $\omega_{inc} \sim U$. The Raman spectral function is then given by \begin{equation} I(\omega) = {1\over \pi N}{\rm Re}\int_0^\infty dt ~e^{i \omega t} \langle R(t)R(0)\rangle, \label{or2} \end{equation} which can be calculated at $T>0$ using the FTLM, analogously to other correlation functions. Within the $t$-$J$ model at finite doping the Raman intensity $I(\omega)$, corresponding to the operator (\ref{or1}), has so far been evaluated at $T=0$ using the ED (Dagotto and Poilblanc 1990), while for the undoped AFM also $T>0$ has been studied using full diagonalization (Bacci and Gagliano 1991).
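For the $B_{1g}$ geometry considered below, the geometric prefactor in (\ref{or1}) is elementary to evaluate: it equals $+1/2$ on $x$-bonds and $-1/2$ on $y$-bonds, so $R$ couples to the difference of $x$- and $y$-bond exchange terms. A minimal sketch:

```python
# Bond factors of the Raman operator, Eq. (or1), in the B1g geometry:
#   eps_inc = (e_x + e_y)/sqrt(2),  eps_sc = (e_x - e_y)/sqrt(2).
# The prefactor (eps_inc . r_ij)(eps_sc . r_ij) is +1/2 on x-bonds
# and -1/2 on y-bonds of the square lattice.
s = 2 ** -0.5
eps_inc = (s, s)
eps_sc = (s, -s)

def bond_factor(r):
    return (eps_inc[0] * r[0] + eps_inc[1] * r[1]) * \
           (eps_sc[0] * r[0] + eps_sc[1] * r[1])

fx = bond_factor((1, 0))   # +1/2 on x-bonds
fy = bond_factor((0, 1))   # -1/2 on y-bonds
```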
Results obtained via the FTLM have an advantage over previous ones also at low $T$, since the Raman intensity is not expected to be singular for $T\to 0$, so much smoother spectra are obtained by using a small but finite $T>T_{fs}$. We restrict our analysis to the dominant $B_{1g}$ scattering geometry with $\vec \epsilon_{inc} =(\vec e_x +\vec e_y)/\sqrt{2}$ and $\vec \epsilon_{sc} = (\vec e_x -\vec e_y)/\sqrt{2}$. The undoped AFM at $T>T_{fs}\sim J/2$ has been studied by Prelov\v sek and Jakli\v c (1996), and reveals at low $T$ a two-magnon Raman peak at $\omega \sim 3.3~J$, consistent with experiments. The width is however quite narrow and starts to broaden substantially only at higher $T \sim J$, where a gradual transition to a broad featureless spectrum occurs. Hence other mechanisms have to be invoked to account for the observed pronounced $T$-dependent width at lower $T\ll J$ (Knoll {\it et al.} 1990). In doped systems $T$ plays a less essential role, and typically we observe only a weak decrease of the Raman intensity in the interval $T/J = 0.3 - 1.0$ (Prelov\v sek and Jakli\v c 1996). On the other hand, the dependence on doping is essential, as evident from Fig.~\ref{8.1}, where we present $I(\omega)$ for various dopings $c_h \le 0.25$ at the lowest $T=0.15~t> T_{fs}$. Already the smallest nonzero doping $c_h =0.05$ dramatically increases the width of the two-magnon peak, while spectral features become overdamped on approaching the optimum doping $c_h \sim 0.15$. It is however remarkable that the peak position does not shift appreciably in the underdoped regime $c_h \leq 0.1$. Only for $c_h > 0.15$ do the spectra change to a broad central-peak form with a maximum at $\omega = 0$. \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_8.3.ps,height=8cm,angle=-90} \fi \caption{Raman intensity $I(\omega)$ for different dopings $c_h$ at fixed $T=0.15~t$. } \label{8.1} \end{figure} How can we interpret the above results for the $B_{1g}$ resonant Raman scattering?
Up to the overdoped situation $c_h \sim 0.3$ the scattering seems to be dominated by the spin-exchange part (\ref{or1}), since the latter determines the low-energy fluctuations even at the optimum doping. Still, there are evident changes with doping, since the reduced AFM correlation length at larger $c_h$ induces a large broadening of the Raman two-magnon peak and a reduction of its intensity. In this respect the doped system behaves quite similarly to an undoped AFM, but at elevated $T\sim J$. A systematic resonant-Raman scattering study has recently been performed on a sequence of YBCO materials (Blumberg {\it et al.} 1994). It seems that in the underdoped regime our model results reproduce the experimental ones well, both regarding the shape of the Raman spectra and their intensity variation with doping. At optimum doping experiments still reveal a weakly pronounced peak at the same frequency $\omega \sim 3~J$, while our results in Fig.~\ref{8.1} already reveal a maximum at $\omega=0$; nevertheless, both spectra are nearly flat. \subsection{Thermoelectric power} Among other transport properties the thermoelectric power (TEP) or Seebeck coefficient $S$ is one of the most frequently investigated. Within the cuprate family the TEP has been measured for several materials; for an overview see e.g. Kaiser and Uher (1991) and more recent analyses (Tallon {\it et al.} 1995, Cooper and Loram 1996). Again a universal behaviour has emerged, depending mainly on the hole doping. For weakly doped materials $S$ is large and positive, with some $T$ dependence at higher $T$. With increasing doping the TEP $S$ decreases rapidly. It also falls off nearly linearly with $T$, but with a rather small slope. Quite consistently, $S$ nearly vanishes at the optimum doping and changes sign to a negative one in overdoped materials.
This variation clearly has some parallel with the doping dependence of the Hall coefficient $R_H$, which also shows a transition from a hole-like behaviour $R_H>0$ at low doping to an electron-like $R_H<0$ in overdoped samples. It seems plausible that both anomalous properties are related and emerge from strong correlations in these materials. Theoretically, the TEP in doped AFM, or more generally in strongly correlated metals, has not attracted much attention so far (Hildebrand {\it et al.} 1997). This is not surprising, since it seems to require the understanding of both electrical and thermal currents. Within the linear response theory (Mahan 1990) the TEP can be expressed in terms of the particle-current $j$ and energy-current $j_E$ correlation functions, \begin{equation} S={1\over e_0T}\left[ \mu -{C_{j_Ej}(\omega \to 0) \over C(\omega \to 0)} \right], \label{ot1} \end{equation} where the mixed correlation function $C_{j_Ej}(\omega)$ is defined in analogy with the expression (\ref{ec1}) for $C(\omega)$. The evaluation of $S(T)$ thus requires the calculation of $C_{j_Ej}(\omega)$, in addition to the current-current correlation function $C(\omega)$ discussed extensively in Sec.~5. It is straightforward to derive the expression for $j_E$; nevertheless, the operator is more involved due to three-site terms. We do not intend to give here a more complete analysis of $C_{j_Ej}(\omega)$, which will be presented elsewhere. We mention, however, a preliminary observation that $C_{j_Ej}(\omega)$ and $C(\omega)$ appear to be closely related, \begin{equation} C_{j_Ej}(\omega) = \zeta C(\omega), \label{ot2} \end{equation} i.e. $\zeta$ is nearly $\omega$ independent for $\omega<t$, as well as $T$ independent at low $T<J$. Since at $c_h>0$ we are dealing with a metal, although a strange one, we expect a finite $|S(T\to 0)| < \infty$. Hence we deduce from the equation (\ref{ot1}) the condition $\zeta = \mu(T=0)$.
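The natural TEP scale appearing below, $S_0=k_B/e_0$, is fixed by fundamental constants; a quick arithmetic check (the slope $\alpha$ used for illustration is a hypothetical value, not a computed one):

```python
# The natural thermopower scale S_0 = k_B / e_0 ~ 86 microvolt/K.
k_B = 1.380649e-23      # Boltzmann constant, J/K
e_0 = 1.602176634e-19   # elementary charge, C
S_0_uV = k_B / e_0 * 1e6    # in microvolt/K, ~ 86.17

# With S ~ -alpha * S_0, where alpha is the dimensionless slope of
# mu_h(T), a hypothetical alpha = -0.5 (underdoped side) would give
# S ~ +43 microvolt/K.
alpha = -0.5                # illustrative value only
S_uV = -alpha * S_0_uV
```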
Our numerical results for $\zeta$ are indeed consistent with this assumption. We thus arrive at the simplified expression, \begin{equation} S \sim {1\over e_0T}[\mu_h(T=0)-\mu_h(T)], \label{ot3} \end{equation} which involves only $\mu_h(T)$, studied in Sec.~4.1 and shown in Fig.~\ref{4.1}. In the low-$T$ regime of interest we find, in a broad range of doping $c_h<0.3$, a linear variation (\ref{t2a}) of $\mu_h(T)$. This leads directly to $S\sim - \alpha S_0$, where $S_0 = k_B/e_0 = 86~\mu {\rm V/K}$. It is evident from Fig.~\ref{4.1} that $\alpha$ changes sign from a negative one for $c_h<c_h^*$ to a positive one for $c_h>c_h^*$. In Fig.~\ref{8.2} we plot our result for $S(c_h)$. Note that due to the simplifications involved in the expression (\ref{ot3}) we do not intend to consider the $T$ variation; hence our results apply to the lowest reachable $T\lesssim T_{fs} \sim 0.1~t$. For comparison we present on the same plot also experimental results for LSCO and oxygen-deficient YBCO, taken from Cooper and Loram (1996), where the data refer to the normal state at $T \sim 300K$. The qualitative agreement between theory and experiment is quite reasonable. In LSCO the values of $S$ decrease with doping, while $S$ is quite high at low doping $c_h<0.05$. Towards optimum doping $S$ essentially vanishes. The trend is similar for the YBCO data, although there are clearly quantitative differences. The main disagreement between the calculated and experimental $S$ is in the overdoped regime, where our results indicate $S<0$ with probably too large values. It is however well possible that in this regime our analysis is not adequate, first due to $\mu_h(T)$ approaching a more normal LFL $T^2$ dependence, as well as due to the breakdown of the relation (\ref{ot2}). \begin{figure}[ht] \centering \iffigure \epsfig{file=fig_8.4.ps,height=10cm,angle=-90} \fi \caption{ Thermoelectric power $S$ vs. $c_h$ for $T/t=0.1$. Experimental results for LSCO and oxygen-deficient YBCO are taken from Cooper and Loram (1996).
} \label{8.2} \end{figure} \setcounter{equation}{0}\setcounter{figure}{0} \section{Discussion} In the absence of accepted microscopic or phenomenological theories which would at least qualitatively describe the normal-state properties of cuprates in the whole doping regime, it is also hard to summarize our numerical results in a compact manner. It is first important to understand which phenomena determine the optimum doping in cuprates and in microscopic models. In cuprates the optimum is defined by $T_c(c_h^{opt})=max$, but at least very close in doping, i.e. at $c^*_h \sim c_h^{opt}$, are also materials where the low-$T$ electronic entropy $s$ and the uniform susceptibility $\chi_0$ are maximal (Loram {\it et al.} 1996), where the pseudogap scale disappears, i.e. $T^* \to T_c$ (Batlogg 1997), etc. Within the $t$-$J$ model we can so far associate the optimum only with normal-state properties, e.g. with $s(c_h^*)=max$. Such a maximum has to exist at a nontrivial $c_h^*>0$ due to the relation (\ref{t5}) with $\mu_h(T)$, where the latter must change sign between a weakly doped AFM (semiconductor-like regime) at $c_h \gtrsim 0$ and a regime of nearly free 2D fermions at $c_e \sim 0$. It is plausible that within the prototype model (\ref{cm1}) $c_h^*$ is determined by the interplay of the spin exchange energy $E_J=\langle H_J \rangle$ and the (hole) kinetic energy $E_t=\langle H_t \rangle$. Since at low doping we have $|E_t| \propto t c_h$ and $|E_J| \propto J$, we obtain a rough estimate $c_h^* \propto J/t \ll 1$. From such an argument it seems plausible that at $c_h \sim c_h^*$ the AFM correlation length becomes very short, $\xi \sim 1$, while AFM correlations are essentially irrelevant in the overdoped regime $c_h > c_h^*$. \subsection{Universality at intermediate doping} It is evident from our results that normal-state properties are universal at the optimum doping $c_h \sim c_h^*$.
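The energy-balance estimate above can be made explicit as a one-line sketch; the order-unity coefficient below is hypothetical, inserted only to show that $J/t=0.3$ places $c_h^*$ in the intermediate-doping range:

```python
# Order-of-magnitude estimate c_h* ~ a * (J/t), from balancing the hole
# kinetic energy |E_t| ~ t*c_h against the exchange energy |E_J| ~ J.
# The coefficient a is a hypothetical order-unity number, not a fit.
def c_h_star(J_over_t, a=0.6):
    return a * J_over_t

est = c_h_star(0.3)   # ~ 0.18, inside the broad maximum of s(c_h)
```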
Since the maximum of $s(c_h)$ is broad within the $t$-$J$ model, this regime is also quite extensive, i.e. $0.15<c_h<0.25$ for the chosen $J/t = 0.3$. The main feature of this regime obtained in our study is the MFL-type dynamics in several response functions: the local spin susceptibility $\chi_L''(\omega)$, the optical conductivity $\sigma(\omega)$, density correlations $N(\vec q,\omega)$, and spectral functions $A(\vec k,\omega)$. It is characteristic that the dynamics in the low-$\omega$ ($\omega<J$) regime seems to be universal, i.e. it is determined solely by $T$ itself without any additional parameters, as concluded from equations (\ref{eo2}) for $\sigma(\omega)$ and (\ref{ml4}) for $\chi_L''(\omega)$. It should however be kept in mind that this universality could be restricted to a certain $T$ range; e.g., we can show its validity only for $T_{fs}<T<J$. It seems that the origin of the universality lies in the large degeneracy of the spin subsystem, where the frustration is induced by the hole doping. A plausible argument is that holes tend to induce FM correlations to minimize the kinetic energy, while spins prefer an AFM ordering. In any case, only spins have enough degrees of freedom to explain the large entropy $s \sim 0.4~k_B$ even at $T<J$. Let us attempt an explanation for the universality of $\bar S(\omega)$ at $c_h\sim c_h^*$. From the explicit representation in terms of eigenstates, \begin{equation} \bar S(\omega) = (1+ e^{-\beta\omega}) \sum_{n,m} {e^{-\beta E_n}\over Z} |\langle \Psi_n| S_i^z|\Psi_m \rangle|^2 \delta (\omega - E_m+E_n), \label{d1} \end{equation} we can first discuss the conditions under which the response is $T$ independent for $\omega > T$. Since in this case the prefactor is constant, a naive requirement appears to be that low-lying states $E_n -E_0 < T$ have similar matrix elements to excited states with $E_m-E_n > T$. Nevertheless the validity of this observation clearly depends on the character and on the density of low-lying many-body states, and e.g.
does not apply at low $T$ to a system with a unique g.s. As realized from the entropy results in Sec.~4.2, the degeneracy and the density of many-body states are clearly very large at the intermediate doping. To explain the $T$ independence of $\bar S(\omega)$ even for $\omega<T$, we recall the sum rule (\ref{ml2}) and assume that there is no characteristic scale $\omega_c< T$ which could introduce an additional low-$\omega$ structure in $\bar S(\omega)$. A natural candidate for such a scale in an AFM is the (gap) frequency $\omega_c \sim c /\xi$, where $c$ is the spin-wave velocity and $\xi$ the AFM correlation length. Here lies the essential difference between an undoped and an optimally doped AFM. In a pure AFM within the renormalized classical regime $\xi$ is exponentially large for $T\ll J$ (Chakravarty {\it et al.} 1989) and consequently $\omega_c \ll T$. On the other hand, at the intermediate doping $\xi$ appears to be short and not strongly $T$ dependent, i.e. $\xi \propto 1/\sqrt{c_h}$. The latter situation excludes $\omega_c<T<J$, hence leading to a universal $\bar S(\omega)$. Turning to the discussion of $\sigma(\omega)$, as given by equation (\ref{eo2}), we realize that it is not meaningful to associate $1/\tau(0) \propto T$ with a current relaxation rate. Namely, the current-current correlation function $C(\omega)$ shows a broad uniform spectrum indicating a very fast current decay with $1/\tilde \tau \sim 2t$. Thus it seems that a general condition for the validity of the specific MFL form (\ref{eo2}) is a fast current decay with $1/\tilde \tau \gg T$. This could happen due to holes moving in a disordered spin subsystem, which leads to entirely incoherent collisions among holes, and the appropriate mean free path $l_s$ is only a few unit cells. Hence it seems that we are dealing with a novel phenomenon of quantum diffusion which follows a universal form (\ref{eo2}).
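As a concrete illustration of the representation (\ref{d1}), the following minimal Python sketch evaluates $\bar S(\omega)$ by full diagonalization of a small Heisenberg ring. The system size, the Lorentzian broadening $\eta$ replacing the $\delta$ function, and all names are our own illustrative choices, not the FTLM implementation used in this work.

```python
import numpy as np

# Toy evaluation of Eq. (d1): the spin spectral function bar-S(omega)
# for a 4-site S=1/2 Heisenberg ring, with the delta function smeared
# into a Lorentzian of width eta.  Sizes and names are illustrative.

def heisenberg_ring(L=4, J=1.0):
    """Return the ring Hamiltonian H and the operator S^z on site 0."""
    sz = np.diag([0.5, -0.5])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
    sm = sp.T                                  # S^-

    def op(o, site):
        # Embed a single-site operator into the full 2**L Hilbert space.
        mats = [np.eye(2)] * L
        mats[site] = o
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L):
        j = (i + 1) % L
        H += J * (op(sz, i) @ op(sz, j)
                  + 0.5 * (op(sp, i) @ op(sm, j) + op(sm, i) @ op(sp, j)))
    return H, op(sz, 0)

def sbar(omega, T, H, Sz, eta=0.05):
    """Lehmann sum of Eq. (d1) at temperature T."""
    E, V = np.linalg.eigh(H)
    w = np.exp(-(E - E[0]) / T)                # Boltzmann weights e^{-beta E_n}
    M = np.abs(V.T @ Sz @ V) ** 2              # |<n|S_i^z|m>|^2
    # delta(omega - E_m + E_n) -> Lorentzian centred at E_m - E_n
    lor = eta / np.pi / ((omega - (E[None, :] - E[:, None])) ** 2 + eta ** 2)
    s = (w[:, None] / w.sum() * M * lor).sum()
    return (1.0 + np.exp(-omega / T)) * s

H, Sz = heisenberg_ring()
print(sbar(0.5, 0.3, H, Sz))
```

Scanning $\omega$ at several $T$ in such a toy model shows how a dense low-lying many-body spectrum flattens the $T$ dependence, whereas a system with a unique ground state behaves differently.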
In an analogous way the relaxation of single-electron states near the FS is also affected, as manifested in the MFL form of the self energy $\Sigma(\vec k,\omega)$, equation (\ref{ss1}). In any case it is plausible that the QP damping $\Sigma''(k\sim k_F,\omega)$ and the current relaxation $1/\tau(\omega)$, as given by equation (\ref{eo6}), are closely related. There are also phenomenological arguments (Littlewood and Varma 1991) which indicate that the MFL-type dynamics of a boson (spin) subsystem implies a MFL behaviour of the electron propagator. Recently such a relation has been derived by one of the authors (Prelov\v sek 1997a), employing a decoupling approximation for the projected $G^R(\vec k,\omega)$. On the other hand it should be stressed that within the same regime certain electronic properties appear quite normal, i.e. close to the LFL behaviour. First, spectral functions indicate a rather well-defined large FS following the Luttinger theorem. This seems to indicate that the correlated electronic state evolves continuously from a noninteracting one, whereby the FS can be traced to momenta of the lowest (g.s.) many-body states. It is much harder to explain the quite LFL-like behaviour of the entropy $s \propto T$ at low $T$, as seen in Fig.~\ref{4.3}a, and moreover a free-fermion-like Wilson ratio $W \sim W_0$, as found in Sec.~6.2. It should be noted again that such a universality could be restricted to a certain $T$ window. Namely, it is quite possible that at low $T\sim T_{coh}$ some coherence or ordering could appear. Note however that e.g. the onset of a spin ordering would invalidate our arguments on the MFL-type universality in the low-$T$ phase. This is relevant for cuprates showing stripe structures (Tranquada {\it et al.} 1995) or longer-range incommensurate ordering (Hayden {\it et al.} 1996). Even in this case the dynamics at higher $\omega$ is possibly not affected by the ordering, as seen in the analysis of the neutron scattering experiments in Sec.~6.6.
At the intermediate doping our numerical results do not give an indication for a coherence temperature $T_{coh}$, where the breakdown of the universality is expected. This is not surprising in view of the experimental facts for cuprates, where at optimum doping $T_{coh} \sim T_c$ and our $T_{fs}>T_c$. Nevertheless we note that certain restrictions follow already from the MFL-type dynamics. E.g., the particular MFL form (\ref{eo2}) for $\sigma(\omega)$ leads at $T\to 0$ to a divergent integral at the lower bound $\omega \to 0$ and hence violates the optical sum rule (\ref{ec9}). Therefore such a form cannot apply to arbitrarily low $T$, giving an indication for the existence of a lower crossover $T_{coh}$. \subsection{Energy scales in underdoped antiferromagnet} The underdoped regime $c_h<c_h^*$ is less convenient for the FTLM approach. One reason is that $T_{fs}$ is increasing towards the undoped AFM. At the same time the low-energy (pseudogap) scale is increasing, and its effects are for certain quantities hardly resolvable from finite-size effects, hence increasing the uncertainty of the results. Still there are clear indications for the pseudogap scale $T^*$ within the $t$-$J$ model. Most evident is the maximum in $\chi_0(T)$, as presented in Sec.~6.2. Quite a similar behaviour can also be followed for $s(T)/T$, as deduced e.g. from Fig.~\ref{4.3}a. This is related to the nearly constant Wilson ratio $R_W(T) \sim 1$, as defined in equation (\ref{mu2}). Note that such a behaviour has also been found experimentally by Loram {\it et al.} (1996), extending to a low-$T$ regime not reachable in our calculations. It is quite evident from $\chi(T)$ that the pseudogap scale $T^*$ is related to the onset of short-range AFM correlations. It seems that the same phenomenon governs the pseudogap appearing in the DOS ${\cal N}(\varepsilon)$ in the underdoped regime, as discussed in Sec.~7.4. While the pseudogap is clearly visible for the lowest $c_h$, e.g.
for $c_h\sim 0.06$, it becomes shallower towards the optimum doping $c_h \sim c_h^*$. This is consistent with PES measurements by Ino {\it et al.} (1997b). Experimentally one of the indications for $T^*$ is also the change of the slope in $\rho(T)$. Here our results are less conclusive and appear to be finite-size limited for $T<T^*$. A possible interpretation is that the QP scattering in the presence of longer-range AFM correlations changes character; in particular the mean free path $l_s$ increases due to the opening of the spin pseudogap. In our small systems this possibly leads to $l_s(T<T^*)>L$ and consequently to a finite-size $D_c(T<T^*)>0$ which cannot be interpreted uniquely. Note that so far there is neither a satisfactory theory nor a numerical calculation of the mobility $\mu_0(T < J)$, even for a single hole in an AFM, which could serve as a guide for the limit $c_h \to 0$. Experiments indicate the existence of a lower crossover scale $T_{sg}$ (Batlogg 1997), see Fig.~\ref{2.1}, identified as a spin gap in the NMR relaxation and appearing also in the ARPES, $\sigma_c(\omega)$ etc. In our calculations it would be easiest to resolve the existence of a spin gap in $\chi_0(T)$ and in $\bar S(\omega)$, still we do not find an indication for that down to $T\sim 0.1~t$. \subsection{Conclusions and open questions} One of the main conclusions of this work is that the prototype $t$-$J$ model, in spite of its apparent simplicity, accounts surprisingly well for a variety of normal-state properties of cuprates. This confirms the belief that the most unusual properties are dominated by strong correlations and by the interplay of antiferromagnetism and the itinerant character of electrons, both effects being inherent within the model. The agreement is not only qualitative but also quantitative; hence the parameters of the model appear to be truly microscopic ones in the whole range of doping, and not just some effective parameters changing with doping.
There are evidently many open questions regarding the interpretation and the understanding of our results, and related experimental facts on cuprates: \noindent a) The universal MFL-type dynamics in the intermediate regime, which seems to be well founded by our results on $\sigma(\omega)$, $\chi_L''(\omega)$ and self energies, as well as by experimental facts, lacks a proper theoretical explanation, although some relations have already been proposed (Littlewood and Varma 1991, Prelov\v sek 1997a). \noindent b) There are more fundamental problems in the underdoped regime. One of them is the evolution of the FS in the limit $c_h \to 0$. A number of authors investigated the evidence for the small FS (hole pockets) (Eder and Becker 1991, Eder {\it et al.} 1994) and its possible consequences (Trugman 1990) at low doping, also stimulated by recent ARPES measurements (Marshall {\it et al.} 1996) of underdoped materials. In a strict sense, at $T\to 0$ the transformation from a large to a small FS seems to require a phase transition with a qualitative change in a number of properties. There is no evidence for that either in experiments or in our small-system analysis. On the other hand, our spectral functions at the lowest $c_h>0$ reveal a folding of the QP dispersion analogous to a single hole in an AFM. Still it is possible that such a feature coexists with a normal QP dispersion crossing a large FS, however with a weight $\tilde Z \to 0$, i.e. gradually vanishing with doping as predicted by equation (\ref{ss2}). Such a scenario is not in contradiction with ARPES results. It has in fact recently been deduced from the integrated PES experiments (Ino {\it et al.} 1997b), still it would be hard to establish it beyond doubt. \noindent c) Related is the question of the thermodynamics at very low doping.
The distinction is whether at low $T$ QPs behave as a degenerate Fermi gas with finite QP weight $\tilde Z >0 $ even for $c_h \to 0$, or as a nondegenerate system of spin polarons with $\tilde Z \to 0$. In the first case one could expect for $c_h,T \to 0$ a finite electron compressibility $\kappa < \infty$, as discussed in Sec.~4.1, while the alternative would induce $\kappa \to \infty$. Our results are not conclusive in this respect; there is, however, also numerical evidence for the latter scenario within the Hubbard model (Furukawa and Imada 1992, Assaad and Imada 1996). There are several quantities which we did not discuss in this review. One of the most challenging is the Hall effect, which is known to be anomalous. Unfortunately the inclusion of a real magnetic field $B>0$ increases computational efforts (see Sec.~6.7), while the physics of the Hall conductivity is also less local. The anomalous properties of the orbital diamagnetism $\chi_d(T)$ and its relation to the Hall effect, equation (\ref{mo1}), nevertheless confirm the anomalous behaviour of $R_H(T)$. The perpendicular $\sigma_c(\omega)$ is also known to be a challenge for theoreticians. Still it appears that the interplane transport is essentially incoherent for $T>T_c$, so it could possibly be related to the knowledge of planar spectral functions $A(\vec k,\omega)$. The major open question within the $t$-$J$ (or Hubbard) model is still the existence of additional low-energy scales and of related low-$T$ transitions. There are certain indications that such scales must exist in such models; it remains, however, a subject of future studies to find whether such transitions lead to an ordered spin and charge structure or to the desired superconductivity. \vskip 0.5 truecm {\bf Acknowledgments} The work is supported by the Ministry of Science and Technology of Slovenia. One of the authors (J.J.)
thanks the Max-Planck-Institut f\"ur Physik Komplexer Systeme, Dresden, for its support and hospitality; part of the work was completed there. \newpage \section{References} \begin{list}{}{\setlength{\itemindent}{-1cm}\setlength{\itemsep}{0pt} \setlength{\parsep}{0pt}} \item $^*$ Present address: Cadence Design Systems, D-85540 Haar, Germany \item ABRIKOSOV, A. A., GOR'KOV, L. P., and DZYALOSHINSKII, I. E., 1965, {\it Quantum Field Theoretical Methods in Statistical Physics} (Pergamon, Oxford), p. 169. \item ANDERSON, P. W., 1987, {\it Science} {\bf 235}, 1196. \item ANDERSON, P. W., and ZOU, Z., 1988, {\it Phys. Rev. Lett.} {\bf 60}, 132. \item ASSAAD, F. F., and IMADA, M., 1996, {\it Phys. Rev. Lett.} {\bf 76}, 3176. \item BACCI, S., and GAGLIANO, E., 1991, {\it Phys. Rev.} B {\bf 43}, 6224. \item BANG, Y., and KOTLIAR, G., 1993, {\it Phys. Rev.} B {\bf 48}, 9898. \item BARADUC, C., EL AZRAK, A., and BONTEMPS, N., 1995, {\it J. of Supercond.}, {\bf 8}, 1. \item BARNES, T., and RIERA, J., 1994, {\it Phys. Rev.} B {\bf 50}, 6817. \item BASKARAN, G., ZOU, Z., and ANDERSON, P. W., 1987, {\it Solid State Commun.} {\bf 63}, 973. \item BATLOGG, B., {\it et al.}, 1994, {\it Physica} C {\bf 235-240}, 130. \item BATLOGG, B., 1997, {\it Physica} C {\bf 235-240}, 130. \item BEDNORZ, J. G., and M\"ULLER, K. A., 1986, {\it Z. Phys.} B {\bf 64}, 189. \item BIRGENEAU, R. J., {\it et al.}, 1988, {\it Phys. Rev.} B {\bf 38}, 6614. \item BLUMBERG, G., {\it et al.}, 1994, {\it Phys. Rev.} B {\bf 49}, 13295. \item BON\v CA, J., and PRELOV\v SEK, P., 1989, {\it Solid State Commun.} {\bf 71}, 755. \item BRINKMAN, W., and RICE, T. M., 1970, {\it Phys. Rev.} B {\bf 2}, 1324. \item BULUT, N., SCALAPINO, D. J., and WHITE, S. R., 1994, {\it Phys. Rev.} B {\bf 50}, 7215. \item CASTELLA, H., ZOTOS, X., and PRELOV\v SEK, P., 1995, {\it Phys. Rev. Lett.} {\bf 74}, 972. \item CASTELLANI, C., DI CASTRO, C., and GRILLI, M., 1995, {\it Phys. Rev. Lett.} {\bf 75}, 4650.
\item CHAKRAVARTY, S., HALPERIN, B. I., and NELSON, D. R., 1989, {\it Phys. Rev.} B {\bf 39}, 2344. \item CHUBUKOV, A. V., and FRENKEL, D. M., 1995, {\it Phys. Rev. Lett.} {\bf 74}, 3057. \item COOPER, S. L., {\it et al.}, 1993, {\it Phys. Rev.} B {\bf 47}, 8233. \item COOPER, J. R., and LORAM, J. W., 1996, {\it J. Phys. I France} {\bf 6}, 2237. \item DAGOTTO, E., and POILBLANC, D., 1990, {\it Phys. Rev.} B {\bf 42}, 7940. \item DAGOTTO, E., 1994, {\it Rev. Mod. Phys.} {\bf 66}, 763. \item DING, H., {\it et al.}, 1996, {\it Phys. Rev. Lett.} {\bf 76}, 1533. \item DUFFY, D., and MOREO, A., 1997, {\it Phys. Rev.} B {\bf 55}, 12918. \item EDER, R., and BECKER, K. W., 1991, {\it Phys. Rev.} B {\bf 44}, 6982. \item EDER, R., OHTA, Y., and SHIMOZATO, T., 1994, {\it Phys. Rev.} B {\bf 50}, 3350. \item EDER, R., OHTA, Y., and MAEKAWA, S., 1995, {\it Phys. Rev. Lett.} {\bf 74}, 5124. \item EL AZRAK, A., {\it et al.}, 1994, {\it Phys. Rev.} B {\bf 49}, 9846. \item EMERY, V. J., 1987, {\it Phys. Rev. Lett.} {\bf 58}, 2794. \item EMERY, V. J., KIVELSON, S. A., and LIN, H. Q., 1990, {\it Phys. Rev. Lett.} {\bf 64}, 475. \item FLEURY, P., and LOUDON, R., 1968, {\it Phys. Rev.} {\bf 166}, 514. \item FULDE, P., 1991, {\it Electron Correlations in Molecules and Solids}, Springer Series in Solid-State Sciences Vol. 100 (Springer-Verlag, Berlin). \item FURUKAWA, N., and IMADA, M., 1992, {\it J. Phys. Soc. Jpn.} {\bf 61}, 3331. \item GEORGES, A., KOTLIAR, G., KRAUTH, W., and ROZENBERG, M. J., 1996, {\it Rev. Mod. Phys.} {\bf 68}, 13. \item GOMEZ-SANTOS, G., JOANNOPOULOS, J. D., and NEGELE, J. W., 1989, {\it Phys. Rev.} B {\bf 39}, 4435. \item G\"OTZE, W., and W\"OLFLE, P., 1972, {\it Phys. Rev.} B {\bf 6}, 1226. \item HALDANE, F. D. M., 1981, {\it J. Phys.} C {\bf 14}, 2585. \item HAYDEN, S. M., {\it et al.}, 1996, {\it Phys. Rev. Lett.} {\bf 76}, 1344. \item HAYDOCK, R., HEINE, V., and KELLY, M. J., 1972, {\it J. Phys.} C {\bf 5}, 2845. \item HELLBERG, C.
S., and MANOUSAKIS, E., 1997, {\it Phys. Rev. Lett.} {\bf 78}, 4609. \item HILDEBRAND, G., HAGENAARS, T. J., GRABOWSKI, S., SCHMALIAN, J., and HANKE, W., 1997, {\it Phys. Rev.} B {\bf 56}, R4317. \item HIRSCH, J. E., 1985, {\it Phys. Rev.} B, {\bf 31}, 4403. \item HUBBARD, J., 1963, {\it Proc. Roy. Soc.} A {\bf 277}, 237. \item HWANG, H. Y., {\it et al.}, 1994, {\it Phys. Rev. Lett.} {\bf 72}, 2636. \item HYBERTSEN, M. S., STECHEL, E. B., SCHL\"UTER, M., and JENNISON, D. R., 1990, {\it Phys. Rev.} B {\bf 41}, 11068. \item IMAI, T., SLICHTER, C. P., YOSHIMURA, K., and KOSUGE, K., 1993, {\it Phys. Rev. Lett.} {\bf 70}, 1002. \item IMADA, M., and TAKAHASHI, M., 1986, {\it J. Phys. Soc. Jpn.} {\bf 55}, 3354. \item INO, A., {\it et al.}, 1997a, {\it Phys. Rev. Lett.} {\bf 79}, 2101. \item INO, A., {\it et al.}, 1997b, preprint. \item IYE, Y., 1992, in {\it Physical Properties of High Temperature Superconductors III}, edited by D. M. Ginsberg (World Scientific, Singapore), p.285. \item JAKLI\v C, J., and PRELOV\v SEK, P., 1994a, {\it Phys. Rev.} B {\bf 49}, 5065. \item JAKLI\v C, J., and PRELOV\v SEK, P., 1994b, {\it Phys. Rev.} B {\bf 50}, 7129. \item JAKLI\v C, J., and PRELOV\v SEK, P., 1995a, {\it Phys. Rev. Lett.} {\bf 74}, 3411. \item JAKLI\v C, J., and PRELOV\v SEK, P., 1995b, {\it Phys. Rev. Lett.} {\bf 75}, 1340. \item JAKLI\v C, J., and PRELOV\v SEK, P., 1995c, {\it Phys. Rev.} B {\bf 52}, 6903. \item JAKLI\v C, J., and PRELOV\v SEK, P., 1996, {\it Phys. Rev. Lett.} {\bf 77}, 892. \item JAKLI\v C, J., and PRELOV\v SEK, P., 1997, {\it Phys. Rev.} B {\bf 55}, R7307. \item JARRELL, M., GUBERNATIS, J. E., SILVER, R. N., and SIVIA, D. S., 1991, {\it Phys. Rev.} B {\bf 43}, 1206. \item JOHNSTON, D. C., SINHA, S. K., JACOBSON, A. J., and NEWSAM, J. M., 1988, {\it Physica} C {\bf 153-155}, 572. \item JOHNSTON, D. C., 1989, {\it Phys. Rev. Lett.} {\bf 62}, 957. \item KAISER, A.
B., and UHER, C., 1991, in {\it Studies of High Temperature Superconductors}, Vol.7, edited by A. V. Narlikar (Nova Science Publishers, New York), p. 353. \item KAMPF, A. P., and SCHRIEFFER, J. R., 1990, {\it Phys. Rev.} B {\bf 41}, 6399. \item KANE, C. L., LEE, P. A., and READ, N., 1989, {\it Phys. Rev.} B {\bf 39}, 6880. \item KEIMER, B., {\it et al.}, 1992, {\it Phys. Rev.} B {\bf 46}, 14034. \item KHALIULLIN, G., and HORSCH, P., 1996, {\it Phys. Rev.} B {\bf 54}, R9600. \item KITAOKA, Y., {\it et al.}, 1991, {\it Physica} C {\bf 185-189}, 98. \item KNOLL, P., THOMSEN, C., CARDONA, M., and MURUGARAJ, P., 1990, {\it Phys. Rev.} B {\bf 42}, 4842. \item KOHN, W., 1964, {\it Phys. Rev.} {\bf 133}, A171. \item KOHNO, M., 1997, {\it Phys. Rev.} B {\bf 55}, 1435. \item LANCZOS, C., 1950, {\it J. Res. Nat. Bur. Stand.} {\bf 45}, 255. \item LITTLEWOOD, P. B., and VARMA, C. M., 1991, {\it J. Appl. Phys.} {\bf 69}, 4947. \item LORAM, J. W., MIRZA, K. A., COOPER, J. R., and LIANG, W. Y., 1993, {\it Phys. Rev. Lett.} {\bf 71}, 1740. \item LORAM, J. W., MIRZA, K. A., COOPER, J. R., ATHANASSOPOULOU, N. A., and LIANG, W. Y., 1996, {\it Proc. of 10$^{th}$ Anniversary HTS Workshop, Houston}, (World Scientific), p.341. \item LUTTINGER, J. M., 1960, {\it Phys. Rev.} {\bf 119}, 1153. \item LYONS, K. B., FLEURY, P. A., REMEIKA, J. P., COOPER, A. S., and NEGRAN, T. J., 1988, {\it Phys. Rev.} B {\bf 37}, 2353. \item MAHAN, G. D., 1990, {\it Many-Particle Physics} (Plenum, New York). \item MAKIVI\'{C}, M., and JARRELL, M., 1992, {\it Phys. Rev. Lett.} {\bf 68}, 1770. \item MALDAGUE, P. F., 1977, {\it Phys. Rev.} B {\bf 16}, 2437. \item MANDRUS, D., FORRO, L., KENDZIORA, C., and MIHALY, L., 1991, {\it Phys. Rev.} B {\bf 44}, 2418. \item MANOUSAKIS, E., 1991, {\it Rev. Mod. Phys.} {\bf 63}, 1. \item MARSHALL, D. S., {\it et al.}, 1996, {\it Phys. Rev. Lett.} {\bf 76}, 4841. \item MARTINEZ, G., and HORSCH, P., 1991, {\it Phys. Rev.} B {\bf 44}, 317.
\item METZNER, W., SCHMIT, P., and VOLLHARDT, D., 1992, {\it Phys. Rev.} B {\bf 45}, 2237. \item MILA, F., and RICE, T. M., 1989, {\it Physica} C, {\bf 157}, 561. \item MILJAK, M., ZLATI\'C, V., KOS, I., THOMPSON, J. D., CANFIELD, P. C., and FISK, Z., 1993, {\it Sol. St. Commun.}, {\bf 85}, 519. \item MILLIS, A. J., MONIEN, H., and PINES, D., 1990, {\it Phys. Rev.} B {\bf 42}, 167. \item MILLIS, A. J., and MONIEN, H., 1992, {\it Phys. Rev.} B {\bf 45}, 3059. \item MONTHOUX, P., and PINES, D., 1994, {\it Phys. Rev.} B {\bf 50}, 16015. \item MOREO, A., HAAS, S., SANDVIK, A. W., and DAGOTTO, E., 1995, {\it Phys. Rev.} B {\bf 51}, 12045. \item MORIYA, T., TAKAHASHI, Y., and UEDA, K., 1990, {\it J. Phys. Soc. Jpn.} {\bf 59}, 2905. \item MOTT, N. F., and DAVIS, E. A., 1979, {\it Electronic Processes in Noncrystalline Materials} (Clarendon, Oxford). \item NAGAOKA, Y., 1966, {\it Phys. Rev.} {\bf 147}, 392. \item NAGAOSA, N., and LEE, P. A., 1990, {\it Phys. Rev. Lett.} {\bf 64}, 2450. \item NAZARENKO, A., VOS, K. J. E., HAAS, S., DAGOTTO, E., and GOODING, R. J., 1995, {\it Phys. Rev.} B {\bf 51}, 8676. \item N\"UCKER, N., {\it et al.}, 1989, {\it Phys. Rev.} B {\bf 39}, 12379. \item OITMAA, J., and BETTS, D. D., 1978, {\it Can. J. Phys.} {\bf 56}, 897. \item OHATA, N., and KUBO, R., 1970, {\it J. Phys. Soc. Jpn.} {\bf 28}, 1402. \item OLSON, C. G., {\it et al.}, 1990, {\it Phys. Rev.} B {\bf 42}, 381. \item ONG, N. P., 1990, in {\it Physical Properties of High Temperature Superconductors}, edited by D. M. Ginsberg (World Scientific, Singapore), Vol. 2, p.~459. \item PANG, H., AKHLAGHPOUR, H., and JARRELL, M., 1996, {\it Phys. Rev.} B {\bf 53}, 5086. \item PARLETT, B. N., 1980, {\it The Symmetric Eigenvalue Problem} (Prentice-Hall, Englewood Cliffs). \item PARKINSON, J. B., 1969, {\it J. Phys.} C {\bf 2}, 2012. \item POILBLANC, D., 1991, {\it Phys. Rev.} B {\bf 44}, 9562. \item POILBLANC, D., ZIMAN, T., SCHULZ, H. J., and DAGOTTO, E., 1993, {\it Phys.
Rev.} B {\bf 47}, 14267. \item PRELOV\v SEK, P., and ZOTOS, X., 1990, {\it Phys. Rev.} B {\bf 47}, 5984. \item PRELOV\v SEK, P., and JAKLI\v C, J., 1996, {\it Phys. Rev.} B {\bf 53}, 15095. \item PRELOV\v SEK, P., 1997a, {\it Z. Phys.} B {\bf 103}, 363. \item PRELOV\v SEK, P., 1997b, {\it Phys. Rev.} B {\bf 55}, 9219. \item PREUSS, R., HANKE, W., and von der LINDEN, W., 1995, {\it Phys. Rev. Lett.} {\bf 75}, 1344. \item PREUSS, R., HANKE, W., GR\"OBER, C., and EVERTZ, H. G., 1997, {\it Phys. Rev. Lett.} {\bf 79}, 1122. \item PRUSCHKE, T., JARRELL, M., and FREERICKS, J. K., 1995, {\it Adv. Phys.} {\bf 44}, 187. \item PUCHKOV, A. V., {\it et al.}, 1996, {\it Phys. Rev. Lett.} {\bf 77}, 3212. \item PUTIKKA, W. O., LUCHINI, M. U., and RICE, T. M., 1992, {\it Phys. Rev. Lett.} {\bf 68}, 538. \item PUTIKKA, W. O., GLENISTER, R. L., SINGH, R. R. P., and TSUNETSUGU, H., 1994, {\it Phys. Rev. Lett.} {\bf 73}, 1994. \item RAM\v SAK, A., and PRELOV\v SEK, P., 1989, {\it Phys. Rev.} B {\bf 40}, 2239. \item RICE, T. M., and ZHANG, F. C., 1989, {\it Phys. Rev.} B {\bf 39}, 815. \item RICE, T. M., 1995, in {\it Proceedings of the Les Houches Summer School, Session LVI}, edited by B. Doucot and J. Zinn-Justin (Elsevier, Amsterdam), p.19. \item ROJO, A. G., KOTLIAR, G., and CANRIGHT, G. S., 1993, {\it Phys. Rev.} B {\bf 14}, 9140. \item ROMERO, D. B., {\it et al.}, 1992, {\it Solid State Commun.} {\bf 82}, 183. \item ROSSAT-MIGNOT, J., {\it et al.}, 1991, {\it Physica} C {\bf 185-189}, 89. \item SCALAPINO, D. J., WHITE, S. R., and ZHANG, S., 1993, {\it Phys. Rev.} B {\bf 47}, 7995. \item SCHMITT-RINK, S., VARMA, C. M., and RUCKENSTEIN, A. E., 1988, {\it Phys. Rev. Lett.} {\bf 60}, 2793. \item SEGA, I., and PRELOV\v SEK, P., 1990, {\it Phys. Rev.} B {\bf 42}, 892. \item SHASTRY, B. S., 1989, {\it Phys. Rev. Lett.} {\bf 63}, 1288. \item SHASTRY, B. S., and SHRAIMAN, B. I., 1990, {\it Phys. Rev. Lett.} {\bf 65}, 1068. \item SHASTRY, B. S., and SUTHERLAND, B., 1990, {\it Phys.
Rev. Lett.} {\bf 65}, 243. \item SHASTRY, B. S., SHRAIMAN, B. I., and SINGH, R. R. P., 1993, {\it Phys. Rev. Lett.} {\bf 70}, 2004. \item SHEN, Z.-X., and DESSAU, D. S., 1995, {\it Physics Reports} {\bf 253}, 1. \item SHIRANE, G., 1991, {\it Physica} C {\bf 185-189}, 80. \item SILVER, R. N., and R\"ODER, H., 1994, {\it Int. J. Mod. Phys.} C {\bf 5}, 735. \item SINGH, R. R. P., FLEURY, P. A., LYONS, K. B., and SULEWSKI, P. E., 1989, {\it Phys. Rev. Lett.} {\bf 62}, 2736. \item SINGH, R. R. P., and GLENISTER, R. L., 1992a, {\it Phys. Rev.} B {\bf 46}, 11871. \item SINGH, R. R. P., and GLENISTER, R. L., 1992b, {\it Phys. Rev.} B {\bf 46}, 14313. \item SCHLESINGER, Z., {\it et al.}, 1990, {\it Phys. Rev. Lett.} {\bf 65}, 801. \item SLICHTER, C. P., 1994, {\it Proceedings of the Los Alamos Symposium on Strongly Correlated Electronic Materials - 1993}, editors K. S. Bedell, Z. Wang, D. E. Meltzer, A. V. Balatsky, and E. Abrahams, (Addison-Wesley), p. 427. \item SOKOL, A., GAGLIANO, E., and BACCI, S., 1993, {\it Phys. Rev.} B {\bf 47}, 14646. \item SOKOL, A., and PINES, D., 1993, {\it Phys. Rev. Lett.} {\bf 71}, 2813. \item SUZUKI, M., 1993, editor, {\it Quantum Monte Carlo Methods in Condensed Matter Physics} (World Scientific, Singapore). \item STARTSEVA, T., {\it et al.}, 1997, preprint cond-mat/9706145. \item STEPHAN, W., and HORSCH, P., 1991, {\it Phys. Rev. Lett.} {\bf 66}, 2258. \item STERNLIEB, B. J., {\it et al.}, 1993, {\it Phys. Rev.} B {\bf 47}, 5320. \item TAKAGI, H., {\it et al.}, 1992, {\it Phys. Rev. Lett.} {\bf 69}, 2975. \item TAKIGAWA, M., {\it et al.}, 1991, {\it Phys. Rev.} B {\bf 43}, 247. \item TALLON, J. L., COOPER, J. R., de SILVA, P. S. I. P. N., WILLIAMS, G. V. M., and LORAM, J. W., 1995, {\it Phys. Rev. Lett.} {\bf 75}, 4114. \item TANNER, D. B., and TIMUSK, T., 1992, in {\it Physical Properties of High Temperature Superconductors III}, edited by D. M. Ginsberg (World Scientific, Singapore), p.363.
\item TOHYAMA, T., OKUDA, H., and MAEKAWA, S., 1993, {\it Physica} C {\bf 215}, 382. \item TOHYAMA, T., and MAEKAWA, S., 1994, {\it Phys. Rev.} B {\bf 49}, 3596. \item TOHYAMA, T., HORSCH, P., and MAEKAWA, S., 1995, {\it Phys. Rev. Lett.} {\bf 74}, 980. \item TORRANCE, J. B., {\it et al.}, 1989, {\it Phys. Rev.} B {\bf 40}, 8872. \item TRANQUADA, J. M., {\it et al.}, 1995, {\it Nature} {\bf 375}, 561. \item TRUGMAN, S. A., 1990, {\it Phys. Rev. Lett.} {\bf 65}, 500. \item TSUJI, M., 1958, {\it J. Phys. Soc. Jpn.} {\bf 13}, 979. \item TSUNETSUGU, H., and IMADA, M., 1997, {\it J. Phys. Soc. Jpn.} {\bf 66}, 1876. \item UCHIDA, S., {\it et al.}, 1991, {\it Phys. Rev.} B {\bf 43}, 7942. \item UCHIDA, S., 1997, {\it Physica} C {\bf 282-287}, 12. \item VARMA, C. M., LITTLEWOOD, P. B., SCHMITT-RINK, S., ABRAHAMS, E., and RUCKENSTEIN, A. E., 1989, {\it Phys. Rev. Lett.} {\bf 63}, 1996. \item VEBERI\v C, D., PRELOV\v SEK, P., and SEGA, I., 1998, {\it Phys. Rev.} B, to appear. \item VOLLHARDT, D., 1997, {\it Phys. Rev. Lett.} {\bf 78}, 1307. \item Von der LINDEN, W., 1992, {\it Phys. Rep.} {\bf 220}, 53. \item WALSTEDT, R. E., BELL, R. F., SCHNEEMEYER, L. F., WASZCZAK, J. V., and ESPINOSA, G. P., 1992, {\it Phys. Rev.} B {\bf 45}, 8074. \item WANG, Z., BANG, Y., and KOTLIAR, G., 1991, {\it Phys. Rev. Lett.} {\bf 67}, 2733. \item WELLS, B. O., {\it et al.}, 1995, {\it Phys. Rev. Lett.} {\bf 74}, 964. \item WHITE, S. R., 1992, {\it Phys. Rev. Lett.} {\bf 69}, 2863. \item WHITE, S. R., and SCALAPINO, D., 1997a, {\it Phys. Rev.} B {\bf 55}, R14701. \item WHITE, S. R., and SCALAPINO, D., 1997b, preprint cond-mat/9705128. \item ZAANEN, J., and OLE\'{S}, A. M., 1988, {\it Phys. Rev.} B {\bf 37}, 9423. \item ZHANG, F. C., and RICE, T. M., 1988, {\it Phys. Rev.} B {\bf 37}, 3759. \item ZEYHER, R., 1991, {\it Phys. Rev.} B {\bf 44}, 10404. \item ZOTOS, X., PRELOV\v SEK, P., and SEGA, I., 1990, {\it Phys. Rev.} B {\bf 42}, 8445.
\item ZOTOS, X., and PRELOV\v SEK, P., 1996, {\it Phys. Rev.} B {\bf 53}, 983. \end{list} \end{document}
\section{Introduction} The Young Solar Analogs Project is a long-term spectroscopic and photometric effort to monitor a sample of Young Solar Analogs (YSAs) in order to gain a deeper understanding of their magnetically related stellar activity. YSAs give us a window into the conditions in the early solar system when life was establishing a foothold on the Earth. That early life had to contend with a hostile space environment, including strong ultraviolet fluxes from a young active sun (without the benefit of an ozone layer), an enhanced solar wind, strong and frequent flares, as well as significant variability in the solar irradiance. By studying solar-type stars with ages corresponding to this period ($\sim$0.3 -- 1.5 Gyr) in the history of the solar system, we can gain insight not only into the conditions on the early Earth, but also a better understanding of the space environment experienced by Earth analogs, and the implications that might have for the development of life on those worlds. Stellar activity is closely related to the dynamics of the magnetic field of the star. The existence of the chromosphere and corona and the associated far-ultraviolet (FUV), extreme-ultraviolet (EUV) and X-ray emissions of a solar-type star are the result of magnetic heating, and solar and stellar active regions are associated with strong local enhancements in the stellar magnetic field. The direct detection of the magnetic fields of solar-type stars is difficult, and direct measurement of FUV (both emission-line and Balmer continuum), EUV, and X-ray fluxes requires space-based observations, so the monitoring of magnetic activity and FUV, EUV, and X-ray fluxes in those stars depends upon more easily measured proxies such as, traditionally, the chromospheric flux in the cores of the \ion{Ca}{2} H \& K lines.
Recent studies have shown that \ion{Ca}{2} H \& K fluxes are correlated in solar-type stars with both X-ray luminosities \citep[61 Cyg A \& B]{hempelmann03}; \citep[HD~81809]{favata04} and FUV excesses \citep{smith10,gray11}. Thus ground-based monitoring of \ion{Ca}{2} H \& K fluxes has played and continues to play a vital role in the study of stellar magnetic activity, and serves as a valuable proxy for the direct measurement of ultraviolet and X-ray fluxes. Long-term monitoring of the \ion{Ca}{2} H \& K fluxes in a sample of F-, G-, and K-type dwarfs began at Mount Wilson in 1966 \citep{wilson78,baliunas95}, and continued until 2003. That program monitored about 100 stars on a continuous basis. The stars in the Mount Wilson program range from young stars with very active chromospheres to old stars with minimal activity. The program discovered stellar activity cycles similar to that of the Sun in about 60\% of the sample, with a further 25\% varying with no well-defined cycle, and the remainder showing little variation at all. The Lowell Observatory SSS (solar-stellar spectrograph) program started in 1988 and continues today \citep{hall09}. It employs a fiber-fed spectrograph that enables \ion{Ca}{2} H \& K measurements to be carried out on both the Sun and stars with the same instrument. That program, unlike the Mount Wilson project, focuses closely on 28 stars that are most like the Sun in terms of spectral type (F8 -- G8, with most in the range G0 -- G2). Many in this sample are ``solar twins'', and thus have ages and metallicities similar to that of the Sun, but a few of the program stars may be described as ``young solar analogs'' with activity levels much higher than the Sun. Lowell Observatory, unlike the Mount Wilson project, carries out near-contemporaneous precision photometric observations, in the Str\"omgren $b$ and $y$ bands, of a number of the SSS-program solar-type stars as well as others \citep{lockwood07}. 
They have found, as might be expected in analogy with the Sun, that many of the SSS stars are brightest when at the highest activity levels, but, surprisingly, others are faintest when most active. It is the most active stars in their sample that show an inverse correlation between brightness and activity, suggesting that stars, as they age and decline in activity, flip from inverse- to direct-correlation behaviors. The ``Sun in Time'' project \citep{guinan09} carried out, over the course of 20 years, multi-wavelength studies of a small sample of solar analogs (G0 -- G5) with ages ranging from $\sim 50$ Myr to 9 Gyr. That project found that the early Sun was most likely rotating 10 times faster than at present and that its coronal X-ray and transition-region/chromospheric EUV and FUV fluxes were several hundred times higher than at present. This project also confirmed that \ion{Ca}{2} H \& K observations are useful proxies for estimating X-ray, EUV, and FUV fluxes and variability. Spectroscopic features in the optical other than the \ion{Ca}{2} H \& K lines may yield useful stellar activity data. The core of the H$\alpha$ line samples the chromosphere, but other strong features in the spectrum may be sensitive to photospheric manifestations of stellar activity. Prime among these in the blue-violet region of the spectrum are the 4305\AA\ G-band (a molecular feature arising from the CH molecule), the 4227\AA\ \ion{Ca}{1} resonance line, and the 4340\AA\ H$\gamma$ line. These three features are temperature sensitive in late-F, G, and early K-type stars, with the G-band increasing in strength through the F and G-type stars, coming to a broad maximum in the late G-type through early K-type stars and then declining toward later types. The \ion{Ca}{1} resonance line is negatively correlated with the effective temperature, and the H$\gamma$ line positively correlated.
Thus these spectral features may be useful in tracking the presence and areal coverage of sunspots and faculae on the photospheric disk. In addition, high-resolution images of the solar surface taken in the G-band show bright points (GBPs) that are strongly correlated with magnetic structures such as intergranular lanes and extended facular regions \citep{berger01,schussler03}. We will discuss in Sections \ref{sec:Gband}, \ref{sec:CaI}, and \ref{sec:HG} our definition of spectroscopic indices for the measurement of the G-band, the \ion{Ca}{1} resonance line, and the H$\gamma$ line. In \S 5.1 we test the sensitivity of these photospheric indices to temperature variations, and in \S 5.4 examine correlations among these indices and with the Mount Wilson chromospheric activity index. These tests enable us to evaluate the usefulness of these indices as temperature indicators. For the purpose of this project, we define a YSA as an F8 -- K2 dwarf with an age between 0.3 and 1.5 Gyr. A sample of 40 candidate YSAs north of $-10$\degr\ declination was chosen from the NStars project \citep{gray03} on the basis of the following criteria: 1) Their spectral types should lie between F8 and K2, as we are interested in solar-type stars, and not late-K and M-type dwarfs. In addition, within that spectral-type range, the ``photospheric'' features we have identified (G-band, \ion{Ca}{1} resonance line, and the H$\gamma$ line) may be measured with sufficient accuracy. 2) The stars should be north of $-10$\degr\ declination, and sufficiently bright ($V < 8.0$) that they may be observed at high signal-to-noise (S/N $\ge 100$) on a routine basis in a reasonable length of time with our equipment (see \S\ref{sec:obs}); and 3) they should have ages approximately between 0.3 and 1.5 Gyr, for the reasons explained above. 
Initial ages were estimated on the basis of the ``snapshot'' \ion{Ca}{2} H \& K activity measures provided by the Nearby Stars project and the calibration of \citet{soderblom91}, and, later, once it became available and we had derived better average activity measures for our program stars, that of \citet{mamajek08}. Some ages were also refined via the determination of rotational periods \citep{barnes07}. The list was thus culled to 31 YSAs (see Table \ref{tbl:bod}). Many of these stars have been monitored spectroscopically since 2007. We note that this list includes the star HD~189733, even though that star apparently has an age $> 4$ Gyr. The activity age of HD~189733 is approximately 600 Myr \citep{melo06}, but this young age is inconsistent with the low X-ray flux of its M-dwarf companion \citep{pillitteri11}. Its rapid rotation and high activity presumably derive from the transfer of angular momentum from a close-orbiting hot jupiter \citep{pillitteri11,santapaga11}. We have retained this star in our program not only because of its intrinsic interest, but also because insights may come from comparing its activity behavior to that of young stars with similar rotation periods and activity levels. 
\begin{deluxetable*}{llllll} \tablecolumns{6} \tablewidth{0pc} \tablecaption{Young Solar Analog Stars\\Basic Observational Data\label{tbl:bod}} \tablehead{ \colhead{Name} & \colhead{SpT\tablenotemark{a}} & \colhead{V} & \colhead{B-V} & \colhead{Duplicity\tablenotemark{b}} & \colhead{Program\tablenotemark{c}}} \startdata HD 166 & G8 V & 6.10 & 0.75 & s,a & \\ HD 5996 & G9 V (k) & 7.67 & 0.75 & s & \\ HD 9472 & G2+ V & 7.63 & 0.68 & s & \\ HD 13531 & G7 V & 7.36 & 0.70 & s & \\ HD 16673 & F8 V & 5.78 & 0.52 & s & \\ HD 27685 & G4 V & 7.84 & 0.67 & s,c & \\ HD 27808 & F8 V & 7.13 & 0.52 & s,c & \\ HD 27836 & G0 V (k) & 7.61 & 0.60 & s,c & \\ HD 27859 & G0 V (k) & 7.80 & 0.60 & s,c & \\ HD 28394 & F8 V & 7.02 & 0.50 & SB,c & \\ HD 42807 & G5 V & 6.44 & 0.66 & s & SSS \\ HD 76218 & G9- V (k) & 7.69 & 0.77 & s & \\ HD 82885 & G8+ V & 5.41 & 0.77 & V(B\tablenotemark{d}) & MtW,SSS \\ HD 96064 & G8+ V & 7.64 & 0.77 & V(B: M0+ Ve) & \\ HD 101501 & G8 V & 5.32 & 0.72 & s & MtW,SSS \\ HD 113319 & G4 V & 7.55 & 0.65 & s & \\ HD 117378 & F9.5 V & 7.64 & 0.56 & s & \\ HD 124694 & F8 V & 7.19 & 0.52 & cpm & \\ HD 130322 & G8.5 V & 8.04 & 0.78 & Ex;hj & \\ HD 131511 & K0 V & 6.01 & 0.83 & SB & \\ HD 138763 & F9 V & 6.51 & 0.58 & s & \\ HD 149661 & K0 V & 5.76 & 0.83 & V ? 
& MtW \\ HD 152391 & G8.5 V (k) & 6.64 & 0.76 & s & MtW \\ HD 154417 & F9 V & 6.01 & 0.58 & s & MtW \\ HD 170778 & G0- V (k) & 7.52 & 0.59 & s & \\ HD 189733 & K2 V (k) & 7.65 & 0.93 & V,Ex;hj & \\ HD 190771 & G2 V & 6.17 & 0.64 & V & \\ HD 206860 & G0 V & 5.94 & 0.59 & V (T2.5\tablenotemark{e}),Ex;j & MtW \\ HD 209393 & G5 V (k) & 7.97 & 0.68 & s & \\ HD 217813 & G1 V & 6.64 & 0.60 & s & \\ HD 222143 & G3 V (k) & 6.58 & 0.65 & s & \\ \enddata \tablenotetext{a}{Spectral types from \citet{gray03} and \citet{gray06} unless otherwise indicated.} \tablenotetext{b}{Key to duplicity notes: s = single, a = member of association, c = member of cluster, SB = spectroscopic binary, V = visual binary (along with spectral types of companions, if known), cpm = common proper motion companion, Ex = exoplanet host: hj = hot jupiter; j = jupiter-mass planet.} \tablenotetext{c}{The stars indicated are in common with other spectroscopic activity programs, in particular MtW = Mount Wilson project \citep{baliunas95} and the Solar/Stellar spectrograph project \citep{hall09}.} \tablenotetext{d}{Simbad lists a spectral type of M5 V for HD~82885B, but gives no source.} \tablenotetext{e}{Brown dwarf companion \citep{luhman07}.} \end{deluxetable*} The Lowell SSS project has shown the importance and value of contemporaneous photometry, and so we added a photometric component to our project in 2011. We monitor our program stars in 5 photometric bands, the Str\"{o}mgren-$v$ ($\lambda_{\rm eff} = 4100$\AA), Johnson-Cousins $B$ (4450\AA), $V$ (5510\AA), and $R$ (6530\AA) bands, and a 3~nm-wide passband centered on the H$\alpha$ line (6563\AA). This photometric system is optimized to detect stellar-activity variations. 
For instance, it is well known that late-type active stars show greater variability at shorter wavelengths; this is related to a greater contrast between the photosphere and the spots, and a similar increase in the contrast between the photospheric faculae and the photosphere, at those wavelengths. During flare events, emission in the Paschen continuum rises sharply with decreasing wavelength. For both these reasons, it is expected that photometric variability will be more apparent in the Str\"{o}mgren-$v$ filter than in the Str\"{o}mgren-$b$ ($\lambda_{\rm eff} = 4670$\AA) filter employed by the Lowell SSS project. Variation in stellar activity, especially during flare events, should also be apparent in the H$\alpha$ line. We will examine the relationship between these photometric data and the spectroscopic indices presented in this paper in Paper II of this series. \section{Observations} \label{sec:obs} \subsection{Spectroscopy} \label{sec:spectra} Spectroscopic observations for this project have been carried out primarily with the G/M spectrograph on the Dark Sky Observatory (Appalachian State University) 0.8-m reflector. Except for early in the endeavor, observations for this project on that instrument have been obtained with the 1200~g\,mm$^{-1}$ grating in the first order. That grating gives a spectral range of 3800 -- 4600\AA, with a resolution of 1.8\AA/2 pixels ($R \sim 2300$). This spectral range includes the \ion{Ca}{2} H \& K lines as well as the \ion{Ca}{1} resonance line, the G-band, and the H$\gamma$ line. Exposures have been calculated to give an S/N of at least 100 in the continuum near the \ion{Ca}{2} H \& K lines, which means that the S/N near the G-band is consistently better than 150. A few early observations were made with the 600~g\,mm$^{-1}$ grating (used in the first order), yielding a resolution of 3.6\AA/2 pixels, and with the 1000~g\,mm$^{-1}$ grating (used in the second order), yielding a resolution of $\sim 1$\AA/2 pixels. 
Before April 2009, our spectra were recorded on a thinned, back-illuminated $1024 \times 1024$ pixel Tektronics CCD operated in the multipinned-phase mode. Since April 2009, we have been using an Apogee camera with a $1024 \times 256$ pixel e2v technologies CCD30-11 chip with enhanced ultraviolet sensitivity. These two chips have very similar pixel sizes and spectral sensitivities, and we have detected only minor changes in the instrumental systems (detailed below) in the transition between the two CCDs. An Fe-Ar hollow-cathode comparison lamp was observed for wavelength calibrations, and the spectroscopic data were reduced with IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation.} using standard techniques. Since January 2013 the VATTspec spectrograph on the Vatican Advanced Technology Telescope (VATT; 1.8-m, located on Mount Graham, Arizona) has also been used for this project, primarily for high-cadence, high-S/N observations designed to detect flares and other short-term events on these stars. Those observations will be discussed in a later paper in this series. For these observations, the VATTspec is used with a 1200 g mm$^{-1}$ grating which gives a resolution of 0.75\AA/2 pixels in the vicinity of the \ion{Ca}{2} H \& K lines, with a spectral range of 3640 -- 4630\AA. The spectra are recorded on a low-noise STA0520A CCD with 2688$\times$512 pixels (University of Arizona Imaging Technology serial number 8228). Two hollow cathode lamps, Hg and Ar, were observed simultaneously for wavelength calibrations, and the spectroscopic data were again reduced with IRAF using standard techniques. We have also obtained high-resolution echelle data for six of our stars with the FIES spectrograph on the Nordic Optical Telescope \citep{telting14}. 
These data, which were obtained under the Nordic Optical Telescope Service Observing Program, employed the FIES spectrograph with the high-resolution fiber, yielding a resolving power of 65,000 and a spectral range of 3640 -- 7360\AA. Spectra from the FIES spectrograph were reduced with FIEStool. \subsection{Photometry} An important component of the Young Solar Analogs project is concurrent multiband photometry of our program stars. The analysis of this photometry and how it relates to our spectroscopic observations will be the subject of Paper II in this series. In March 2011 we began obtaining photometric observations in the Str\"{o}mgren-$v$, Johnson-Cousins $B$, $V$, $R$, and narrowband H$\alpha$ filter system, described in the previous section, by employing a CCD camera on a 0.15-m, 1300-mm focal-length astrograph attached to the 0.8-m Dark Sky Observatory reflector. The detector is a KAF-8300 monochrome CCD, operated with on-chip $2 \times 2$ binning to give an effective pixel size of $10.8 \times 10.8\mu$m. The CCD utilizes an SBIG ``even illumination shutter'', which ensures uniform exposures over the entire field even for very short exposures. Flat fields are obtained every night with a ``Flipflat'' luminescent panel, which offers more consistent flats than sky flats. This instrument, which has a 48\arcmin\ $\times$ 36\arcmin\ field of view, is known as the ``Piggy-back'' telescope. It enables us to obtain photometry simultaneously with the spectroscopy. In April 2012 we installed a small robotic dome at the Dark Sky Observatory containing a clone of the Piggy-back telescope mounted on a German equatorial mount. This robotic telescope employs the CCDAutopilot5 and Pinpoint software which, when combined, allow fully automated operation with precise and consistent centering of the object to within a few arcseconds. 
This telescope enables us to obtain photometry on every clear night, as the YSA project has access to the 0.8-m and Piggy-back telescopes only $\sim 11 - 12$ nights a month. Both the Robotic and the Piggy-back telescopes are operated very slightly out of focus so that the star image is spread over a number of pixels. This enables more precise photometry. Multiple exposures are obtained for each target; these are reduced and then combined using the IRAF {\it xregister} function. Since August 2014 we have also obtained photometry with a wide-field imager mounted on the Robotic telescope. This wide-field imager consists of an ST-8300 SBIG CCD, a filter wheel with Johnson $B$, $V$, and $R$ filters, and a Pentax 150-mm f/3.5 camera lens. This setup yields a $6.9^\circ \times 5.3^\circ$ field of view, and supplements the Robotic telescope data for program stars that do not have sufficient comparison stars in the 48\arcmin\ $\times$ 36\arcmin\ field of view of the main telescope. \subsubsection{Photometric Reduction Technique} Reducing the photometric data from the Piggy-back and Robotic telescopes is challenging in a number of ways. Despite the small aperture (0.15~m), some of our program stars are bright ($V < 6$), which requires short exposures. To mitigate this difficulty, the telescopes are slightly defocused, and we obtain multiple exposures, which are stacked using IRAF routines that preserve the stellar flux. None of our fields are crowded, and so photometry is carried out on the stacked images using the IRAF APPHOT package. We utilize differential photometry to determine the magnitudes of our program stars. In most cases, the program star is the brightest in the field. Suitable comparison and check stars are typically one or two magnitudes fainter than the program star, so the standard differential photometry technique leads to unacceptably large photometric errors. 
To achieve better photometric accuracy we have devised an improved method, which we call the ``Superstar technique'' (SST). The SST, instead of utilizing a handful of comparison stars, considers the photon flux in the entire star field in the image. Thus the SST adds up the flux from many different sources, both bright and faint, and constructs from that summed flux a ``super'' comparison star that often has a flux comparable to that of the program star. The technique compares each individual source against the summed flux, thus enabling, in an interactive fashion, the elimination of variable stars from the final summed flux. In this way a reference file of comparison stars (the ``reference stars''), often 20 -- 50 objects, is constructed. The individual fluxes in that reference file are based on averages over a large number of nights, so the relative fluxes are known to high precision. To determine the magnitude of the program star for a given night, the SST identifies as many of the reference stars as possible on the stacked frame for that night (it is not necessary to identify all of the reference stars) and uses those identified to construct the ``super'' comparison star. The summed flux for that super comparison is compared to the summed flux of the identified stars in the reference file, and that ratio enables the calculation of a $\Delta m$ for that particular observation. That $\Delta m$ is added to the instrumental magnitude of the program star to give the magnitude for that observation. The magnitudes so determined are not yet on the standard system, but are offset by a constant zeropoint shift. If a number of the reference stars have measured magnitudes on a standard system, they can be used to calculate that zeropoint shift. However, most of our work can be carried out in the instrumental system. 
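The $\Delta m$ bookkeeping at the heart of the SST can be sketched in a few lines. This is an illustrative reimplementation, not the project's pipeline code; the function name, the sign convention for $\Delta m$, and all flux values are our own assumptions.

```python
import numpy as np

def sst_delta_m(ref_flux, night_flux):
    """Sketch of the Superstar technique's Delta-m step.

    ref_flux   -- long-term mean fluxes of the reference stars
    night_flux -- fluxes of the same stars on one night; np.nan marks
                  reference stars not identified on that frame
    """
    found = ~np.isnan(night_flux)
    super_night = night_flux[found].sum()   # the "super" comparison star
    super_ref = ref_flux[found].sum()       # same stars, reference-file fluxes
    # Magnitude offset of this night relative to the reference system.
    # (Sign convention is an assumption; the text only says that Delta-m
    # is added to the program star's instrumental magnitude.)
    return -2.5 * np.log10(super_ref / super_night)

# Hypothetical reference file of five comparison stars:
ref_flux = np.array([1.0e5, 4.0e4, 2.5e4, 1.2e4, 8.0e3])
# A night with 10% lower transparency, one reference star unidentified:
night_flux = 0.9 * ref_flux
night_flux[3] = np.nan

dm = sst_delta_m(ref_flux, night_flux)      # about -0.114 mag
prog_mag = 9.317 + dm                       # hypothetical program star
```

Because the super comparison star is built only from reference stars actually identified on a given frame, and the same subset is summed in the reference file, the flux ratio is insensitive to which subset happens to be recovered on a given night.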
The Superstar technique gives best results when the program star is situated in a rich stellar field, enabling the summation of scores of stellar fluxes into the single super comparison star. For those stars in our program for which 20 or more reference stars are available, the typical photometric errors in the individual Johnson-Cousins $B$, $V$, and $R$ magnitudes are on the order of 0.005 -- 0.007 mag. The errors in the Str\"{o}mgren-$v$ and H$\alpha$ bands tend to be somewhat higher: 0.007 -- 0.010 mag. For the brightest stars in our program and stars with sparse fields ($ < 20$ reference stars) the errors are higher, and typically range, on good nights, from 0.010 -- 0.015 mag, with slightly higher errors in Str\"{o}mgren-$v$ and H$\alpha$. These are the stars that will benefit from the photometry obtained with the wide-field imager that is mounted on the Robotic telescope (see above). We defer a deeper discussion of the photometric errors until Paper II, which will be devoted to an analysis of the photometric data as well as its relationship to the spectroscopic data discussed in this paper. 
\section{Basic Physical Parameters} \begin{deluxetable*}{llllcll} \tablecolumns{7} \tablewidth{0pt} \tablecaption{Young Solar Analog Stars\\Basic Physical Data\label{tbl:bpp}} \tablehead{ \colhead{Name} & \colhead{$T_{\rm eff}$(K)} & \colhead{$\log g$} & \colhead{[M/H]} & \colhead{$\xi_t$} & \colhead{$v\sin i$ {\scriptsize km s$^{-1}$}} & \colhead{Source\tablenotemark{a}}\\ & & & & \colhead{{\scriptsize km s$^{-1}$}} & \colhead{(error)} & } \startdata HD 166 & 5454 & 4.52 & $+0.05$ & 1.3 & \phn4.5 (0.2) & Keck \\ HD 5996 & 5463 & 4.60 & $+0.01$ & 0.7 & \phn0.0 (1.5) & Elodie \\ HD 9472 & 5705 & 4.46 & $-0.03$ & 1.1 & \phn3.1 (0.2) & Keck \\ HD 13531 & 5595 & 4.54 & $-0.02$ & 1.1 & \phn6.1 (0.1) & Keck \\ HD 16673 & 6241 & 4.38 & $-0.05$ & 1.3 & \phn7.3 (0.2) & Elodie \\ HD 27685 & 5681 & 4.43 & $+0.13$ & 1.0 & \phn1.6 (1.0) & Elodie \\ HD 27808 & 6217 & 4.31 & $+0.11$ & 1.2 & 12.7 (0.2) & Elodie \\ HD 27836 & 5843 & 4.35 & \nodata & \nodata & \nodata & \nodata \\ HD 27859 & 5887 & 4.36 & $+0.06$ & 1.2 & \phn7.3 (0.2) & Keck \\ HD 28394 & 6243 & 4.31 & $+0.09$ & 1.2 & 22.0 (1.0) & Keck \\ HD 42807 & 5722 & 4.55 & $-0.03$ & 1.2 & \phn5.0 (0.2) & Keck \\ HD 76218 & 5380 & 4.56 & $+0.07$ & 1.0 & \phn3.4 (0.2) & Keck \\ HD 82885 & 5487 & 4.43 & $+0.29$ & 1.3 & \phn3.2 (0.2) & Keck \\ HD 96064 & 5402 & 4.54 & $+0.13$ & 0.6 & \phn2.8 (0.5) & Elodie \\ HD 101501 & 5535 & 4.55 & $-0.04$ & 1.0 & \phn2.8 (0.4) & Keck \\ HD 113319 & 5736 & 4.53 & $-0.05$ & 1.1 & \phn3.6 (0.2) & Keck \\ HD 117378 & 6000 & 4.51 & $-0.07$ & 1.3 & 10.2 (0.2) & NOT \\ HD 124694 & 6195 & 4.44 & $+0.05$ & 1.2 & 17.6 (0.5) & NOT \\ HD 130322 & 5385 & 4.53 & $+0.05$ & 1.0 & \phn0.0 (1.5) & Elodie \\ HD 131511 & 5215 & 4.51 & $+0.07$ & 1.2 & \phn4.7 (0.2) & NOT \\ HD 138763 & 6040 & 4.43 & \nodata & \nodata & \nodata & \nodata \\ HD 149661 & 5255 & 4.57 & $-0.01$ & 1.0 & \phn1.5 (0.2) & Paranal \\ HD 152391 & 5443 & 4.53 & $+0.02$ & 1.2 & \phn4.3 (0.2) & NOT \\ HD 154417 & 6022 & 4.42 & $-0.02$ & 1.4 & 
\phn6.8 (0.2) & Keck \\ HD 170778 & 5925 & 4.48 & $+0.01$ & 1.3 & \phn7.9 (0.2) & NOT \\ HD 189733 & 5049 & 4.59 & $+0.04$ & 1.1 & \phn2.9 (0.2) & Keck \\ HD 190771 & 5789 & 4.45 & $+0.12$ & 1.5 & \phn5.4 (0.2) & NOT \\ HD 206860 & 5986 & 4.49 & $-0.07$ & 1.5 & 10.0 (0.2) & Keck \\ HD 209393 & 5670 & 4.58 & $-0.10$ & 1.0 & \phn4.0 (0.2) & Keck \\ HD 217813 & 5876 & 4.45 & $+0.00$ & 1.5 & \phn4.4 (0.2) & Keck \\ HD 222143 & 5787 & 4.43 & $+0.06$ & 1.3 & \phn3.2 (0.2) & Keck \\ Sun & 5774 & 4.44 & $+0.00$ & 1.0 & \phn1.8 (0.2) & NSO \\ \enddata \tablenotetext{a}{Keck: The Keck Observatory Archive https://koa.ipac.caltech.edu/cgi-bin/KOA/nph-KOAlogin; Elodie: The Elodie Archive http://atlas.obs-hp.fr/elodie/, \citet{moultaka04}; NOT: Nordic Optical Telescope Service Observing Proposal P50-410; Paranal: The UVES Paranal Observatory Project (POP), \citet{bagnulo03}, http://www.eso.org/sci/observing/tools/uvespop.html; NSO: \citet{kurucz84}.} \end{deluxetable*} Table \ref{tbl:bpp} presents basic physical data, namely effective temperatures, surface gravities ($\log g$), metallicities ([M/H]), microturbulent velocities ($\xi_t$), and projected rotational velocities ($v\sin i$) for the program stars. The effective temperatures were determined using the infrared flux method formulae of \citet{cassagrande10}, specifically, those for $b-y$, $B-V$, and $V-K_s$, where $K_s$ is the 2MASS K-magnitude \citep{skrutskie06}. The effective temperatures presented are straight means of the values based on those three indices, except for some of the brighter stars for which $K_s$ is saturated and thus unreliable. The statistical error associated with these temperatures is on the order of $\pm 70$K, with an additional systematic error in the zeropoint of the system of about $15 - 20$K \citep{cassagrande10}. 
The gravities were calculated via the absolute bolometric magnitudes, based on Hipparcos parallaxes as recalculated by \citet{vanleeuwen07} and bolometric corrections from \citet{flower96}, along with the mass-luminosity relationship from \citet{andersen91}, and have errors on the order of $\pm 0.10$ in the $\log$. Metallicities, microturbulent velocities, and projected rotational velocities were calculated from measurements of high-resolution archival spectra from the HIRES spectrograph on the Keck 10-m telescope, the Elodie spectrograph on the 193-cm telescope at the Observatoire de Haute-Provence, and the UVES spectrograph on the ESO VLT provided by the UVES Paranal Observatory Project, as well as from new observations with the FIES spectrograph on the Nordic Optical Telescope. Projected rotational velocities were calculated with the cross-correlation method. To do this, we first estimated the line-spread function (LSF) for each spectrum by measuring the FWHM in \aa ngstroms of a number of telluric lines in the atmospheric $\alpha$-band of oxygen, centered near 6300\AA, or, in some cases, the $\alpha^\prime$ band centered near 5800\AA, and then transformed that FWHM to the echelle orders containing the spectral range 6050 -- 6200\AA, where most of the measurements for calculating $v\sin i$ and [M/H] were made. Once the LSF was characterized, we computed synthetic spectra in the 6050 -- 6200\AA\ range with the SPECTRUM\footnote{http://www.appstate.edu/$\mathtt{\sim}$grayro/spectrum/spectrum.html} code of \citet{gray94} and solar-metallicity ATLAS12 models \citep{castelli03} calculated with the effective temperatures and gravities in Table \ref{tbl:bpp}. Those synthetic spectra were then convolved with the LSF. Cross correlations were obtained between the synthetic spectrum and the observed spectrum, and between the synthetic spectrum and rotationally broadened versions of itself for a range of rotational velocities. 
These cross correlations were normalized at a common point and compared to derive the rotational velocity of the program star. Our results are in very good agreement with those of \citet{mishenina12} who used the cross-correlation method of \citet{queloz98}. Once the LSF and the $v\sin i$ were known, we used a $\chi^2$ minimization method comparing the observed and synthetic spectra to determine both the metallicity and the microturbulent velocity for each program star. For the synthetic spectra, we used a spectral line list in the region 6050 -- 6200\AA\ with updated $\log(gf)$ values from the NIST Atomic Spectra Database, version 5.2 \citep{nist}. Broadening parameters and $\log(gf)$ values were adjusted, when necessary, by reference to the Solar Flux Atlas \citep{kurucz84}. The metallicities and microturbulent velocities are recorded in Table \ref{tbl:bpp}. We estimate errors in that Table to be $\pm 0.05$ dex for the metallicity, and about $\pm 0.3$ km s$^{-1}$ for the microturbulent velocity. The projected rotational velocities will be used in a later paper in this series to interpret periodicities observed in our activity and photometric data. \section{Spectroscopic Indices for Stellar Activity} \label{sec:indices} Our project measures four spectroscopic indices from the spectra obtained on the G/M spectrograph. These are the \ion{Ca}{2} H \& K chromospheric activity index, based on the Mount Wilson ``$S$'' index (hereinafter $S_{\rm MW}$), and indices for the \ion{Ca}{1} 4227\AA\ resonance line, the 4305\AA\ G-band, and the 4340\AA\ H$\gamma$ line. \subsection{\ion{Ca}{2} H \& K chromospheric activity indices} \subsubsection{Definition and Measurement of the Instrumental Indices} \citet{wilson68,wilson78} and \citet{vaughan78} introduced the Mount Wilson chromospheric activity index, $S_{\rm MW}$, which recorded the chromospheric flux in the cores of the \ion{Ca}{2} H \& K lines in ratio with flux in the ``continuum'' on either side of those lines. 
Their instrument employed effective triangular bands with full width at half peak of 1.09\AA\ centered on the cores of the H \& K lines, and continuum bands of 20\AA\ width to the violet side (3891.067 -- 3911.067\AA) and the red (3991.067 -- 4011.067\AA). The fluxes measured through these bands are ratioed to give the $S_{\rm MW}$ index. We measure two instrumental indices from the DSO spectra, the $S_2$ index which measures the flux in the cores of the H \& K lines with 2\AA-wide rectangular bands and the $S_4$ index which employs 4\AA-wide rectangular bands in the H \& K cores. Both indices utilize the same continuum bands as the Mount Wilson Project. The indices are calculated (in analogy with the Mount Wilson index) with the equations \begin{displaymath} S_2 = 5\frac{f_{K2} + f_{H2}}{f_v + f_r} \end{displaymath} \begin{displaymath} S_4 = 5\frac{f_{K4} + f_{H4}}{f_v + f_r} \end{displaymath} where the $f$'s are the {\it monochromatic} fluxes (i.e. the integrated flux divided by the bandwidth) through the various bands described above. In particular, $f_{K2}$ and $f_{K4}$ are the fluxes measured in the core of the \ion{Ca}{2} K-line using 2 and 4\AA\ bandpasses respectively; $f_{H2}$ and $f_{H4}$ are the same for the \ion{Ca}{2} H-line, and $f_v$ and $f_r$ are the fluxes in the two continuum bands. The DSO spectra do not have sufficient resolution to directly measure 1\AA\ fluxes in the cores of the \ion{Ca}{2} H \& K lines. However, the 0.75\AA/2 pixel resolution of the VATTspec spectra does allow direct measurement of an $S_1$ index, which employs rectangular 1\AA\ passbands in the cores of the H \& K lines. 
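To make the index definitions concrete, here is a minimal sketch of how an $S$-type index can be evaluated on a rest-frame spectrum rebinned to a uniform wavelength grid. The band limits are those quoted above; the helper names and the flat test spectrum are our own, and this is not the project's measurement code.

```python
import numpy as np

H_LINE, K_LINE = 3968.47, 3933.66      # Ca II H & K line centers (angstroms)
V_BAND = (3891.067, 3911.067)          # violet continuum band
R_BAND = (3991.067, 4011.067)          # red continuum band

def mono_flux(wave, flux, lo, hi):
    """Monochromatic flux: integrated flux over [lo, hi] divided by the
    bandwidth; on a uniform grid this is just the mean flux in the band."""
    m = (wave >= lo) & (wave <= hi)
    return flux[m].mean()

def s_index(wave, flux, half_width):
    """S-type index with rectangular core bands of full width 2*half_width."""
    f_k = mono_flux(wave, flux, K_LINE - half_width, K_LINE + half_width)
    f_h = mono_flux(wave, flux, H_LINE - half_width, H_LINE + half_width)
    f_v = mono_flux(wave, flux, *V_BAND)
    f_r = mono_flux(wave, flux, *R_BAND)
    return 5.0 * (f_k + f_h) / (f_v + f_r)

# Sanity check on a flat spectrum: every band has the same monochromatic
# flux, so S2 = S4 = 5(1 + 1)/(1 + 1) = 5 exactly.
wave = np.arange(3880.0, 4020.0, 0.1)  # 0.1 A grid, as in the text
flux = np.ones_like(wave)
S2 = s_index(wave, flux, 1.0)          # 2 A core bands
S4 = s_index(wave, flux, 2.0)          # 4 A core bands
```

For a real stellar spectrum the core fluxes fall well below the continuum fluxes, so the indices are small positive numbers that grow with chromospheric emission.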
The advantage of the $S_1$ index is that it is closer to the original instrumental system of the Mount Wilson project (although that project utilized a triangular passband), and the transformation from $S_1$ to $S_{\rm MW}$ is linear and does not involve a color ($B-V$) term, whereas the $S_2 \rightarrow S_{\rm MW}$ and $S_4 \rightarrow S_{\rm MW}$ transformations are both nonlinear and require a color term (see below). Steps in the measurement of the $S_1$, $S_2$, and $S_4$ indices include the transformation of the stellar spectrum in question to the rest frame of the star, the rebinning of the spectrum to a uniform spacing of 0.1\AA, and the numerical integration of the spectrum in the various passbands. We employ the raw (non-flux-calibrated) spectrum for these calculations. The division by the sum of the continuum fluxes ($f_v + f_r$) in the above equations accounts for changes in the slope of the continuum due to differing amounts of atmospheric extinction, although for routine observations we attempt to observe the star as close to the meridian as possible. For moderately high S/N spectra (S/N $> 100$), all three indices may be measured to a precision of $\sim 0.001$ in the index. \subsubsection{Calibration of the Instrumental Indices: Transformation to the Mount Wilson index} The transformation of $S_4$ to $S_{\rm MW}$, as described in \citet{gray03}, is problematical, as the relationship is highly nonlinear. In addition, it was not appreciated at the time that there is a small but significant color term in the transformation. The transformation for $S_2$ is better behaved, but is still nonlinear, and a color term is still required. As stated above, the $S_1$ indices measured in the VATTspec spectra are linearly correlated with the Mount Wilson $S_{\rm MW}$, and that transformation does not involve a color term. 
To derive that transformation, we have observed with the VATTspec a number of the chromospheric activity calibration stars used by \citet{gray03} in their original calibration of $S_{18}$, which is the same as the $S_4$ index of the present paper. The relationship between the VATTspec $S_1$ index and the mean $S_{\rm MW}$ indices recorded for those calibration stars in \citet{baliunas95} is given by: \begin{displaymath} S_{\rm MW} = -0.0011 + 4.6920S_1 \quad \sigma = 0.0119 \end{displaymath} and illustrated in Figure \ref{fig:VATTS1Smw}. The goodness of fit is not improved with a quadratic term, and the residuals show no correlation with $B-V$. Most of the scatter in that relationship may be traced to the variability of the calibration stars, especially the more active calibration stars. \begin{figure} \includegraphics[width=3.25in]{f1.eps} \caption{The $S_1 \rightarrow S_{\rm MW}$ (Mount Wilson) transformation for VATTspec spectra. The calibration is linear, and has no significant color term.} \label{fig:VATTS1Smw} \end{figure} As mentioned above, the $S_2 \rightarrow S_{\rm MW}$ and the $S_4 \rightarrow S_{\rm MW}$ transformations are both nonlinear and require a color term. The nonlinear nature of these transformations is problematical when attempting an extrapolation of the transformation to very active stars. Because the resolution of the DSO spectra is $\sim 1.8$\AA/2 pixels, we cannot directly measure a DSO $S_1$ index. However, experimentation with the VATTspec spectra suggests a solution. The actual \ion{Ca}{2} H \& K chromospheric emission in main-sequence stars is intrinsically narrow (FWHM $\sim 0.5$\AA), narrower than even the 1\AA\ passband employed by the Mount Wilson project. That flux is entirely contained in the H \& K passbands employed in the $S_1$, $S_2$, and $S_4$ indices, but those passbands involve successively larger amounts of {\it photospheric} flux. 
This suggests that it should be possible to use the $S_2$ and the $S_4$ indices to {\it extrapolate} linearly to an $S_1$ index: $S_1 = 1.5S_2 - 0.5S_4$. That this is feasible can be demonstrated with the VATTspec spectra. Figure \ref{fig:VATTS1} shows the correlation between the directly measured VATTspec $S_1$ index, and $S_1^\prime$ extrapolated from $S_2$ and $S_4$. The two are linearly related, and $S_1^\prime$ can predict the directly measured $S_1$ index to better than $\pm$ 1\%. \begin{figure} \includegraphics[width=3.25in]{f2.eps} \caption{The relationship between the directly measured $S_1$(VATT) activity index and the {\it extrapolated} $S_1^\prime$ index, based on the $S_2$ and $S_4$ indices.} \label{fig:VATTS1} \end{figure} This provides a way to derive a linear transformation with no color term between the instrumental DSO system and the Mount Wilson system. An $S_1$ extrapolated index is formed from the $S_2$ and $S_4$ instrumental indices, and that $S_1$ index is calibrated to the Mount Wilson $S_{\rm MW}$ index via observations of the chromospheric activity calibration stars of \citet{gray03}. For most of those calibration stars we have only a few ($< 5$) observations scattered over the past 15 years. These we refer to as ``snapshot'' observations. However, as part of the YSA project we have intensively observed eight Mount Wilson stars -- HD~45067, HD~143761, HD~207978, HD~82885, HD~101501, HD~152391, HD~154417, and HD~206860. The first three of these stars are regularly observed ``chromospherically stable stars'' used to monitor the stability of our instrumental system (see below), and the latter five are active G-type stars. For these stars, we can form multi-year means for the instrumental indices that are much better correlated with the Mount Wilson means than the snapshot observations of the other calibration stars. In deriving the calibration, we give the snapshot observations a weight of 1 and the multi-year means a weight of 5. 
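The resulting reduction chain from the measured $(S_2, S_4)$ pair to the Mount Wilson system is then just two lines of arithmetic, sketched below with the coefficients quoted in this section; the input values are made up for illustration.

```python
def s1_extrapolated(s2, s4):
    # Linear extrapolation in core bandwidth: S1' = 1.5*S2 - 0.5*S4
    return 1.5 * s2 - 0.5 * s4

def smw_from_dso_s1(s1):
    # Linear DSO calibration quoted in this section:
    # S_MW = 0.0323 + 4.8335*S1
    return 0.0323 + 4.8335 * s1

# Hypothetical instrumental measurements:
s2, s4 = 0.060, 0.046
s1 = s1_extrapolated(s2, s4)   # 0.067
smw = smw_from_dso_s1(s1)      # about 0.356
```

The extrapolation works because the narrow chromospheric emission contributes equally to both core bands, while the excess photospheric flux grows roughly linearly with bandwidth.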
This yields the calibration (see Figure \ref{fig:DSOS1Smw}): \begin{displaymath} S_{\rm MW} = 0.0323 + 4.8335S_1 \quad \sigma = 0.0077 \end{displaymath} The residuals from the calibration show no evidence for a color term. In addition, as the figure illustrates, extrapolation of this linear relationship seems to hold for very active stars. \begin{figure} \includegraphics[width=3.25in]{f3.eps} \caption{The DSO $S_1 \rightarrow S_{\rm MW}$(Mount Wilson) calibration. The ordinate is the mean Mount Wilson $\langle S_{\rm MW} \rangle$ index \citep{baliunas95}. The small circles represent snapshot (single to a few) observations of the Mount Wilson calibration stars \citep{gray03}. The large squares represent Mount Wilson stars that have been regularly observed at DSO since 2007. For these stars the 8-year mean $S_1$ index (in some cases derived from over a hundred observations) is used. These stars are given five times the weight of the snapshot stars in deriving the calibration. Finally, the crosses represent individual snapshot observations of very active Mount Wilson stars. These stars were not used in the derivation of the calibration, but indicate that extrapolation of the calibration is adequate even for very active stars. } \label{fig:DSOS1Smw} \end{figure} \begin{figure} \includegraphics[width=3.25in]{f4.eps} \caption{The top panel shows the histogram of the S/N values of our observations. The S/N values are estimated in the continuum just longwards of the \ion{Ca}{2} H line. The arrow indicates the average S/N, about 180. The central panel shows the results of a Monte Carlo simulation of the measurement error of $S_{\rm MW}$ as a function of S/N. At S/N = 180, the measurement error is about $\pm 0.0025$. The bottom panel shows a similar simulation for the G-band index. While the S/N in the continuum at the G-band is $\sim 1.3$ -- $1.4$ times that just longwards of the \ion{Ca}{2} H line, we have plotted, for simplicity, the G-band errors against the H-line S/N. 
At S/N = 180, the measurement error in the G-band index is approximately $\pm 0.0005$.} \label{fig:SN} \end{figure} The precision of our determinations of $S_{\rm MW}$ depends on the S/N of the observations. We have attempted to estimate those precisions via a Monte Carlo method that begins with a synthetic spectrum of the \ion{Ca}{2} H \& K region smoothed to a resolution of 1.8\AA/2 pixels (the resolution of the DSO spectra). The Monte Carlo technique simulates exposing on the spectrum until a certain S/N is achieved in the continuum just longwards of \ion{Ca}{2} H. That exposure is processed through our measuring programs in exactly the same way as the real spectra, including the velocity correction (the synthetic spectra are given random radial velocity shifts between $-30$ and $+30$ km/s), the measurements of $S_2$ and $S_4$, the calculation of $S_1$, and the transformation to the Mount Wilson system, enabling a calculation of the error $\Delta S_{\rm MW}$ for a given simulation. Those errors are plotted against S/N in the middle panel of Figure \ref{fig:SN}. The top panel of that same Figure shows a histogram of the S/N values of our observations. The average S/N is $\sim 180$, for which a measurement precision of $\pm 0.003$ in the $S_{\rm MW}$ index is estimated. Indeed, this error estimate (which does not include any possible systematic errors in the transformation of our instrumental system to the Mount Wilson system) is consistent with our measurements of $S_1$ in the set of ``chromospherically stable'' stars (see below). The bottom panel of the figure shows a similar calculation for the G-band index (see below). \subsubsection{Stability of the Dark Sky Observatory Instrumental System} \begin{figure} \includegraphics[width=3.25in]{f5.eps} \caption{The seasonal mean residuals in the instrumental $S_1$ index observed for the three chromospherically ``stable'' stars, HD~45067 (filled circles), HD~143761 (diamonds), HD~207978 (squares).
The outer ``error'' bars indicate the standard deviation in the measured index for a given season. The inner error bars indicate the standard error of the mean. This diagram and similar ones for the other instrumental indices for the G-band, \ion{Ca}{1}, and H$\gamma$ can be used to assess the stability of the instrumental system and to derive corrections to apply to the observed indices.} \label{fig:residuals} \end{figure} To monitor the stability of the Dark Sky Observatory instrumental system, we have regularly observed for the past 5 years, every clear night, at least one chromospherically ``stable'' star, chosen from a set of stars showing flat activity on the Mt. Wilson project \citep{baliunas95}. The stable stars that we observe are HD 45067, HD 143761, and HD 207978. During the course of an observing season, the standard deviations for night-to-night variations of those stars range from 0.0004 -- 0.0012 in $S_1$. The lower figure in that range translates to a standard deviation in $S_{\rm MW} \sim 0.0019$, in line with our Monte Carlo estimates for the observational error in that index. To monitor any changes in the instrumental system, we have adopted the period July 1, 2011 (MJD = JD - 2450000 = 5743) to June 30, 2013 (MJD = 6445) as the reference zeropoint baseline for the instrumental system. Residuals in the seasonal means of the instrumental indices relative to that baseline will then reveal changes in the instrumental system. This is illustrated in Figure \ref{fig:residuals} for the $S_1$ index. That figure shows that the instrumental system has remained very stable from the time that we began regular monitoring of the chromospherically stable stars. However, beginning September 1, 2013 (MJD = JD - 2450000 = 6536), there was a very small but abrupt shift in the instrumental system. That shift can be traced to the return of the CCD to the manufacturers for repairs because of the failure of the vacuum seal. 
During that visit, not only was the vacuum seal repaired, but a new driver was installed that fixed a very low-level but variable bias pattern. In addition, improved optical baffling was installed in the interior of the CCD housing which may have slightly reduced the already very low level of scattered light. To correct for this shift in the instrumental system, we subtract 0.0007 from the $S_1$ indices obtained since September 1, 2013. That correction may be propagated, if required, to the $S_2$ and $S_4$ indices using the relationships between those indices. We have derived similar very small corrections to the other observed indices. Before April 2009, the spectroscopic data for this project were obtained with a Tektronics CCD (see \S \ref{sec:spectra}) on the same spectrograph. We have investigated the difference in the instrumental system between the two CCDs using spectra of inactive F-, G- and K-type stars taken with both CCDs and find a small systematic difference between the two systems of 0.0019 in the measurement of the $S_1$ index. This correction has been applied to the earlier data. \subsection{The G-band Index} \label{sec:Gband} \begin{deluxetable}{lll} \tablecolumns{3} \tablewidth{0pc} \tablecaption{Band definitions for the Photospheric Indices\label{tbl:bands}} \tablehead{ \colhead{Band Name} & \colhead{Violet Edge} & \colhead{Red Edge}} \startdata Continuum ($c_1$) & 4208.0\AA & 4214.0\AA \\ \ion{Ca}{1} 4226.7\AA & 4225.7\AA & 4227.7\AA \\ Continuum ($c_2$) & 4239.4\AA & 4245.4\AA \\ Continuum ($c_3$) & 4263.0\AA & 4266.0\AA \\ G-band & 4298.0\AA & 4312.0\AA \\ Continuum ($c_4$) & 4316.0\AA & 4320.5\AA \\ Continuum ($c_5$) & 4329.0\AA & 4334.0\AA \\ H$\gamma$ & 4339.5\AA & 4341.5\AA \\ Continuum ($c_6$) & 4345.0\AA & 4349.5\AA \\ \enddata \end{deluxetable} \begin{figure} \includegraphics[width=3.25in]{f6.eps} \caption{The variation of the three {\it photospheric} indices defined in this paper as a function of $B-V$ color and spectral type. 
The G-band (solid circles) comes to a maximum in the early K-type stars, and then declines. The \ion{Ca}{1} index (open circles) grows with increasing $B-V$ linearly until the mid K-type stars, after which it appears to saturate. The H$\gamma$ index (open triangles) decreases linearly with increasing $B-V$. The stars used for this diagram are the Mount Wilson calibration stars of \citet{gray03}, and the $B-V$ data are from \citet{mermilliod97}.} \label{fig:BV} \end{figure} At the suggestion of \citet{hall08}, an index has been designed to measure the G-band molecular feature in the blue-violet region of the spectrum. This wide, deep feature arises from the blended Q-branches of the 0-0 and 1-1 vibrational bands of the diatomic CH molecule. The G-band appears first in the early F-type stars, strengthens through the F- and G-type stars, comes to a broad maximum in the early K-type stars on the main sequence, and then weakens toward later types \citep[see][]{gray09}. The G-band index is measured by numerically integrating the stellar monochromatic flux in a 14\AA\ rectangular band centered at 4305\AA\ (corresponding closely to the visible extent of the G-band in low-resolution spectra, and similar to the passband of G-band interference filters used in observations of the sun) and ratioing that with ``continuum'' fluxes measured in two bands on either side of the G-band (see Table \ref{tbl:bands}). The G-band index is defined as: \begin{displaymath} 1 - \frac{\frac{1}{14\textrm{\AA}}\int_{4298\textrm{\AA}}^{4312\textrm{\AA}} I(\lambda)d\lambda} {0.247c_3 + 0.753c_4} \end{displaymath} where $c_3$ and $c_4$ represent the monochromatic fluxes in the two continuum bands, respectively. Because the continuum bands are not situated symmetrically relative to the G-band passband, the weightings in the denominator are designed to give the ``continuum'' value at the wavelength of the center of the G-band passband. 
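As a concrete illustration, the band-integration recipe for the G-band index can be sketched numerically. This is a minimal sketch, not the DSO reduction pipeline; the function names are ours, and a uniformly sampled, wavelength-calibrated 1-D spectrum is assumed:

```python
import numpy as np

# Band definitions from Table \ref{tbl:bands}, in Angstroms.
GBAND = (4298.0, 4312.0)
C3 = (4263.0, 4266.0)   # blue "continuum" band, centered at 4264.5 A
C4 = (4316.0, 4320.5)   # red  "continuum" band, centered at 4318.25 A

def band_mean(wave, flux, band):
    """Mean flux inside a wavelength band (uniform sampling assumed)."""
    lo, hi = band
    m = (wave >= lo) & (wave <= hi)
    return flux[m].mean()

def gband_index(wave, flux):
    """1 - (mean G-band flux)/(continuum interpolated to 4305 A).
    The 0.247/0.753 weights linearly interpolate the asymmetrically
    placed continuum bands to the center of the G-band passband."""
    cont = 0.247 * band_mean(wave, flux, C3) + 0.753 * band_mean(wave, flux, C4)
    return 1.0 - band_mean(wave, flux, GBAND) / cont
```

A half-depth G-band on a flat continuum, for instance, returns an index near 0.5.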
The ratio is subtracted from unity to give an index that varies between 0 and 1: 0 when the G-band is absent, 1 when the G-band is perfectly black. As expected, the G-band index is a strong function of $B-V$ (see Figure \ref{fig:BV}) and the spectral type. The G-band index will also be a function of metallicity and $\log g$ \citep[see][]{gray09}. We investigate in \S\ref{sec:var} the relationship of the G-band index to stellar activity. A Monte Carlo error analysis similar to that described for the $S_{\rm MW}$ index was carried out for the G-band index. This is illustrated in the lower panel of Figure \ref{fig:SN}. The typical measurement error for the G-band index at S/N $= 100$ is $\pm 0.0017$ and at S/N $= 180$ is $\pm 0.0011$. The Monte Carlo analysis appears to have captured the important sources of measurement error for the G-band, as may be deduced from Figure \ref{fig:G7}, where the standard deviations of the seasonal G-band data for all the program stars and the ``chromospherically stable'' reference stars are plotted against the G-band index. The horizontal line in that figure, which corresponds well with the lower envelope of the points, is the Monte Carlo G-band error for S/N $=180$. The dispersions that lie above that line presumably arise from actual stellar variability, a point that will be considered in \S\ref{sec:var} below. \begin{figure} \includegraphics[width=3.25in]{f7.eps} \caption{Verification of the Monte Carlo error analysis for the G-band index. This figure plots the seasonal dispersions ($\sigma$) of the G-band indices for all of the program stars plus the chromospherically stable reference stars against the G-band index. 
The horizontal line, which corresponds well with the lower envelope of the distribution of points, represents the Monte Carlo error calculation for S/N $=180$, the average S/N of our spectra.} \label{fig:G7} \end{figure} \subsection{The \ion{Ca}{1} Index} \label{sec:CaI} Another prominent absorption feature in the blue-violet spectrum of G- and K-type stars is the resonance line of \ion{Ca}{1} at 4226.7\AA. This absorption line grows steadily in strength toward later types, at least up to mid K-type stars. It is also sensitive to surface gravity, especially in the K-type stars \citep[see][]{gray09}. We have devised an index similar to the previously defined G-band index. The \ion{Ca}{1} index is measured by integrating over a 2\AA-wide band centered on the \ion{Ca}{1} line and ratioing that with fluxes in two symmetrically placed continuum bands. The formula used is: \begin{displaymath} 1 - \frac{\frac{1}{2\textrm{\AA}}\int_{4225.7\textrm{\AA}}^{4227.7\textrm{\AA}} I(\lambda)d\lambda} {0.5(c_1 + c_2)} \end{displaymath} where $c_1$ and $c_2$ are the continuum bands defined in Table \ref{tbl:bands}. As can be seen in Figure \ref{fig:BV}, the \ion{Ca}{1} index behaves as designed; it grows linearly from the F-type stars into the K-type stars, only saturating after a spectral type of K3. A Monte Carlo error analysis similar to that illustrated in Figure \ref{fig:SN} was carried out for the \ion{Ca}{1} index, giving a measurement error of $\pm 0.0027$ at S/N $= 180$. This value again corresponds well with the lower envelope of \ion{Ca}{1} index seasonal dispersions (see discussion in \S\ref{sec:Gband} above). \subsection{The H$\gamma$ Index} \label{sec:HG} Both the G-band index and the \ion{Ca}{1} index grow with decreasing temperature (at least up to the early K-type stars), and so it is useful to define another index that decreases with temperature. The hydrogen lines behave in exactly this way in the F-, G-, and K-type stars.
The best hydrogen line to use in the spectral range provided by our spectra from the Dark Sky Observatory is H$\gamma$. An index based on the H$\beta$ line would probably be preferable, because of the less crowded surroundings, but that line is outside our spectral range. The H$\gamma$ index is defined similarly to the \ion{Ca}{1} index, with a 2\AA-wide band centered on the H$\gamma$ line and flanking ``continuum'' bands (specified in Table \ref{tbl:bands}). The formula used is \begin{displaymath} 1 - \frac{\frac{1}{2\textrm{\AA}}\int_{4339.5\textrm{\AA}}^{4341.5\textrm{\AA}} I(\lambda)d\lambda} {0.4286c_5 + 0.5714c_6} \end{displaymath} The H$\gamma$ index behaves as designed, declining in strength with declining temperature (Figure \ref{fig:BV}). However, it appears to have only about half the temperature sensitivity of the \ion{Ca}{1} index. A Monte Carlo error analysis similar to that illustrated in Figure \ref{fig:SN} was carried out for the H$\gamma$ index, giving a measurement error of $\pm 0.0020$ at S/N $= 180$. The larger errors for the \ion{Ca}{1} and H$\gamma$ indices relative to the G-band index arise primarily from the narrower ``science'' bands. \begin{figure*} \begin{tabular}{ccc} \centering \includegraphics[width=2.0in]{f8a.eps} & \includegraphics[width=2.0in]{f8b.eps} & \includegraphics[width=2.0in]{f8c.eps} \\ \includegraphics[width=2.0in]{f8d.eps} & \includegraphics[width=2.0in]{f8e.eps} & \includegraphics[width=2.0in]{f8f.eps} \\ \includegraphics[width=2.0in]{f8g.eps} & \includegraphics[width=2.0in]{f8h.eps} & \includegraphics[width=2.0in]{f8i.eps} \\ \end{tabular} \caption{A montage of \ion{Ca}{2} H \& K activity index ($S_{\rm MW}$) time series (upper panel), and G-band index, \ion{Ca}{1}, and H$\gamma$ times series (lower panels) for our program stars (montage continued in Figures \ref{fig:KHG2}, \ref{fig:KHG3}, and \ref{fig:KHG4}). 
All the graphs are scaled identically, with a range of 0.15 in $S_{\rm MW}$, 0.03 in the G-band index, 0.08 in the \ion{Ca}{1} index, and 0.05 in the H$\gamma$ index so that amplitudes of variations and seasonal dispersions can be intercompared directly. The solid lines are Bezier curves drawn through the seasonal means. Typical error bars for S/N $= 180$ spectra are shown in the upper left-hand corner of the panels for the first star. } \label{fig:KHG1} \end{figure*} \begin{figure*} \begin{tabular}{ccc} \centering \includegraphics[width=2.0in]{f9a.eps} & \includegraphics[width=2.0in]{f9b.eps} & \includegraphics[width=2.0in]{f9c.eps} \\ \includegraphics[width=2.0in]{f9d.eps} & \includegraphics[width=2.0in]{f9e.eps} & \includegraphics[width=2.0in]{f9f.eps} \\ \includegraphics[width=2.0in]{f9g.eps} & \includegraphics[width=2.0in]{f9h.eps} & \includegraphics[width=2.0in]{f9i.eps} \\ \end{tabular} \caption{Continuation of the montage in Figure \ref{fig:KHG1}.} \label{fig:KHG2} \end{figure*} \begin{figure*} \begin{tabular}{ccc} \centering \includegraphics[width=2.0in]{f10a.eps} & \includegraphics[width=2.0in]{f10b.eps} & \includegraphics[width=2.0in]{f10c.eps} \\ \includegraphics[width=2.0in]{f10d.eps} & \includegraphics[width=2.0in]{f10e.eps} & \includegraphics[width=2.0in]{f10f.eps} \\ \includegraphics[width=2.0in]{f10g.eps} & \includegraphics[width=2.0in]{f10h.eps} & \includegraphics[width=2.0in]{f10i.eps} \\ \end{tabular} \caption{Continuation of the montage in Figures \ref{fig:KHG1} and \ref{fig:KHG2}.} \label{fig:KHG3} \end{figure*} \begin{figure*} \begin{tabular}{ccc} \centering \includegraphics[width=2.0in]{f11a.eps} & \includegraphics[width=2.0in]{f11b.eps} & \includegraphics[width=2.0in]{f11c.eps} \\ \end{tabular} \caption{Continuation of the montage in Figures \ref{fig:KHG1}, \ref{fig:KHG2}, and \ref{fig:KHG3}.} \label{fig:KHG4} \end{figure*} \section{Statistical Analysis of the Spectroscopic Results} \begin{deluxetable*}{llllllrllllll} \tablecolumns{13}
\tablewidth{0pt} \tablecaption{Young Solar Analog Stars\\Mean Activity Data, Predicted Rotational Periods in days, and Chromospheric Activity Ages\label{tbl:mad}} \tablehead{ \colhead{Name} & \colhead{$\langle S_{\rm MW} \rangle$} & \colhead{$\sigma$} & \colhead{$\langle\log(R^\prime_{\rm HK})\rangle$} & \colhead{$P_{\rm rot}(R^\prime_{\rm HK})$} & \colhead{$P_{\rm max}(v\sin i)$} & \colhead{Age} & \colhead{$\langle$G$\rangle$} & \colhead{$\sigma$} & \colhead{$\langle$\ion{Ca}{1}$\rangle$} & \colhead{$\sigma$} & \colhead{$\langle$H$\gamma\rangle$} & \colhead{$\sigma$} \\ & & & & \phn\phn(error) & \phn\phn(error) & (Myr) & & & & &} \startdata HD 166 & 0.429 & 0.017 & -4.393 & \phn7.52d (2.79) & 10.03d (0.45) & 375 & 0.459 & 0.002 & 0.539 & 0.006 & 0.384 & 0.005 \\ HD 5996 & 0.376 & 0.019 & -4.465 & 11.30\phantom{d} (2.80) & \phn\phn\phn\phn$\infty$ & 763 & 0.463 & 0.002 & 0.528 & 0.009 & 0.366 & 0.005 \\ HD 9472 & 0.322 & 0.011 & -4.495 & 10.06\phantom{d} (2.19) & 16.23\phantom{d} (1.05) & 762 & 0.408 & 0.002 & 0.448 & 0.009 & 0.405 & 0.005 \\ HD 13531 & 0.369 & 0.013 & -4.431 & \phn8.06\phantom{d} (2.37) & \phn7.38\phantom{d} (0.12) & 486 & 0.436 & 0.002 & 0.488 & 0.009 & 0.391 & 0.005 \\ HD 27685 & 0.310 & 0.017 & -4.510 & 10.14\phantom{d} (2.08) & 32.71\phantom{d} (20.45) & 810 & 0.425 & 0.002 & 0.456 & 0.010 & 0.410 & 0.006 \\ HD 27808 & 0.255 & 0.011 & -4.541 & \phn4.36\phantom{d} (0.80) & \phn5.15\phantom{d} (0.08) & 548 & 0.275 & 0.002 & 0.330 & 0.010 & 0.472 & 0.004 \\ HD 27836 & 0.346 & 0.011 & -4.397 & \phn4.03\phantom{d} (1.46) & \phn\phn\phn\nodata & 210 & 0.343 & 0.003 & 0.387 & 0.011 & 0.431 & 0.008 \\ HD 27859 & 0.312 & 0.009 & -4.460 & \phn5.80\phantom{d} (1.47) & \phn8.14\phantom{d} (0.22) & 390 & 0.360 & 0.002 & 0.391 & 0.011 & 0.436 & 0.005 \\ HD 28394 & 0.259 & 0.007 & -4.520 & \phn3.46\phantom{d} (0.69) & \phn3.01\phantom{d} (0.14) & 876 & 0.241 & 0.002 & 0.307 & 0.011 & 0.479 & 0.006 \\ HD 42807 & 0.339 & 0.013 & -4.451 & \phn7.58\phantom{d} 
(2.01) & \phn8.91\phantom{d} (0.36) & 494 & 0.416 & 0.002 & 0.450 & 0.008 & 0.402 & 0.005 \\ HD 76218 & 0.392 & 0.019 & -4.458 & 11.37\phantom{d} (2.91) & 12.54\phantom{d} (0.74) & 753 & 0.470 & 0.003 & 0.570 & 0.010 & 0.362 & 0.006 \\ HD 82885 & 0.287 & 0.018 & -4.632 & 20.77\phantom{d} (2.91) & 16.00\phantom{d} (1.00) & 2184 & 0.479 & 0.002 & 0.551 & 0.007 & 0.389 & 0.003 \\ HD 96064 & 0.476 & 0.028 & -4.355 & \phn5.81\phantom{d} (1.82) & 15.61\phantom{d} (2.79) & 230 & 0.453 & 0.003 & 0.544 & 0.012 & 0.368 & 0.008 \\ HD 101501 & 0.315 & 0.019 & -4.540 & 13.90\phantom{d} (2.56) & 15.55\phantom{d} (2.22) & 1185 & 0.460 & 0.002 & 0.518 & 0.007 & 0.380 & 0.004 \\ HD 113319 & 0.303 & 0.020 & -4.511 & \phn9.37\phantom{d} (1.92) & 12.77\phantom{d} (0.71) & 743 & 0.408 & 0.002 & 0.439 & 0.008 & 0.406 & 0.007 \\ HD 117378 & 0.302 & 0.010 & -4.452 & \phn4.20\phantom{d} (1.11) & \phn4.73\phantom{d} (0.09) & 297 & 0.331 & 0.003 & 0.368 & 0.011 & 0.433 & 0.007 \\ HD 124694 & 0.282 & 0.010 & -4.473 & \phn3.35\phantom{d} (0.80) & \phn3.08\phantom{d} (0.09) & 344 & 0.281 & 0.002 & 0.331 & 0.008 & 0.453 & 0.006 \\ HD 130322 & 0.253 & 0.030 & -4.720 & 26.31\phantom{d} (3.01) & \phn\phn\phn\phn$\infty$ & 3202 & 0.484 & 0.004 & 0.564 & 0.011 & 0.363 & 0.004 \\ HD 131511 & 0.445 & 0.021 & -4.455 & 12.57\phantom{d} (3.27) & \phn9.49\phantom{d} (0.40) & 797 & 0.496 & 0.002 & 0.611 & 0.006 & 0.357 & 0.005 \\ HD 138763 & 0.316 & 0.012 & -4.436 & \phn4.46\phantom{d} (1.28) & \phn\phn\phn\nodata & 283 & 0.322 & 0.002 & 0.366 & 0.009 & 0.440 & 0.008 \\ HD 149661 & 0.303 & 0.019 & -4.651 & 24.23\phantom{d} (3.24) & 27.73\phantom{d} (3.70) & 2581 & 0.506 & 0.002 & 0.623 & 0.007 & 0.352 & 0.004 \\ HD 152391 & 0.391 & 0.022 & -4.445 & 10.28\phantom{d} (2.81) & 10.35\phantom{d} (0.48) & 651 & 0.467 & 0.002 & 0.538 & 0.007 & 0.371 & 0.005 \\ HD 154417 & 0.263 & 0.010 & -4.550 & \phn6.92\phantom{d} (1.23) & \phn8.12\phantom{d} (0.24) & 654 & 0.330 & 0.002 & 0.373 & 0.007 & 0.448 & 0.005 \\ HD 
170778 & 0.315 & 0.014 & -4.444 & \phn4.97\phantom{d} (1.37) & \phn6.35\phantom{d} (0.16) & 322 & 0.344 & 0.003 & 0.392 & 0.008 & 0.430 & 0.007 \\ HD 189733 & 0.510 & 0.021 & -4.503 & 16.91\phantom{d} (3.57) & 13.58\phantom{d} (0.94) & 1167 & 0.507 & 0.002 & 0.685 & 0.007 & 0.325 & 0.007 \\ HD 190771 & 0.326 & 0.011 & -4.462 & \phn7.37\phantom{d} (1.85) & \phn9.61\phantom{d} (0.36) & 492 & 0.401 & 0.002 & 0.453 & 0.007 & 0.421 & 0.005 \\ HD 206860 & 0.317 & 0.014 & -4.438 & \phn4.73\phantom{d} (1.34) & \phn5.01\phantom{d} (0.05) & 305 & 0.339 & 0.002 & 0.384 & 0.007 & 0.433 & 0.004 \\ HD 209393 & 0.360 & 0.013 & -4.429 & \phn7.38\phantom{d} (2.19) & 10.61\phantom{d} (0.53) & 440 & 0.419 & 0.002 & 0.464 & 0.010 & 0.387 & 0.005 \\ HD 217813 & 0.310 & 0.012 & -4.462 & \phn5.82\phantom{d} (1.47) & 11.89\phantom{d} (0.54) & 397 & 0.353 & 0.002 & 0.430 & 0.008 & 0.429 & 0.004 \\ HD 222143 & 0.310 & 0.015 & -4.497 & \phn8.87\phantom{d} (1.92) & 16.55\phantom{d} (1.03) & 675 & 0.389 & 0.002 & 0.452 & 0.008 & 0.424 & 0.005 \enddata \end{deluxetable*} \begin{deluxetable*}{llllll} \tabletypesize{\scriptsize} \tablecolumns{6} \tablewidth{0pc} \tablecaption{Index-Variation Kolmogorov-Smirnov Significance Tests\\ Probability of the Null Hypothesis} \tablehead{\colhead{Star ID} & \colhead{$S_{\rm MW}$} & \colhead{G-band} & \colhead{\ion{Ca}{1}} & \colhead{H$\gamma$} & \colhead{$N_{\rm seasons}$}} \startdata HD 166 & $< 10^{-5}$ & 0.020 & \nodata & \nodata & 4\\ HD 5996 & $< 10^{-5}$ & 0.028 & 0.0023 & \nodata & 5 \\ HD 9472 & 0.00075 & $< 10^{-5}$ & \nodata & \nodata & 5 \\ HD 13531 & $< 10^{-5}$ & 0.030 & \nodata & 0.024 & 5 \\ HD 27685 & $< 10^{-5}$ & 0.00002 & \nodata & \nodata & 6 \\ HD 27808 & \nodata & $< 10^{-5}$ & \nodata & \nodata & 5 \\ HD 27836 & \nodata & \nodata & \nodata & \nodata & 5 \\ HD 27859 & 0.011 & \nodata & \nodata & \nodata & 5 \\ HD 28394 & 0.014 & \nodata & \nodata & \nodata & 5 \\ HD 42807 & $< 10^{-5}$ & 0.014 & $< 10^{-5}$ & 0.0153 & 6 \\ HD 76218 & 
$< 10^{-5}$ & \nodata & 0.00033 & \nodata & 7 \\ HD 82885 & $< 10^{-5}$ & 0.00059 & 0.0011 & 0.00026 & 7 \\ HD 96064 & $< 10^{-5}$ & 0.029 & 0.00019 & \nodata & 8 \\ HD 101501 & $< 10^{-5}$ & 0.00033 & 0.0045 & 0.013 & 7 \\ HD 113319 & $< 10^{-5}$ & 0.00046 & 0.0014 & 0.00045 & 6 \\ HD 117378 & \nodata & \nodata & \nodata & \nodata & 4 \\ HD 124694 & $< 10^{-5}$ & \nodata & 0.00069 & 0.00016 & 5 \\ HD 130322 & $< 10^{-5}$ & \nodata & 0.033 & \nodata & 5 \\ HD 131511 & 0.025 & \nodata & \nodata & 0.038 & 7 \\ HD 138763 & 0.00076 & \nodata & \nodata & 0.023 & 6 \\ HD 149661 & 0.00043 & \nodata & \nodata & \nodata & 4 \\ HD 152391 & \nodata & \nodata & \nodata & \nodata & 4 \\ HD 154417 & 0.00075 & \nodata & \nodata & \nodata & 5 \\ HD 170778 & 0.00022 & \nodata & \nodata & 0.0011 & 5 \\ HD 189733 & 0.00097 & 0.015 & \nodata & $< 10^{-5}$ & 8 \\ HD 190771 & 0.0005 & 0.024 & 0.049 & 0.010 & 6 \\ HD 206860 & $< 10^{-5}$ & \nodata & 0.0031 & \nodata & 6 \\ HD 209393 & 0.00036 & 0.048 & \nodata & \nodata & 5 \\ HD 217813 & $< 10^{-5}$ & \nodata & $< 10^{-5}$ & \nodata & 5 \\ HD 222143 & $< 10^{-5}$ & 0.0011 & \nodata & \nodata & 5 \\ \enddata \end{deluxetable*} \begin{deluxetable*}{lcccccc} \tabletypesize{\scriptsize} \tablecolumns{7} \tablewidth{0pc} \tablecaption{Pearson Statistical Tests of Index-to-Index Correlations} \tablehead{\colhead{Star ID} & \colhead{$S_{\rm MW} - G$} & \colhead{$S_{\rm MW} - Ca~I$} & \colhead{$S_{\rm MW} - H\gamma$} & \colhead{$G - Ca~I$} & \colhead{$G - H\gamma$} & \colhead{$Ca~I - H\gamma$}} \startdata HD 166 & $+0.216,0.117$ & $-0.137,0.323$ & $-0.091,0.511$ & $-0.066,0.634$ & $+0.039,0.779$ & $-0.234,0.089$ \\ HD 5996 & $-0.190,0.081$ & $-0.102,0.351$ & $-0.236,0.030$ & $\mathbf{ +0.479,0.000}$ & $\mathbf{ +0.379,0.000}$ & $+0.226,0.038$\\ HD 9472 & $-0.025,0.843$ & $-0.170,0.175$ & $-0.066,0.603$ & $\mathbf{ +0.330,0.007}$ & $+0.214,0.086$ & $+0.222,0.075$ \\ HD 13531 & $-0.106,0.338$ & $+0.055,0.621$ & $-0.093,0.400$ & $+0.257,0.018$ & 
$\mathbf{ +0.489,0.000}$ & $+0.178,0.106$ \\ HD 27685 & $\mathbf{ -0.391,0.001}$ & $+0.267,0.033$ & $-0.205,0.104$ & $+0.133,0.295$ & $\mathbf{ +0.330,0.008}$ & $\mathbf{+0.360,0.004}$ \\ HD 27808 & $-0.215,0.102$ & $-0.146,0.268$ & $-0.188,0.153$ & $+0.021,0.875$ & $+0.091,0.494$ & $-0.104,0.435$ \\ HD 27836 & $+0.050,0.767$ & $-0.024,0.887$ & $-0.111,0.514$ & $+0.363,0.027$ & $\mathbf{+0.558,0.000}$ & $\mathbf{+0.499,0.002}$ \\ HD 27859 & $+0.074,0.682$ & $+0.044,0.809$ & $-0.083,0.647$ & $+0.062,0.731$ & $+0.158,0.381$ & $+0.258,0.147$ \\ HD 28394 & $-0.185,0.240$ & $-0.208,0.186$ & $\mathbf{-0.506,0.001}$ & $-0.057,0.722$ & $+0.363,0.018$ & $+0.221,0.160$ \\ HD 42807 & $\mathbf{-0.291,0.004}$ & $+0.044,0.675$ & $-0.245,0.017$ & $+0.151,0.144$ & $\mathbf{+0.390,0.000}$ & $+0.208,0.043$ \\ HD 76218 & $\mathbf{-0.323,0.000}$ & $+0.019,0.837$ & $-0.170,0.066$ & $\mathbf{+0.496,0.000}$ & $\mathbf{+0.579,0.000}$ & $\mathbf{+0.570,0.000}$ \\ HD 82885 & $\mathbf{-0.464,0.000}$ & $\mathbf{-0.292,0.002}$ & $+0.113,0.241$ & $+0.155,0.108$ & $\mathbf{+0.249,0.009}$ & $+0.064,0.507$ \\ HD 96064 & $-0.164,0.142$ & $-0.030,0.787$ & $\mathbf{-0.296,0.007}$ & $\mathbf{+0.425,0.000}$ & $\mathbf{+0.404,0.000}$ & $\mathbf{+0.279,0.011}$ \\ HD 101501 & $\mathbf{-0.351,0.000}$ & $-0.135,0.168$ & $-0.021,0.830$ & $-0.017,0.866$ & $\mathbf{+0.388,0.000}$ & $-0.016,0.869$ \\ HD 113319 & $\mathbf{-0.508,0.000}$ & $-0.038,0.731$ & $\mathbf{-0.493,0.000}$ & $+0.053,0.627$ & $\mathbf{+0.503,0.000}$ & $+0.135,0.212$ \\ HD 117378 & $+0.045,0.756$ & $-0.187,0.193$ & $\mathbf{-0.501,0.000}$ & $-0.073,0.615$ & $-0.075,0.606$ & $\mathbf{+0.345,0.014}$ \\ HD 124694 & $-0.266,0.032$ & $-0.053,0.677$ & $\mathbf{-0.457,0.000}$ & $+0.133,0.292$ & $+0.103,0.413$ & $+0.190,0.129$ \\ HD 130322 & $\mathbf{-0.429,0.003}$ & $+0.046,0.762$ & $-0.010,0.943$ & $\mathbf{+0.390,0.007}$ & $+0.258,0.082$ & $\mathbf{+0.481,0.001}$ \\ HD 131511 & $\mathbf{-0.307,0.010}$ & $-0.165,0.176$ & $-0.115,0.347$ & 
$+0.166,0.172$ & $\mathbf{+0.320,0.007}$ & $+0.181,0.138$ \\ HD 138763 & $+0.055,0.730$ & $-0.253,0.106$ & $-0.350,0.023$ & $\mathbf{+0.500,0.001}$ & $\mathbf{+0.490,0.001}$ & $\mathbf{+0.520,0.000}$ \\ HD 149661 & $\mathbf{-0.442,0.005}$ & $-0.346,0.031$ & $+0.102,0.536$ & $+0.364,0.023$ & $+0.317,0.049$ & $-0.130,0.430$ \\ HD 152391 & $-0.414,0.026$ & $-0.068,0.726$ & $-0.281,0.140$ & $+0.194,0.314$ & $+0.129,0.505$ & $-0.085,0.662$ \\ HD 154417 & $-0.090,0.635$ & $-0.041,0.831$ & $+0.072,0.706$ & $+0.187,0.323$ & $-0.005,0.980$ & $-0.106,0.578$ \\ HD 170778 & $-0.152,0.277$ & $\mathbf{-0.524,0.000}$ & $\mathbf{-0.385,0.004}$ & $+0.223,0.108$ & $+0.301,0.029$ & $\mathbf{+0.361,0.008}$ \\ HD 189733 & $-0.116,0.245$ & $-0.067,0.502$ & $-0.017,0.867$ & $\mathbf{+0.341,0.000}$ & $+0.210,0.033$ & $\mathbf{+0.262,0.007}$ \\ HD 190771 & $\mathbf{-0.307,0.003}$ & $\mathbf{-0.285,0.006}$ & $-0.240,0.022$ & $\mathbf{+0.289,0.006}$ & $+0.177,0.095$ & $\mathbf{+0.266,0.011}$ \\ HD 206860 & $-0.073,0.466$ & $\mathbf{-0.470,0.000}$ & $\mathbf{-0.445,0.000}$ & $+0.164,0.102$ & $+0.066,0.513$ & $+0.177,0.076$ \\ HD 209393 & $\mathbf{-0.270,0.011}$ & $+0.078,0.466$ & $-0.208,0.051$ & $\mathbf{+0.473,0.000}$ & $+0.220,0.038$ & $+0.189,0.077$ \\ HD 217813 & $-0.228,0.038$ & $-0.192,0.081$ & $\mathbf{-0.284,0.009}$ & $+0.200,0.070$ & $+0.191,0.083$ & $-0.049,0.662$ \\ HD 222143 & $-0.089,0.411$ & $-0.006,0.959$ & $-0.211,0.050$ & $-0.053,0.623$ & $+0.163,0.131$ & $+0.097,0.372$ \\ \enddata \end{deluxetable*} Figures \ref{fig:KHG1}, \ref{fig:KHG2}, \ref{fig:KHG3}, and \ref{fig:KHG4} show montages (in order of HD number) of time series of the \ion{Ca}{2} H \& K index (transformed to the Mount Wilson system index $S_{\rm MW}$), the G-band index, the \ion{Ca}{1} index, and the H$\gamma$ index for our program stars. Table \ref{tbl:mad} lists mean values for the $S_{\rm MW}$ index and $\log(R^\prime_{\rm HK})$ from our observations of the program stars.
The $S_{\rm MW}$ index measures the flux in the cores of the \ion{Ca}{2} H \& K lines, but that flux includes contributions from both the chromosphere and the photosphere. A quantity $R^\prime_{\rm HK}$, which is a useful measure of the chromospheric flux only, may be derived from $S_{\rm MW}$ using a method outlined in \citet{noyes84}. The $\log(R^\prime_{\rm HK})$ index may be calibrated against age and rotation period \citep{mamajek08}. We have included columns in Table \ref{tbl:mad} listing the expected rotation period (in days), $P_{\rm rot}(R^\prime_{\rm HK})$, based on the calibration of \citet{mamajek08}, as well as the upper limit to the rotation period derived from our $v\sin i$ values listed in Table \ref{tbl:bpp} ($P_{\rm max}(v\sin i)$), along with associated errors. Note that, with the possible exception of HD~82885, the rotation periods derived from the activity levels are consistent, within the errors, with the rotation period upper limits deduced from the projected rotational velocities. Chromospheric ``activity ages'' for our program stars, based on the calibration of \citet{mamajek08}, are also included in Table \ref{tbl:mad}. As it turns out, all but three of our stars (HD~82885, HD~130322, and HD~149661) lie within the target age limits for this project, 0.3 -- 1.5 Gyr. However, the discrepancy between $P_{\rm rot}(R^\prime_{\rm HK})$ and $P_{\rm max}(v\sin i)$ for HD~82885 suggests that the chromospheric activity age for that star may not be accurate. For instance, \citet{donahue96} quote a rotation period for HD~82885 of 18.6 days. This gives a gyrochronological age, using the calibration of \citet{barnes07}, of 1.6 Gyr. 
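For readers who wish to reproduce these derived quantities, the chain from $S_{\rm MW}$ to $\log(R^\prime_{\rm HK})$ to an activity age can be sketched as below. This is our own minimal sketch: the coefficients are the standard ones from \citet{noyes84} and \citet{mamajek08} as we have transcribed them, and should be verified against those papers before serious use.

```python
import math

def log_rphk(s_mw, b_v):
    """S_MW -> log R'_HK following the Noyes et al. (1984) recipe:
    a color-dependent conversion factor C_cf, then subtraction of the
    photospheric contribution to the H & K band fluxes."""
    log_ccf = 1.13 * b_v**3 - 3.91 * b_v**2 + 2.84 * b_v - 0.47
    r_hk = 1.34e-4 * 10**log_ccf * s_mw
    r_phot = 10**(-4.898 + 1.918 * b_v**2 - 2.893 * b_v**3)
    return math.log10(r_hk - r_phot)

def activity_age_myr(lrphk):
    """Chromospheric activity age (Myr) from the Mamajek & Hillenbrand
    (2008) calibration: log(t/yr) = -38.053 - 17.912x - 1.6675x^2."""
    return 10**(-38.053 - 17.912 * lrphk - 1.6675 * lrphk**2) / 1e6
```

For a solar twin ($B-V = 0.65$, $S_{\rm MW} \approx 0.17$), this sketch returns $\log(R^\prime_{\rm HK}) \approx -4.94$ and an age of several Gyr, as expected.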
\subsection{The Sensitivity of the Photospheric Indices to Temperature Variations} It was hypothesized in the Introduction that the three ``photospheric'' indices defined in \S\ref{sec:Gband} -- \ref{sec:HG} will be primarily sensitive to temperature, and thus might be useful in measuring integrated temperature changes on the stellar surface arising from spots and/or photospheric faculae. To determine the usefulness of these indices for that purpose, we need to assess their sensitivity to these changes. Figure \ref{fig:BV} displays plots of these indices (using the Mount Wilson calibration stars from Table 5 of \citet{gray03}) versus $B-V$. That Figure shows that in the realm of the late F-type stars to the early K-type stars all three indices vary approximately linearly with $B-V$. The following equations are straight-line fits to the linear portions of those curves: \begin{align*} B-V =\, & 0.259 + 0.966\, {\rm G}\\ B-V =\, & 0.212 + 1.010\,{\rm Ca\, I}\\ B-V =\, & 1.68 - 2.539\, {\rm H}\gamma \end{align*} where G, \ion{Ca}{1}, and H$\gamma$ refer to their respective indices, and $B-V$ refers to the Johnson $B-V$ index. Both the G-band index and the \ion{Ca}{1} index have slopes of nearly unity with respect to $B-V$, and so changes in those indices should translate directly into changes in $B-V$. The H$\gamma$ index has a sensitivity that is smaller by about a factor of 2.5. We will report in Paper II that many of our stars vary $\le 0.03$ -- 0.07 magnitude in the Johnson $V$-band, and in the instances where we can measure color ($B-V$) changes, those changes are generally $\le 0.01$ mag. This is roughly what we might expect if variations in brightness (due to sunspots and faculae) move the star parallel to the main sequence. 
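The temperature sensitivities implied by the straight-line fits above can be turned around to predict the index variations expected from a given color change. This is a small sketch using the slopes quoted above; the function and dictionary names are ours:

```python
# dB-V per unit change in each index, from the straight-line fits above.
SLOPES = {"G": 0.966, "CaI": 1.010, "Hgamma": -2.539}

def expected_index_change(delta_bv):
    """Index changes implied by a color change delta_bv, assuming the
    photospheric indices respond to temperature alone (inverted fits)."""
    return {name: delta_bv / slope for name, slope in SLOPES.items()}
```

A color change of 0.01 mag thus maps to roughly $+0.010$ in the G-band and \ion{Ca}{1} indices and about $-0.004$ in H$\gamma$, the factor-of-2.5 difference in sensitivity noted above.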
If the observed changes in the photospheric indices arise solely from temperature effects, we might therefore expect to observe variations of up to 0.01 in the G-band and \ion{Ca}{1} indices, and variations smaller by a factor of about 2.5 in H$\gamma$. Such changes should be detectable in at least the G-band and \ion{Ca}{1} indices, as the measurement errors in those indices are on the order of 0.001 -- 0.003. Indeed, because of those small measurement errors, these indices are potentially more useful for measuring temperature changes than photometric colors, where the errors are larger. Interestingly, the data in Table 4 do indeed indicate variations in \ion{Ca}{1} of about the expected magnitude ($\le 0.01$), but the observed variations in the G-band are smaller by a factor of two or more ($\le 0.004$). Hence, while it is plausible that the observed variations in \ion{Ca}{1} are temperature related, the variations in the G-band apparently have a different or more complex origin. The observed variations in H$\gamma$ are smaller than those observed in \ion{Ca}{1}, but not by the factor we would expect if those variations were governed by temperature alone. We will examine these questions in more detail in \S 5.4 below. \subsection{Statistical Tests for Season-to-Season Variability} \label{sec:var} The $S_{\rm MW}$ plots are the traditional tool for detecting and characterizing activity cycles in stars. The detection and characterization of activity cycles in active stars requires time series observations that exceed, preferably by a factor of two or more, the period or characteristic timescale of the star in question. That normally requires observations over decades, and so, except for stars that our program has in common with other long-term surveys, such as the Mt. Wilson program, we are limited in what we can say on that subject.
What can be done at the current stage of the project is 1) to evaluate the reality of the variations in the seasonal means and/or variances of the four ``activity'' indices -- the $S_{\rm MW}$, G-band, \ion{Ca}{1}, and H$\gamma$ indices -- that are suggested by the time series montages and 2) to examine and try to understand the correlations between those indices. To assess the significance of the variations in the four indices on a year-to-year basis (variations within a given observing season will be examined in Paper II of this series, where we will evaluate rotation periods for our stars), we have employed the Kolmogorov-Smirnov (KS) statistical test. KS tests judge the significance of whether or not two experimental or observational distributions of a variable differ; the difference may arise from a difference in either the means or the variances of the two distributions. We may consider each set of seasonal data (the ``clumps'' in Figures \ref{fig:KHG1}, \ref{fig:KHG2}, \ref{fig:KHG3}, and \ref{fig:KHG4}) as independent samples of the index in question, and compare those samples for a given star on a pair-wise basis using the KS test to ascertain whether significant variation in the mean value and/or variance of the index has occurred over the period we have observed the star. The way we perform the tests is as follows. Let us suppose we have observed the star for four years (four observing seasons), seasons 1, 2, 3, and 4. We then carry out KS tests on each of the following six pair-wise comparisons: $1 \leftrightarrow 2$, $1 \leftrightarrow 3$, $1 \leftrightarrow 4$, $2 \leftrightarrow 3$, $2 \leftrightarrow 4$, and $3 \leftrightarrow 4$. The KS test yields a $p$-statistic for each comparison. Smaller values of $p$ indicate higher significance. For instance, $p = 0.01$ indicates that the null hypothesis (no variation in the mean value or variance of the index) may be rejected with a confidence of 99\%.
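The pair-wise testing scheme is simple to sketch. The following self-contained Python illustration (not our analysis code) implements a two-sample KS test with the standard asymptotic approximation for the $p$-value and applies it to every pair of seasons:

```python
import math
from itertools import combinations

def ks_2samp(x, y):
    """Two-sample Kolmogorov-Smirnov test: returns (D, p).

    D is the maximum distance between the two empirical CDFs; p comes
    from the asymptotic Kolmogorov distribution (the standard Numerical
    Recipes approximation), adequate for seasons with tens of points.
    """
    x, y = sorted(x), sorted(y)
    n1, n2 = len(x), len(y)
    d = max(abs(sum(t <= v for t in x) / n1 - sum(t <= v for t in y) / n2)
            for v in x + y)
    ne = n1 * n2 / (n1 + n2)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    p, sign, prev = 0.0, 1.0, 0.0
    for k in range(1, 101):
        term = sign * 2.0 * math.exp(-2.0 * (k * lam) ** 2)
        p += term
        if abs(term) <= 0.001 * prev or abs(term) <= 1e-8 * abs(p):
            return d, min(max(p, 0.0), 1.0)
        prev, sign = abs(term), -sign
    return d, 1.0  # series did not converge: no evidence against the null

def pairwise_ks(seasons):
    """All pair-wise KS comparisons between seasonal samples.

    `seasons` maps a season label to its list of index measurements;
    returns {(season_i, season_j): p}.
    """
    return {(a, b): ks_2samp(seasons[a], seasons[b])[1]
            for a, b in combinations(sorted(seasons), 2)}
```

For a star with four observing seasons this yields exactly the six comparisons listed above.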
But the fact that we need to estimate the significance of variations in a time series rather than simply between two seasons complicates the analysis. For instance, let us suppose we have five seasons of observations. This results in a set of 10 pair-wise comparisons. If only one of those comparisons results in $p = 0.01$, that does not rise to the level of significance, because we would expect, on average, in a set of 10 comparisons, to encounter $p \le 0.01$ 10\% of the time -- a significance of only 90\%. However, if a given set contains {\it multiple} comparisons with small $p$, we may then combine those probabilities in assessing the significance of the observed variation. We use a Monte Carlo technique to evaluate these probabilities. A random number generator was used to generate multiple gaussian distributions of an observational variable, all with the same mean and variance (so that for these artificial data the null hypothesis is true). In total we generated 100,000 sets of 4-season data, each involving 6 pair-wise comparisons, for a total of 600,000 comparisons, and evaluated each comparison with KS statistics. We did the same for sets of 5-, 6-, 7-, and 8-season data, the latter involving 2.8 million pair-wise comparisons. We were able to verify, for instance, that comparisons with $p \le 0.01$ were encountered with the expected frequency. We then used these artificial data sets to evaluate the significance of variations in our observational data. To take a real example, in one of our 5-season data sets (HD~9472), we had, for the $S_{\rm MW}$ index, the following values of $p$: 0.0116, 0.0132, 0.0188, 0.0281, and 0.0315. The remaining five comparisons had $p > 0.05$. We then used the 5-season Monte Carlo data to ask ``What proportion of 5-season sets have $p_{\rm min} \le 0.0116$ and four other comparisons with $p \le 0.0315$?'' The result yields an overall $p = 0.00075$.
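The Monte Carlo calibration can be sketched in pure Python as follows. This is a toy version, not our analysis code: `random.gauss` plays the role of the gaussian generator, a compact KS $p$-value routine keeps the sketch self-contained, and the number of synthetic sets is a parameter (we used 100,000 per configuration in the actual calculation):

```python
import math
import random
from itertools import combinations

def ks_p(x, y):
    """Asymptotic two-sample KS p-value (Numerical Recipes style)."""
    x, y = sorted(x), sorted(y)
    n1, n2 = len(x), len(y)
    d = max(abs(sum(t <= v for t in x) / n1 - sum(t <= v for t in y) / n2)
            for v in x + y)
    ne = n1 * n2 / (n1 + n2)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    p, sign, prev = 0.0, 1.0, 0.0
    for k in range(1, 101):
        term = sign * 2.0 * math.exp(-2.0 * (k * lam) ** 2)
        p += term
        if abs(term) <= 0.001 * prev:
            return min(max(p, 0.0), 1.0)
        prev, sign = abs(term), -sign
    return 1.0

def null_sorted_pvalues(n_seasons, n_per_season, n_sets, rng):
    """Sorted pair-wise p-values for synthetic null data: every season is
    drawn from a single gaussian with fixed mean and variance."""
    out = []
    for _ in range(n_sets):
        seasons = [[rng.gauss(0.0, 1.0) for _ in range(n_per_season)]
                   for _ in range(n_seasons)]
        out.append(sorted(ks_p(a, b) for a, b in combinations(seasons, 2)))
    return out

def overall_p(observed_ps, null_sets, m):
    """Proportion of null sets whose smallest p-value is <= the observed
    smallest p and whose m-th smallest p is <= the observed m-th smallest
    p -- the question posed in the text for HD 9472 (m = 5)."""
    obs = sorted(observed_ps)
    hits = sum(1 for ps in null_sets
               if ps[0] <= obs[0] and ps[m - 1] <= obs[m - 1])
    return hits / len(null_sets)
```

The returned proportion is the combined, ``overall'' $p$ quoted in the text.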
We have listed in Table 5 all the time series for which the overall $p \le 0.05$, indicating ``significant'' variability. Spurious significant $p$ values can be created by outliers in the dataset. We have reduced to a minimum the number of outliers in the dataset by rejecting all spectra with S/N $< 50$ and by examining each spectrum to eliminate those with obvious defects (such as cosmic rays) in the wavelength bands used for the calculation of the indices. The remaining outliers cannot be rejected on a statistical basis (and may indeed represent true excursions of the star) and so are included in the statistical tests. It is clear from Table 5 that almost all of the program stars show significant season-to-season variations in $S_{\rm MW}$. The ones that do not show such variations have only 4 or 5 seasons of data, so it is entirely possible that with a few more seasons of data all will show significant variation. Of the program stars, 50\% show significant variations in the G-band index, 40\% in the \ion{Ca}{1} index, and 37\% in the H$\gamma$ index. We expect that continuing the project for a few more years will increase those proportions as well. We emphasize that a lack of significant variation in the seasonal means and variances does not imply that the star is constant within a given season. For instance, it is well known, and we will further demonstrate in Paper II, that the ``scatter'' (at least for the $S_{\rm MW}$ index) within a given season can arise from rotational modulation in the index. \subsection{Comments on the Nature of the Observed Variability} A number of our stars that have significant season-to-season variations appear to be showing very short-term periodic or ``pseudo-periodic'' behavior in the $S_{\rm MW}$ index. Examples include HD~9472, HD~13531, HD~27685 (superimposed on a secular rise in activity), HD~217813, as well as some others.
Despite the shortness of the datasets, the above-mentioned stars show significant periods in the range of 2 -- 4 years with a Lomb-Scargle analysis. To judge the reality of such short periods, which are considerably shorter than the periods found in \citet{baliunas95}, we may refer to other similar datasets. For example, a number of stars in the \citet{lockwood07} dataset appear to show very similar behavior (see the stars HD~39587, HD~131156, HD~152391, HD~115404, and HD~201092 for some possible examples). This short-term variation appears to come and go and is often superimposed on longer timescale variations. Analysis of the Lockwood et al. or similar datasets will be required to evaluate the reality of these variations. Stars in our dataset show a variety of behaviors associated with the dispersion in the $S_{\rm MW}$ activity index within a given season. For instance, the stars HD~27859 ($\langle\sigma_{S_{\rm MW}}\rangle = 0.007$), HD~124694 (0.008), HD~154417 (0.007), HD~217813 (0.008), and HD~222143 (0.006) all show very tight activity dispersions within a given season. On the other hand, HD~130322 (0.014), HD~131511 (0.018), and HD~189733 (0.018) show average seasonal dispersions greater by a factor of two or more. This distinction appears to be intrinsic, as we are careful to achieve adequate S/N for all of our observations, and there are bright and ``faint'' stars in both sets. We note, however, that the stars that have particularly low seasonal dispersions are F and early G-type stars, while the three with the higher dispersions are all K-type stars. If the seasonal dispersions arise from rotational modulation, as active regions rotate across the stellar disk, this suggests that the late-type stars mentioned above may be dominated by one or a small number of active regions, whereas for the F- and early G-type stars in the project sample, active regions are smaller and more dispersed across the stellar disk.
One caution should be noted: the activity behavior of HD~189733 may not be typical of young active K-type stars, as it has apparently been spun up by angular momentum transfer from its hot-Jupiter companion (see Introduction). HD~131511 does, however, behave in quite a similar way to HD~189733. Even though HD~131511 is a spectroscopic binary, its stellar companion is in a much wider orbit \citep[$P_{\rm orb} = 125.4$~d,][]{nidever02,jancart05}, and probably has not yet had an important influence on the angular momentum of the primary. We also note that in a recent Nordic Optical Telescope FIES spectrum of HD~131511, the emission in the cores of the \ion{Ca}{2} H \& K lines appears symmetrical, and so we see no evidence for emission from the companion. This will need to be verified by further spectra at different phases of the companion's orbit. \begin{figure} \includegraphics[width=3.25in]{f12.eps} \caption{The variation in the seasonal dispersion of $S_{\rm MW}$ with time for three stars, HD~76218 (solid line), HD~131511 (dashed line), and HD~189733 (dotted line).} \label{fig:vs} \end{figure} Interestingly, some of our stars appear to show variations in their seasonal dispersion behavior. Whether that variation in dispersion is cyclical can only be determined with longer time series. The F-test is the appropriate test for determining the statistical significance of season-to-season differences in the variance of an index. HD~131511, for instance, appears to alternate between seasons with high and moderate dispersions in the $S_{\rm MW}$ index. Examination of the $S_{\rm MW}$ plot for HD~131511 (Figure \ref{fig:KHG3}) shows four seasons with relatively high dispersions and three with moderate dispersions. F-tests carried out on the 21 pair-wise comparisons between the seven seasons show highly significant variations, with an overall $p = 0.0011$ (calculated using the same Monte Carlo technique employed to evaluate the KS tests).
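The two-tailed F-test itself is easy to sketch. A self-contained Python illustration (the regularized incomplete beta function is evaluated with the standard continued-fraction method; this is a sketch, not our analysis code):

```python
import math

def _betacf(a, b, x):
    """Continued fraction for the incomplete beta function (Lentz's method)."""
    MAXIT, EPS, FPMIN = 200, 3.0e-9, 1.0e-300
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < FPMIN:
        d = FPMIN
    d = 1.0 / d
    h = d
    for m in range(1, MAXIT + 1):
        m2 = 2 * m
        # Even and odd continued-fraction coefficients for this level.
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            if abs(d) < FPMIN:
                d = FPMIN
            c = 1.0 + aa / c
            if abs(c) < FPMIN:
                c = FPMIN
            d = 1.0 / d
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < EPS:
            break
    return h

def betai(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    bt = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                  + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return bt * _betacf(a, b, x) / a
    return 1.0 - bt * _betacf(b, a, 1.0 - x) / b

def f_test(sample1, sample2):
    """Two-tailed F-test for equality of variances; returns (F, p)."""
    def var(s):
        mean = sum(s) / len(s)
        return sum((v - mean) ** 2 for v in s) / (len(s) - 1)
    v1, v2 = var(sample1), var(sample2)
    if v1 > v2:                       # put the larger variance on top
        f, df1, df2 = v1 / v2, len(sample1) - 1, len(sample2) - 1
    else:
        f, df1, df2 = v2 / v1, len(sample2) - 1, len(sample1) - 1
    p = 2.0 * betai(0.5 * df2, 0.5 * df1, df2 / (df2 + df1 * f))
    return f, (2.0 - p if p > 1.0 else p)
```

Applied to each pair of seasons, this provides the per-comparison $p$ values that are then combined with the Monte Carlo calibration described above.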
HD~189733 may be behaving in a similar way, although the statistics are of lower significance ($p = 0.045$). HD~76218 apparently also varies in seasonal dispersion ($p = 0.016$). The variation in HD~76218 is unusual in the sense that when the seasonal dispersion is highest, the activity level is at or near a minimum. This is opposite to the Sun, which shows the greatest dispersion in the \ion{Ca}{2} flux at activity maximum \citep{keil98}. Figure \ref{fig:vs} shows the variation in seasonal dispersion with time for HD~76218, HD~131511, and HD~189733. A possible interpretation of this behavior is that these stars vary between a state in which the active regions are relatively small, numerous, and dispersed (low-to-moderate seasonal dispersion) and a state dominated by one or a few large active regions (high seasonal dispersion). This variation in seasonal dispersion may represent a novel type of activity cycle in stars, or it may be evidence for a flip-flop cycle \citep{jetsu93} and/or active longitudes. More observations will be required to fully characterize this behavior. \subsection{Correlations Between Indices} A further question to address is whether or not significant correlations exist between the four ``activity'' indices measured in this paper. In the Introduction we gave the rationale for the three ``photospheric'' indices defined in this paper -- the G-band, \ion{Ca}{1}, and H$\gamma$ indices -- and suggested that these three indices might vary in step with activity variations largely through related temperature changes in the photosphere connected with changes in spots and photospheric faculae. How the photospheric indices would vary in relation to $S_{\rm MW}$ would then depend on whether cool spots or hot photospheric faculae dominate. If the photospheric indices vary primarily on the basis of temperature, we would expect the G-band index to vary directly with the \ion{Ca}{1} index, and inversely with respect to the H$\gamma$ index.
We will see below whether this is indeed the case. Table 6 shows the results of Pearson's $r$-tests for linear correlations between the four indices. These comparisons are made with the original observations, and not with the seasonal means. Since all of these indices are measured in a single spectrum, we do not have to worry about time differences between the observations of the different indices. The first column in Table 6 is the stellar ID, the second tabulates the results of the Pearson $r$-test for correlations between $S_{\rm MW}$ and the G-band index, the third the same for $S_{\rm MW}$ and \ion{Ca}{1}, the fourth for $S_{\rm MW}$ and H$\gamma$, the fifth for the G-band and \ion{Ca}{1}, the sixth for the G-band and H$\gamma$, and the seventh for \ion{Ca}{1} and H$\gamma$. Each comparison consists of two numbers, Pearson's linear correlation coefficient $r$, and the $p$-statistic, from which the probability of the null hypothesis (zero correlation) may be calculated. Small $p$ indicates a significant correlation. Correlations with $p \le 0.015$ are indicated with bold type in Table 6. We have adopted $p \le 0.015$ as a useful standard for judging the significance of these correlations because, for a given index, and 30 tabulated stars, we should expect at that significance level only 0.5 spurious correlations. A glance at Table 6 shows the presence of multiple significant, in many cases highly significant, correlations, although none of those correlations are particularly strong ($r < 0.6$). We have examined each of these correlations graphically to assure ourselves that none are caused by one or a few ``outliers''. Could these correlations arise from instrumental effects?
We reject that possibility for a number of reasons: 1) We have not included in these tests data from the earlier Photometrics CCD, which means that all of the observations involved in these tests have been carried out with the same CCD on the same spectrograph on the same telescope, and all have been reduced identically. 2) The passbands used in defining these indices do not overlap, and so a spectral defect (cosmic ray, etc.) that affects one index will not affect another. 3) While some stars show highly significant correlations, others do not. Instrumental effects would lead to significant correlations (or not) in all stars, not just a limited number. Let us now examine the nature of those correlations. For the $S_{\rm MW}$ -- G-band comparison, 11 out of the 30 stars show significant ($p \le 0.015$) correlations, and all of those are negative correlations, meaning that in those stars $S_{\rm MW}$ and the G-band vary oppositely; when one increases, the other decreases. Note that even the correlations that do not rise to our level of significance are almost all negative. For the $S_{\rm MW}$ -- \ion{Ca}{1} comparison, only 4 of the 30 stars show a significant correlation, but again all of those are negative. For $S_{\rm MW}$ -- H$\gamma$, 8 of the 30 stars show significant correlations, and again all of those correlations are negative. So, a strong conclusion is that where significant correlations are present, the ``photospheric'' indices are all negatively correlated with $S_{\rm MW}$. What about correlations between the photospheric indices? Examination of Table 6 shows the presence of many highly significant correlations between these indices, all of which are {\it positive}. So, the tendency is, when the G-band weakens, so too do \ion{Ca}{1} and H$\gamma$. This behavior is not consistent with the hypothesis that the photospheric indices are primarily affected by temperature changes in the photosphere arising from changes in spots and photospheric faculae.
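For reference, the test statistic behind Table 6 can be sketched in a few lines of Python. This is illustrative only; the $p$-value below uses the normal approximation to the Student $t$ tail, which is adequate at the sample sizes involved:

```python
import math

def pearson_r(x, y):
    """Pearson linear correlation coefficient r, with an approximate
    two-sided p-value for the null hypothesis of zero correlation.

    The statistic t = r * sqrt((n-2)/(1-r^2)) follows a Student
    t-distribution with n-2 degrees of freedom under the null; here its
    tail is approximated by a gaussian (a sketch, not the analysis code).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt((n - 2) / max(1.0 - r * r, 1e-15))
    p = math.erfc(abs(t) / math.sqrt(2.0))
    return r, p
```

Applying such a test to each pair of index time series -- ($S_{\rm MW}$, G-band), (G-band, \ion{Ca}{1}), and so on -- reproduces the structure of Table 6: a negative $r$ with small $p$ corresponds to a significant anti-correlation.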
What then are the possible physical causes behind the observed behaviors? One possibility that we must consider is whether the \ion{Ca}{1} and H$\gamma$ indices are affected by changes in CH opacity. The G-band is a molecular feature, arising from the CH molecule, but CH absorption lines are ubiquitous in the region of the spectrum containing the \ion{Ca}{1} 4227\AA\ resonance line and H$\gamma$. To test this hypothesis we calculated a number of synthetic spectra for late F, mid-G, and early K-type stars, all identical except for differences in CH absorption strength (appropriate for the size of the variations we observe in the G-band index), and then measured the resulting \ion{Ca}{1} and H$\gamma$ indices. Those indices showed very small changes compared to the resulting changes in the G-band index, and in the opposite sense, which would yield {\it negative} correlations instead of the positive ones observed. The negative correlation between $S_{\rm MW}$ and the H$\gamma$ index might be understood on the basis of line emission. In the spectrum of the solar chromosphere, both \ion{Ca}{2} H \& K and H$\gamma$ (as well as, of course, H$\alpha$ and H$\beta$) are seen in emission, and so it is reasonable to expect that $S_{\rm MW}$ and H$\gamma$ would be negatively correlated on this basis, as emission {\it fills in} the H$\gamma$ line, resulting in an index smaller than for a purely photospheric line, while chromospheric emission yields an increase in $S_{\rm MW}$ over what would be measured for pure absorption in \ion{Ca}{2} H \& K. Neither the G-band nor \ion{Ca}{1} shows up in any significant way in the chromospheric spectrum, so this mechanism does not help to explain the negative correlations of those indices with $S_{\rm MW}$ or their positive correlations with H$\gamma$. As noted above, the existence of direct correlations between all three photospheric indices is difficult to understand on the basis of temperature differences.
This suggests that the physical cause underlying those direct correlations does not depend on temperature. Mechanisms that may be relevant here were noted by \citet{basri89}, who observed that the equivalent widths of metallic lines (especially low-excitation lines) in the blue-violet part of the spectrum were reduced in certain active stars, apparently due either to continuum emission arising in the chromosphere or upper photosphere, leading to the phenomenon of ``veiling'', or to nonradiative heating in the upper layers of the photosphere in plage regions, resulting in weaker line cores \citep[see also][]{chapman68,giampapa79,labonte85,labonte86}. Indeed \citet{gray06} noted a similar phenomenon in the spectra of active K-type dwarfs, particularly in the vicinity of the \ion{Ca}{1} line. Interestingly, they noted that some active K dwarfs show this phenomenon, and other equally active dwarfs do not. Both of these mechanisms can help to explain not only the direct correlations between the G-band, the \ion{Ca}{1} line, and H$\gamma$, but are also consistent with the negative correlations between those indices and $S_{\rm MW}$, because as stellar activity increases, both the veiling and/or core-weakening and the emission in \ion{Ca}{2} H \& K would presumably increase together. Furthermore, a closer look at Table 6 reveals that the most significant G-band anti-correlations with $S_{\rm MW}$ occur at spectral types where the G-band is near its maximum strength, and most of the significant H$\gamma$ anti-correlations appear in the late-F and early G-type stars where H$\gamma$ is still a strong feature, exactly what one would expect if the mechanisms suggested by \citet{basri89} were active. We might then ask why the temperature effects, hypothesized at the beginning of this paper to be the primary drivers of changes in the ``photospheric'' indices, do not appear to be important.
This question requires further investigation, but it may be that for the indices considered in this paper, temperature effects arising from changes in both photospheric faculae and spots -- which would tend to cancel -- sufficiently balance out so that temperature variations become only a secondary cause in driving changes in these indices. While it may be disappointing that the purpose for which we designed these indices has not been realized, it does appear that these indices can be used to monitor the emission flux in the Paschen continuum arising from stellar activity. This suggests that these three indices may also prove to be useful proxies for monitoring emission in the ultraviolet {\it Balmer} continuum, which is largely inaccessible from the ground. If that proves to be the case, these indices would be of direct utility in achieving the original goals of this project. \section{Conclusions} This paper reports on initial results from the Young Solar Analogs project, which began in 2007 and which is monitoring the stellar activity of 31 late F-, G-, and early K-type stars with ages between 300 million and 1.5 billion years. We have detailed the transformation between our instrumental \ion{Ca}{2} activity indices and the Mount Wilson $S$ activity index. In addition, we have defined three new photospheric indices based on the G-band, the \ion{Ca}{1} resonance line in the blue-violet, and the H$\gamma$ line, and have examined, on a detailed statistical basis, how those indices vary and how they are related. All four indices show strong evidence for variability on a multi-year timescale in our data. The anti-correlations between $S_{\rm MW}$ and the photospheric indices and the positive correlations between the photospheric indices suggest the presence of varying continuum emission and/or non-radiative heating of the upper layers of the photosphere in at least some of the program stars. 
Further observations and modelling will be required to better understand these physical mechanisms and to evaluate the utility of the ``photospheric'' indices as proxies for ultraviolet continuum emission. Subsequent papers in this series will examine medium-term variations in these indices and the multi-band photometry, as well as short-term variations. \acknowledgments The authors would like to thank an anonymous referee for careful and detailed comments that resulted in a considerably improved paper. The authors would also like to thank Lee Hawkins, Dark Sky Observatory engineer, for expert and enthusiastic technical assistance in the construction and maintenance of the robotic dome. We are also pleased to acknowledge the assistance of Mike Hughes (electronics technician), Dana Greene (machinist), and David Sitar, all at Appalachian State University. This project has been supported by NSF grant AST-1109158. We are also grateful for funding provided by The Fund for Astrophysical Research during an early stage of this project. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. This research has also made use of the Elodie Archive (http://atlas.obs-hp.fr/elodie/). We also acknowledge use of archival spectra from the UVES Paranal Observatory Project (ESO DDT Program ID 266.D-5655). It is also a pleasure to acknowledge the service observing program at the Nordic Optical Telescope. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. We acknowledge with pleasure the Veusz software package\footnote{http://home.gna.org/veusz/} which was used for the figures in this paper.
\section{Introduction} The success of the Standard Model in describing electroweak-scale phenomena, notwithstanding the apparent problems with its high-energy behaviour, has led to a revival of interest in better understanding the UV properties of general gauge-Yukawa theories, see e.g. Refs~\cite{Litim:2014uca,Antipin:2017ebo,Eichhorn:2016esv}. In particular, gauge-Yukawa theories with a large number of fermion flavours, $N_f$, provide interesting candidates within the asymptotic-safety framework as opposed to the traditional asymptotic-freedom paradigm~\cite{Gross:1973ju,Politzer:1973fx}. The groundwork for these considerations was laid a few decades ago with the computation of the leading large-$N_f$ behaviour of the gauge $\beta$-functions~\cite{Espriu:1982pb,PalanquesMestre:1983zy,Gracey:1996he} for $N_f$ fermions charged under the gauge group; see also Refs~\cite{Holdom:2010qs,Shrock:2013cca}. The leading $1/N_f$ contribution to the $\beta$-function is obtained by resumming the gauge self-energy diagrams with an ever increasing chain of fermion bubbles, constituting a power series in $K=\alpha N_f/\pi$. It was noticed that this series has a finite radius of convergence; in the case of a $\mathrm{U}(1)$ gauge group the radius is $K=15/2$. Furthermore, the leading $1/N_f$ contribution to the $\mathrm{U}(1)$ $\beta$-function has a negative pole at $K=15/2$, thereby suggesting that this behaviour could cure the Landau-pole behaviour of the SM $\mathrm{U}(1)$ coupling, see e.g. Refs~\cite{Holdom:2010qs,Mann:2017wzh,Pelaggi:2017abg}. Recently, a further step towards a more complete understanding of these models was achieved by working out the leading $1/N_f$ contribution from the gauge sector to a Yukawa coupling~\cite{Kowalska:2017pkt}; an extension to semi-simple gauge groups was discussed in Ref.~\cite{Antipin:2018zdg}. However, only a single fermion flavour was assumed to couple to the scalar, and the scalar self-energy remained unaffected by the $N_f$ fermion bubbles.
Our work is the first step to bridge this remaining gap: we provide the leading $1/N_f$ $\beta$-function for a pure Yukawa theory, in which $N_f$ flavours of fermions couple to the scalar field via a Yukawa interaction. We leave the more detailed study within a general gauge-Yukawa framework for future work. Interestingly, the pure Yukawa model is closely related to the Gross--Neveu--Yukawa model, whose critical exponents have been recently computed up to $1/N_f^2$~\cite{Gracey:2017fzu,Manashov:2017rrx}; see also the earlier studies on the Gross--Neveu model, e.g. Refs~\cite{Gracey:1990wi,Vasiliev:1992wr}. The paper is organized as follows: In Sec.~\ref{sec:def} we introduce the framework and notations, and in Sec.~\ref{sec:renC} we give the expressions for the renormalization constants. In Sec.~\ref{sec:resum} we perform the resummations of the bubble chains and give closed-form expressions for the renormalization constants. In Sec.~\ref{sec:beta} we collect the results and write down the final expression for the $\beta$-function, and in Sec.~\ref{sec:concl} we conclude. Explicit formulas for the loop integrals are given in Appendix~\ref{sec:loops}. \section{The framework and definitions} \label{sec:def} We consider the massless Yukawa theory for a real scalar field, $\phi$, and a fermionic multiplet, $\psi$, consisting of $N_f$ flavours interacting through the usual Yukawa interaction: \begin{equation} \mathcal{L}_{\mathrm{Yuk}} = g \bar{\psi} \psi \phi. \end{equation} We define the rescaled coupling, \begin{equation} K \equiv \frac{g^2}{4 \pi^2} N_f, \end{equation} which is kept constant in the limit $N_f\to\infty$. The $\beta$-function of the rescaled coupling, $K$, can then be expanded in powers of $1/N_f$ as \begin{equation}\label{eq:F0F1} \beta(K) \equiv \frac{\mathrm{d}K}{\mathrm{d}\ln\mu}= K^2\left[ F_0 + \frac{1}{N_f} F_1(K)\right] + \mathcal{O}\left(1/N_f^2\right). \end{equation} The purpose of this paper is to compute $F_0$ and $F_1(K)$.
The former is entirely fixed at the one-loop level and can be derived just by rescaling the well-known result for the $\beta$-function at that order, while the evaluation of $F_1(K)$ requires the resummation of diagrams in Fig.~\ref{fig:bubbleChains} involving all-order fermion-bubble chains. \begin{figure} \centering \begin{subfigure}{\textwidth} \scalebox{0.5}{ \begin{minipage}[c]{.25\textwidth} \begin{tikzpicture}[node distance=2cm] \coordinate (v1); \coordinate[right = of v1] (v2); \coordinate[right = of v2] (v3); \coordinate[right = of v3] (v4); \coordinate[right = of v4] (v5); \draw[rscalar] (v1) -- (v2); \draw[mfermion] (v2) arc(180:90:2) coordinate (v8) arc(90:0:2); \draw[mfermion] (v2) arc(-180:-90:2) coordinate (v9) arc(-90:0:2); \draw[rscalar] (v4) -- (v5); \draw[rscalar] (v8) -- (v9) node[pos=0.12,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.32,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.44] (v11) {} node[pos=0.56] (v12) {} node[midway,circle,fill=white,dotted, minimum size=0.5 cm]{} node[pos=0.68,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.88,draw,solid,whiteblob,minimum size=0.5 cm] {}; \draw[dotted] (v11)--(v12); \end{tikzpicture} \end{minipage} ~\hspace{4.5cm}$+$\hspace{0.5cm} \begin{minipage}[c]{.25\textwidth} \vspace{-.25cm} \begin{tikzpicture}[node distance=2cm] \coordinate (v1); \coordinate[right = of v1] (v2); \coordinate[right = of v2] (v3); \coordinate[right = of v3] (v4); \coordinate[right = of v4] (v5); \draw[rscalar] (v1) -- (v2); \draw[mfermion] (v2) arc(180:155:2) coordinate (v8) arc(155:25:2) coordinate (v9) arc(25:0:2); \draw[mfermion] (v2) arc(-180:0:2); \draw[rscalar] (v4) -- (v5); \draw[rscalar] (v8) -- (v9) node[pos=0.12,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.32,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.43] (v11) {} node[pos=0.57] (v12) {} node[midway,circle,fill=white,dotted, minimum size=0.5 cm]{} node[pos=0.68,draw,solid,whiteblob,minimum size=0.5 cm] {} 
node[pos=0.88,draw,solid,whiteblob,minimum size=0.5 cm] {}; \draw[dotted] (v11)--(v12); \end{tikzpicture} \end{minipage} \hspace{4.5cm}$+$ \hspace{.5cm}\vspace{2cm} \begin{minipage}[c]{.25\textwidth} \vspace{0.25cm} \begin{tikzpicture}[node distance=2cm] \coordinate (v1); \coordinate[right = of v1] (v2); \coordinate[right = of v2] (v3); \coordinate[right = of v3] (v4); \coordinate[right = of v4] (v5); \draw[rscalar] (v1) -- (v2); \draw[mfermion] (v2) arc(-180:-155:2) coordinate (v8) arc(-155:-25:2) coordinate (v9) arc(-25:0:2); \draw[mfermion] (v2) arc(180:0:2); \draw[rscalar] (v4) -- (v5); \draw[rscalar] (v8) -- (v9) node[pos=0.12,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.32,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.43] (v11) {} node[pos=0.57] (v12) {} node[midway,circle,fill=white,dotted, minimum size=0.5 cm]{} node[pos=0.68,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.88,draw,solid,whiteblob,minimum size=0.5 cm] {}; \draw[dotted] (v11)--(v12); \end{tikzpicture} \end{minipage} } \caption{Scalar self-energy corrections.} \label{fig:sBubbleChain} \end{subfigure}\\ \begin{subfigure}{0.4\textwidth}\quad \centering ~\\ \vspace{1.9cm} \scalebox{0.75}{ \begin{tikzpicture}[node distance=1cm] \coordinate (v1); \coordinate[right = of v1] (v2); \coordinate[right = of v2] (v31); \coordinate[right = of v31] (v32); \coordinate[right = of v32] (v3); \coordinate[right = of v3] (v4); \draw[mfermion] (v1) -- (v4); \draw[rscalar] (v2) arc(180:0:1.5) node[pos=0.12,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.32,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.45] (v11) {} node[pos=0.55] (v12) {} node[midway,circle,fill=white,dotted, minimum size=0.5 cm]{} node[pos=0.68,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.88,draw,solid,whiteblob,minimum size=0.5 cm] {}; \draw[dotted] (v11)--(v12); \end{tikzpicture} } \caption{Fermion self-energy correction.} \label{fig:fBubbleChain} \end{subfigure} \vspace{2cm} 
\begin{subfigure}{0.4\textwidth} \centering \scalebox{0.75}{ \begin{tikzpicture}[node distance=1cm] \coordinate (v1); \coordinate[right = of v1] (v21); \coordinate[right = of v21] (v2); \coordinate[above right = of v2] (v31); \coordinate[above right = of v31] (v32); \coordinate[above right = of v32] (v3); \coordinate[below right = of v2] (v41); \coordinate[below right = of v41] (v42); \coordinate[below right = of v42] (v4); \draw[rscalar] (v1) -- (v2); \draw[mfermion] (v2) -- (v3) node[pos=0.85] (v5) {}; \draw[mfermion] (v2) -- (v4) node[pos=0.85] (v6) {}; \draw[rscalar] (v5) -- (v6) node[pos=0.12,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.32,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.43] (v11) {} node[pos=0.57] (v12) {} node[midway,circle,fill=white,dotted, minimum size=0.5 cm]{} node[pos=0.68,draw,solid,whiteblob,minimum size=0.5 cm] {} node[pos=0.88,draw,solid,whiteblob,minimum size=0.5 cm] {}; \draw[dotted] (v11)--(v12); \end{tikzpicture} } \caption{Vertex correction.} \label{fig:vBubbleChain} \end{subfigure} \vspace{-1.5cm} \caption{Scalar self-energy, fermion self-energy, and vertex corrections due to a chain of fermion bubbles.} \label{fig:bubbleChains} \end{figure} The $\beta$-function can be obtained from \begin{equation}\label{eq:beta} \beta = K^2 \frac{\partial G_1(K)}{\partial K}, \end{equation} where $G_1$ is defined by \begin{equation}\label{eq:ZK} \text{ln}\,Z_K \equiv \text{ln}\, (Z_S^{-1} Z_F^{-2} Z_V^2) = \sum_{n=1}^\infty \frac{G_n(K)}{\epsilon^n}, \end{equation} and $Z_S$, $Z_F$, and $Z_V$ are the renormalization constants for the scalar wave function, the fermion wave function, and the 1PI vertex, respectively. The scalar wave function renormalization constant is determined via \begin{equation}\label{eq:Z_Sdef} Z_S = 1 - \text{div} \{ Z_S \Pi_0(p^2, Z_K K,\epsilon) \}, \end{equation} where $\Pi_0(p^2,K_0,\epsilon)$ is the scalar self-energy divided by $p^2$, where $p$ is the external momentum. 
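For completeness, we recall the standard argument (ours, in the conventions used here) by which Eq.~\eqref{eq:beta} follows from the $\mu$-independence of the bare coupling. With $d=4-\epsilon$, the bare coupling is $K_0 = \mu^{\epsilon} Z_K K$, so that
\begin{equation}
0 = \frac{\mathrm{d}\ln K_0}{\mathrm{d}\ln\mu}
  = \epsilon + \frac{\beta_d(K)}{K}
  + \beta_d(K)\,\frac{\partial \ln Z_K}{\partial K},
\qquad
\beta_d(K) = -\epsilon K + \beta(K),
\end{equation}
where $\beta_d$ denotes the $d$-dimensional $\beta$-function. Substituting $\ln Z_K = \sum_{n\ge 1} G_n(K)/\epsilon^n$ and collecting the $\epsilon$-independent terms reproduces $\beta = K^2\,\partial G_1/\partial K$, while the cancellation of the higher poles yields consistency conditions relating the $G_{n>1}$ to $G_1$.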
Here and in the following, $\text{div}\{X\}$ denotes the poles of $X$ in $\epsilon$. The self-energy can be written as \begin{equation}\label{eq:PI0} \Pi_0(p^2,K_0,\epsilon) = K_0 \Pi^{(1)}(p^2,\epsilon) + \frac{1}{N_f} \sum_{n=2}^\infty K_0^n \Pi^{(n)}(p^2,\epsilon), \end{equation} where $\Pi^{(1)}$ gives the one-loop result, and $\Pi^{(n)}$ the $n$-loop part containing $n-2$ fermion bubbles in the chain, summed over the topologies shown in Fig.~\ref{fig:sBubbleChain}. Other contributions are of higher order in $1/N_f$ and are thus omitted. For the fermion self-energy and vertex renormalization constants, the lowest non-trivial contributions are already $\mathcal{O}(1/N_f)$, and we therefore have \begin{equation}\label{eq:Z_Fdef} Z_F = 1 - \text{div} \left\{ \Sigma_0(p^2, Z_K K, \epsilon) \right\}, \end{equation} \begin{equation} \Sigma_0(p^2,K_0,\epsilon) = \frac{1}{N_f} \sum_{n=1}^\infty K_0^n \Sigma^{(n)}(p^2,\epsilon), \end{equation} where $\Sigma^{(n)}$ is depicted in Fig.~\ref{fig:fBubbleChain} with $n-1$ fermion bubbles. Similarly, \begin{equation}\label{eq:Z_Vdef} Z_V = 1 - \text{div} \left\{ V_0(p^2, Z_K K, \epsilon) \right\}, \end{equation} \begin{equation} V_0(p^2,K_0,\epsilon) = \frac{1}{N_f} \sum_{n=1}^\infty K_0^n V^{(n)}(p^2,\epsilon), \end{equation} where $V^{(n)}$ again contains $n-1$ fermion bubbles and is shown diagrammatically in Fig.~\ref{fig:vBubbleChain}. Finally, we briefly comment on the scalar three-point and four-point functions, assuming that they are generated via fermion loops: the former vanishes exactly for massless fermions, while the latter is already $\mathcal{O}(1/N_f)$ at the lowest order. Therefore, they can be neglected for the purpose of our analysis. \section{Renormalization constants} \label{sec:renC} In this section, our goal is to extract the contributions to the renormalization constants that are $\mathcal{O}(1/N_f)$ and relevant for the computation of the $\beta$-function.
Our starting point for $Z_S$ is Eq.~\eqref{eq:Z_Sdef}. Using the expansion of the scalar self-energy, Eq.~\eqref{eq:PI0}, we obtain \begin{equation}\label{eq:Z_S2} \begin{split} Z_S & = 1 - \text{div} \bigg\{ Z_S Z_K K \Pi^{(1)}(p^2,\epsilon) + \frac{1}{N_f} \sum_{n=2}^\infty Z_S ( Z_K K)^n \Pi^{(n)}(p^2,\epsilon) \bigg\}. \end{split} \end{equation} Recalling that $Z_K \equiv Z_S^{-1}Z_F^{-2}Z_V^2$ and substituting Eqs.~\eqref{eq:Z_Fdef} and~\eqref{eq:Z_Vdef}, the first term in brackets can be written as \begin{multline} \text{div} \left \{ Z_S Z_K \Pi^{(1)}(p^2,\epsilon) K \right\} \\ =K \text{div}\left\{ \Pi^{(1)}\right\} + \frac{1}{N_f} \text{div} \left\{ 2 K \, \text{div}\left\{\Sigma_0(p^2,Z_K K,\epsilon) - V_0(p^2,Z_K K,\epsilon) \right\} \Pi^{(1)}(p^2,\epsilon) \right\}. \end{multline} The $\Pi^{(1)}$ part corresponds to the one-loop diagram and is given by \begin{equation} \begin{split} \Pi^{(1)}(p^2,\epsilon) \equiv & \text{div}\left\{\Pi^{(1)}\right\} + \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon) = \frac{1}{(4\pi)^{d/2-2}} \frac{G(1,1)}{2} (-p^2)^{d/2-2} \\ =& \frac{1}{\epsilon} + \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon), \end{split} \end{equation} where $d=4-\epsilon$, the loop function $G(1,1)$ is defined in Eq.~\eqref{eq:G2} in Appendix~\ref{sec:app1}, and we have introduced the notation $\Pi^{(1)}_{\mathrm{F}}$ to indicate the finite part of $\Pi^{(1)}$.
Then, \begin{equation}\begin{split}\label{eq:Z_Sfirst} \text{div} &\left \{ Z_S Z_K \Pi^{(1)}(p^2,\epsilon) K \right\} \\ &= \frac{K}{\epsilon} + \frac{1}{N_f} \text{div} \bigg\{ 2 K \, \text{div}\left\{\Sigma_0(p^2,Z_K K,\epsilon) - V_0(p^2,Z_K K,\epsilon) \right\} \\ & \qquad\qquad\qquad\quad\times \left( \text{div}\left\{\Pi^{(1)}\right\} + \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon)\right) \bigg\} \\ &= \frac{K}{\epsilon} + \frac{1}{N_f} \text{div} \left\{ 2 K \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon) \left[\Sigma_0(p^2,Z_K K,\epsilon) - V_0(p^2,Z_K K,\epsilon) \right] \right\} \\ &\quad + \frac{1}{N_f}\times\ \text{higher poles}, \end{split} \end{equation} where the higher poles, i.e., higher than $1/\epsilon$, arise from the product of two divergent parts and will be omitted because they play no role in what follows. Then, at the lowest order in $1/N_f$, \begin{equation} Z_S = 1 - \frac{K}{\epsilon} + \mathcal{O}\left(1/N_f \right). \end{equation} Therefore, every time $Z_K K$ appears in the argument of $\Sigma_0$ and $V_0$, it can be replaced by $ K\left(1 - \frac{K}{\epsilon}\right)^{-1}$; the additional contributions are higher order in $1/N_f$. For Eq.~\eqref{eq:Z_Sfirst}, we arrive at \begin{multline} \text{div} \left \{ Z_S Z_K \Pi^{(1)}(p^2,\epsilon) K \right\}\\ = \frac{K}{\epsilon} + \sum_{n=1}^\infty K^{n+1} \text{div} \left\{ 2 \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon) \left(1-\frac{K}{\epsilon}\right)^{-n} \left[ \Sigma^{(n)}(p^2,\epsilon) - V^{(n)}(p^2,\epsilon) \right] \right\}. \end{multline} Similarly, the second term of Eq.~\eqref{eq:Z_S2} reads \begin{equation} \frac{1}{N_f} \text{div} \left\{ \sum_{n=2}^\infty Z_S ( Z_S^{-1} K)^n \Pi^{(n)}(p^2,\epsilon) \right\} = \frac{1}{N_f} \sum_{n=2}^\infty K^n \text{div}\left\{ \left( 1 - \frac{K}{\epsilon} \right)^{1-n} \Pi^{(n)}(p^2,\epsilon) \right\}. 
\end{equation} Altogether, we can write $Z_S$ as \begin{multline} Z_S = 1 - \frac{K}{\epsilon} - \frac{1}{N_f} \sum_{n=2}^\infty K^n \, \text{div} \left\{ \left( 1 - \frac{K}{\epsilon} \right)^{1-n} \left( 2 \Pi^{(1)}_{\mathrm{F}} \left[ \Sigma^{(n-1)} - V^{(n-1)} \right] + \Pi^{(n)} \right) \right\}, \end{multline} where the explicit functional dependence on $(p^2,\epsilon)$ has been omitted to lighten the notation. Using the binomial expansion, \begin{equation} \left(1 - \frac{K}{\epsilon}\right)^{1-n} = \sum_{i=0}^\infty \left( \begin{array}{c} n + i - 2 \\ i \end{array} \right) \frac{K^i}{\epsilon^i} \end{equation} and performing a shift in the summation, $n \rightarrow n - i$, we find our final expression for $Z_S$: \begin{equation}\label{eq:Z_Sfinal} Z_S = 1 - \frac{K}{\epsilon} - \frac{1}{N_f} \sum_{n=2}^\infty K^n \text{div} \left\{ \sum_{i=0}^{n-2} \left( \begin{array}{c} n-2 \\ i \end{array} \right) \frac{1}{\epsilon^i} \left( 2 \Pi^{(1)}_{\mathrm{F}} \left( \Sigma^{(n-i-1)}-V^{(n-i-1)} \right) + \Pi^{(n-i)} \right) \right\}. \end{equation} We notice that Eq.~\eqref{eq:Z_Sfinal} differs essentially from its QED counterpart~\cite{PalanquesMestre:1983zy} due to the contributions from the fermion self-energy and the vertex, which cancel exactly in QED as a consequence of the Ward identity.
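As an independent sanity check (not part of the derivation), the negative-binomial expansion used above can be verified with exact rational arithmetic; the following short Python sketch is ours and the function name is purely illustrative.

```python
from fractions import Fraction
from math import comb

def binom_series(n, x, terms=60):
    # Partial sum of sum_i C(n+i-2, i) x^i, the series expansion of (1-x)^(1-n)
    return sum(Fraction(comb(n + i - 2, i)) * x**i for i in range(terms))

x = Fraction(1, 10)  # plays the role of K/epsilon; |x| < 1 for convergence
for n in range(2, 7):
    exact = (1 - x)**(1 - n)        # (1-x)^(1-n) computed with exact rationals
    approx = binom_series(n, x)
    assert abs(exact - approx) < Fraction(1, 10**40)
print("binomial expansion verified")
```

Since all arithmetic is done with `Fraction`, the agreement up to the (tiny) truncation error of the partial sum is exact rather than floating-point.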
The expression for $Z_F$ can be derived from Eq.~\eqref{eq:Z_Fdef} in a similar manner: \begin{equation}\label{eq:Z_Ffinal} \begin{split} Z_F & = 1 - \frac{1}{N_f} \sum_{n=1}^\infty \text{div}\left\{ \left( Z_K K \right)^n \Sigma^{(n)}(p^2,\epsilon) \right\} \\ & = 1 - \frac{1}{N_f} \sum_{n=1}^\infty K^n \text{div}\left\{\left(1-\frac{K}{\epsilon}\right)^{-n}\Sigma^{(n)}(p^2,\epsilon) \right\} \\ & = 1 - \frac{1}{N_f} \sum_{n=1}^\infty K^n \text{div} \left\{ \sum_{i=0}^{n-1} \left( \begin{array}{c} n-1 \\ i \end{array} \right) \frac{1}{\epsilon^i} \Sigma^{(n-i)}(p^2,\epsilon) \right\}, \end{split} \end{equation} where we have again performed the same shift $n \rightarrow n-i$ in the last line. The derivation of $Z_V$ is completely analogous, and we readily obtain \begin{equation}\label{eq:Z_Vfinal} Z_V = 1 - \frac{1}{N_f} \sum_{n=1}^\infty K^n \text{div} \left\{ \sum_{i=0}^{n-1} \left( \begin{array}{c} n-1 \\ i \end{array} \right) \frac{1}{\epsilon^i} V^{(n-i)}(p^2,\epsilon) \right\}. \end{equation} \section{Resummation} \label{sec:resum} In this section, we provide closed formulas for Eqs.~\eqref{eq:Z_Sfinal}, \eqref{eq:Z_Ffinal}, and~\eqref{eq:Z_Vfinal}. \subsection{The vertex} By explicit computation, the $n$-loop contribution to $V_0$ is \begin{equation}\label{eq:Vn} \begin{split} V^{(n)}(p^2,\epsilon) =& \frac{(-1)^{n}}{4} \left(\frac{1}{(4 \pi)^{d/2-2}} \right)^n \left(\frac{G(1,1)}{2} \right)^{n-1} (-p^2)^{n(d/2-2)} \\ &\times G\left(1,1-(n-1)(d/2-2) \right), \end{split} \end{equation} where $G(n_1,n_2)$ is defined in Eq.~\eqref{eq:G2}.
We notice that, as in Ref.~\cite{PalanquesMestre:1983zy}, Eq.~\eqref{eq:Vn} allows for the following expansion: \begin{equation}\label{eq:V^nexp} V^{(n)}(p^2,\epsilon) = (-1)^{n} \frac{1}{n \epsilon^n} \frac{v(p^2,\epsilon,n)}{2}, \end{equation} where \begin{equation} v(p^2,\epsilon,n) = \sum_{j=0}^\infty v_j(p^2,\epsilon) (n \epsilon)^j, \end{equation} and $v_j(p^2,\epsilon)$ are regular in the limit $\epsilon \rightarrow 0$ for all $j$. In particular, $v_0(\epsilon)$ is independent of $p^2$ and is explicitly given by \begin{equation}\label{eq:v0def} v_0(\epsilon) = \frac{2 \Gamma(2-\epsilon)} {\Gamma\left(1-\frac{\epsilon}{2}\right)^2 \Gamma \left(2 - \frac{\epsilon}{2} \right) \Gamma \left( \frac{\epsilon}{2} \right)\,\epsilon}. \end{equation} Substituting Eqs.~\eqref{eq:Vn} and~\eqref{eq:V^nexp} into Eq.~\eqref{eq:Z_Vfinal}, we find: \begin{equation}\label{eq:Z_Vsum} Z_V = 1 - \frac{1}{N_f} \sum_{n=1}^\infty (-K)^n\text{div}\left\{ \sum_{j=0}^{n-1} \frac{1}{\epsilon^{n-j}} \sum_{i=0}^{n-1} \left( \begin{array}{c} n-1 \\ i \end{array} \right)(-1)^i (n-i)^{j-1} \frac{v_j(p^2,\epsilon)}{2} \right\}. \end{equation} Then, by using the result of Ref.~\cite{PalanquesMestre:1983zy}, \begin{equation} \sum_{i=0}^{n-1}\left( \begin{array}{c} n-1 \\ i \end{array} \right)(-1)^i (n-i)^{j-1} = - \delta_{j,0} \frac{(-1)^n}{n}, \ j=0,\dots,n-1, \end{equation} Eq.~\eqref{eq:Z_Vsum} simplifies to \begin{equation}\label{eq:Z_Vsim} Z_V = 1 + \frac{1}{2 N_f} \sum_{n=1}^\infty \frac{K^n}{\epsilon^n} \frac{v_0(\epsilon)}{n}. \end{equation} Expanding $v_0(\epsilon)$ as \begin{equation} v_0(\epsilon) = \sum_{j=0}^\infty v_0^{(j)} \epsilon^j \end{equation} and keeping only the $1/\epsilon$ pole of Eq.~\eqref{eq:Z_Vsim}, we find the closed formula for $Z_V$: \begin{equation} Z_V = 1 + \frac{1}{2 \epsilon N_f} \sum_{n=1}^\infty \frac{K^n}{n}v_0^{(n-1)} = 1 + \frac{1}{2 \epsilon N_f} \int_0^K v_0(t) \mathrm{d}t.
\end{equation} \subsection{The fermion self-energy} The $n$-loop contribution to $\Sigma_0$ is found to be \begin{equation}\label{eq:Sigmaexp} \begin{split} \Sigma^{(n)}(p^2,\epsilon) =& - \frac{(-1)^{n}}{8} \left( \frac{1}{(4 \pi)^{d/2-2}} \right)^n \left( \frac{G(1,1)}{2} \right)^{n-1} (-p^2)^{n(d/2-2)} \\ &\times \left[ G(1,1-(n-1)(d/2-2))-G(1,-(n-1)(d/2-2)) \right]. \end{split} \end{equation} Similarly to Eq.~\eqref{eq:Vn}, Eq.~\eqref{eq:Sigmaexp} can be expanded as \begin{equation} \Sigma^{(n)}(p^2,\epsilon)= - (-1)^{n} \frac{1}{n \epsilon^n} \frac{\sigma(p^2,\epsilon,n)}{4}, \end{equation} where \begin{equation} \sigma(p^2,\epsilon,n) = \sum_{j=0}^{\infty} \sigma_j(p^2,\epsilon) (n \epsilon)^j, \end{equation} and $\sigma_j(p^2,\epsilon)$ are regular for $\epsilon \rightarrow 0$. Again, $\sigma_0(\epsilon)$ is independent of $p^2$, and it is given by \begin{equation} \sigma_0(\epsilon) = - \frac{ 2^{5 - \epsilon} \Gamma\left( \frac{3}{2} - \frac{\epsilon}{2} \right)} { \sqrt{\pi}(4 - \epsilon) \Gamma\left(-\frac{\epsilon}{2}\right) \epsilon} \frac{\sin\left(\frac{\pi \epsilon}{2}\right)}{\pi \epsilon}. \end{equation} Using the same procedure as in the previous subsection, we find that only $\sigma_0(\epsilon)$ contributes to $Z_F$. Keeping only the $1/\epsilon$ pole, the closed formula for $Z_F$ is \begin{equation} Z_F = 1 - \frac{1}{4 \epsilon N_f} \int_0^K \sigma_0(t) \mathrm{d}t. \end{equation} \subsection{The scalar self-energy} The evaluation of the bubble diagrams in Fig.~\ref{fig:sBubbleChain} is quite cumbersome and is discussed in Appendix~\ref{sec:app2}.
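The combinatorial identity of Ref.~\cite{PalanquesMestre:1983zy} used in the vertex and fermion self-energy resummations above can be cross-checked directly with exact rational arithmetic. The Python sketch below is an independent check of ours, not part of the paper's derivation.

```python
from fractions import Fraction
from math import comb

def alternating_sum(n, j):
    # sum_{i=0}^{n-1} C(n-1, i) (-1)^i (n-i)^(j-1), computed exactly
    return sum(Fraction(comb(n - 1, i)) * (-1)**i * Fraction(n - i)**(j - 1)
               for i in range(n))

for n in range(1, 10):
    for j in range(n):  # the identity holds for j = 0, ..., n-1
        # claimed value: -delta_{j,0} (-1)^n / n
        expected = -Fraction((-1)**n, n) if j == 0 else Fraction(0)
        assert alternating_sum(n, j) == expected
print("identity verified for n < 10")
```

Using `Fraction` keeps the $j=0$ case, which involves the negative power $(n-i)^{-1}$, exact.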
Here, we notice that the expression for $\Pi^{(n)}(p^2,\epsilon)$, $n \geq 2$, allows for the following expansion: \begin{equation} \Pi^{(n)}= - \frac{3}{2} \frac{(-1)^n}{n (n-1) \epsilon^n} \pi(p^2,\epsilon,n), \end{equation} where \begin{equation} \pi(p^2,\epsilon,n) = \sum_{j=0}^\infty \pi_j(p^2,\epsilon)(n \epsilon)^j, \end{equation} and $\pi_j(p^2,\epsilon)$ are regular for $\epsilon \rightarrow 0$. Similarly to the previous cases, $\pi_0(\epsilon)$ is independent of $p^2$. In view of Eq.~\eqref{eq:Z_Sfinal}, we define \begin{equation} 2 \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon) \left( \Sigma^{(n-1)}(p^2,\epsilon) -V^{(n-1)}(p^2,\epsilon) \right) + \Pi^{(n)}(p^2,\epsilon) \equiv \frac{(-1)^n}{n (n-1) \epsilon^n} \xi(p^2,\epsilon,n), \end{equation} where \begin{equation}\label{eq:csindef} \xi(p^2,\epsilon,n) \equiv n \epsilon \, \Pi^{(1)}_{\mathrm{F}} \left( \frac{\sigma(p^2,\epsilon,n-1)}{2}+ v(p^2,\epsilon,n-1) \right) - \frac{3}{2} \pi(p^2,\epsilon,n), \end{equation} and \begin{equation} \xi(p^2,\epsilon,n) = \sum_{j=0}^\infty \xi_j(p^2,\epsilon) (n \epsilon)^j, \end{equation} with $\xi_j(p^2,\epsilon)$ regular for $\epsilon \rightarrow 0$ for all $j$.
In particular, $\xi_0(\epsilon)$ is independent of $p^2$ and is explicitly given by \begin{equation} \xi_0(\epsilon)=-\frac{(1-\epsilon)\Gamma(4-\epsilon)}{\Gamma\left(2-\frac{\epsilon}{2}\right) \Gamma\left(3-\frac{\epsilon}{2}\right)\pi\epsilon}\sin\left(\frac{\pi\epsilon}{2}\right) \end{equation} Then, using the above definitions, Eq.~\eqref{eq:Z_Sfinal} can be written as \begin{equation} \begin{split} Z_S & = 1 - \frac{K}{\epsilon} - \frac{1}{N_f} \sum_{n=2}^\infty K^n \text{div} \left\{ \sum_{i=0}^{n-2} \left( \begin{array}{c} n-2 \\ i \end{array} \right) \frac{1}{\epsilon^i} \frac{(-1)^{n-i}}{(n-i)(n-i-1)\epsilon^{n-i}} \xi(p^2,\epsilon,n-i) \right\} \\ & = 1 - \frac{K}{\epsilon} - \frac{1}{N_f} \sum_{n=2}^\infty (-K)^n \text{div} \left\{\sum_{j=0}^{n-1} \frac{1}{\epsilon^{n-j}} \xi_j(p^2,\epsilon) \sum_{i=0}^{n-2} \left( \begin{array}{c} n-2 \\ i \end{array} \right)(-1)^{i} \frac{(n-i)^{j-1}}{(n-i-1)} \right\}. \end{split} \end{equation} Moreover, we find that \begin{equation} \sum_{i=0}^{n-2} \left( \begin{array}{c} n-2 \\ i \end{array} \right) (-1)^{i} \frac{(n-i)^{j-1}}{(n-i-1)} = \begin{cases} \frac{(-1)^n}{n} & j = 0 \\ \frac{(-1)^n}{n-1} & j = 1,\dots,n-1 \end{cases}, \end{equation} and therefore the expression for $Z_S$ can be significantly simplified: \begin{equation} \label{eq:ZSxi} \begin{split} Z_S & = 1 - \frac{K}{\epsilon} - \frac{1}{N_f} \sum_{n=2}^\infty K^n \, \text{div} \left\{ \frac{1}{\epsilon^n} \left( \frac{\xi_0(\epsilon)}{n} + \frac{1}{n-1} \sum_{j=1}^{n-1} \xi_j(p^2,\epsilon) \epsilon^j \right) \right\} \\ & = 1 - \frac{K}{\epsilon} - \frac{1}{N_f} \sum_{n=2}^\infty K^n \, \text{div} \left\{ \frac{1}{\epsilon^n} \left( \frac{\xi_0(\epsilon)}{n} + \frac{1}{n-1} \sum_{j=1}^{\infty} \xi_j(p^2,\epsilon) \epsilon^j \right) \right\} \\ & = 1 - \frac{K}{\epsilon} - \frac{1}{N_f} \sum_{n=2}^\infty K^n \, \text{div} \left\{ \frac{1}{\epsilon^n} \left( \frac{\xi_0(\epsilon)}{n} + \frac{\xi(p^2,\epsilon,1) - \xi_0(\epsilon)}{n-1} \right) 
\right\}, \end{split} \end{equation} where in the second line we extended the sum over $j$ up to $\infty$ without affecting the result, since all the terms for $j > n-1$ are finite. The function $\xi(p^2,\epsilon,1)$, corresponding to \begin{equation} \xi(p^2,\epsilon,1) \equiv \sum_{j=0}^\infty \xi_j(p^2,\epsilon)\epsilon^j, \end{equation} can be evaluated by taking the limit $n \rightarrow 1$ in Eq.~\eqref{eq:csindef}, although the latter is formally defined only for $n \geq 2$. We find the following expression: \begin{equation}\label{eq:cancel} \xi(p^2,\epsilon,1) = -\frac{\Gamma(4-\epsilon)}{\Gamma\left(2-\frac{\epsilon}{2}\right) \Gamma\left(3-\frac{\epsilon}{2}\right)\pi\epsilon}\sin\left(\frac{\pi\epsilon}{2}\right) \equiv \xi(\epsilon,1). \end{equation} A few comments are in order. Eq.~\eqref{eq:cancel} ensures that $Z_S$ is independent of the external momentum $p^2$, as it should be. This result comes from an exact cancellation among the different contributions of the scalar self-energy, the fermion self-energy, and the vertex in Eq.~\eqref{eq:csindef}. In particular, we find that \begin{equation}\label{eq:pi1} \begin{split} \pi(p^2,\epsilon,1) & = \frac{2}{3} \left(\frac{\sigma(p^2,\epsilon,0)}{2} + v(p^2,\epsilon,0) \right) \left[1 + 1 \cdot \epsilon \, \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon) \right] \\ & = \frac{2}{3} \left(\frac{\sigma_0(\epsilon)}{2} + v_0(\epsilon) \right) \left[1 + \epsilon \, \Pi^{(1)}_{\mathrm{F}}(p^2,\epsilon) \right], \end{split} \end{equation} and therefore \begin{equation}\label{eq:xieps1} \xi(\epsilon,1) = - \frac{\sigma_0(\epsilon)}{2} - v_0(\epsilon), \end{equation} which is equivalent to Eq.~\eqref{eq:cancel}. Interestingly, Eq.~\eqref{eq:pi1} only holds for $n=1$. All in all, the $p^2$ independence of Eq.~\eqref{eq:cancel} provides a non-trivial check for our computation. Moreover, we see that \begin{equation} \xi_0(\epsilon) = (1-\epsilon) \xi(\epsilon,1). \end{equation} We are now ready to resum the series in Eq.~\eqref{eq:ZSxi}.
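Before resumming, the alternating-sum identity used above, which differs from the QED case through the extra $1/(n-i-1)$ factor, can likewise be cross-checked with exact arithmetic; the script below is our own illustration, not part of the proof.

```python
from fractions import Fraction
from math import comb

def s(n, j):
    # sum_{i=0}^{n-2} C(n-2, i) (-1)^i (n-i)^(j-1) / (n-i-1), computed exactly
    return sum(Fraction(comb(n - 2, i)) * (-1)**i * Fraction(n - i)**(j - 1)
               / (n - i - 1) for i in range(n - 1))

for n in range(2, 10):
    # claimed values: (-1)^n / n for j = 0, and (-1)^n / (n-1) for j = 1, ..., n-1
    assert s(n, 0) == Fraction((-1)**n, n)
    for j in range(1, n):
        assert s(n, j) == Fraction((-1)**n, n - 1)
print("identity verified for n < 10")
```

The $j=0$ case can also be confirmed analytically via the partial fraction $1/((n-i)(n-i-1)) = 1/(n-i-1) - 1/(n-i)$ together with the standard beta-function sum $\sum_{k=0}^m \binom{m}{k}(-1)^k/(k+a) = m!\,\Gamma(a)/\Gamma(a+m+1)$.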
By expanding $\xi_0(\epsilon)$ as \begin{equation} \xi_0(\epsilon) = \sum_{j=0}^\infty \xi_0^{(j)} \epsilon^j, \end{equation} the $\frac{1}{n}$ term in Eq.~\eqref{eq:ZSxi} is given by \begin{equation}\begin{split} \sum_{n=2}^\infty \frac{K^n}{\epsilon^n} \frac{\xi_0(\epsilon)}{n} & = \frac{1}{\epsilon} \sum_{n=2}^\infty K^n \frac{\xi_0^{(n-1)}}{n} + \text{higher poles} \\ & = \frac{1}{\epsilon} \left( \, \sum_{n=0}^\infty K^{n+1} \frac{\xi_0^{(n)}}{n+1} - K \xi_0^{(0)} \right) + \text{higher poles} \\ & = \frac{1}{\epsilon} \int_0^K \left[ \xi_0(t) - \xi_0(0) \right] \mathrm{d}t + \text{higher poles} . \end{split} \end{equation} As for the $\frac{1}{n-1}$ term, using $\xi_0(\epsilon)=(1-\epsilon) \xi(\epsilon,1)$ and expanding $\xi(\epsilon,1)$ as \begin{equation} \xi(\epsilon,1) = \sum_{j=0}^\infty \tilde{\xi}^{(j)} \epsilon^j, \end{equation} we find \begin{equation} \begin{split} \sum_{n=2}^\infty \frac{K^n}{\epsilon^{n}} \frac{ \epsilon \xi(\epsilon,1)}{n-1} & = \frac{K}{\epsilon} \sum_{n=0}^\infty \frac{K^{n+1}}{n+1} \tilde{\xi}^{(n)} + \text{higher poles} \\ & = \frac{K}{\epsilon} \int_0^K \xi(t,1)\mathrm{d}t + \text{higher poles}. \end{split} \end{equation} Finally, the closed formula for $Z_S$ reads \begin{equation} Z_S = 1 - \frac{K}{\epsilon} - \frac{1}{ \epsilon N_f} \int_0^K \left[ \xi_0(t) - \xi_0(0) + \xi(t,1) K \right] \mathrm{d}t. \end{equation} \section{The $\beta$-function} \label{sec:beta} Using the results of the previous section together with Eq.~\eqref{eq:ZK}, we can finally proceed to evaluate the $\beta$-function. First, we find that \begin{equation} G_1(K)= K + \frac{1}{N_f} \int_0^K \left( \xi_0(t) - \xi_0(0) + \xi(t,1) K + \frac{\sigma_0(t)}{2} + v_0(t) \right) \mathrm{d}t.
\end{equation} Now, it is straightforward to compute the $\beta$-function: \begin{equation}\label{eq:beta0} \begin{split} \beta(K) =& K^2 + \frac{K^2}{N_f} \left\{ - \xi_0(0) + \xi(K,1) + \frac{\sigma_0(K)}{2} + v_0(K)+ \int_0^K \xi(t,1) \mathrm{d}t \right\}. \end{split} \end{equation} Recalling Eq.~\eqref{eq:xieps1} and using $\xi_0(0) = -\frac{3}{2}$, Eq.~\eqref{eq:beta0} can be further simplified to \begin{equation}\label{eq:betafinal} \frac{\beta(K)}{K^2} = 1 + \frac{1}{N_f} \left\{ \frac{3}{2} + \int_0^K \xi(t,1) \mathrm{d}t \right\}. \end{equation} Finally, by comparison with Eq.~\eqref{eq:F0F1}, we see that $F_0 = 1$ and \begin{equation} F_1(K) = \frac{3}{2} + \int_0^K \xi(t,1) \mathrm{d}t. \end{equation} We plot the integrand, $\xi(t,1)$, in Fig.~\ref{fig:I1}. We have checked that our $\beta$-function agrees at leading order in $1/N_f$ up to the four-loop level with the result of Ref.~\cite{Zerf:2017zqi}, and with the result extracted from the critical exponents of the Gross--Neveu--Yukawa model computed using a different technique~\cite{Gracey:2017fzu}. Finally, let us comment on the pole structure: the integrand, $\xi(t,1)$, has its first pole occurring at $t=5$, which results in a logarithmic singularity for $F_1(K)$ around $K=5$. Due to the sign of $\xi(t,1)$, we see that $F_1(K)$ approaches large negative values for $K\rightarrow5^-$. This suggests the existence of a UV fixed point at $K_\text{UV} \lesssim 5$ such that $F_1(K_\text{UV}) = - N_f$. \begin{figure}[t] \centering \includegraphics[width=8cm]{I1v2.pdf} \caption{The function $\xi(t,1)$.} \label{fig:I1} \end{figure} \section{Conclusions} \label{sec:concl} We have computed the leading $1/N_f$ contribution to the $\beta$-function in Yukawa theory with $N_f$ fermion flavours coupling to a real scalar. We obtained a closed-form expression for the $\beta$-function up to order $\mathcal{O}(1/N_f)$. This expression has a finite radius of convergence, and the first singularity occurs at $K=5$.
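The two properties just quoted, the value $\xi_0(0) = \xi(0,1) = -3/2$ and the first singularity at $K=5$, can be illustrated numerically from the explicit form of $\xi(\epsilon,1)$ in Eq.~\eqref{eq:cancel}. The Python snippet below is our own check, not part of the derivation; the function name is illustrative.

```python
from math import gamma, sin, pi

def xi1(t):
    # xi(t, 1) = -Gamma(4-t) sin(pi t / 2) / (Gamma(2-t/2) Gamma(3-t/2) pi t)
    return -gamma(4 - t) * sin(pi * t / 2) / (
        gamma(2 - t / 2) * gamma(3 - t / 2) * pi * t)

# limit t -> 0 reproduces xi(0,1) = -3/2
assert abs(xi1(1e-7) + 1.5) < 1e-5
# the function stays moderate well inside the interval (0, 5) ...
assert abs(xi1(3.0)) < 1
# ... but is negative and grows in magnitude as t -> 5^-,
# with a growth rate consistent with a simple pole at t = 5
assert xi1(4.99) < 0
assert abs(xi1(4.99)) > 5 * abs(xi1(4.9))
print(xi1(4.9), xi1(4.99))
```

The poles of $\Gamma(4-t)$ at $t=4$ and $t=6$ are cancelled by the zeros of $\sin(\pi t/2)$ (and, at $t=4$, by the pole of $\Gamma(2-t/2)$ in the denominator), so the first surviving singularity is indeed at $t=5$.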
The present result adds an interesting ingredient to models with a large number of fermions, and contributes to a better understanding of the UV behaviour of gauge-Yukawa theories. \section*{Acknowledgments} We are grateful to John Gracey for bringing to our attention the connection to the Gross--Neveu model. We thank Florian Goertz and Valentin Tenorth for discussions and valuable comments.
\section{Introduction} \subsection{Our Contributions} We consider the following problem. We are given a set of agents and a set of items. There are as many items as agents. Each agent has a \emph{dichotomous preference} over the items, that is, each agent evaluates each item as acceptable or unacceptable. (See, e.g., \cite{BM04} for situations where dichotomous preferences naturally arise.) Over the set of agents, we are given a communication graph. We are also given two assignments of items to agents, where each agent receives an acceptable item. Now, we want to determine whether one assignment can be reached from the other assignment by rational exchanges. Here, a rational exchange means that each of the two agents accepts the item assigned to the other, and they are joined by an edge in the communication graph. We investigate algorithmic aspects of this problem. Our results are two-fold. We first prove that our problem can be solved in polynomial time if the communication graph is a tree. Second, we prove that our problem is $\mathsf{PSPACE}$-complete even when the communication graph is complete (that is, every pair of agents can exchange their items). This $\mathsf{PSPACE}$-completeness result shows an interesting contrast to the $\mathsf{NP}$-completeness in the strict preference case~\cite{MB20}. The question studied in this paper is related to the generation of a random assignment. Bogomolnaia and Moulin~\cite{BM04} established several desirable properties of random assignments in situations with dichotomous preferences. One of the typical methods for generating a random assignment is based on the Markov chain Monte Carlo method \cite{LP17}. In this method, we consider a sequence of small changes to assignments and hope that the resulting assignment is sufficiently random. For this method to work, we require that all possible assignments can be reached from an arbitrary initial assignment, i.e., the irreducibility of the Markov chain.
This paper studies such an aspect of random assignments under dichotomous preferences from the perspective of combinatorial reconfiguration \cite{N18}. \subsection{Background} The problem of assigning indivisible items to agents has been extensively studied in algorithmic game theory and computational social choice (see, e.g., \cite{M13,KlausMR16}). Applications of this kind of problem include job allocation, college admission, school choice, kidney exchange, and junior doctor allocation to hospital posts. When we consider this kind of problem, we implicitly assume that agents can observe the situations of all the agents and freely communicate with others. Recently, assignment problems without these assumptions have been studied. For example, fairness concepts based on limited observations of others have been considered in \cite{AKP17,ABCGL18,BCGLMW19,BKN18,FMT19}. In a typical setting in this direction, we are given a graph defined on the agents, and fairness properties are defined on pairs of agents joined by an edge of the graph or on the neighborhoods of vertices. This paper is concerned with the latter assumption, that is, we consider assignment problems in the situation where the communication between agents is limited. Our problem is concerned with situations where each agent is initially endowed with a single item: such situations commonly arise in the housing market problem~\cite{SS74}. In the housing market problem, the goal is to reach one of the desired item assignments by exchanging items among agents from the initial assignment. For example, the top-trading cycle algorithm proposed by Shapley and Scarf~\cite{SS74} is one of the most fundamental algorithms for this problem, and variants of the top-trading cycle algorithm have been proposed (see, e.g., \cite{AS99,AD12}). As noted above, in the standard housing market problem, we assume that any pair of agents can exchange their items.
However, in some situations, this assumption does not seem to be realistic. For example, when we consider trading among a large number of agents, it is natural to consider that agents can exchange their items only if they can communicate with each other. Recently, the setting with restricted exchanges has been considered \cite{GLW17,HX20,LPS21,MB20}. More precisely, we are given an undirected graph defined on the agents representing possible exchanges, and a pair of agents can exchange their items only if they are joined by an edge. Gourv\`{e}s, Lesca, and Wilczynski~\cite{GLW17} initiated the algorithmic research of exchanges over social networks in the housing market problem. They assumed that each agent has a strict preference over the items, and considered the question that asks which allocation of the items can emerge by rational exchanges between two agents. More concretely, they considered the problem of determining whether a target assignment can be reached from an initial assignment by rational exchanges between two agents. Here a rational exchange means that both agents prefer the item assigned to the other to her/his currently assigned item and they are joined by an edge. We can see that if the target assignment is reachable from the initial assignment, then the target assignment can emerge by decentralized rational trades between agents. Gourv\`{e}s, Lesca, and Wilczynski~\cite{GLW17} proved that this problem is $\mathsf{NP}$-complete in general, and can be solved in polynomial time when the communication graph is a tree. Later, M\"{u}ller and Benter~\cite{MB20} proved that this problem is $\mathsf{NP}$-complete even when the communication graph is complete, and can be solved in polynomial time when the communication graph is a cycle. 
In addition to reachability between assignments by rational exchanges, the problem of determining whether an assignment where a specified agent receives a target item can be reached from an initial assignment by rational exchanges has been studied. Gourv\`{e}s, Lesca, and Wilczynski~\cite{GLW17} proved that this problem is $\mathsf{NP}$-complete even when the communication graph is a tree. Huang and Xiao~\cite{HX20} proved that this problem can be solved in polynomial time when the communication graph is a path. In addition, they proved the $\mathsf{NP}$-completeness and the polynomial-time solvability in stars for preferences that may contain ties. Li, Plaxton, and Sinha~\cite{LPS21} considered the following variant of the model mentioned above~\cite{GLW17,HX20,MB20}. In their model, we are given a graph defined on the items, and an exchange between two agents is allowed if their current items are joined by an edge. For this model, Li, Plaxton, and Sinha~\cite{LPS21} proved results similar to those for the former model \cite{GLW17,HX20,MB20}. Our problem can be regarded as one of the problems where we are given an initial configuration and a target configuration of some combinatorial objects, and the goal is to check the reachability between these two configurations via some specified operations. In theoretical computer science, this kind of problem has been studied under the name of \emph{combinatorial reconfiguration}. The algorithmic studies of combinatorial reconfiguration were initiated by Ito et al.~\cite{IDHPSUU11}. See, e.g., \cite{N18} for a survey of combinatorial reconfiguration. In Section~\ref{setion:pspace_complete}, we use a known result in combinatorial reconfiguration. \section{Preliminaries} Assume that we are given a finite set $N$ of agents and a finite set $M$ of items such that $|N| = |M|$. For each item $j \in M$, we are given a subset $N_j$ of agents who can accept $j$.
For each agent $i \in N$, define a subset $M_i \subseteq M$ as the set of items in $M$ that are acceptable for $i$, i.e., $j \in M_i$ if and only if $i \in N_j$. For a subset $X \subseteq M$, we define $N_X = \bigcup_{j \in X}N_j$. We define the ordered families $\ensuremath\mathcal{M}$ and $\ensuremath\mathcal{N}$ as $\ensuremath\mathcal{M} = (M_i \mid i \in N)$ and $\ensuremath\mathcal{N} = (N_j \mid j \in M)$. Furthermore, we are given an undirected graph $G = (N, E)$. The setup can be rephrased in terms of graphs. From the family $\ensuremath\mathcal{N}=(N_j \mid j \in M)$, we may define the following bipartite graph $H$. The vertex set of $H$ is $N\cup M$, and two vertices $i\in N$ and $j \in M$ are joined by an edge if and only if $i \in N_j$ (or equivalently, $j \in M_i$). The graph $G$ is defined over the set $N$. See \figurename~\ref{fig:itemalloc_graph1}. \begin{figure}[t] \centering \includegraphics[scale=1]{itemalloc_graph1.pdf} \caption{The graph representation. Graph $G$ is shown in red, and graph $H$ is shown in gray.} \label{fig:itemalloc_graph1} \end{figure} A bijection $a \colon N \to M$ is called an \emph{assignment} if $a(i) \in M_i$ for every agent $i \in N$, i.e., $a(i)$ is an item that is acceptable for $i$. Given an assignment $a$, we say that an item $j$ is \emph{assigned} to an agent $i$ if $a(i)=j$. In terms of the graph $H$, an assignment corresponds to a \emph{perfect matching} of $H$. Hall's marriage theorem states that a perfect matching of $H$ exists if and only if $|S| \leq |N_S|$ for all $S\subseteq M$. Hall's marriage theorem will be used in the next section to prove our theorems. For a pair of assignments $a, b \colon N \to M$, we write $a \to b$ if there exist distinct agents $i, i^{\prime} \in N$ satisfying the following two conditions. \begin{itemize} \item For every agent $k \in N \setminus \{i,i^{\prime}\}$, $a(k) = b(k)$. \item $a(i) = b(i^{\prime})$, $a(i^{\prime}) = b(i)$, and $\{i,i^{\prime}\} \in E$.
\end{itemize} See \figurename~\ref{fig:itemalloc_swap1}. As a handy notation, we use $a(Y) = \{a(i) \mid i \in Y\}$ for every $Y \subseteq N$ and $a^{-1}(X) = \{a^{-1}(j) \mid j \in X\}$ for every $X \subseteq M$. \begin{figure}[t] \centering \includegraphics[scale=1]{itemalloc_swap1.pdf} \caption{An exchange operation. Assignments are drawn with thick black segments as perfect matchings.} \label{fig:itemalloc_swap1} \end{figure} Our problem is defined as follows. An instance is specified by a $6$-tuple $\ensuremath{\mathcal{I}} = (N, M, \ensuremath\mathcal{N}, G, a, b)$, where $a$ and $b$ are assignments. The goal is to determine whether there exists a sequence $a_0, a_1, \dots, a_{\ell}$ of assignments such that $a_{t-1} \to a_t$ for every integer $t \in \{1,2,\ldots,\ell\}$, $a_0 = a$, and $a_{\ell} = b$. In this case, we say that $a$ can be \emph{reconfigured} to $b$, or $b$ is \emph{reachable} from $a$. Observe that $a_0^{-1}(j), a_1^{-1}(j), \dots , a_\ell^{-1}(j)$ are in the same connected component of $G[N_j]$, where $G[N_j]$ is the subgraph of $G$ induced by $N_j$. Thus, when we consider the reachability of the assignments, we may assume that $G[N_j]$ is connected for every $j \in M$ without loss of generality. For the family $\ensuremath\mathcal{N}$, a non-empty subset $X \subseteq M$ of items is \emph{stable} if $|X| = \left|N_X\right|$. Recall that $N_X = \bigcup_{j \in X}N_j$. A stable subset $X \subseteq M$ is \emph{proper} if $\emptyset \not= X \subsetneq M$. \section{Trees: A Characterization} In this section, we consider the case when $G$ is a tree. We give a sufficient condition for the reachability of the assignments, which is essential for designing a polynomial-time algorithm in Section~\ref{sec:tree:alg}. As described in the previous section, it suffices to deal with the case when $G[N_j]$ is connected for every $j \in M$. \begin{theorem}\label{thm:chartree} Suppose that $G$ is a tree and $G[N_j]$ is connected for every $j \in M$.
If there exists no proper stable subset of items in $M$, then every assignment can be reconfigured to any other assignment. \end{theorem} We prove the theorem by induction on $|N|$. When $|N|=1$, the claim is obvious. Consider an instance $(N, M, \ensuremath\mathcal{N}, G, a, b)$ with $|N| \ge 2$. Assume that there exists no proper stable subset of items in $M$, i.e., $|N_X| \ge |X|+1$ for any nonempty subset $X \subsetneq M$. We consider the following two cases separately: \begin{enumerate} \item There exists a subset $X \subseteq M$ such that $N_X \not= N$ and $|N_X| = |X|+1$. \item For any nonempty subset $X \subseteq M$, we have that $N_X=N$ or $|N_X| \ge |X|+2$. \end{enumerate} \subsection{Case 1} Suppose that there exists a subset $X \subseteq M$ such that $N_X \not= N$ and $|N_X| = |X|+1$. Among such sets, let $X$ be an inclusionwise minimal one. Note that $X \not= M$. \begin{lemma} \label{clm:01} $G[N_X]$ is connected. \end{lemma} \begin{proof} Assume to the contrary that $G[N_X]$ is not connected. Then, there exists a partition $X_1, \cdots , X_t$ of $X$ with $t \ge 2$ such that $G[N_{X_1}], \dots , G[N_{X_t}]$ are distinct connected components of $G[N_X]$. Since there exists no proper stable subset, we obtain $|N_{X_i}| > |X_i|$ for $i=1, \dots , t$. Hence, $|N_X| = \sum |N_{X_i}| \ge \sum (|X_i|+1) \ge |X| + t > |X|+1$, which is a contradiction. \end{proof} We denote $R := N_X$ to simplify the notation. The idea is to consider the inside of $G[R]$ and the graph obtained from $G$ by shrinking $R$, separately. Since $|R| = |X|+1$, we observe the following. \begin{observation} \label{obs:01} For any assignment $c \colon N \to M$, there exists an item $j \in M \setminus X$ such that $c(R) = X \cup \{j\}$. \end{observation} For an item $j \in M \setminus X$, a bijection $c' \colon R \to X \cup \{j\}$ is called an {\em assignment in $R$ using $j$} if $c'(i) \in M_i$ for any $i \in R$. 
If $j$ is clear from the context, it is simply called an {\em assignment in $R$}. \begin{lemma} \label{clm:02} Let $j$ be an item in $M \setminus X$ and let $i$ be an agent in $N_j \cap R$. Then, there exists an assignment $c'$ in $R$ such that $c'(i) = j$. \end{lemma} \begin{proof} It suffices to show the existence of an appropriate bijection from $R \setminus \{i\}$ to $X$. For any nonempty subset $S \subseteq X$, we obtain $|N_S| \ge |S|+1$ as there exists no proper stable set. This shows that $|S| \le |N_S \setminus \{ i \}|$ holds for all $S \subseteq X$. Therefore, a desired assignment $c'$ exists by Hall's marriage theorem. \end{proof} \begin{lemma} \label{clm:03} Let $j$ be an item in $M \setminus X$. Define $N' := R$, $M' := X \cup \{j\}$, and $\ensuremath\mathcal{N}' := (N_{j'} \cap R \mid j' \in X \cup \{j\})$. If $|N_j \cap R| \ge 2$, then $(N', M', \ensuremath\mathcal{N}', G[R], a', b')$ is a yes-instance (i.e., $a'$ can be reconfigured to $b'$) for any assignments $a'$ and $b'$ in $R$. \end{lemma} \begin{proof} We first show that $|N'_Y| \ge |Y|+1$ for any nonempty subset $Y \subsetneq M'$, where $N'_Y := N_Y \cap R$, by the following case analysis. \begin{itemize} \item Suppose that $j \not\in Y$. In this case, $|N'_Y| = |N_Y| \ge |Y|+1$ holds as $M$ has no proper stable subset. \item Suppose that $Y = X' \cup \{j\}$ holds for some nonempty subset $X' \subsetneq X$. Since $|N_{X'}| \ge |X'| + 2$ by the minimality of $X$, we obtain $|N'_Y| = |N_Y \cap R| \ge |N_{X'} \cap R| = |N_{X'}| \ge |X'| + 2 \ge |Y|+1$. \item Suppose that $Y = \{j\}$. In this case, $|N'_Y| = |N_j \cap R| \ge 2 = |Y|+1$ by the assumption. \end{itemize} Therefore, we obtain $|N'_Y| \ge |Y|+1$ for each case. We also see that $G[N_{j'} \cap R]$ is connected for each $j' \in X \cup \{j\}$, because $G[N_{j'}]$ and $G[R]$ are connected (see Lemma~\ref{clm:01}) and $G$ is a tree. 
Since $|N'| < |N|$, by applying the induction hypothesis, we see that $(N', M', \ensuremath\mathcal{N}', G[R], a', b')$ is a yes-instance. \end{proof} By using these lemmas, we have the following. \begin{lemma} \label{clm:04} Let $j$ be an item in $M \setminus X$. Define $N' := R$, $M' := X \cup \{j\}$, and $\ensuremath\mathcal{N}' := (N_{j'} \cap R \mid j' \in X \cup \{j\})$. Let $i_1, i_2 \in N_j \cap R$ be agents and let $a'$ be an assignment in $R$ such that $a'(i_1) = j$. Then, there exists an assignment $b'$ in $R$ such that $b'(i_2) = j$ and $(N', M', \ensuremath\mathcal{N}', G[R], a', b')$ is a yes-instance (i.e., $a'$ can be reconfigured to $b'$). \end{lemma} \begin{proof} If $i_1 = i_2$, then $b'=a'$ satisfies the condition. Otherwise, since $|N_j \cap R| \ge |\{i_1, i_2\}| = 2$, the lemma holds by Lemmas~\ref{clm:02} and~\ref{clm:03}. \end{proof} The following lemma shows that any assignment can be reconfigured to an assignment for which we can apply Lemma~\ref{clm:03} in $G[R]$. \begin{lemma} \label{clm:05} Let $c \colon N \to M$ be an assignment. Then, there exist an assignment $c^*\colon N \to M$ and an item $j^* \in M \setminus X$ such that $c^*(R) = X \cup \{j^*\}$, $|N_{j^*} \cap R| \ge 2$, and $c$ can be reconfigured to $c^*$. \end{lemma} \begin{proof} By Observation~\ref{obs:01}, there exists a unique vertex $q$ in $R$ such that $c(q) \in M \setminus X$. Let $Q \subseteq N$ be the vertex set of the connected component of $G - E(R)$ containing $q$, where $E(R)$ is the set of edges with both endpoints in $R$. Since $G$ is a tree, we obtain the following: \begin{enumerate}[(C1)] \item $q$ is a cut vertex of $G$ separating $Q\setminus \{q\}$ and $N \setminus Q$; \item Any vertex in $N \setminus Q$ that is adjacent to $q$ is contained in $R$. \end{enumerate} Define $Y \subseteq c(Q)$ as an inclusionwise minimal nonempty set of items such that $|N_Y \cap Q| = |Y|$. Note that such $Y$ exists, because $Y = c(Q)$ satisfies that $|N_Y \cap Q| = |Y|$. 
We observe a few properties of $Y$. First, $G[N_Y \cap Q]$ is connected by the minimality of $Y$. Second, by $Y \subseteq c(Q)$ and $|N_Y \cap Q| = |Y|$, it holds that $c^{-1}(Y) = N_Y \cap Q$. Third, since $Y$ is not a proper stable subset, we obtain $|N_Y| > |Y| = |N_Y \cap Q|$, and hence there exists a vertex in $N_Y \setminus Q$. Then, there exists an item $j^* \in Y$ with $N_{j^*} \setminus Q \not= \emptyset$. We also see that $N_{j^*} \cap Q \not= \emptyset$ as $c^{-1}(j^*) \in Q$. Since $G[N_{j^*}]$ is connected and $N_{j^*}$ intersects both $Q$ and $N \setminus Q$, (C1) shows that $N_{j^*}$ contains $q$. Furthermore, $N_{j^*}$ contains a vertex $q' \in N \setminus Q$ that is adjacent to $q$. Since $q' \in R$ by (C2), we obtain $|N_{j^*} \cap R| \ge |\{q, q'\}| = 2$. We next claim that there exists a bijection $c^* \colon c^{-1}(Y) \to Y$ such that $c^*(i) \in M_i$ for $i \in c^{-1}(Y)$ and $c^*(q) = j^*$. For any nonempty subset $S \subseteq Y \setminus \{j^*\}$, we obtain $|N_S \cap c^{-1}(Y)| = |N_S \cap Q| \ge |S|+1$ by the minimality of $Y$. This shows that $|S| \le |(N_S \cap c^{-1}(Y)) \setminus \{ q \}|$ holds for all $S \subseteq Y \setminus \{j^*\}$. Therefore, a desired bijection $c^*$ exists by Hall's marriage theorem. Note that $c^*$ can be naturally extended to a bijection from $N$ to $M$ by defining $c^*(i) = c(i)$ for $i \in N \setminus c^{-1}(Y)$. Then, it holds that $c^*(R) = X \cup \{j^*\}$. Finally, we show that $c$ can be reconfigured to $c^*$. To see this, it suffices to consider $G[c^{-1}(Y)]$. For any nonempty subset $S \subsetneq Y$, we obtain $|N_S \cap c^{-1}(Y)| = |N_S \cap Q| \ge |S|+1$ by the minimality of $Y$. This means that there is no proper stable subset if we restrict the instance to $G[c^{-1}(Y)]$. We also see that $G[N_{j'} \cap c^{-1}(Y)]$ is connected for each $j' \in Y$, because $G[N_{j'}]$ and $G[c^{-1}(Y)]=G[N_Y \cap Q]$ are connected and $G$ is a tree. 
Therefore, by the induction hypothesis, any pair of assignments in $G[c^{-1}(Y)]$ can be reconfigured to each other. This shows that $c$ can be reconfigured to $c^*$. \end{proof} By applying Lemma~\ref{clm:05} in which $c=b$, there exist an assignment $b^*\colon N \to M$ and an item $j^* \in M \setminus X$ such that $b^*(R) = X \cup \{j^*\}$, $|N_{j^*} \cap R| \ge 2$, and $b$ can be reconfigured to $b^*$. Conversely, it is obvious that $b^*$ can be reconfigured to $b$. Let $G^\circ$ be the graph obtained from $G$ by shrinking $R$ to a single vertex $r$, and let $N^\circ$ be its vertex set, i.e., $N^\circ = (N \setminus R) \cup \{r\}$. Let $M^\circ = M \setminus X$. For $j \in M^\circ$, define $N^\circ_j$ as follows: $$ N^\circ_j= \begin{cases} N_j \cup \{r\}& \mbox{if $N_j \cap R \not= \emptyset$,} \\ N_j & \mbox{otherwise.} \end{cases} $$ We can easily see that $G^\circ[N^\circ_j]$ is connected for each $j \in M^\circ$ as $G[N_j]$ is connected. For assignments $a$ and $b^*$ in $G$, let $a^\circ$ and $b^\circ$ be the corresponding assignments in $G^\circ$, which are naturally defined by Observation~\ref{obs:01}. \begin{lemma} \label{clm:06} $(N^\circ, M^\circ, \ensuremath\mathcal{N}^\circ, G^\circ, a^\circ, b^\circ)$ is a yes-instance. \end{lemma} \begin{proof} We show that this instance has no proper stable subset of items. Assume to the contrary that $Y \subsetneq M^\circ$ is a proper stable subset, that is, $|N^\circ_Y| = |Y|$. If $r \not\in N^\circ_Y$, then $|N_Y| = |N^\circ_Y| = |Y|$, and hence $Y$ is a proper stable subset in the original instance, which is a contradiction. Otherwise, $|N_{Y\cup X}| = |(N^\circ_Y \setminus \{r\}) \cup R| = |N^\circ_Y| - 1 + |R| = |Y| + |X|$, and hence $Y \cup X$ is a proper stable subset in the original instance, which is a contradiction. 
Therefore, $(N^\circ, M^\circ, \ensuremath\mathcal{N}^\circ, G^\circ, a^\circ, b^\circ)$ has no proper stable subset of items, which shows that it is a yes-instance by the induction hypothesis. \end{proof} We next show that a reconfiguration in $G^\circ$ can be converted to one in $G$ in the following sense. \begin{lemma} \label{clm:10} Let $c^\circ_1, c^\circ_2 \colon N^\circ \to M^\circ$ be assignments in $G^\circ$ such that $c^\circ_1 \rightarrow c^\circ_2$, and let $c_1 \colon N\to M$ be an assignment in $G$ that corresponds to $c^\circ_1$. Then, there exists an assignment $c_2 \colon N\to M$ in $G$ such that $c_2$ corresponds to $c^\circ_2$ and $c_1$ can be reconfigured to $c_2$ in $G$. \end{lemma} \begin{proof} Suppose that $c^\circ_1(i) = c^\circ_2(i^{\prime})$, $c^\circ_1(i^{\prime}) = c^\circ_2(i)$, and $\{i,i^{\prime}\} \in E(G^\circ)$. We first consider the case when $r \not\in \{i,i^{\prime}\}$. Define $c_2 \colon N\to M$ as $c_2(i) = c_1(i^{\prime})$, $c_2(i^{\prime}) = c_1(i)$, and $c_2(k) = c_1(k)$ for $k \in N \setminus \{i,i^{\prime}\}$. Then, $c_2$ corresponds to $c^\circ_2$ and $c_1 \rightarrow c_2$. We next consider the case when $r \in \{i,i^{\prime}\}$. By symmetry, we may assume that $r=i'$. Let $j = c^\circ_1(r)$ and let $q \in R$ be the unique vertex that is adjacent to $i$ in $G$. Since $c^\circ_1(r) = c^\circ_2(i) = j$ implies that $N_j \cap R \not= \emptyset$ and $i \in N_j$, it holds that $q \in N_j$. By using Lemma~\ref{clm:04} in which $a'$ is the restriction of $c_1$ to $R$ and $i_2=q$, we see that there exists an assignment $c_3\colon N\to M$ in $G$ such that $c_3(q)=j$, $c_3(k) = c_1(k)$ for $k \in N \setminus R$, and $c_1$ can be reconfigured to $c_3$. Define $c_2 \colon N\to M$ as $c_2(i) = c_3(q)$, $c_2(q) = c_3(i)$, and $c_2(k) = c_3(k)$ for $k \in N \setminus \{i,q\}$. Then, $c_2$ corresponds to $c^\circ_2$ and $c_3 \rightarrow c_2$, which shows that $c_2$ satisfies the conditions in the lemma. 
\end{proof} We are now ready to show that $(N, M, \ensuremath\mathcal{N}, G, a, b)$ is a yes-instance. Since Lemma~\ref{clm:06} shows that $(N^\circ, M^\circ, \ensuremath\mathcal{N}^\circ, G^\circ, a^\circ, b^\circ)$ is a yes-instance, there exists a reconfiguration sequence from $a^\circ$ to $b^\circ$. By using Lemma~\ref{clm:10}, this sequence can be converted to a reconfiguration sequence from $a$ to some assignment $b'$ in $G$ such that $b'(i) = b^\circ (i) = b^*(i)$ for $i \in N \setminus R$ and $b'(R) = X \cup \{b^\circ (r)\} = X \cup \{j^*\}$. Furthermore, since $|N_{j^*} \cap R| \ge 2$, Lemma~\ref{clm:03} shows that $b'$ can be reconfigured to $b^*$. Therefore, there exists a reconfiguration sequence $a \to \dots \to b' \to \dots \to b^* \to \dots \to b$, and hence $(N, M, \ensuremath\mathcal{N}, G, a, b)$ is a yes-instance. \subsection{Case 2} In this subsection, we consider the case when $N_X=N$ or $|N_X| \ge |X|+2$ holds for any nonempty subset $X \subseteq M$. We begin with the following lemmas. \begin{lemma} \label{clm:08} If $a(\ell) = b(\ell)$ for some leaf $\ell$, then $a$ can be reconfigured to $b$. \end{lemma} \begin{proof} Consider the instance $(N', M', \ensuremath\mathcal{N}', G', a, b)$ obtained from $(N, M, \ensuremath\mathcal{N}, G, a, b)$ by removing $\ell$ and $a(\ell)$. That is, $G' = G - \ell$, $N' = N \setminus \{\ell \}$, $M' = M \setminus \{a(\ell)\}$, $N'_j = N_j \setminus \{\ell\}$ for $j \in M'$, and the domains of $a$ and $b$ are restricted to $N'$. Then, for any nonempty subset $Y \subsetneq M'$, we obtain $|N'_Y| = |N_Y \setminus \{\ell\}| \ge |N_Y| - 1 \ge (|Y| + 2) - 1 \ge |Y| + 1$, where we note that $ |N_Y| \ge \min (|N|, |Y| + 2) = |Y| + 2$ by the assumption in this subsection. Therefore, the obtained instance has no proper stable subset, and hence the restriction of $a$ can be reconfigured to that of $b$ in $G'$ by the induction hypothesis. Since $a(\ell) = b(\ell)$, this shows that $a$ can be reconfigured to $b$ in $G$. 
\end{proof} \begin{lemma} \label{clm:11} If there exist distinct leaves $\ell$ and $\ell'$ such that $a(\ell') \not= b(\ell)$, then $a$ can be reconfigured to $b$. \end{lemma} \begin{proof} We first show that there exists an assignment $c\colon N \to M$ such that $c(\ell') = a(\ell')$ and $c(\ell) = b(\ell)$. For any nonempty subset $S \subseteq M \setminus \{a(\ell'), b(\ell)\}$, we obtain $|N_S \setminus \{ \ell, \ell'\}| \ge |N_S| - 2 \ge (|S|+2) - 2 = |S|$, where we note that $|N_S| \ge \min (|N|, |S| + 2) = |S| + 2$ by the assumption in this subsection. Therefore, a desired assignment $c$ exists by Hall's marriage theorem. Since $a(\ell')=c(\ell')$, Lemma~\ref{clm:08} shows that $a$ can be reconfigured to $c$. Similarly, since $c(\ell) = b(\ell)$, $c$ can be reconfigured to $b$ by Lemma~\ref{clm:08} again. Therefore, $a$ can be reconfigured to $b$, which completes the proof. \end{proof} We are now ready to show that $a$ can be reconfigured to $b$. If $G$ has at least three leaves, then there exist distinct leaves $\ell$ and $\ell'$ such that $a(\ell') \not= b(\ell)$, and hence $a$ can be reconfigured to $b$ by Lemma~\ref{clm:11}. Thus, the remaining case is when $G$ is a path with exactly two leaves $\ell$ and $\ell'$. We may assume that $a(\ell) = b(\ell')$ and $a(\ell')=b(\ell)$, since otherwise $a$ can be reconfigured to $b$ by Lemma~\ref{clm:11}. We may also assume that $G$ has at least three vertices, since otherwise the claim is obvious. Let $q$ be the unique vertex adjacent to $\ell$. We now show that there exists an assignment $c\colon N \to M$ such that $c(\ell) = a(\ell)$ and $c(q) = a(\ell')$. Note that $q \in N_{a(\ell')}$, because $a(\ell')=b(\ell)$ and $G$ is a path. For any nonempty subset $S \subseteq M \setminus \{a(\ell), a(\ell')\}$, we obtain $|N_S \setminus \{ q, \ell\}| \ge |N_S| - 2 \ge (|S|+2) - 2 = |S|$ by the assumption in this subsection. Therefore, a desired assignment $c$ exists by Hall's marriage theorem.
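The existence arguments via Hall's marriage theorem used throughout this section are constructive: an augmenting-path computation of a maximum bipartite matching either produces the desired assignment or exhibits a violated Hall condition. The following self-contained sketch illustrates this (the function and variable names are ours, for illustration only):

```python
def find_assignment(agents, acceptable):
    """Augmenting-path search for a perfect matching between agents and
    items; acceptable[i] lists the items agent i may receive.  Returns a
    dict mapping each agent to an item, or None if Hall's condition fails."""
    holder = {}                        # item -> agent currently holding it
    def augment(i, seen):
        for j in acceptable[i]:
            if j in seen:
                continue
            seen.add(j)
            # j is free, or its holder can be rerouted to another item
            if j not in holder or augment(holder[j], seen):
                holder[j] = i
                return True
        return False
    for i in agents:
        if not augment(i, set()):
            return None                # Hall's condition is violated
    return {i: j for j, i in holder.items()}
```

Fixing the values of selected agents in advance, as in the constructions of $c$ above, amounts to removing those agents and items before running the search.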
Since $a(\ell) = c(\ell)$, $a$ can be reconfigured to $c$ by Lemma~\ref{clm:08}. Furthermore, since $c(\ell') \not= c(q) = a(\ell') = b(\ell)$, $c$ can be reconfigured to $b$ by Lemma~\ref{clm:11}. By combining them, we have that $a$ can be reconfigured to $b$, which completes the proof. \section{Trees: Algorithm} \label{sec:tree:alg} Theorem \ref{thm:chartree} leads to the following polynomial-time algorithm to determine whether two given assignments can be reconfigured to each other. \begin{theorem} \label{thm:tree-algo} We can determine in polynomial time whether for a given instance $(N, M, \ensuremath\mathcal{N}, G, a, b)$, $a$ can be reconfigured to $b$, when $G$ is a tree. \end{theorem} Recall that we may assume that $G[N_j]$ is connected for every $j \in M$. To prove Theorem \ref{thm:tree-algo}, we first give a polynomial-time algorithm to find a proper stable subset of items, if it exists. \begin{lemma} \label{lem:propstab-algo} We can determine in polynomial time whether for a given instance $(N, M, \ensuremath\mathcal{N}, G, a, b)$, there exists a proper stable subset of items and find one with minimum size if it exists, when $G$ is a tree. \end{lemma} Below we present a proof for Lemma~\ref{lem:propstab-algo} using submodular functions. Before the proof, we summarize definitions and properties of submodular functions that we use in the proof. For a finite set $\Xi$, the \emph{power set} of $\Xi$ is the family of all subsets of $\Xi$ and is denoted by $2^\Xi$. A function $f\colon 2^\Xi \to \mathbb{R}$ is \emph{submodular} if $f(X)+f(Y) \geq f(X\cup Y)+f(X\cap Y)$ for all $X, Y \subseteq \Xi$. Submodular function minimization is the problem of finding a set $X^* \subseteq \Xi$ such that $f(X^*) \leq f(X)$ for all $X \subseteq \Xi$; such a set $X^*$ is a \emph{minimizer} of $f$. Here, the submodular function $f$ is not given explicitly, but is given via oracle access.
Namely, we assume that we may retrieve the value $f(X)$ for each set $X \subseteq \Xi$ in polynomial time. A minimizer of a submodular function $f$ does not have to be unique. If $X^*$ and $Y^*$ are minimizers of $f$, then $X^* \cup Y^*$ and $X^* \cap Y^*$ are also minimizers of $f$, which can easily be seen from the submodularity of $f$. This implies that there exists a unique minimum-size minimizer of any submodular function. A minimum-size minimizer of a submodular function (given as oracle access) can be obtained in polynomial time~\cite{murota-dcabook}. \begin{proof}[Proof of Lemma~\ref{lem:propstab-algo}] For each item $j \in M$, we define the function $f_j\colon 2^{M\setminus \{j\}} \to \mathbb{R}$ as \[ f_j(X) = |N_{X\cup \{j\}}| - |X\cup \{j\}| \] for all $X\subseteq M\setminus \{j\}$. Since $H$ has the assignment $a$, $f_j(X) \geq 0$ for all $X \subseteq M \setminus \{j\}$ by Hall's marriage theorem. Thus, since $f_j(M\setminus \{j\}) = 0$, the minimum value of $f_j$ is zero. Notice that $f_j(X) =0$ if and only if $X\cup \{j\}$ is stable. It is easy to see that the function $f_j$ is submodular, and for any submodular function, a unique minimum-size minimizer can be found in polynomial time as noted above. Let $X_j$ be the unique minimum-size minimizer of $f_j$ and let $X^*_j = X_j \cup \{j\}$. Then, $X^*_j$ is the unique minimum-size stable subset containing $j$. Let $j^* \in M$ be an item that minimizes $|X^*_{j^*}|$. Since $X^*_j$ is the unique minimum-size stable subset containing $j$ for each $j \in M$, $X^*_{j^*}$ is the minimum-size nonempty stable subset of items. Therefore, a proper stable subset exists if and only if $X^*_{j^*} \not= M$, which can be determined in polynomial time by computing $X^*_{j^*}$. Furthermore, if $X^*_{j^*} \not= M$, then $X^*_{j^*}$ is a proper stable subset with minimum size. 
\end{proof} For our algorithm, we first decide whether, for a given instance $(N, M, \ensuremath\mathcal{N}, G, a, b)$, there exists a proper stable subset. If none exists, then Theorem \ref{thm:chartree} implies that $a$ can be reconfigured to $b$, and we are done. Assume that there exists a proper stable subset of items for the instance. Let $X$ be one with minimum size. We first observe that, by the minimality, $G[N_X]$ is connected. To see this, assume to the contrary that $G[N_X]$ is not connected. Let $(Y_1, \dots , Y_p)$ be the partition of $X$ such that $G[N_{Y_t}]$ forms a connected component of $G[N_X]$ for each $t \in \{1, \dots , p\}$, where $p \ge 2$. Note that such a partition exists, because $G[N_{j'}]$ is connected for all $j' \in X$. Since $X$ is a minimum-size proper stable set, it holds that $|N_{Y_t}|>|Y_t|$ for all $t \in \{1, \dots , p\}$. This implies that $|N_X| - |X| = \sum_{t=1}^p (|N_{Y_t}| - |Y_t|) > 0$, which is a contradiction. We then apply our algorithm recursively to the instances obtained by $G[N_X]$ and $G[N\setminus N_X]$, respectively. Here, $G[N\setminus N_X]$ consists of several connected components, whose vertex sets are denoted by $N^1,\dots, N^\ell$, for some $\ell \geq 1$, and $G[N\setminus N_X]$ yields $\ell$ instances. The following lemma is crucial. For $i=1,\dots, \ell$, define $M^i = a(N^i)$. \begin{lemma} \label{lem:propstab-impossible} Let $(N, M, \ensuremath\mathcal{N}, G, a, b)$ be an instance such that $G$ is a tree and let $X$ be a proper stable subset of items. If there exists an item $j \in M^i$ such that $b^{-1}(j) \not\in N^i$, then $a$ cannot be reconfigured to $b$. \end{lemma} \begin{proof} For simplicity, we may assume that $j$ is in $M^1$ and $b^{-1}(j) \not\in N^1$. Since $G$ is a tree, there exists a unique edge $(i_1, i'_1)$ between $N^1$ and $N_X$, where $i_1\in N^1$ and $i'_1\in N_X$. Suppose that $a$ can be reconfigured to $b$ by a reconfiguration sequence $a=a_0 \to a_1 \to \dots \to a_\ell=b$. 
Then, in the reconfiguration sequence, there exists an index $t$ such that $a^{-1}_{t-1}(j) = i_1$ and $a^{-1}_{t}(j) = i'_1$. That is, $j=a_{t-1}(i_1)=a_{t}(i'_1)$. This means that there exists an item $j^\prime \in M$ such that $j^\prime = a_{t-1}(i'_1) = a_{t}(i_1)$, i.e., from the assignment $a_{t-1}$ to $a_{t}$, the agents $i_1$ and $i'_1$ exchange the items $j$ and $j^\prime$. Since $X$ is stable, we see that $a^{-1}_{t-1}(X)=N_X$, and hence $j^\prime \in X$ holds. Since $j^\prime = a_{t}(i_1)$, $i_1\in N_{j^\prime} \subseteq N_X$. This contradicts that $i_1$ is in $N^1$. \end{proof} Armed with Lemmas \ref{lem:propstab-algo} and \ref{lem:propstab-impossible}, we are ready to describe our algorithm. \begin{enumerate}[Step 1.] \item Decide whether a proper stable subset exists. If there is none, then we answer Yes. Otherwise, let $X$ be a proper stable subset with minimum size, and proceed to Step~2. \item The subgraph $G[N\setminus N_X]$ consists of several connected components, whose vertex sets are denoted by $N^1,\dots, N^\ell$, for some $\ell \geq 1$. For $i=1,\dots, \ell$, define $M^i = a(N^i)$. Check whether there exists an item $j \in M^i$ such that $b^{-1}(j) \not\in N^i$. If there exists such an item, then we answer No. Otherwise, proceed to Step 3. \item We construct $\ell+1$ smaller instances as follows. The first instance is $(N_X, X, \ensuremath\mathcal{N}_X, G[N_X], a_X, b_X)$, where $\ensuremath\mathcal{N}_X = (N_j \mid j \in X)$ and $a_X, b_X\colon N_X \to X$ are the restrictions of $a, b$ to $N_X$, respectively. The other instances are $(N^i, M^i, \ensuremath\mathcal{N}^i, G[N^i], a_i, b_i)$ for $i=1,\dots, \ell$, where $\ensuremath\mathcal{N}^i = (N_j \cap N^i \mid j \in M^i)$ and $a_i, b_i\colon N^i \to M^i$ are the restrictions of $a, b$ to $N^i$, respectively. By the assumption of Step 3, those instances are well-defined. Those $\ell + 1$ instances are solved recursively.
If the answers to the smaller instances are all Yes, then the answer to the whole instance is also Yes. Otherwise, the answer to the whole instance is No. \end{enumerate} The correctness is immediate from Theorem \ref{thm:chartree} and Lemma \ref{lem:propstab-impossible}, and the running time is polynomial by Lemma \ref{lem:propstab-algo}. Thus, the proof of Theorem \ref{thm:tree-algo} is completed. \section[Complete Graphs: PSPACE-Completeness]{Complete Graphs: $\mathsf{PSPACE}$-Completeness} \label{setion:pspace_complete} In this section, we prove that our problem is $\mathsf{PSPACE}$-complete even when $G$ is a complete graph. \begin{theorem} The problem is $\mathsf{PSPACE}$-complete even if $G$ is a complete graph. \end{theorem} \begin{proof} The membership in $\mathsf{PSPACE}$ is immediate since each assignment can be encoded in polynomial space, and each swap can be performed in polynomial space (even in polynomial time). Thus, we concentrate on $\mathsf{PSPACE}$-hardness. The following ``bipartite perfect matching reconfiguration problem'' is known to be $\mathsf{PSPACE}$-complete~\cite{BonamyBHIKMMW19} . We are given a bipartite graph $H^{\prime}$ and two perfect matchings $M_1, M_2$ of $H^{\prime}$, and we are asked to decide whether $M_1$ can be transformed to $M_2$ by a sequence of exchanges of two matching edges with two non-matching edges such that those four edges form a cycle of $H^{\prime}$. From an instance $(H^{\prime}, M_1, M_2)$ of the bipartite perfect matching reconfiguration problem, we construct an instance $(N, M, \ensuremath\mathcal{N}, G, a, b)$ of our problem where $G$ is a complete graph. Denote two color classes (partite sets) of $H^{\prime}$ by $A$ and $B$. Then, let $N=A$ and $M=B$. Since $M_1, M_2$ are perfect matchings of $H^{\prime}$, it holds that $|N|=|A|=|B|=|M|$. For each $j \in B=M$, we define $N_j$ as the set of vertices in $A=N$ that are adjacent to $j$ in $H^{\prime}$. 
Then, $\ensuremath\mathcal{N}$ is the family $(N_j \mid j \in M)$. The assignments $a,b$ are defined by $M_1, M_2$ as $a(i)=j$ if and only if $\{i,j\} \in M_1$ and $b(i)=j$ if and only if $\{i,j\} \in M_2$. The graph $G$ is a complete graph on $N$. This finishes the construction of the instance. We emphasize that $G[N_j]$ is indeed connected for every $j \in M$ since $G$ is a complete graph. Observe that an exchange operation in the bipartite perfect matching reconfiguration problem precisely corresponds to an exchange operation in our problem. Thus, the reduction is sound and complete, and the proof is finished. \end{proof} Note that the bipartite perfect matching reconfiguration problem is $\mathsf{PSPACE}$-complete even when the input graph has a bounded bandwidth and maximum degree five~\cite{BonamyBHIKMMW19}. For strict preferences, the problem in complete graphs is $\mathsf{NP}$-complete~\cite{MB20}. Thus, we encounter a huge difference between the complexity status for dichotomous preferences ($\mathsf{PSPACE}$-complete) and strict preferences ($\mathsf{NP}$-complete). This is because with strict preferences each exchange strictly improves the utility of the two agents involved in the exchange, and thus the length of a reconfiguration sequence is always bounded by a polynomial of the number of agents. On the other hand, with dichotomous preferences, a reconfiguration sequence can be exponentially long. \section{Concluding Remarks} Further studies are required for the following research directions. The complexity status for other types of graphs $G$ is not known. The shortest length of a reconfiguration sequence is not known even for trees. In particular, when there is a reconfiguration sequence, we do not know whether the shortest length is bounded by a polynomial in $|N|$. We may also study other types of preferences. \bibliographystyle{plain
\section{Methods} \label{sec:methods} In \autoref{fig:architecture}, the model architecture is visualized. After an initial downsampling, the RGB input image is fed into the Krizhevsky network. The Krizhevsky architecture consists of stacked convolutions, each one followed by a rectifying nonlinearity and optional maxpooling and response normalization. The final three fully connected layers of the Krizhevsky network were removed as we are only interested in spatially located features. Each layer (convolution, rectifier, pooling and normalization) results in a single image of response for each filter in the layer. To predict fixations, we first select one or multiple layers from the network. We rescale all the response images that we want to include in our model to the size of the largest layer of the network, resulting in a list of up to 3712 responses for each location in an image. Each of these responses is then individually normalized to have unit standard deviation on the full dataset. After this preprocessing, the features are fed into the following model. At each image location, our saliency model linearly combines the responses $r_k(x, y)$ using weights $w_k$. The resulting image is then convolved with a Gaussian kernel whose width is controlled by $\sigma$, yielding the saliency map \[ s(x, y) = \sum_k w_k r_k(x, y) * G_\sigma. \] \noindent It is well known that fixation locations are strongly biased towards the center of an image \citep{Tatler2007Centerbias}. To account for this center bias, the saliency prediction is linearly combined with a fixed center bias prediction $c(x, y)$: \[ o(x, y) = \alpha c(x, y) + s(x, y) \] \noindent To predict fixation probabilities, this output is finally fed into a softmax, yielding a probability distribution over the image: \[ p(x, y) = \frac{\exp\left(o(x, y)\right)}{\sum_{x,y} \exp\left(o(x,y)\right)} \] For generalization, $\ell_1$-regularization on the weights is used to encourage sparsity.
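The model above is compact enough to write out directly. The following numpy/scipy sketch (function and parameter names are our own, not those of the published implementation) computes $\log p(x, y)$ from a stack of feature responses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_log_density(responses, w, sigma, center_bias, alpha):
    """Sketch of the model above: responses has shape (K, H, W),
    w has shape (K,).  Returns log p(x, y), a log-density normalized
    over all pixels of the image."""
    # s = (sum_k w_k r_k) * G_sigma : weighted sum, then Gaussian blur
    s = gaussian_filter(np.tensordot(w, responses, axes=1), sigma)
    o = alpha * center_bias + s        # o = alpha * c + s
    o = o - o.max()                    # shift for numerical stability
    return o - np.log(np.exp(o).sum()) # softmax over all pixel locations
```

All pixels share the same linear weights $w_k$; only $w$, $\alpha$, and $\sigma$ are fitted during training.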
For training fixations $(x_1, y_1), \dots, (x_N, y_N)$ this yields the cost function \[ c(\mu, \alpha, w) = -\frac{1}{N} \sum_{i=1}^N \log p(x_i, y_i) + \lambda \frac{\|w\|_1}{\|w\|_2} \] To quantify which layers help most in predicting the fixations and lead to least overfitting, we trained models on a variety of subsets of layers (see \autoref{sec:selecting_layers} and \autoref{fig:layer_restriction_results}). We checked the generalization performance of these models on the remaining 540 images from MIT1003 that have not been used in training. As performance measure we use shuffled area under the curve (shuffled AUC) here \citep{Tatler2005ROC}. In AUC, the saliency map is treated as a classifier score to separate fixations from ``nonfixations'': presented with two locations in the image, the classifier chooses the location with the higher saliency value as fixation. The AUC measures the classification performance of this classifier. The standard AUC uses a uniform nonfixation distribution, while in the case of shuffled AUC, fixations from other images are used as nonfixations. As shuffled AUC assumes that the saliency maps do not include the biases of the prior distribution \citep[see][]{Barthelme} we had to use a uniform center bias for this evaluation. \subsection{Implementation details} \label{sec:implementation_details} For training, we used roughly half of the dataset MIT1003 \citep{Judd2009Model}. By using only the images of the most common size of $1024\times 768$ pixels (resulting in 463 images), we were able to use the nonparametric estimate of the center bias described in \cite{Kuemmerer2014} (essentially a 2D histogram fitted using the fixations from all other images). Our implementation of the Krizhevsky network uses the architecture and trained filters as published by \cite{jia2014caffe} with the following modifications: the original architecture uses a fixed input size of $224\times 224$.
As we removed the fully connected layers, we do not need to restrict to a fixed input size but can feed arbitrary images into the network. Furthermore, we use convolutions of type \textit{full} (i.e., zero-padding the input) instead of \textit{valid}, which would result in convolution outputs that are smaller than the input. This modification is useful, because we need saliency predictions for every point in the image. Note that the caffe implementation of the Krizhevsky network differs slightly from the original architecture in \cite{krizhevsky2012}, as the pooling and the normalization layers have been switched. The subsampling factor for the initial downsampling of the images was set to 2. The sparsity parameter $\lambda$ was chosen using grid search and turned out to be $0.001$ in the final model. However, even setting it to much smaller values had very little effect on training and test performance (see \autoref{sec:regularizaton} for more details). All calculations of log-likelihoods, cost functions and gradients were done in theano \citep{bergstra+al:2010-scipy}. To minimize the cost function on the training set of fixations, the mini-batch based BFGS method as described in \cite{Sohl-Dickstein2013} was used. It combines the benefits of batch based methods with the advantage of second order methods, yielding high convergence rates with next to no hyperparameter tuning. To avoid overfitting to the subjects, leave-one-out cross-validation over the 15 subjects contained in the database was used. The code for our model including training and analysis will be published at \url{http://www.bethgelab.org/code/deepgaze/}. \section{Results} \label{sec:results} \subsection{Performance Results} \label{sec:performance_results} \begin{figure} \begin{center} \input{figures/results.pgf} \end{center} \caption{Performance of Deep Gaze I compared to a list of other influential models, expressed as the ratio of explained information (see text for details).
All models except for Deep Gaze I have been postprocessed to account for a pointwise nonlinearity, center bias and blurring (see \cite{Kuemmerer2014} for details). } \label{fig:loglikelihoods} \end{figure} We use an information-theoretic measure to evaluate our model: log-likelihood. Log-likelihood is a principled measure for probabilistic models and has numerous advantages. See \cite{Kuemmerer2014} for an extensive discussion. Log-likelihoods are much easier to understand when expressed as difference of log-likelihood relative to a baseline model. This \textit{information gain}\footnote{To be more precise, this value is an estimated expected information gain} expresses how much more efficient the model is in describing the fixations than the baseline model: if a model with an information gain of \SI{1}{\bit} per fixation is used to encode fixation data, it can save on average one bit per fixation compared to the baseline model. The information gain is even more intuitive when compared to the explainable information gain, i.e., the information gain of the real distribution compared to the baseline model. This comparison yields a ratio of explained information gain to explainable information gain which will be called ``explainable information gain explained'' or just ``information gain explained'' in the following. See \cite{Kuemmerer2014} for a more thorough explanation of this notion. The baseline model is a non-parametric model of the image-independent prior distribution $p(x,y)$, while the explainable information is estimated using a non-parametric model of the fixation distribution $p(x, y \mid I)$ for a given image $I$ (which we call the \textit{gold standard model}). The gold standard model is cross-validated between subjects and thus captures all the structure in the fixations that is purely due to the spatial structure of the image. See \cite{Kuemmerer2014} for details on the baseline model and the gold standard model.
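The quantity just described is a simple ratio of log-likelihood differences. A sketch (function name and numbers are illustrative, not from the paper):

```python
def information_gain_explained(ll_model, ll_baseline, ll_gold):
    """Fraction of the explainable information gain that a model captures.

    All arguments are average log-likelihoods per fixation (e.g. in bits):
    the model, the image-independent baseline (prior), and the
    cross-validated gold standard, respectively.
    """
    gain = ll_model - ll_baseline          # information gain of the model
    explainable = ll_gold - ll_baseline    # gain of the gold standard
    return gain / explainable

# toy numbers: model saves 0.56 bit/fixation, gold standard 1.0 bit/fixation
print(information_gain_explained(ll_model=0.56, ll_baseline=0.0, ll_gold=1.0))
```

A model matching the baseline scores 0, a model matching the gold standard scores 1, which is what makes the ratio-scale interpretation possible.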
By expressing the information gain of a model as a percentage of the possible information gain, we can assess how far we have come in describing the fixations. It is important to note that this interpretation is only possible due to the fact that information gain is on a ratio scale \citep{michell1997quantitative}: differences and ratios of information gains are meaningful -- as opposed to other measures like AUC. In \autoref{fig:loglikelihoods}, the percentage of information gain explained is plotted for our model in comparison to a range of influential saliency models, including the state-of-the-art models. Of the possible information gain, the best existing model (eDN) is able to explain only \SI{\percentIgBest}{\percent}. Deep Gaze I is able to increase this information gain to \SI{\percentIgDeepSal}{\percent}. \begin{figure}[ht!] \begin{center} \input{figures/mit_results.pgf} \end{center} \caption{Performance results on the MIT benchmark. \textbf{(a)}: Shuffled AUC performance of Deep Gaze I (green bar, 71.69\%) compared with all other models in the MIT benchmark. The x-axis is at the level of the center bias model. The three top performing models after Deep Gaze I are in order of decreasing performance: AWS (67.90\%, \cite{garcia2012relationship}), RARE2012 (66.54\%, \cite{Riche2013a}), and AIM (65.64\%, \cite{BruceTsotso2009Saliency}). \textbf{(b)} AUC performance of Deep Gaze I (green bar, 84.40\%) compared with all other models in the MIT benchmark that performed better than the center bias. The x-axis is at the level of the center bias model. The three top performing models after Deep Gaze I are in order of decreasing performance: BMS (82.57\%, \cite{zhang2013saliency}), Mixture of Saliency Models (82.09\%, Han and Satoh, 2014), and eDN (81.92\%, \cite{Vig2014}).
Notice that AUC and shuffled AUC use different definitions of saliency map: While AUC expects the saliency maps to model the center bias, shuffled AUC explicitly does not and penalizes models that do. Therefore, for the shuffled AUC performances of Deep Gaze I the saliency maps have been calculated with a uniform prior distribution, while for the AUC performances the saliency maps have been calculated with a nonparametric prior (see text for details) \protect\footnotemark[2]. Performances of other models from the MIT benchmark as of September 2014. } \label{fig:mit-benchmark} \end{figure} \begin{figure}[ht!] \begin{center} \input{figures/figure_layer_comparison_with_sROC.pgf} \end{center} \caption{Performance of Deep Gaze I when trained on different subsets of the Krizhevsky layers: \textbf{(a)}: Results for models that use layers from a given depth upwards. The left plot shows the percentage of explainable information gain explained on the images used in training for training subjects and test subjects (refer to \autoref{sec:performance_results} for an explanation of this measure). The dotted line indicates the performance of the model we used in the MIT Saliency Benchmark (which only used the output of the convolutions of layer 5). The right plot shows the shuffled AUC on the images used in training and on the remaining test images. Here, the models have been averaged over all test subjects and the saliency maps assume uniform center bias, as expected by shuffled AUC (see \autoref{sec:mit-benchmark} for details). The dotted line indicates the performance of the final model on the test images. \textbf{(b), (c), (d)}: Results for models that use layers up to a given depth (b), layers of a certain depth (c) and layers of a certain type (d). The plots are as in (a).} \label{fig:layer_restriction_results} \end{figure} \begin{figure}[h!] 
\begin{center} \input{figures/used_features.pgf} \end{center} \caption{Analysis of used features I: \textbf{(a)} Patches of maximum response: Each square of patches shows for a specific feature of the Krizhevsky architecture the nine patches that led to the highest response (resp.~smallest response, if the feature has a negative weight in the model). Each patch corresponds to exactly the part of the image that contributes to the response in the location of maximum response. The features used have been chosen by the absolute value of the weight that Deep Gaze I assigned to them. The numbers over the patches show $|w_k|/\max_k |w_k|$. } \label{fig:used_features} \end{figure} \begin{figure}[ht!] \begin{center} \input{figures/used_features_context.pgf} \end{center} \caption{Analysis of used features II: Details for some of the patches from \autoref{fig:used_features}. The four double columns (a) to (d) correspond to the first four features shown in \autoref{fig:used_features}. In each double column, the four rows correspond to the first four patches shown for this feature in \autoref{fig:used_features}. The left column of each double column shows the patches in the context of the full image, while the feature's response over the full image is shown in the right column. The position of the maximum is indicated by a dot. } \label{fig:used_features_context} \end{figure} \subsection{Results on MIT Saliency Benchmark} \label{sec:mit-benchmark} We submitted our model to the MIT Saliency Benchmark (\cite{mit-saliency-benchmark}). The benchmark evaluates saliency models on a dataset of 300 images and 40 subjects. The fixations are held back, in order to make training on them impossible. The MIT Saliency Benchmark evaluates models on a variety of metrics, including AUC with uniform nonfixation distribution and shuffled AUC (i.e. AUC with the center bias as nonfixation distribution). The problem with these metrics is that most of them use different definitions of saliency maps.
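Both AUC variants reduce to the same pairwise comparison between saliency values at fixated and non-fixated locations; only the choice of nonfixations differs (samples from a uniform distribution for AUC, fixations from other images for shuffled AUC). A minimal sketch:

```python
import numpy as np

def auc(saliency_at_fixations, saliency_at_nonfixations):
    """Probability that a random fixation outscores a random nonfixation
    (ties count one half); equal to the area under the ROC curve."""
    f = np.asarray(saliency_at_fixations, dtype=float)[:, None]
    n = np.asarray(saliency_at_nonfixations, dtype=float)[None, :]
    return (f > n).mean() + 0.5 * (f == n).mean()

fix = [0.9, 0.8, 0.4]       # saliency values at fixated locations
nonfix = [0.1, 0.5, 0.3]    # saliency values at nonfixation locations
print(auc(fix, nonfix))     # 8 of the 9 pairs are ranked correctly
```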
This holds especially for the two most used performance metrics: AUC and shuffled AUC. While AUC expects the saliency maps to model the center bias, shuffled AUC explicitly does not and penalizes models that do (see \cite{Barthelme} for details). As Deep Gaze I uses an explicit representation of the prior distribution, it is straightforward to produce the saliency maps according to both definitions of AUC: For AUC we use a nonparametric prior estimate, for shuffled AUC we use a uniform prior distribution. As the images of the dataset are of different sizes, we could not use our non-parametric center bias as is. Instead, we took all fixations from the full MIT-1003 dataset and transformed their positions to be relative to an image of size $100\times 100$. Then we trained a Gaussian kernel density estimator on these fixations. This density estimate was then rescaled and renormalized for each image. Doing so, we beat the state-of-the-art models in the MIT Saliency Benchmark by a large margin in AUC as well as shuffled AUC (see \autoref{fig:mit-benchmark}): For shuffled AUC, we reach 71.69\% compared to 67.90\% for the best performing model AWS (center bias is at 50\%). For AUC we reach 84.40\% compared to 82.57\% for the best performing model BMS (center bias is at 78.31\%). Relative to the center bias, this is an increase of AUC performance by more than 40\%. \footnotetext[2]{Note that the MIT Saliency Benchmark webpage reports only performances for the saliency maps with the nonparametric prior. Therefore, there the shuffled AUC performance is lower.} \subsection{Layer selection} \label{sec:selecting_layers} The final model used only the convolutions of the top-most layer of the Krizhevsky architecture. This is a principled choice: the top layer can be expected to include most high-level influences, and the relu, pool and norm units are often viewed mainly as the nonlinearities needed to provide a new feature space for the next level of convolutions.
But this choice was also backed by a series of comparison models in which more or other layers were included in the model: In \autoref{fig:layer_restriction_results}, performance results are reported for models including layers from a given depth upwards (\autoref{fig:layer_restriction_results}a), layers up to a given depth (\autoref{fig:layer_restriction_results}b), layers of a given depth (\autoref{fig:layer_restriction_results}c) and layers of a given type (\autoref{fig:layer_restriction_results}d). It can be seen that the finally chosen architecture (layer 5 convolutions) generalizes best to the images of the test set in terms of shuffled AUC. It is also worth noting that models including more layers are substantially better at predicting the test subjects' fixations on the images used in training (\autoref{fig:layer_restriction_results}a, left plot): when using all layers, a performance of 83\% information gain explained is reached for the test subjects. This suggests that the generalization problems of these models are not due to intersubject variability. They most probably suffer from the fact that the variety of objects in the training images is not rich enough, leading to overfitting to the images (not to the subjects). Therefore we can expect improved performance from using a larger set of images in training. \subsection{Analysis of used features} \label{sec:used_features} In this section we analyze which features of the Krizhevsky architecture contributed most to the fixation predictions. By getting a solid understanding of the involved features, we can hope to extract predictions from the model that can be tested psychophysically in the future. In \autoref{fig:used_features}, we took the 10 most strongly weighted features from the 256 convolution features in layer 5. For each of these 10 features, we plotted the 9 patches from the dataset that led to the highest response (resp. lowest response for features with negative weight).
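Ranking the features by the absolute value of their learned readout weight, as done for the figure, is straightforward (the weight vector below is a random stand-in, not the actual learned weights):

```python
import numpy as np

# stand-in for the learned readout weights of the 256 layer-5 features
rng = np.random.default_rng(0)
w = rng.standard_normal(256)

# indices of the 10 most strongly weighted features, most important first
top10 = np.argsort(-np.abs(w))[:10]

# the relative importances |w_k| / max_k |w_k| shown above the patches
relative_importance = np.abs(w[top10]) / np.abs(w).max()
print(top10)
print(relative_importance)
```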
In \autoref{fig:used_features_context}, the first four patches of the first four features are shown in more detail: The patches are shown in the context of the entire image and also the feature's response to this image is shown. Clearly, the most important feature is sensitive to faces. The second most important feature seems to respond mainly to text. The third most important feature shows some sort of pop-out response: it seems to respond to whichever feature sticks out from an image: the sign of a bar in the first patch, two persons in a desert in the second patch and, most notably, the target in a visual search image in the fourth patch. Note that the salient feature depends heavily on the image context, so that a simple luminance or color contrast detector would not achieve the same effect. This shows that Deep Gaze I is not only able to capture the influence of high-level objects like faces or text, but also more abstract high-level concepts (like popout). \section{Discussion} Deep Gaze I was able to increase the explained information gain to \SI{\percentIgDeepSal}{\percent} compared to \SI{\percentIgBest}{\percent} for state-of-the-art models. On the MIT Saliency Benchmark we were also able to beat the state-of-the-art models by a substantial margin. One main reason for this performance is the ability of our model to capture the influence of several high-level features like faces and text but also more abstract ones like popout (\autoref{sec:used_features}). It is important to note that all reported results from Deep Gaze I are direct model performances, without any fitting of a pointwise nonlinearity as performed in \cite{Kuemmerer2014}. This indicates that the deep layers provide a sufficiently rich feature space to enable fixation prediction via simple linear combination of the features. The convolution responses turned out to be most informative about the fixations.
While features trained on ImageNet have been shown to generalize to other recognition and detection tasks \citep[e.g.][]{donahue2013decaf, razavian2014cnn}, to our knowledge this is the first work where ImageNet features have been used to predict behaviour. Extending state-of-the-art neural networks with attention is an exciting new direction of research \citep{tang2014learning,mnih2014recurrent}. Humans use attention for efficient object recognition, and we showed that Krizhevsky features work well for predicting human attention. Therefore, it is likely that these networks could be brought closer to human performance by extending them with Krizhevsky features. This could be an interesting field for future research. \section{Conclusions} Our contribution in this work is twofold: First, we have shown that deep convolutional networks that have been trained on computer vision tasks like object detection boost saliency prediction. Using the well-known Krizhevsky network \citep{krizhevsky2012}, we were able to outperform state-of-the-art saliency models by a large margin, increasing the amount of explained information by \SI{\factorBestToDeepSalPercent}{\percent} compared to the state of the art. We believe this approach will enable the creation of a new generation of saliency models with high predictive power and deep implications for psychophysics and neuroscience \citep{yamins2014performance,Zeiler2013visualizing}. An obvious next step suggested by this approach is to replace the Krizhevsky network by the ImageNet 2014 winning networks such as VGG \citep{Simonyan2014} and GoogLeNet \citep{Szegedy2014}. A second conceptual contribution of this work is to optimize the saliency model by maximizing the log-likelihood of a point process \citep[see][]{Barthelme,Kuemmerer2014}.
We believe that the combination of high-performance feature spaces for object recognition as obtained from the ImageNet benchmark with principled maximum likelihood learning opens the door for a ``Deep Gaze'' program towards explaining all the explainable information in the spatial image-based fixation structure. \section{Acknowledgements} This work was mainly supported by the German Research Foundation (DFG; priority program 1527, Sachbeihilfe BE 3848-1) and additionally by the German Ministry of Education, Science, Research and Technology through the Bernstein Center for Computational Neuroscience (FKZ 01GQ1002) and the German Excellence Initiative through the Centre for Integrative Neuroscience Tübingen (EXC307).
\section{Introduction} Twin-width is an invariant of graphs introduced in \cite{BKTW20}. It is used to study the parameterized complexity of graph algorithms, and it has applications in logic, enumerative combinatorics, etc. Recently, it has appeared in many articles (\cite{BGKTW21}, \cite{BGKTW21'}, \cite{BGMSTT21}, \cite{BKRT22}, \cite{BGTT22}, \cite{BCKKLT22}). Moreover, it has been studied in the context of finitely generated groups \cite{BGTT22}. Twin-width is first defined for finite simple graphs and later extended to infinite simple graphs. The computation of the twin-width of a finite graph is extremely difficult. It has been computed before for complete graphs, path graphs, cyclic graphs (or graphs with at most one cycle), Paley graphs, Caterpillar trees, planar graphs, etc. In this article, we compute the twin-width of all graphs with 4 and 5 vertices and prove the following theorems. \begin{thm}\label{4vertices} The twin-width of a graph with 4 vertices is less than or equal to 1. \end{thm} \begin{thm}\label{5vertices} The twin-width of a graph with 5 vertices is less than or equal to 2. \end{thm} It is an open problem to determine whether there is an $n$-vertex graph having twin-width at least $n/2$ (see \cite{AHKO22}, page 3). Therefore, the above-mentioned theorems show that we should look for such a graph among graphs with $n\geq 6$ vertices. It is known that twin-width is invariant under taking complement graphs. It was not known how twin-width behaves under other graph operations. In this article, we prove that it is not invariant under taking dual graphs and line graphs. \begin{thm}\label{dual} Let $\mathcal{C}$ be the collection of simple connected planar graphs whose duals are also simple connected planar graphs. The construction of dual graphs does not preserve twin-width, i.e., there exists a graph $G$ in $\mathcal{C}$ such that the twin-widths of $G$ and $G^*$ are not equal, where $G^*$ is the dual graph of $G$.
\end{thm} \begin{thm}\label{linegraph} The construction of line graphs does not preserve twin-width. \end{thm} However, there are some graphs, the King's graph and the Rook's graph, which are associated with chess and are important in graph theory. In this article, we study the twin-width of these graphs. We briefly recall the definitions of the King's graph and the Rook's graph. A \textit{King's graph} is a graph that represents all legal moves of the king chess piece on a chessboard, where each vertex represents a square on a chessboard and each edge is a legal move. More specifically, an $(n\times m)$-King's graph is the King's graph of an $(n\times m)$-chessboard. On the other hand, a \textit{Rook's graph} is a graph that represents all legal moves of the rook chess piece on a chessboard. Each vertex of a rook's graph represents a square on a chessboard, and each edge connects two squares on the same row or on the same column (each edge connects the squares that a rook can move between). In this article, we prove the following two theorems. \begin{thm}\label{King'sgraph} The twin-width of an $(n\times m)$-King's graph is less than 7. \end{thm} \begin{thm}\label{Rook'sgraph} The twin-width of an $(n \times m)$-Rook's graph is less than or equal to $2(m - 1)$. \end{thm} \subsection{Organization} In Section 2, we introduce the necessary definitions, notations and abbreviations. In Section 3, we survey the known results on the twin-width of finite graphs. We study the behaviour of twin-width under graph operations in Section 4. In Section 5, we compute the twin-width of finite graphs with 4 and 5 vertices. In Section 6, we provide upper bounds on the twin-width of the King's graph and the Rook's graph. \section{Preliminaries: some definitions, notations and abbreviations} A \textit{trigraph} $G$ is a graph with a vertex set $V(G)$, a black edge set $E(G)$, and a red edge set $R(G)$ (the error edges), where $E(G)$ and $R(G)$ are disjoint.
The set of neighbours of a vertex $v$ in a trigraph $G$, denoted by $N_G(v)$, consists of all the vertices adjacent to $v$ by a black or red edge. The degree of a vertex $v$ is the number $|N_G(v)|$. A $d$-trigraph is a trigraph $G$ such that the red graph $(V(G), R(G))$ has degree at most $d$. In this situation, we also say that the trigraph has red degree at most $d$. A \textit{contraction} or \textit{identification} in a trigraph $G$ consists of merging two (not necessarily adjacent) vertices $u$ and $v$ into a single vertex $w$, and defining the edges of $G'$ (the new graph after the contraction) in the following way: every vertex of the symmetric difference $N_G(u)\bigtriangleup N_G(v)$ is linked to $w$ by a red edge; every vertex $x$ of the intersection $N_G(u)\cap N_G(v)$ is linked to $w$ by a black edge if both $ux\in E(G)$ and $vx\in E(G)$, and by a red edge otherwise. The rest of the edges (not incident to $u$ or $v$) remain unchanged. Also, the vertices $u$ and $v$ (together with the edges incident to these vertices) are removed from the trigraph. A \textit{sequence of $d$-contractions} is a sequence of $d$-trigraphs $G_n, G_{n-1},\cdots, G_1$, where $G_n = G$, $G_1 = K_1$ is the graph on a single vertex, and $G_{i-1}$ is obtained from $G_i$ by performing a single contraction of two (not necessarily adjacent) vertices. We observe that $G_i$ has precisely $i$ vertices, for every $i\in \{1,\cdots, n\}$. The twin-width of $G$, denoted by $tww(G)$, is the minimum integer $d$ such that $G$ admits a sequence of $d$-contractions. Now, we provide an example of a sequence of contractions of a finite graph. In the sequence of graphs depicted below, we start with the given finite graph at the extreme left end and label its vertices by $a, b, c, d, e, f, g$. The next diagram is the result of the contraction of the vertices $e$ and $f$, and in the resulting graph we label the new vertex by $ef$.
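Before the diagrams, the contraction rule can also be stated as a short program (an illustrative sketch, not part of the paper; vertices are frozensets of labels so that merged vertices keep their names, and edges are frozensets of two vertices):

```python
def contract(black, red, u, v):
    """Contract vertices u and v of the trigraph (black, red) into u | v.

    Common black neighbours stay black; vertices in the symmetric
    difference of the neighbourhoods, and common neighbours reached by
    at least one red edge, become red neighbours of the merged vertex.
    """
    w = u | v
    def neighbours(x):
        return {next(iter(e - {x})) for e in black | red if x in e}
    nu, nv = neighbours(u) - {v}, neighbours(v) - {u}
    new_black, new_red = set(), set()
    for e in black | red:                 # edges not touching u or v survive
        if u not in e and v not in e:
            (new_black if e in black else new_red).add(e)
    for x in nu | nv:
        if (x in nu and x in nv
                and frozenset({u, x}) in black and frozenset({v, x}) in black):
            new_black.add(frozenset({w, x}))
        else:
            new_red.add(frozenset({w, x}))
    return new_black, new_red

# contracting the two twins of the path a-b-c yields K_2 with no red edge
a, b, c = frozenset('a'), frozenset('b'), frozenset('c')
nb, nr = contract({frozenset({a, b}), frozenset({b, c})}, set(), a, c)
print(nb, nr)
```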
In this way, we obtain a sequence of graphs by gradually contracting other vertices. The graph in the extreme left end of the second line of this sequence is obtained by contracting the vertices $ad$ and $g$ in the graph depicted in the extreme right end of the first line of the sequence. \begin{center} \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (0,1) {b}; \node (a3) at (0,2) {c}; \node (a4) at (1,0) {d}; \node (a5) at (1,1) {e}; \node (a6) at (1,2) {f}; \node (a7) at (2,2) {g}; \draw (a1) -- (a2); \draw (a1) -- (a4); \draw (a1) -- (a6); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a2) -- (a5); \draw (a2) -- (a6); \draw (a3) -- (a5); \draw (a3) -- (a6); \draw (a4) -- (a5); \draw (a5) -- (a7); \draw (a6) -- (a7); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (0,1) {b}; \node (a3) at (0,2) {c}; \node (a4) at (1,0) {d}; \node (a5) at (1,1.5) {ef}; \node (a7) at (2,2) {g}; \draw (a1) -- (a2); \draw (a1) -- (a4); \draw[red] (a1) -- (a5); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a5); \draw[red] (a4) -- (a5); \draw (a5) -- (a7); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (1,0) {ad}; \node (a2) at (0,1) {b}; \node (a3) at (0,2) {c}; \node (a5) at (1,1.5) {ef}; \node (a7) at (2,2) {g}; \draw (a1) -- (a2); \draw[red] (a1) -- (a5); \draw (a2) -- (a3); \draw (a2) -- (a5); \draw (a3) -- (a5); \draw (a5) -- (a7); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,-0.5) {ad}; \node (a2) at (0.5,1) {bef}; \node (a3) at (0,2) {c}; \node (a7) at (2,1) {g}; \draw[red] (a1) -- (a2); \draw (a2) -- (a3); \draw (a2) -- (a7); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every
node/.style={circle,fill=blue!20}] \node (a1) at (0,-0.5) {adg}; \node (a2) at (0.5,1) {bef}; \node (a3) at (0,2) {c}; \draw[red] (a1) -- (a2); \draw (a2) -- (a3); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,-0.5) {adg}; \node (a2) at (0,1) {bcef}; \draw[red] (a1) -- (a2); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.4,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {abcdefg}; \end{tikzpicture} \end{center} Moreover, we will draw every contraction sequence in this fashion in this article. We end this section by defining the twin-width of an infinite graph. It is defined as the supremum of the twin-widths of its finite induced subgraphs. \section{A survey of the known results of twin-width of finite graphs} In this section, we survey the results regarding the twin-width of complete graphs, planar graphs, graphs with at most one cycle, Caterpillar trees and Paley graphs. \begin{thm} The complete graph with $n$ vertices, denoted by $K_n$, and the complete bipartite graph with parts of sizes $n$ and $m$, denoted by $K_{n,m}$, have twin-width zero. \end{thm} \begin{thm}\cite{H22} The twin-width of any simple planar graph $G$ is at most 9. \end{thm} \begin{thm}\label{onecycle}\cite{AHKO22} If every component of a graph $G$ has at most one cycle, then $tww(G)\leq 2$. \end{thm} We obtain the following corollary from the above-mentioned theorem. \begin{cor}\label{cyclicgraph} The cyclic graph with $n$ vertices (denoted by $C_n$) has twin-width less than or equal to 2. \end{cor} A \textit{caterpillar tree} is a tree in which all the vertices are within distance 1 of a central path. We draw an example of a Caterpillar tree below.
\begin{center} \begin{tikzpicture} [scale=1.5,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (2,0){c}; \node (a4) at (2,-1) {d}; \node (a5) at (3,0) {e}; \node (a6) at (2.5,1) {f}; \node (a7) at (3,1) {g}; \node (a8) at (3.5,1) {h}; \node (a9) at (2.5,-1) {i}; \node (a10) at (3.5,-1){j}; \node (a11) at (4,0) {k}; \draw (a1) -- (a2); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a5) -- (a6); \draw (a5) -- (a7); \draw (a5) -- (a8); \draw (a5) -- (a9); \draw (a5) -- (a10); \draw (a5) -- (a11); \end{tikzpicture} \end{center} \begin{thm}\label{caterpillar}\cite{AHKO22} For a tree $T$, $tww(T)\leq 1$ if and only if $T$ is a Caterpillar tree. \end{thm} The above-mentioned theorem gives rise to the following corollary. \begin{cor}\label{pathgraph} The path graph with $n$ vertices, denoted by $P_n$, has twin-width less than or equal to 1. \end{cor} Let $q$ be a prime power such that $q\equiv 1$ (mod 4). The Paley graph of order $q$, denoted by $P(q)$, is defined as follows: The vertices of the graph are the elements of the field $\textbf{F}_q$ and the vertices $i$ and $j$ are adjacent if $(j-i)$ is a quadratic residue in $\textbf{F}_q$. \begin{thm}\cite{AHKO22} For each prime $q$ with $q\equiv 1$ (mod 4), the Paley graph $P(q)$ has twin-width exactly $(q-1)/2$. \end{thm} \begin{rem} The twin-width of the class of Paley graphs is unbounded. \end{rem} Finally, we end this section by mentioning results on groups. We say that a finitely generated group has bounded or unbounded twin-width if one of its Cayley graphs has bounded or unbounded twin-width. \begin{thm}\cite{BGTT22} Solvable, hyperbolic, and ordered finitely generated groups have finite twin-width. \end{thm} \begin{thm}\cite{BGTT22} There is a finitely generated group with infinite twin-width.
\end{thm} \section{Graph operations and their twin-width} In this section, we study the behaviour of twin-width under graph operations, like the complement graph, the dual graph and the line graph. \subsection{Complement of a graph}\label{subsection4.3} The \textit{complement of a graph} $G$ is a graph $H$ on the same vertices such that two distinct vertices of $H$ are adjacent if and only if they are not adjacent in $G$. We obtain the following theorem from \cite{BKTW20} (see Subsection 4.1). \begin{thm}\label{complementgraph} Twin-width is invariant under complementation. \end{thm} \subsection{Dual Graph}\label{subsection4.1} The \textit{dual graph} of a planar graph $G$ is a graph that has a vertex for each face of $G$. The dual graph has an edge for each pair of faces in $G$ that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Now, we prove Theorem \ref{dual}. \vspace{5mm} \textbf{Proof of Theorem \ref{dual}:} Let $G$ be the following graph: \begin{center} \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-0.5,-0.5) {e}; \node (a2) at (0.5,-0.5) {f}; \node (a3) at (0,0.5){d}; \node (a4) at (-1.5,-1.5) {b}; \node (a5) at (1.5,-1.5) {c}; \node (a6) at (0,2) {a}; \draw (a1) -- (a2); \draw (a2) -- (a3); \draw (a3) -- (a1); \draw (a4) -- (a5); \draw (a5) -- (a6); \draw (a6) -- (a4); \draw (a1) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a6); \end{tikzpicture} \end{center} If we contract any two vertices of $G$, it generates a red edge, since no two vertices $u, v$ of $G$ satisfy $N(u)\setminus\{v\}=N(v)\setminus\{u\}$. Therefore, the twin-width of $G$ is greater than or equal to 1. Now, we compute the dual graph of $G$. We denote the triangular region `def' by $r1$, the trapezium `bcfe' by $r2$, the trapezium `adfc' by $r3$, the trapezium `abed' by $r4$ and the region outside $\triangle abc$ by $r5$.
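The first claim can be checked mechanically: contracting $u$ and $v$ creates no red edge exactly when $u$ and $v$ are twins, i.e. $N(u)\setminus\{v\}=N(v)\setminus\{u\}$, and $G$ has no pair of twins. A quick sketch of this check (not part of the paper):

```python
from itertools import combinations

# the graph G from the proof: triangles abc and def joined by ad, be, cf
edges = [('d', 'e'), ('e', 'f'), ('f', 'd'),
         ('a', 'b'), ('b', 'c'), ('c', 'a'),
         ('a', 'd'), ('b', 'e'), ('c', 'f')]
adj = {x: set() for x in 'abcdef'}
for x, y in edges:
    adj[x].add(y)
    adj[y].add(x)

# u and v are twins iff N(u)\{v} == N(v)\{u}; only then does contracting
# them avoid creating a red edge
twins = [(u, v) for u, v in combinations('abcdef', 2)
         if adj[u] - {v} == adj[v] - {u}]
print(twins)   # empty list: every contraction creates a red edge, tww(G) >= 1
```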
Therefore, the dual of $G$, denoted by $G^*$, will be the following graph: \begin{center} \begin{tikzpicture} [scale=1.5,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-0.5,-0.5) {r3}; \node (a2) at (0.5,-0.5) {r4}; \node (a3) at (0,0.5){r2}; \node (a4) at (0,-1.5) {r5}; \node (a5) at (0,2) {r1}; \draw (a1) -- (a2); \draw (a2) -- (a3); \draw (a3) -- (a1); \draw (a4) -- (a1); \draw (a4) -- (a2); \draw (a4) -- (a3); \draw (a5) -- (a1); \draw (a5) -- (a2); \draw (a5) -- (a3); \end{tikzpicture} \end{center} We apply the following $0$-contraction sequence to $G^*$. \begin{center} \begin{tikzpicture} [scale=1.5,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-0.5,-0.5) {r2r3}; \node (a2) at (0.5,-0.5) {r4}; \node (a4) at (0,-1.5) {r5}; \node (a5) at (0,0.5) {r1}; \draw (a1) -- (a2); \draw (a4) -- (a1); \draw (a4) -- (a2); \draw (a5) -- (a1); \draw (a5) -- (a2); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=1.5,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {r2r3r4}; \node (a4) at (0,-1) {r5}; \node (a5) at (0,1) {r1}; \draw (a1) -- (a4); \draw (a5) -- (a1); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=1.5,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {r2r3r4}; \node (a4) at (0,1.5) {r1r5}; \draw (a1) -- (a4); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=1.5,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {r1r2r3r4r5}; \end{tikzpicture} \end{center} Therefore, $G^*$ has twin-width zero. Hence, the twin-widths of $G$ and $G^*$ are different. \hfill\(\Box\) \subsection{Line graph of a graph}\label{subsection4.2} The \textit{line graph} of an undirected graph $G$ is another graph $L(G)$ that represents the adjacencies between edges of $G$. 
$L(G)$ is constructed in the following way: for each edge in $G$, make a vertex in $L(G)$; for every two edges in $G$ that have a vertex in common, make an edge between their corresponding vertices in $L(G)$. Now, we prove Theorem \ref{linegraph}. \vspace{5mm} \textbf{Proof of Theorem \ref{linegraph}:} Let $G$ be the following graph: \begin{center} \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,1.5) {a}; \node (a2) at (0,0.5) {b}; \node (a3) at (1,2) {c}; \node (a4) at (1,1) {d}; \node (a5) at (1,0) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a1) -- (a5); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a2) -- (a5); \end{tikzpicture} \end{center} Since $G$ is a complete bipartite graph, it has twin-width zero. However, the line graph of $G$, denoted by $L(G)$, is the following graph: \begin{center} \begin{tikzpicture} [scale=1.2,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a'}; \node (a2) at (0,2) {b'}; \node (a3) at (1,1) {c'}; \node (a4) at (2,1) {d'}; \node (a5) at (3,0) {e'}; \node (a6) at (3,2) {f'}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a1) -- (a5); \draw (a2) -- (a3); \draw (a2) -- (a6); \draw (a3) -- (a4); \draw (a4) -- (a5); \draw (a4) -- (a6); \draw (a5) -- (a6); \end{tikzpicture} \end{center} We observe that if we contract any two vertices of $L(G)$, it generates a red edge. Therefore, the twin-width of $L(G)$ is greater than or equal to $1$, which implies that twin-width is not preserved under taking line graphs. \hfill\(\Box\) \section{Computation of twin-width of finite graphs with 4 and 5 vertices} In this section, we prove Theorem \ref{4vertices} and Theorem \ref{5vertices}. \vspace{5mm} \textbf{Proof of Theorem \ref{4vertices}:} First, we make a list of the graphs which are disconnected.
\begin{center} \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a4) -- (a3); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a2) -- (a3); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a3); \end{tikzpicture} \end{center} Since the twin-width of a (disconnected) graph is the maximum of the twin-widths of its components, it is easy to see that the twin-widths of the disconnected graphs with 4 vertices are zero. Now, we make a list of graphs which are caterpillar trees.
\begin{center} \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a1) -- (a4); \draw (a2) -- (a3); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a1) -- (a4); \draw (a1) -- (a3); \end{tikzpicture} \end{center} By Theorem \ref{caterpillar}, these two graphs have twin-width less than or equal to 1. Finally, we make a list of graphs whose complement graphs are disconnected. \begin{center} \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a3) -- (a4); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a3) -- (a4); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (1,1) {c}; \node (a4) at (0,1) {d}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a4); \end{tikzpicture} \end{center} Since the twin-width of a graph is the same as the
twin-width of the complement graph by Theorem \ref{complementgraph} and the twin-width of a disconnected graph with 4 vertices is zero, we obtain that the twin-widths of the graphs in the above list are zero. Hence, we have our theorem. \hfill\(\Box\) \vspace{5mm} \textbf{Proof of Theorem \ref{5vertices}:} First, we make a list of disconnected simple graphs with 5 vertices: \begin{center} 1. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \end{tikzpicture} \qquad 2. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a3) -- (a4); \end{tikzpicture} \qquad 3. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 4. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a5); \draw (a3) -- (a4); \end{tikzpicture} \qquad 5. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 6.
\begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad 7. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a4); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 8. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 9. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad 10. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad 11. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a4); \draw (a1) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 12. 
\begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad 13. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a1) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} Since the twin-width of a disconnected graph is the maximum of the twin-widths of its components and the twin-width of a simple graph with 4 vertices is less than or equal to 1 by Theorem \ref{4vertices}, the twin-widths of the disconnected graphs with 5 vertices are less than or equal to 1. Now, we make a list of graphs with 5 vertices which are caterpillar trees. \begin{center} 14. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 15. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 16.
\begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \end{center} Since a caterpillar tree has twin-width less than or equal to 1 by Theorem \ref{caterpillar}, the graphs in the above list have twin-width less than or equal to 1. Next, we make a list of graphs which have at most one cycle. \begin{center} 17. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad 18. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad 19. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad 20. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad 21.
\begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \end{center} Since the graphs with at most one cycle have twin-width less than or equal to 2, the graphs in the above-mentioned list have twin-width less than or equal to 2. Now, we make a list of graphs which can be reduced to a graph with 4 vertices after applying a 1-step contraction (all edges remain black). \begin{center} 22. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {ae}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a3) -- (a4); \end{tikzpicture} \end{center} \begin{center} 23.
\begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {ab}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} \begin{center} 24. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {cd}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a2) -- (a5); \draw (a3) -- (a5); \end{tikzpicture} \end{center} \begin{center} 25. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {bc}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a2) -- (a4); \draw (a2) -- (a5); \end{tikzpicture} \end{center} \begin{center} 26. 
\begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a4) at (0,2) {cd}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a4); \draw (a2) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} Since the graphs with 4 vertices have twin-width less than or equal to 1, the graphs in the above-mentioned list will also have twin-width less than or equal to 1. Next, we have a graph with twin-width less than or equal to 2. \begin{center} 27. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {ae}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \draw[red] (a1) -- (a2); \draw (a1) -- (a3); \draw[red] (a1) -- (a4); \draw (a2) -- (a4); \draw (a3) -- (a4); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {ae}; \node (a2) at (1,0) {bd}; \node (a3) at (3/2,1) {c}; \draw[red] (a1) -- (a2); \draw (a1) -- (a3); \draw[red] (a2) -- (a3); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {ace}; \node (a2) at (1,0) {bd};
\draw[red] (a1) -- (a2); \end{tikzpicture} \qquad \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (0,0) {abcde}; \end{tikzpicture} \end{center} Now, we make a list of graphs whose complement graphs are disconnected. Since a graph has the same twin-width as its complement graph by Theorem \ref{complementgraph}, the graphs in this list have twin-width less than or equal to 1. \begin{center} 28. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} (The vertex `c' (or `d') is connected by an edge to every other vertex. Therefore, the complement graph is disconnected.) \begin{center} 29. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a1) -- (a5); \draw (a2) -- (a3); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} (The vertex `c' is connected by an edge to every other vertex. Therefore, the complement graph is disconnected.) \begin{center} 30. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} (The vertex `c' is connected by an edge to every other vertex. Therefore, the complement graph is disconnected.
) \begin{center} 31. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a1) -- (a5); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} (The vertex `c' (or `d') is connected by an edge to every other vertex. Therefore, the complement graph is disconnected.) \begin{center} 32. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a2) -- (a3); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} (The vertex `c' is connected by an edge to every other vertex. Therefore, the complement graph is disconnected.) \begin{center} 33. \begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a1) -- (a5); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} (The vertex `c' (or `d' or `e') is connected by an edge to every other vertex. Therefore, the complement graph is disconnected.) Finally, we are left with the following complete graph whose twin-width is zero. \begin{center} 34.
\begin{tikzpicture} [scale=.9,auto=center,every node/.style={circle,fill=blue!20}] \node (a1) at (-1,0) {a}; \node (a2) at (1,0) {b}; \node (a3) at (3/2,1) {c}; \node (a4) at (0,2) {d}; \node (a5) at (-3/2,1) {e}; \draw (a1) -- (a2); \draw (a1) -- (a3); \draw (a1) -- (a4); \draw (a1) -- (a5); \draw (a2) -- (a3); \draw (a2) -- (a4); \draw (a2) -- (a5); \draw (a3) -- (a4); \draw (a3) -- (a5); \draw (a4) -- (a5); \end{tikzpicture} \end{center} Hence, we have our theorem. \hfill\(\Box\) \section{Upper bound of twin-width of King's graph and Rook's graph} In this section, we prove Theorem \ref{King'sgraph} and Theorem \ref{Rook'sgraph}. Before going into the proofs, we introduce two types of graph products, the strong product and the Cartesian product, which will be required in the proofs of the theorems. The \textit{strong product} of graphs $G$ and $H$, denoted by $G\boxtimes H$, is a graph such that the vertex set of $G\boxtimes H$ is the Cartesian product $V(G) \times V(H)$, where $V(G)$ and $V(H)$ are the sets of vertices of $G$ and $H$, respectively; distinct vertices $(u,u')$ and $(v,v')$ are adjacent in $G\boxtimes H$ if and only if $u = v$ and $u'$ is adjacent to $v'$, or $u' = v'$ and $u$ is adjacent to $v$, or $u$ is adjacent to $v$ and $u'$ is adjacent to $v'$. On the other hand, the \textit{Cartesian product} of graphs $G$ and $H$, denoted by $G\square H$, is a graph such that the vertex set of $G\square H$ is the Cartesian product $V(G) \times V(H)$, and two vertices $(u,u')$ and $(v,v')$ are adjacent in $G\square H$ if and only if either $u = v$ and $u'$ is adjacent to $v'$ in $H$, or $u' = v'$ and $u$ is adjacent to $v$ in $G$. We now prove Theorem \ref{King'sgraph}. \vspace{5mm} \textbf{Proof of Theorem \ref{King'sgraph}:} The $(n\times m)$-King's graph is the strong product of two path graphs $P_n$ and $P_m$.
We obtain from \cite{BGKTW21} (Theorem 9) that $$tww(G\boxtimes H)\leq \max \{tww(G)(\Delta(H) + 1) + 2 \Delta(H),\ tww(H) + \Delta(H)\}.$$ Since the twin-width of a path graph is less than or equal to $1$ and $\Delta(P_m)=2$, we obtain that the twin-width of the $(n\times m)$-King's graph is less than or equal to $7$. \hfill\(\Box\) Now, we prove Theorem \ref{Rook'sgraph}. \vspace{5mm} \textbf{Proof of Theorem \ref{Rook'sgraph}:} The $(n\times m)$-Rook's graph is the Cartesian product of two complete graphs $K_n$ and $K_m$. We obtain from \cite{PS22} (Theorem 3.1) that for any graphs $G$ and $H$, $$tww(G\square H)\leq \max\{ tww(G) + \Delta(H),\ tww(H)\} + \Delta(H).$$ Since the twin-width of a complete graph is zero and $\Delta(K_m)$ is $(m-1)$, we obtain that the twin-width of the $(n\times m)$-Rook's graph is less than or equal to $2(m-1)$. \hfill\(\Box\) \bigskip
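For concreteness, the arithmetic behind the two bounds, using the values quoted in the proofs above ($tww(P_k)\leq 1$ and $\Delta(P_m)=2$ for paths; $tww(K_k)=0$ and $\Delta(K_m)=m-1$ for complete graphs), is:

```latex
\[
tww(P_n \boxtimes P_m) \leq \max\{\, 1\cdot(2+1) + 2\cdot 2,\; 1+2 \,\} = \max\{7,3\} = 7,
\qquad
tww(K_n \square K_m) \leq \max\{\, 0+(m-1),\; 0 \,\} + (m-1) = 2(m-1).
\]
```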
\section{Introduction} Open quantum random walks were recently defined by Attal \textit{et al.} in \cite{APSS}. These processes have a simple definition, implementing a Markovian dynamics influenced by internal degrees of freedom, and can be useful to model a variety of phenomena: quantum algorithms (see \cite{SP}), transfer in biological systems (see~\cite{MSKPE}) and possibly quantum exclusion processes. In addition, a continuous-time version can be defined (see \cite{PelOQRW}). Therefore, open quantum random walks seem to be good quantum analogues of Markov chains. The usefulness of (classical) Markov chains, however, comes not only from the vast number of situations they can model, but also from the many properties implied by their simple definition. A textbook description of Markov chains, for instance, can start with the notion of irreducibility, which is easily characterized through the connectedness of the associated graph, and implies mean-ergodic convergence in law if an invariant probability exists (which is the case when the state space is finite). The next notion, the aperiodicity of an irreducible chain, is not as easy to characterize, but has simple sufficient conditions (\textit{e.g.} the existence of loops) and implies convergence in law, at least when the state space is finite. Last, the notion of connected subsets of the initial graph allows one to decompose a Markov chain into irreducible ones, to characterize its invariant states as convex combinations of invariant states for restricted chains. On the other hand, the only general properties of open quantum random walks proven so far are the central limit theorem for the position process (see~\cite{AGS}) and the general K\"ummerer-Maassen theorem for quantum trajectories (see~\cite{KuMa}). In the present paper we discuss an analogue of the above textbook description of Markov chains, for open quantum random walks. 
The non-commutative nature of the objects under study, and specifically the fact that the transition probabilities are replaced by operators acting on a Hilbert space, are the cause of higher mathematical complexity. Some intuitive aspects of classical Markov chains, however, fruitfully remain, and we can recover a vision of irreducibility, period, and accessibility, in terms of paths. This is of interest for the study of more general quantum Markov processes, as it gives indications on the relevant extensions of classical concepts, and on techniques of proofs of associated results. We view this as an additional justification for the study of open quantum random walks. Our theory will be constructed starting from pre-existing tools: \begin{itemize} \item a notion of irreducibility for general positive maps on non-commutative algebras, together with an associated Perron-Frobenius theorem, that was developed by various authors in the late seventies and early eighties (\cite{AHK}, \cite{EHK}, \cite{WE}, \cite{Gro}); \item a notion of period, together with associated results on the peripheral spectrum, that were defined in the same setting by Groh (\cite{Gro}) and extended by Fagnola and Pellicer (\cite{FP}); \item some old and new inspiring ergodic results (see \cite{FV} and \cite{KuMa}) and a decomposition of the support of invariant states proposed more recently by Baumgartner and Narnhofer (\cite{BN}) for quantum discrete time processes acting on finite dimensional spaces. \end{itemize} We briefly describe the structure of the article and the main contents. Section~\ref{section_OQRWs} recalls the definitions, notations and basic results regarding open quantum random walks from \cite{APSS}. We describe the two types of (classical) processes associated to an OQRW: the process ``with (repeated) measurement", commonly called ``quantum trajectory", and the process ``without measurement". 
Sections \ref{section_irreducibility} and~\ref{section_aperiodicity} discuss, respectively, irreducibility and aperiodicity for OQRWs. Both follow the same structure: they start by recalling standard definitions and properties of irreducibility or aperiodicity for positive maps on operator algebras; then study the application to the special case of OQRWs. Some immediate consequences on the ergodic behavior of the evolution are underlined. Section~\ref{section_recurrence} applies the results of the previous two sections to obtain convergence properties for irreducible, or irreducible aperiodic, open quantum random walks, for both processes described in section \ref{section_OQRWs}, \textit{i.e.} ``with measurement" and ``without measurement". Section \ref{section_nonirreducible} expands on reducible open quantum random walks, characterizing in different ways their irreducible components. The resulting decomposition can be seen as related to a ``quantum communication relation'' among vectors of the underlying Hilbert space. Section \ref{section_stationary} states the general form of stationary states for reducible open quantum random walks. Its central point is the full exploitation of some results from \cite{BN}, which we state and prove in full detail. Section \ref{section_extension} mentions a natural extension of open quantum random walks, which are strongly related to the quantum Markov chains defined by Gudder in \cite{Gudder}. For this extension we discuss without proof a characterization of irreducibility, periodicity, communication classes, and their consequences: as we will see, all previous results will remain with paths on a graph replaced by paths on a multigraph. We conclude with section \ref{section_examples}, which is dedicated to examples and applications. We start a study of translation-invariant open quantum random walks on $\mathbb{Z}^d$ continued in \cite{CP2}, and extending that of \cite{AGS}. 
We study examples which illustrate our most practical convergence results, namely Corollaries \ref{coro_rec1}, \ref{coro_rec2}, and \ref{coro_rec3}, as well as our decomposition result, Theorem \ref{theo_invariantstates}. \paragraph{Acknowledgements} The authors wish to thank Stéphane Attal for providing perpetual impetus to this project, Matteo Gregoratti for the organization of a meeting in Milano that played an important role in the development of this article, and Clément Pellegrini for many enthusiastic discussions. RC also gratefully acknowledges the support of PRIN project 2010MXMAJR and GNAMPA project ``Semigruppi markoviani su algebre non commutative'', and YP the support of ANR project “HAM-MARK”, n${}^\circ$ANR-09-BLAN-0098-01. \section{Open quantum random walks}\label{section_OQRWs} In this section we recall basic results and notations about open quantum random walks. For a more detailed exposition of OQRWs and related notions we refer the reader to \cite{APSS}. We consider a Hilbert space $\mathcal H$ of the form $\mathcal H = \bigoplus_{i\in V}\mathfrak h_i$ where $V$ is a countable set of vertices, and each $\mathfrak h_i$ is a separable Hilbert space (making $\mathcal H$ separable). This is a generalization with respect to standard OQRWs where the space $\mathcal H$ is $\mathfrak h \otimes\ell^2(V)$, or equivalently $\mathfrak h_i = \mathfrak h$ for all $i\in V$. This generalization will be useful when we consider decompositions of OQRWs, especially in section~\ref{section_nonirreducible}. We view $\mathcal H$ as describing the degrees of freedom of a particle constrained to move on $V$: the ``$V$-component" describes the spatial degrees of freedom (the position of the particle) while $\mathfrak h_i$ describes the internal degrees of freedom of the particle, when it is located at site $i\in V$. 
For clarity, whenever a vector $x\in\mathcal H$ belongs to the subspace $\mathfrak h_i$, we will denote it by $x\otimes\vec i$, and drop the (implicit) assumption that $x\in\mathfrak h_i$. Similarly, when an operator $A$ on $\mathcal H$ satisfies $\mathrm{Ker}\, A \subset \mathfrak h_j^\perp$ and $\mathrm{Ran}\, A\subset \mathfrak h_i$, we denote it by $A=L_{i,j}\otimes \ketbra ij$ where $L_{i,j}$ is viewed as an operator from $\mathfrak h_j$ to $\mathfrak h_i$. Therefore, for $i,j,k$ in $V$, we have the relation \[ \big(L_{i,j}\otimes\ketbra ij\big) \ \big(x\otimes\vec k\big) = \left\{ \begin{array}{ll} 0 & \mbox{ if }j\neq k, \\ L_{i,j}\,x\otimes \vec i & \mbox{ if }j=k. \end{array}\right. \] All of these notations are consistent with the special case of $\mathcal H = \mathfrak h \otimes \ell^2(V)$, and with the interpretation of $\mathcal H$ described above. \smallskip We consider a map on the space $\mathcal I _1(\mathcal H)$ of trace-class operators on $\mathcal H$, \begin{equation}\label{eq_OQRW} \mathfrak M \, : \, \rho \mapsto \sum_{i,j\in V}A_{i,j}\,\rho \,A_{i,j}^* \end{equation} where, for any $i,j$ in $V$, the operator $A_{i,j}$ is of the form $L_{i,j}\otimes |i\rangle \langle j|$ and the operators $L_{i,j}$ satisfy \begin{equation}\label{eq_stochastic} \forall j\in V\quad \sum_{i\in V}L_{i,j}^* L_{i,j}=\mathrm{Id}, \end{equation} where the series is meant in the strong convergence sense. The $L_{i,j}$ are thought of as encoding both the probability of a transition from site $j$ to site $i$, and the effect of that transition on the internal degrees of freedom. Equation \eqref{eq_stochastic} therefore encodes the ``stochasticity" of the transitions $L_{i,j}$. 
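As an informal illustration (not part of the paper), the defining relations \eqref{eq_OQRW} and \eqref{eq_stochastic} can be checked numerically on a toy walk with two sites and one qubit of internal degrees of freedom; the specific transition operators $L_{i,j}$ below are made up for the example.

```python
import numpy as np

# Two sites V = {0, 1}, internal spaces h_0 = h_1 = C^2.
# Hypothetical transition operators L[i][j] : h_j -> h_i, chosen so that
# sum_i L[i][j]^* L[i][j] = Id holds in every column j (the stochasticity
# condition, equation (2)).
p = 0.3
X = np.array([[0.0, 1.0], [1.0, 0.0]])           # internal bit flip
L = [[np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X],
     [np.sqrt(p) * np.eye(2),     np.sqrt(1 - p) * np.eye(2)]]
for j in range(2):
    col = sum(L[i][j].conj().T @ L[i][j] for i in range(2))
    assert np.allclose(col, np.eye(2))            # stochasticity per column

# A block-diagonal state rho = sum_i rho(i) (x) |i><i|, stored as its blocks.
rho = [0.3 * np.eye(2), np.diag([0.4, 0.0])]
assert np.isclose(sum(np.trace(r) for r in rho), 1.0)

# One step of the walk: M(rho, i) = sum_j L_{i,j} rho(j) L_{i,j}^*.
new_rho = [sum(L[i][j] @ rho[j] @ L[i][j].conj().T for j in range(2))
           for i in range(2)]
assert np.isclose(sum(np.trace(r).real for r in new_rho), 1.0)  # trace preserved
```

Each block `new_rho[i]` is again positive, and its trace is the probability of finding the walker at site $i$ upon measuring the position.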
Clearly \eqref{eq_OQRW} defines a trace-preserving (TP) map $\mathcal I_1(\mathcal H)\to \mathcal I_1(\mathcal H)$, which is completely positive (CP), \textit{i.e.} for any $n$ in $\mathbb{N}^*$, the extension $\mathfrak{M}\otimes \mathrm{Id}$ to $\mathcal I_1(\mathcal H)\otimes \mathcal B(\mathbb{C}^n)$ is positive. In particular, such a map transforms states (\textit{i.e.} positive elements of $\mathcal I_1(\mathcal H)$ with trace one) into states. A completely-positive, trace-preserving map will be called a CP-TP map. We shall call a map $\mathfrak{M}$ as defined by \eqref{eq_OQRW} an open quantum random walk, or OQRW. Note that \eqref{eq_stochastic} implies that $\|\mathfrak{M}\|=1$ as an operator on $\mathcal I_1(\mathcal H)$ (see Remark \ref{remark_TPnormone} below). \begin{remark}\label{remark_extensions} In our interpretation of $L_{i,j}$ above, it would be more precise to say that the transition from site $j$ to site $i$ is encoded by the completely positive map $\rho_j\mapsto L_{i,j}\, \rho_j \, L_{i,j}^*$. A natural extension would be to replace this with a more general completely positive map $\rho_j\mapsto \Phi_{i,j}(\rho_j)$. We recover the ``transition operation matrices" introduced by Gudder in \cite{Gudder}. This will be discussed in section~\ref{section_extension}. \end{remark} Let us recall that the topological dual $\mathcal I_1(\mathcal H)^*$ can be identified with $\mathcal B(\mathcal H)$ through the duality \[(\rho,X)\mapsto \mathrm{Tr}(\rho\, X).\] \begin{remark}\label{remark_TPnormone} Trace-preservation of a map $\Phi$ is equivalent to $\Phi^*(\mathrm{Id})=\mathrm{Id}$. The adjoint $\Phi^*$ is then a positive, unital (\textit{i.e.} $\Phi^*(\mathrm{Id})=\mathrm{Id}$) map on $\mathcal B (\mathcal H)$, and by the Russo-Dye theorem (\cite{RD}) one has $\|\Phi^*\|=\|\Phi^*(\mathrm{Id})\|$ so that $\|\Phi\|=\|\Phi^*\|=1$. 
\end{remark} \begin{defi} We say that an open quantum random walk $\mathfrak M$ is finite if $V$ is finite and every $\mathfrak h_i$ is finite-dimensional. \end{defi} \begin{remark}\label{remark_finite} If an open quantum random walk is finite, then $\mathfrak M^*(\mathrm{Id})=\mathrm{Id}$ implies that $1$ is an eigenvalue of $\mathfrak M$. Since $\mathfrak{M}$ preserves the trace and the positivity, this implies that there exists an invariant state. \end{remark} \begin{remark} As noted in \cite{APSS}, classical Markov chains can be written as open quantum random walks. More precisely, if the transition matrix is $P=(P_{i,j})$ then the choice $L_{i,j}=\sqrt{P_{j,i}}\, U_{i,j}$, with any $U_{i,j}$ satisfying $U_{i,j}^* U_{i,j}=\mathrm{Id}_{\mathfrak h_j}$, yields an open quantum random walk that preserves states of the form $\sum_{i\in V} p_i \otimes |i\rangle\langle i|$, and the induced dynamics on the family~$(p_i)_{i\in V}$ is described by the transition matrix $P$. However, if $\mathrm{dim}\,\mathfrak h_i>1$ we run into possible non-uniqueness problems, \textit{e.g.} for the invariant states of $\mathfrak{M}$ (see section \ref{section_nonirreducible}). We feel this is an artificial degeneracy, related not to the properties of the Markov chain, but rather to the choice of the dilation. We will therefore only consider \textit{minimal} dilations of classical Markov chains, where $\mathrm{dim}\,\mathfrak h_i =1$ for all $i$ in $V$, and $L_{i,j}=\sqrt{P_{j,i}\,}$. \end{remark} A crucial remark is that any initial state $\rho$ on $\mathcal H$ can be expanded~as \[ \rho=\sum_{i,j\in V} \rho(i,j)\otimes \ketbra ij\] and that, for any $n\geq 1$, the evolved state $\mathfrak M ^n(\rho)$ is of the form \begin{equation}\label{eq_Mn1} \mathfrak M ^n (\rho) = \sum_{i\in V} \mathfrak M^n (\rho,i) \otimes \ketbra ii, \end{equation} where \textit{e.g.} for $n=1$, \begin{equation}\label{eq_Mn2} \mathfrak M^1 (\rho,i)=\sum_{j\in V} L_{i,j}\, \rho(j,j)\, L_{i,j}^*.
\end{equation} Each $\mathfrak M^n(\rho,i)$ is a positive, trace-class operator on $\mathfrak h _i$ and $ \sum_{i\in V} \mathrm{Tr}\,\mathfrak M^n(\rho,i) =1$. Therefore, the range of $\mathfrak{M}$ is included in the set ${\cal I_D}$ of block-diagonal trace-class operators, $$ {\cal I_D}=\{ \rho=\sum_{i\in V}\rho(i)\otimes \ketbra ii, \, \sum_{i\in V}\mathrm{Tr}(|\rho(i)|)<+\infty \}, $$ and $\mathcal I_D^*$ can be identified with \[\mathcal B_D =\{ X=\sum_{i\in V} X(i)\otimes \ketbra ii, \ \mathrm{sup}_{j\in V}\,\|X(j)\|_{\mathcal B(\mathfrak h_j)}<\infty \}.\] This feature will be of great importance in the characterization of many properties of OQRWs, \textit{e.g.}: \begin{enumerate} \item the invariant states of an OQRW $\mathfrak M$ belong to $\cal I_D$, \item the reducibility of $\mathfrak M$ can be established by considering only block-diagonal projections (see section \ref{section_irreducibility}), and the only meaningful enclosures are generated by vectors of the form $x\otimes \vec i$ (see section \ref{section_nonirreducible}), \item the cyclic projections defining the period have block-diagonal form (see section~\ref{section_aperiodicity}). \end{enumerate} In addition, we remark from \eqref{eq_Mn2} that $\mathfrak{M}^n(\rho)$ depends only on the diagonal elements $\rho(i,i)$. Therefore, from now on, we will only consider states of the form $\rho=\sum_{i\in V}\rho(i)\otimes \ketbra ii$. Equation \eqref{eq_Mn2} remains valid, replacing $\rho(i,i)$ by~$\rho(i)$. \smallskip We now describe the (classical) processes of interest, associated with $\mathfrak{M}$. We start from a state $\rho$ which we assume to be of the form $\rho=\sum_{i\in V}\rho(i) \otimes \ketbra ii$. We evolve $\rho$ for a time $n$, obtaining the state $\mathfrak{M}^n(\rho)$ as in \eqref{eq_Mn1}. We then make a measurement of the position observable. According to standard rules of quantum measurement, we obtain the result $i\in V$ with probability $\mathrm{Tr}\,\mathfrak{M}^n(\rho,i)$.
Therefore, the result of this measurement is a random variable $Q_n$, with law $\mathbb{P}(Q_n=i)= \mathrm{Tr}\,\mathfrak M^n(\rho,i)$ for $i\in V$. In addition, if the position $Q_n=i\in V$ is observed, then the state is transformed to $\frac{\mathfrak M^n(\rho,i)}{\mathrm{Tr}\,\mathfrak M^n(\rho,i)}$. We call this process $(Q_n,\frac{\mathfrak{M}^n(\rho,Q_n)}{\mathrm{Tr}\,\mathfrak{M}^n(\rho,Q_n)})_n$ the process ``without measurement" to emphasize the fact that virtually only one measurement is done, at time $n$. Notice that, in practice, two values of this process at times $n<n'$ cannot be considered simultaneously, since the measurement at time $n$ perturbs the system, and therefore all subsequent measurements. This is reflected in the fact that \textit{a priori}, $\mathfrak{M}^n(\rho)$ and $\mathfrak{M}^{n'}(\rho)$ do not commute (see~\cite{DaviesBook} for a short introduction to the conceptual difficulties of associating random variables to operators). Now assume that we make a measurement at every time $n\in\mathbb{N}$, applying the evolution by $\mathfrak{M}$ between two measurements. Again assume that we start from a state $\rho$ of the form $\sum_{i\in V}\rho(i)\otimes |i\rangle \langle i|$. Suppose that at time $n$, the position was measured at $X_n=j$ and the state (after the measurement) is $\rho_n \otimes \ketbra {j}{j}$. Then after the evolution, the state becomes \[\mathfrak{M}(\rho_n\otimes \ketbra jj)= \sum_{i\in V} L_{i,j}\, \rho_n\, L_{i,j}^* \otimes \ketbra ii\] so that a measurement at time $n+1$ gives a position $X_{n+1}=i$ with probability $\mathrm{Tr}\, L_{i,j}\, \rho_n\, L_{i,j}^*$, and then the state becomes $\rho_{n+1}\otimes\ketbra ii$ with $\rho_{n+1}=\frac{L_{i,j}\, \rho_n\, L_{i,j}^*}{\mathrm{Tr}\, L_{i,j}\, \rho_n\, L_{i,j}^*}$.
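One step of this measured evolution is easy to sample numerically. The sketch below (a hypothetical two-site example, not taken from the text) draws $X_{n+1}$ with probability $\mathrm{Tr}\, L_{i,j}\,\rho_n\, L_{i,j}^*$ and forms the corresponding $\rho_{n+1}$; the stochasticity condition guarantees that the sampled probabilities sum to one.

```python
# Sketch of one measured step of an OQRW (hypothetical two-site example;
# the operators L[i][j] satisfy  sum_i L[i][j]^* L[i][j] = Id).
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
L = [[np.sqrt(1/3) * np.eye(2), np.sqrt(1/2) * np.eye(2)],
     [np.sqrt(2/3) * X,         np.sqrt(1/2) * np.eye(2)]]

def step(j, rho_n, rng):
    """From (X_n, rho_n) = (j, rho_n), sample (X_{n+1}, rho_{n+1})."""
    # unnormalized post-measurement states L_{i,j} rho_n L_{i,j}^*
    cand = [L[i][j] @ rho_n @ L[i][j].conj().T for i in range(len(L))]
    probs = np.array([np.trace(c).real for c in cand])
    i = rng.choice(len(L), p=probs)      # stochasticity: probs sum to one
    return i, cand[i] / probs[i]

rng = np.random.default_rng(0)
j, rho = 0, np.eye(2) / 2                # start at site 0, maximally mixed
for _ in range(10):                      # a short trajectory
    j, rho = step(j, rho, rng)
    assert np.isclose(np.trace(rho).real, 1.0)
```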
The sequence of random variables $(X_n,\rho_n)$ is therefore a Markov process with transitions defined by \[ \mathbb{P}\Big((X_{n+1},\rho_{n+1})\!=\!(i,\!\frac{L_{i,j}\,\rho_n\, L_{i,j}^*}{\mathrm{Tr}(L_{i,j}\,\rho_n\, L_{i,j}^*)}\!)\Big|(X_n,\rho_n)\!=\!(j,\rho_n)\Big)=\mathrm{Tr} (L_{i,j}\,\rho_n\, L_{i,j}^*)\quad \forall i\in V,\] and initial law $\mathbb{P}\big((X_0,\rho_0)=(i,\frac{\rho(i)}{\mathrm{Tr}\rho(i)})\big)=\mathrm{Tr}\,\rho(i).$ Note that the sequence of positions $X_0~=~i_0$, \ldots, $X_n=i_n$ is observed with probability \[\mathrm{Tr}\,L_{i_n,i_{n-1}}\ldots L_{i_1,i_0}\,\rho(i_0)\, L_{i_1,i_0}^*\ldots L_{i_n,i_{n-1}}^*\] and completely determines the state $\rho_n$: \begin{equation}\label{eq_rhon} \rho_n = \frac{L_{i_n,i_{n-1}}\ldots L_{i_1,i_0}\,\rho(i_0)\, L_{i_1,i_0}^*\ldots L_{i_n,i_{n-1}}^*}{\mathrm{Tr}\,L_{i_n,i_{n-1}}\ldots L_{i_1,i_0}\,\rho(i_0)\, L_{i_1,i_0}^*\ldots L_{i_n,i_{n-1}}^*}. \end{equation} As emphasized in \cite{APSS}, this implies that for every $n$ the laws of $X_n$ and $Q_n$ are the same, \textit{i.e.} \[ \mathbb{P}(X_n=i)=\mathbb{P}(Q_n=i)\quad \forall i\in V.\] It also implies for any $n$ the relation \begin{equation}\label{eq_exprhon} \mathbb{E}(\rho_n\otimes\ketbra {X_n}{X_n})=\mathfrak{M}^n(\rho). \end{equation} \section{Irreducibility for OQRWs}\label{section_irreducibility} In this and in the following sections, $\Phi$ is assumed to be a positive map on the ideal $\mathcal I_1(\mathcal H)$ of trace-class operators on some given Hilbert space $\mathcal H$. We recall that such a map is automatically bounded as a linear map on $\mathcal I_1(\mathcal H)$ (see \textit{e.g.} Lemma~2.2 in \cite{Sch}), so that it is also weak-continuous. In most practical cases, we will additionally assume that $\|\Phi\|=1$; as we noted in Remark \ref{remark_TPnormone}, this will be the case, in particular, if~$\Phi$ is trace-preserving.
We recall some standard notations: an operator $X$ on $\mathcal H$ is called positive, denoted $X\geq0$, if for all $\phi\in \mathcal H$, one has $\langle \phi, X\, \phi\rangle \geq 0$. It is called definite positive, denoted $X>0$, if for all $\phi\in \mathcal H\setminus\{0\}$, one has $\langle\phi, X\, \phi\rangle>0$. \smallskip We summarize here the definition of irreducibility introduced by Davies (see~\cite{Dav}), and other related notions. We shall see in Proposition \ref{ergodic=irreducible} that the first two (irreducibility and ergodicity) are equivalent. \begin{defi} The positive map $\Phi$ is called: \begin{itemize} \item irreducible if the only orthogonal projections $P$ reducing $\Phi$, \textrm{i.e.} such that~$\Phi\big(P\mathcal I_1(\mathcal H)P\big) \subset P\mathcal I_1(\mathcal H)P$, are $P=0$ and $\mathrm{Id}$, \item ergodic if, for any $\rho\geq 0$, $\rho\neq 0$ in $\mathcal I_1(\mathcal H)$, there exists $t$ such that~$\mathrm{e}^{t \Phi} (\rho)>0$, \item positivity-improving if, for any $\rho\geq 0$, $\rho\neq 0$ in $\mathcal I_1(\mathcal H)$, one has $\Phi(\rho)>0$, \item $n$-regular for $n\in{\mathbb N}^*$ if $\Phi^n$ is positivity improving. \end{itemize} \end{defi} \begin{remark} The condition $\Phi\big(P\mathcal I_1(\mathcal H)P\big) \subset P\mathcal I_1(\mathcal H)P$ is equivalent to the condition $\Phi(P)\leq \alpha P$ for some $\alpha>0$ whenever $P\in \mathcal I_1(\mathcal H)$, i.e. whenever $P$ is finite-dimensional. In the infinite-dimensional case one can prove that $P$ reduces $\Phi$ if and only if for any finite-dimensional projection $Q$ with $Q\leq P$, one has~$\Phi(Q)\leq \alpha P$. \end{remark} \begin{remark} An equivalent formulation of ergodicity is that for any $\rho\geq 0$, $\rho\neq 0$ in $\mathcal I_1(\mathcal H)$, for any $t>0$ one has $\mathrm{e}^{t \Phi} (\rho)>0$. This follows from the observation that the support projection of $\mathrm{e}^{t \Phi}(\rho)$ does not depend on $t>0$.
\end{remark} There is a possible confusion here due to the fact that some authors (\cite{FP}, \cite{Gro}) work in the Heisenberg representation, \textit{i.e.} in our notation consider $\Phi^*$, while others (\cite{EHK}, \cite{Sch}), like us, work in the Schr\"odinger representation. For completeness we give the next proposition, which connects the two representations: \begin{prop} Let $\Phi$ be a positive, trace-preserving map on~$\mathcal I_1(\mathcal H)$. \begin{itemize} \item An orthogonal projection $P$ reduces $\Phi$ if and only if $P\le \Phi^*(P)$, \textit{i.e.} $1-P$ reduces~$\Phi^*$, \item $\Phi$ is ergodic (resp. positivity improving, regular) if and only if $\Phi^*$ is ergodic (resp. positivity improving, regular). \end{itemize} \end{prop} \noindent{\bf Proof:} Assume first that $P$ reduces $\Phi$, i.e. $\Phi(P{\cal I}_1({\cal H})P)\subset P{\cal I}_1({\cal H})P$. Then for any~$\phi\in\mathcal H$ of norm one, \[\langle \phi,P \phi \rangle = \mathrm{Tr} (\ketbra{P\phi}{P\phi}) =\mathrm{Tr}\big(\Phi(|P\phi\rangle \langle P\phi|)\big)\] and, by the reduction assumption, this is \[\mathrm{Tr}\big(P\,\Phi(|P\phi\rangle \langle P\phi|)\big) = \langle P\phi,\Phi^*(P) P\phi \rangle \leq \langle \phi,\Phi^*(P) \, \phi \rangle,\] so that $P\leq \Phi^*(P)$. Conversely, if $P\le\Phi^*(P)$, then, for any trace-class $\rho\geq 0$, \[ \mathrm{Tr}(P\rho P) \leq \mathrm{Tr}(P\rho P\,\Phi^*(P)) = \mathrm{Tr}(P\,\Phi(P\rho P)\,P) \le \mathrm{Tr}(\Phi(P\rho P)) = \mathrm{Tr}(P\rho P). \] We therefore have the equality $\mathrm{Tr}(P\,\Phi(P\rho P)\,P) = \mathrm{Tr}(\Phi(P\rho P))$ which implies the inclusion $\Phi(P\rho P)\in P{\cal I}_1({\cal H})P$ for $\rho\geq 0$, hence for any $\rho \in\mathcal I_1(\mathcal H)$. To prove the second point consider \textit{e.g.} $\Phi$ ergodic. For any $\rho\geq 0$, $\rho\neq 0$ in~$\mathcal I_1$ one has $\mathrm{e}^{t \Phi} (\rho)>0$ for all $t>0$. 
So, for any bounded positive, non-zero operator~$X$, we have $ 0 < \mathrm{Tr}(X\mathrm{e}^{t \Phi} (\rho)) = \mathrm{Tr}(\mathrm{e}^{t \Phi^*} \!(X)\,\rho). $ Taking $\rho$ of the form~$|\phi\rangle\langle \phi|$, we deduce $\mathrm{e}^{t \Phi^*} (X)>0$. Other statements are proved in the same way. $\Box$ The article \cite{EHK} shows that irreducibility and ergodicity are equivalent, but considers only the finite-dimensional case. We extend this statement to the infinite-dimensional case below: \begin{prop}\label{ergodic=irreducible} A positive map $\Phi$ on $\mathcal I_1(\mathcal H)$ is ergodic if and only if it is irreducible. \end{prop} \noindent{\bf Proof:} If $\Phi$ is not irreducible, then there exists a non-trivial projection $P$ and a non-negative trace-class operator $\rho$ such that $\Phi(P\rho P)\leq \alpha P$ and $P\rho P\neq 0$ but then, for any $t$, one has $\mathrm{e}^{t\Phi}(P\rho P)\leq \mathrm{e}^{t\alpha} P$ so that $\mathrm{e}^{t\Phi}(P\rho P)$ is non-definite for all $t$ and $\Phi$ is not ergodic. To prove the converse we use the characterization in terms of the dual $\Phi^*$. Assume $\Phi$, hence $\Phi^*$, is irreducible, consider $X\geq 0$, $X\neq 0$ in $\mathcal B(\mathcal H)$; for a fixed~$t>0$ let $$ e_p(X)= \sum_{k=0}^p \frac{t^k}{k!} \Phi^{*k}(X).$$ Define $P$ to be the support projection of $\mathrm{e}^{t\Phi^*}(X)$ and $P_{p}=\ind_{[1/p,+\infty[}(e_p(X))$. Obviously $P_p\leq P$ and $P_p \leq p \, e_p(X)$ for all $p$, and $P_p \to P$ in the sense of strong convergence as $p\to\infty$, thanks to the properties of bounded measurable functional calculus (see \textit{e.g.} Theorem VII.2 in \cite{RS1}). 
We have: \begin{eqnarray*} \frac1p \,\Phi^*(P_{p}) & \leq & \Phi^*(e_p(X)) = \sum_{k=1}^{p+1} \frac kt \frac{t^k}{k!} \,\Phi^{*k}(X)\\ &\leq& \frac{p+1}t \,e_{p+1}(X) \leq \frac{p+1}t \, \mathrm{e}^{t\Phi^*}(X) \end{eqnarray*} so that $\mathrm{supp}\,\Phi^*(P_{p})\subset \mathrm{supp}\,P$ and, by the weak-$\ast$ continuity of $\Phi^*$, one has $\mathrm{supp}\,\Phi^*(P)\subset\mathrm{supp}\, P$, \textit{i.e.} $P$ reduces $\Phi^*$. Since $\mathrm{e}^{t\Phi^*}(X)\geq X$, the projector $P$ cannot be zero, so by irreducibility $P$ is $\mathrm{Id}$ and $\mathrm{e}^{t\Phi^*}(X)>0$. $\Box$ \begin{remark} In \cite{EHK}, ergodicity is defined for finite dimensional $\mathcal H$ by the property that $(\mathrm{Id}+\Phi)^{\mathrm{dim}\,\mathcal{H}-1}$ is positivity-improving. This is equivalent to our definition: see the remark following Lemma 3.1 in \cite{Sch}. \end{remark} \smallskip When speaking about reducibility/irreducibility of quantum maps, one enters a jungle of different approaches and terminologies, which, in many cases, are essentially equivalent. Concerning this, we recall that a reducing projection~$P$ is called by some authors a \textit{subharmonic projection} for $\Phi^*$, following the line common to the classical literature on Markov chains. Also, more recently (in \cite{BN}, as far as we know), the notion of enclosure has been introduced in the context of CP-TP maps. A closed vector space $\mathcal V$ is called an \textit{enclosure} if~$\mathrm{supp} \, \rho\subset \mathcal V$ implies $\mathrm{supp} \, \mathfrak M(\rho)\subset \mathcal V$. It is immediate that a space $\mathcal V$ is an enclosure if and only if the projection $P$ on $\mathcal V$ reduces~$\mathfrak M$. So, an equivalent way to define irreducibility is asking that there exist no non-trivial enclosures. The notion of enclosure will be crucial in the discussion of decompositions of reducible open quantum random walks (see section \ref{section_nonirreducible}). 
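The criterion $P\leq\Phi^*(P)$ for a reducing projection, established above, is easy to test numerically on small examples. The sketch below uses a toy channel with hypothetical diagonal Kraus operators (none of the choices come from the text): the projection onto the first two basis vectors reduces the channel, while a projection mixing the two blocks does not.

```python
# Numerical sketch of the reducing criterion  P <= Phi^*(P)  for a toy
# channel with hypothetical diagonal Kraus operators.
import numpy as np

kraus = [np.diag([1., 1., 1., 1.]) / np.sqrt(2),
         np.diag([1., 1., -1., -1.]) / np.sqrt(2)]
# trace preservation: sum_k K^* K = Id
assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(4))

def dual(Xop):
    """Phi^*(X) = sum_k K^* X K  (Heisenberg picture)."""
    return sum(K.conj().T @ Xop @ K for K in kraus)

# the projection onto the first block reduces Phi:  P <= Phi^*(P)
P = np.diag([1., 1., 0., 0.])
assert np.linalg.eigvalsh(dual(P) - P).min() >= -1e-12

# a projection mixing the two blocks does not reduce Phi
v = np.zeros(4)
v[0] = v[2] = 1 / np.sqrt(2)
Q = np.outer(v, v)
assert np.linalg.eigvalsh(dual(Q) - Q).min() < -1e-6
```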
\medskip Next, we characterize irreducibility in terms of unravellings. We consider a completely positive, trace-preserving map $\Phi$ and fix an unravelling $(A_\kappa)_{\kappa\in K}$ of~$\Phi$, provided by Kraus' representation theorem (see \cite{Kraus} or \cite{NieChu}, where this is called the operator-sum representation): \begin{equation}\label{eq_Kraus}\Phi(\rho)=\sum_{\kappa\in K} A_\kappa \rho A_\kappa^\ast.\end{equation} We will characterize irreducibility (and the property of being positivity-improving or $n$-regular) in terms of such an unravelling $(A_\kappa)_{\kappa\in K}$. We denote by $\mathbb{C}[A]$ the set of polynomials in the $A_\kappa$, \textit{i.e.} the algebra (\textit{not} the $*$-algebra) generated by the operators $A_\kappa$, $\kappa\in K$. The following result summarizes Schrader's Lemmas 3.3 and 3.4 (\cite{Sch}): \begin{lemme}\label{lemma_ergodicityCPTPmaps} A completely positive map $\Phi$ of the form \eqref{eq_Kraus} is: \begin{itemize} \item positivity improving if and only if for any $\phi\in\mathcal H\setminus\{0\}$, the set $\{A_\kappa \,\phi, \, \kappa\in~K\}$ is total in~$\mathcal H$, \item $n$-regular if and only if for any $\phi\in\mathcal H\setminus\{0\}$, the set $\{A_{\kappa_1}\!\ldots\! A_{\kappa_n} \phi, \, \kappa_1,\ldots,\kappa_n\in~K\}$ is total in $\mathcal H$, \item irreducible if and only if for any $\phi\in\mathcal H\setminus\{0\}$, the set $ \mathbb{C}[A] \, \phi$ is dense in~$\mathcal H$. \end{itemize} \end{lemme} Lemmas 3.3 and 3.4 in \cite{Sch} are stated in terms of the operators $A_\kappa^*$. The connection with our statement comes from the following straightforward lemma: \begin{lemme}\label{lemma_totality} Consider a family $(A_\kappa)_{\kappa\in K}$ of operators on $\mathcal H$.
Then the following are equivalent: \begin{itemize} \item for any $\phi\in\mathcal H\setminus\{0\}$, the set $\{A_\kappa\,\phi, \kappa\in K\}$ is total in $\mathcal H$, \item for any $\phi$ and $\psi$ in $\mathcal H\setminus\{0\}$, there exists $\kappa$ in $K$ such that $\langle \psi, A_\kappa \,\phi\rangle\neq 0$, \item for any $\phi\in\mathcal H\setminus\{0\}$, the set $\{A_\kappa^*\, \phi, \kappa\in K\}$ is total in $\mathcal H$. \end{itemize}\end{lemme} Before we state our characterization of irreducibility for open quantum random walks, let us introduce some notation: for $i,j$ in $V$ we call a \textit{path from $i$ to~$j$} any finite sequence $i_0,\ldots,i_\ell$ in $V$ with $\ell\geq 1$, such that $i_0=i$ and $i_\ell=j$. Such a path is said to be \textit{of length $\ell$}. We denote by $\mathcal P (i,j)$ (resp. $\mathcal P_\ell (i,j)$) the set of paths from $i$ to $j$ of arbitrary length (resp. of length $\ell$). A path from $i$ to $i$ will be called a loop; by convention we consider the sequence $\{i\}$ as a loop (with length one), \textit{i.e.} an element of $\mathcal P(i,i)$.
For $\pi=(i_0,\ldots,i_\ell)$ in $\mathcal P (i,j)$ we denote by $L_\pi$ the operator from $\mathfrak h_i$ to $\mathfrak h_j$: $$ L_\pi=L_{i_\ell,i_{\ell-1}}\ldots L_{i_1,i_0}=L_{j,i_{\ell-1}}\ldots L_{i_1,i}.$$ We can now prove: \begin{prop}\label{prop_ergodicityOQRW} The CP-TP map $\mathfrak{M}$ is irreducible if and only if, for every $i$ and $j$ in $V$, one of the following equivalent conditions holds: \begin{itemize} \item for any $x$ in $\mathfrak h_i\setminus\{0\}$, the set $\{L_\pi x \, |\, \pi\in\mathcal P(i,j)\}$ is total in $\mathfrak h_j$, \item for any $x$ in $\mathfrak h_i\setminus\{0\}$ and $y$ in $\mathfrak h_j\setminus\{0\}$ there exists a path $\pi$ in $\mathcal P(i,j)$ such that~$\langle y, L_\pi x\rangle~\neq~0.$ \end{itemize} \end{prop} \noindent{\bf Proof:} This is an immediate application of Lemmas \ref{lemma_ergodicityCPTPmaps} and \ref{lemma_totality}, and the observation that, if $A_{j,i}=L_{j,i}\otimes\ketbra ji$, then \[ A_{j_\ell,i_\ell}\ldots A_{j_1,i_1}=\left\{ \begin{array}{ll} L_{j_\ell,i_{\ell}}\ldots L_{i_2,i_1} \otimes \ketbra {j_\ell}{i_1} & \mbox{ if }i_\ell=j_{\ell-1},\ldots, i_2=j_1,\\ 0 & \mbox{ otherwise.}\end{array}\right. \Box \] \smallskip As a first application we prove that ``positivity-improving" is an essentially useless notion in the framework of open quantum random walks, and that we have a constraint on the values $n$ allowing $n$-regularity: \begin{coro} The CP-TP map $\mathfrak{M}$ is positivity-improving if and only if every~$\mathfrak h_i$ is one-dimensional and the underlying classical Markov process has positive transition probabilities. 
It is $n$-regular if and only if, for any $i$ and $j$ in $V$, one of the equivalent formulations holds: \begin{itemize} \item for any nonzero $x$ in $\mathfrak h_i$, the set $\{L_\pi x \, |\, \pi\in\mathcal P_n(i,j)\}$ is total in $\mathfrak h_j$, \item for any nonzero $x$ in $\mathfrak h_i$ and $y$ in $\mathfrak h_j$ there exists a path $\pi$ in $\mathcal P_n(i,j)$ such that $\langle y, L_\pi x\rangle \neq 0.$ \end{itemize} A necessary condition for $n$-regularity is $\mathrm{card}\, \mathcal P_n(i,j)\geq \mathrm{dim}\, \mathfrak h_j$ for all $i,j\in V$. \end{coro} \noindent{\bf Proof:} By the first point of Lemma \ref{lemma_ergodicityCPTPmaps}, $\mathfrak{M}$ is positivity-improving iff, for all $i,j\in V$, the set $\{L_{j,i}\,x\}$ is total in $\mathfrak h_j$ for any nonzero $x$ in $\mathfrak h_i$. Since $\mathrm{dim}\,\mathrm{Vect}\{L_{j,i}\,x\} \leq 1$, this is possible only if $\mathrm{dim}\, \mathfrak h_j=1$. In that case, the open quantum random walk is the minimal dilation of a classical Markov chain and the statement is obvious. The other statements are obtained by applying these requirements to $\mathfrak M^n$. $\Box$ \smallskip We can therefore give the following definition for an irreducible OQRW, which emphasizes our interpretation in terms of paths. \begin{defi}\label{defi_irroqrw} Let $\mathfrak M$ be an open quantum random walk. We say that two sites $i,j$ in $V$ are connected by $\mathfrak M$, which we denote by $i\overset{\mathfrak M}{\rightarrow} j$, if one of the equivalent conditions of Proposition \ref{prop_ergodicityOQRW} holds. As we have shown, $\mathfrak M$ is irreducible if and only if, for any two $i$ and $j$ in $V$, one has $i\overset{\mathfrak M}{\rightarrow} j$ and $j\overset{\mathfrak M}{\rightarrow} i$. \end{defi} \begin{remark} A minimal dilation of a classical Markov chain is irreducible if and only if the Markov chain is irreducible in the classical sense.
\end{remark} Until now, we have basically found necessary and sufficient conditions for irreducibility of an open quantum random walk. In section \ref{section_nonirreducible} we will discuss decompositions of reducible open quantum random walks into irreducible ones. \smallskip The following proposition essentially comes from \cite{Sch}: \begin{prop}\label{prop_Schrader} Assume a 2-positive map $\Phi$ on $\mathcal I_1(\mathcal H)$ has an eigenvalue $\lambda$ of modulus $\|\Phi\|$, with eigenvector $\rho$. Then: \begin{itemize} \item $\|\Phi\|$ is also an eigenvalue, with eigenvector $|\rho|$, \item if $\Phi$ is irreducible, then $\lambda$ is a simple eigenvalue. \end{itemize} In particular, if $\Phi$ is irreducible and has an eigenvalue of modulus $\|\Phi\|$, then~$\|\Phi\|$ is a simple eigenvalue, with an eigenvector that is definite-positive. \end{prop} \begin{remark} Here and in the rest of this paper, by a simple eigenvalue of an operator $\Phi$ we mean a scalar $\lambda$ such that $\mathrm{dim\,Ker}\,(\Phi-\lambda\,\mathrm{Id})=1$. \end{remark} \noindent{\bf Proof:} Theorems 4.1 and 4.2 from \cite{Sch} give us the first two statements. The third one follows from the fact that $\exp\|\Phi\|\times |\rho|=(\exp\Phi)(|\rho|)>0$ by irreducibility. $\Box$ \smallskip In relation with the above results we can prove the following ergodic convergence result for irreducible 2-positive, trace-preserving maps, which applies in particular to the case of CP-TP maps and can be seen as a discrete time version of the Frigerio-Verri ergodic theorem (\cite{FV}, Theorem 1.1): \begin{prop}\label{prop_ergodicconvergence} Let $\Phi$ be a positive contraction on $\mathcal I_1(\mathcal H)$ that has~$1$ as a simple eigenvalue. 
Then the associated eigenvector is (up to normalization) an invariant state $\rho^{\mathrm{inv}}$ and, for any state $\rho$, one has the weak convergence \begin{equation}\label{eq_propergodicconvergence} \frac1n\sum_{k=0}^{n-1} \Phi^k(\rho)\underset{n\to\infty}{\longrightarrow} \rho^{\mathrm{inv}}. \end{equation} \end{prop} \noindent{\bf Proof:} Consider an invariant trace-class operator $\rho^{\mathrm{inv}}$. Since $\Phi$ preserves positivity, one can assume that $\rho^{\mathrm{inv}} \geq 0$ and by necessity its trace is non-zero, so it can be assumed to have trace one. Define $\Psi_n=\frac1n\sum_{k=0}^{n-1}\Phi^k$. One has $ \mathrm{Tr}[\Psi_n(\rho)\,X]=\mathrm{Tr}[\rho \,\Psi_n^*(X)]$. By the Banach-Alaoglu theorem, $\Psi_n^*(X)$ has weak-$\ast$ convergent subsequences. Denote by $Y$ the weak-$\ast$ limit of a subsequence $\Psi_{n_k}^*(X)$; one has $\Phi^*\circ\Psi_{n_k}^*(X)\to \Phi^*(Y)$ by the weak-$\ast$-continuity of $\Phi^*$, and, for any trace-class~$\rho$, \begin{eqnarray*} \mathrm{Tr}[\rho(\mathrm{Id}-\Phi^*)(Y)]&=&\lim_k\mathrm{Tr}[\rho\,(\mathrm{Id}-\Phi^*)\,\Psi_{n_k}^*(X)]\\ &=& \lim_k \mathrm{Tr}[\rho\, \frac1{n_k}\sum_{j=0}^{n_k-1}(\Phi^{*j}-\Phi^{*(j+1)})(X)] \\ &=&\lim_k\mathrm{Tr}[\rho\, \frac1{n_k}(\mathrm{Id}-\Phi^{*{n_k}})(X)] = 0, \end{eqnarray*} so that $\mathrm{Tr}\left[(\mathrm{Id}-\Phi)(\rho)\, Y\right]=0$, for any $\rho$, implying $Y\in\left(\mathrm{Ran}\, (\mathrm{Id}-\Phi)\right)^\perp$. That space is of dimension one by assumption, so that $Y=\lambda_X \mathrm{Id}$ and we have $\lim_k \mathrm{Tr}\left[\Psi_{n_k}(\rho)\,X\right]= \lambda_X$ for any trace-class $\rho$. Writing this for $\rho$ equal to the eigenvector $\rho^{\mathrm{inv}}$ leads to $\lambda_X=\mathrm{Tr}(\rho^{\mathrm{inv}} X)$, showing that $\lambda_X$ is independent of the subsequence $(n_k)_k$. When $\rho$ is a state we obtain the convergence \eqref{eq_propergodicconvergence}. This concludes the proof.
$\Box$ The following theorem is a direct application of Proposition \ref{prop_Schrader}: \begin{theo}\label{theo_unicity} An irreducible open quantum random walk $\mathfrak{M}$ has an invariant state if and only if $1$ is an eigenvalue of $\mathfrak M$. If it does, then it has only one, and that invariant state is faithful. \end{theo} A second theorem follows from Proposition \ref{prop_ergodicconvergence}: \begin{theo}\label{theo_ergodicconvergence} Assume that an open quantum random walk $\mathfrak{M}$ is irreducible and has an invariant state~$\rho^{\mathrm{inv}}$. For any state $\rho$, one has $\frac 1n\sum_{k=0}^{n-1}\mathfrak M ^k(\rho) \to \rho^{\mathrm{inv}}$ weakly. \end{theo} \section{Period and aperiodicity for OQRWs}\label{section_aperiodicity} As in the previous section, we start with a review of the notion of period for a positive trace-preserving map $\Phi$. Here we follow Fagnola and Pellicer (\cite{FP}) and Groh (\cite{Gro}). We define $\oset{{\tiny d}}{-}$ to be subtraction \textit{modulo} $d$. \begin{defi}\label{def-period} Let $\Phi$ be a positive, trace-preserving, irreducible map and let $(P_0,\ldots,P_{d-1})$ be a resolution of identity, \emph{i.e.} a family of orthogonal projections such that $\sum_{k=0}^{d-1} P_k=\mathrm{Id}$. One says that $(P_0,\ldots,P_{d-1})$ is $\Phi$-cyclic if $\Phi^*(P_k)=P_{k\oset{{\tiny d}}{-} 1}$ for $k=0,\ldots,d-1$. The supremum of all $d$ for which there exists a $\Phi$-cyclic resolution of identity $(P_0,\ldots,P_{d-1})$ is called the \textrm{period} of $\Phi$. If $\Phi$ has period~$1$ then we call it aperiodic. \end{defi} \begin{remark} We recall that a characterization of a cyclic resolution of the identity, albeit still in an embryonic form, was already given in the Schr\"odinger picture in \cite{EHK}, Theorem 3.4.
\end{remark} \begin{example}\label{ex_Orey} Define a quantum Orey chain to be a CP-TP map $\Phi$ on $\mathcal I_1(\mathcal H)$ such that for any $\rho,\sigma$ in $\mathcal I_1(\mathcal H)$ one has $\Phi^n(\rho)-\Phi^n(\sigma)\rightarrow0$ in trace-norm, as~$n\to\infty$. A quantum Orey chain is aperiodic. Indeed, if $(P_0,\ldots,P_{d-1})$ is a cyclic resolution of identity with $d\geq 2$ then for states $\rho$, $\sigma$ satisfying $ \mathrm{supp}\,\rho\subset \mathrm{Ran}\, P_0$, $\mathrm{supp}\,\sigma\subset \mathrm{Ran}\, P_1$, we have $$ \mathrm{Tr}{\big((\Phi^n(\sigma)-\Phi^n(\rho))P_k\big)} =\mathrm{Tr}{\big((\sigma-\rho)\Phi^{*n}(P_k)\big)} =\left\{\begin{array}{ll} 1 & \mbox{ for } k\oset{{\tiny d}}{-} n = 1 \\ -1 & \mbox{ for } k\oset{{\tiny d}}{-} n = 0 \\ 0 & \mbox{ otherwise, } \end{array}\right. $$ which contradicts the Orey property. \end{example} The following result is a combination of Theorems 3.7 and 4.3 of Fagnola-Pellicer in \cite{FP} (the latter was also partially proven by Groh in \cite{Gro}). Note that these results are proven in finite dimension, but they immediately extend to infinite dimension. \begin{prop}\label{prop_aperiodicCPTP} If $\Phi$ is an irreducible, 2-positive map on $\mathcal I_1(\mathcal H)$ and has finite period $d$ then: \begin{itemize} \item the peripheral point spectrum of $\Phi^*$, i.e. the set $\mathrm{Sp}_{pp}\,\Phi^*$, is a subgroup of the circle group $\mathbb T$, \item the primitive root of unity $\mathrm{e}^{\mathrm{i} 2\pi/d}$ belongs to $\mathrm{Sp}_{pp}\,\Phi^*$ if and only if $\Phi$ is~$d$-periodic. \end{itemize} \end{prop} The following result is an immediate consequence of Proposition \ref{prop_aperiodicCPTP}.
\begin{prop}\label{prop_cvgCPTPaperiodic} If a $2$-positive TP map $\Phi$ on $\mathcal I_1(\mathcal H)$ is irreducible and aperiodic with invariant state $\rho^{\mathrm{inv}}$, and $\mathcal H$ is finite-dimensional, then \begin{itemize} \item $\mathrm{Sp_{pp}}\,\Phi\cap\mathbb T = \{1\}$, \item for any $\rho\in \mathcal I_1(\mathcal H)$ one has $\Phi^n(\rho)\rightarrow \rho^{\mathrm{inv}}$ as $n\to\infty$. \end{itemize} \end{prop} In order to apply these notions to OQRWs, we will need to characterize the period of a completely positive map in terms of an unravelling. We therefore fix an unravelling $A=(A_\kappa)_{\kappa\in K}$ of $\Phi$, \textit{i.e.} $\Phi(\rho)=\sum_{\kappa\in K}A_\kappa\,\rho\, A_\kappa^*$. \begin{defi} Let $(P_0,\ldots,P_{d-1})$ be a resolution of identity. One says that it is $A$-cyclic if $P_j A_\kappa= A_\kappa P_{j\oset{{\tiny d}}{-} 1}$ for $j=0,\ldots,d-1$ and any $\kappa$. \end{defi} The following is Theorem 5.4 from \cite{FP}, which again extends to the infinite-dimensional case, with the same proof. \begin{prop}\label{prop_Vcyclic} Let $\Phi$ be an irreducible CP-TP map on $\mathcal I_1(\mathcal H)$. A resolution of the identity is $\Phi$-cyclic if and only if it is $A$-cyclic. \end{prop} \begin{remark} For a $\Phi^*$-invariant weight (not necessarily a state) $\rho$ and a cyclic resolution of identity $(P_0,\ldots,P_{d-1})$, every projection $P_k$ has the same weight, since for any fixed $k$ we have $\rho(P_k)=\rho(\Phi^{*n}(P_k))$ for all $n$, and, in particular for $n=k$, we get $\rho(P_k)=\rho(P_0)$. \end{remark} Now, we consider once again the special case of an OQRW $\mathfrak M$; with the notations introduced in previous sections, the associated unravelling is given by the operators $A_{i,j}=L_{i,j}\otimes\ketbra i j$, for $i,j\in V$.
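As a simple sanity check of the notion of $A$-cyclicity, consider the minimal dilation of the deterministic two-state chain that alternates between its sites. The sketch below (an assumed toy example, not taken from the text) verifies the relation $P_j A_\kappa = A_\kappa P_{j\oset{{\tiny d}}{-}1}$ with $d=2$, together with its $\Phi$-cyclic counterpart.

```python
# Sketch: minimal dilation of the deterministic two-state chain 0 <-> 1.
# Here each h_i = C, L_{1,0} = L_{0,1} = 1, so the Kraus operators are
# A_{1,0} = |1><0| and A_{0,1} = |0><1| on l^2({0,1}).
import numpy as np

A_ops = [np.array([[0., 0.], [1., 0.]]),     # A_{1,0} = |1><0|
         np.array([[0., 1.], [0., 0.]])]     # A_{0,1} = |0><1|

P = [np.diag([1., 0.]), np.diag([0., 1.])]   # candidate cyclic resolution
d = 2

# A-cyclicity:  P_j A = A P_{(j-1) mod d}  for every Kraus operator A
for j in range(d):
    for A in A_ops:
        assert np.allclose(P[j] @ A, A @ P[(j - 1) % d])

# consistency with Phi-cyclicity:  Phi^*(P_0) = P_1
dualP0 = sum(A.conj().T @ P[0] @ A for A in A_ops)
assert np.allclose(dualP0, P[1])
```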
\begin{prop}\label{prop_eqperiodicity} A resolution of the identity $(P_0,\ldots,P_{d-1})$ is cyclic for an irreducible open quantum random walk $\mathfrak{M}$ if and only if $P_k=\sum_{j\in V} P_{k,j}\otimes |j\rangle \langle j|$ for every $k$, with projectors $P_{k,j}$ satisfying the relation \begin{equation}\label{eq_periodicity} P_{k,i} L_{i,j} = L_{i,j} P_{k\oset{{\tiny d}}{-} 1,j}. \end{equation} \end{prop} \noindent{\bf Proof:} Assume that there exists an $A$-cyclic resolution of identity $(P_0,\ldots, P_{d-1})$. Since $\mathfrak{M}^*(P_k)=P_{k\oset{{\tiny d}}{-} 1}$, every $P_k$ is block-diagonal, \textit{i.e.} $P_k=\sum_j P_{k,j}\otimes |j\rangle \langle j|$, and from Proposition \ref{prop_Vcyclic}, for any $i,j$ in $V$: $ P_k \, L_{i,j}\otimes |i\rangle\langle j| = L_{i,j}\otimes |i\rangle\langle j| \, P_{k\oset{{\tiny d}}{-} 1}$. This gives relation \eqref{eq_periodicity}. The converse is obvious. $\Box$ \begin{remark}\label{remark-nonuniquedec} For classical, irreducible, $d$-periodic Markov chains with stochastic matrix $K$, the cyclic components are uniquely determined and coincide with the irreducible communication classes $C_0,\ldots, C_{d-1}$ of the (aperiodic) Markov chain with transition matrix $K^d$. In the quantum context, the role of the partition $C_0,\ldots, C_{d-1}$, or, better yet, of the corresponding indicator functions $\ind_{C_0},\ldots, \ind_{C_{d-1}}$, is played by the cyclic projections $P_0,\ldots,P_{d-1}$. Indeed, notice that in the classical case $K \ind_{C_k}= \ind_{C_{k-1}}$ and, for the minimal dilation of this Markov chain, the cyclic projections $P_0,\ldots,P_{d-1}$ are uniquely determined as $P_k=\sum_{j\in C_k}|j\rangle\langle j|$. 
However, an important difference should be underlined, with respect to the classical case: in general, the resolution of the identity which verifies the definition of the period is not uniquely determined, since the decomposition of $\Phi^d$ into minimal irreducible components is not unique in general, as we will see in section \ref{section_stationary}. An example of this fact can be easily constructed, as we now describe. \end{remark} \begin{example} Take an OQRW $\mathfrak M$ with two sites and $\mathfrak h_1=\mathfrak h_2={\mathbb C}^2$, and introduce the matrix $R=\begin{pmatrix}0& -1\\ 1 & 0\end{pmatrix}$. Then we consider $$ L_{11}=L_{21}=\frac{\mathrm{i}}{ \sqrt{2\,}}\, R, \qquad L_{12}=L_{22}= - \frac{\mathrm{i}}{ \sqrt{2\,}}\, R. $$ This $\mathfrak M$ is an irreducible OQRW (by a direct application of Proposition \ref{prop_ergodicityOQRW}) with period 2, and the cyclic projections $P_0,P_1$ can be chosen in different ways: $$ P_0^{(x)} = |x\rangle\langle x| \otimes |1\rangle\langle 1| + |x\rangle\langle x| \otimes |2\rangle\langle2|, \quad P_1^{(x)} = |Rx\rangle\langle Rx| \otimes |1\rangle\langle1| + |Rx\rangle\langle Rx| \otimes |2\rangle\langle2| $$ is a cyclic decomposition of the OQRW for any norm-one vector $x$ in $\mathbb{C}^2$. As mentioned above, this is due to the fact that the map $\mathfrak M^2$ does not have a unique decomposition in irreducible components: $\mathfrak{M}^2$ is the OQRW with all transition operators $L$ equal to $ \mathrm{Id} /\sqrt 2$. \end{example} We now discuss some results which will give us simple sufficient criteria for aperiodicity of an open quantum random walk. \begin{lemme}\label{prop_caracaperiodicite} Let $\mathfrak{M}$ be a $d$-periodic open quantum random walk. Let $i,j\in V$ and $x\in\mathrm{Ran\,}P_{k,i}$, $y\in\mathrm{Ran\,}P_{k',j}$ for some $k,k'\in\{0,\ldots,d-1\}$. For any path~$\pi\in\mathcal P(i,j)$ of length $\ell$ one has $ \langle y, L_\pi x\rangle =0$ unless $k'-k=\ell \ \mathrm{mod}\,d$. 
\end{lemme} \noindent{\bf Proof:} Relation \eqref{eq_periodicity} implies that $L_\pi x$ belongs to the range of $P_{k\oset{{\tiny d}}{+}\ell,j}$. $\Box$ \begin{theo}\label{theo_caracaperiodicite} Consider an irreducible open quantum random walk. For $i$ in~$V$, $x$ in $\mathfrak h_i$, define \begin{equation}\label{def_Dix} D(i,x) = {\rm GCD}\{ \ell\ge 1 \,|\, \exists \pi \in {\cal P}_\ell(i,i)\;{\rm s.t.}\; \langle x,L_\pi x\rangle \neq 0 \}. \end{equation} Then, for every $x$ in the range of $P_{k,i}$, the period $d$ is a divisor of $D(i,x)$. In particular, if there exists $i$ in $V$ such that, for all $x\in \mathfrak h_i,\, D(i,x)=1$, then the open quantum random walk is aperiodic. \end{theo} \noindent{\bf Proof:} Irreducibility implies that the defining set of $\ell$'s is nonempty, so that $D(i,x)$ is well-defined. The result follows from Lemma \ref{prop_caracaperiodicite}. $\Box$ \begin{coro}\label{coro_perturbation} Consider an irreducible open quantum random walk $\mathfrak{M}$. If there exists $i$ in $V$ such that \begin{equation}\label{eq_coroperturbation} \forall x\in \mathfrak h_i,\, \langle x,L_{i,i} x\rangle\neq0 \end{equation} then the open quantum random walk is aperiodic. \end{coro} \begin{remark} The definition of the quantity $D(i,x)$ in Theorem~\ref{theo_caracaperiodicite} has an interpretation in terms of paths, and is reminiscent of the definition of the period for a state $i$ of a classical Markov chain with transition matrix $K$, \textit{i.e.} $D(i)=\mathrm{GCD}\,\{ \ell\geq 1\, |\, K_{ii}^{\ell}>0\}$. In addition, $D(i)$ coincides with \eqref{def_Dix} when applied to an OQRW which is a minimal dilation of the Markov chain. In the quantum context, however, $D(i,x)$ does not always coincide with the period and, in particular, may depend on the argument $(i,x)$ even if the OQRW is irreducible (see Example \ref{example_Dnotconstant}).
Even worse, the relation $d\, |\, D(i,x)$ may not hold if $x$ does not belong to the range of some $P_{k,i}$. Since the $P_k$ are \emph{a priori} unknown, the practical study of the period of an OQRW is difficult when simple sufficient conditions (such as the condition for aperiodicity given in Theorem \ref{theo_caracaperiodicite}) do not hold. \end{remark} \smallskip In difficult cases, the following result can be helpful: \begin{prop}\label{prop_caracaperiodic} Consider an irreducible, finite, $d$-periodic open quantum random walk $\mathfrak{M}$. If for some $i$ in $V$, and some $\ell$ coprime with $d$, there exists a loop $\pi\in\mathcal P_\ell(i,i)$ such that $L_\pi$ is invertible, then $d$ is a divisor of $\mathrm{dim}\,\mathfrak h_i$. \end{prop} \noindent{\bf Proof:} By B\'ezout's lemma, for any $k$ in $\{0,\ldots, d-1\}$ there exists an integer $a$ such that $a\ell=k \, \mathrm{mod}\, d$. Then $L_\pi^a P_{0,i} L_\pi^{-a}=P_{k,i}$, so that $\mathrm{rank}\, P_{k,i}$ does not depend on $k$. Therefore $\mathrm{dim}\, \mathfrak h_i= d\, \mathrm{rank}\, P_{0,i}$ and the conclusion follows. $\Box$ \begin{remark}\label{remark_loops} As a consequence of Corollary \ref{coro_perturbation}, starting from a finite irreducible periodic open quantum random walk $\mathfrak M$ we can perturb it into an aperiodic one, $\mathfrak M_{(\varepsilon)}$, in different ways. If there exists $i_0$ in $V$ such that $L_{i_0,i_0}=0$, then one possible way is to define \[ L_{i,j}^{(\varepsilon)}=L_{i,j} \mbox{ if }j\neq i_0\quad \mbox{and} \quad L_{i,i_0}^{(\varepsilon)} =\left\{\begin{array}{ll}\sqrt{\varepsilon\,}\,\mathrm{Id}& \mbox{ if }i=i_0,\\ \sqrt{1-\varepsilon\,}\,L_{i,i_0} & \mbox{ if }i\neq i_0. \end{array}\right. \] This is the analogue of ``adding a loop'' for classical Markov chains. Another way is to ``add a loop'' at every site, a method we will use in Example \ref{ex_Vn}.
\end{remark} For clarity we restate Proposition \ref{prop_cvgCPTPaperiodic} specifically for OQRWs: \begin{theo}\label{theo_convergenceaperiodic} Consider an irreducible, aperiodic and finite open quantum random walk $\mathfrak{M}$. For any state $\rho$, the sequence $(\mathfrak M^n(\rho))_n$ converges to the invariant state~$\rho^{\mathrm{inv}}$ (which is unique, and faithful). \end{theo} \section{Ergodic properties of irreducible OQRWs}\label{section_recurrence} We will now discuss the consequences of the previous theoretical results in terms of ergodic properties of irreducible open quantum random walks. A first result in this direction is the following, which is a consequence of the ergodic theorem due to K\"ummerer and Maassen (\cite{KuMa}). For completeness we give a self-contained proof in the present framework. \begin{theo}[K\"ummerer-Maassen]\label{theo_kuma} If the open quantum random walk $\mathfrak M$ is finite then there exists a random variable $\rho^{\mathrm{inv}}= \sum_{i\in V}\rho^{\mathrm{inv}}(i)\otimes|i\rangle\langle i |$ with values in the set of invariant states on $\mathcal H=\bigoplus_{i\in V}\mathfrak h_i$ such that almost-surely, \[ \frac 1n\sum_{k=0}^{n}\rho_k\otimes|X_k\rangle\langle X_k| \underset{n\to\infty}{\longrightarrow} \sum_{i\in V}\,\rho^{\mathrm{inv}}(i)\otimes|i\rangle\langle i |.\] \end{theo} \noindent{\bf Proof:} Let $\eta_n$ be the state $\rho_n \otimes |X_n\rangle\langle X_n|$. Denote by $\mathcal F_n$ the $\sigma$-algebra generated by $\eta_k$ for $k\leq n$, and let \[m_n = \sum_{k=0}^n \eta_k- \sum_{k=0}^{n-1} \mathfrak{M}( \eta_{k}).\] We have, from \eqref{eq_exprhon} above, $\mathbb E(m_{n+1} - m_n | \mathcal F_n) =0$ so that $(m_n)_{n}$ is a martingale, and since $\|m_{n+1}-m_n\|=\| \eta_{n+1} - \mathfrak M (\eta_n)\|$ is uniformly bounded, we can apply the law of large numbers for martingales with uniformly bounded increments. 
Therefore, $ \frac{1}{n} \sum_{k=0}^{n}\eta_k - \frac{1}{n} \sum_{k=0}^{n-1} \mathfrak{M} (\eta_k) \to 0 $ where convergence is meant in the almost-sure sense. In turn, this implies for any $N\in \mathbb{N}^*$, \[ \frac{1}{n} \sum_{k=0}^{n}\eta_k - \frac{1}{n} \sum_{k=0}^{n-1} \mathfrak{M}^N (\eta_k) \to 0 \] so that \[ \frac{1}{n} \sum_{k=0}^{n}\eta_k - \frac{1}{n} \sum_{k=0}^{n-1} \frac{\mathrm{Id}+\mathfrak{M}+\ldots+\mathfrak{M}^{N-1}}N (\eta_k) \to 0. \] For any state $\eta$, $ \frac{\mathrm{Id}+\mathfrak{M}+\ldots+\mathfrak{M}^{N-1}}N (\eta)$ converges when $N$ goes to infinity to an invariant state. This can be seen by viewing $\mathfrak{M}$ as a contraction on the Hilbert-Schmidt space $\mathcal I_2(\mathcal H)$, \textit{i.e.} the space of Hilbert-Schmidt operators on $\mathcal H$ equipped with the scalar product $\mathrm{Tr}(A^*B)$. This invariant state must be of the form $P\eta$, where $P$ is a linear operator on $\mathcal I_1(\mathcal H)$. The operator $P$ can be approximated uniformly by $\frac{\mathrm{Id} + \mathfrak{M} + \ldots + \mathfrak{M}^{N-1}}{N}$, therefore $ \frac{1}{n} \sum_{k=0}^{n} \eta_k - \frac{1}{n} \sum_{k=0}^{n-1} P \eta_k\to 0. $ On the other hand $P\mathfrak M = P$ implies that $ \mathbb{E}(P\eta_{n+1}|\mathcal F_n)=P\eta_n$, \textit{i.e.} $(P\eta_n)_n$ is a bounded martingale, so $\frac{1}{n} \sum_{k=0}^{n} P \eta_k$ converges almost-surely to some invariant state. This concludes the proof. $\Box$ A direct consequence of Theorem \ref{theo_kuma} (whose notations we preserve) and of our previous observations on the form of $\rho^{\mathrm{inv}}$ is the following: \begin{coro}\label{coro_rec1} If the open quantum random walk $\mathfrak{M}$ is finite and irreducible with invariant (and faithful) state $\rho^{\mathrm{inv}}=\sum_{i\in V}\rho^{\mathrm{inv}}(i)\otimes|i\rangle\langle i |,$ then, for all $i$~in~$V$, define $ N_n(i)=\mathrm{card}\,\{k\leq n\, |\, X_k=i\}$.
We have \begin{eqnarray*} \frac{N_n(i)}{n} &\underset{n\to\infty}{\longrightarrow}& \mathrm{Tr}\,\rho^{\mathrm{inv}}(i)\mbox{ almost-surely,}\\ \frac1n\sum_{k=0}^{n-1}\mathbb{P}(X_k=i) &\underset{n\to\infty}{\longrightarrow}& \mathrm{Tr}\,\rho^{\mathrm{inv}}(i),\\ \frac 1{N_n(i)}\sum_{k=0}^{n-1}\rho_k \, \ind_{X_k=i}&\underset{n\to\infty}{\longrightarrow}& \frac{\rho^{\mathrm{inv}}(i)}{\mathrm{Tr}\,\rho^{\mathrm{inv}}(i)}\mbox{ almost-surely.} \end{eqnarray*} \end{coro} \noindent{\bf Proof:} This follows by direct inspection from Theorem \ref{theo_kuma}. $\Box$ \begin{remark} The only new result that we bring to the above picture is Theorem~\ref{theo_unicity}, which tells us that the state $\rho^{\mathrm{inv}}$ is unique and faithful, and in particular $\mathrm{Tr}\,\rho^{\mathrm{inv}}(i)>0$ for any $i$ in $V$. This implies that, for any irreducible open quantum random walk with an invariant state $\rho^{\mathrm{inv}}$, one has $$ \mbox{for all }i\in V, \quad \mathbb{P}(X_n=i\ \mbox{infinitely often})=1,$$ $$ \mbox{for all }i\in V,\, x\in \mathfrak h_i,\quad \mathbb{P}(\langle x,\rho_n(i)\,x\rangle\,\ind_{X_n=i}>0 \ \mbox{infinitely often})=1.$$ The first statement has an immediate interpretation in terms of ``spatial recurrence'' (every site $i$ in $V$ is visited infinitely often); the second one is stronger and can be seen as ``spatial and internal recurrence''. \end{remark} \smallskip The second ergodic result of this section is a consequence of Theorem \ref{theo_ergodicconvergence}.
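The first almost-sure limit in Corollary \ref{coro_rec1} is easy to observe by simulating a quantum trajectory. The sketch below uses a hypothetical two-site walk with $L_{i,j}=\sqrt{K_{ij}}\,U_{ij}$ for a column-stochastic matrix $K$ and unitaries $U_{ij}$; with this (assumed) mixture-of-unitaries structure the site marginals follow the classical chain $K$, so $\mathrm{Tr}\,\rho^{\mathrm{inv}}(i)$ is the classical invariant probability $\pi_i$. None of these choices come from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical irreducible 2-site OQRW: L[i][j] = sqrt(K[i,j]) * U[i][j],
# with K column-stochastic, so sum_i L[i][j]^* L[i][j] = Id per column j.
K = np.array([[0.3, 0.6],
              [0.7, 0.4]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = [[np.eye(2), H], [H, np.eye(2)]]
L = [[np.sqrt(K[i, j]) * U[i][j] for j in range(2)] for i in range(2)]

# For this mixture-of-unitaries model, Tr rho_inv(i) is the classical
# invariant probability pi_i of K; here K @ pi = pi gives pi = (6/13, 7/13).
pi = np.array([6 / 13, 7 / 13])

# Quantum trajectory (X_n, rho_n): measure the position at every step.
X, rho = 0, np.diag([1.0, 0.0])
counts = np.zeros(2)
for _ in range(20000):
    probs = np.array([np.trace(L[i][X] @ rho @ L[i][X].conj().T).real
                      for i in range(2)])
    i = rng.choice(2, p=probs / probs.sum())
    rho = L[i][X] @ rho @ L[i][X].conj().T
    rho /= np.trace(rho).real
    X = i
    counts[X] += 1

freq = counts / counts.sum()   # empirical occupation frequencies N_n(i)/n
```

With the fixed seed, `freq` lands within a few percent of `pi`, matching the first limit of the corollary.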
\begin{coro}\label{coro_rec2} If the open quantum random walk $\mathfrak{M}$ is irreducible with invariant (and faithful) state $\rho^{\mathrm{inv}}=\sum_{i\in V}\rho^{\mathrm{inv}}(i)\otimes|i\rangle\langle i |,$ then for all $i$ in~$V$, \begin{eqnarray*} \frac1n \sum_{k=0}^{n-1} \mathbb{P}(Q_k=i)&\underset{n\to\infty}{\longrightarrow}& \mathrm{Tr}\,\rho^{\mathrm{inv}}(i),\\ \frac 1n\sum_{k=0}^{n-1}\mathfrak M ^k(\rho,i) &\underset{n\to\infty}{\longrightarrow}& \rho^{\mathrm{inv}}(i) \mbox{ in the weak-$\ast$ sense}. \end{eqnarray*} \end{coro} \begin{remark} The assumption that there exists an invariant state is necessary in Corollary \ref{coro_rec2} (contrary to Corollaries \ref{coro_rec1} and \ref{coro_rec3}, where existence is automatic and the state appears only to fix notations), because we do not assume finiteness of $\mathfrak{M}$. Since for all $n\in \mathbb{N}$ and $i$ in $V$ one has $\mathbb{P}(X_n=i)=\mathbb{P}(Q_n=i)$, the first statement of Corollary \ref{coro_rec2} is a refinement of the second statement of Corollary~\ref{coro_rec1}. \end{remark} Our third corollary is a consequence of Theorem \ref{theo_convergenceaperiodic}. It is an improvement of the previous result in the case where we have aperiodicity.
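The first Cesàro limit in Corollary \ref{coro_rec2} can also be checked without any sampling, by iterating the map on block-diagonal states. The two-site walk below is again a hypothetical illustration, built from a column-stochastic matrix $K$ and unitaries (our own assumption, chosen so that $\mathrm{Tr}\,\rho^{\mathrm{inv}}(0)=6/13$ is known in closed form):

```python
import numpy as np

# Hypothetical 2-site OQRW: L[i][j] = sqrt(K[i,j]) * U[i][j] with K
# column-stochastic and U[i][j] unitary (illustration only).
K = np.array([[0.3, 0.6], [0.7, 0.4]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = [[np.eye(2), H], [H, np.eye(2)]]
L = [[np.sqrt(K[i, j]) * U[i][j] for j in range(2)] for i in range(2)]

def M(blocks):
    """One step of the walk on block-diagonal states (rho(0), rho(1))."""
    return [sum(L[i][j] @ blocks[j] @ L[i][j].conj().T for j in range(2))
            for i in range(2)]

# Cesaro average of P(Q_k = 0) = Tr M^k(rho)(0), starting at site 0.
blocks = [np.diag([1.0, 0.0]), np.zeros((2, 2))]
n = 2000
cesaro = 0.0
for _ in range(n):
    cesaro += np.trace(blocks[0]).real
    blocks = M(blocks)
cesaro /= n
# Site marginals follow the classical chain K, whose invariant probability
# of site 0 is 6/13, so cesaro should be very close to 6/13.
```

This deterministic computation complements a trajectory simulation: here no measurement record is generated, only the averaged evolution $\frac1n\sum_k \mathfrak M^k(\rho)$.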
\begin{coro}\label{coro_rec3} If the open quantum random walk $\mathfrak{M}$ is finite, irreducible and aperiodic with invariant (and faithful) state $\rho^{\mathrm{inv}}=\sum_{i\in V}\rho^{\mathrm{inv}}(i)\otimes|i\rangle\langle i |,$ then for all $i$ in $V$, \begin{eqnarray*}\mathbb{P}(Q_n=i) &\underset{n\to\infty}{\longrightarrow}& \mathrm{Tr}\,\rho^{\mathrm{inv}}(i),\\ \mathfrak M ^n(\rho,i) &\underset{n\to\infty}{\longrightarrow}& \rho^{\mathrm{inv}}(i).\end{eqnarray*} \end{coro} \begin{remark}\ \vspace{-0.5em}\begin{itemize} \item Corollary \ref{coro_rec2} seems rather useless, from an operational point of view: there is no joint realization of the different $Q_k$ or $\mathfrak M^k(\rho,i)$ for different $k$; or, in other terms, measuring $Q_k$ disrupts the existence of $Q_{k'}$ or $\mathfrak M^{k'}(\rho,i)$ for ${k'}>k$. Corollary \ref{coro_rec3}, on the other hand, is operational, and tells us that, if the system is left to evolve for a large time, then a single measurement will give the position $i$ with approximate probability $\mathrm{Tr}\,\rho^{\mathrm{inv}}(i)$, and the state after that unique measurement will be approximately $\frac{\rho^{\mathrm{inv}}(i)}{\mathrm{Tr}\,\rho^{\mathrm{inv}}(i)}$. These limiting quantities are the same as those that appear for limits \emph{with} measurements. These results display evident similarities with the behavior of classical Markov chains. \item The results concerning the probabilities may not seem surprising, precisely because of the equality of the laws of $X_n$ and $Q_n$ and of the K\"ummerer-Maassen theorem; the convergence in Corollary \ref{coro_rec3}, however, is completely new. Results regarding the induced states $\mathfrak{M}^n(\rho,i)$ in Corollary \ref{coro_rec2} are new and show that the limits are the same whether we do an infinite number of measurements (for the sequence $(\rho_n)_n$), or just one but after an ``infinite'' time (for $(\mathfrak{M}^n(\rho))_n$).
In addition, all convergences in Corollary~\ref{coro_rec3} (without averaging) are new. \item Example \ref{ex_Vn} shows that irreducibility is a necessary assumption for Corollary \ref{coro_rec3}, as one could expect from analogous results for classical Markov chains. \end{itemize} \end{remark} \section{Reducible OQRWs and communication classes} \label{section_nonirreducible} In this section we study the failure of irreducibility for an open quantum random walk. Considering reducible OQRWs, the first natural problem one has to face is how to characterize reducibility and how to determine reducing, possibly minimal, components. A reasonable way to proceed, mimicking what happens for classical Markov chains, is to define a communication relation between vectors of the Hilbert space~${\cal H}$: this relation should be an equivalence relation constructed in such a way that the induced equivalence classes are the irreducible components of the map $\mathfrak{M}$. We will see that it is possible to do this in a way which is consistent with the classical case. However, it is important to underline immediately that the quantum case displays peculiar features: the decomposition of the Hilbert space ${\cal H}$ as the direct sum of irreducible components is not unique in general. This is not at all surprising if one thinks about the structure of invariant states for a CP-TP map (see \cite{BN}, from which we take much of our inspiration): essentially, the quantum peculiarity is related to the fact that there can exist stationary states which are not simple convex combinations of the stationary states on each irreducible component. \smallskip Following Baumgartner and Narnhofer (\cite{BN}), we call a closed vector space~$\mathcal V$ an \textit{enclosure} (for $\mathfrak{M}$) if, for every positive trace-class operator $\rho$, the inclusion $\mathrm{supp} \, \rho\subset \mathcal V$ implies $\mathrm{supp} \, \mathfrak M(\rho)\subset \mathcal V$. The next proposition will be extremely useful.
The fact that the support of an invariant state is an enclosure is however a known result, see \textit{e.g.} section 1 in~\cite{FV}. From now on we fix an OQRW $\mathfrak{M}$ with the same notation as in section \ref{section_OQRWs}. \begin{prop}\label{lemma_enctraj} \ \\ \vspace{-1.8em} \begin{enumerate} \item If $\mathcal V$ is a closed subspace of $\mathcal H$, then it is an enclosure if and only if it is stable by $L_\pi\otimes |j\rangle\langle i |$ for any $i,j$ in $V$ and $\pi\in\mathcal P(i,j)$. In particular, if a vector $x=\sum_{i\in V}{x_i}\otimes |i\rangle$ is in $\mathcal V$ then $ {L_\pi x_i}\otimes\vec{j}\in\mathcal V$. \item A projection $P$ reduces $\mathfrak M$ if and only if $$ P(L_\pi\otimes |j\rangle\langle i |)P=(L_\pi\otimes |j\rangle\langle i |)P \quad \mbox{for all $i,j$ in $V$ and } \pi\in\mathcal P(i,j). $$ \item If $\mathcal V$ is an enclosure, then $(L_\pi\otimes |j\rangle\langle i |)(\mathcal V)\subset \mathfrak h_j\cap \mathcal V$ for any $i,j$ in $V$ and~$\pi\in\mathcal P(i,j)$, and $\bigoplus_{j\in V} {\rm Vect}\{(L_\pi\otimes |j\rangle\langle i |)(\mathcal V),i\in V,\pi \in\mathcal P(i,j)\}$ is also an enclosure. \item The support of an invariant state is an enclosure. \end{enumerate} \end{prop} \noindent{\bf Proof:} \begin{enumerate} \item Suppose that $\mathcal V$ is an enclosure. Remark that, for any positive integer $\ell$ and any $x=\sum_{i\in V}{x_i}\otimes |i\rangle$ in $\mathcal H$, one has \begin{equation} \label{eq_Mell} \mathfrak{M}^\ell (\ketbra xx) = \sum_{i,j\in \, V} \sum_{\pi\in\mathcal P_\ell(i,j)} \ketbra{L_\pi x_i}{L_\pi x_i}\otimes \ketbra jj; \end{equation} so, if $x$ is in $\mathcal V$, then every ${L_\pi x_i}\otimes\vec{j}$ is in $\mathcal V$. Conversely, let us now suppose that $\mathcal V$ is stable under the action of the operators $L_\pi\otimes |j\rangle\langle i |$. 
Starting from~$\rho$ with $\mathrm{supp}\, \rho\subset \mathcal V$, considering its spectral decomposition and~\eqref{eq_Mell} above shows that $\mathrm{supp}\,\mathfrak{M}^\ell(\rho)\subset \mathcal V$. This proves the first statement. \item Just recall that a subspace of $\mathcal H$ is the range of a reducing projection if and only if it is an enclosure. This point is then an immediate consequence of the previous one. \item This point also follows immediately from point 1. \item Consider an invariant state $\rho_0$ and a state $\rho$ with support contained in $\mathrm{supp}\,\rho_0$. Then there exists a weak approximation of $\rho$ by an increasing sequence of finite-rank operators $\rho_n$ with $\mathrm{supp}\,\rho_n\subset\mathrm{supp}\,\rho_0$. Furthermore, for every $n$, there exists $\lambda_n>0$ such that $\rho_n\leq \lambda_n\rho_0$, so that $\mathfrak{M}(\rho_n)\leq \lambda_n \rho_0$ and $\mathrm{supp}\,{\mathfrak M}(\rho_n)\subset\mathrm{supp}\,\rho_0$. The sequence $\mathfrak{M}(\rho_n)$ is increasing and weakly convergent to $\mathfrak{M}(\rho)$ so that $\mathrm{supp}\,\mathfrak{M}(\rho)\subset\mathrm{supp}\, \rho_0$, which proves that $\mathrm{supp}\, \rho_0$ is an enclosure.\ $\Box$ \end{enumerate} In general, it is not true that all the reducing projections are diagonal, \textit{i.e.} of the form $\sum_{i\in V}P_i\otimes\ketbra ii$, but by the previous Proposition, point $3$, if $\mathfrak M$ is reducible, then it admits at least one block-diagonal reducing projection. So the reducibility of an OQRW can be established considering only block-diagonal projections. Moreover, notice that the support projection of an invariant state is block-diagonal, \textit{i.e.} of the form $P=\sum_{i\in V} P_i\otimes \ketbra ii$. We can characterize the block-diagonal projections reducing $\mathfrak M$ using the unravelling of $\mathfrak M$.
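As a concrete sanity check of this characterization, namely the criterion $L_{i,j}P_j=P_i\,L_{i,j}P_j$ for all sites $i,j$, here is a small numerical sketch with hypothetical diagonal transition operators (chosen only so that $\sum_i L_{i,j}^*L_{i,j}=\mathrm{Id}$ per column); each coordinate axis then gives a reducing projection, while a tilted direction does not:

```python
import numpy as np

# Hypothetical 2-site OQRW with diagonal transition operators;
# sum_i L[i][j]^* L[i][j] = Id holds in each column j.
A = np.diag([1.0, 2.0]) / np.sqrt(5)
B = np.diag([2.0, 1.0]) / np.sqrt(5)
L = [[A, B], [B, A]]

def reduces(P):
    """Check L_{i,j} P_j = P_i L_{i,j} P_j with the same P at every site."""
    return all(np.allclose(L[i][j] @ P, P @ L[i][j] @ P)
               for i in range(2) for j in range(2))

P_axis = np.diag([1.0, 0.0])     # projection on the first basis vector
P_tilt = np.full((2, 2), 0.5)    # projection on (e1 + e2)/sqrt(2)
print(reduces(P_axis), reduces(P_tilt))   # prints: True False
```

The same helper works for any candidate block-diagonal projection, with one matrix `P_j` per site.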
\begin{prop}\label{SubhProj} An orthogonal block-diagonal projection $P=\sum_j P_j\otimes|j\rangle\langle j|$ reduces $\mathfrak M$ if and only if $\mathrm{Ran}\,L_{i,j}P_j\subset \mathrm{Ran}\,P_i$ (i.e. $L_{i,j}P_j=P_i L_{i,j}P_j$) for all $i$ and $j$ in $V$. Equivalently, a closed subspace of the form $\mathcal V=\oplus_{i\in V}\mathcal V_i$, with~$\mathcal V_i\subset \mathfrak h_i$, is an enclosure if and only if $L_{i,j} \mathcal V_j \subset \mathcal V_i$ for all $i$ and $j$. \end{prop} \noindent{\bf Proof:} It is clear that it is sufficient to prove only the first statement, due to the relation between enclosures and reducing projections that we have recalled also in the previous proof. So, take a reducing projection $P=\sum_{j\in V} P_j\otimes|j\rangle\langle j|$. By point 2 in the previous proposition, it is necessary that the range of $P$ is invariant under the action of all operators of the form $L_{i,j}\otimes |i\rangle\langle j |$, and so the relation $L_{i,j}P_j=P_i L_{i,j}P_j$ for all $i$ and $j$ in $V$ immediately follows. The reverse implication is also easy to obtain, using again the characterization in point 2 of the previous proposition and the fact that any operator $L_\pi\otimes |j\rangle\langle i |$, for a path~$\pi=(i_0,i_1,\ldots,i_{\ell})\in {\mathcal P}(i,j)$ ($i=i_0,\, j=i_\ell$) of length $\ell$, is the composition of the operators $L_{i_{k+1},i_k}\otimes |i_{k+1}\rangle\langle i_k |$, with $k=0,\ldots,\ell-1$. $\Box$ \begin{coro} When $P=\sum_{j\in V} P_j\otimes|j\rangle\langle j|$ is a reducing projection, each $P_j$ is a projection on a subspace preserved by $L_{j,j}$. \end{coro} \begin{remark}\label{InvariantStates} Suppose that, for all sites $i$ and $j$, there exists ``a path of invertible operators which connects them'', i.e. a path $\pi\in{\cal P}(i,j)$ such that $L_\pi$ is invertible.
In this case, the previous proposition proves that, if $P=\sum_{j\in V} P_j\otimes|j\rangle\langle j|$ is a reducing projection, then $\mathrm{rank}\,P_i=\mathrm{rank}\, P_j$ for any $i,j$ in $V$. In particular, if a state $\rho=\sum_{i\in V}\rho_i\otimes \ketbra ii$ is invariant, then $\rho_i\neq 0$ for all $i$ (i.e. an invariant state is supported by all sites), and if $\rho_i$ is faithful on $\mathfrak h_i$ for some index $i$, then~$\rho_j$ is faithful on $\mathfrak h_j$ for any $j$ in $V$. \end{remark} \smallskip The next notion, of enclosure generated by a single vector in $\mathcal H$, will be crucial in our analysis of decompositions of reducible OQRWs: \begin{defi} For $\phi$ in $\mathcal H$, we denote by $\mathrm{Enc}(\phi)$ the closed vector space $$\mathrm{Enc}(\phi)=\overline{ \mathrm{Vect}\,\bigcup_{i,j\in V}\{(L_\pi\otimes \ketbra ji) \, \phi \, |\, \pi\in\mathcal P(i,j)\}}.$$ Consistently with Proposition \ref{lemma_enctraj}, we will consider specifically enclosures of vectors $x\otimes \vec i$, which take the form $$\mathrm{Enc}(x\otimes \vec i)=\overline{ \mathrm{Vect}\,\bigcup_{j\in V}\{{L_\pi\, x}\otimes|j\rangle \, |\, \pi\in\mathcal P(i,j)\}}.$$ \end{defi} We will be mostly interested in enclosures that are minimal but non-trivial (\emph{i.e.} not equal to $\{0\}$). From now on, the term \emph{minimal enclosure} will refer to minimal, non-trivial enclosures. The following lemma contains relevant properties of enclosures. \begin{lemme}\label{lemma_lemenc}\ \vspace{-0.5em} \begin{itemize} \item The space $\mathrm{Enc}( x\otimes\vec i)$ is the smallest enclosure containing $x\otimes \vec i$. \item Any minimal enclosure is of the form $\mathrm{Enc}(x\otimes\vec i)$. \item If two minimal enclosures $\mathrm{Enc}(x\otimes\vec i)$ and $\mathrm{Enc} (y\otimes\vec j)$ are distinct then they are in direct sum. \end{itemize} \end{lemme} \noindent{\bf Proof:} All statements follow from Proposition \ref{lemma_enctraj}.
$\Box$ \begin{remark}\label{remark_encxi} In the same way that the specific form of $\mathfrak{M}(\rho)$ led us to consider only states $\rho$ of the form $\rho=\sum_{i\in V}\rho(i) \otimes |i\rangle \langle i|$, Proposition \ref{lemma_enctraj} shows that vectors of the form $x\otimes \vec i$ are of particular interest. In particular, any minimal enclosure will be generated by a vector $x\otimes \vec i$. It is not true, however, that any $\mathrm{Enc} (x\otimes \vec i)$ is a minimal enclosure, as the following example shows. \end{remark} \begin{example}\label{ex_2} Take $V=\{1,2,3\}$ with $$ L_{1,2}=L_{2,3}=L_{3,1}=\frac1{\sqrt 5}\begin{pmatrix}1 & 0\\ 0 & 2\end{pmatrix} \qquad L_{2,1}=L_{3,2}=L_{1,3}=\frac1{\sqrt 5}\begin{pmatrix}2 & 0\\ 0 & 1\end{pmatrix}.$$ One can see that, for $k=1,2$, the spaces $\mathrm{Enc}(e_k\otimes |1\rangle)$ are minimal enclosures, equal to~$\mathbb{C}\, e_k\otimes\ell^2(V)$, but the space $\mathrm{Enc}((e_1+e_2)\otimes |1\rangle)$ is equal to $\mathbb{C}^2\otimes\ell^2(V)$. \end{example} \begin{remark} \label{remark-siteconnection} Let us return to the notion of irreducibility, as introduced in Definition~\ref{defi_irroqrw}: an open quantum random walk $\mathfrak{M}$ is irreducible if for any $i,j$ in $V$, one has $i\overset{\mathfrak M}{\rightarrow} j$, which by Proposition \ref{prop_ergodicityOQRW} is defined by the equivalent conditions \begin{equation}\label{eq_connect1} \forall \, x\in\mathfrak h_i, \, y\in \mathfrak h_j,\ \exists \pi\in\mathcal P(i,j) \mbox{ such that }\langle{y},{L_\pi x}\rangle\neq 0, \end{equation} \begin{equation}\label{eq_connect2} \forall \, x\in\mathfrak h_i, \, y\in \mathfrak h_j,\ y\in \overline{\mathrm{Vect}\,\{L_\pi\, x\, |\, \pi\in \mathcal P(i,j)\}}.
\end{equation} From the above discussion it is clear that both conditions can be characterized using enclosures: \begin{eqnarray*} \exists \pi\in\mathcal P(i,j) \mbox{ such that }\langle{y},{L_\pi x}\rangle\neq 0 &\Leftrightarrow& y \not\in \mathrm{Enc}(x\otimes \vec i)^\perp\\ y\in \overline{\mathrm{Vect}\,\{L_\pi\, x\, |\, \pi\in \mathcal P(i,j)\}} &\Leftrightarrow& y \in \mathrm{Enc}(x\otimes\vec i). \end{eqnarray*} As we will see below, in Proposition \ref{prop_coherence}, the orthogonal of an enclosure can be related to another enclosure. This will allow us to strengthen the connection between the two notions above. \end{remark} The above discussion gives immediately: \begin{lemme} An open quantum random walk $\mathfrak{M}$ is irreducible if and only if~$\mathcal H$ is a minimal enclosure, or equivalently, if $\mathcal H=\mathrm{Enc}(x\otimes \vec i)$ for any $x\otimes \vec i$ in~$\mathcal H$. \end{lemme} To emphasize the picturesque aspect of our definition of irreducibility, we define the following notion of accessibility among vectors, denoted by $\overset{\mathfrak M}{\rightarrow}$. We remark that the notation, $\overset{\mathfrak M}{\rightarrow}$, is the same we used in Definition \ref{defi_irroqrw}, but this should not generate confusion, the difference being clear in the arguments we use: in the previous case, the connection $\overset{\mathfrak M}{\rightarrow}$ is between sites $i$, $j$ in $V$, whereas here it is between vectors $\phi$, $\psi$ of the Hilbert space $\mathcal H$. \begin{defi} For $\phi$, $\psi$ in $\mathcal H$, we denote $\phi \overset{\mathfrak M}{\rightarrow} \psi$ if $\psi\in\mathrm{Enc}(\phi)$, and $\phi \overset{\mathfrak M}{\leftrightarrow} \psi$ if $\phi \overset{\mathfrak M}{\rightarrow} \psi$ and $\psi \overset{\mathfrak M}{\rightarrow} \phi$. 
\end{defi} Again, we will be specifically interested in the relation $\overset{\mathfrak M}{\rightarrow}$ between vectors of the form $x\otimes \vec i$, $y\otimes \vec j$ and we have immediately \[ x\otimes \vec i \overset{\mathfrak M}{\rightarrow} y\otimes \vec j \Leftrightarrow y \in \overline{\mathrm{Vect}\,\{L_\pi \, x \, |\, \pi \in \mathcal P(i,j)\}}. \] Going back to the connection between sites, we can also add that, due to Remark \ref{remark-siteconnection}, $$ i\overset{\mathfrak M}{\rightarrow} j \qquad\Leftrightarrow \qquad x\otimes \vec i \overset{\mathfrak M}{\rightarrow} y\otimes \vec j \mbox{ for all } x\in\mathfrak h_i, y\in\mathfrak h_j. $$ The following proposition is easily proven: \begin{prop} The relation $\overset{\mathfrak M}{\rightarrow}$ on $\mathcal H$ is transitive, and $\overset{\mathfrak M}{\leftrightarrow}$ is an equivalence relation. Any minimal enclosure is an equivalence class of $\overset{\mathfrak M}{\leftrightarrow}$. \end{prop} \begin{remark} The equivalence class of a vector $x\otimes \vec i $ under $\overset{\mathfrak M}{\leftrightarrow}$ is a subset of~$\mathcal H$ contained in $\mathrm{Enc} (x\otimes \vec i)$, but it may fail to be an enclosure, or even a subspace. A minimal dilation of a classical Markov chain with a proper transient class easily gives an example of an equivalence class that is not an enclosure. For an example where an equivalence class is not a subspace, consider Example \ref{ex_notsubspace} below.
\end{remark} \begin{example}\label{ex_notsubspace} Consider $V=\{1,2\}$, $\mathfrak h_1=\mathfrak h_2=\mathbb{C}^2$ with canonical basis denoted by $(e_1,e_2)$, and introduce the OQRW $\mathfrak{M}$ with transitions \[L_{1,1}=L_{2,2}=\frac1{\sqrt 2}\begin{pmatrix}0&1\\1&0\end{pmatrix} \qquad L_{1,2}=L_{2,1}=\frac1{\sqrt 2}\,\mathrm{Id}.\] Then the only minimal enclosures are \[E_+=\mathrm{Enc}\big((e_1+e_2)\otimes \vec 1 \big) = \mathbb{C}\, (e_1+e_2)\otimes\ell^2(V)\] \[E_-=\mathrm{Enc}\big((e_1-e_2)\otimes \vec 1 \big) = \mathbb{C}\, (e_1-e_2)\otimes\ell^2(V)\] and for any $x\not\in \mathbb{C}(e_1+e_2)\cup \mathbb{C}(e_1-e_2)$ one has \[\mathrm{Enc}(x\otimes \vec 1) = \mathrm{Enc}(x\otimes \vec 2) = \mathcal H.\] Therefore, for such an $x$, the equivalence class of $x\otimes \vec 1$ is $\mathcal H \,\setminus (E_+\cup E_-)$. \end{example} \section{Decompositions of OQRWs and invariant states}\label{section_stationary} In this section we wish to focus on the behavior of an OQRW $\mathfrak M$ on the so-called fast recurrent subspace, i.e. the support of the $\mathfrak M$-invariant states. We decompose the corresponding restriction of $\mathfrak M$ into a ``direct sum'' of irreducible OQRWs $\mathfrak M_k$, establish when this decomposition is unique, and study how the different irreducible components interact. We follow the lines traced in \cite{BN} for quantum evolutions on finite dimensional Hilbert spaces; we will state and prove generalizations to infinite dimension. As we will see in Proposition \ref{prop_coherence}, the form of invariant states is dictated by the unicity or non-unicity of the decompositions into minimal enclosures, and Lemma \ref{lemma_unicitydec} shows that non-unicity is related to the existence of mutually non-orthogonal minimal enclosures. To further study the stationary states of an OQRW, we recall some notation. Inspired by \cite{FV}, we denote: \[ \mathcal R = \mathrm{sup}\{\mathrm{supp}\,\rho\, |\, \rho \mbox{ an invariant state}\} .
\] This space is often called the fast recurrent space. \begin{remark}\label{remark_RDfinite} The above definition of $\mathcal R$ is unfortunately not explicit, and makes a (small) part of Theorem \ref{theo_invariantstates} describing stationary states tautological. In the finite-dimensional case, $\mathcal R$ can be equivalently described (as is done in \cite{BN}) without reference to the set of invariant states, as $\mathcal R=\mathcal D^\perp$, where $\mathcal D$ is defined~by \[\mathcal D = \{\phi\in \mathcal H\, |\,\langle\phi,\mathfrak{M}^n(\rho)\,\phi\rangle\underset{n\to\infty}{\longrightarrow}0 \mbox{ for any state }\rho\}.\] \end{remark} The following Lemma is an immediate consequence of Proposition \ref{lemma_enctraj}. \begin{lemme} The subspace $\mathcal R$ is an enclosure. \end{lemme} We let $\mathcal D =\mathcal R^\perp$, which is characterized as \[\mathcal D = \{\phi \in \mathcal H \, |\, \langle \phi,\rho\,\phi\rangle =0 \mbox{ for any invariant state } \rho\}.\] From the block-diagonal structure of $\mathfrak M^n (\rho)$ for $\rho$ any state, we clearly have \[ \mathcal R = \bigoplus_{i\in V} \mathcal R_i \mbox{ with } \mathcal R _i \subset \mathfrak h_i, \qquad \mathcal D = \bigoplus_{i\in V} \mathcal D_i \mbox{ with } \mathcal D _i \subset \mathfrak h_i. \] Since our main interest is in the invariant states of the open quantum random walk $\mathfrak{M}$ on $\mathcal H$, we will focus on decomposing $\mathcal R$, not~$\mathcal D$, into irreducible subsystems. \smallskip We will use the following results, which were stated in \cite{BN} in the finite-dimensional case. We extend them here to infinite dimension.
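Before stating these results, the decomposition $\mathcal H=\mathcal R\oplus\mathcal D$ can be made concrete on a small hypothetical example (our own construction, not taken from the text): a reducible two-site walk built from the amplitude-damping Kraus pair. Its invariant state is $|e_1\rangle\langle e_1|$ at the first site, so $\mathcal R$ is one-dimensional and the mass carried by $\mathcal D=\mathcal R^\perp$ decays under iteration:

```python
import numpy as np

# Hypothetical reducible 2-site OQRW built from the amplitude-damping
# Kraus pair: from either site, K0 routes to site 0 and K1 to site 1
# (sites are 0-indexed here). Per column: K0^* K0 + K1^* K1 = Id.
g = 0.5
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
L = [[K0, K0], [K1, K1]]              # L[i][j]: from site j to site i

def M(blocks):
    return [sum(L[i][j] @ blocks[j] @ L[i][j].conj().T for j in range(2))
            for i in range(2)]

# Start from a state supported on the decaying direction e2 at site 0,
# i.e. entirely inside D = R^perp.
blocks = [np.diag([0.0, 1.0]), np.zeros((2, 2))]
for _ in range(60):
    blocks = M(blocks)

# Mass remaining outside R = C e1 (x) |site 0>:
decay = blocks[0][1, 1].real + np.trace(blocks[1]).real
print(decay, blocks[0][0, 0].real)    # decay ~ 0, weight on R ~ 1
```

In agreement with Remark \ref{remark_RDfinite}, $\langle\phi,\mathfrak M^n(\rho)\,\phi\rangle\to 0$ for every $\phi$ in this $\mathcal D$, while the weight concentrates on $\mathcal R$.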
\begin{prop}\label{prop_coherence} Let $\mathcal V$ and $\mathcal W$ be two subspaces of $\mathcal H$ such that $\mathcal V \cap \mathcal W =\{0\}$, and let $\rho$ be a state with support in $\mathcal V \oplus \mathcal W$; denote \[\rho_{\mathcal V}= P_{\mathcal V}\, \rho \,P_{\mathcal V},\quad \rho_{\mathcal W}= P_{\mathcal W}\, \rho \,P_{\mathcal W},\quad \rho_{\mathcal C}= P_{\mathcal V}\, \rho\, P_{\mathcal W}, \quad \rho_{\mathcal C}'=P_{\mathcal W}\,\rho \,P_{\mathcal V} \] so that $\rho = \rho_{\mathcal V} + \rho_{\mathcal W} + \rho_{\mathcal C}+\rho_{\mathcal C}'$. Decompose $\mathfrak{M}(\rho)$ in a similar way. \begin{enumerate} \item If $\mathcal V$ is an enclosure, then $P_{\mathcal W}\, \mathfrak{M} (\rho_{\mathcal C}+\rho_{\mathcal C}')\, P_{\mathcal W}=0$. \item If $\mathcal V$ is an enclosure, then so is $\mathcal V^\perp\cap\mathcal R$. \item If $\mathcal V$ and $\mathcal W$ are enclosures, then \[ \mathfrak{M} (\rho)_{\mathcal V} =\mathfrak{M}(\rho_{\mathcal V}), \quad \mathfrak{M} (\rho)_{\mathcal W} =\mathfrak{M}(\rho_{\mathcal W}), \quad \mathfrak{M} (\rho)_{\mathcal C} =\mathfrak{M}(\rho_{\mathcal C}), \quad \mathfrak{M} (\rho)_{\mathcal C}' =\mathfrak{M}(\rho_{\mathcal C}').\] \item A subspace of $\mathcal R$ is a minimal enclosure if and only if it is the support of an extremal invariant state. In particular, if $\mathcal V\subset\mathcal R$ is an enclosure, then it contains a (non-trivial) minimal enclosure. \item If $\rho$ is $\mathfrak{M}$-invariant and $\mathcal V$ and $\mathcal W$ are two minimal enclosures contained in~$\mathcal R$, such that the decomposition of $\mathcal V \oplus \mathcal W$ into a sum of minimal enclosures is unique, then $\rho_{\mathcal C}=0$ and $\rho_{\mathcal C}'=0$. \end{enumerate} \end{prop} \noindent{\bf Proof:} We essentially borrow the main ideas of the proofs from \cite{BN}, adding some variations when required by the infinite dimensional setting.
\begin{enumerate} \item To prove the first point, we define $\kappa_{\pm \varepsilon}=\frac1\varepsilon \,\rho_{\mathcal V}\pm \,(\rho_{\mathcal C}+\rho_{\mathcal C}')+\varepsilon \, \rho_{\mathcal W}.$ We have~$\kappa_{\pm\varepsilon}\geq 0$ (as can be checked from $\langle u, \kappa_{\pm\varepsilon}\, u \rangle = \langle u_{\pm\varepsilon}, \rho\, u_{\pm\varepsilon} \rangle$ where $ u_{\pm\varepsilon}=\frac1{\sqrt \varepsilon}\,P_{\mathcal V}u + \sqrt \varepsilon\, P_{\mathcal W}u$), so that $\mathfrak M(\kappa_{\pm\varepsilon})\geq 0$, and, because $\mathcal V$ is an enclosure, the support of $\mathfrak M(\rho_{\mathcal V})$ is contained in $\mathcal V$, so that \[ P_{\mathcal W}\,\mathfrak M(\kappa_{\pm \varepsilon})\, P_{\mathcal W} = \pm P_{\mathcal W}\,\mathfrak M(\rho_{\mathcal C}+\rho_{\mathcal C}')\,P_{\mathcal W}+ \varepsilon \,P_{\mathcal W}\, \mathfrak M (\rho_{\mathcal W}) \, P_{\mathcal W}.\] This must be $\geq 0$ for every $\varepsilon>0$ and both choices of sign; letting $\varepsilon\to 0$ forces $ P_{\mathcal W}\,\mathfrak M(\rho_{\mathcal C}+\rho_{\mathcal C}')\,P_{\mathcal W}=0$. \item Consider $\mathcal W=\mathcal V^\perp$ and $\eta$ any invariant state; then \[\eta_{\mathcal V}+\eta_{\mathcal W}+\eta_{\mathcal C}+\eta_{\mathcal C}' = \mathfrak{M}(\eta_{\mathcal V}) + \mathfrak{M}(\eta_{\mathcal W})+\mathfrak{M}(\eta_{\mathcal C})+\mathfrak{M}(\eta_{\mathcal C}').\] Projecting by $P_{\mathcal W}$, and using point 1 together with the inclusion $\mathrm{supp}\,\mathfrak{M}(\eta_{\mathcal V})\subset\mathcal V$, this yields $\eta_{\mathcal W}= P_{\mathcal W} \mathfrak{M}(\eta_{\mathcal W})P_{\mathcal W} $, so that $P_{\mathcal V}\,\mathfrak{M}(\eta_{\mathcal W})\, P_{\mathcal V}$ is positive with zero trace. Therefore $P_{\mathcal V}\,\mathfrak{M}(\eta_{\mathcal W})\, P_{\mathcal V}=0$, which implies $P_{\mathcal V}\,\mathfrak{M}(\eta_{\mathcal W})=\mathfrak{M}(\eta_{\mathcal W})\, P_{\mathcal V}=0$ and so $\eta_{\mathcal W}=\mathfrak{M}(\eta_{\mathcal W})$. As the support of a stationary state, $\mathrm{supp}\,\eta_{\mathcal W}= \mathrm{supp}\,\eta\cap\mathcal V^\perp$ is an enclosure.
Taking the supremum over all possible invariant states $\eta$, this tells us that $\mathcal R \cap \mathcal V^\perp$ is also an enclosure. \item If both $\mathcal V$ and $\mathcal W$ are enclosures, then by point 1, and the fact that $\mathrm{supp}\,\mathfrak{M}(\rho_{\mathcal V})\subset \mathcal V$ and $\mathrm{supp}\,\mathfrak{M}(\rho_{\mathcal W})\subset \mathcal W$, we have \begin{equation}\label{eq_etapeinvariance} \mathfrak{M}(\rho_{\mathcal C})+\mathfrak{M}(\rho_{\mathcal C}')=\mathfrak{M}(\rho)_{\mathcal C} +\mathfrak{M}(\rho)_{\mathcal C}'. \end{equation} Now remark that if \textit{e.g.} $\phi\in \mathcal V$ and $\psi\in \mathcal W$, then for any $i$ and $j$ in $V$ we have \[\big(L_{i,j}\otimes\ketbra ij\big) \phi \in \mathcal V\quad\mbox{ and }\big(L_{i,j}\otimes\ketbra ij\big)\psi \in \mathcal W. \] Therefore, \eqref{eq_etapeinvariance} actually implies $\mathfrak{M}(\rho_{\mathcal C})=\mathfrak{M}(\rho)_{\mathcal C}$ and $\mathfrak{M}(\rho_{\mathcal C}')=\mathfrak{M}(\rho)_{\mathcal C}'$. \item If $\mathcal V$ is a minimal enclosure contained in $\mathcal R$, then there exists an $\mathfrak{M}$-invariant state $\rho$ such that $\rho_{\mathcal V}=P_{\mathcal V}\rho P_{\mathcal V} \neq 0$. By point $3$, we have $\rho_{\mathcal V}=\mathfrak{M}(\rho)_{\mathcal V}=\mathfrak{M}(\rho_{\mathcal V})$, and so $\rho_{\mathcal V}$ is (up to normalization) an invariant state of $\mathfrak{M}_{|\mathcal I_1(\mathcal V)}$. Since ${\mathcal V}$ is irreducible, by Theorem \ref{theo_unicity}, $\mathfrak{M}_{|\mathcal I_1(\mathcal V)}$ has a unique invariant state, which has support equal to $\mathcal V$. Therefore, $\rho_{\mathcal V}$ is a state with support $\mathcal V$. This $\rho_{\mathcal V}$ must be extremal since $\rho_{\mathcal V}=t\, \rho_1 + (1-t)\, \rho_2$ with~$\rho_1$, $\rho_2$ invariant states and $t\in]0,1[$ would imply that $\rho_1$, $\rho_2$ are invariant states with support in $\mathcal V$ but then by unicity, $\rho_{\mathcal V}=\rho_1=\rho_2$. 
Conversely, if $\mathcal V= \mathrm{supp}\, \rho$ with $\rho$ an extremal invariant state, then $\mathcal V$ must be an enclosure. If, by contradiction, we suppose it is not minimal, then there exists an enclosure $\mathcal W$ with $\mathcal W \subset\mathcal V\subset \mathcal R$; then, using point~$2$ and repeating the arguments of the previous implication would yield the existence of two states (up to normalization) $\rho_{\mathcal W}$ and $\rho_{\mathcal W^\perp\cap \mathcal V}$ which are invariant, of which $\rho$ is a convex combination. The extremality of $\rho$ implies that $\mathcal W$ is either $\{0\}$ or $\mathcal V$ and so $\mathcal V$ is minimal. To prove the last statement, observe that by definition there exists an invariant $\rho$ such that $\mathcal V\cap\, \mathrm{supp}\,\rho\neq\{0\}.$ By point 3, $\mathcal V$ contains the support of the invariant state $\rho_{\mathcal V}$. By the Krein-Milman theorem, $\rho_{\mathcal V}$ is a convex combination of extremal invariant states, so there exists an invariant state $\eta$ such that $\mathrm{supp}\,\eta\subset \mathrm{supp}\, \rho_{\mathcal V}$, and the minimal enclosure $\mathrm{supp}\,\eta$ is contained in $\mathcal V$. \item If $\mathcal V$ and $\mathcal W$ are minimal enclosures contained in $\mathcal R$, then, as in the proof of point $4$, they are the supports of invariant states $\rho_{\mathcal V}$ and $\rho_{\mathcal W}$. Because the decomposition of $\mathcal V\oplus \mathcal W$ into minimal enclosures is unique, $\rho_{\mathcal V}$ and~$\rho_{\mathcal W}$ are the unique extremal invariant states of $\mathfrak{M}_{|\mathcal V \oplus \mathcal W}$. Since the set of invariant states is convex, then by the Krein-Milman theorem, $\rho$ is a convex combination of $\rho_{\mathcal V}$ and $\rho_{\mathcal W}$, so $\rho_{\mathcal C}$ and $\rho_{\mathcal C}'$ must be zero. $\Box$ \end{enumerate} \smallskip We can now return to the study of enclosures generated by vectors of the form~$x\otimes |i\rangle$. 
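In finite dimension, the enclosure generated by a vector can be computed numerically: $\mathrm{Enc}(x\otimes \vec i)$ is the span of the supports of the iterates $\mathfrak M^n\big(|x\otimes\vec i\rangle\langle x\otimes \vec i|\big)$, $n\geq 0$. The following Python sketch (an illustration only) implements this routine and recovers the two-dimensional enclosures of the two-site walk of Example \ref{ex_3} below, taking $p=1/2$:

```python
import numpy as np

def enclosure_dim(kraus, v, n_iter=20, tol=1e-10):
    """Dimension of the span of the supports of M^n(|v><v|), n = 0..n_iter."""
    rho = np.outer(v, v.conj())
    acc = rho.copy()
    for _ in range(n_iter):
        rho = sum(K @ rho @ K.conj().T for K in kraus)
        acc = acc + rho
    return np.linalg.matrix_rank(acc, tol=tol)

# Two-site walk of Example ex_3 with p = 1/2 (sites labelled 0, 1 below):
# L_{1,1} = L_{2,2} = sqrt(p) Id,   L_{1,2} = L_{2,1} = sqrt(1-p) B
p = 0.5
S = np.sqrt(p) * np.eye(2, dtype=complex)
T = np.sqrt(1 - p) * np.array([[0, 1], [1, 0]], dtype=complex)

def ket(k, d=2):
    v = np.zeros(d, dtype=complex); v[k] = 1.0
    return v

# Kraus operators K_{i,j} = L_{i,j} (x) |i><j| on h (x) l^2(V)
L = {(0, 0): S, (1, 1): S, (0, 1): T, (1, 0): T}
kraus = [np.kron(Lij, np.outer(ket(i), ket(j))) for (i, j), Lij in L.items()]

e1, e2 = ket(0), ket(1)
d1 = enclosure_dim(kraus, np.kron(e1, ket(0)))                      # Enc(e1 (x) |1>)
d2 = enclosure_dim(kraus, np.kron((e1 + e2) / np.sqrt(2), ket(0)))  # Enc((e1+e2) (x) |1>)
```

Both dimensions equal $2$, and the same routine can be used to check that the two enclosures are in direct sum without being orthogonal.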
Remark that ``non-connectedness of $i$ and $j$ through $\overset{\mathfrak M}{\rightarrow}$" (Definition~\ref{defi_irroqrw}), when stated in terms of enclosures, is related to the existence of~$x\in\mathfrak h_i$, $y\in\mathfrak h_j$, such that one of the following holds: \begin{itemize} \item[\textbf{(a1)}] $ y \otimes \vec j \not\in \mathrm{Enc}(x\otimes \vec i)^\perp $ and $ x\otimes \vec i \in \mathrm{Enc} (y \otimes \vec j)^\perp$, \item[\textbf{(a2)}] $ y \otimes \vec j \in \mathrm{Enc}(x\otimes \vec i)^\perp $ and $ x\otimes \vec i \in \mathrm{Enc} (y \otimes \vec j)^\perp$. \end{itemize} Our first task will be to show that, when restricting to the subspace $\mathcal R$, the situation \textbf{(a1)} cannot appear. The following Lemma indeed holds: \begin{lemme}\label{coro_conda} If $x\otimes\vec i$ and $y\otimes \vec j$ are in $\mathcal R$, then one of the following situations holds: \begin{enumerate} \item $ x\otimes\vec i\not\in\mathrm{Enc}(y\otimes\vec j)^\perp$ and $ y\otimes\vec j\not\in\mathrm{Enc}(x\otimes\vec i)^\perp$ \item $\mathrm{Enc}(x\otimes\vec i )\perp \mathrm{Enc}(y\otimes \vec j)$. \end{enumerate} \end{lemme} \noindent{\bf Proof:} It is sufficient to notice that, if $y\otimes\vec j \in \mathrm{Enc}(x\otimes\vec i)^\perp\cap \mathcal R$, then the minimal enclosures containing $x\otimes\vec i$ and $y\otimes\vec j$ are orthogonal. Indeed, by point~$2$ in Proposition \ref{prop_coherence}, the subspace $ \mathrm{Enc}(x\otimes \vec i)^\perp \cap \mathcal R$ is an enclosure, and it contains~$y\otimes \vec j $ by assumption. $\Box$ \begin{remark} Beware that, in situation 1 of Lemma \ref{coro_conda}, one may still have $\mathrm{Enc}( x\otimes\vec i)$ and $\mathrm{Enc}(y\otimes \vec j)$ non-orthogonal but in direct sum, as the following example shows. 
\end{remark} \begin{example}\label{ex_3} We consider an OQRW $\mathfrak M$ with two sites, \textit{i.e.} $V=\{1,2\}$, and~$\mathfrak h_1=\mathfrak h_2={\mathbb C}^2$, and, for a fixed $p\in]0,1[$, $$ L_{11}=L_{22}=\sqrt{p\,}\,\mathrm{Id}, \qquad L_{12}=L_{21}=\sqrt{1-p\,}\, B\; \qquad \mbox{ with }\quad B=\left(\begin{array}{cc} 0&1\\ 1&0 \end{array}\right). $$ We denote the canonical basis of $\mathfrak h_1=\mathfrak h_2$ by $(e_1,e_2)$. Then $$\mathrm{Enc}(e_1\otimes\vec 1)=\mathrm{Vect}\{e_1\otimes \vec 1, e_2\otimes \vec 2\}$$ $$\mathrm{Enc}((e_1+e_2)\otimes\vec 1)=\mathrm{Vect}\{(e_1+e_2)\otimes \vec 1, (e_1+e_2)\otimes \vec 2\}$$ are non-orthogonal but have trivial intersection. \end{example} \smallskip \begin{lemme}\label{lemma_unicitydec} Let $\mathcal V = E_1\oplus E_2$, where $E_1$ and $E_2$ are minimal enclosures contained in $\mathcal R$. The decomposition of $\mathcal V$ into a direct sum of minimal enclosures is unique if and only if any enclosure $\mathcal W$ such that $\mathcal W\not\perp E_1$ and $\mathcal W\not\perp E_2$ satisfies $\mathcal W \cap \mathcal V = \{0\}$. If the latter statement holds, then $E_1$ and $E_2$ are orthogonal. \end{lemme} \noindent{\bf Proof:} Assume the decomposition of $\mathcal V$ into a direct sum of minimal enclosures is unique. Then $E_1 \perp E_2$, since otherwise, by point $2$ in Proposition \ref{prop_coherence}, \[\mathcal V \cap E_1^\perp = \mathcal V \cap (\mathcal R \cap E_1^\perp )\] would be an enclosure that does not contain $E_2$, leading to a different decomposition of $\mathcal V$. Now consider a minimal enclosure $\mathcal W$ with $\mathcal W\not\perp E_1$ and $\mathcal W\not\perp E_2$. This implies $\mathcal W\neq E_1$, so by Lemma \ref{lemma_lemenc}, $\mathcal W \cap E_1=\{0\}$. If $\mathcal W\cap \mathcal V\neq \{0\}$, then it is an enclosure contained in $\mathcal W$, so by minimality, $\mathcal W\subset \mathcal V$.
Then $\mathcal W \oplus E_1$ is a direct sum of minimal enclosures contained in $\mathcal V$, so, by point $2$ in Proposition \ref{prop_coherence}, one can complete this as a decomposition of~$\mathcal V$ into a direct sum of minimal enclosures. This is a contradiction, leading to $\mathcal W\cap \mathcal V= \{0\}$. Now assume that any enclosure $\mathcal W$ such that $\mathcal W\not\perp E_1$ and $\mathcal W\not\perp E_2$ satisfies $\mathcal W \cap \mathcal V = \{0\}$. Taking first $\mathcal W= E_2$: since $E_2\cap\mathcal V=E_2\neq\{0\}$, the assumption forces $E_1\perp E_2$. Now consider some minimal enclosure $E_3$ contained in $\mathcal V$. Then, by assumption, one has \textit{e.g.} $E_3\perp E_1$ and $E_3\not\perp E_2$. But then by point $2$ in Proposition \ref{prop_coherence}, one has $E_3\subset E_1^\perp \cap \mathcal V$, which, as proved above, is $E_2$. This proves the uniqueness of the decomposition. $\Box$ \smallskip The following remark shows that Lemma \ref{lemma_unicitydec} is consistent with the unicity of the irreducible decomposition for classical Markov chains: \begin{remark}\label{remark_classicalorthogonal} Consider a minimal dilation $\mathfrak{M}$ of a classical Markov chain. By Proposition \ref{lemma_enctraj} and Lemma \ref{lemma_lemenc}, any minimal enclosure is of the form $\mathbb{C}~\otimes~\ell^2(V_i)$ for $V_i\subset V$. Therefore, for such an OQRW $\mathfrak{M}$, any distinct minimal enclosures~$\mathcal V$ and $\mathcal W$ are always orthogonal. \end{remark} \smallskip Once again, the following result is proven in \cite{BN} in finite dimension. We extend the proof to infinite dimension. \begin{coro}\label{coro_partialisom} Assume that $\mathcal V = E_1\oplus E_2$ where $E_1$ and $E_2$ are minimal enclosures contained in $\mathcal R$, but that the decomposition into a direct sum of minimal enclosures, as in Lemma \ref{lemma_unicitydec}, is non-unique.
Then \begin{equation}\label{eq_egalitedimensions} \dim\,E_1= \dim \,E_2. \end{equation} If, in addition, $E_1\perp E_2$, then there exists a partial isometry $Q$ from $E_1$ to~$E_2$ satisfying \begin{equation}\label{eq_partialisometry} Q^* \,Q = \mathrm{Id}_{|E_1}\qquad Q\,Q^* = \mathrm{Id}_{|E_2} \end{equation} and for any $\rho$ in $\mathcal I_1(\mathcal H)$, for $R=Q+Q^*$, and $P_i=P_{E_i}$, $i=1,2$: \begin{equation}\label{eq_commpartialisom} R\,\mathfrak{M}(\rho)\, P_i + P_i\, \mathfrak{M}(\rho)\, R=\mathfrak{M}\big(R\,\rho\, P_i + P_i\,\rho\, R\big). \end{equation} \end{coro} \noindent{\bf Proof:} Assume that there exists a minimal enclosure $\mathcal W$ that is distinct from $E_1$ and non-orthogonal to it. Then by point $2$ of Proposition \ref{prop_coherence}, $E_1\cap \mathcal W^\perp$ is an enclosure contained in $E_1$. By minimality of $E_1$ and non-orthogonality between those two enclosures, $E_1\cap\mathcal W^\perp=\{0\}$. Therefore $\dim E_1 \leq \dim\mathcal W$, and by symmetry one has the equality $\dim E_1 = \dim\mathcal W$. If $E_1\not\perp E_2$, this yields equality \eqref{eq_egalitedimensions}. Otherwise, the non-unicity of the decomposition implies the existence of minimal enclosures $\widetilde E_1$ and $\widetilde E_2$ such that \[E_1 \oplus E_2=\widetilde E_1 \oplus \widetilde E_2,\] and one can assume that \textit{e.g.} $\widetilde E_1$ is distinct from both $E_1$ and $E_2$. Necessarily $\widetilde E_1$ is also non-orthogonal to both $E_1$ and $E_2$, and taking $\mathcal W= \widetilde E_1$ we recover equality~\eqref{eq_egalitedimensions}. Assume now that $E_1\perp E_2$. By the above discussion, there exists a minimal enclosure $\mathcal W$ distinct from $E_1$ and non-orthogonal to~it. Denote by $P_{1}$, $P_{2}$, $P_{\mathcal W}$ the orthogonal projections on $E_1$, $E_2$, $\mathcal W$ respectively.
Define the map $\mathfrak N$ on~$\mathcal B(\mathcal H)$ by \[\mathfrak N : X\mapsto P_{\mathcal R}\,\mathfrak{M}^*(X)\,P_{\mathcal R}.\] One sees immediately that if $E=E_1$, $E_2$ or $\mathcal W$, then $P_{E}$ is (up to a multiplicative constant) the unique invariant of $\mathfrak N_{|\mathcal B(E)}$. Consider the decomposition of $P_{\mathcal W}=\begin{pmatrix}A& B^*\\B&C\end{pmatrix}$ in the splitting $\mathcal V=E_1\oplus E_2$, where necessarily $B\neq 0$. A simple consequence of Proposition \ref{prop_coherence} is that in the same decomposition, $\mathfrak{N}(P_{\mathcal W})=\begin{pmatrix}\mathfrak{N}(A)& \mathfrak{N}(B)^*\\ \mathfrak{N}(B)&\mathfrak{N}(C)\end{pmatrix}$. Therefore $A$ is proportional to $P_{1}$ and $C$ to $P_{2}$. Writing the relations $P_{\mathcal W}=P_{\mathcal W}^*=P_{\mathcal W}^2$, one sees that $B$ must be proportional to an operator $Q$ satisfying the relations~\eqref{eq_partialisometry}. In addition, fixing that same operator $Q$, for $\theta\in[0,\pi]$, the operator \[P_{\theta}=\begin{pmatrix}\cos^2 \theta & \sin \theta\cos\theta\, Q^*\\ \sin \theta\cos\theta\, Q & \sin^2\theta \end{pmatrix} \] is an orthogonal projection preserved by the map $\mathfrak{N}$. Its range is therefore an enclosure and, by point $3$ of Proposition \ref{prop_coherence}, $P_\theta$ satisfies the relation \[\mathfrak{M}(P_{\theta} \, \rho \, P_{\theta})= P_{\theta} \,\mathfrak{M}(\rho)\,P_{\theta}.\] Differentiating this relation with respect to $\theta$, we have \[ \mathfrak{M}\big(\frac{\mathrm d P_\theta}{\mathrm d\theta} \, \rho \, P_{\theta}+ P_{\theta} \, \rho \, \frac{\mathrm d P_\theta}{\mathrm d\theta}\big)=\frac{\mathrm d P_\theta}{\mathrm d\theta} \, \mathfrak{M}(\rho) \, P_{\theta}+ P_{\theta} \, \mathfrak{M}(\rho) \, \frac{\mathrm d P_\theta}{\mathrm d\theta}. \] Computing the derivatives at $\theta=0$ and $\theta=\pi/2$, we obtain relation \eqref{eq_commpartialisom}.
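Indeed, in the block decomposition $E_1\oplus E_2$ a direct computation gives \[\frac{\mathrm d P_\theta}{\mathrm d\theta}=\begin{pmatrix}-\sin 2\theta & \cos 2\theta\, Q^*\\ \cos 2\theta\, Q & \sin 2\theta\end{pmatrix},\] so that at $\theta=0$ one has $P_\theta=P_1$ and $\frac{\mathrm d P_\theta}{\mathrm d\theta}=R$, while at $\theta=\pi/2$ one has $P_\theta=P_2$ and $\frac{\mathrm d P_\theta}{\mathrm d\theta}=-R$; in the latter case the two signs cancel, and both values of $\theta$ yield relation \eqref{eq_commpartialisom}.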
$\Box$ \smallskip \begin{coro}\label{coro_partialisom2} Assume that $\mathcal V = E_1\oplus E_2$ where $E_1$ and $E_2$ are mutually orthogonal minimal enclosures, contained in $\mathcal R$, but that the decomposition into a direct sum of minimal enclosures is non-unique. Denote by $\rho^{\mathrm{inv}}_i$ the unique invariant state with support in $E_i$, $i=1,2$, and by $Q$ the partial isometry defined in Corollary \ref{coro_partialisom}. Then $\rho^{\mathrm{inv}}_2=Q\, \rho^{\mathrm{inv}}_1 \, Q^*$. If $\rho$ is an invariant state with support in $\mathcal V$, write $\rho=\begin{pmatrix}\rho_{1,1} & \rho_{1,2}\\ \rho_{2,1} & \rho_{2,2}\end{pmatrix}$. Then: \begin{itemize} \item $\rho_{1,1}$ is proportional to $\rho^{\mathrm{inv}}_1$, \item $\rho_{2,2}$ is proportional to $\rho^{\mathrm{inv}}_2$, \item $\rho_{1,2}$ is proportional to $\rho^{\mathrm{inv}}_1\, Q^*=Q^*\rho^{\mathrm{inv}}_2$, \item $\rho_{2,1}$ is proportional to $\rho^{\mathrm{inv}}_2\, Q=Q\rho^{\mathrm{inv}}_1$. \end{itemize} \end{coro} \noindent{\bf Proof:} The first identity is obtained by applying relation \eqref{eq_commpartialisom} to $\rho=\rho^{\mathrm{inv}}_1$ with $P_1$, then applying it again to the resulting relation, this time with $P_2$. That each $\rho_{i,j}$ is an invariant is an immediate consequence of Proposition \ref{prop_coherence}. The relation satisfied by $\rho_{1,2}$ and $\rho_{2,1}$ is then obtained by applying relation~\eqref{eq_commpartialisom} to \textit{e.g.} $\rho_{1,2}$, with $P_1$ or $P_2$. $\Box$ \medskip We are now in a position to state the relevant decomposition associated to an open quantum random walk $\mathfrak M$. \begin{prop}\label{prop_finaldec} Let $\mathfrak M$ be an OQRW on $\mathcal H=\bigoplus_{i\in V}\mathfrak h _i$. 
There exists an orthogonal decomposition of $\mathcal H$ in the form \begin{equation}\label{eq_finaldec} \mathcal H = \mathcal D \oplus \bigoplus_{\alpha \in A}\mathrm{Enc}({x_{\alpha}}\otimes\vec{i_{\alpha}})\oplus \bigoplus_{\beta \in B}\bigoplus_{\gamma \in C_\beta} \mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}}) \end{equation} such that the sets $A$, $B$, $C_\beta$ are at most countable, $A$ and $B$ can be empty (but not simultaneously), any $C_\beta$ has cardinality at least two, and: \begin{itemize} \item every $\mathrm{Enc}({x_{\alpha}}\otimes\vec{i_{\alpha}})$ or $\mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}})$ in this decomposition is a minimal enclosure, and therefore an equivalence class for $\overset{\mathfrak M}{\leftrightarrow}$, \item for $\alpha$ in $A$, the only minimal enclosure not orthogonal to $\mathrm{Enc}({x_{\alpha}}\otimes\vec{i_{\alpha}})$ is $\mathrm{Enc}({x_{\alpha}}\otimes\vec{i_{\alpha}})$ itself, \item for $\beta$ in $B$ and $\gamma\in C_\beta$, any minimal enclosure that is not orthogonal to~$\mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}})$ is contained in $\bigoplus_{\gamma \in C_\beta} \mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}})$. \end{itemize} \end{prop} \noindent{\bf Proof:} We start with the decomposition $\mathcal H = \mathcal D \oplus \mathcal R$, and proceed to decompose~$\mathcal R$. Consider the set of all minimal enclosures $\mathrm{Enc}(x\otimes\vec i)$ with the property that the only minimal enclosure non-orthogonal to $\mathrm{Enc}(x\otimes\vec i)$ is $\mathrm{Enc}(x\otimes\vec i)$ itself. By separability this set is at most countable. We can label these enclosures $\mathrm{Enc}(x_\alpha\otimes \vec{i_\alpha})$, $\alpha \in A$. Let $\mathcal O=\bigoplus_{\alpha\in A}\mathrm{Enc}(x_\alpha\otimes\vec {i_\alpha})$. 
Then $\mathcal O$ is an enclosure, and if $\mathcal R \cap \mathcal O^\perp\neq \{0\}$ then, by point $2$ of Proposition \ref{prop_coherence}, it is also an enclosure and we proceed to decompose it. Consider families of minimal enclosures labeled by a set $C$, $\{\mathrm{Enc}({x_\gamma}\otimes\vec{i_\gamma}), \,\gamma\in C\}$ with the property that any minimal enclosure that is not orthogonal to the space $\bigoplus_{\gamma\in C}\mathrm{Enc}({x_\gamma}\otimes\vec{i_\gamma})$ is contained in $\bigoplus_{\gamma\in C}\mathrm{Enc}({x_\gamma}\otimes~\vec{i_\gamma})$; by the assumption that $\mathcal R \cap \mathcal O^\perp\neq \{0\}$ this set is not empty. Pick a maximal such family, and index it as $\{\mathrm{Enc}({x_{1,\gamma}}\otimes\vec{i_{1,\gamma}}), \,\gamma\in C_1\}$. By point 2 of Proposition \ref{prop_coherence} and Lemma \ref{lemma_lemenc}, one can assume that the different enclosures in this family are mutually orthogonal. If \[\mathcal R\cap \mathcal O^\perp \cap \big(\bigoplus_{\gamma\in C_1}\mathrm{Enc}({x_{1,\gamma}}\otimes\vec{i_{1,\gamma}})\big)^\perp\neq \{0\}\] we can iterate this process. $\Box$ \begin{remark}\label{remark_classicalorthogonal2} By Remark \ref{remark_classicalorthogonal} and Lemma \ref{lemma_unicitydec}, any minimal dilation~$\mathfrak{M}$ of a classical Markov chain is simply of the form $\mathcal H =\mathcal D \oplus \bigoplus_{\alpha \in A}\mathrm{Enc}({x_{\alpha}}\otimes\vec{i_{\alpha}})$. \end{remark} \smallskip We will use this decomposition to characterize the form of stationary states. Before we state our next result, let us give some notation. We fix a decomposition \eqref{eq_finaldec} as considered in Proposition \ref{prop_finaldec}. 
We define for every $\alpha \in A$ and $(\beta,\gamma)\in B\times C_\beta$ the following orthogonal projections (for $\mathcal V$ a subspace of $\mathcal H$, the orthogonal projection on $\mathcal V$ is denoted $P_{\mathcal V}$): \[P_0 = P_{\mathcal D}\qquad P_\alpha = P_{\mathrm{Enc}({x_\alpha}\otimes\vec{i_\alpha})}\qquad P_{\beta,\gamma}=P_{\mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}})}\] and for a state $\rho$, and indices $i$, $j$ taking the values $0$, $\alpha \in A$ or $(\beta,\gamma)\in B\times C_\beta$ \begin{equation} \label{eq_decomprho} \rho_i=P_i \, \rho \, P_i\qquad \rho_{i,j}= P_i \, \rho \, P_j. \end{equation} When $\mathcal V$ is a subspace of $\mathcal H$ such that $\mathcal I_1(\mathcal V)$ is stable by $\mathfrak M$, we will talk about the restriction $\mathfrak M_{|\mathcal V}$ of $\mathfrak M$ to $\mathcal V$ (instead of the restriction $\mathfrak M_{|\mathcal I_1(\mathcal V)}$ of~$\mathfrak M$ to $\mathcal I_1(\mathcal V)$). In addition, for $i$ taking the values $\alpha\in A$ or $(\beta,\gamma)\in B\times C_\beta$, we denote by~$\rho^{\mathrm{inv}}_i$ the unique invariant state of $\mathfrak M_{|\mathrm{Enc}({x_{\alpha}}\otimes\vec{i_{\alpha}})}$ or $\mathfrak M_{|\mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}})}$. \begin{theo}\label{theo_invariantstates} Let $\rho$ be a $\mathfrak M$-invariant state with $\mathcal H$ separable. 
With the notation~\eqref{eq_decomprho} we have \begin{enumerate} \item $\rho_0=0$, \item every $\rho_{\alpha}$ is proportional to $\rho^{\mathrm{inv}}_\alpha$, which has support $\mathrm{Enc}({x_{\alpha}}\otimes\vec{i_{\alpha}})$, \item every $\rho_{(\beta,\gamma)}$ is proportional to $\rho^{\mathrm{inv}}_{(\beta,\gamma)}$, which has support $\mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}})$, \item for $\gamma\neq \gamma'$ in $C_\beta$, the off-diagonal term $\rho_{((\beta,\gamma),(\beta,\gamma'))}$, which we simply denote by $\rho_{(\beta,\gamma,\gamma')}$, may be non-zero, and is an invariant of $\mathfrak{M}$. In addition, there exists a partial isometry $Q_{(\beta,\gamma,\gamma')}$ from $\mathrm{Enc}({x_{\beta,\gamma}}\otimes\vec{i_{\beta,\gamma}})$ to $\mathrm{Enc}({x_{\beta,\gamma'}}\otimes\vec{i_{\beta,\gamma'}})$ such that: \begin{itemize} \item $\rho^{\mathrm{inv}}_{(\beta,\gamma')}=Q_{(\beta,\gamma,\gamma')}\, \rho^{\mathrm{inv}}_{(\beta,\gamma)}\, Q_{(\beta,\gamma,\gamma')}^*$ \item $\rho_{(\beta,\gamma,\gamma')}$ is proportional to $Q^*_{(\beta,\gamma,\gamma')}\,\rho^{\mathrm{inv}}_{(\beta,\gamma')}=\rho^{\mathrm{inv}}_{(\beta,\gamma)}\, Q^*_{(\beta,\gamma,\gamma')}$, \end{itemize} \item all other $\rho_{i,j}$ (with $i$, $j$ taking the values $0$, $\alpha \in A$ or $(\beta,\gamma)\in B\times C_\beta$) are zero. \end{enumerate} \end{theo} \noindent{\bf Proof:} This follows from repeated application of Propositions \ref{prop_coherence} and \ref{prop_finaldec}, and Corollary \ref{coro_partialisom2}. $\Box$ \begin{remark} Our main comment here is that there may exist ``coherences'' between minimal blocks, \textit{i.e.} non-zero off-diagonal blocks $\rho_{i,j}$, for $i,j$ corresponding to distinct minimal irreducible blocks. Invariant states are not, contrary to the classical case, just convex combinations of states invariant for the reduced (irreducible) dynamics. We will observe this in Example \ref{example-nonuniquedec}.
Note, however, that according to Remark \ref{remark_classicalorthogonal2}, this cannot happen for minimal dilations of classical Markov chains. \end{remark} \begin{remark} One might have hoped that a relevant decomposition of $\mathfrak{M}$ would separate sites, \textit{i.e.} that one could decompose $\mathcal R$ into a sum of minimal enclosures $\bigoplus\mathrm{Enc}({x_k}\otimes\vec{i_k})$ with $\mathrm{Enc}({x_k}\otimes\vec{i_k}) \subset \bigoplus_{i\in I_k}\mathfrak h_i$ for disjoint $I_k$. This is not true, as Example \ref{example_nodisconnect} shows. \end{remark} \begin{example}\label{example_nodisconnect} Consider again Example \ref{ex_2}. We have a unique decomposition of $\mathcal H=\mathfrak h~\otimes~\ell^2(V)$ as a sum of minimal enclosures, $$ \mathfrak h \otimes \ell^2(V) = \mathrm{Enc}({e_1}\otimes\vec{1})\oplus\mathrm{Enc}( {e_2}\otimes\vec{1})$$ even though the two minimal enclosures $$ \mathrm{Enc}( {e_k}\otimes\vec{1})= \mathbb{C}\,{e_k}\otimes \ell^2(V), \quad k=1,2, $$ connect all three sites. Note also that, in accordance with Lemma \ref{lemma_unicitydec}, the two minimal enclosures are mutually orthogonal. \end{example} \begin{remark} Applying Theorem \ref{theo_invariantstates} and the Frigerio-Verri ergodic theorem (see~\cite{FV}) one can obtain results about the ergodic behaviour of $(\mathfrak{M}^n(\rho))_n$, which extend Proposition~\ref{prop_ergodicconvergence} to the reducible case. This will be done in a forthcoming article. However, in certain cases, the results given in the present article can be enough to describe convergence in reducible OQRWs: see Example \ref{ex_apss}. \end{remark} \section{Extensions of open quantum random walks}\label{section_extension} In this section, we define an extension of open quantum random walks, already mentioned in Remark \ref{remark_extensions}. We consider again a countable set of vertices $V$ and a separable Hilbert space $\mathcal H = \bigoplus_{i\in V} \mathfrak h _i$.
An extended open quantum random walk will be a map $\widetilde{\M} : \mathcal I_1(\mathcal H) \to \mathcal I_1(\mathcal H)$ such that if $\rho=\sum_{i,j\in V}\rho(i,j)\otimes \ketbra ij$ then \begin{equation}\label{eq_extendedoqrw} \widetilde{\M}(\rho) = \sum_{i\in V} \Big( \sum_{j\in V} \Phi_{i,j}\big(\rho(j,j)\big)\Big)\otimes \ketbra ii \end{equation} where each $\Phi_{i,j}$ is a completely positive map from $\mathcal I_1(\mathfrak h_j)$ to $\mathcal I_1(\mathfrak h_i)$ such that, for any $j$ in $V$, \begin{equation}\label{eq_stochasticphi} \sum_{i\in V}\Phi_{i,j}^*(\mathrm{Id}_{\mathfrak h_i})= \mathrm{Id}_{\mathfrak h_j}. \end{equation} This defines a transition operation matrix in the sense of Gudder (see \cite{Gudder}). Again, this $\widetilde{\M}$ maps $\mathcal I_1(\mathcal H)$ to the set $\mathcal I_{\mathcal D}$ of block diagonal trace-class operators (see section \ref{section_OQRWs}). In addition, the Kraus representation associates to each $\Phi_{i,j}$ a countable set $E(j,i)$ and, for every $e\in E(j,i)$, a map $L_e$ from $\mathfrak h_j$ to $\mathfrak h_i$ such that $\Phi_{i,j}$ can be written as \[\Phi_{i,j}(\rho)=\sum_{e\in E(j,i)} L_e^{\,} \,\rho \, L_e^* \quad \mbox{for any }\rho\in \mathcal I_1(\mathfrak h_j).\] We view the operators $L_e$ as associated to the edges of a directed multigraph~$(V,E)$ where $E=\cup_{i,j\in V} E(j,i)$. Then if we denote by $E(j)=\cup_{i\in V}E(j,i)$ the set of outgoing edges at $j$, the stochasticity condition \eqref{eq_stochasticphi} becomes similar to~\eqref{eq_stochastic}: \[\forall j\in V\quad \sum_{e\in E(j)} L_e^* L_e^{\,} = \mathrm{Id}.\] This shows that the present framework encompasses open quantum random walks as defined in the rest of this article. Moreover, note that the power $\mathfrak{M}^n$ of an OQRW $\mathfrak{M}$ is not in general an OQRW, but is always an extended OQRW. All the results of the previous sections can be extended to this more general class of evolutions.
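To fix ideas, the following Python sketch (a toy illustration; all operator choices below are ours, not taken from the text) implements the action of the map \eqref{eq_extendedoqrw} on diagonal blocks for a two-site extended OQRW in which one transition carries two Kraus operators, so that $\widetilde{\M}$ is not an OQRW in the strict sense; the stochasticity condition \eqref{eq_stochasticphi} and trace preservation are verified numerically:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
B = np.array([[0, 1], [1, 0]], dtype=complex)

# E[(j, i)]: Kraus operators of the CP map Phi_{i,j} : I_1(h_j) -> I_1(h_i).
# Illustrative choices; the edge set E(0, 1) carries TWO Kraus operators.
E = {
    (0, 0): [I2 / np.sqrt(2)],
    (0, 1): [I2 / 2, B / 2],   # Phi_{1,0}: two Kraus operators on one transition
    (1, 0): [I2],              # Phi_{0,1}: the identity map
    (1, 1): [],                # Phi_{1,1} = 0
}

def phi(i, j, rho):
    """Phi_{i,j}(rho) = sum over e in E(j,i) of L_e rho L_e^*."""
    return sum((Le @ rho @ Le.conj().T for Le in E[(j, i)]), np.zeros_like(rho))

def M_tilde(blocks):
    """blocks[j] = rho(j,j); returns the diagonal blocks of M~(rho)."""
    return [sum(phi(i, j, blocks[j]) for j in range(2)) for i in range(2)]

# Stochasticity (eq_stochasticphi): sum of L_e^* L_e over outgoing edges = Id
for j in range(2):
    tot = sum((Le.conj().T @ Le for i in range(2) for Le in E[(j, i)]),
              np.zeros((2, 2), dtype=complex))
    assert np.allclose(tot, I2)

rho0 = [I2 / 2, np.zeros((2, 2), dtype=complex)]  # state concentrated at site 0
rho1 = M_tilde(rho0)
total = sum(np.trace(b).real for b in rho1)       # trace is preserved
```

The diagonal-block action above also yields the law of the associated classical process, since $\mathbb{P}(\widetilde Q_n=i)=\mathrm{Tr}\,\widetilde{\M}^n(\rho,i)$.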
\smallskip As in section \ref{section_OQRWs}, starting from a state $\rho= \sum_{i\in V}\rho(i)\otimes\ketbra ii$ we can define processes ``without measurement" $(\widetilde Q_n,\frac{\widetilde{\M}^n(\rho,\widetilde Q_n)}{\mathrm{Tr}\,\widetilde{\M}^n(\rho,\widetilde Q_n)})_{n\in \mathbb{N}}$: denote \[\widetilde{\M}^n(\rho)=\sum_{i\in V}\widetilde{\M}^n(\rho,i)\otimes\ketbra ii. \] Then the process ``without measurement" is determined by the variable $\widetilde Q_n$, with law \[\mathbb{P}(\widetilde Q_n=i)= \mathrm{Tr} \,\widetilde{\M}^n(\rho,i)\] and the process ``with measurement" $(\widetilde X_n, \widetilde \rho_n)_{n\in \mathbb{N}}$ by \[(\widetilde X_0,\widetilde \rho_0)=\big(j,\rho(j)\big) \mbox{ with probability }\mathrm{Tr}\,\rho(j)\] \[ \mathbb{P}\Big((\widetilde X_{n+1},\widetilde \rho_{n+1})\!=\!(i,\!\frac{\Phi_{i,j}(\widetilde \rho_n)}{\mathrm{Tr}\,\Phi_{i,j}(\widetilde \rho_n)}\!)\Big|(\widetilde X_n,\widetilde \rho_n)\!=\!(j,\widetilde\rho_n)\Big)=\mathrm{Tr} \,\Phi_{i,j}(\widetilde\rho_n)\quad \forall i\in V.\] Note that these classical processes associated to $\widetilde{\M}$ were not considered in \cite{Gudder}. \smallskip We claim that our vision of open quantum random walks in terms of paths $\pi$ in~$\mathcal P(i,j)$ on a directed graph extends to this framework, with paths $\tilde\pi$ in~$\widetilde{\P}(i,j)$ on a directed multigraph. \smallskip In particular, we recover all results from sections \ref{section_irreducibility} through \ref{section_stationary}, replacing $\mathcal P$ with $\widetilde{\P}$ in our assumptions, and $Q_n, \mathfrak{M}^n(\rho,i), X_n, \rho_n$ with $\widetilde Q_n, \widetilde{\M}^n(\rho,i), \widetilde X_n, \widetilde\rho_n$. 
More precisely, Proposition \ref{prop_ergodicityOQRW} and Definition \ref{defi_irroqrw} on irreducibility, as well as Lemma \ref{prop_caracaperiodicite} and Theorem \ref{theo_caracaperiodicite} on the period, extend to $\widetilde{\M}$ by simply replacing every $\mathcal P$ with $\widetilde{\P}$. Proposition \ref{prop_eqperiodicity} holds if \eqref{prop_eqperiodicity} becomes \[ P_{k,i} L_e = L_e P_{k\oset{{\tiny d}}{-} 1,j}\quad \forall\, e\in E(j,i). \] And similarly Corollary \ref{coro_perturbation} holds if relation \eqref{eq_coroperturbation} becomes \[ \forall x\in \mathfrak h_i,\ \exists \, e\in E(i,i) \mbox{ such that }\langle x,L_e x\rangle\neq0. \] Then, the whole of section \ref{section_recurrence} holds if the processes $Q_n, \mathfrak{M}^n(\rho,i), X_n, \rho_n$ are replaced with $\widetilde Q_n, \widetilde{\M}^n(\rho,i), \widetilde X_n, \widetilde\rho_n$. Similarly, sections \ref{section_nonirreducible} and \ref{section_stationary} remain the same, replacing $\mathcal P$ with $\widetilde{\P}$ in the definition of enclosures. \section{Examples and applications}\label{section_examples} \def\overset{\scriptscriptstyle n}{+}{\overset{\scriptscriptstyle n}{+}} \def\overset{\scriptscriptstyle n}{-}{\overset{\scriptscriptstyle n}{-}} \begin{example}\label{ex_homogeneous} We start with an application to space-homogeneous open quantum random walks on a graph associated with a set of generators of a group. This applies in particular to open quantum random walks on $\mathbb{Z}^d$, which we study in \cite{CP2}. To be more precise, we assume that $V$ is a set of vertices in an additive (abelian) group $G$, that $\mathfrak h_i = \mathfrak h$ does not depend on $i$, and that there is a finite set $S\subset G$ such that $L_{i,j}=L_{j-i}$ depends only on $j-i$, and is zero unless~$j-i \in S$. 
\def\eta^{\mathrm{inv}}{\eta^{\mathrm{inv}}} We associate to this OQRW the map \begin{equation}\label{eq_defL} \begin{array}{cccc} \mathfrak L :& \mathcal I (\mathfrak h) & \to & \mathcal I (\mathfrak h) \\ &\eta & \mapsto & \sum_{s\in S} L_s\, \eta \, L_s^* \end{array}. \end{equation} If $\mathfrak{M}$ is irreducible, then clearly $\mathfrak L$ is also irreducible, and by Proposition \ref{prop_ergodicconvergence}, it has at most one invariant state which we then denote by $\eta^{\mathrm{inv}}$. Note that, if $\mathfrak h$ is finite-dimensional, then this $\eta^{\mathrm{inv}}$ exists. \begin{remark} From Lemma \ref{lemma_ergodicityCPTPmaps}, one easily sees that $\mathfrak L$ is irreducible if and only if the operators $L_s$, $s\in S$, have no non-trivial common invariant subspace. This criterion is stated, in particular, in \cite{Fare}. \end{remark} \begin{prop}\label{prop_inexistenceetatinv} Assume $\mathfrak{M}$ as above is irreducible. \begin{itemize} \item If $V$ is infinite, then $\mathfrak{M}$ does not have an invariant state. \item If $V$ is finite, then $\mathfrak L$ has an invariant state $\eta^{\mathrm{inv}}$ and the unique invariant state of $\mathfrak{M}$ is $$ \sum_{i\in V} \frac{\eta^{\mathrm{inv}}}{\mathrm{card}\,V} \otimes |i\rangle \langle i |.$$ \end{itemize} \end{prop} \noindent{\bf Proof:} Assume there exists an invariant state $\rho^{\mathrm{inv}}$. Since $\mathfrak{M}$ is invariant by translation, any translation of that state is also an invariant state, so by Theorem~\ref{theo_unicity}, the state $\rho^{\mathrm{inv}}$ is translation-invariant. It must therefore be of the form $\sum_{v\in V} \rho\otimes | v\rangle \langle v|.$ If $V$ is infinite, this has trace either infinite or null and in either case this is a contradiction. If $V$ is finite then it is easy to see that $\rho$ must be an invariant of $\mathfrak L$. 
$\Box$ \smallskip The Perron-Frobenius theorem for CP maps, Proposition \ref{prop_ergodicconvergence}, allows us to obtain a large deviation principle and a central limit theorem for the position process $(X_n)_{n\in\mathbb{N}}$ associated with an open quantum random walk $\mathfrak{M}$ and an initial state $\rho$ (see section \ref{section_OQRWs}), therefore extending the results of \cite{AGS}. In addition, we can also make more precise the convergence of the sequence of states $(\rho_n)_{n\in\mathbb{N}}$ (still using the notations of section \ref{section_OQRWs}). This will be done in a separate paper \cite{CP2} studying in detail OQRWs on~$\mathbb{Z}^d$. \end{example} \medskip \begin{example}\label{ex_apss} We consider the example given in section 12.1 of \cite{APSS}. In our notation this example is given by $V=\{1,2\}$, $\mathfrak h=\mathbb{C}^2$ (with canonical basis $(e_1,e_2)$) and transitions given by \[ L_{1,1}=\begin{pmatrix}a & 0 \\ 0 & b\end{pmatrix}\quad L_{1,2}=\begin{pmatrix}0 & \!\!\sqrt {p\,} \\ 0 & 0\end{pmatrix} \quad L_{2,2}=\begin{pmatrix}1 & 0 \\ 0 & \!\!\sqrt {q\,}\end{pmatrix}\quad L_{2,1}=\begin{pmatrix}c & 0 \\ 0 & d\end{pmatrix}\] where we assume $q=1-p\in(0,1)$, $|a|^2+|b|^2=|c|^2+|d|^2=1$, $0<|a|^2,|c|^2<1$. Note that we do not need the additional assumptions $a\neq b$, $c\neq d$, $ab\neq \sqrt{q\,}$, $a^2\neq q$, $b^2\neq q$ made in \cite{APSS}. First observe that the only minimal enclosure is \[ {\rm Enc}(e_1\otimes \vec 2) ={\rm Vect}(e_1\otimes \vec 2). \] Indeed, \begin{itemize} \item $\mathrm{Enc}(e_1\otimes \vec 1)$ obviously contains $\mathrm{Enc}(L_{2,1}e_1\otimes \vec 2)=\mathrm{Enc}(e_1\otimes \vec 2)$; \item $\mathrm{Enc}(x\otimes \vec 2)$ contains $\mathrm{Enc}(L_{1,2}x\otimes \vec 1)$ and if $x=x_1e_1+x_2e_2$ with $x_2\neq 0$, this contains $\mathrm{Enc}(e_1\otimes \vec 1)$;
\item $\mathrm{Enc}(x\otimes \vec 1)$ contains $\mathrm{Enc}(L_{2,1}x\otimes \vec 2) ={\rm Enc}\big((cx_1e_1+dx_2e_2)\otimes \vec 2\big)$, and if $x_2$ is nonzero, then we fall in the previous case and conclude. \end{itemize} Therefore the decomposition \eqref{eq_finaldec} is given by \[\mathfrak h \otimes \ell^2(V)= \mathcal D \oplus \big\{\!\begin{pmatrix}a\\0\end{pmatrix}\otimes \vec 2,\ a\in\mathbb{C}\big\}.\] By the equivalent definition of $\mathcal D$ given in Remark \ref{remark_RDfinite}, any eigenvector associated to an eigenvalue of modulus one must be orthogonal to $\mathcal D$. So the OQRW $\mathfrak{M}$ has a unique eigenvalue of maximum modulus, which is the simple eigenvalue~$1$ associated with the eigenvector \[\rho^{\mathrm{inv}}=\begin{pmatrix}1 & 0 \\ 0 & 0\end{pmatrix}\otimes \ketbra 22\] and this implies that, for any initial state $\rho$, one has $\mathfrak{M}^n(\rho)\rightarrow\rho^{\mathrm{inv}}$ as $n\to\infty$. \end{example} \begin{example}\label{ex_Vn} \def\ind{1\hspace{-0.27em}\mathrm{l}} We consider a family of examples which extends the main example given in \cite{APSS}. This family is indexed by $n\in\mathbb{N}^*\cup\{\infty\}$; every $\mathfrak h$ is $\mathbb{C}^2$ and $V$ is either $V_n=\{1,\ldots,n\}$ or $V_\infty=\mathbb{Z}$, and the operators $L_{i,j}$ are defined by $$ L_{i\oset{{\tiny n}}{+} 1,i}=L_+=\frac1{\sqrt 3} \begin{pmatrix} \hphantom{,}1 & 1 \\ \hphantom{,}0 & 1 \end{pmatrix},\qquad L_{i\oset{{\tiny n}}{-} 1,i}=L_-=\frac1{\sqrt 3}\begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix},$$ where $\oset{{\tiny n}}{+}$, $\oset{{\tiny n}}{-}$ denote addition or subtraction modulo $n$ in the case where $n<\infty$, and standard addition or subtraction if $n=\infty$. We denote by $\mathfrak{M}_{(n)}$ the above open quantum random walk. We first show that, in any case, this chain is irreducible, using the characterization given in Proposition \ref{prop_ergodicityOQRW}.
For this, fix $i$ and $j$ in $V$, and let $\Delta=i-j$. For~$p$ large enough, consider $\pi$ of the form $(i,i-1,\ldots,i-\Delta-p,i-\Delta-p+1,\ldots,j)$ (\textit{i.e.} one first moves down $p+\Delta$ times, then up $p$ times); one then has \begin{eqnarray*} L_\pi&=&L_+^{p}L_-^{\Delta+p}\\ &=& 3^{-p-\Delta/2} [ \left( \begin{array}{cc} 1 & 0 \\ -\Delta & 1 \end{array}\right) + p \left( \begin{array}{cc} -\Delta & 1 \\ -1 & 0 \end{array}\right) - p^2 \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)]. \end{eqnarray*} Assume that some vectors $x_i=\begin{pmatrix} a_i\\ b_i \end{pmatrix}$ and $x_j=\begin{pmatrix} a_j\\ b_j\end{pmatrix}$ satisfy $ \langle x_j, L_+^{p}L_-^{\Delta+p}\,x_i\rangle~=~0$ for arbitrarily large $p$. Then one must have $$ \langle x_j, \begin{pmatrix} 1 & 0 \\ -\Delta & 1 \end{pmatrix}x_i\rangle = \langle x_j, \left(\begin{array}{cc} \Delta & -1 \\ 1 & 0 \end{array}\right) x_i\rangle = \langle x_j, \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right) x_i\rangle =0.$$ By inspection we see that these conditions imply $a_i=b_i=0$ or $a_j=b_j=0$. Therefore, the set of vectors $L_\pi x_i$ is total in $\mathfrak h_j$ for any choice of $x_i\neq 0$. We now discuss the period. First notice that, for any nonzero vector $x$ in~$\mathbb C^2$, we always have either $\langle x,L_+L_- x\rangle\neq 0$ or $\langle x,L_-L_+ x\rangle\neq 0$. This implies that $D(i,x)\in \{1,2\}$ (just using relation \eqref{def_Dix}) for all $i\in V$ and all $x$, so, by Theorem \ref{theo_caracaperiodicite}, the period can be only $1$ or $2$. If $n$ is odd, then for $p\in \mathbb N^*$, consider $x=\begin{pmatrix}a\\b\end{pmatrix}\neq 0$.
Then \begin{equation}\label{eq_periodVn} \braket{x}{L_+^{pn}\,x}=\braket{\begin{pmatrix}a\\b\end{pmatrix}}{\frac1{3^{pn/2}}\begin{pmatrix}1 & np\\0 & 1 \end{pmatrix}\begin{pmatrix}a\\b\end{pmatrix}} = \frac1{3^{pn/2}} \, (|a|^2+np\, \overline{a}\,b + |b|^2) \end{equation} (this quantity is associated with the path $\pi=1,\ldots, n,\ldots, 1,\ldots, n, 1$ starting from $1$ and going ``up", doing $p$ loops before stopping at $1$). Since $x\neq 0$, the quantity \eqref{eq_periodVn} is zero for at most one $p$, so $D(1,x)$, defined in \eqref{def_Dix}, divides $pn$ for any large enough $p\in\mathbb N^*$. In particular $D(1,x)$ divides $\gcd\big(pn,(p+1)n\big)=n$, which is odd; since $D(1,x)\in\{1,2\}$, this forces $D(1,x)=1$ and, by Theorem \ref{theo_caracaperiodicite}, the period is~$1$. By translation-invariance, $D(i,x)=1$ for all $i$ in $V$. On the other hand, if $n$ is even or infinite, it is clear that the chain has period $2$: the projections $$ P_{\mathrm{even}}=\sum_{i\,\mathrm{even}} \mathrm{Id}\otimes|i\rangle\langle i|\quad \mbox{and} \quad P_{\mathrm{odd}}=\sum_{i\,\mathrm{odd}} \mathrm{Id}\otimes|i\rangle\langle i|$$ are $\mathfrak M$-cyclic. We define one more open quantum random walk, to illustrate the method of ``adding loops" described in Remark \ref{remark_loops} to make an OQRW aperiodic: we define for $\varepsilon\in]0,1[$ the open quantum random walk $\mathfrak{M}_{(n,\varepsilon)}$ with sites $V_n$ and transition operators \[L_{i\oset{{\tiny n}}{+} 1,i}^{(\varepsilon)}=L_+^{(\varepsilon)}=\sqrt{1-\varepsilon\,}\,L_+\qquad L_{i\oset{{\tiny n}}{-} 1,i}^{(\varepsilon)}=L_-^{(\varepsilon)}=\sqrt{1-\varepsilon\,}\,L_-\quad L_{i,i}^{(\varepsilon)}=\sqrt{\varepsilon\,}\,\mathrm{Id}. \] Note that we consider this perturbation by ``adding a loop" at every site, because it simplifies both the computation of the invariant state, and the simulation. Then $\mathfrak{M}_{(n,\varepsilon)}$ is clearly irreducible and, from Corollary \ref{coro_perturbation}, it is aperiodic.
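A quick numerical sanity check (an illustrative sketch with our own function names, not part of the proofs) confirms that the maximally mixed state $\frac12\,\mathrm{Id}$ is preserved by the auxiliary map $\eta\mapsto\sum_s L_s\,\eta\,L_s^*$ of \eqref{eq_defL}, both for the operators $L_\pm$ and for their loop-perturbed versions:

```python
import numpy as np

L_plus = np.array([[1.0, 1.0], [0.0, 1.0]]) / np.sqrt(3)
L_minus = np.array([[1.0, 0.0], [-1.0, 1.0]]) / np.sqrt(3)

def auxiliary_map(eta, ops):
    """The map eta -> sum_s L_s eta L_s^* built from transition operators."""
    return sum(A @ eta @ A.conj().T for A in ops)

eta = np.eye(2) / 2          # the maximally mixed state Id/2
eps = 0.05
ops_eps = [np.sqrt(1 - eps) * L_plus, np.sqrt(1 - eps) * L_minus,
           np.sqrt(eps) * np.eye(2)]

print(np.allclose(auxiliary_map(eta, [L_plus, L_minus]), eta))  # True
print(np.allclose(auxiliary_map(eta, ops_eps), eta))            # True
```

The check boils down to the identity $L_+L_+^*+L_-L_-^*=\mathrm{Id}$, which holds for these particular matrices.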
For each choice of open quantum random walk $\mathfrak{M}_{(n)}$ (respectively $\mathfrak{M}_{(n,\varepsilon)}$) we associate a map $\mathfrak L_{(n)}$ (respectively $\mathfrak L_{(n,\varepsilon)}$) on $\mathcal I_1(\mathbb C^2)$, as in \eqref{eq_defL}. We can check that in all cases, the state $\frac12\,\mathrm{Id}$ on $\mathbb C^2$ is the only invariant state of that map. By Proposition \ref{prop_inexistenceetatinv}, for $n\in\mathbb N^*$, the unique invariant state of $\mathfrak{M}_{(n)}$ (respectively $\mathfrak{M}_{(n,\varepsilon)}$) is \[\rho^{\mathrm{inv}}=\sum_{i\in V_n} \frac1{2n}\,\mathrm{Id}\otimes \ketbra ii. \] We summarize our results: \begin{prop} Consider the open quantum random walks $\mathfrak{M}_{(n)}$ and $\mathfrak{M}_{(n,\varepsilon)}$ as above. We have: \begin{itemize} \item for every $n$ in $\mathbb N^*\cup\{\infty\}$, the OQRWs $\mathfrak{M}_{(n)}$ and $\mathfrak{M}_{(n,\varepsilon)}$ are irreducible, \item for $n$ in $2\,\mathbb N^*\cup\{\infty\}$ the OQRW $\mathfrak{M}_{(n)}$ has period 2, \item for $n$ in $2\,\mathbb N\!+\!1$ the OQRW $\mathfrak{M}_{(n)}$ is aperiodic, \item for $n$ in $\mathbb N^*\cup\{\infty\}$, the OQRW $\mathfrak{M}_{(n,\varepsilon)}$ is aperiodic, \item for $n$ in $\mathbb N^*$, the OQRWs $\mathfrak{M}_{(n)}$ and $\mathfrak{M}_{(n,\varepsilon)}$ have as unique invariant state \[\rho^{\mathrm{inv}}=\sum_{i\in V_n} \frac1{2n}\,\mathrm{Id}\otimes \ketbra ii. \] \end{itemize} \end{prop} \smallskip We now describe the results of numerical simulations. Because we cannot display all data, we choose to focus on what happens ``at site 1". We always start from the initial state $\rho=\begin{pmatrix}1 & 0 \\ 0 & 0 \end{pmatrix} \otimes |1\rangle \langle 1|$, but the phenomena are insensitive to the particular choice of $\rho$. Whenever we describe a state on $\mathbb{C}^2$, we give its $(1,1)$ and $(1,2)$ coordinates.
Note that: \begin{itemize} \item these two coordinates describe the state entirely, \item because of our choice of $\rho$ and $L_+$, $L_-$, those coordinates are real. \end{itemize} \medskip In every case, we display for different values of $n$: \begin{enumerate} \item the probability $\mathbb{P}(Q_n=1)$, and its average $\frac1n\sum_{k=0}^{n-1}\mathbb{P}(Q_k=1)$ (Figures~1,3,5, top row), \item the $(1,1)$ and $(1,2)$-coefficients of the (non-normalized) ``state at site $1$", \emph{i.e.} $\mathfrak M^n(\rho,1)$ (Figures~1,3,5, middle row), and of the average $\frac1n\sum_{k=0}^{n-1}\mathfrak M^k(\rho,1)$ (Figures~1,3,5, bottom row), \item the different values of $X_n$ in a (randomly chosen) quantum trajectory, and the proportion of $1$'s in $X_0,\ldots,X_{n-1}$ (Figures 2,4,6, top row), \item the $(1,1)$ and $(1,2)$-coefficients of the (normalized) state $\rho_k$ for those times~$k\leq n$ such that $X_k=1$ (Figures 2,4,6, middle row), and of the average $\frac1{N_{n,1}}\sum_{k=0}^{n-1}\rho_k\,\ind_{X_k=1}$ where $N_{n,1}$ is the number of $k$ in $\{0,\ldots,n-1\}$ such that $X_k=1$ (Figures 2,4,6, bottom row). \end{enumerate} We call the series of data 1 and 2 (corresponding to Figures 1,3,5) the data ``without measurement", and the series 3 and 4 (corresponding to Figures 2,4,6) the data ``with measurement". \medskip \noindent\textbf{Open quantum random walk $\mathfrak{M}_{(3)}$} We obtain numerically the data shown in Figures \ref{V3nonmeasurement} and \ref{V3measurement}. We observe all the convergences mentioned in Corollaries \ref{coro_rec1}, \ref{coro_rec2}, \ref{coro_rec3}.
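The data ``without measurement" can be reproduced by iterating $\mathfrak M$ directly on the block-diagonal state $\sum_i \mathfrak M^n(\rho,i)\otimes\ketbra ii$. The standalone sketch below (our own notation; sites $1,2,3$ are indexed $0,1,2$) does this for $\mathfrak{M}_{(3)}$ and illustrates the convergence of $\mathrm{Tr}\,\mathfrak M^n(\rho,1)$ to $1/3$ and of the block at site $1$ to $\frac16\,\mathrm{Id}$:

```python
import numpy as np

L_plus = np.array([[1.0, 1.0], [0.0, 1.0]]) / np.sqrt(3)
L_minus = np.array([[1.0, 0.0], [-1.0, 1.0]]) / np.sqrt(3)
n_sites = 3  # sites 1, 2, 3 of the text are indexed 0, 1, 2 here

def step(blocks):
    """One application of M to a block-diagonal state: site i receives
    L_+ rho(i-1) L_+^* from below and L_- rho(i+1) L_-^* from above."""
    return [L_plus @ blocks[(i - 1) % n_sites] @ L_plus.conj().T
            + L_minus @ blocks[(i + 1) % n_sites] @ L_minus.conj().T
            for i in range(n_sites)]

blocks = [np.array([[1.0, 0.0], [0.0, 0.0]]),
          np.zeros((2, 2)), np.zeros((2, 2))]
for _ in range(2000):
    blocks = step(blocks)
# P(Q_n = 1) = Tr M^n(rho, 1); it approaches 1/3, and the block itself Id/6.
print(np.trace(blocks[0]).real)
```

Averaging the iterates $\frac1n\sum_k \mathfrak M^k(\rho,1)$ is obtained the same way by accumulating the blocks along the loop.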
\begin{figure*}[H] \hspace{-2cm}\includegraphics[width=1.4\textwidth]{nontraj_V3.pdf} \caption{OQRW $\mathfrak{M}_{(3)}$, data without measurements} \label{V3nonmeasurement} \end{figure*} \begin{figure*}[H] \hspace{-2cm}\includegraphics[width=1.4\textwidth]{traj_V3.pdf} \caption{OQRW $\mathfrak{M}_{(3)}$, data with measurements} \label{V3measurement} \end{figure*} \smallskip \noindent\textbf{Open quantum random walk $\mathfrak{M}_{(4)}$} We obtain numerically the data in Figures \ref{V4nonmeasurement} and \ref{V4measurement}. We observe the convergences mentioned in Corollaries \ref{coro_rec1}, \ref{coro_rec2} but not that of Corollary \ref{coro_rec3}, as the OQRW is not aperiodic. The sequences $\mathbb{P}(Q_n=1)$ and $\mathfrak M^n(\rho,1)$ exhibit periodic behavior, in a way that is reminiscent of periodic (classical) Markov chains. \begin{figure*}[H] \hspace{-2cm}\includegraphics[width=1.4\textwidth]{nontraj_V4.pdf} \caption{OQRW $\mathfrak{M}_{(4)}$, data without measurements} \label{V4nonmeasurement} \end{figure*} \begin{figure*}[H] \hspace{-2cm}\includegraphics[width=1.4\textwidth]{traj_V4.pdf} \caption{OQRW $\mathfrak{M}_{(4)}$, data with measurements} \label{V4measurement} \end{figure*} \smallskip \noindent\textbf{Open quantum random walk $\mathfrak{M}_{(4,\varepsilon)}$ for $\varepsilon=0.05$} We obtain numerically the data shown in Figures \ref{V4pertnonmeasurement} and \ref{V4pertmeasurement}. In addition to the convergences mentioned in Corollaries \ref{coro_rec1}, \ref{coro_rec2} we recover those of Corollary~\ref{coro_rec3}, as we have perturbed the OQRW into an aperiodic one.
\begin{figure*}[H] \hspace{-2cm}\includegraphics[width=1.4\textwidth]{nontraj_perturbedV4.pdf} \caption{Perturbed OQRW $\mathfrak{M}_{(4,0.05)}$, data without measurements} \label{V4pertnonmeasurement} \end{figure*} \begin{figure*}[H] \hspace{-2cm}\includegraphics[width=1.4\textwidth]{traj_perturbedV4.pdf} \caption{Perturbed OQRW $\mathfrak{M}_{(4,0.05)}$, data with measurements} \label{V4pertmeasurement} \end{figure*} \begin{remark} The data we obtained show that aperiodicity does not imply a convergence of $\rho_n$, even when we condition it on a measurement of $X_n$ at a given site: only convergence in the mean holds. \end{remark} \end{example} \begin{example}\label{example_Dnotconstant} We use $V_\infty = \mathbb Z$, $\mathfrak h=\mathbb C^2$ as in the previous example and change the transition matrices to $$ L_+=p\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right), \qquad L_-=q\left( \begin{array}{cc} 1 & 0 \\ 0 & e^{i\alpha} \end{array}\right) $$ with $\alpha\in[0,2\pi)$, $p,q\in{\mathbb C}\setminus \{0\}$, $|p|^2+|q|^2=1$. This OQRW is irreducible when $\alpha\neq 0, \pi$. We shall denote by $\{e_0,e_1\}$ the orthonormal basis of $\mathfrak h$ with respect to which we have written the matrix representation of the operators $L_+,L_-$. Then it is easy to verify irreducibility by Proposition \ref{prop_ergodicityOQRW}: if we consider the non-zero vector $v=\begin{pmatrix}a\\ b\end{pmatrix}$ in $\mathfrak h_i$, we have that, for all $n>0$, \begin{eqnarray*} &&{\rm span}\{L_+^n v, L_+^{n+1}L_- v, L_+^{n}L_- L_+ v\} \\ &&=\left\{\begin{array}{ll} {\rm span}\{\begin{pmatrix}a\\ b\end{pmatrix}, \begin{pmatrix}e^{i\alpha}b\\ a\end{pmatrix} , \begin{pmatrix}b\\ e^{i\alpha} a\end{pmatrix} \} & \quad n \, {\rm even}, \\ {\rm span}\{\begin{pmatrix} b\\ a\end{pmatrix} , \begin{pmatrix} e^{i\alpha} a \\ b \end{pmatrix} , \begin{pmatrix}a\\ e^{i\alpha} b\end{pmatrix} \} & \quad n \, {\rm odd}, \end{array}\right.
\end{eqnarray*} In both cases, this span coincides with $\mathfrak h_{i+n}$. We can proceed similarly for $n\le 0$. The period is $4$: we can choose the resolution of the identity \[P_k = \sum_{i\in \mathbb{Z}} |e_0\rangle \langle e_0|\otimes |4\,i+k\rangle \langle 4\,i+k| +\sum_{i\in \mathbb{Z}} |e_1\rangle \langle e_1|\otimes |4\,i+k+2\rangle \langle 4\,i+k+2|,\] for $k=0,\ldots,3$. Obviously, from the properties of this OQRW and Theorem~\ref{theo_caracaperiodicite}, the period cannot be greater than $4$. So we can conclude that the period is exactly $4$. Finally, notice that the quantity $D(i,x)$ introduced in Theorem \ref{theo_caracaperiodicite} is not the same for all vectors: $D(i,e_0)=D(i,e_1)=4$ but, if we call $x=\begin{pmatrix}1\\ e^{i\alpha/2} \end{pmatrix}$, then $x$ is an eigenvector for $L_-L_+$ and so the set of lengths $\ell$ introduced in the definition of~$D(i,x)$ contains $2$. Since all those lengths are clearly even, $D(i,x)=2$. \end{example} \begin{example}\label{example-nonuniquedec} We consider an OQRW $\mathfrak M$ as introduced in Example \ref{ex_3}. Then~$\mathfrak M$ does not have a unique decomposition into irreducible components. Indeed, it is easy to see that the $\mathfrak M$-invariant states are all the states of the form $$ \rho = \rho_1 \otimes |1\rangle\langle 1| + B\rho_1 B \otimes |2\rangle\langle2| $$ for any $2\times 2$ matrix $\rho_1$ such that $2\rho_1$ is a state in $M_2(\mathbb C)$. So $\mathcal R=\mathcal H$ for this $\mathfrak M$, and the minimal enclosures are exactly all the enclosures generated by vectors of the form $x\otimes |1\rangle$, for $x =\begin{pmatrix}a\\b\end{pmatrix}$ in $\mathbb C^2$, $$ \mbox{Enc}(x\otimes |1\rangle) =\mbox{Vect}\{\begin{pmatrix}a\\b\end{pmatrix} \otimes |1\rangle, \begin{pmatrix}b\\a\end{pmatrix} \otimes |2\rangle \}. $$ Therefore, the decomposition of $\mathcal R =\mathcal H$ into a sum of minimal enclosures is non-unique.
To illustrate Theorem \ref{theo_invariantstates}, consider an invariant state $\rho$; from the above discussion, it is of the form \[\rho = \frac12 \begin{pmatrix}t & s\\ \overline{s}& 1-t\end{pmatrix}\otimes\ketbra 11 + \frac12 \begin{pmatrix}1-t & \overline{s}\\ {s}& t\end{pmatrix}\otimes\ketbra 22 \] with $t\in[0,1]$, $|s|^2\leq t(1-t)$. Writing this $\rho$ in the decomposition \[\mathcal H = \mathrm{Enc}(\begin{pmatrix}1\\0\end{pmatrix}\otimes \vec 1)\oplus \mathrm{Enc}(\begin{pmatrix}0\\1\end{pmatrix}\otimes \vec 1),\] which is a possible choice of decomposition \eqref{eq_finaldec}, we obtain \[\rho = \frac12\,\left(\begin{array}{cccc}t & \hphantom{t}0\hphantom{t} & s & 0 \\ \hphantom{t}0\hphantom{t} & t & 0 & s \\ \overline{s} & 0 & 1\!-\!t & 0 \\ 0 & \overline s & 0 & 1\!-\!t\end{array}\right).\] In agreement with Theorem \ref{theo_invariantstates}, this $\rho$ is of the form $t\, \rho^{\mathrm{inv}}_1 + (1-t)\, \rho^{\mathrm{inv}}_2 + s\ \eta_{1,2} + \overline{s} \ \eta_{2,1}$, where $\rho^{\mathrm{inv}}_1$ and $\rho^{\mathrm{inv}}_2$ are invariant states with support equal to $\mathrm{Enc}(\begin{pmatrix}1\\0\end{pmatrix}\otimes \vec 1)$, $\mathrm{Enc}(\begin{pmatrix}0\\1\end{pmatrix}\otimes \vec 1)$ respectively. In addition, the off-diagonal blocks $\eta_{1,2}$ and $\eta_{2,1}$ are also~$\mathfrak{M}$-invariant, and with $Q$ the partial isometry of the form \[Q=\begin{pmatrix}0&0&0&0 \\ 0&0&0&0 \\ 1&0&0&0 \\ 0&1&0&0\end{pmatrix}\] we see that $\rho^{\mathrm{inv}}_2=Q\rho^{\mathrm{inv}}_1 Q^* $ and $\eta_{1,2}$ is proportional to $Q^*\rho^{\mathrm{inv}}_2=\rho^{\mathrm{inv}}_1 Q^*$. \end{example}
\section{Introduction} Thermodynamics emerged as a theory to describe the properties of systems consisting of macroscopically many degrees of freedom. While it served from the very beginning as a practical tool to quantify the performance of heat engines, it also initiated substantial efforts in fundamental research to pave the road for fields like statistical and non-equilibrium physics. In recent years, triggered by the on-going progress in miniaturizing devices down to the nanoscale, the crucial question of whether, and if so, to what extent macroscopic thermodynamics is influenced by quantum mechanics has led to a flurry of literature and the appearance of the new field of quantum thermodynamics. Of particular relevance as a pre-requisite for the design of actual devices is the understanding of heat transfer and heat rectification. In macroscopic structures heat transport is typically characterized by normal diffusion such that the heat conductivity is independent of the size of the probe, leading to the conventional picture of \emph{local} thermal equilibrium and Fourier's law. The requirements on microscopic models to support this type of normal heat flow have been the subject of controversial debate \cite{Lepri1998, Lepri2003, Bonetto2004, Berman2005, Liu2014, Li2015, Dhar2008, Li_Prosen2004} with dimensionality, disorder, and nonlinearities as potential ingredients. In the context of mesoscopic physics, it is even less clear under which circumstances the conventional scenario applies. The quantum state of the transport medium may be non-thermal, while energies extracted from or added to thermal baths can still be identified as heat. An extreme case of this type is purely ballistic transport between reservoirs \cite{Nazarov}. Thermal rectification occurs when the absolute value of the heat flux through a two-terminal device changes after the temperature difference between the terminals is reversed.
Nonlinearities in combination with spatial symmetry breaking are pivotal conditions for the occurrence of rectification for a microscopic modeling \cite{Segal2006,Wu2009}. In a macroscopic, phenomenological setting, a thermal conductivity which depends on space and temperature has been found to be crucial \cite{Peyrard2006}. Rectification can be used in thermal diodes or heat valves which have been proposed in classical \cite{Terraneo2002, Li2004, Hu2006, Casati2007, Liu2010, Pereira2013, Bagchi2017, Romero2017, Kaushik2018} as well as in quantum systems \cite{Segal2005, Segal2005_2, Segal2006, Wu2009, Ruokola2009, Zhang2009, Sanchez2015}. As an alternative to nonlinearities, a coupling of the system to self-consistent reservoirs which guide the constituents of a chain to local equilibrium have been studied \cite{Pereira2011, Pereira2010_2, Bandyopadhyay2011, Pereira2010, Romero_Bastida2014, Pereira2015, Mendonca2015, Guimaraes2015}. Experimentally, thermal rectification has been realized in solid state systems following the ideas of Peyrard \cite{Kobayashi2009, Sawaki2011, Kobayashi2012}. Other implementations for rectification include carbon nanotubes \cite{Chang2006}, trapped ions in optical lattices \cite{Pruttivarasin2011}, hybrid systems where normal metals are tunnel-coupled to superconductors \cite{Martinez-Perez2015} and, recently, superconducting quantum bits \cite{Ronzani2018}. Particularly superconducting circuits allow for a well-controlled modulation of nonlinearities \cite{Ronzani2018, Shalibo2012, Shalibo2013} and may thus serve as promising candidates for the realization of heat engines operating in the deep quantum regime. In fact, situations where heat transfer is beneficial for the performance of devices have been discussed for the fast initialization of quantum bits by cooling \cite{Tuorila2017} as well as for properties of heat valves, thermal memories, switches, and transistors \cite{Sanchez2015, Li2012, Vannucci2015, Joulain2016}. 
From the theory point of view, the description of quantum heat transfer is a challenging issue. As part of the broad field of open quantum systems, it has become clear in recent years that the underlying processes are extremely sensitive to a consistent treatment of the coupling between system and thermal reservoirs. Lindblad master equations based on \emph{local} dissipators may even violate the second law \cite{Levy2014, Stockburger2017}. The `orthodox' approach to Lindblad dissipators suffers from considerable complexity for systems lacking symmetry since it involves all allowed transitions between energy eigenstates. Its perturbative nature typically limits it to systems with weak thermal contact. On the other hand, current experimental activities call for a systematic theoretical approach to quantum heat transfer valid from weak up to strong system-reservoir couplings and down to very low temperatures. The goal of this work is to present such a formulation that, in addition, stands out for its numerical efficiency. While we focus here on the experimentally relevant case of one-dimensional chains of anharmonic oscillators, generalizations to higher dimensions, more complex geometries, external driving, or set-ups such as heat valves and heat engines are within the scope of the method. The one-dimensional character of the system we study is most clearly maintained by attaching reservoirs at the chain ends only (no scattering to or from transverse channels). This approach originates from the description of non-equilibrium quantum dynamics through a stochastic Liouville-von Neumann equation (SLN) \cite{Stockburger2002}, which is an \emph{exact} representation of the formally exact Feynman-Vernon path integral formulation for dissipative quantum systems in terms of a stochastic equation of motion \cite{Stockburger1998, Stockburger2002, Stockburger2003, FeynmanVernon1963}.
For reservoirs with ohmic spectral densities and large bandwidths, the SLN can further be simplified to a mixed type of dynamics governed by a stochastic Liouville-von Neumann equation with dissipation (SLED) \cite{Stockburger1999, Gardiner1988}. It has then proven to be particularly powerful to describe systems with continuous degrees of freedom and in presence of external time-dependent driving \cite{Schmidt2011, Schmidt2013}. However, the main challenge to practically implement the SLED is the degrading signal to noise ratio for increasing simulation time. Here, we address this problem by formulating the quantum dynamics in terms of hierarchies of cumulants which are truncated properly. An obvious additional benefit is a vastly improved scaling with system size. As we explicitly demonstrate, this treatment provides a very versatile tool to analyze quantum heat transfer in steady state across single or chains of anharmonic oscillators with weak to moderate anharmonicities. A comparison with benchmark data from a direct sampling of the full SLED proves the accuracy of the approach for a broad range of values for the anharmonicity parameter. It thus allows to cover within one formulation domains in parameters that are not accessible by alternative approaches \cite{Segal2005, Segal2005_2}. Previous work representing quantum states through cumulants seems mostly limited to closed systems \cite{Prezhdo2000, Prezhdo2002, Pereverzev2008, Shigeta2006, Pereverzev2008_2}. However, there is a conceptually related approach including reservoirs \cite{Ruokola2009}, with a focus on smaller structures though. In a classical context, cumulant expansions of fairly high order have been used \cite{Bricmont2007, Bricmont2007_2}. The paper is organized as follows: In Section~\ref{sec:stoch_dyn}, we introduce the SLN which represents the basis for the cumulant truncation scheme presented in Section~\ref{sec:cum_trunc}. 
First applications and comparison with benchmark results are discussed in Section~\ref{sec:state_osci_chain}. The main findings for heat rectification are then presented in Sections~\ref{sec:rect_1HO} and \ref{sec:rect_chains}, including an analysis of the physical mechanisms that determine the occurrence of rectification. The impact of disorder is considered in Section~\ref{sec:rect_disordered} before a summary and outlook are given in Section~\ref{sec:sum_out}. \section{Non-perturbative reduced Dynamics} \label{sec:stoch_dyn} The description of heat transfer is a delicate and challenging issue. In the sequel we develop an approach which is based on a formally exact formulation of open quantum dynamics derived within a system + bath model. The total Hamiltonian consists of a system Hamiltonian $H_s$, a reservoir Hamiltonian $H_R$ (for the moment we consider a single reservoir), and a system-reservoir coupling $\opind{H}{I} = -qX$, where the latter captures a bilinear coupling of a system coordinate $q$ and a collective bath coordinate $X$. Due to the macroscopically many degrees of freedom of the reservoir, the fluctuations of the latter in thermal equilibrium can be assumed to be Gaussian.\newline Then, as shown previously, the formally exact path integral representation of the dynamics of the reduced density $\rho(t)={\rm Tr}_{\rm R}\{W(t)\}$, with $W(t)$ being the time evolved density operator in full Hilbert space, has an equivalent representation in terms of a stochastic Liouville-von Neumann equation with dissipation (SLED) \cite{Stockburger1999}, i.e., \begin{equation} \frac{\rmd}{\rmd t}\rho_\xi = \mathcal{L} \rho_\xi =-\frac{\mathrm{i}}{\hbar}[H_s,\rho_\xi] + \frac{\mathrm{i}}{\hbar}\xi(t)[q,\rho_\xi]-\frac{\mathrm{i}}{\hbar}\frac{\gamma}{2}[q,\{p,\rho_\xi\}]\;.
\label{eq:SLED} \end{equation} Here, $\xi(t)$ denotes a c-valued noisy driving force whose auto-correlation reconstructs the real part of the bath quantum correlation $L(t-t')$% \begin{equation} \langle\xi (t)\xi (t')\rangle = \Re L(t-t')\;, \label{eq:noisecor} \end{equation} where \begin{equation} L(t) = \frac{\hbar}{\pi}\int_0^\infty\rmd \omega J(\omega)[\coth\Big(\frac{\beta\hbar\omega}{2}\Big)\cos(\omega t) - \mathrm{i}\sin(\omega t)]\, \label{eq:bath_corr} \end{equation} with inverse temperature $\beta=1/(k_\mathrm{B} T) $ and the spectral density $J(\omega)$. The latter we assume to be ohmic with a large cut-off frequency $\omega_c$ and coupling constant $\gamma$ acting on a system with mass $m$, i.e., \begin{equation} J(\omega)=m\frac{\gamma\omega}{[1+(\omega/\omega_c)^2]^2}\, . \label{eq:spectralden} \end{equation} In this regime (large cut-off), the imaginary part of the correlation function Im$L(t)$ collapses to a $\delta$-function and is accounted for by the $\gamma$-dependent damping term in (\ref{eq:SLED}). We note in passing that Gardiner has identified this equation as the adjoint of a quantum Langevin equation \cite{Gardiner1988, Ford1965}. The random density $\rho_\xi$ propagated according to (\ref{eq:SLED}) by itself lacks a physical interpretation; only the expectation value $\rho=\langle \rho_\xi\rangle_\xi$ represents the physical reduced density of the system. The SLED is particularly suited to capture the dynamics of open quantum systems with continuous degrees of freedom and has previously been applied in various contexts \cite{Stockburger2017, Wiedmann2016}.
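In practice, realizations of $\xi(t)$ with correlation (\ref{eq:noisecor}) can be generated by shaping white noise in Fourier space with the spectral weight $S(\omega)=\hbar J(|\omega|)\coth(\beta\hbar|\omega|/2)$, the Fourier transform of $\Re L(t)$. The following sketch uses our own discretization choices (it is not the implementation employed in this work) for the ohmic density (\ref{eq:spectralden}), with $\hbar=k_\mathrm{B}=1$:

```python
import numpy as np

def sample_xi(n, dt, gamma, beta, omega_c, m=1.0, hbar=1.0, seed=None):
    """One realization of the Gaussian noise xi(t): white noise is filtered
    in Fourier space by S(w) = hbar*J(|w|)*coth(beta*hbar*|w|/2), the power
    spectrum of Re L(t)."""
    rng = np.random.default_rng(seed)
    w = 2 * np.pi * np.fft.fftfreq(n, dt)              # signed frequencies
    J = m * gamma * np.abs(w) / (1 + (w / omega_c) ** 2) ** 2
    with np.errstate(invalid="ignore"):
        S = hbar * J / np.tanh(beta * hbar * np.abs(w) / 2)
    S[0] = 2 * m * gamma / beta                        # classical w -> 0 limit
    white = rng.normal(size=n)
    return np.fft.ifft(np.sqrt(S / dt) * np.fft.fft(white)).real

xi = sample_xi(n=4096, dt=0.01, gamma=1.0, beta=1.0, omega_c=10.0, seed=2)
```

Observables are then obtained by propagating the dynamics once per realization of $\xi$ and averaging over many such samples.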
In the sequel, we follow a somewhat different route by exploiting the fact that the multiplicative noise $\xi$ in the SLED turns into additive noise if an adjoint equation governed by $\mathcal{L}^\dagger$ \cite{Breuer} for the dynamics of Heisenberg operators $A$ is used, i.e., \begin{eqnarray} \frac{\rmd}{\rmd t} A_\xi = & \frac{\rmi}{\hbar}[H_s,A_\xi] - \frac{\rmi}{\hbar}\xi(t)[q,A_\xi] + \frac{\rmi}{\hbar}\frac{\gamma}{2}\{p,[q,A_\xi]\}\,.\phantom{.....} \label{eq:adjoint_sled} \end{eqnarray} Expectation values are then obtained from a quantum mechanical average $\langle \cdot\rangle_\mathrm{tr} = \mathrm{tr}(\cdot\rho_\xi)$ and a subsequent noise average $\langle \cdot \rangle_\xi$, i.e., \begin{equation} \langle\hspace{-2.4pt}\langle A\rangle\hspace{-2.4pt}\rangle = \langle\hspace{-2.4pt}\langle A\rangle_\mathrm{tr}\rangle_\xi\;. \label{eq:doubleav_mom} \end{equation} Now, to avoid an explicit noise sampling with its inherent degradation of the signal to noise ratio for longer simulation times, we do not explicitly work with (\ref{eq:adjoint_sled}) but rather derive from it sets of equations of motion for expectation values \cite{Schmidt2011, Schmidt2013}. This way, one arrives at a very efficient scheme to construct the open system dynamics for ensembles of anharmonic oscillators also in regimes which are not accessible by perturbative methods for open quantum systems.
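As an elementary illustration of this strategy, inserting $A_\xi=q$ and $A_\xi=p$ into (\ref{eq:adjoint_sled}) gives, for a harmonic potential, a closed pair of equations for the first trace moments, $\rmd\langle q\rangle_\mathrm{tr}/\rmd t=\langle p\rangle_\mathrm{tr}/m$ and $\rmd\langle p\rangle_\mathrm{tr}/\rmd t=-m\omega^2\langle q\rangle_\mathrm{tr}-\gamma\langle p\rangle_\mathrm{tr}+\xi(t)$, which can be integrated per noise realization and averaged afterwards. A sketch with simple Euler stepping (white noise stands in for the correlated $\xi$; all parameter values are illustrative):

```python
import numpy as np

def first_moments(xi, dt, m=1.0, omega=1.0, gamma=0.2, q0=1.0, p0=0.0):
    """Euler integration of the first-moment equations for one noise
    realization xi: d<q>/dt = <p>/m, d<p>/dt = -m w^2 <q> - g <p> + xi."""
    q, p = q0, p0
    qs = np.empty(len(xi))
    for k, f in enumerate(xi):
        q, p = q + dt * p / m, p + dt * (-m * omega**2 * q - gamma * p + f)
        qs[k] = q
    return qs

dt, n_steps, runs = 0.01, 3000, 100
rng = np.random.default_rng(1)
# Averaging over realizations removes the zero-mean noise, so the mean
# trajectory relaxes like the damped deterministic oscillator.
avg_q = np.mean([first_moments(rng.normal(0.0, 1.0, n_steps), dt)
                 for _ in range(runs)], axis=0)
free_q = first_moments(np.zeros(n_steps), dt)
```

For anharmonic potentials the first moments couple to higher-order cumulants, which is precisely where the truncation scheme of the next section enters.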
A central element of this formulation is the covariance of two operators $A$ and $B$ which can be transformed to \cite{Stockburger2017, Motz2017} \begin{eqnarray} \mathrm{Cov}(A,B) & = & \textstyle{\frac{1}{2}}\langle\hspace{-2.4pt}\langle AB+BA\rangle\hspace{-2.4pt}\rangle - \langle\hspace{-2.4pt}\langle A\rangle\hspace{-2.4pt}\rangle\langle\hspace{-2.4pt}\langle B\rangle\hspace{-2.4pt}\rangle\nonumber\\ & = & \textstyle{\frac{1}{2}}\langle\hspace{-2.4pt}\langle AB+ BA\rangle\hspace{-2.4pt}\rangle - \langle\hspace{-2.4pt}\langle A\opind{\rangle}{tr}\langle B\opind{\rangle}{tr}\rangle_\xi \nonumber\\ & + &\langle\hspace{-2.4pt}\langle A\opind{\rangle}{tr}\langle B\opind{\rangle}{tr}\rangle_\xi - \langle\hspace{-2.4pt}\langle A\rangle\hspace{-2.4pt}\rangle\langle\hspace{-2.4pt}\langle B\rangle\hspace{-2.4pt}\rangle\nonumber\\ & = &\langle\mathrm{Cov}_\mathrm{tr}(A,B)\rangle_\xi + \mathrm{Cov}_\xi(\langle A\opind{\rangle}{tr},\langle B\opind{\rangle}{tr})\, \label{eq:covsplit} \end{eqnarray} and thus provides a separation into two covariances, one with respect to the trace average and one with respect to the noise average. Hence, choosing $A$ and $B$ as elements of the operator-valued vector $\vec{\sigma}=(q,p)^t$ carrying position and momentum operators, the corresponding covariance matrix $\mathbf{\Sigma}$ takes the form \begin{eqnarray} \mathbf{\Sigma} =& \upperind{\mathbf{\Sigma}}{mtr} + \upperind{\mathbf{\Sigma}}{msc}\;,\\ \label{eq:Sigm_decomp} \upperind{\mathbf{\Sigma}}{mtr}_{jk} =& \langle\mathrm{Cov}_\mathrm{tr}(\sigma_j,\sigma_k)\rangle_\xi\;,\\ \upperind{\mathbf{\Sigma}}{msc}_{jk} =& \mathrm{Cov}_\xi(\langle \sigma_j\opind{\rangle}{tr},\langle \sigma_k\opind{\rangle}{tr})\,. \label{eq:Sigm_parts} \end{eqnarray} Technically, this decomposition requires one to carefully distinguish between \textit{trace moments} and \textit{trace cumulants}.
Namely, the elements of $\mathbf{\Sigma}$ contain noise expectation values of \textit{trace moments}, while those of $\upperind{\mathbf{\Sigma}}{mtr}$ contain noise expectation values of \textit{trace covariances}. For general operators $A, B$ we thus introduce the compact notation for noise expectation values of trace covariances $\langle\mathrm{Cov}_\mathrm{tr}(A,B)\rangle_\xi$: \begin{equation} \langle\hspace{-2.4pt}\langle AB\rangle\hspace{-2.4pt}\rangle_c = \langle\hspace{-2.4pt}\langle AB\rangle_{\mathrm{tr},c}\rangle_\xi\,, \label{eq:doubleav_cum} \end{equation} with $\langle AB\rangle_{\mathrm{tr},c}=\langle AB\rangle_\mathrm{tr} -\langle A\rangle_\mathrm{tr}\langle B\rangle_\mathrm{tr}$. We also emphasize the equality of \emph{first} moments and cumulants, \begin{equation} \langle A\rangle_{\mathrm{tr},c} = \langle A\rangle_\mathrm{tr}\, \ \ , \ \ \langle\hspace{-2.4pt}\langle AB\rangle\hspace{-2.4pt}\rangle_c=\langle\hspace{-2.4pt}\langle AB\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\, . \label{eq:doubleav} \end{equation} With these tools at hand, we can now proceed to develop the approach for nonlinear oscillators in detail. \section{Cumulant formulation of open quantum dynamics for nonlinear oscillators} \label{sec:cum_trunc} \begin{figure} \begin{center} \includegraphics[width=7.5cm]{potential_anham.pdf} \end{center} \caption{Schematic illustration of the considered anharmonic potential $V(q) = \frac{1}{2}m\omega^2q^2+\frac{1}{4}m\kappa q^4$ with the non-equidistant level spacings for $\kappa>0$ (blue) and $\kappa<0$ (red). For $\kappa<0$, only metastable states in the vicinity of $q=0$ are considered.
The tunneling rates out of the region of the local minimum are assumed to be small, and only sufficiently low temperatures which do not induce fluctuations beyond the barrier are considered.} \label{fig:pot_anharm} \end{figure} We start by considering a single anharmonic oscillator to arrive at a formulation that can then easily be generalized to chains of oscillators. As a paradigmatic model for a nonlinear oscillator of mass $m$ we choose \begin{equation} H_s = \frac{p^2}{2m}+\frac{1}{2}m\omega^2q^2+\frac{1}{4}m\kappa q^4\,, \label{eq:Hamonean} \end{equation} with fundamental frequency $\omega$ and anharmonicity parameter $\kappa$, see figure~\ref{fig:pot_anharm}, which can be either positive (stiffer mode) or negative (softer mode). Our main interest is in weakly anharmonic systems, i.e., even for negative $\kappa$, where the potential is not bounded from below, resonances are long-lived and may be treated approximately as eigenstates of the Hamiltonian. This is valid if thermalization through external reservoirs is fast compared to the resonance lifetime. While for a purely linear system ($\kappa=0$) a formulation in terms of cumulants leads to closed equations of motion for the first and second order cumulants \cite{Schmidt2011, Schmidt2013}, this is no longer the case for nonlinear oscillators. The challenge is thus to implement a systematic procedure for the truncation of higher order cumulants which on the one hand provides an accurate description for weak to moderate anharmonicity parameters and on the other hand leads to an efficient numerical scheme also for ensembles of those oscillators. We emphasize that we are primarily interested in quantum heat transfer in steady state situations and not in the full wealth of nonlinear dissipative quantum dynamics.
Accordingly, in a nutshell, a formulation which satisfies both criteria factorizes all higher than second order moments (Wick theorem) and captures anharmonicities effectively by state dependent frequencies which must be determined self-consistently. This perturbative treatment of nonlinearities for quantum oscillators shares some similarities with the one for classical systems \cite{Landau} but turns out to be much more involved due to the highly complex equation of motion (\ref{eq:adjoint_sled}), as we show now. For systems where the relevant dynamics occurs around a single potential minimum, it is convenient to set the first moments of operators $A$ equal to zero and use \begin{equation} \langle\hspace{-2.4pt}\langle A\rangle\hspace{-2.4pt}\rangle = 0 \label{eq:doubleav_zero} \end{equation} henceforth. Then, starting from (\ref{eq:adjoint_sled}) one obtains two coupled equations for position and momentum according to \begin{eqnarray} \frac{\rmd}{\rmd t}\langle q\rangle_{\mathrm{tr},c} &=\frac{1}{m}\langle p\rangle_{\mathrm{tr},c}\nonumber\\ \frac{\rmd}{\rmd t}\langle p\rangle_{\mathrm{tr},c} &=-m\omega^2\langle q\rangle_{\mathrm{tr},c}-m\kappa\langle q^3\rangle_\mathrm{tr} -\gamma\langle p\rangle_{\mathrm{tr},c} +\xi(t)\,. \label{eq:first_moments_ex} \end{eqnarray} Here, the third trace moment can be expressed as a linear combination of products of cumulants \begin{equation} \langle q^3\rangle_\mathrm{tr} = \langle q^3\rangle_{\mathrm{tr},c} + 3\langle q^2\rangle_{\mathrm{tr},c}\langle q\rangle_{\mathrm{tr},c} +\langle q\rangle_{\mathrm{tr},c}^3\,.
\label{eq:thirdmom_cum} \end{equation} This kind of transformation represents a systematic separation and summation over all possible subsets of trace averaged products, for details see~\cite{Kubo1962}.\newline \textit{a) Truncation of trace and noise cumulants}\newline A straightforward approach to treat higher than second order moments is to assume the approximate validity of Wick's theorem and neglect all higher than second order cumulants so that (\ref{eq:thirdmom_cum}) reduces to \begin{equation} \langle q^3\rangle_\mathrm{tr} \approx 3\langle q^2\rangle_{\mathrm{tr},c}\langle q\rangle_{\mathrm{tr},c} +\langle q\rangle_{\mathrm{tr},c}^3\, . \label{eq:thirdmom_cumtrunc} \end{equation} This procedure does not immediately lead to a closure of (\ref{eq:first_moments_ex}): the parametric driving through $\langle q^2\rangle_{\mathrm{tr},c}$ couples these equations to those for the second cumulants, which we analyze later. As shown for the linear system, the covariance matrix consists of two parts, where the elements of one part are given by the noise averaged product $\langle\hspace{-2.4pt}\langle A\rangle_{\mathrm{tr},c}\langle B\rangle_{\mathrm{tr},c}\rangle_\xi$ \cite{Motz2017}. To handle these terms, we derive the equations of motion for these products with $\mathcal{L}^\dagger$ and turn the noise moments into \textit{noise cumulants}, which is trivial for all linear contributions since $\langle\hspace{-2.4pt}\langle A\rangle_{\mathrm{tr},c}\rangle_{\xi,c}=\langle\hspace{-2.4pt}\langle A\rangle\hspace{-2.4pt}\rangle=0$ and also for the system-bath correlation which contains the quantum noise whose expectation value is zero.
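The factorization (\ref{eq:thirdmom_cumtrunc}) is exact for Gaussian statistics, where all cumulants beyond second order vanish. A quick numerical sanity check with a classical Gaussian random variable standing in for $q$ (mean and variance are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
mean, var = 0.7, 1.3                      # illustrative first and second cumulant
q = rng.normal(mean, np.sqrt(var), size=2_000_000)

m3 = np.mean(q ** 3)                      # sampled third moment <q^3>
m3_trunc = 3.0 * var * mean + mean ** 3   # 3 <q^2>_c <q> + <q>^3, third cumulant dropped
```

For the quantum problem the same bookkeeping applies to trace cumulants; the truncation becomes an approximation once the state develops non-Gaussian features.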
The resulting equations of motion then read \begin{eqnarray} \frac{\rmd}{\rmd t}\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle q\rangle_{\mathrm{tr},c}\rangle_{\xi,c} =&\frac{2}{m}\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\nonumber\\ \frac{\rmd}{\rmd t}\langle\hspace{-2.4pt}\langle p\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi,c} =&-2m\omega^2\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\nonumber\\ &-2m\kappa[3\langle\hspace{-2.4pt}\langle q^2\rangle_{\mathrm{tr},c}\langle q\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi} +\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}^3\langle p\rangle_{\mathrm{tr},c}\rangle_\xi]\nonumber\\ &-2\gamma\langle\hspace{-2.4pt}\langle p\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi,c} +2\langle\xi(t)\langle p\rangle_\mathrm{tr}\rangle_{\xi,c}\nonumber\\ \frac{\rmd}{\rmd t}\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi,c} =&\frac{1}{m}\langle\hspace{-2.4pt}\langle p\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi,c}-m\omega^2\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle q\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\nonumber\\ &-m\kappa[3\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}^2\langle q^2\rangle_{\mathrm{tr},c}\rangle_\xi+\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}^4\rangle_\xi]\nonumber\\ &-\gamma\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle p\rangle_{\mathrm{tr},c}\rangle_{\xi,c} +\langle\xi(t)\langle q\rangle_\mathrm{tr}\rangle_{\xi,c}\,. \label{eq:sigma_stoch_tr} \end{eqnarray} In analogy to the equations for the trace cumulants, the contributions which cannot immediately be turned from moments into cumulants are those which enter through the nonlinearity of the system.
The second and third equations in (\ref{eq:sigma_stoch_tr}) contain higher order moments with respect to the noise average. The trace cumulants in these moments constitute c-numbers, and hence the order of the higher order noise moments is determined by the sum of the exponents outside of the trace averages $\langle .\rangle_{\mathrm{tr},c}$. Thus, $\langle\hspace{-2.4pt}\langle q^2\rangle_{\mathrm{tr},c}\langle A\rangle_{\mathrm{tr},c}\langle B\rangle_{\mathrm{tr},c}\rangle_{\xi}$ ($A,B$=$q,p$) constitutes a third order moment with respect to the noise and $\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}^3\langle A\rangle_{\mathrm{tr},c}\rangle_\xi$ a fourth order moment. Expressed as linear combinations of cumulants, these moments read \begin{equation} \langle\hspace{-2.4pt}\langle q^2\rangle_{\mathrm{tr},c}\langle A\rangle_{\mathrm{tr},c}\langle B\rangle_{\mathrm{tr},c}\rangle_{\xi}=\, \langle\hspace{-2.4pt}\langle q^2\rangle_{\mathrm{tr},c}\langle A\rangle_{\mathrm{tr},c}\langle B\rangle_{\mathrm{tr},c}\rangle_{\xi,c}+\langle\hspace{-2.4pt}\langle A\rangle_{\mathrm{tr},c}\langle B\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_c\nonumber\,\\ \label{eq:full_noise_cums_1} \end{equation} and \begin{equation} \langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}^3\langle A\rangle_{\mathrm{tr},c}\rangle_\xi =\, \langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}^3\langle A\rangle_{\mathrm{tr},c}\rangle_{\xi,c} +3\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle A\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle q\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\,, \nonumber\\ \label{eq:full_noise_cums_2} \end{equation} where $\langle\hspace{-2.4pt}\langle A\rangle\hspace{-2.4pt}\rangle_c=0$ has already been used.
Setting the third- and fourth-order cumulants to zero gives \begin{equation} \langle\hspace{-2.4pt}\langle q^2\rangle_{\mathrm{tr},c}\langle A\rangle_{\mathrm{tr},c}\langle B\rangle_{\mathrm{tr},c}\rangle_{\xi}\approx\,\langle\hspace{-2.4pt}\langle A\rangle_{\mathrm{tr},c}\langle B\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_c=\langle\hspace{-2.4pt}\langle A\rangle_\mathrm{tr}\langle B\rangle_\mathrm{tr}\rangle_\xi\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_c \label{eq:approx_noise_cums1} \end{equation} and \begin{eqnarray} \langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}^3\langle A\rangle_{\mathrm{tr},c}\rangle_\xi &\approx 3\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle A\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\langle\hspace{-2.4pt}\langle q\rangle_{\mathrm{tr},c}\langle q\rangle_{\mathrm{tr},c}\rangle_{\xi,c}\nonumber\\ &=3\langle\hspace{-2.4pt}\langle q\rangle_\mathrm{tr}\langle A\rangle_\mathrm{tr}\rangle_\xi\langle\hspace{-2.4pt}\langle q\rangle_\mathrm{tr}\langle q\rangle_\mathrm{tr}\rangle_\xi\,, \label{eq:approx_noise_cums2} \end{eqnarray} where the equalities in (\ref{eq:approx_noise_cums1}) and (\ref{eq:approx_noise_cums2}) follow directly from (\ref{eq:doubleav}). With (\ref{eq:approx_noise_cums1}), products of the noise expectation values of the \textit{first} and \textit{second} trace cumulants enter. The dynamics of the second trace cumulants is shown in \ref{app:2ndtrace_cumulants}, where an analysis of the steady-state shows that the fluctuations induced by the coupling to the first trace moments are exponentially suppressed and, therefore, these second trace cumulants have a stable fixed point at zero. This leads to a decoupling of the equations for the first and second trace cumulants and a reduction of the covariance matrix to $\mathbf{\Sigma} = \upperind{\mathbf{\Sigma}}{msc}$.
This allows us to simplify the notation by using $\langle\hspace{-2.4pt}\langle A\rangle_\mathrm{tr}\langle B\rangle_\mathrm{tr}\rangle_\xi=\langle\hspace{-2.4pt}\langle AB\rangle\hspace{-2.4pt}\rangle$ which is valid in the steady state. The elements of $\mathbf{\Sigma}$ are then determined by a system of differential equations \begin{eqnarray} \frac{\rmd}{\rmd t} \langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle =&\frac{2}{m}\langle\hspace{-2.4pt}\langle qp\rangle\hspace{-2.4pt}\rangle\nonumber\\\frac{\rmd}{\rmd t} \langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle =&-2m\tilde{\omega}^2\langle\hspace{-2.4pt}\langle qp\rangle\hspace{-2.4pt}\rangle-2\gamma\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle+2\langle\xi(t)\langle p\rangle_\mathrm{tr}\rangle_\xi\nonumber\\ \frac{\rmd}{\rmd t}\langle\hspace{-2.4pt}\langle qp\rangle\hspace{-2.4pt}\rangle =&\frac{1}{m}\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle-m\tilde{\omega}^2\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle-\gamma\langle\hspace{-2.4pt}\langle qp\rangle\hspace{-2.4pt}\rangle+\langle\xi(t)\langle q\rangle_\mathrm{tr}\rangle_\xi\,, \label{eq:stoch_mat} \end{eqnarray} which reduce to algebraic equations for the steady state if the left-hand sides are set to zero and the system-bath correlations $\langle\xi(t)\langle \sigma_j\rangle_\mathrm{tr}\rangle_\xi$ take their respective steady-state values. We introduced the effective frequency \begin{equation} \tilde{\omega}^2=\omega^2+3\kappa\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle\,. \label{eq:eff_freq} \end{equation} Since $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ is an element of the covariance matrix but also enters the effective frequency, our approach can be seen as a type of self-consistent mean-field formulation with some similarities to the one developed by Ruokola et al. in \cite{Ruokola2009}.
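The self-consistency implied by (\ref{eq:eff_freq}) is conveniently illustrated in the classical high-temperature limit, where the harmonic width is $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_0=k_\mathrm{B}T/(m\omega^2)$ (as derived later in the text). A minimal fixed-point iteration (all parameter values are illustrative assumptions):

```python
m, omega, kT = 1.0, 1.0, 0.5     # illustrative values, k_B = 1
kappa = 0.1                       # weak positive anharmonicity (stiffer mode)

def selfconsistent_q2(kappa, tol=1e-12, max_iter=1000):
    """Iterate <<q^2>> = kT / (m*(omega^2 + 3*kappa*<<q^2>>)) to convergence."""
    q2 = kT / (m * omega ** 2)    # harmonic solution as initial guess, as in the text
    for _ in range(max_iter):
        q2_new = kT / (m * (omega ** 2 + 3.0 * kappa * q2))
        if abs(q2_new - q2) < tol:
            return q2_new
        q2 = q2_new
    return q2

q2 = selfconsistent_q2(kappa)
residual = q2 * m * (omega ** 2 + 3.0 * kappa * q2) - kT   # self-consistency check
```

For $\kappa>0$ the converged width lies below the harmonic value, in line with the narrowing discussed in the classical-limit section; in the full quantum scheme the same loop runs over the Lyapunov equation for the covariance matrix instead of a scalar relation.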
We note in passing that state-dependent frequencies are also known from classical perturbative treatments of anharmonic oscillators \cite{Landau}. In the quantum model considered here, the expectation value of the amplitude is zero ($\langle\hspace{-2.4pt}\langle q\rangle\hspace{-2.4pt}\rangle =0$) but the width of the state $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ enters the effective frequency. The noise average leads to a temperature dependence of $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ and therefore to an effective frequency $\tilde{\omega}$ which depends on the temperature of the heat bath. In the case of oscillators interacting with two thermal reservoirs at different temperatures, the situation we consider below, the effective frequency depends not only on both of these temperatures but also on cross-correlations between system and reservoirs. A direct calculation of the dynamics of the full covariance matrix $\mathbf{\Sigma}$ reveals a dependence on higher order moments which are given by $\langle\hspace{-2.4pt}\langle q^4\rangle\hspace{-2.4pt}\rangle$ and $\Big\langle\hspace{-2.5pt}\Big\langle\frac{q^3p+pq^3}{2}\Big\rangle\hspace{-2.5pt}\Big\rangle$. The presented formalism represents a transformation of the moments contained in the covariance matrix into cumulants and an expansion with subsequent truncation of higher order cumulants. This procedure is equivalent to an approximation of the higher order moments given by \begin{eqnarray} &\langle\hspace{-2.4pt}\langle q^4\rangle\hspace{-2.4pt}\rangle \approx 3\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle^2\nonumber\\ &\Big\langle\hspace{-2.5pt}\Big\langle\frac{q^3p+pq^3}{2}\Big\rangle\hspace{-2.5pt}\Big\rangle \approx 0\,.
\label{eq:moment_approx} \end{eqnarray} The first equation is obviously the content of Wick's theorem, as is the second one if the system is in a steady state where $\Big\langle\hspace{-2.5pt}\Big\langle\frac{qp+pq}{2}\Big\rangle\hspace{-2.5pt}\Big\rangle=0$ holds. Nevertheless, the truncation scheme we present here provides a systematic approach that accounts for both the quantum and the thermal fluctuations. Therefore it represents an extension of previous schemes based on truncations of higher order moments or cumulants for closed systems \cite{Prezhdo2000, Prezhdo2002, Pereverzev2008, Shigeta2006, Pereverzev2008_2}. \bigskip \textit{b) System-bath correlation function}\newline For a compact treatment of the system-bath correlations and a subsequent extension to multipartite systems we treat the system as a linear one with effective frequency and use a matrix notation of the derived steady-state equations (\ref{eq:stoch_mat}) via a Lyapunov equation as in previous work \cite{Stockburger2016, Motz2017}: \begin{equation} \mathbf{M}\mathbf{\Sigma} + \mathbf{\Sigma}\mathbf{M}^\dagger + \mathbf{y}^\dagger + \mathbf{y} = 0\,. \label{eq:sigma_noisesteady} \end{equation} The matrices have dimension $2N\times 2N$ with $N$ being the number of oscillators. $\mathbf{M}$ contains details of the model and the damping and reads for one oscillator \begin{equation} \mathbf{M}= \left(\begin{array}{cc} 0 & 1/m\\ -m\tilde{\omega}^2 & -\gamma\\ \end{array}\right)\,, \label{eq:M_mat} \end{equation} with the effective frequency $\tilde{\omega}^2=\omega^2+3\kappa\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ which itself depends on an element of the covariance matrix $\mathbf{\Sigma}$. Therefore, (\ref{eq:sigma_noisesteady}) is solved self-consistently with the solution for the linear system ($\kappa=0$) as an initial guess for $\mathbf{M}$ and the system-bath correlations.
These are shifted to the matrix $\mathbf{y}$ which reads \begin{equation} \mathbf{y}(t)= \left(\begin{array}{cc} 0 & \langle\xi(t)\langle q\rangle_\mathrm{tr}\rangle_\xi\\ 0 & \langle\xi(t)\langle p\rangle_\mathrm{tr}\rangle_\xi\\ \end{array}\right)\,. \label{eq:y_mat} \end{equation} In terms of a tensor product of the vector carrying the phase space variables $\vec{\sigma}=(q,p)^t$ and a vector with the noise $\vec{\xi}=(0,\xi)^t$, the correlation matrix is \begin{equation} \mathbf{y}(t) = \langle\hspace{-2.4pt}\langle\vec{\sigma}\opind{\rangle}{tr}\vec{\xi}^t(t)\rangle_\xi\,. \label{eq:sysbathcorr} \end{equation} If the effective frequency is taken to be fixed, the equations (\ref{eq:stoch_mat}) can be considered as a linear set of equations with additional driving $\xi(t)$ which are formally solved by the Green's function: \begin{equation} \langle\vec{\sigma}\rangle_\mathrm{tr} = \int_0^t\rmd t'\mathbf{G}(t-t')\vec{\xi}(t')\,. \label{eq:formsol} \end{equation} The noise expectation value of the product of $\langle\vec{\sigma}\rangle_\mathrm{tr}$ with the noise vector $\vec{\xi}^t(t)$ gives a form of $\mathbf{y}$ where all noises are shifted to the bath auto-correlation function: \begin{equation} \mathbf{y}(t) = \mathbf{y}(0) + \int_0^t\rmd t' \mathbf{G}(t-t')\langle\vec{\xi}(t)\vec{\xi}^t(t')\rangle_\xi\;. \label{eq:yfunc_int} \end{equation} The elements of the matrix $\mathbf{L}(t-t')=\langle\vec{\xi}(t)\vec{\xi}^t(t')\rangle_\xi$ are given by the real part of (\ref{eq:bath_corr}), which completes the presented formalism to a deterministic description based solely on model parameters. For consistent results with (\ref{eq:sigma_noisesteady}) one has to integrate to sufficiently large times until a constant steady-state value of $\mathbf{y}$ is reached.
By exploiting the time-translational symmetry of the system, this integral can be differentiated with respect to time and written as a system of two coupled differential equations \begin{eqnarray} \dot{\mathbf{y}}(t) =& \mathbf{G}(t)\mathbf{L}(t) \label{eq:yfuncm}\\ \dot{\mathbf{G}}(t) =& \mathbf{M}\mathbf{G}(t)\,, \label{eq:greenfunc} \end{eqnarray} which allows a simple and computationally efficient implementation. Treating both the system-bath correlations and the dissipative parts as a linear system with effective frequency $\tilde{\omega}$ provides a consistent formalism for the nonlinear system. \newline \textit{c) Generalization to chains of oscillators}\newline \label{sec:chains_ext} \begin{figure} \begin{center} \includegraphics[width=7.5cm]{oscis_chain_diffT.pdf} \end{center} \caption{Schematic diagram of a chain of anharmonic oscillators which is terminated by thermal reservoirs. The dotted lines illustrate the on-site potentials for $\kappa>0$ (blue) and $\kappa<0$ (red), while the solid black line represents the harmonic case with $\kappa=0$. The width of the potential determines the quantum state (green) with its effective frequency $\tilde{\omega}_n$.} \label{fig:chain_anh} \end{figure} To proceed to chains of anharmonic oscillators, we choose a quadratic coupling between adjacent sites, i.e., \begin{equation} H_s = \sum_{n=1}^N\frac{p_n^2}{2m}+\frac{1}{2}m\omega_n^2 q_n^2+\frac{1}{4}m\kappa q_n^4+\frac{\mu}{2}\sum_{n=1}^{N-1}(q_n-q_{n+1})^2\; \label{eq:Ham_qu} \end{equation} and, for the sake of transparency, work with chains with homogeneous mass $m$, anharmonicity $\kappa$, and inter-oscillator coupling $\mu$. If the on-site frequencies are also homogeneous, we denote them with $\omega$. The generalization to inhomogeneous structures is straightforward (see Sec.~\ref{sec:rect_disordered}).
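The explicit drift matrix for such a chain is detailed in the appendix of this work; the sketch below constructs the structure implied by (\ref{eq:Ham_qu}), with damping acting only on the terminal sites. The phase-space ordering $(q_1,p_1,\dots,q_N,p_N)$ and the interface are implementation choices, not prescribed by the text:

```python
import numpy as np

def chain_drift_matrix(N, m, omega_eff, mu, gamma_l, gamma_r):
    """2N x 2N drift matrix for the first-moment dynamics of the chain.

    omega_eff: length-N array of effective frequencies omega_tilde_n.
    """
    M = np.zeros((2 * N, 2 * N))
    for n in range(N):
        iq, ip = 2 * n, 2 * n + 1
        M[iq, ip] = 1.0 / m                       # d<q_n>/dt = <p_n>/m
        M[ip, iq] = -m * omega_eff[n] ** 2        # on-site restoring force
        if n > 0:                                 # coupling (mu/2)(q_{n-1} - q_n)^2
            M[ip, iq] -= mu
            M[ip, 2 * (n - 1)] = mu
        if n < N - 1:                             # coupling (mu/2)(q_n - q_{n+1})^2
            M[ip, iq] -= mu
            M[ip, 2 * (n + 1)] = mu
    M[1, 1] = -gamma_l                            # dissipation on site 1 ...
    M[2 * N - 1, 2 * N - 1] = -gamma_r            # ... and on site N
    return M
```

For $N=1$ this reduces to the single-oscillator matrix given above, and for a pinned, end-damped chain all eigenvalues of the drift matrix have non-positive real parts, as required for a well-defined steady state.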
Note that this Hamiltonian differs from the paradigmatic Fermi-Pasta-Ulam model \cite{Berman2005} in that there the nonlinearities appear in the inter-oscillator couplings, while here they enter as on-site pinning potentials. Now, to study quantum heat transfer through this extended structure, the formulation for a single oscillator is easily generalized along the lines presented in \cite{Stockburger2017, Motz2017}. The chain is terminated at its left ($l$) and right ($r$) end by two independent reservoirs, see figure~\ref{fig:chain_anh}, with bath correlations given by \begin{equation} \langle\xi_\nu(t)\xi_{\nu'}(t')\rangle =\delta_{\nu,\nu'}\Re L_\nu(t-t')\,, \; \nu=l, r \label{eq:noisecor_multres} \end{equation} where the $L_\nu(t)$ are specified in (\ref{eq:bath_corr}) and (\ref{eq:spectralden}) with $J(\omega)\to J_\nu(\omega)$ according to $\gamma\to \gamma_\nu$ and $\beta\to \beta_\nu=1/k_{\rm B}T_\nu$. For heat transfer one considers situations with $T_l\neq T_r$ corresponding to a `cold' and a `hot' reservoir with temperatures $T_h>T_c$. In the absence of cross-correlations, the contributions of the two reservoirs are additive in the corresponding Caldeira-Leggett Hamiltonian \cite{Caldeira1983a} so that we have in generalization of (\ref{eq:adjoint_sled}) \begin{eqnarray} \frac{\rmd}{\rmd t} A_\xi = & \frac{\rmi}{\hbar}[H_s,A_\xi] - \frac{\rmi}{\hbar}\xi_l(t)[q_1,A_\xi] - \frac{\rmi}{\hbar}\xi_r(t)[q_N,A_\xi]\nonumber\\ & + \frac{\rmi}{\hbar}\frac{\gamma_l}{2}\{p_1,[q_1,A_\xi]\} + \frac{\rmi}{\hbar}\frac{\gamma_r}{2}\{p_N,[q_N,A_\xi]\}\,. \label{eq:adjoint_sled_chain} \end{eqnarray} Accordingly, the cumulant truncation goes through as for a single oscillator and the dynamics of the phase space operators collected in the operator-valued vector $\vec{\sigma}=(q_1,p_1,\dots,q_N,p_N)^t$ follows from \begin{equation} \frac{\rmd}{\rmd t}\langle\vec{\sigma}\rangle_\mathrm{tr}=\mathbf{M}\langle\vec{\sigma}\rangle_\mathrm{tr}+\vec{\xi}(t)\,.
\label{eq:1st_cumulants} \end{equation} Here $\vec{\xi}=(0, \xi_l, 0, \dots, 0, \xi_r)^t$ and the matrix $\mathbf{M}$ contains the dissipative parts and the effective frequencies $\tilde{\omega}_n$ according to (\ref{eq:eff_freq}), which effectively account for the nonlinearities and are determined self-consistently. The detailed structure of $\mathbf{M}$ for the Hamiltonian (\ref{eq:Ham_qu}) is shown in \ref{app:system_matrix}. Having $\mathbf{M}$ at hand, the multipartite system obeys the steady-state equations (\ref{eq:sigma_noisesteady}), (\ref{eq:yfuncm}), and (\ref{eq:greenfunc}), where the only non-zero elements in $\mathbf{L}(t-t')$ are the second and last diagonal elements containing the auto-correlation functions $L_l(t)$ and $L_r(t)$, respectively. \bigskip \textit{d) Energy fluxes}\newline Temperature differences between thermal reservoirs at different ends of the chain give rise to an energy flux and thus heat transfer. Locally, for the change of energy at an individual site $n$ one must distinguish between the oscillators at the boundaries ($n=1, N$) and the oscillators in the bulk. For the latter, the change in energy follows from \begin{equation} H_n = \frac{p_n^2}{2m}+\frac{1}{2}m\omega_n^2 q_n^2+\frac{1}{4}m\kappa q_n^4+\frac{\mu}{2}(q_{n-1}-q_n)^2 + \frac{\mu}{2}(q_n-q_{n+1})^2\, \end{equation} and also includes the nearest-neighbor couplings, while for the oscillators at the boundaries only one coupling term appears and the other one is replaced by the coupling to the respective reservoir. Calculating the time evolution for each of these parts according to $\frac{\rmd}{\rmd t} \langle H_n\rangle_\mathrm{tr} = \langle\mathcal{L}^\dagger H_n\rangle_\mathrm{tr}$ leads, after performing the stochastic average, to the dynamics of the on-site energies.
This then directly leads to the energy fluxes between adjacent sites and between the oscillators at the ends and the respective reservoirs, namely, \begin{eqnarray} \frac{\rmd}{\rmd t} \langle\hspace{-2.4pt}\langle H_1\rangle\hspace{-2.4pt}\rangle &=& \langle\hspace{-2.4pt}\langle j_{l,1}\rangle\hspace{-2.4pt}\rangle - \langle\hspace{-2.4pt}\langle j_{1,2}\rangle\hspace{-2.4pt}\rangle \nonumber\\ \frac{\rmd}{\rmd t} \langle\hspace{-2.4pt}\langle H_n\rangle\hspace{-2.4pt}\rangle &=& \langle\hspace{-2.4pt}\langle j_{n-1,n}\rangle\hspace{-2.4pt}\rangle-\langle\hspace{-2.4pt}\langle j_{n,n+1}\rangle\hspace{-2.4pt}\rangle,\quad 2\leq n\leq N-1\nonumber\\ \frac{\rmd}{\rmd t} \langle\hspace{-2.4pt}\langle H_N \rangle\hspace{-2.4pt}\rangle &=& \langle\hspace{-2.4pt}\langle j_{N-1,N}\rangle\hspace{-2.4pt}\rangle - \langle\hspace{-2.4pt}\langle j_{r,N}\rangle\hspace{-2.4pt}\rangle\,, \nonumber\\ \end{eqnarray} where respective heat fluxes follow from \begin{eqnarray} \langle\hspace{-2.4pt}\langle j_{n-1,n}\rangle\hspace{-2.4pt}\rangle &=& \frac{\mu}{m}\langle\hspace{-2.4pt}\langle q_{n-1}p_n\rangle\hspace{-2.4pt}\rangle,\quad\phantom{-} 2\leq n\leq N\label{eq:heatfluxleft}\\ \langle\hspace{-2.4pt}\langle j_{n,n+1}\rangle\hspace{-2.4pt}\rangle &=& -\frac{\mu}{m}\langle\hspace{-2.4pt}\langle q_{n+1}p_n\rangle\hspace{-2.4pt}\rangle,\quad 1\leq n\leq N-1\label{eq:heatfluxright}\\ \langle\hspace{-2.4pt}\langle j_{l,1}\rangle\hspace{-2.4pt}\rangle &=& \frac{1}{m}\langle\xi_l(t)\langle p_1\opind{\rangle}{tr}\rangle_\xi - \gamma_l\Big\langle\hspace{-2.5pt}\Big\langle\frac{p_1^2}{m}\Big\rangle\hspace{-2.5pt}\Big\rangle\label{eq:heatfluxres1}\\ \langle\hspace{-2.4pt}\langle j_{r,N}\rangle\hspace{-2.4pt}\rangle &=& -\frac{1}{m}\langle\xi_r(t)\langle p_N\opind{\rangle}{tr}\rangle_\xi + \gamma_r\Big\langle\hspace{-2.5pt}\Big\langle\frac{p_N^2}{m}\Big\rangle\hspace{-2.5pt}\Big\rangle\label{eq:heatfluxresN}\,. 
\end{eqnarray} One should note that the above averages are trace and noise moments, which means that the respective bulk currents in (\ref{eq:heatfluxleft}) and (\ref{eq:heatfluxright}) are determined by elements of the covariance matrix. The first terms in (\ref{eq:heatfluxres1}) and (\ref{eq:heatfluxresN}) represent system-bath correlations and have to be computed as elements of $\mathbf{y}$ (\ref{eq:sysbathcorr}) to obtain the covariance matrix. Therefore, once the covariance matrix is known, the energy currents are implicitly known as well. \bigskip \textit{e) Rectification of heat transfer}\newline An important measure for the asymmetry in heat transfer is the rectification coefficient. It quantifies the asymmetry of the heat current when the temperature difference between the reservoirs is reversed. The configuration with the left reservoir being hot at temperature $T_l=T_h$ and the right one being cold at temperature $T_r=T_c<T_h$ is denoted by $h\rightarrow c$ with the corresponding heat current in steady state $\llangle j\rrangle_\mathrm{hc}$. The opposite situation, where {\em only} the temperatures are exchanged, i.e.\ $T_l=T_c<T_r=T_h$, is termed $c\leftarrow h$ with the respective heat current $\llangle j\rrangle_\mathrm{ch}$. Based on this notation the rectification coefficient reads \begin{equation} \alpha = \frac{|\llangle j\rrangle_\mathrm{hc}|-|\llangle j\rrangle_\mathrm{ch}|}{|\llangle j\rrangle_\mathrm{hc}|+|\llangle j\rrangle_\mathrm{ch}|}\, . \label{eq:rectcoef} \end{equation} Since heat always flows from hot to cold, $\alpha>0$ ($\alpha<0$) means that the magnitude of the heat flow in the $h\rightarrow c$ ($c\leftarrow h$) configuration exceeds that in the reversed one. The basic ingredients for finite rectification are {\em both} nonlinearity and spatial symmetry breaking \cite{Segal2006, Wu2009}.
A purely harmonic system does not show any rectification even in the presence of spatial asymmetry, which implies that non-equidistant energy level spacings induced by the nonlinearities are a crucial pre-requisite. In the present case of anharmonic oscillators, nonlinearities are accounted for by state-dependent effective frequencies $\tilde{\omega}_n$ that depend on temperature and damping. \bigskip \section{Test of the approach} \label{sec:state_osci_chain} \begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{qsps_anham_comp_michi_thomas_beta_3_gamma_0p1_insetmom.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{qsps_anham_comp_michi_thomas_beta_5_gamma_0p5_insetmom.pdf} \end{minipage} \caption{The diagonal elements of the covariance matrix $\mathbf{\Sigma}$ versus the anharmonicity $\kappa$ for a single oscillator whose Hamiltonian is given by (\ref{eq:Hamonean}) and which is coupled to a reservoir with $\beta=3$ and damping $\gamma=0.1$ (left) and $\beta=5$ and $\gamma=0.5$ (right). The blue symbols are results from the deterministic cumulant expansion presented here, while the red crosses are obtained from a stochastic sampling of the SLED in position representation. The inset shows the higher order moments (multiplied by the respective $\kappa$) of the exact dynamics obtained by a stochastic sampling of the SLED (red) and the approximations of these moments according to (\ref{eq:moment_approx}) (blue). Other parameters are $\omega=1$, $m=1$, $\hbar=1$ and $\omega_c=30$.} \label{fig:anham_comp_det_stoch} \end{figure*} \textit{a) The classical limit}\newline As a first and simple illustration of the presented method, we consider an anharmonic oscillator coupled to a single classical thermal reservoir.
In thermal equilibrium, (\ref{eq:stoch_mat}) lead to algebraic equations which can be rearranged as \begin{eqnarray} \langle\hspace{-2.4pt}\langle qp\rangle\hspace{-2.4pt}\rangle=& 0\nonumber\\ \langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle =& \frac{1}{\gamma}\langle\xi(t)\langle p\rangle_\mathrm{tr}\rangle_\xi\nonumber\\ \langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle =&\frac{1}{m\tilde{\omega}^2}[\frac{1}{m\gamma}\langle\xi(t)\langle p\rangle_\mathrm{tr}\rangle_\xi+\langle\xi(t)\langle q\rangle_\mathrm{tr}\rangle_\xi]\,. \label{eq:app_stoch_mat_final} \end{eqnarray} Now, only the system-bath correlations have to be calculated to arrive at a quadratic equation for $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$.\newline In the classical limit with $\omega_c\to\infty$ and $\beta\to 0$, the real part of the bath auto-correlation function (\ref{eq:bath_corr}) reduces to $L'(t-t')=2k_\mathrm{B} Tm\gamma\delta(t-t')$ so that (\ref{eq:yfunc_int}) reads $\mathbf{y}(t) = k_\mathrm{B} Tm\gamma \mathbf{G}(0)$; no initial system-bath correlations are assumed since we take $\mathbf{y}(0)=0$. This way, from $\mathbf{G}(0)=\mathbb{1}$, one arrives at \begin{equation} \mathbf{y}(t)= \left(\begin{array}{cc} 0 & 0\\ 0 & y_p\\ \end{array}\right)\,, \label{eq:y_mat_class} \end{equation} with $y_p=\langle\xi(t)\langle p\rangle_{\mathrm{tr},c}\rangle_\xi=k_\mathrm{B} Tm\gamma$ while $\langle\xi(t)\langle q\rangle_{\mathrm{tr},c}\rangle_\xi=0$. 
With these findings, (\ref{eq:app_stoch_mat_final}) takes the form known for classical harmonic oscillators \cite{Weiss}, however, with effective frequency $\tilde{\omega}$ \begin{eqnarray} \langle\hspace{-2.4pt}\langle qp\rangle\hspace{-2.4pt}\rangle=& 0\nonumber\\ \langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle =& \frac{y_p}{\gamma}=mk_\mathrm{B} T\nonumber\\ \langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle =&\frac{y_p}{(m\tilde{\omega})^2\gamma}=\frac{k_\mathrm{B} T}{m\tilde{\omega}^2}\,. \label{eq:app_stoch_mat_class} \end{eqnarray} The quadratic equation for $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ has two solutions with only one being physical, namely, \begin{equation} \langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle=\frac{\omega^2}{6\kappa}\Big(\sqrt{1+\frac{12\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_0}{\omega^2}\kappa}-1\Big)\;. \label{eq:sol_class} \end{equation} An expansion for small $|\kappa|$ is illuminating; it yields \begin{equation} \langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle\approx\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_0(1-\frac{3\kappa}{\omega^2}\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_0)\;, \label{eq:sol_class_taylor} \end{equation} with the harmonic result $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_0=k_\mathrm{B} T/(m\omega^2)$. As expected, $\kappa>0$ (stiffer mode) decreases $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$, while $\kappa<0$ (softer mode) leads to a broadening. (\ref{eq:sol_class}) also provides an upper bound for the validity of the perturbative treatment in case $\kappa<0$, namely, $|\tilde{\omega}^2-\omega^2|/\omega^2=3 |\kappa| \langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle_0/\omega^2 < 1/4$. 
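The self-consistency underlying this solution is easy to check numerically: iterating $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle = k_\mathrm{B}T/(m\tilde{\omega}^2)$ with $\tilde{\omega}^2=\omega^2+3\kappa\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ converges to the physical root of the quadratic equation. A sketch (the parameter values are our own choice):

```python
import math

def q2_selfconsistent(kT, m, omega, kappa, tol=1e-14):
    """Fixed-point iteration of <<q^2>> = kT / (m * w_eff^2)
    with the effective frequency w_eff^2 = omega^2 + 3*kappa*<<q^2>>."""
    x = kT / (m * omega**2)          # harmonic starting value <<q^2>>_0
    for _ in range(1000):
        x_new = kT / (m * (omega**2 + 3.0 * kappa * x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def q2_closed_form(kT, m, omega, kappa):
    """Physical root of 3*kappa*x^2 + omega^2*x - kT/m = 0."""
    x0 = kT / (m * omega**2)
    return omega**2 / (6.0 * kappa) * (math.sqrt(1.0 + 12.0 * kappa * x0 / omega**2) - 1.0)

# Example: a stiffer mode (kappa > 0) narrows the distribution
kT, m, omega, kappa = 1.0, 1.0, 1.0, 0.2
x_it = q2_selfconsistent(kT, m, omega, kappa)
x_cf = q2_closed_form(kT, m, omega, kappa)
# both agree, and both lie below the harmonic value kT/(m*omega^2)
```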
Indeed, in comparison to an exact numerical calculation of $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$, one finds that in this range the result (\ref{eq:sol_class}) provides an accurate description with deviations at most of order 1\%. \bigskip \textit{b) Quantum oscillator in thermal equilibrium}\newline \begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{qsps_anham_kap_neg_eta_0p01_kbT_0.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{qsps_anham_kap_neg_eta_0p10_kbT_0.pdf} \end{minipage} \caption{$\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$, $\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle$ and the uncertainty $\sqrt{\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle}$ versus the anharmonicity $\kappa$ for a damped anharmonic oscillator with system Hamiltonian (\ref{eq:Hamonean}). Left plot shows $\gamma=0.01$ and the right one $\gamma=0.10$. The dashed lines are a guide for the eye and represent the value $0.5$. Other parameters are $\omega=1$, $m=1$, $\hbar=1$, $k_\mathrm{B} T=0$ and $\omega_c=30$.} \label{fig:anham_neg_kap} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=17cm]{wigner_var_kappa.pdf} \end{center} \caption{Wigner functions of an anharmonic oscillator (\ref{eq:Hamonean}) for different values of $\kappa=-0.2$ (left), $0.0$ (middle) and $0.5$ (right). The plots show the broadening of the state for negative $\kappa$ and the squeezing for positive $\kappa$, while $\kappa=0$ gives an almost circular distribution which is a consequence of small damping $\gamma=0.01$. 
Other parameters are $\omega=1$, $m=1$, $\hbar=1$, $k_\mathrm{B} T=0$ and $\omega_c=30$.} \label{fig:wigner_T_0} \end{figure*} We now proceed to analyze the performance of the approach in the quantum regime by considering the properties in thermal equilibrium of a single anharmonic oscillator embedded in a single thermal reservoir. In case of $\kappa>0$ corresponding to a globally stable potential surface, this particularly allows us to compare with numerically exact results from a SLED simulation with the full anharmonicity taken into account. Figure~\ref{fig:anham_comp_det_stoch} displays the variances $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ and $\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle$ obtained from both the deterministic method presented here and from the SLED calculation which serves as a benchmark for the cumulant truncation. Apparently, the cumulant truncation performs very accurately in the considered window of values for $\kappa$ for both phase space operators even for stronger dissipation. The variation of the position and momentum fluctuations caused by the increasing $\kappa$ is almost $50\%$ for weak coupling. Our approach covers this variation in accordance with the benchmark data. It is also seen (cf.\ the insets) that the perturbative treatment provides accurate results for higher order moments such as $\kappa\langle\hspace{-2.4pt}\langle q^4\rangle\hspace{-2.4pt}\rangle$ and $\kappa\Big\langle\hspace{-2.5pt}\Big\langle\frac{qp^3+p^3q}{2}\Big\rangle\hspace{-2.5pt}\Big\rangle$, see (\ref{eq:moment_approx}). Note that in the considered range for $\kappa$ one also clearly sees the impact of friction on the variances: finite dissipation tends to suppress fluctuations in position and to enhance those in momentum. The cumulant formulation nicely captures this feature. The comparison with the exact data also allows us to quantify more precisely the accuracy of the perturbative treatment of the nonlinearities. 
For this purpose, according to (\ref{eq:Hamonean}), it is natural to consider the dimensionless quantity $\tilde{\kappa}= \kappa q_0^2/2 \omega^2$ with $q_0$ being a typical harmonic length scale as a measure for the relative impact of anharmonicities. In the low-temperature range of interest here, one chooses the ground state width of the harmonic system $q_0^2=\hbar/2m\omega$ which implies $\tilde{\kappa}=\hbar\kappa/4m\omega^3$. It then turns out that the perturbative treatment provides accurate results even for $\tilde{\kappa}\sim 0.5$, i.e. not only for weak but also for moderately strong anharmonicities. In case of a softer mode ($\kappa<0$) a direct comparison with full numerical findings is no longer possible as the anharmonic potential is only locally stable. The perturbative treatment is thus physically sensible only as long as all relevant processes remain sufficiently localized around the minimum of the potential. In particular, the approach does not capture quantum tunneling through the potential barriers from the well into the continuum. However, it provides the nonlinearity required for rectification of heat transfer with the finite dissipation promoting localized states in the well. Figure~\ref{fig:anham_neg_kap} shows the phase space variances $\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle$ and $\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle$ together with the uncertainty product $\sqrt{\langle\hspace{-2.4pt}\langle q^2\rangle\hspace{-2.4pt}\rangle\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle}$. For very weak damping the latter remains at the minimal value $1/2$, while position fluctuations increase and momentum fluctuations decrease with growing $|\kappa|$. This is in contrast to stronger friction, where already for $\kappa=0$ (harmonic limit) the uncertainty product exceeds $1/2$ with strongly suppressed (enhanced) momentum (position) fluctuations for larger $|\kappa|$. 
Note that the range where the approach is expected to be applicable is restricted to substantially smaller values of $|\kappa|$ compared to the situation with $\kappa>0$. We close this analysis by presenting Wigner functions $W(q,p)$ for various values of $\kappa$ at $k_\mathrm{B} T=0$, see figure~\ref{fig:wigner_T_0}. The squeezing of phase space distributions is clearly seen with momentum squeezing for $\kappa<0$ and squeezing in position for $\kappa>0$. \newline \bigskip \textit{c) Towards quantum heat transfer: anharmonic chains}\newline \begin{figure} \begin{center} \includegraphics[width=15cm]{covarianzmatrix_10HO_kap_-0p10_splitmat.pdf} \vfill \vfill \includegraphics[width=15cm]{covarianzmatrix_10HO_kap_0p50_splitmat.pdf} \end{center} \caption{Blocks of the covariance matrices $\mathbf{\Sigma}$ for ordered anharmonic chains with $\kappa=-0.10$ (top) and $\kappa=0.50$ (bottom). The columns show different parts of $\mathbf{\Sigma}$: all covariances of the positions $\langle\hspace{-2.4pt}\langle q_nq_l\rangle\hspace{-2.4pt}\rangle$ (left), of the momenta $\langle\hspace{-2.4pt}\langle p_n p_l\rangle\hspace{-2.4pt}\rangle$ (middle) and symmetrized mixed products $\langle\hspace{-2.4pt}\langle q_np_l+p_lq_n\rangle\hspace{-2.4pt}\rangle/2$ (right). The chains correspond to the Hamiltonian in (\ref{eq:Ham_qu}) with $N=10$ oscillators. The first one $n=1$ is attached to a bath with $T_h=1$ and the last one $n=N=10$ to a reservoir with temperature $T_c=0.0$. Other parameters are $\gamma_l=\gamma_r=0.1$, $\mu=0.3$, $\omega=1$, $m=1$, $\hbar=1$ and cut-offs for both reservoirs are $\omega_c=100$.} \label{fig:covmats_chains} \end{figure} One advantage of the developed methodology is its high computational efficiency also for large systems. Here, we analyze heat transfer through a chain of anharmonic oscillators of length $N=10$ according to (\ref{eq:Ham_qu}) terminated at both ends by thermal reservoirs at temperatures $T_l=T_h$ and $T_r=T_c<T_h$, respectively. 
We set all inter-oscillator couplings to $\mu=0.3$, masses to $m=1.0$ and frequencies to $\omega=1.0$; further $T_c=0$ while $T_h=1$ in dimensionless units. Density plots in figure~\ref{fig:covmats_chains} show the covariance matrices $\mathbf{\Sigma}$ for two anharmonicity parameters $\kappa=-0.10$ (top) and $\kappa=0.50$ (bottom). One clearly sees the impact of a temperature difference as the diagonal elements corresponding to the mode coupled to the hot bath are substantially larger than those attached to the cold one. The temperature dependence of the effective frequencies leads to a more distinct impact of the nonlinearity on the oscillators coupled to the hot bath. Apparently, for these sites the negative $\kappa$ leads to a broadening of the state in position as compared to the case for positive $\kappa$. \begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{effom_quant_overind_N_50_mu_0p3_eta_0p1.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{efftemp_quant_overind_N_50_mu_0p3_eta_0p1.pdf} \end{minipage} \caption{The effective frequency $\tilde{\omega}$ (left) and temperature $\tilde{T}$ (right) over the site index $n$ of a chain with $N=50$ oscillators. Other model parameters are the same as in figure~\ref{fig:covmats_chains}.} \label{fig:anham_effomtemp} \end{figure*} It is instructive to display the effective frequencies $\tilde{\omega}_n$ along the chain (cf.\ figure~\ref{fig:anham_effomtemp} left) for a long chain $N=50$. Apparently, the distribution is rather flat in the bulk and shows deviations only near the interfaces to the reservoirs with the frequencies being smaller (larger) for $\kappa<0$ ($\kappa>0$). We also show in figure~\ref{fig:anham_effomtemp} (right) the distribution of effective temperatures $\tilde{T}_n$ reconstructed from $\langle\hspace{-2.4pt}\langle p^2_n\rangle\hspace{-2.4pt}\rangle=(m\tilde{\omega}_n\hbar/2)\coth[\hbar\tilde{\omega}_n/(2k_\mathrm{B} \tilde{T}_n)]$. 
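Inverting this relation for $\tilde{T}_n$ is straightforward; a sketch in units $\hbar=k_\mathrm{B}=1$ (function names are our own):

```python
import math

HBAR = KB = 1.0

def p2_thermal(m, w, T):
    """Thermal momentum variance of a mode with effective frequency w."""
    if T == 0.0:
        return m * w * HBAR / 2.0                 # ground-state value
    return (m * w * HBAR / 2.0) / math.tanh(HBAR * w / (2.0 * KB * T))

def effective_temperature(p2, m, w):
    """Invert <<p^2>> = (m w hbar / 2) coth(hbar w / (2 kB T)) for T."""
    c = 2.0 * p2 / (m * w * HBAR)                 # coth(...) >= 1
    if c <= 1.0:
        return 0.0                                # at the ground state
    return HBAR * w / (2.0 * KB * math.atanh(1.0 / c))

# Round trip: the reconstructed temperature matches the input temperature
m, w, T = 1.0, 1.2, 0.7
T_rec = effective_temperature(p2_thermal(m, w, T), m, w)
```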
For further discussions of thermometry for dissipative systems we refer to very recent literature such as ref.~\cite{Hovhannisyan2018}. Figure \ref{fig:anham_effomtemp} reveals a profile similar to that of the effective frequencies, thus indicating ballistic transport. This is confirmed by results for the heat currents (not shown), which do not reveal any dependence on the chain length. The fact that the temperature difference drops only within narrow interfaces between phonon chain and reservoir is reminiscent of the voltage drop in molecular chains in contact with electronic leads \cite{Cuevas}. The question to what extent anharmonicities alone may lead to normal heat flow has recently led to conflicting results depending on the model under consideration \cite{Lepri1998, Terraneo2002}. For the weak to moderate anharmonicities considered here, a combination of nonlinearity and disorder such as studied for example in \cite{Li_Prosen2004, Dhar2008} may provide a mechanism to induce normal heat flow. This is, however, beyond the scope of the current study, where we focus on heat rectification, and will be explored elsewhere. \section{Rectification of quantum heat transport: single oscillator} \label{sec:rect_1HO} One of the simplest models to realize rectification consists of an anharmonic oscillator coupled to two reservoirs at different temperatures $T_l$ and $T_r$. The only way to induce a spatial asymmetry in such a model is to choose different coupling strengths $\gamma_l$ and $\gamma_r$ which remain constant under an exchange of reservoir temperatures. In steady state, we are free to choose either of the two currents between system and reservoirs as they have identical absolute values. 
The energy current between the left reservoir and the oscillator reads according to (\ref{eq:heatfluxres1}) \[ \langle\hspace{-2.4pt}\langle j_{l,1}\rangle\hspace{-2.4pt}\rangle = \frac{y_p^{(l)}}{m}- \gamma_l\Big\langle\hspace{-2.5pt}\Big\langle\frac{p^2}{m}\Big\rangle\hspace{-2.5pt}\Big\rangle \] with $y_p^{(l)}$ being the momentum part of the system-bath correlation function corresponding to the left reservoir. In the classical limit, this is $y_p^{(l)}=m\gamma_l/\beta_l$, while $\langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle$ for a system coupled to two baths is determined by additive fluctuations and damping strengths, respectively, i.e., \[ \langle\hspace{-2.4pt}\langle p^2\rangle\hspace{-2.4pt}\rangle=\frac{y_p^{(l)}+y_p^{(r)}}{\gamma_l+\gamma_r}\,. \] The resulting heat current reads \[ \langle\hspace{-2.4pt}\langle j_{l,1}\rangle\hspace{-2.4pt}\rangle = \frac{\gamma_l\gamma_r}{\gamma_l+\gamma_r}k_\mathrm{B}\Delta T \] with $\Delta T=T_l-T_r$. This immediately shows that {\em no} rectification can be observed for a single classical mode since an exchange of temperatures would result in a current with identical absolute value \mbox{$\llangle j\rrangle_\mathrm{hc} = -\llangle j\rrangle_\mathrm{ch}$}. \begin{figure} \begin{center} \includegraphics[width=8.0cm]{rect_vareta_quant_Th_1_Tc_0_1HO.pdf} \end{center} \caption{Rectification coefficient $\alpha$ versus the difference of damping strengths $\Delta\gamma = \gamma_r-\gamma_l$ for a single anharmonic oscillator coupled to two quantum reservoirs with constant $\gamma_l=0.1$ and varying $\gamma_r$. The reservoirs have temperatures $T_h=1$, $T_c=0.0$ and cut-off frequencies $\omega_c=100$. Other parameters are $\omega=1$, $m=1$ and $\hbar=1$.} \label{fig:rect_oneHO} \end{figure} The previous argument, however, does not apply to the quantum case, where the system-reservoir correlations depend on details of the Green's function as a consequence of the non-Markovianity of the reservoirs. 
This Green's function is calculated with the effective frequency $\tilde{\omega}$ which varies with an exchange of the reservoir temperatures. Therefore, rectification is possible with a single nonlinear system degree of freedom as shown in refs.~\cite{Segal2005, Ruokola2009}. Figure~\ref{fig:rect_oneHO} shows the rectification coefficient $\alpha$ versus $\Delta\gamma = \gamma_r-\gamma_l$ where $\gamma_l=0.1$ is kept constant. Results for positive and negative values of $\kappa$ over the whole range of $\Delta\gamma$ reveal that $\kappa <0$ leads to $\alpha<0$ and $\kappa>0$ to $\alpha >0$. This difference in the signs can be attributed to the different level structure of the system, where for the softer mode the energy level spacings are narrower for higher lying states while the opposite is true for the stiffer mode, see figure~\ref{fig:pot_anharm}. \section{Rectification of quantum heat transport: chains of oscillators}\label{sec:rect_chains} \begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_quant_overeta_varkappa_pos_N_2.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_quant_overeta_varkappa_neg_N_2.pdf} \end{minipage} \caption{Rectification coefficient $\alpha$ over $\Delta\gamma=\gamma_r-\gamma_l$ for positive $\kappa$ (left) and negative (right) for a system consisting of two anharmonic oscillators according to (\ref{eq:Ham_qu}) for $N=2$. $\gamma_l=0.1$ is constant, while $\gamma_r$ is varied. Each oscillator is coupled to its own reservoir; the reservoirs have different temperatures $T_h=1$, $T_c=0$ and equal $\omega_c=100$. Other parameters are: $\mu=0.3$, $\omega=1$, $m=1$, $\hbar=1$.} \label{fig:rect_overgam_varkappa} \end{figure*} Going from a single oscillator to chains of oscillators is not just a quantitative modification of the setting. 
It is particularly interesting as it allows, by tuning the asymmetry either in the coupling to the reservoirs or in the on-site frequencies, to control whether the rectification is positive or negative. This may be of relevance for experimental realizations of heat valves as very recently explored in \cite{Ronzani2018}. Underlying physical principles can be revealed already for a set-up consisting of two oscillators, where we keep frequency and coupling strength of the oscillator at site $n=1$ fixed and vary those at $n=2$, i.e.\ $\omega_2=\omega_1+\Delta\omega$ and $\gamma_r=\gamma_l+\Delta\gamma$. Before we discuss specific results below, let us already here elucidate the main mechanism that governs the rectification process. For the Hilbert space of the oscillator chain alone (no reservoirs) two sets of basis functions are distinguished, namely, the one consisting of the localized eigenstates of the individual oscillators and the one consisting of the normal modes of the coupled system. Then, roughly speaking, if the inter-oscillator coupling $\mu$ dominates over the oscillator-reservoir couplings $\gamma_l,\gamma_r$, heat transfer occurs along the delocalized normal modes, while in the opposite situation localized states dominate. It turns out that the changeover from one regime to the other one by tuning either $\Delta\omega$ or $\Delta\gamma$ is associated with a sign change in the rectification coefficient $\alpha$. This also implies that starting from the symmetric situation a growing asymmetry first leads to an extremum of $\alpha$ before one reaches the point $\alpha=0$. 
\begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_quant_overeta_varmu_pos.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_class_overeta_varmu_pos.pdf} \end{minipage} \caption{Rectification coefficient $\alpha$ over $\Delta\gamma$ for positive $\kappa$ from a quantum (left) and a classical (right) model with delta-correlated (Markovian) reservoirs. The model parameters are similar to those of figure~\ref{fig:rect_overgam_varkappa} (left) but with different inter-oscillator couplings $\mu$. The cut-off frequencies of the quantum reservoirs are $\omega_c=100$ and the temperatures for classical and quantum baths are $T_h=1$ and $T_c=0$.} \label{fig:rect_overeta_varmu} \end{figure*} This discussion can be made more quantitative by considering the effective spectral density $J_{\mathrm{eff},r}(\omega)$ of the right reservoir as seen from the first oscillator (fixed parameters) through the second one (varying parameters). For this purpose, we employ the formalism developed in \cite{Garg1985} and obtain for a purely ohmic bath [limit in (\ref{eq:spectralden}) for $\omega_c\to \infty$] the expression \begin{equation} J_{\mathrm{eff},r}(\omega)=\bar{\mu}^2\, \frac{m\omega\gamma_r \tilde{\omega}_2^4}{(\tilde{\omega}^2_2-\omega^2)^2+4\omega^2\gamma_r^2}\,, \label{eq:eff_spect} \end{equation} with the dimensionless inter-oscillator coupling $\bar{\mu}=\mu/m\tilde{\omega}_2^2$ and effective frequencies $\tilde{\omega}_{1/2}^2=\omega_{1/2}^2+3\kappa\langle\hspace{-2.4pt}\langle q_{1/2}^2\rangle\hspace{-2.4pt}\rangle$. It has the expected Lorentzian form with a maximum around $\omega=\tilde{\omega}_2$ and reduces in the low frequency regime to an ohmic type of density $J_{\mathrm{eff},r}(\omega\to 0)\approx \bar{\mu}^2 \, m \omega \gamma_r$. 
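The shape of (\ref{eq:eff_spect}) is easily explored numerically; the short sketch below (our own helper, with a bare frequency standing in for the effective $\tilde{\omega}_2$) checks the ohmic low-frequency limit and the resonance near $\tilde{\omega}_2$:

```python
def j_eff(w, m, gamma_r, mu, w2):
    """Effective spectral density of the right reservoir as seen by the
    first oscillator through the second one (Lorentzian form)."""
    mu_bar = mu / (m * w2**2)            # dimensionless coupling
    return (mu_bar**2 * m * w * gamma_r * w2**4
            / ((w2**2 - w**2)**2 + 4.0 * w**2 * gamma_r**2))

m, gamma_r, mu, w2 = 1.0, 0.1, 0.3, 1.0

# Ohmic low-frequency behaviour: J_eff ~ mu_bar^2 * m * w * gamma_r
w_low = 1e-4
ohmic = (mu / (m * w2**2))**2 * m * w_low * gamma_r

# The density is strongly peaked near the frequency w2 (weak damping)
j_res = j_eff(w2, m, gamma_r, mu, w2)
j_off = j_eff(0.5, m, gamma_r, mu, w2)
```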
Note that the effective spectral density (\ref{eq:eff_spect}) contains all three relevant parameters: the inter-oscillator coupling $\mu$ as well as $\gamma_r$ and $\omega_2$, which we use to induce spatial asymmetry in the system. Now, let us consider the case with $\Delta\omega=0$ and varying $\Delta\gamma$ such that in the symmetric situation, $\Delta\gamma=0$, one has\ $\mu > \gamma_l, \gamma_r$. As a consequence, for growing $\Delta\gamma$ first the delocalized mode picture applies and the rectification coefficient approaches an extremum around that asymmetry point, where heat transfer at resonance is optimal due to a matching of reservoir coupling constants to oscillator 1, i.e.\ $\gamma_{\mathrm{eff},r}\approx \gamma_l$ with the effective coupling from oscillator 1 to the right reservoir $\gamma_{\mathrm{eff},r}\equiv J_{\mathrm{eff},r}(\omega=\tilde{\omega}_1\approx \tilde{\omega}_2)/m\tilde{\omega}_2\equiv \bar{\mu}^2\, \tilde{\omega}_2^2/4\gamma_r$, i.e., \begin{equation} \bar{\mu}^2 \tilde{\omega}_2^2 \approx 4\gamma_l \gamma_r\, . \label{eq:rectmatching} \end{equation} With further increasing $\Delta\gamma$ the rectification tends to zero and changes sign; the asymmetry then dominates over the inter-oscillator coupling, and a picture based on localized modes captures the heat transfer. We will see below that the sign of the rectification coefficient $\alpha$ also depends on the sign of the anharmonicity parameter $\kappa$. \bigskip \textit{a) Rectification by variation of the damping}\newline Detailed results are now first discussed for the situation where asymmetry is induced by a varying damping, see Figure~\ref{fig:rect_overgam_varkappa}. As anticipated, one indeed observes a change in the sign of the rectification coefficient together with an extremum for moderate asymmetries. When comparing the location of the extrema with the prediction according to the matching condition (\ref{eq:rectmatching}), one finds an excellent agreement. 
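For given $\gamma_l$, the matching condition (\ref{eq:rectmatching}) predicts the damping at which the extremum of $\alpha$ should occur; a small sketch (a bare frequency is used in place of $\tilde{\omega}_2$ for simplicity):

```python
def gamma_r_extremum(mu, m, w2, gamma_l):
    """Solve mu_bar^2 * w2^2 = 4 * gamma_l * gamma_r for gamma_r,
    with the dimensionless coupling mu_bar = mu / (m * w2^2)."""
    mu_bar = mu / (m * w2**2)
    return mu_bar**2 * w2**2 / (4.0 * gamma_l)

# Parameters of the two-oscillator example discussed in the text
gr_star = gamma_r_extremum(mu=0.3, m=1.0, w2=1.0, gamma_l=0.1)  # -> 0.225
```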
For the chosen parameters $\mu>\gamma_l, \gamma_r$ in the symmetric case, the previous discussion directly applies. The sign change in $\alpha$ can also be understood from the impact of friction on $\langle\hspace{-2.4pt}\langle q_{1/2}^2\rangle\hspace{-2.4pt}\rangle$ and thus on the effective frequencies $\tilde{\omega}_{1/2}$: For strong coupling to the right reservoir ($\gamma_l\ll \gamma_r$) one always has (for identical harmonic frequencies) $\tilde{\omega}_2 < \tilde{\omega}_1$ for $\kappa>0$ and $\tilde{\omega}_2 > \tilde{\omega}_1$ for $\kappa<0$ due to the strong squeezing in position of oscillator 2. Quantum mechanically, the different energy level spacings then lead to $|\llangle j\rrangle_\mathrm{ch}|>|\llangle j\rrangle_\mathrm{hc}|$ and thus $\alpha<0$ for $\kappa>0$ and $|\llangle j\rrangle_\mathrm{ch}|<|\llangle j\rrangle_\mathrm{hc}|$ with $\alpha>0$ for $\kappa<0$. For weak asymmetry ($\gamma_l \leq\gamma_r$) the normal modes couple slightly more efficiently to the right reservoir and the bottleneck is the coupling to the left reservoir. Consequently, the relation between the respective heat currents and the sign of $\alpha$ interchanges compared to the strongly asymmetric situation. It is now also instructive to look at the rectification when the inter-oscillator coupling is tuned, see figure~\ref{fig:rect_overeta_varmu}: Following our above reasoning, the larger $\mu$, the stronger the asymmetry needs to be in order to induce a sign change in the rectification, while for very weak coupling no sign change occurs at all. The mechanism developed above, namely, delocalized modes versus localized modes depending on the strength of the asymmetry, suggests that for purely classical reservoirs a qualitatively similar behaviour should be present. This is indeed the case as figure~\ref{fig:rect_overeta_varmu} (right) reveals. The impact of the number of oscillators $N$ in the chain on the rectification is displayed in figure~\ref{fig:rect_overeta_varN}. 
With growing $N$ the asymmetry must increase as well in order to induce a sign change in $\alpha$. We ascribe this to the feature already addressed above (cf.~figure~\ref{fig:anham_effomtemp}), namely, an effective screening of the asymmetry such that the bulk remains robust against variations of the coupling to the right reservoir. However, the absolute value of $\alpha$ increases with increasing chain length. In this and the previous cases rectification of up to 10\%-15\% can be seen, in agreement with what has been found in the quantum regime in other studies \cite{Ruokola2009}. This also applies to situations where the frequency is modulated (next section) and reflects the fact that in the low energy sector (low temperatures) quantum oscillators are less influenced by anharmonicities than for higher lying states. \begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_quant_overeta_pos_var_N.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_quant_overeta_neg_var_N.pdf} \end{minipage} \caption{Rectification coefficient $\alpha$ over $\Delta\gamma$ for positive $\kappa=0.10$ (left) and negative $\kappa=-0.10$ (right). The number of oscillators $N$ is varying, while all other parameters are as in figure~\ref{fig:rect_overgam_varkappa}.} \label{fig:rect_overeta_varN} \end{figure*} \bigskip \textit{b) Rectification by modulation of the frequencies}\newline \begin{figure} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_quant_varT_N_2_eta_0p1_kap2_0p1} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{maxrect_varT_N_2_eta_0p1_kap2_0p1} \end{minipage} \caption{Left: Rectification coefficient $\alpha$ versus the difference of the frequencies $\Delta\omega = \omega_2-\omega_1$ with $\omega_1=1.0$ for a system of two coupled modes according to (\ref{eq:Ham_qu}) with $N=2$. 
The system is terminated by reservoirs with $\Delta T= T_h-T_c=1.0$, while different combinations of $T_h$ and $T_c$ are shown. The black circles show $\alpha$ for classical delta-correlated reservoirs. Right: The rectification versus the temperature of the hot bath $T_h$ for $\Delta T=1.0$ and $\omega_2=1.4$ which is the value where $\alpha$ is maximal. Both plots: The cut-off frequency of the quantum reservoirs is $\omega_c=100$ and the dampings are constant $\gamma_l=\gamma_r=0.1$. The inter-oscillator coupling is $\mu=0.3$ and both modes are anharmonic with $\kappa=0.1$. $\alpha <0$ is a quantum feature which occurs at low temperatures. Other parameters are $m=1$ and $\hbar=1$.} \label{fig:rect_varomega_diffT} \end{figure} \begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.0cm]{covarianzmatrix_quant_kT1_1_kT2_0_2HO_hc_bar3d.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.0cm]{covarianzmatrix_quant_kT1_1_kT2_0_2HO_ch_bar3d.pdf} \end{minipage} \vfill \begin{minipage}{8.5cm} \includegraphics[width=7.0cm]{covarianzmatrix_class_kT1_1_kT2_0_2HO_hc_bar3d.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.0cm]{covarianzmatrix_class_kT1_1_kT2_0_2HO_ch_bar3d.pdf} \end{minipage} \caption{The steady-state covariance matrix $\mathbf{\Sigma}$ for the system from figure~\ref{fig:rect_varomega_diffT} for both configurations $h\rightarrow c$ (left) and $c\leftarrow h$ (right) with quantum (top) and classical baths (bottom). The matrices are plotted for $\Delta\omega=4.0$ and reservoir temperatures $T_h=1$ and $T_c=0.0$. These parameters lead to $\alpha <0$ for the quantum case which can be explained by a tunneling between the modes for $c\leftarrow h$ where both modes are in the ground state (right). 
Instead, the first mode for $h\rightarrow c$ experiences thermal fluctuations (left) which reduces the overlap of both wave functions.}\label{fig:covmat_bar3d} \end{figure*} As an alternative to a variation of the chain-reservoir coupling, spatial asymmetry can also be induced by varying on-site frequencies $\omega_n$. Figure~\ref{fig:rect_varomega_diffT} shows $\alpha$ for a system of two coupled anharmonic oscillators with the asymmetry being quantified by $\Delta\omega=\omega_2-\omega_1$ with $\omega_1=1.0$ kept constant, while $\omega_2$ is tuned. Instead of analyzing different values for the anharmonicity parameter as above, we here consider a constant temperature difference at different individual temperatures from the quantum up to the classical regime. One again observes a distinct maximum for $\alpha$, whose value $\alpha_\mathrm{max}$ is depicted in figure~\ref{fig:rect_varomega_diffT} (right) when $T_h$ is varied. The positive $\alpha$ for moderate $\Delta\omega$ can be understood by noting that $\langle\hspace{-2.4pt}\langle q_n^2\rangle\hspace{-2.4pt}\rangle$ is larger if the $n$-th mode is coupled to the hot bath which in turn increases the effective frequency $\tilde{\omega}_n$ if $\kappa>0$. For our setting this means that the coupling of the hot bath for $h\rightarrow c$ to the first mode compensates the frequency increase of the second mode for finite $\Delta\omega$. For $c\leftarrow h$ instead, the coupling of the hot bath to the second mode amplifies the detuning caused by $\Delta\omega$. For the quantum case, this effect is less distinct as ground state fluctuations act also on the mode which is coupled to the cold bath (see figure~\ref{fig:covmat_bar3d}). As already mentioned in the previous section, typical values of the rectification for the models considered here are on the order of 10\% and do not exceed 15\%-20\% in accordance with results reported in \cite{Segal2005}. 
In fact, even for the extreme case of a single two-level system, rectification was found to be on the order of 10\% \cite{Ruokola2009}. It would be interesting to explore whether this can be improved by particular designs of chains. We emphasize that figure~\ref{fig:rect_varomega_diffT} (right) reveals that our approach also provides finite rectification in the high temperature range, where it matches the corresponding classical predictions, a non-trivial test for the consistency of cumulant-type expansions (see \cite{Bricmont2007}). What is more striking though is the behaviour for larger asymmetries: While the classical rectification approaches zero from above, in the low temperature quantum regime we see again a change in sign of $\alpha$ and then a saturation with further increasing $\Delta\omega$. This is a genuine quantum phenomenon as an inspection of the covariance matrix reveals, see figure~\ref{fig:covmat_bar3d}. Classically, the phase space entries for the oscillator coupled to the reservoir at $T_c=0$ are absent, while this is not the case quantum mechanically. In the latter case, ground state fluctuations survive, where those of the oscillator with larger frequency $\tilde{\omega}_2$ exceed those of the softer one. This then gives rise to the observed finite rectification also for large $\Delta\omega$. Why does the rectification saturate in this limit? To understand this we again exploit the effective spectral density specified in (\ref{eq:eff_spect}): For $\omega_2\gg \omega_1$ due to $\tilde{\omega}_2\gg \tilde{\omega}_1$ the first oscillator effectively probes only the ohmic-type low frequency portion of the distribution $J_{\mathrm{eff},r}(\omega\ll \tilde{\omega}_2)\approx \bar{\mu}^2 \, m \omega \gamma_r$ which is independent of $\omega_2$. Heat transfer is thus governed by low frequency quantum fluctuations, a process that may also be interpreted as quantum tunneling through the second oscillator. 
\begin{figure} \begin{center} \includegraphics[width=8.0cm]{rect_varN_quant_Th_1_Tc_0_mu_0p3_series.pdf} \end{center} \caption{Rectification $\alpha$ for a series connection of heat valves terminated by quantum reservoirs. The oscillators with odd index have low frequencies $\omega_\mathrm{low}=1.0$, the oscillators with even index have high frequencies $\omega_\mathrm{high}=\Delta\omega+\omega_\mathrm{low}$. Other parameters are $\kappa=0.1$, $\mu=0.3$, $m=1$, $\hbar=1$, $T_h=1$, $T_c=0.0$, $\gamma_l=\gamma_r=0.1$ and $\omega_c=100$ for both reservoirs. } \label{fig:rect_overomega_series} \end{figure} The detuning of the oscillator's frequency as a resource for rectification suggests an extension to a chain consisting of an even number of oscillators where the frequencies with even index are set to $\omega_\mathrm{high}$ and are varied, while the ones with odd index are kept constant at $\omega_\mathrm{low}=1.0$. Such a setting can be considered as a chain of heat valves, where one element consists of two coupled oscillators with $\omega_\mathrm{low}$ and $\omega_\mathrm{high}$, respectively. Figure~\ref{fig:rect_overomega_series} shows the rectification for such a series connection of heat valves. Apparently, $\alpha < 0$ is only obtained for one single valve with $N=2$, while a series of multiple valves leads to $\alpha > 0$, also for large $\Delta\omega$ and independently of the number of oscillators $N$. To understand the reason for this quite different behaviour, we consider the situation with two valve elements ($N=4$): Then, we have for $h\rightarrow c$ a significant increase of $\tilde{\omega}_1$ (strong fluctuations in position) which supports heat transport through the whole chain since then also $\tilde{\omega}_2$ is affected by the hot bath. Instead, for $c\leftarrow h$ even a small $\Delta\omega$ is sufficient to screen the effect of the hot bath on $\tilde{\omega}_3$ so that this respective oscillator remains in the ground state.
Therefore, the detuning between the oscillators $n=4$ and $n=3$ is larger than for $h\rightarrow c$ which implies $\alpha >0$ for chains with $N\geq 4$. In other words, the length dependence for $N\geq 4$ is determined by the presence of detuned modes in the bulk which are not attached to a reservoir, while the total number of those modes is not important. \section{Rectification in disordered systems} \label{sec:rect_disordered} The rather delicate interplay of nonlinearity and spatial asymmetry on which the rectification is based raises the question of the stability of these effects in the presence of disorder. In the sequel, we restrict ourselves to a randomization of the on-site frequencies $\omega_n$ and obtain the rectification as an average over several thousand realizations, where the heat currents $\llangle j\rrangle_\mathrm{hc}$ and $\llangle j\rrangle_\mathrm{ch}$ are calculated with identical realizations. In the left panel in figure~\ref{fig:random_rect} we present $\langle\alpha\rangle_\mathrm{dis.}$ for varying $\Delta\gamma=\gamma_r-\gamma_l$ and use normally distributed on-site frequencies with standard deviation $\sigma_\omega$ and mean $\langle \omega_n\rangle_\mathrm{dis.}=1.0$, where we reject negative values of $\omega_n$. We emphasize that $ \langle \omega_n\rangle_\mathrm{dis.}$ is an average with respect to random chains and \emph{not} over the individual sites $n$ of one chain. The right panel shows $\langle\alpha\rangle_\mathrm{dis.}$ for varying $\Delta\omega=\langle\omega_N\rangle_\mathrm{dis.}-\langle\opind{\omega}{low}\rangle_\mathrm{dis.}$ with normally distributed on-site frequencies and equal dampings $\gamma_l=\gamma_r=0.1$ on both ends. Obviously, for varying $\Delta\gamma$ (left), the disorder frustrates the zero-crossing of the rectification observed at $\Delta\gamma\sim 2.0$ for the ordered case shown in figure~\ref{fig:rect_overeta_varN}.
The profile of $\langle\alpha\rangle_\mathrm{dis.}$ for stronger disorder even suggests a convergence of $\langle\alpha\rangle_\mathrm{dis.}\to 0$ for large $\Delta\gamma$. For a system of two oscillators, we found that $\alpha<0$ arises from a slightly stronger detuning of the two frequencies for $h\rightarrow c$ than for $c\leftarrow h$. For the disordered case, this small effect seems to be washed out. Concerning the case of a varying $\Delta\omega$ shown in the right panel, it is interesting to observe that here disorder is less effective: one has $\alpha <0$ for large $\Delta \omega$ also for stronger disorder. Since in this regime $\Delta\omega$ is very large, it is clear that the system is more robust against small variations of the $\omega_n$. This also manifests itself in the relatively small error bars for the random $\alpha$. Random couplings $\mu_n$ were also investigated and found to cause behaviour similar to that of random frequencies (not shown). \bigskip \begin{figure*} \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_overeta_kappa_0p10_N_10_dfreq.pdf} \end{minipage} \hfill \begin{minipage}{8.5cm} \includegraphics[width=7.5cm]{rect_overomega_kappa_0p10_N_10_dfreq} \end{minipage} \caption{Averaged rectification $\langle\alpha\rangle_\mathrm{dis.}$ for $3\times 10^3$ samples of random chains given by (\ref{eq:Ham_qu}) for $N=10$. The disorder is induced by normally distributed on-site frequencies with expectation values $\langle\omega_n\rangle_\mathrm{dis.} =1.0$, while the different colors show various standard deviations $\sigma_\omega$. Negative random values of $\omega_n$ are rejected. Other parameters are $\mu=0.3$, $m=1$, $\hbar=1$ and $T_h=1$, $T_c=0.0$, $\omega_c=100$ for both reservoirs, $\kappa=0.1$. The left panel shows a variation of the damping with $\Delta\gamma=\gamma_r-\gamma_l$ where $\gamma_l=0.1$, while the on-site frequencies are homogeneous with $\langle\omega\rangle_\mathrm{dis.}=1.0$.
The right panel shows varying frequencies $\langle\omega_N\rangle_\mathrm{dis.}$ with $\Delta\omega=\langle\omega_N\rangle_\mathrm{dis.}-\langle\opind{\omega}{low}\rangle_\mathrm{dis.}$ and $\langle\opind{\omega}{low}\rangle_\mathrm{dis.}=1.0$ representing all on-site frequencies but the last, while the dampings are equal $\gamma_l=\gamma_r=0.1$. The error bars represent the standard deviation of the distribution of the random $\alpha$.} \label{fig:random_rect} \end{figure*} \section{Summary and Outlook} \label{sec:sum_out} In this paper we developed a framework to describe heat transfer in open quantum systems which also applies to strong thermal contact, allows one to cover the full temperature range and various realizations (weak -- strong coupling, high -- low temperatures), and can be generalized to higher dimensions and other geometries. It thus may serve as a platform to quantify the performance of heat valves, heat engines, or cooling devices as they are currently under study in atomic and mesoscopic physics. More specifically, the heat transfer and rectification across chains of anharmonic oscillators have been explored, where the nonlinearity can be tuned from weak to moderately strong. Tuning the level of symmetry breaking by either changing the asymmetry in the chain-reservoir coupling or in the frequency distribution of the oscillators, we find the rectification coefficient to pass from positive to negative or vice versa by running through an extremum. A deeper analysis reveals a mechanism, where heat is predominantly carried by non-local modes (weak symmetry breaking) or localized modes (strong symmetry breaking) with a smooth turnover between the two scenarios. Similar findings have been reported recently in an experimental realization of a heat valve based on superconducting circuits \cite{Ronzani2018}.
While for symmetry breaking induced by the chain-reservoir coupling this mechanism also applies to the classical regime, a genuine quantum effect is found for symmetry breaking induced by frequency mismatch between adjacent oscillators. In this situation, strong symmetry breaking gives rise to a finite rectification (and thus a finite heat transfer) in contrast to the classical prediction. This finding may have relevance for recent proposals for the fast initialization of quantum bits (cooling) by frequency tuning \cite{Tuorila2017}. \ack Valuable discussions with J. Pekola, M. M\"ott\"onen, and R. Kosloff are gratefully acknowledged. Financial support was provided by the German Science Foundation through grant AN336/11-1, the Land Baden-W\"urttemberg through the LGFG program (M.W.), and the Center for Integrated Quantum Science and Technology (IQST).
\section{Introduction} In 1943, Hadwiger~\cite{Hadwiger} conjectured that every $K_t$-minor-free graph is $(t-1)$-colorable for every $t\ge 1$. In the 1980s, Kostochka~\cite{Kos82,Kos84} and Thomason~\cite{Tho84} independently proved that every graph with no $K_t$ minor has average degree $O(t\sqrt{\log t})$ and hence is $O(t\sqrt{\log t})$-colorable. For a survey on Hadwiger's conjecture see the article by Seymour~\cite{Seymour}; for an overview of more recent progress see Norin and Song~\cite{NS}. Very recently, Norin and Song~\cite{NS} proved the following theorem. \begin{theorem}[Norin and Song]\label{NorinSongHadwiger} Every graph with no $K_t$ minor is $O(t(\log t)^{0.354})$-colorable. \end{theorem} The proof of Norin and Song has two essential parts. The first part shows that every highly connected $K_t$-minor-free graph with many small vertex-disjoint dense subgraphs has a $K_t$ minor. The second part of the argument shows how to construct one such subgraph in a $K_t$-minor-free graph of high density. Finally, the two parts are brought together by showing there are either many vertex-disjoint such subgraphs and hence a $K_t$ minor or the graph is colorable with few colors. We make an improvement on the second part of the argument. Here then is our main result. \begin{theorem}\label{Main} For every $\beta > \frac{1}{4}$, every graph with no $K_t$ minor is $O(t(\log t)^{\beta})$-colorable. \end{theorem} To explain our improvement in their argument, we first need some notation. Let $G$ be a graph. We let $v(G)$ denote the number of vertices of $G$ and $e(G)$ denote the number of edges of $G$. We let $d(G) = e(G)/v(G)$ denote the \emph{density} of $G$. 
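As a quick illustrative aside (not part of the original argument), note that $d(G)$ is half the average degree of $G$; for instance, $d(K_n)=\binom{n}{2}/n=(n-1)/2$. A minimal computational check:

```python
# Illustrative check only: d(G) = e(G)/v(G) is half the average degree,
# so for the complete graph K_n we get d(K_n) = (n-1)/2.
def density(num_vertices, num_edges):
    return num_edges / num_vertices

for n in range(2, 8):
    e_Kn = n * (n - 1) // 2          # number of edges of K_n
    assert density(n, e_Kn) == (n - 1) / 2
```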
A collection $\mathcal{H} = \{H_1,H_2,\ldots,H_h\}$ of pairwise disjoint subsets of $V(G)$ is a \emph{model} of a graph $H$ in a graph $G$ if $G[H_i]$ is connected for every $i \in [h]$, and there exists a bijection $\phi : V(H) \rightarrow [h]$, such that $G[H_{\phi(u)}]$ and $G[H_{\phi(v)}]$ are adjacent for every $uv \in E(H)$. An easy observation is that a graph $G$ has an $H$ minor if and only if there exists a model of $H$ in $G$. Following Norin and Song, we say a graph $H$ is a \emph{$k$-bounded minor} of a graph $G$ if there exists a model $\tau$ of $H$ in $G$ such that $|T|\le k$ for every $T\in \tau$. Specifically, Norin and Song proved the following theorem. \begin{theorem}[Norin and Song, Theorem 4.1 in~\cite{NS}]\label{DenseSubgraph} Let $0 < \varepsilon < 1$, $K > 1$ be real. Let $G$ be a graph with $d=d(G) \ge 2/\varepsilon$. Then $G$ contains at least one of the following: \begin{enumerate} \item[(i)] a subgraph $H$ of $G$ with $v(H)\le 4Kd$ and $e(H)\ge \varepsilon^2d^2/2$, or \item[(ii)] a $2$-bounded minor $H$ with $d(H) \ge \frac{3}{2} \frac{K(1-4\varepsilon)}{K+3}d$, or \item[(iii)] a $3$-bounded minor $H$ with $d(H) \ge 2 \frac{K(1-10\varepsilon)}{K+4}d$. \end{enumerate} \end{theorem} We say a pair of real numbers $(n,d)$ is \emph{$(D,t)$-forced} if every graph $G$ with $d(G)\ge D$ and no $K_t$ minor has a subgraph $H$ with $v(H)\le n$ and $d(H)\ge d$. Theorem~\ref{DenseSubgraph} implies the following corollary. (Here and throughout this paper all logarithms have base 2). \begin{corollary}[Norin and Song, Corollary 4.3 in~\cite{NS}]\label{Forced} For $0 < \varepsilon < 1/30$, let $$\lambda = \max \left\{ \frac{\log 2}{\log (3(1-7\varepsilon)/2)}, \frac{\log 3}{\log (2(1-14\varepsilon))}\right\}.$$ Let $t$ be a positive integer and let $D=D(t)$ be such that every graph with $d(G)\ge D$ has a $K_t$ minor. Then $(4r^\lambda D/\varepsilon, \varepsilon^3r^{-\lambda}D/8)$ is $(D/r, t)$-forced for every $1\le r \le \varepsilon D/2$. 
\end{corollary} Here we make an improvement on Theorem~\ref{DenseSubgraph} as follows. \begin{theorem}\label{DenseSubgraph2} Let $k \ge \ell \ge 2$. Let $\varepsilon \in \left(0,\frac{1}{16k^2}\right]$. Let $G$ be a graph with $d=d(G) \ge 2/\varepsilon$. Then $G$ contains at least one of the following: \begin{enumerate} \item[(i)] a subgraph $H$ of $G$ with $v(H)\le 6k^3d$ and $e(H)\ge \varepsilon^2d^2/2$, or \item[(ii)] an $(\ell+1)$-bounded minor $H$ with $d(H) \ge \ell \cdot (1-14k^2\varepsilon) \cdot d$, or \item[(iii)] a $k$-bounded minor $H$ with $d(H) \ge \frac{k}{8\ell} \cdot (1-2k\varepsilon) \cdot d$. \end{enumerate} \end{theorem} Then we can prove an improved version of Corollary~\ref{Forced} with $\lambda = \frac{1}{1 - \alpha}$ for any small enough $\alpha$. To do this, we apply Theorem~\ref{DenseSubgraph2} with \begin{itemize} \item $\ell = 2^{2/\alpha}-1$, and \item $k = 2^{4/\alpha^2}$, and \item $\varepsilon = \frac{1}{28k^2}$. \end{itemize} \noindent We now state our improved version of Corollary~\ref{Forced} as follows. \begin{corollary}\label{Forced2} For $\alpha \in (0,1/2]$ such that $1/\alpha$ is an integer, let $\varepsilon = \frac{1}{28\cdot 2^{4/\alpha^2}}$, and let $$\lambda = \frac{1}{1-\alpha}.$$ Let $t$ be a positive integer and let $D=D(t)$ be such that every graph with $d(G)\ge D$ has a $K_t$ minor. Then $(2^{16/\alpha^2}r^\lambda D, 2^{-16/\alpha^2} r^{-\lambda}D)$ is $(D/r, t)$-forced for every $1\le r \le \varepsilon D/2$. \end{corollary} Norin and Song~\cite{NS} noted in Section 5 of their paper that if a version of Corollary~\ref{Forced} could be proved with $\lim_{\varepsilon\rightarrow 0} \lambda(\varepsilon)=1$, then Theorem~\ref{Main} would follow. Note that the equation for $\varepsilon$ when solved for $\alpha$ gives $$\alpha = \sqrt{\frac{10}{\log (\frac{1}{28\varepsilon})} },$$ \noindent which goes to $0$ as $\varepsilon$ goes to $0$. Hence $\lim_{\varepsilon\rightarrow 0} \lambda(\varepsilon)=1$.
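As a quick numerical sanity check (illustrative only, and using the displayed relation between $\alpha$ and $\varepsilon$ as stated), one can verify that $\lambda(\varepsilon)=1/(1-\alpha(\varepsilon))$ indeed decreases towards $1$ as $\varepsilon\to 0$:

```python
import math

# Illustrative check (not part of the proof): with the displayed relation
# alpha = sqrt(10 / log2(1/(28*eps))), lambda = 1/(1 - alpha) decreases
# towards 1 as eps -> 0.
def lam(eps):
    alpha = math.sqrt(10 / math.log2(1 / (28 * eps)))
    return 1 / (1 - alpha)

values = [lam(10.0 ** (-p)) for p in (6, 12, 24, 48)]
assert values == sorted(values, reverse=True)   # monotonically decreasing
assert values[-1] < 1.4                         # already close to 1
```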
Thus Corollary~\ref{Forced2} confirms that this is indeed possible. \begin{proof}[Proof of Theorem~\ref{Main}] The proof follows identically to the proof of Theorem~\ref{NorinSongHadwiger} in~\cite{NS} with Corollary~\ref{Forced2} used in place of Corollary~\ref{Forced}. \end{proof} \subsection{Outline of Paper} In Section~\ref{Defs}, we review some preliminary definitions. In Section~\ref{Bip}, we prove that a very unbalanced bipartite graph of high minimum degree has either a small, dense subgraph or an $(\ell+1)$-bounded minor with density almost $\ell$ times the original. In Section~\ref{Shrub}, we prove that a graph of high density has either a small, dense subgraph, or a very unbalanced bipartite graph of high density, or a $k$-bounded minor with density almost $k/\ell$ times the original. In Section~\ref{All}, we combine these results to prove Theorem~\ref{DenseSubgraph2}. We then choose $k$ and $\ell$ (as well as $K$ and $\varepsilon$) appropriately to prove Corollary~\ref{Forced2}. \section{Preliminaries}\label{Defs} \subsection{Small and Mates} \begin{definition} Let $G$ be a graph and $d\ge 1$. \begin{itemize} \item If $K\ge 1$ is real, then we say a vertex $v$ of $G$ is \emph{$(K,d)$-small} in $G$ if ${\rm deg}_G(v) \le Kd$ and \emph{$(K,d)$-big} otherwise. \item If $\varepsilon \in (0,1)$, then we say two vertices of $G$ are \emph{$(\varepsilon,d)$-mates} if they have at least $\varepsilon d$ common neighbors. \end{itemize} \end{definition} \begin{definition} Let $G$ be a graph and $d\ge 1$. Let $K\ge 1$ and $\varepsilon \in (0,1)$. We say $G$ is \emph{$(K,\varepsilon, d)$-unmated} if for every $(K,d)$-small vertex $v$, there exist strictly less than $\varepsilon d$ vertices in $G$ that are $(\varepsilon,d)$-mates of $v$. \end{definition} Here is a useful proposition. \begin{proposition}\label{SmallDense} Let $G$ be a graph and $d\ge 1$.
For every $K\ge 1$ and $\varepsilon\in (0,1)$, at least one of the following holds: \begin{enumerate} \item[(i)] there exists a subgraph $H$ of $G$ with $v(H) \le 3Kd$ and $e(H)\ge \varepsilon^2 d^2/2$, or \item[(ii)] $G$ is $(K,\varepsilon,d)$-unmated. \end{enumerate} \end{proposition} \begin{proof} Suppose not. Since (ii) does not hold, there exists a $(K,d)$-small vertex $v$ with at least $\varepsilon d$ vertices that are $(\varepsilon,d)$-mates of $v$. Let $v_1, \ldots, v_{\lceil \varepsilon d \rceil}$ be distinct $(\varepsilon,d)$-mates of $v$. Let $H = G[\{v\}\cup N(v) \cup \{v_1,\ldots, v_{\lceil \varepsilon d \rceil} \}]$. Now $v(H) \le 1 + Kd + \lceil \varepsilon d \rceil \le 3Kd$, and $e(H)\ge \varepsilon^2 d^2/2$ since each $v_i$ has at least $\varepsilon d$ edges to $N(v)$ and each such edge is counted at most twice. Thus (i) holds, a contradiction. \end{proof} \begin{corollary}\label{SmallDenseBounded} Let $d,k\ge 1$. Let $G'$ be a $k$-bounded minor of a graph $G$. For every $K\ge 1$ and $\varepsilon\in (0,1)$, at least one of the following holds: \begin{enumerate} \item[(i)] there exists a subgraph $H$ of $G$ with $v(H) \le 3kKd$ and $e(H)\ge \varepsilon^2 d^2/2$, or \item[(ii)] $G'$ is $(K,\varepsilon,d)$-unmated. \end{enumerate} \end{corollary} \begin{proof} Apply Proposition~\ref{SmallDense} to $G'$. If Proposition~\ref{SmallDense}(ii) holds, then (ii) holds as desired. So we may assume that Proposition~\ref{SmallDense}(i) holds. That is, there exists a subgraph $H'$ of $G'$ with $v(H') \le 3Kd$ and $e(H')\ge \varepsilon^2 d^2/2$. But then there exists a corresponding subgraph $H$ of $G$ with $v(H) \le 3kKd$ and $e(H)\ge \varepsilon^2 d^2/2$. \end{proof} \subsection{Forests and Shrubberies} \begin{definition} Let $G$ be a graph and $d\ge 1$. Let $F$ be a non-empty forest of $G$. Let $K\ge 1$ be real and let $\varepsilon, c \in (0,1)$.
We say $F$ is \begin{itemize} \item \emph{$(K,d)$-small} if every vertex in $V(F)$ is $(K,d)$-small in $G[V(F)]$, \item \emph{$(\varepsilon,d)$-mate-free} if there does not exist a component $T$ of $F$ and $u\ne v\in V(T)$ such that $u$ and $v$ are $(\varepsilon,d)$-mates in $G$, \item \emph{$(c,d)$-clean} if $e(G) - e(G/F) \le c\cdot d \cdot v(F)$. \end{itemize} \end{definition} \begin{definition} Let $G$ be a graph. We say a non-empty forest $F$ of $G$ is \begin{itemize} \item \emph{$k$-bounded} if $v(T)\le k$ for every component $T$ of $F$, \item a \emph{$k$-shrubbery} if $k/2 < v(T) \le k$ for every component $T$ of $F$. \end{itemize} \end{definition} Let $\ell\ge 1$ be an integer. An \emph{$\ell$-star} is a star with $\ell$ leaves. An \emph{$\ell^-$-star} is a star with at least one but at most $\ell$ leaves. \begin{definition} Let $G$ be a graph and let $(A,B)$ be a partition of $V(G)$. Let $\ell\ge 1$ be an integer. We say a forest $F$ is \begin{itemize} \item an \emph{$\ell^-$-star-matching from $B$ to $A$} if every component $T$ of $F$ is an $\ell^-$-star whose center is in $B$ and whose leaves are in $A$, \item an \emph{$\ell^-$-claw-matching from $B$ to $A$} if $F$ is an $\ell^-$-star-matching and every component $T$ of $F$ is an induced subgraph of $G$. \end{itemize} Similarly we define an \emph{$\ell$-star-matching} and an \emph{$\ell$-claw-matching} from $B$ to $A$ as above if every component of $F$ is an $\ell$-star instead of an $\ell^-$-star. \end{definition} Here are two simple but useful propositions whose proofs we omit. \begin{proposition}\label{Subforest} Let $G$ be a graph. If $F$ is a forest of $G$ and $F'$ is a proper subgraph of $F$ such that $e(F') < e(F)$, then $$e(G)-e(G/F') \le e(G) - e(G/F) - 1.$$ \end{proposition} \begin{proposition}\label{ContractingNeighbors} Let $G$ be a graph.
If $uv\in E(G)$, then $$e(G)-e(G/uv) = 1 + |N(u)\cap N(v)|.$$ \end{proposition} One final bit of notation: if $G$ is a graph and $A,B$ are disjoint subsets of $V(G)$, then we let $G(A,B)$ denote the subgraph with $V(G(A,B))=A\cup B$ and $E(G(A,B)) = \{uv\in E(G): u\in A, v\in B\}$. \section{Bipartite Subgraph Lemma}\label{Bip} In this section, we prove Theorem~\ref{kclawDense} which says that a bipartite graph $G=(A,B)$ that is very unbalanced (i.e. $|A|\ge \ell |B|$) and dense (i.e. every vertex in $A$ has at least $d$ neighbors in $B$) either has a small dense subgraph or an $(\ell+1)$-bounded minor with density roughly $\ell d$. For the latter outcome, we in fact find an $\ell$-claw-matching $F$ from $B$ to $A$ in $G$ such that every leaf in $F$ (a vertex in $V(F)\cap A$) has most of its neighbors in $V(F)\cap B$ (the centers of $F$). Furthermore, we find such an $F$ that is mate-free and clean. Before proceeding, we informally discuss how one could even find such an $F$ in the first place (without worrying about it being mate-free and clean). In fact, $G$ contains an $\ell$-claw-matching $F$ from $B$ to $A$ such that for every vertex $v\in V(F)\cap A$, $N(v)\subseteq V(F)\cap B$. To see this, construct an auxiliary graph $G'$ by replacing every vertex in $B$ with $\ell$ copies of itself. Take a maximum matching $M$ of $G'$ and a minimum vertex cover $C$ of $G'$ such that $C\cap B$ is minimized. If $M$ is a perfect matching, then $M$ induces an $\ell$-claw-matching $F$ in $G$ as desired. Otherwise, we take $M'$ to be the set of edges in $M$ incident with a vertex in $B\cap C$. It follows from K{\"o}nig's theorem that $M'$ induces an $\ell$-claw-matching $F$ in $G$ as desired. Now such a forest $F$ has the potential to become an $(\ell+1)$-bounded minor with density roughly $\ell d$ if it was mate-free and clean. In order to make $F$ mate-free, we instead use an alternating paths argument to build $F$ while keeping the forest mate-free.
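The duplication-and-matching step sketched above can be illustrated computationally. The following toy sketch (all names hypothetical; it implements only the duplication of $B$ and a maximum matching via augmenting paths, not the vertex-cover refinement, the induced-star requirement, or the mate-free and clean conditions) groups a matching of $A$ into $\ell$ copies of each vertex of $B$ into stars centered in $B$:

```python
# Toy illustration (not part of the proof): duplicate each vertex of B into
# l copies, compute a maximum matching by Kuhn's augmenting-path algorithm,
# and read off an l^- -star-matching from B to A.

def max_bipartite_matching(left, adj):
    """Return a dict mapping each matched left vertex to its right vertex."""
    match_right = {}  # right vertex -> left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    for u in left:
        try_augment(u, set())
    return {u: v for v, u in match_right.items()}

def star_matching(A, adj, l):
    """Match A into l copies of each b in B; group the matches into stars."""
    copies = {a: [(b, i) for b in adj[a] for i in range(l)] for a in A}
    matching = max_bipartite_matching(A, copies)
    stars = {}  # center b -> list of leaves in A
    for a, (b, _) in matching.items():
        stars.setdefault(b, []).append(a)
    return stars

# Example with |A| = l*|B| and every a adjacent to every b.
l = 2
A = ['a1', 'a2', 'a3', 'a4']
B = ['b1', 'b2']
adj = {a: B for a in A}
stars = star_matching(A, adj, l)
assert sorted(x for leaves in stars.values() for x in leaves) == sorted(A)
assert all(len(leaves) <= l for leaves in stars.values())
```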
Unfortunately, we no longer have the property that $N(v)\subseteq V(F)\cap B$ for every $v\in V(F)\cap A$; rather $v$ could have a small set of neighbors outside $V(F)\cap B$ (namely to trees in $F$ containing mates of $v$). Still, the $(\ell+1)$-bounded minor will have density roughly $\ell d$ as needed. Now we will prove how to construct such an $F$ that is mate-free. First though we prove the following more general lemma where instead of avoiding mates, we have edges inside $A$ and we seek an $\ell$-claw-matching in this non-bipartite graph. We then apply this lemma in Lemma~\ref{kclawBipartite} where the extra edges are between mates. \begin{lemma}\label{kclawNonBipartite} Let $\ell\ge 1$ and $d_A, d_B\ge 1$ be integers with $d_B > \ell\cdot d_A$. Suppose that $G$ is a graph and $(A,B)$ is a partition of $V(G)$ such that $|A| \ge \ell |B|$. \\ If every vertex in $A$ has at least $d_B$ neighbors in $B$ and at most $d_A$ neighbors in $A$, then $G$ contains an $\ell$-claw-matching $F$ from $B$ to $A$ such that every vertex in $V(F)\cap A$ has at most $d_A$ neighbors in $B\setminus V(F)$. \end{lemma} \begin{proof} Let $F_0$ be an $\ell^-$-claw-matching from $B$ to $A$ such that $|V(F_0)\cap A|$ is maximized. First suppose that $V(F_0)\cap A = A$. Note that $|V(F_0)\cap B|\ge |V(F_0)\cap A|/\ell$ and hence $V(F_0)\cap B=B$. But then $V(F_0)=V(G)$. Now $F=F_0$ is as desired. So we may assume that $A\setminus V(F_0)\ne \emptyset$. Let $u\in A\setminus V(F_0)$. For $v\ne u\in V(G)$, we say a path $P$ from $u$ to $v$ is a \emph{($u,v$)-$F_0$-alternating path} if \begin{itemize} \item $P$ is a path in $G(A,B)$, and \item every internal vertex of $P$ has degree exactly one in $F_0$ (i.e. informally every other edge of $P$ is in $F_0$), and \item there does not exist a triangle of $G$ containing both an edge of $F_0$ and an edge of $P-E(F_0)$. \end{itemize} Let $B_u$ be the set of all vertices $v \in B$ such that there exists a $(u,v)$-$F_0$-alternating path.
\begin{claim}\label{SizeEll} If $v\in B_u$, then $v\in V(F_0)$ and the component of $F_0$ containing $v$ has exactly $\ell$ leaves. \end{claim} \begin{proof} Suppose not. Since $v\in B_u$, there exists a $(u,v)$-$F_0$-alternating path $P$. Let $F_0' = F_0 \triangle P$. Since $P$ is a path in $G(A,B)$, we have that $E(F_0')\subseteq E(G(A,B))$. Since $v\in B$, every vertex $w\in A\cap V(P)\setminus \{u\}$ has degree exactly two in $P$. Moreover, every vertex in $V(P)\setminus \{u,v\}$ is in $F_0$. Since every vertex in $A$ has degree at most one in $F_0$, it now follows that every vertex in $A$ has degree at most one in $F_0'$ (since $u$ is not in $F_0$). It follows that every vertex in $G-\{u,v\}$ has the same degree in $F_0'$ as in $F_0$. Moreover, the degree of $v$ in $F_0'$ is one more than in $F_0$. Since either $v\notin V(F_0)$ or the component of $F_0$ containing $v$ has strictly less than $\ell$ leaves, we have that $v$ has degree at most $\ell-1$ in $F_0$. But then $v$ has degree at most $\ell$ in $F_0'$. Thus $F_0'$ is an $\ell^-$-star-matching. Note that $|V(F_0')\cap A| = |V(F_0)\cap A|+1$. Hence by the choice of $F_0$, we find that $F_0'$ is not an $\ell^-$-claw-matching. That is, there exists a component $T$ of $F_0'$ that is not induced in $G$, that is, there exists an edge $e=xy \in G[V(T)]\setminus E(T)$. Since $T$ is an $\ell^-$-star, it follows that $x,y \in A$. Let $z$ be the center of $T$. Since $F_0$ is an $\ell^-$-claw-matching, it follows that $x$ and $y$ are in different components of $F_0$. Thus at least one of $xz,yz$ is in $E(P)$. We may assume without loss of generality that $xz\in E(P)$ and hence $xz\notin E(F_0)$. Since every internal vertex of $P$ has degree exactly one in $F_0$, it follows that $yz\notin E(P)$. Hence $yz\in E(F_0)$. But now $xyz$ is a triangle containing both an edge of $F_0$ (namely $yz$) and an edge of $P-E(F_0)$ (namely $xz$), contradicting that $P$ is a $(u,v)$-$F_0$-alternating path.
\end{proof} Let $F$ be the subgraph of $F_0$ consisting of components of $F_0$ containing vertices in $B_u$. By Claim~\ref{SizeEll}, we have that $F$ is an $\ell$-claw-matching from $B$ to $A$. \begin{claim}\label{BNeighbors} If $v\in V(F)\cap A$ and $x$ is a neighbor of $v$ in $B\setminus V(F)$, then $x$ is the center of a star in $F_0\setminus F$ that contains a neighbor of $v$ in $A$. \end{claim} \begin{proof} Let $w$ be such that $vw\in E(F)$. Note that $w\in B_u$. By definition of $B_u$, there exists a $(u,w)$-$F_0$-alternating path $P$. If $v\in V(P)$, let $P'=P-w$. Otherwise, let $P'=P+v$. Now $P'$ is a $(u,v)$-$F_0$-alternating path. It follows that $P''=P'+x$ is not a $(u,x)$-$F_0$-alternating path (otherwise $x\in B_u$ and hence $x\in V(F)$, a contradiction). Since $P''$ is a path in $G(A,B)$ from $u$ to $x$ such that every other edge is in $F_0$, it follows that there exists a triangle $T=y_1y_2y_3$ of $G$ containing an edge $y_1y_2$ of $F_0$ and an edge $y_2y_3$ of $P''-E(F_0)$. It follows that $y_2\in B$ and $y_1,y_3\in A$. Since $P'$ is a $(u,v)$-$F_0$-alternating path, it follows that $x\in V(T)$. Thus $y_2=x$. But then $y_3=v$, $xy_1\in E(F_0)$ and $vy_1\in E(G)$. Since $x\notin V(F)$, we find that $x$ is the center of a star in $F_0\setminus F$ that contains a neighbor of $v$ in $A$, as desired. \end{proof} By Claim~\ref{BNeighbors}, every vertex in $V(F)\cap A$ has at most $d_A$ neighbors in $B\setminus V(F)$. So $F$ is as desired. \end{proof} We now apply Lemma~\ref{kclawNonBipartite} to obtain a mate-free $\ell$-claw-matching assuming that the graph itself is unmated as follows. \begin{lemma}\label{kclawBipartite} Let $K\ge \ell\ge 1$, $\varepsilon_0 \in (0,1/\ell)$, and $d_0\ge 1$ be constants. Let $G=(A,B)$ be a bipartite graph such that $|A| \ge \ell |B|$ and every vertex in $A$ has at least $d_0$ neighbors in $B$. If $G$ is $(K,\varepsilon_0, d_0)$-unmated, then $G$ contains an $(\varepsilon_0,d_0)$-mate-free $\ell$-claw-matching $F$ from $B$ to $A$ such that every vertex in $V(F)\cap A$ has at most $\varepsilon_0 d_0$ neighbors in $B\setminus V(F)$.
\end{lemma} \begin{proof} We may assume without loss of generality that every vertex in $A$ has exactly $d_0$ neighbors in $B$. Let $G'=G(A,B)\cup \{uv: u,v\in A, u,v$ are $(\varepsilon_0,d_0)$-mates in $G\}$. Note that there does not exist $uv\in E(G)$ such that $u$ and $v$ are $(\varepsilon_0,d_0)$-mates since $G$ is bipartite. Since $K\ge 1$, $G$ is bipartite and every vertex in $A$ has exactly $d_0$ neighbors in $B$, we have that every vertex of $A$ is $(K,d_0)$-small in $G$. Since $G$ is $(K,\varepsilon_0,d_0)$-unmated, by definition every $(K,d_0)$-small vertex has at most $\varepsilon_0 d_0$ vertices that are $(\varepsilon_0,d_0)$-mates of it. Hence in $G'$, every vertex of $A$ has at least $d_B = d_0$ neighbors in $B$ and at most $d_A = \varepsilon_0 d_0$ neighbors in $A$. By Lemma~\ref{kclawNonBipartite}, $G'$ contains an $\ell$-claw-matching $F$ from $B$ to $A$ such that every vertex in $V(F)\cap A$ has at most $d_A$ neighbors in $B\setminus V(F)$. Let $M=G(V(F)\cap A, V(F)\cap B)$. Now every vertex in $V(M)\cap A$ has degree at least $d_0(1-\varepsilon_0)$ in $M$. \begin{claim}\label{MateFree} Every component $T$ of $F$ is $(\varepsilon_0,d_0)$-mate-free. \end{claim} \begin{proof} Let $x\ne y \in V(T)$. We may assume without loss of generality that $x\in A$. If $y\in B$, then $xy\in E(G')$ and hence $xy\in E(G)$, so $x$ and $y$ are not $(\varepsilon_0,d_0)$-mates in $G$ as claimed. So we may assume that $y\in A$. Since $T$ is a claw in $G'$, we have that $xy\notin E(G')$. But then $x$ and $y$ are not $(\varepsilon_0,d_0)$-mates in $G$ as claimed. \end{proof} It follows from Claim~\ref{MateFree} that $F$ is $(\varepsilon_0,d_0)$-mate-free and hence $F$ is as desired. \end{proof} Next we may clean such an $\ell$-claw-matching $F$. To do this, we have to remove components whose centers are big in $G[V(F)]$ and then switch edges as necessary. \begin{lemma}\label{kclawBipartite2} Let $K\ge \ell\ge 1$.
Let $\varepsilon_1 \in (0,1/\ell)$ and let $d_1\ge 1 / \varepsilon_1$ be an integer. Let $G=(A,B)$ be a bipartite graph such that $|A| = \ell |B|$ and every vertex in $A$ has exactly $d_1$ neighbors in $B$. If $G$ is $(K,\varepsilon_1, d_1)$-unmated and there exists an $(\varepsilon_1,d_1)$-mate-free $\ell$-claw-matching $F_1$ from $B$ to $A$ such that $V(F_1)=V(G)$, then there exists at least one of the following: \begin{enumerate} \item[(i)] a subgraph $H$ of $G$ with $v(H) \le 6\ell^2 Kd_1$ and $e(H)\ge \varepsilon_1^2 d_1^2/2$, or \item[(ii)] a $(K,d_1)$-small $(\varepsilon_1,d_1)$-mate-free $(\ell^2\varepsilon_1,d_1)$-clean $\ell$-claw-matching $F$ from $B$ to $A$ in $G$ such that $v(F) \ge v(G) \left(1-\frac{1}{K}\cdot \frac{\ell}{\ell+1}\right)$. \end{enumerate} \end{lemma} \begin{proof} Suppose not. Let $F_2$ be the subgraph of $F_1$ consisting of components of $F_1$ that contain only $(K,d_1)$-small vertices of $G$. Note that every vertex in $A$ is $(K,d_1)$-small since $K\ge 1$. Note that $e(G) = d_1|A| = d_1\ell |B|$. Hence the number of $(K,d_1)$-big vertices in $G$ is at most $\frac{\ell}{K}|B| \le \frac{v(G)}{K} \cdot \ell/ (\ell+1)$. Hence $v(F_2) \ge v(G)\left(1 - \frac{1}{K} \cdot \frac{\ell}{\ell+1}\right)$. If $F$ is an $\ell$-claw-matching from $B$ to $A$, then a \emph{bad pair} of $F$ is a pair of edges $e_1,e_2\in E(F)$ in distinct components $T_1,T_2$ of $F$ such that $e_1,e_2$ are in a $4$-cycle $C$ in $G$ and $E(G(V(T_1),V(T_2))) = E(C)\setminus \{e_1,e_2\}$ (that is, the only edges between $T_1$ and $T_2$ are edges in $C$). Now let $F$ be an $(\varepsilon_1,d_1)$-mate-free $\ell$-claw-matching from $B$ to $A$ such that $V(F)=V(F_2)$ and, subject to those conditions, such that the number of bad pairs of $F$ is minimized. \begin{claim}\label{NotTooBad} Every edge $e\in E(F)$ is in at most $\ell(\varepsilon_1 d_1+1)$ bad pairs of $F$. \end{claim} \begin{proof} Suppose not. Thus $e$ is in strictly greater than $\ell(\varepsilon_1d_1+1)$ bad pairs of $F$.
Let $r=\lfloor \varepsilon_1d_1 \rfloor$. Now there exist $e_1, e_2, \ldots, e_{r+2} \in E(F)$ in distinct components of $F$ such that for every $1\le i \le r+2$, $e_i$ is non-incident with $e$ and $e_i, e$ are in a $4$-cycle $C_i$. Let $G'$ be obtained from $G$ by contracting the edges of $F$. Note that $G'$ is an $\ell$-bounded minor of $G$. Apply Corollary~\ref{SmallDenseBounded} to $G'$ and $G$ with $d_1, \varepsilon_1$ and $K(\ell+1)$. First suppose that Corollary~\ref{SmallDenseBounded}(i) holds. That is, there exists a subgraph $H$ of $G$ with $v(H) \le 3\ell (\ell+1)Kd_1$ and $e(H)\ge \varepsilon_1^2 d_1^2/2$. Since $\ell \ge 1$, we have that (i) holds, a contradiction. So we may assume that Corollary~\ref{SmallDenseBounded}(ii) holds. That is, $G'$ is $(K(\ell+1),\varepsilon_1,d_1)$-unmated. Since every vertex in $V(F)$ is $(K,d_1)$-small in $G$, we have that every vertex of $G'$ corresponding to a component of $F$ is $(K(\ell+1),d_1)$-small in $G'$. Hence each such vertex has at most $\varepsilon_1d_1$ vertices in $G'$ that are $(\varepsilon_1,d_1)$-mates in $G'$. Let $T$ be the component of $F$ containing $e$. Thus the vertex $v_T$ corresponding to $T$ in $G'$ has at most $r$ vertices in $G'$ that are $(\varepsilon_1,d_1)$-mates in $G'$. So we may assume without loss of generality that $e_1$ is in a component of $F$ that does not correspond to an $(\varepsilon_1,d_1)$-mate of $v_T$ in $G'$. Let $F' = F\triangle C_1$. Now $F'$ is an $\ell$-claw-matching from $B$ to $A$ such that $V(F')=V(F)=V(F_2)$. Let $\{f_1,f_2\} = E(C_1)\setminus \{e,e_1\}$. Since $F$ minimized the number of bad pairs, it follows that the sum of the number of bad pairs in $F'$ containing $f_1$ or $f_2$ other than the pair $f_1,f_2$ is at least $r+1$. But then the vertex $v_{T_1}$ corresponding to the component $T_1$ of $F$ containing $e_1$ is an $(\varepsilon_1,d_1)$-mate of $v_T$ in $G'$, a contradiction.
\end{proof} By Claim~\ref{NotTooBad} and since $F$ is $(\varepsilon_1,d_1)$-mate-free, it follows that $$e(G)-e(G/F) \le \binom{\ell}{2} \varepsilon_1 d_1 v(F) + \frac{1}{2} \ell(\varepsilon_1 d_1 + 1) v(F) \le \ell^2\varepsilon_1 d_1 v(F),$$ \noindent since $\ell \ge 1$ and $1\le \varepsilon_1 d_1$. Thus $F$ is $(\ell^2\varepsilon_1,d_1)$-clean and (ii) holds, a contradiction. \end{proof} Altogether, we get the following theorem. \begin{theorem}\label{kclawDense} Let $K\ge \ell\ge 1$, $\varepsilon_0 \in (0,1/\ell)$ and $d_0 \ge 1/\varepsilon_0$ be constants. Let $G=(A,B)$ be a bipartite graph such that $|A| \ge \ell |B|$ and every vertex in $A$ has at least $d_0$ neighbors in $B$. Then $G$ contains at least one of the following: \begin{enumerate} \item[(i)] a subgraph $H$ of $G$ with $v(H) \le 6\ell^2 Kd_0$ and $e(H)\ge \varepsilon_0^2 d_0^2/2$. \item[(ii)] an $(\ell+1)$-bounded minor $H$ with $d(H) \ge \frac{\ell}{2} \cdot (1-3\ell^3\varepsilon_0) \cdot d_0$. \end{enumerate} \end{theorem} \begin{proof} First suppose that $G$ is not $(K,\varepsilon_0,d_0)$-unmated. Thus Proposition~\ref{SmallDense}(ii) does not hold for $G$. Hence by Proposition~\ref{SmallDense}, we have that Proposition~\ref{SmallDense}(i) holds. That is, there exists a subgraph $H$ of $G$ with $v(H) \le 3Kd_0$ and $e(H)\ge \varepsilon_0^2 d_0^2/2$. Hence (i) holds as desired. So we may assume that $G$ is $(K,\varepsilon_0,d_0)$-unmated. By Lemma~\ref{kclawBipartite}, $G$ contains an $(\varepsilon_0,d_0)$-mate-free $\ell$-claw-matching $F_1$ from $B$ to $A$ such that every vertex in $V(F_1)\cap A$ has at most $\varepsilon_0 d_0$ neighbors in $B\setminus V(F_1)$. Let $d_1 = d_0(1-\varepsilon_0)$ and $\varepsilon_1 = \varepsilon_0 d_0 / d_1 = \frac{\varepsilon_0}{1-\varepsilon_0}$. Let $G'=G[V(F_1)]$. Since $G$ is $(K,\varepsilon_0,d_0)$-unmated, we have that $G'$ is $(K,\varepsilon_1,d_1)$-unmated.
Furthermore, $F_1$ is an $(\varepsilon_1,d_1)$-mate-free $\ell$-claw-matching from $B$ to $A$ such that $V(F_1)=V(G')$. Hence by Lemma~\ref{kclawBipartite2} applied to $G'$, we find that there exists a $(K,d_1)$-small $(\varepsilon_1,d_1)$-mate-free $(\ell^2\varepsilon_1,d_1)$-clean $\ell$-claw-matching $F$ from $B$ to $A$ in $G'$ such that $v(F) \ge v(G') \left(1-\frac{1}{K}\cdot\frac{\ell}{\ell+1}\right)$. Let $H= G'/F$. Now $v(H) \le (\frac{1}{\ell+1} + \frac{1}{K}\cdot\frac{\ell}{\ell+1}) v(G')$. Since $K\ge \ell$, we find that $v(H) \le \frac{2}{\ell+1} v(G')$. Since $F$ is $(\ell^2\varepsilon_1,d_1)$-clean it follows that \begin{align*} e(H)&\ge e(G') - \ell^2\varepsilon_1 d_1 \cdot v(F) \\ &\ge d_1\frac{\ell}{\ell+1}v(G') - \ell^2\varepsilon_1d_1 \cdot v(G') \\ &= \left( d_0(1-\varepsilon_0)\frac{\ell}{\ell+1} -\ell^2\varepsilon_0 d_0\right)v(G') \\ &\ge \frac{\ell}{\ell+1} \cdot (1-3\ell^3 \varepsilon_0) d_0 \cdot v(G'), \end{align*} \noindent where we used that $\ell + \ell^2 + \ell^3 \le 3\ell^3$ since $\ell\ge 1$. Thus $$d(H) = \frac{e(H)}{v(H)} \ge \frac{\ell}{2} \cdot (1-3\ell^3\varepsilon_0) d_0,$$ and (ii) holds as desired. \end{proof} \section{Finding a Bountiful Shrubbery}\label{Shrub} In this section, we prove Theorem~\ref{DenseSubgraph3} which roughly says that every graph of density $d$ contains: a small, dense subgraph; or a very unbalanced bipartite $H=(X,Y)$ (i.e. $|X|\ge \ell |Y|$) with density almost $d$; or a clean $k$-shrubbery containing most of the vertices (all but $3v(G)/\ell$ vertices). The last outcome leads directly to a $k$-bounded minor of density roughly $kd/\ell$. The second outcome will lead by Theorem~\ref{kclawDense} to an $(\ell+1)$-bounded minor with density roughly $\ell d$. To prove Theorem~\ref{DenseSubgraph3}, we will build up a $(K,d)$-small, $(\varepsilon,d)$-mate-free, $(c,d)$-clean $k$-shrubbery (or rather take a maximum such shrubbery) $F$. First, we need the following definition and proposition.
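The density bookkeeping at the end of the proof of Theorem~\ref{kclawDense} can be sanity-checked numerically. The following sketch (the sampled parameter values are mine, not from the paper) verifies the inequality $d_0(1-\varepsilon_0)\frac{\ell}{\ell+1} - \ell^2\varepsilon_0 d_0 \ge \frac{\ell}{\ell+1}(1-3\ell^3\varepsilon_0)d_0$ on a grid of admissible parameters.

```python
# Numerical sanity check (illustrative, not part of the proof) of
#   d0*(1 - eps0)*l/(l + 1) - l**2*eps0*d0 >= (l/(l + 1))*(1 - 3*l**3*eps0)*d0,
# the inequality behind the last step of the displayed computation; it reduces
# to l + l^2 + l^3 <= 3*l^4, which is implied by l + l^2 + l^3 <= 3*l^3.

def lhs(l, eps0, d0):
    return d0 * (1 - eps0) * l / (l + 1) - l ** 2 * eps0 * d0

def rhs(l, eps0, d0):
    return (l / (l + 1)) * (1 - 3 * l ** 3 * eps0) * d0

def inequality_holds(l, eps0, d0, tol=1e-9):
    # tol absorbs floating-point rounding; the gap is exactly 0 when l = 1.
    return lhs(l, eps0, d0) >= rhs(l, eps0, d0) - tol

for l in range(1, 11):
    eps0 = 1 / (10 * l)      # admissible: eps0 in (0, 1/l)
    d0 = 10 / eps0           # admissible: d0 >= 1/eps0
    assert inequality_holds(l, eps0, d0), (l, eps0, d0)
```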
\begin{definition} Let $T$ be a tree. We say a vertex $v$ of $T$ is a \emph{centroid} of $T$ if for every edge $e\in E(T)$ incident with $v$, the component of $T-e$ containing $v$ has at least $v(T)/2$ vertices, and a \emph{non-centroid vertex} otherwise. Let $v$ be a non-centroid vertex of $T$. If $e\in E(T)$ is incident with $v$ and the component $H$ of $T-e$ containing $v$ has at most $\frac{v(T)-1}{2}$ vertices, then we say $e$ is a \emph{central edge for $v$} in $T$ and that $H$ is a \emph{peripheral piece for $v$}. \end{definition} The following proposition is standard. \begin{proposition}\label{UniqueCenter} The number of centroids in a non-empty tree is either 1 or 2. \end{proposition} \begin{proof} Let $T$ be a non-empty tree. Let $D$ be the directed graph obtained from $T$ by directing every edge $e\in E(T)$ toward the component of $T-e$ with strictly greater than $v(T)/2$ vertices if such a component exists. There may be edges of $T$ which receive no direction (since $v(T)$ may be even); however, there exists at most one edge that does not receive a direction. Note that every vertex of $T$ has outdegree at most one in $D$ as otherwise $T$ has at least $2(v(T)/2) + 1 > v(T)$ vertices. If $v(T)\in \{1,2\}$, then every vertex of $T$ is a centroid as desired. So we may assume that $v(T)\ge 3$. Hence every leaf of $T$ has outdegree one in $D$. Now note that a vertex $v$ of $T$ is a centroid if and only if $v$ is a sink in $D$. Since every vertex has outdegree at most one in $D$ and every leaf has outdegree exactly one, it follows that there exist either one or two sinks in $D$ (depending on whether some edge of $T$ does not receive a direction), as desired. \end{proof} Here we informally explain the proof of Theorem~\ref{DenseSubgraph3} before proceeding with the formal proof. Let $F$ be a maximum shrubbery as above.
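The centroid notion above is also algorithmically convenient: the centroids of a tree can be located in linear time from subtree sizes. A minimal Python sketch (the adjacency-list input format and the function name are my own, not from the paper):

```python
from collections import defaultdict

def centroids(n, edges):
    """Return all centroids of a tree on vertices 0..n-1.

    A vertex v is a centroid exactly when every component of T - v has
    at most n/2 vertices, which matches the edge-based definition above.
    """
    adj = defaultdict(list)
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    size = [1] * n
    parent = [-1] * n
    order = []
    seen = [False] * n
    seen[0] = True
    stack = [0]
    while stack:                      # iterative DFS: record a root-first order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w] = u
                stack.append(w)
    for u in reversed(order):         # accumulate subtree sizes bottom-up
        if parent[u] != -1:
            size[parent[u]] += size[u]
    result = []
    for v in range(n):
        # Components of T - v: the children subtrees plus the "upward" part.
        heaviest = max([size[w] for w in adj[v] if parent[w] == v] + [n - size[v]])
        if heaviest <= n / 2:
            result.append(v)
    return result
```

By Proposition~\ref{UniqueCenter}, the returned list always has length 1 or 2.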
Let $A$ be the set of $(K,d)$-small vertices of $G$, let $B$ be the set of $(K,d)$-big vertices of $G$, and let $C$ be the set of centroids of components in $F$ with exactly $k$ vertices. If $V(F)$ is too small, then $A' = A\setminus V(F)$ will be decently large. If every vertex in $A'$ has most of its neighbors in $B\cup C$, then we obtain a very unbalanced dense bipartite subgraph as desired. So we may assume there exists a vertex $v\in A'$ with a decent number of neighbors in either: components with $<k$ vertices in $F$; non-centroid vertices of components in $F$ with exactly $k$ vertices; or in the rest of $A'$. If $v$ has many neighbors in trees with strictly less than $k$ vertices, then we find one to which $v$ can be added while keeping $F$ $(\varepsilon,d)$-mate-free and $(c,d)$-clean, contradicting maximality. Otherwise, we inductively build up a tree $T$ containing $v$ such that $k/2 < v(T) \le k$, by adding neighbors in $A'$ or peripheral pieces of $F$ that neighbor $v$, ensuring that the result in each step is $(\varepsilon,d)$-mate-free and clean enough. Crucially, this works because peeling off peripheral pieces from distinct components will leave new components which still have $>k/2$ vertices and so we will retain that $F$ is a $k$-shrubbery. Another crucial point is that our threshold for $V(F)$ being too small allows for many vertices in $A\setminus V(F)$ ($\ell v(G)/k$ roughly instead of say $v(G)/k$) and so we will only get a $k$-bounded minor of density roughly $kd/\ell$ (since there will be many uncontracted vertices); this is acceptable since $\ell$ is much smaller than $k$ in our application. We do this in order to obtain a very unbalanced bipartite subgraph when $F$ is not large enough, allowing us to find an $(\ell+1)$-bounded minor of density roughly $d\ell$. \begin{theorem}\label{DenseSubgraph3} Let $K \ge k \ge \ell \ge 2$ be integers. Let $\varepsilon \in (0,1/k)$ and $c=2k\varepsilon$. Let $G$ be a graph with $d=d(G) \ge 2/\varepsilon$.
Then $G$ contains at least one of the following: \begin{enumerate} \item[(i)] a subgraph $H$ of $G$ with $v(H)\le 3k^2Kd$ and $e(H)\ge \varepsilon^2d^2/2$, or \item[(ii)] a bipartite subgraph $H=(X,Y)$ with $|X| \ge \ell |Y|$ and every vertex in $X$ has at least $(1-8k^2\varepsilon)d$ neighbors in $Y$, or \item[(iii)] a $(K,d)$-small, $(\varepsilon,d)$-mate-free, $(c,d)$-clean $k$-shrubbery $F$ such that $$v(F)\ge \left(1 - \frac{2+4\ell}{k}\right)v(G).$$ \end{enumerate} \end{theorem} \begin{proof} Suppose not. We may assume that every proper subgraph $H$ of $G$ has $d(H) < d(G)$ and hence ${\rm deg}(v)\ge d$ for every vertex $v$ of $G$. Let $F$ be a $(K,d)$-small, $(\varepsilon,d)$-mate-free, $(c,d)$-clean $k$-shrubbery such that $v(F)$ is maximized. Let $A$ be the set of $(K,d)$-small vertices of $V(G)$. Let $B$ be the set of $(K,d)$-big vertices in $G$. Note that $Kd|B| \le 2e(G) \le 2dv(G)$. Hence $|B|\le \frac{2}{K}v(G) \le \frac{2}{k}v(G)$ since $K\ge k$. If $|A\setminus V(F)|\le \frac{4\ell}{k}v(G)$, then $v(F)\ge \left(1-\frac{2+4\ell}{k}\right)v(G)$ and (iii) holds as desired. So we may assume that $|A\setminus V(F)| > \frac{4\ell}{k}v(G)$. Let $A' = A\setminus V(F)$. Let $C$ be the set of centroids of components of $F$ with exactly $k$ vertices. Recall that by Proposition~\ref{UniqueCenter}, every component of $F$ has either one or two centroids. \begin{claim}\label{Leftovers} Every vertex in $A'$ has at most $8k^2\varepsilon d$ neighbors in $V(G)\setminus (B\cup C)$. \end{claim} \begin{proof} Let $v\in A'$. Suppose for a contradiction that $v$ has strictly more than $8k^2\varepsilon d$ neighbors in $V(G)\setminus (B\cup C)$. Let $F_1$ be the set of components in $F$ containing vertices of $C$. Let $W=V(F_1)\setminus C$. Let $F_2$ be the set of components in $F\setminus V(F_1)$. Since $F$ is $(c,d)$-clean, we have by definition that $e(G)-e(G/F) \le cd\cdot v(F)$. Apply Proposition~\ref{SmallDense} to $G$.
If Proposition~\ref{SmallDense}(i) holds, then (i) holds, a contradiction. So we may assume that Proposition~\ref{SmallDense}(ii) holds, that is, $G$ is $(K,\varepsilon, d)$-unmated. Thus there are strictly less than $\varepsilon d$ vertices that are $(\varepsilon,d)$-mates of $v$ in $G$. \begin{subclaim}\label{F2} $v$ has at most $2k\varepsilon d$ neighbors in $V(F_2)$. \end{subclaim} \begin{proof} Suppose not. Let $F_3$ be the set of components of $F_2$ that do not contain $(\varepsilon,d)$-mates of $v$ in $G$. Now $v$ has a neighbor in strictly more than $2\varepsilon d$ distinct components of $F_2$. Thus $v$ has a neighbor in strictly more than $\varepsilon d$ components of $F_3$. Apply Corollary~\ref{SmallDenseBounded} to $G$ and $G'=G/F$ with $K := Kk$. If Corollary~\ref{SmallDenseBounded}(i) holds, then (i) holds, a contradiction. So we may assume that Corollary~\ref{SmallDenseBounded}(ii) holds, that is, $G/F$ is $(Kk,\varepsilon,d)$-unmated. Note that every vertex of $A\setminus V(F)$ and every vertex corresponding to a component of $F$ are $(Kk,d)$-small in $G/F$. Hence there exist at most $\varepsilon d$ vertices of $G/F$ that are $(\varepsilon,d)$-mates of $v$ in $G/F$. Since $v$ had a neighbor in strictly more than $\varepsilon d$ components of $F_3$, it follows that there exists a component $T$ in $F_3$ such that $v$ has a neighbor $w$ in $T$ and the vertex $v_T$ corresponding to $T$ in $G/F$ is not an $(\varepsilon,d)$-mate of $v$ in $G/F$. Note that $v(T) < k$ since $T$ is in $F_2$. Let $T' = T+vw$ and $F' = (F\setminus V(T))\cup T'$. Now $F'$ is a $k$-shrubbery since $v(T')\le k$. Note that $T'$ is $(\varepsilon,d)$-mate-free since $T$ is in $F_3$. Hence $F'$ is $(\varepsilon,d)$-mate-free. By Proposition~\ref{ContractingNeighbors} applied to $G/F$, $v$ and $v_T$, we have that $$e(G/F) - e(G/F') \le |N_{G/F}(v) \cap N_{G/F}(v_T)| + 1 \le \varepsilon d + 1,$$ \noindent since $v$ and $v_T$ are not $(\varepsilon,d)$-mates in $G/F$.
But then $$e(G)-e(G/F') \le cd \cdot v(F) + \varepsilon d + 1.$$ \noindent Since $d\ge 2/\varepsilon$, we have that $1\le \varepsilon d$ and hence $$e(G)-e(G/F') \le cd \cdot v(F) + 2\varepsilon d \le cd (v(F)+1) = cd \cdot v(F'),$$ \noindent since $c\ge 2\varepsilon$. Hence $F'$ is $(c,d)$-clean. Since $v(F') > v(F)$, we find that $F'$ contradicts the maximality of $F$. \end{proof} \begin{subclaim}\label{APrime} $v$ has at most $4k\varepsilon d$ neighbors in $A'$. \end{subclaim} \begin{proof} Suppose not. That is, $v$ has strictly more than $4k\varepsilon d$ neighbors in $A'$. Now $v$ has at least $4k\varepsilon d-\lfloor \varepsilon d \rfloor \ge 3k\varepsilon d$ neighbors in $A'$ that are not $(\varepsilon,d)$-mates of $v$ in $G$. Let $v_1,\ldots, v_{\lceil 3k\varepsilon d \rceil}$ be neighbors of $v$ in $A'$ that are not $(\varepsilon,d)$-mates of $v$ in $G$. For each $S\subseteq \{1,\ldots, \lceil 3k\varepsilon d \rceil \}$, let $T_S$ denote the star with center $v$ and leaves $\{v_i: i\in S\}$. Let $F_S = F+T_S$. Let $S$ be such that \begin{itemize} \item $|S|\le k-1$, and \item $T_S$ is $(\varepsilon,d)$-mate-free, and \item $e(G/F) - e(G/F_S) \le 2\varepsilon d |S|$, \end{itemize} \noindent and, subject to those conditions, that $|S|$ is maximized. Since $|S| \le k-1$, we find that $e(G/F) - e(G/F_S) \le 2\varepsilon d(k-1)$. Hence $$e(G)-e(G/F_S) = (e(G) - e(G/F)) + (e(G/F) - e(G/F_S)) \le cd \cdot v(F) + 2\varepsilon d(k-1) \le cd \cdot v(F_S),$$ \noindent since $v(F_S) \ge v(F) + 1$ and $2\varepsilon d(k-1) \le cd$ as $c\ge 2k\varepsilon$. Thus $F_S$ is $(c,d)$-clean. Since $T_S$ is $(\varepsilon,d)$-mate-free, we find that $F_S$ is $(\varepsilon,d)$-mate-free. Since $V(F_S)\subseteq A$, we have that $F_S$ is $(K,d)$-small. First suppose that $|S| \ge k/2$. Then $v(T_S) > k/2$ and yet $v(T_S)\le k$. Thus $F_S$ is a $k$-shrubbery. Since $v(F_S)>v(F)$, we find that $F_S$ contradicts the maximality of $F$. So we may suppose that $|S| < k/2$.
Let $R = \{1,\ldots, \lceil 3k\varepsilon d \rceil\}\setminus S$. Let $R'= \{i \in R: v_i$ does not have an $(\varepsilon,d)$-mate in $\{v_j:j\in S\}\}$. Since $G$ is $(K,\varepsilon,d)$-unmated, we find that $$|R'| \ge |R| - k\varepsilon d \ge \lceil 3k\varepsilon d \rceil - |S| - k\varepsilon d \ge 2k\varepsilon d + (1-k) > k\varepsilon d,$$ \noindent since $1\le \varepsilon d$. Note that $F_S$ is a $k$-bounded forest of $G$ and hence $G/F_S$ is a $k$-bounded minor of $G$. Apply Corollary~\ref{SmallDenseBounded} to $G$ and $G'=G/F_S$ with $K := Kk$. If Corollary~\ref{SmallDenseBounded}(i) holds, then (i) holds, a contradiction. So we may assume that Corollary~\ref{SmallDenseBounded}(ii) holds, that is, $G/F_S$ is $(Kk,\varepsilon,d)$-unmated. Note that every vertex of $A\setminus V(F_S)$ and every vertex corresponding to a component of $F_S$ are $(Kk,d)$-small in $G/F_S$. Let $v_{T_S}$ be the vertex corresponding to $T_S$ in $G/F_S$. Hence there exist at most $\varepsilon d$ vertices in $G/F_S$ that are $(\varepsilon,d)$-mates of $v_{T_S}$ in $G/F_S$. Since $|R'| > \varepsilon d$, there exists $i\in R'$ such that $v_i$ is not an $(\varepsilon,d)$-mate of $v_{T_S}$ in $G/F_S$. Let $S' = S\cup \{i\}$. Now $|S'| = |S|+1 \le k-1$ since $|S| < k/2$ and $k\ge 2$. Moreover, $F_{S'}$ is $(\varepsilon,d)$-mate-free since $i\in R'$. By Proposition~\ref{ContractingNeighbors} applied to $G/F_S$, $v_{T_S}$ and $v_i$, we have that $$e(G/F_S) - e(G/F_{S'}) \le |N_{G/F_S}(v_{T_S}) \cap N_{G/F_S}(v_i)| + 1 \le \varepsilon d + 1,$$ \noindent since $v_{T_S}$ and $v_i$ are not $(\varepsilon,d)$-mates in $G/F_S$. But then $$e(G/F)-e(G/F_{S'}) \le 2\varepsilon d |S| + \varepsilon d + 1.$$ \noindent Since $1\le \varepsilon d$, we find that $$e(G/F) - e(G/F_{S'}) \le 2\varepsilon d (|S|+1) = 2\varepsilon d |S'|.$$ \noindent Since $|S'| > |S|$, we find that $S'$ contradicts the maximality of $S$. \end{proof} \begin{subclaim}\label{W} $v$ has at most $4k^2\varepsilon d$ neighbors in $W$.
\end{subclaim} \begin{proof} Suppose not. That is, $v$ has strictly more than $4k^2\varepsilon d$ neighbors in $W$. Now $v$ has a neighbor that is not a centroid in at least $4k\varepsilon d$ distinct components of $F_1$. Now there are strictly less than $\varepsilon d$ components of $F_1$ containing an $(\varepsilon,d)$-mate of $v$ in $G$. So there exist at least $3k\varepsilon d$ components of $F_1$ that contain a non-centroid vertex that is a neighbor of $v$ and that do not contain an $(\varepsilon,d)$-mate of $v$ in $G$. Let $T_1, \ldots, T_{\lceil 3k\varepsilon d \rceil}$ be distinct such components. For each $1\le i \le \lceil 3k\varepsilon d \rceil$, let $v_i$ be a non-centroid vertex of $T_i$ that is a neighbor of $v$, let $H_i$ be the peripheral piece for $v_i$ in $T_i$, let $e_i$ be the central edge for $v_i$ in $T_i$, and let $T_i' = T_i\setminus V(H_i)$. For each $S\subseteq \{1,\ldots, \lceil 3k\varepsilon d \rceil\}$, let $T_S$ denote the tree with vertex set $\{v\}\cup \bigcup_{i\in S} V(H_i)$ and edge set $\{vv_i: i \in S\} \cup \bigcup_{i\in S} E(H_i)$. Let $F_S = (F\setminus \bigcup_{i\in S} V(T_i))+(\bigcup_{i\in S}T_i') +T_S$. Let $S$ be such that \begin{itemize} \item $v(T_S)\le k$, and \item $T_S$ is $(\varepsilon,d)$-mate-free, and \item $e(G/F) - e(G/F_S) \le 2\varepsilon d |S|$, \end{itemize} \noindent and, subject to those conditions, that $v(T_S)$ is maximized. Since $|S| \le k-1$, we find that $e(G/F) - e(G/F_S) \le 2\varepsilon d(k-1)$. Hence $$e(G)-e(G/F_S) = (e(G) - e(G/F)) + (e(G/F) - e(G/F_S)) \le cd \cdot v(F) + 2\varepsilon d(k-1) \le cd \cdot v(F_S),$$ \noindent since $c\ge 2k\varepsilon$. Thus $F_S$ is $(c,d)$-clean. Since $T_S$ is $(\varepsilon,d)$-mate-free, we find that $F_S$ is $(\varepsilon,d)$-mate-free. Since $V(F_S)\subseteq A$, we have that $F_S$ is $(K,d)$-small. First suppose that $v(T_S) > k/2$. Note that $v(T_i') > k/2$ for every $i\in S$ since $e_i$ is a central edge for $v_i$ in $T_i$. Thus $F_S$ is a $k$-shrubbery. Since $v(F_S)>v(F)$, we find that $F_S$ contradicts the maximality of $F$.
So we may suppose that $v(T_S) \le k/2$. Let $R = \{1,\ldots, \lceil 3k\varepsilon d \rceil\}\setminus S$. Let $R'= \{i \in R: H_i$ does not have an $(\varepsilon,d)$-mate in $\bigcup_{j\in S} V(H_j)\}$. Since $G$ is $(K,\varepsilon,d)$-unmated, we find that $$|R'| \ge |R| - k\varepsilon d \ge 3k\varepsilon d - |S| - k\varepsilon d \ge 2k\varepsilon d - (k-1) > \varepsilon d,$$ \noindent since $1\le \varepsilon d$. Note that $F_S$ is a $k$-bounded forest of $G$ and hence $G/F_S$ is a $k$-bounded minor of $G$. Apply Corollary~\ref{SmallDenseBounded} to $G$ and $G'=G/F_S$ with $K := Kk$. If Corollary~\ref{SmallDenseBounded}(i) holds, then (i) holds, a contradiction. So we may assume that Corollary~\ref{SmallDenseBounded}(ii) holds, that is, $G/F_S$ is $(Kk,\varepsilon,d)$-unmated. Note that every vertex of $A\setminus V(F_S)$ and every vertex corresponding to a component of $F_S$ are $(Kk,d)$-small in $G/F_S$. Let $v_{T_S}$ be the vertex corresponding to $T_S$ in $G/F_S$. Hence there exist at most $\varepsilon d$ vertices in $G/F_S$ that are $(\varepsilon,d)$-mates of $v_{T_S}$ in $G/F_S$. For each $i\in R'$, let $v_{T_i}$ be the vertex of $G/F_S$ corresponding to $T_i$. Since $|R'| > \varepsilon d$, there exists $i\in R'$ such that $v_{T_i}$ is not an $(\varepsilon,d)$-mate of $v_{T_S}$ in $G/F_S$. Let $S' = S\cup \{i\}$. Now $|S'| = |S|+1 \le k-1$ since $|S| < k/2$ and $k\ge 2$. Moreover, $F_{S'}$ is $(\varepsilon,d)$-mate-free since $i\in R'$. Let $F'_S = F_S-e_i$. Note that $F'_S$ is a proper subgraph of $F_S$ with $e(F'_S) < e(F_S)$. Hence by Proposition~\ref{Subforest}, $e(G/F'_S) \ge e(G/F_S) + 1$. Let $v_{H_i}$ be the vertex of $G/F'_S$ corresponding to $H_i$ (which is now a component of $F'_S$).
Since $v_{T_i}$ is not an $(\varepsilon,d)$-mate of $v_{T_S}$ in $G/F_S$, it follows that $$|N_{G/F'_S}(v_{H_i}) \cap N_{G/F'_S}(v_{T_S})| \le \varepsilon d + 1.$$ \noindent By Proposition~\ref{ContractingNeighbors} applied to $G/F'_S$, $v_{T_S}$ and $v_{H_i}$, we have that $$e(G/F'_S) - e(G/F_{S'}) \le |N_{G/F'_S}(v_{T_S}) \cap N_{G/F'_S}(v_{H_i})| + 1 \le \varepsilon d + 2.$$ \noindent Thus $$e(G/F_S) - e(G/F_{S'}) \le \varepsilon d + 1.$$ \noindent But then $$e(G/F)-e(G/F_{S'}) \le 2\varepsilon d |S| + \varepsilon d + 1 \le 2\varepsilon d |S'|,$$ \noindent since $1\le \varepsilon d$ and $|S'|=|S|+1$. Since $v(T_{S'}) > v(T_S)$, we find that $S'$ contradicts the maximality of $S$. \end{proof} By Subclaims~\ref{F2},~\ref{APrime}, and~\ref{W}, we find that $v$ has at most $(4k^2+6k)\varepsilon d \le 8k^2 \varepsilon d$ neighbors in $V(G)\setminus (B\cup C)$, a contradiction. \end{proof} Recall that $|B|\le \frac{2}{k}v(G)$. Since every vertex in $C$ is a centroid of a tree on $k$ vertices and there are at most two centroids of a tree, it follows that $|C|\le \frac{2}{k}v(G)$. Hence $|B\cup C|\le \frac{4}{k}v(G)$. Recall that $|A'|\ge \frac{4\ell}{k}v(G)$. Hence $|A'|\ge \ell |B\cup C|$. Since $G$ has minimum degree at least $d$, it follows from Claim~\ref{Leftovers} that every vertex in $A'$ has at least $(1-8k^2\varepsilon)d$ neighbors in $B\cup C$. Letting $X=A'$, $Y=B\cup C$ and $H=G(X,Y)$, we find that outcome (ii) holds as desired. \end{proof} \section{Putting It All Together}\label{All} We are now ready to prove Theorem~\ref{DenseSubgraph2}. \begin{proof}[Proof of Theorem~\ref{DenseSubgraph2}] Let $K=k$. Apply Theorem~\ref{DenseSubgraph3} to $G$. First suppose Theorem~\ref{DenseSubgraph3}(i) holds. But then (i) holds as desired. Next suppose Theorem~\ref{DenseSubgraph3}(iii) holds.
That is, there exists a $(K,d)$-small, $(\varepsilon,d)$-mate-free, $(2k\varepsilon,d)$-clean $k$-shrubbery $F$ of $G$ such that $$v(F)\ge \left(1 - \frac{2+4\ell}{k}\right)v(G).$$ \noindent Since $F$ is a $k$-shrubbery, we find that $$v(G/F) \le \frac{2}{k} v(F) + (v(G)-v(F)) \le \frac{2}{k}v(G) + \frac{2+4\ell}{k} v(G) \le \frac{8\ell}{k} v(G).$$ \noindent Since $F$ is $(2k\varepsilon,d)$-clean, we have by definition that $$e(G)-e(G/F) \le 2k\varepsilon d \cdot v(F) \le 2k \varepsilon d \cdot v(G).$$ \noindent Since $d=d(G)$, we have that $e(G) = d\cdot v(G)$ and hence $$e(G/F) \ge (1-2k\varepsilon) d \cdot v(G).$$ \noindent Hence $$d(G/F) = \frac{e(G/F)}{v(G/F)} \ge \frac{k}{8\ell} (1-2k\varepsilon) d,$$ \noindent and (iii) holds with $H=G/F$ as desired. So we may assume that Theorem~\ref{DenseSubgraph3}(ii) holds. That is, there exists a bipartite subgraph $H=(X,Y)$ of $G$ with $|X| \ge \ell |Y|$ and every vertex in $X$ has at least $(1-8k^2\varepsilon)d$ neighbors in $Y$. Apply Theorem~\ref{kclawDense} with $d_0:= (1-8k^2\varepsilon)d$ and $\varepsilon_0 := 2\varepsilon$ to $H$. First suppose Theorem~\ref{kclawDense}(i) holds. That is, there exists a subgraph $H_0$ of $H$ with $v(H_0) \le 6\ell^2Kd_0 \le 6k^3d$ and $e(H_0)\ge \varepsilon_0^2 d_0^2/2 = 4\varepsilon^2 (1-8k^2\varepsilon)^2 d^2/2.$ Since $8k^2\varepsilon\le 1/2$ as $\varepsilon \le \frac{1}{16k^2}$, we find that $e(H_0)\ge \varepsilon^2 d^2/2$ and (i) holds as desired. So we may assume that Theorem~\ref{kclawDense}(ii) holds. That is, $H$ contains an $(\ell+1)$-bounded minor $H_0$ with $$d(H_0) \ge \ell (1-3\ell^2\varepsilon_0) d_0 \ge \ell (1-6\ell^2\varepsilon)(1-8k^2\varepsilon) d \ge \ell (1-14k^2\varepsilon) d,$$ \noindent since $\ell \le k$. But now (ii) holds as desired. \end{proof} We now choose values for $k, \ell$ and $\varepsilon$ in Theorem~\ref{DenseSubgraph2} so as to obtain the required growth in density to prove Corollary~\ref{Forced2}.
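The parameter choices in the next corollary are engineered so that several clean identities hold, notably $14k^2\varepsilon = \tfrac12$, $\log_2(\ell+1) = 2/\alpha$ and $\log_2 k = 4/\alpha^2$. A quick numerical sketch checking them (the sampled values of $\alpha$ are mine; Python is used only as a calculator here):

```python
import math

def params(alpha):
    # Parameter choices of Corollary DenseSubgraph4; requires 1/alpha to be
    # an integer (use binary-exact alpha such as 0.5 or 0.25 to avoid
    # floating-point surprises).
    inv = round(1 / alpha)
    ell = 2 ** (2 * inv) - 1        # ell = 2^{2/alpha} - 1, exact integer
    k = 2 ** (4 * inv * inv)        # k = 2^{4/alpha^2}, exact integer
    eps = 1 / (28 * k * k)          # eps = 1/(28 k^2)
    return ell, k, eps

def identities_hold(alpha):
    ell, k, eps = params(alpha)
    inv = round(1 / alpha)
    return (math.isclose(14 * k * k * eps, 0.5)   # so log2(1 - 14 k^2 eps) = -1
            and 2 * k * eps <= 0.5                # so log2(1 - 2 k eps) >= -1
            and math.log2(ell + 1) == 2 * inv     # log2(ell + 1) = 2/alpha
            and math.log2(k) == 4 * inv * inv)    # log2(k) = 4/alpha^2
```

For example, $\alpha = 1/2$ gives $\ell = 15$ and $k = 2^{16}$, matching the values used in the verification below.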
\begin{corollary}\label{DenseSubgraph4} Let $\alpha \in (0,1/2]$ be such that $1/\alpha$ is an integer, and let $\ell = 2^{2/\alpha} - 1$, $k= 2^{4/\alpha^2}$ and $\varepsilon = \frac{1}{28k^2}$. Let $G$ be a graph with $d=d(G) \ge 2/\varepsilon$. Then $G$ contains at least one of the following: \begin{enumerate} \item[(i)] a subgraph $H$ of $G$ with $v(H)\le 3k^3d \le 2^{16/\alpha^2} d$ and $e(H)\ge \varepsilon^2d^2/2 \ge 2^{-16/\alpha^2} d^2$, or \item[(ii)] an $(\ell+1)$-bounded minor $H$ with $d(H) \ge \ell \cdot (1-14k^2\varepsilon) \cdot d \ge (\ell+1)^{1-\alpha} d$, or \item[(iii)] a $k$-bounded minor $H$ with $d(H) \ge \frac{k}{8\ell} \cdot (1-2k\varepsilon) \cdot d \ge k^{1-\alpha} d$. \end{enumerate} \end{corollary} \begin{proof} By Theorem~\ref{DenseSubgraph2}, it suffices to check that the various inequalities are satisfied by our choice of $\ell,k$ and $\varepsilon$. Note that $k$ and $\ell$ are integers since $1/\alpha$ is an integer. We will use the fact that $\log(1-x) \ge -2x$ for every $x\in [0,1/2]$, where all logarithms are base 2. First we verify the inequalities in outcome (i). Since $\alpha \le 1/2$, we have that $k\ge 2^8 \ge 6$. Hence $$6k^3 \le k^4 = 2^{16/\alpha^2}.$$ \noindent Similarly, $$\varepsilon^2/2 \ge 2^{-11} k^{-2} \ge 2^{-11 - (8/\alpha^2)} \ge 2^{-16/\alpha^2},$$ \noindent since $\alpha\le 1/2$, as desired. Next we verify the inequality in outcome (ii). Now $$\frac{\log (\ell \cdot (1-14k^2\varepsilon) )}{\log(\ell +1)} = 1+\frac{\log(1-\frac{1}{\ell+1}) + \log (1-14k^2\varepsilon) }{\log(\ell + 1)}.$$ \noindent Since $\ell \ge 1$, we have that $\frac{1}{\ell+1}\le \frac{1}{2}$. Hence $\log(1-\frac{1}{\ell+1}) \ge -1$. Similarly, $14k^2\varepsilon = \frac{1}{2}$ and hence $\log (1-14k^2\varepsilon) = -1$. Thus we find that $$\frac{\log (\ell \cdot (1-14k^2\varepsilon) )}{\log(\ell +1)} \ge 1 - \frac{2}{\log(\ell +1)} = 1 -\frac{2}{2/\alpha} = 1 -\alpha,$$ \noindent and hence $\ell \cdot (1-14k^2\varepsilon) \ge (\ell+1)^{1-\alpha}$, as desired.
Finally we verify the inequality in outcome (iii). Now $$\frac{ \log \left(\frac{k}{8\ell} \cdot (1-2k\varepsilon) \right)}{\log k} = 1 + \frac{-\log(8) - \log(\ell) + \log(1-2k\varepsilon)}{\log k}.$$ \noindent Since $\alpha \le 1/2$, we have that $\ell \ge 15$. Yet $2k\varepsilon \le \frac{1}{2}$. Hence $\log(1-2k\varepsilon) \ge -1$. So $-\log(8) - \log(\ell) - 1 \ge -4 -\log (\ell+1) \ge -2\log(\ell+1)$ since $\ell\ge 15$. \noindent Hence $$\frac{ \log \left(\frac{k}{8\ell} \cdot (1-2k\varepsilon) \right)}{\log k} \ge 1 -\frac{2\log(\ell+1)}{\log k} \ge 1 - \frac{2(2/\alpha)}{4/\alpha^2} = 1 -\alpha,$$ \noindent and hence $\frac{k}{8\ell} \cdot (1-2k\varepsilon) \ge k^{1-\alpha}$, as desired. \end{proof} We are now prepared to prove Corollary~\ref{Forced2}. \begin{proof}[Proof of Corollary~\ref{Forced2}] Let $\ell = 2^{2/\alpha} - 1$, $k= 2^{4/\alpha^2}$ and $\varepsilon = \frac{1}{28k^2}$. Note that $\ell$ and $k$ are integers since $1/\alpha$ is an integer. Suppose for a contradiction that the corollary fails for some $1\le r_0 \le \varepsilon D /2$ but holds for all $1\le r \le \frac{r_0}{(\ell+1)^{1-\alpha}}$. Thus there exists a $K_t$-minor-free graph $G$ with $d(G) \ge \frac{D}{r_0}$ such that no subgraph $J$ of $G$ satisfies $v(J) \le 2^{16/\alpha^2} r_0^{\lambda} D$ and $d(J) \ge 2^{-16/\alpha^2} r_0^{-\lambda} D$. Note that $d(G) \ge 2/\varepsilon$ since $r_0 \le \varepsilon D / 2$. Hence by Corollary~\ref{DenseSubgraph4}, $G$ contains a minor $H$ satisfying Corollary~\ref{DenseSubgraph4}(ii) or (iii). Suppose first that $H$ satisfies (ii). Let $r = \frac{D}{d(H)}$. Thus $r_0 \ge (\ell+1)^{1-\alpha}r$. Moreover, $H$ is $K_t$-minor-free since $G$ is and hence $r\ge 1$.
Thus by the choice of $r_0$ there exists a subgraph $J'$ of $H$ such that $$v(J') \le 2^{16/\alpha^2} r^\lambda D,$$ \noindent and $$d(J') \ge 2^{-16/\alpha^2}r^{-\lambda}D.$$ \noindent As $H$ is an $(\ell+1)$-bounded minor of $G$, there exists a subgraph $J$ of $G$ corresponding to $J'$ such that $v(J) \le (\ell+1)v(J')$ and $d(J) \ge \frac{d(J')}{\ell+1}$. Thus we have $$v(J) \le (\ell+1) 2^{16/\alpha^2} r^\lambda D \le 2^{16/\alpha^2} (\ell+1) \left(\frac{r_0}{(\ell+1)^{1-\alpha}}\right)^\lambda D.$$ \noindent Since $\lambda = \frac{1}{1-\alpha}$, we have that $$v(J)\le 2^{16/\alpha^2} r_0^\lambda D.$$ \noindent Similarly, $$d(J) \ge 2^{-16/\alpha^2}r_0^{-\lambda}D,$$ \noindent contradicting that no such subgraph of $G$ existed. So we may assume that $H$ satisfies (iii). Let $r = \frac{D}{d(H)}$. Thus $r_0 \ge k^{1-\alpha}r$. Moreover, $H$ is $K_t$-minor-free since $G$ is and hence $r\ge 1$. Thus by the choice of $r_0$ there exists a subgraph $J'$ of $H$ such that $$v(J') \le 2^{16/\alpha^2}r^\lambda D,$$ \noindent and $$d(J') \ge 2^{-16/\alpha^2}r^{-\lambda}D.$$ \noindent As $H$ is a $k$-bounded minor of $G$, there exists a subgraph $J$ of $G$ corresponding to $J'$ such that $v(J) \le kv(J')$ and $d(J) \ge \frac{d(J')}{k}$. Thus we have $$v(J) \le k 2^{16/\alpha^2}r^\lambda D \le 2^{16/\alpha^2} k \left(\frac{r_0}{k^{1-\alpha}}\right)^\lambda D.$$ \noindent Since $\lambda = \frac{1}{1-\alpha}$, we have that $$v(J)\le 2^{16/\alpha^2} r_0^\lambda D.$$ \noindent Similarly, $$d(J) \ge 2^{-16/\alpha^2}r_0^{-\lambda}D,$$ \noindent contradicting that no such subgraph of $G$ existed. \end{proof} \section*{Acknowledgments} I would like to thank Michelle Delcourt for helpful comments when preparing this manuscript.
\section*{Supplemental Material} \beginsupplement \subsection*{Experimental details} The 455~nm laser is frequency-stabilized with a blue detuning of $\delta_{7{\rm P}}=1.5$~GHz to the $6{\rm S}_{1/2}, {\rm F}=4 \rightarrow 7{\rm P}_{3/2}, {\rm F'}=5 $ transition (see Fig.~\ref{fig:fig2}). The 1070~nm laser is scanned over the two-photon resonance to the Rydberg state $|n{\rm S}_{1/2} \rangle$. Both lasers have an estimated linewidth below 5~MHz. The frequency of the 1070~nm laser is calibrated for each measurement using a Fabry-P\'{e}rot interferometer and additionally by an EIT-signal \cite{Urvoy2013} to fix the origin of the frequency axis. The 455~nm laser typically has a power of 3~mW. The 1070~nm laser has a power of 15~W and passes through a Pockels cell, which allows the power of this laser to be switched for the experiment as fast as $1.5$~ns with a repetition rate of 10~kHz. The two-photon Rabi frequency is $ \Omega = 2 \pi \times 0.05 \dots 0.5 $~GHz; the detuning to the Rydberg state is $ \Delta = 2 \pi \times - 1 \ldots -10 $~GHz. The glass cell is home-made and consists of two 1~mm-thick quartz optical flats of $5\times5$~${\rm cm}^2$ separated by a 220~${\rm \mu{}m}$ spacer and sealed at the edge. A glass tube was connected to the cell, filled with cesium under vacuum and sealed off. This glass tube serves as a reservoir. The temperature of the cell is kept constant at $200^\circ$C to prevent the metallic cesium from condensing, whereas the temperature of the reservoir is varied between $70^\circ$C and $150^\circ$C to tune the atom number density. The atom number density in the cell is determined for each measurement by performing absorption spectroscopy on the D2-line of cesium \cite{Siddons2008}. Both beams are linearly polarized, overlapped in a counter-propagating configuration and focused inside the cell to a beam waist of approx. 15~${\rm \mu{}m}$.
After passing through the cell, the blue beam is focused on a pinhole in order to select the central part of radius 6.25~${\rm \mu{}m}$ inside the cell. We verified that the resulting imaged volume is indeed almost cylindrical inside the cell. The change in transmission is detected using a fast amplified silicon photodiode (Femto HSA-X-S-1G4-Si). During the measurement, the transmission change of the 455~nm laser is monitored and averaged 300~times by a fast oscilloscope. We ensured that we operate in the linear response regime of the photodiode and measured the conversion efficiency of the detection to be approx. 850~V/W. This allows us to convert the transmission change into the real number of Rydberg atoms that are excited, assuming that no decays or losses of Rydberg atoms occur. This assumption is valid for short times as the time spent by an atom inside the excitation volume is on the order of 20~ns. \subsection*{Approximation to 2-level atoms} \begin{figure} \includegraphics[scale=1]{fig_S1.pdf} \caption{Left panel: 3-level ladder system with the useful definitions. The transition between the levels $|i\rangle$ and $|j\rangle$ is driven with the Rabi frequency $\Omega_{ij}$. The detuning to the intermediate state is $\delta$. Right panel: Approximation to a 2-level system after adiabatic elimination of the intermediate state $|{\rm e}\rangle$. The effective 2-level Rabi frequency is $\Omega_{\rm eff} = \Omega_{\rm ge}\Omega_{{\rm er}}/(2|\delta|)$. For simplicity, the detuning to state $|{\rm r}\rangle$ and the resulting light shifts are not shown.} \label{fig:figS1} \end{figure} We reduce our 3-level system to a 2-level system by making use of the adiabatic elimination of the intermediate state. The basic principles of this approximation and the relevant parameters are shown in Fig.~\ref{fig:figS1}.
The density matrix for the 3-level system is $\rho=(\rho_{ij})_{i,j=\{{\rm g,e,r}\}}$, and we define the density matrix for the effective 2-level system as $\hat{\rho}=(\hat{\rho}_{ij})_{i,j=\{{\rm g,r}\}}$. The populations of states $|{\rm g}\rangle$ and $|{\rm r}\rangle$ coincide between both descriptions, i.e. $ \rho_{{\rm gg}} = \hat{\rho}_{{\rm gg}} $ and $ \rho_{{\rm rr}} = \hat{\rho}_{{\rm rr}} $. The approximation relies on assuming that the intermediate state remains unpopulated ($\partial_t \rho_{{\rm ee}}=0$ and $\left.\rho_{{\rm ee}}\right|_{t=0}=0$). The resulting identity from the optical Bloch equations without decays is \begin{equation} 0 = \Omega_{{\rm ge}} \operatorname{Im}(\rho_{{\rm ge}}) - \Omega_{{\rm er}} \operatorname{Im}(\rho_{{\rm er}}) \end{equation} Using $\partial_t \rho_{{\rm rr}} = \Omega_{{\rm er}} \operatorname{Im}(\rho_{{\rm er}})$ one obtains \begin{equation} \operatorname{Im}(\rho_{{\rm ge}}) = \frac{1}{\Omega_{{\rm ge}}} \partial_t \rho_{{\rm rr}} \end{equation} which means that the absorption on the lower transition is proportional to the excitation rate to the Rydberg state $|{\rm r}\rangle$. For the 2-level system we also obtain from the optical Bloch equations $\partial_t \hat{\rho}_{{\rm rr}} = \Omega_{\rm eff} \operatorname{Im}(\hat{\rho}_{{\rm gr}})$ and therefore \begin{equation} \operatorname{Im}(\rho_{{\rm ge}}) = \frac{\Omega_{\rm eff}}{\Omega_{{\rm ge}}} \operatorname{Im}(\hat{\rho}_{{\rm gr}}) = \frac{\Omega_{{\rm er}}}{2|\delta|} \operatorname{Im}(\hat{\rho}_{{\rm gr}}) \end{equation} This last relation shows the link between the 2-level model and what is experimentally measured. \subsection*{S-state potentials in cesium} At short inter-atomic distances, i.e. below 1~${\rm \mu{}m}$, the pair-state interaction potentials for Rydberg S-states become very complex, as shown in Fig.~\ref{fig:figS2} for the 32S state. We will focus on this 32S state, but the situation is similar for other principal quantum numbers. 
Neighboring $n^{\prime}{\rm P}\text{-}n^{\prime\prime}{\rm D}$ pair-states interact with the $\rm 32S\text{-}32S$ state with weak but resonant dipole-quadrupole interaction, leading to avoided crossings and state mixing. This means that the $n^{\prime}{\rm P}\text{-}n^{\prime\prime}{\rm D}$ pair-states, which are dipole-forbidden for laser excitation from the $7{\rm P}_{3/2}$ state in the non-interacting case, carry some admixture $\varepsilon_{\rm 32S\text{-}32S}$ of the $\rm 32S\text{-}32S$ state. Therefore it is possible to excite pair-states at negative detunings, where the detuning is defined with respect to the unperturbed $\rm 32S\text{-}32S$ state, which would not be the case with purely repulsive van-der-Waals interaction. \begin{figure} \includegraphics[scale=1]{fig_S2.pdf} \caption{Top panel: Density plot of the 32S-32S admixture $\varepsilon_{\rm 32S\text{-}32S}$ versus the inter-atomic distance. The molecular quantum number here is $M=0$, and interactions up to the quadrupole-quadrupole order are included \cite{Schwettmann2006}. The green (resp. red) line depicts the extrapolated van-der-Waals (resp. dipole-dipole) pair-state potential. Bottom panel: 32S-32S admixture $\varepsilon_{\rm 32S\text{-}32S}$ versus the inter-atomic distance at a $-2$~GHz potential energy (depicted as the blue dash-dot line in the upper panel).} \label{fig:figS2} \end{figure} These pair state potentials are actually only valid for an excitation close to an atom in the 32S state. Once such a pair is created, it is mainly a $n^{\prime}{\rm P}\text{-}n^{\prime\prime}{\rm D}$ pair, with an amplitude of $1-\varepsilon_{\rm 32S\text{-}32S}^2 > 0.99$. For any subsequent excitation around this pair, one therefore has to consider the pair-state interaction potentials for the $32{\rm S}\text{-}n^{\prime}{\rm P}$ and $32{\rm S}\text{-}n^{\prime\prime}{\rm D}$ pair states. 
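To give a sense of the distances involved, the radius at which a pair potential compensates a given laser detuning follows directly from inverting $C_6/r^6 = \Delta$ (or $C_3/r^3 = \Delta$ for resonant dipole-dipole interaction). A minimal sketch, using the $C_6$, $C_3$ and $\Delta$ values quoted elsewhere in this supplement:

```python
# Facilitation radius: distance where the pair potential equals the detuning.
# Values quoted elsewhere in this supplement (MHz and micrometres);
# the common factor of 2*pi cancels in the ratios.
C6 = -109.0      # MHz * um^6, van-der-Waals coefficient
C3 = -258.0      # MHz * um^3, dipole-dipole coefficient (S-P pair)
Delta = -2200.0  # MHz, laser detuning

r_vdw = (C6 / Delta) ** (1 / 6)   # ~0.61 um
r_dd = (C3 / Delta) ** (1 / 3)    # ~0.49 um
print(r_vdw, r_dd)
```

Both radii are well below 1~${\rm \mu{}m}$, i.e. precisely in the regime where the state mixing discussed above matters.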
For an S-P pair state the interaction potential is purely of the form $C_3 / r^3$, since the dipole-dipole interaction of an S-P pair with its permutation P-S is resonant, and the symmetric (resp. antisymmetric) linear combination exhibits attractive (resp. repulsive) interaction. The corresponding interaction strength is $C_3^{\rm S\text{-}P} \approx -290$~$\rm MHz\cdot\mu{}m^3$, which is in excellent agreement with the value of $C_3 = 2\pi \times -258$~$\rm MHz\cdot\mu{}m^3$ extracted from the experimental results. For an S-D pair state the interaction is of van-der-Waals character and repulsive ($C_6^{\rm S\text{-}D} \approx 11$~$\rm MHz\cdot\mu{}m^6$). The $n^{\prime\prime}{\rm D}$ component therefore plays no role in our case of aggregation with red detunings. The admixture $\varepsilon_{\rm 32S\text{-}32S}$ is the rescaling factor for the Rabi frequency when exciting a $n^{\prime}{\rm P}\text{-}n^{\prime\prime}{\rm D}$ pair. Because of the nature of the aggregates, which enclose previously excited Rydberg atoms, at most every second Rydberg excitation is created via this process. The other excitations occur either as a direct off-resonant excitation from the ground state, or close to a $n^{\prime}{\rm P}\text{-}n^{\prime\prime}{\rm D}$ pair. In neither case does the Rabi frequency need rescaling. An estimate for the overall rescaling factor is therefore $\sqrt{\varepsilon_{\rm 32S\text{-}32S}} \sim 0.3$, which is close to the rescaling factors of 0.38 (for van-der-Waals interaction) and 0.23 (for dipole-dipole interaction) needed to match the experimental data to the results from the simulation. In the whole treatment, we neglected the distance dependence of $\varepsilon_{\rm 32S\text{-}32S}$. In order to justify this approximation, let us consider just two pair states $|1\rangle$ and $|2\rangle$ with energies $E_1(r) = C_6/r^6$ and $E_2(r) = E_0 - C_6/r^6$, and a small dipole-quadrupole interaction $W(r) = U/r^4$ between the two pair states.
The resulting admixture of pair state $|1\rangle$ at small $r$ is $\varepsilon_1 \approx W(r)/|E_2(r)-E_1(r)| \propto r^{2}$ if we consider $E_0$ to be small, and the pair state energy is $E_1^{\prime}(r) = \Delta \approx C_6/r^6$, so that $r \propto |\Delta|^{-1/6}$. Therefore $\varepsilon_1 \propto |\Delta|^{-1/3}$ under these rather crude approximations. Using the same argument as before we obtain $\Omega \propto |\Delta|^{-1/6}$, which is a sufficiently weak dependence to be neglected. \subsection*{Dephasing rate} The first contribution to dephasing in the experiment is the Doppler effect, characterized by the two-photon Doppler width \mbox{$\gamma_{\rm D} = |k_{455}-k_{1070}| \sqrt{\frac{8 \ln2\ k_B T}{m}}$}. Here $m$ is the atomic mass of cesium, $T$ is the temperature, and $k_{455}$ and $k_{1070}$ are the wavenumbers of the two lasers, whose difference enters because the lasers are counter-propagating. At $200^\circ$C we have $\gamma_{\rm D} = 2 \pi \times 512 $~MHz. The other important source of dephasing arises from the short transit time during which the atoms are in the resonant shell (or facilitation region) of width $\Delta r$. In first approximation, only the velocity component perpendicular to the shell $\textbf{\textsf{v}}_{\perp}$ determines the transit time, as depicted in Fig.~\ref{fig:figS3}. Therefore we define this motional dephasing rate as \begin{equation} \gamma_{\rm m} = \frac{\langle \textsf{v}_{\perp} \rangle}{\Delta r} \label{eq:eqS4} \end{equation} where $\langle \textsf{v}_{\perp} \rangle = \frac{1}{\sqrt{3}} \sqrt{\frac{8 k_B T}{\pi m}}$ is the one-dimensional mean atomic velocity. For a temperature of $200^\circ$C, this 1D mean velocity is $\langle \textsf{v}_{\perp} \rangle = 158$~${\rm m\,s^{-1}}$. Since both dephasing mechanisms are Gaussian, the total dephasing rate is defined as $\gamma = \sqrt{(\gamma_{\rm D})^2 + (\gamma_{\rm m})^2 }$. Moreover, $\Delta r$ depends both on the total dephasing rate and on the interaction potential (see Fig.~1).
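The two numbers above follow directly from these formulas; the short sketch below (CODATA constants, cesium mass $132.905$~u) is purely a consistency check:

```python
import numpy as np

kB, u = 1.380649e-23, 1.66053907e-27            # J/K, kg (CODATA)
m = 132.905 * u                                  # cesium atomic mass
T = 200 + 273.15                                 # cell temperature in K

k455, k1070 = 2*np.pi/455e-9, 2*np.pi/1070e-9    # laser wavenumbers (1/m)

# Two-photon Doppler width for counter-propagating beams (FWHM, rad/s)
gamma_D = abs(k455 - k1070) * np.sqrt(8*np.log(2)*kB*T/m)

# 1D mean velocity component perpendicular to the resonant shell
v_perp = np.sqrt(8*kB*T/(np.pi*m)) / np.sqrt(3)

print(gamma_D/(2*np.pi*1e6))   # ~512 (MHz)
print(v_perp)                  # ~158 (m/s)
```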
First assuming Rydberg interaction of the van-der-Waals (vdW) type, we rename the relevant quantities as $\Delta r^{\rm vdW}$, $\gamma^{\rm vdW}_{\rm m}$ (motional dephasing rate) and $\gamma^{\rm vdW}$ (total dephasing rate). The width of the resonant shell is then given by \cite{Lesanovsky2014, *Marcuzzi2014} \begin{equation} \Delta r^{\rm vdW} \approx \frac{1}{3} \left( \frac{C_6}{\Delta}\right) ^{\frac{1}{6}} \left( \frac{\gamma^{\rm vdW}}{2 |\Delta|} \right) \label{eq:eqS5} \end{equation} By combining equations \eqref{eq:eqS4} and \eqref{eq:eqS5} we obtain the following self-consistent equation for the motional dephasing rate $\gamma^{\rm vdW}_{\rm m}$: \begin{equation} \frac{\langle \textsf{v}_{\perp} \rangle}{\gamma^{\rm vdW}_{\rm m}} = \frac{1}{3} \left( \frac{C_6}{\Delta}\right) ^{\frac{1}{6}} \left( \frac{\sqrt{\left(\gamma_{\rm D}\right)^2 + \left(\gamma^{\rm vdW}_{\rm m}\right)^2 }}{2 |\Delta|} \right) \label{eq:eqS6} \end{equation} Solving equation \eqref{eq:eqS6} yields \mbox{$\gamma^{\rm vdW}_{\rm m}= 2 \pi \times 493$~MHz}, so that the total dephasing rate is \mbox{$\gamma^{\rm vdW} = 2 \pi \times 711$~MHz}. For the case of pure dipole-dipole interaction (superscript dd), equation \eqref{eq:eqS5} changes to \begin{equation} \Delta r^{\rm dd} \approx \frac{2}{3} \left( \frac{C_3}{\Delta}\right) ^{\frac{1}{3}} \left( \frac{\gamma^{\rm dd}}{2 |\Delta|} \right) \end{equation} and the dephasing rate can be estimated to be \mbox{$\gamma^{\rm dd} = 2 \pi \times 591$~MHz} with \mbox{$\gamma^{\rm dd}_{\rm m} = 2 \pi \times 296$~MHz}. $\gamma^{\rm vdW}$ and $\gamma^{\rm dd}$ are the values that were used in the simulations.
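Equation \eqref{eq:eqS6} has no closed-form solution but is easily solved by fixed-point iteration. The sketch below is generic: the detuning and $C_6$ entering $(C_6/\Delta)^{1/6}$ differ between measurements, so the chosen placeholder values only illustrate the procedure, not the quoted result.

```python
import numpy as np

def motional_dephasing_vdw(v_perp, gamma_D, r6, abs_Delta,
                           tol=1e-9, max_iter=1000):
    """Solve Eq. (S6) for gamma_m by fixed-point iteration.
    Rates and detunings in rad/s, v_perp in m/s,
    r6 = (C6/Delta)^(1/6) in metres."""
    gm = gamma_D                                  # initial guess
    for _ in range(max_iter):
        # shell width Delta_r from Eq. (S5) at the current total dephasing
        dr = (r6 / 3) * np.hypot(gamma_D, gm) / (2 * abs_Delta)
        gm_new = v_perp / dr                      # Eq. (S4)
        if abs(gm_new - gm) < tol * gm:
            return gm_new
        gm = gm_new
    return gm

# Placeholder parameters of the right order of magnitude
v_perp = 158.0                # m/s
gamma_D = 2*np.pi*512e6       # rad/s
abs_Delta = 2*np.pi*2200e6    # rad/s
r6 = 0.61e-6                  # (C6/Delta)^(1/6) in m

gm = motional_dephasing_vdw(v_perp, gamma_D, r6, abs_Delta)
print(gm/(2*np.pi*1e6))       # motional dephasing rate in MHz
```

The iteration converges because the map $\gamma_{\rm m} \mapsto \langle \textsf{v}_{\perp} \rangle / \Delta r(\gamma_{\rm m})$ is a contraction near the fixed point for these parameter magnitudes.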
\begin{figure}[t] \includegraphics[scale=1]{fig_S3.pdf} \begin{ruledtabular} \begin{tabular}{ l c c c c c } $n$ & 1 & 2 & 3 & 4 & 5 \\ \hline $P_n(t_{\rm f})$ & 0.91 & 0.084 & 0.0061 & 0.00039 & 0.000023 \\ $\eta_n$ & 4740 & 436 & 32 & 2.0 & 0.12 \end{tabular} \end{ruledtabular} \caption{Top: Sketched snapshots of the excitation process of a 3-atom aggregate. Red (resp. blue) dots represent Rydberg (resp. ground state) atoms. The grey lines show the resonant shell of width $\Delta r$ for each atom, within which the excitation is facilitated. The velocity component ${\bf v}_{\perp}$ perpendicular to the resonant shell is shown for an atom inside the shell. Bottom: Probabilities $P_n(t_{\rm f})$ of creating an $n$-atom aggregate in an interval $t_{\rm f}$ and typical number of $n$-atom aggregates $\eta_n$ for $N_{\rm fac} = 5000$~facilitating atoms, similar to the experimental situation.} \label{fig:figS3} \end{figure} \subsection*{Aggregate size} Since we are not in the frozen gas regime, aggregates are spatially correlated like the ones shown in the inset of Fig.~1(b) only for a short period of time. At longer times the atomic motion will destroy the spatial correlations between the atoms. This will, however, have no impact on the fact that Rydberg excitations are facilitated by already existing ones. In the following we will estimate the size of the 'frozen aggregates' [Fig.~1(b)]. The time scale over which the ensemble can be considered a frozen gas for our purposes is given by the transit time of the atoms through the resonant shell of width $\Delta r$, i.e. $t_{\rm f} = (\gamma_{\rm m})^{-1} \approx 0.32$~ns. We estimate the size of the aggregates that can be formed during this time in a very simplified model.
The parameters are the same as in Fig.~2(c) with the extracted values of $\gamma$ and $C_6$: $\Delta \approx 2\pi \times -2200 $~MHz, $ \Omega = 2\pi \times 0.4 \times 100 $~MHz, $N_{\rm g}=88$~${\rm \mu{}m}^{-3}$, $C_6 = 2\pi \times -109$~$\rm MHz\cdot\mu{}m^6$ and $\gamma^{\rm vdW} = 2 \pi \times 711$~MHz. The factor $0.4$ is the rescaling factor for $\Omega$ discussed in the main text. Let us first consider one individual Rydberg atom which acts as the seed for an aggregate. In the resonant shell (a spherical shell of radius $r_{\rm fac} = 0.61$~${\rm \mu{}m}$ and width $\Delta r = 51$~nm, see Eq.~\eqref{eq:eqS4}) around this first atom reside on average $\nu_{\rm res} = N_{\rm g} \times 4\pi r_{\rm fac}^2 \Delta r = 20.7$~atoms. Moreover, the size of the resonant shell grows with the number of Rydberg atoms. To account for this, we approximate an $n$-atom aggregate by a sphere (whose volume is occupied by $n$~Rydberg atoms separated by the facilitation radius). Then the entire resonant shell contains $n^{2/3}\times \nu_{\rm res}$ atoms. For each ground state atom in the shell, the time constant for the excitation is \mbox{$\tau_0 = \gamma/\Omega^2 \approx 71$~ns}, and therefore the excitation rate of the $(n+1)$-th atom is $\Gamma_{n+1} = n^{2/3} \times \nu_{\rm res}/\tau_0 $. If we simplify the problem and consider that the atoms are excited sequentially, the probability $P_n(t)$ of finding an aggregate of $n$ Rydberg atoms at time $t$, per pre-existing Rydberg atom, obeys the following rate equation: \begin{align} \partial_t P_n &= \Gamma_{n} P_{n-1} - \Gamma_{n+1} P_{n} \nonumber \\ &= \frac{\nu_{\rm res}}{\tau_0} \left( (n-1)^{2/3} P_{n-1} - n^{2/3} P_{n} \right) \end{align} In the table in Figure~\ref{fig:figS3} we show the solutions of these equations for aggregates consisting of up to 5 atoms at the time $t_{\rm f}$.
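The tabulated probabilities can be reproduced by integrating this rate equation numerically; the sketch below uses simple Euler stepping with the values quoted above ($\nu_{\rm res} \approx 20.7$, $\tau_0 \approx 71$~ns, $t_{\rm f} \approx 0.32$~ns) purely as a consistency check.

```python
import numpy as np

nu_res = 20.7    # ground-state atoms in the resonant shell around the seed
tau0 = 71.0      # single-atom excitation time constant (ns)
t_f = 0.32       # frozen-gas time window (ns)
n_max = 6        # truncation of the aggregate size

# P[n]: probability of an n-atom aggregate per seed atom; the seed itself
# is the n = 1 "aggregate", so P[1] = 1 at t = 0.
P = np.zeros(n_max + 1)
P[1] = 1.0

rate = nu_res / tau0                 # base facilitation rate (1/ns)
dt = 1e-4                            # Euler step (ns)
for _ in range(int(round(t_f / dt))):
    dP = np.zeros_like(P)
    for n in range(1, n_max + 1):
        dP[n] = rate * ((n - 1)**(2/3) * P[n - 1] - n**(2/3) * P[n])
    P += dt * dP

eta = 5000 * P                       # expected counts for N_fac = 5000 seeds
print(np.round(P[1:6], 5))           # compare with P_n(t_f) in Fig. S3
print(np.round(eta[1:6], 1))
```

The small remaining differences with the table come from rounding of the input values; the orders of magnitude, and hence the estimated maximal aggregate size, are unaffected.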
The number of $n$-atom aggregates that are excited in the whole excitation volume during $t_{\rm f}$ is given by $\eta_n = N_{\rm fac} \times P_n(t_{\rm f})$, where $N_{\rm fac}$ is the number of Rydberg atoms which can act as nucleation seeds. As the excitations occur at the boundary of the already excited Rydberg ensemble, $N_{\rm fac}$ is only a fraction of the $\sim 50000$ Rydberg atoms. The values of $\eta_n$ are shown in Figure~\ref{fig:figS3}, with $N_{\rm fac} = 5000$ (corresponding to the outer shell of a sphere with $50000$ Rydberg atoms). For this Rydberg atom number, the largest spatially correlated aggregates that we create in our experiment can thus be estimated to consist of approximately 4 atoms. \end{document}